mirror of https://github.com/rclone/rclone.git synced 2025-06-14 22:15:26 +02:00

Version v1.46

This commit is contained in:
Nick Craig-Wood
2019-02-09 10:42:57 +00:00
parent 0dc08e1e61
commit eb85ecc9c4
82 changed files with 24832 additions and 18620 deletions

File diff suppressed because it is too large

MANUAL.md (1416 changes)

File diff suppressed because it is too large

MANUAL.txt (1525 changes)

File diff suppressed because it is too large


@@ -393,12 +393,21 @@ Upload chunk size. Must fit in memory.

When uploading large files, chunk the file into this size. Note that
these chunks are buffered in memory and there might be a maximum of
"--transfers" chunks in progress at once. 5,000,000 Bytes is the
minimum size.

- Config: chunk_size
- Env Var: RCLONE_B2_CHUNK_SIZE
- Type: SizeSuffix
- Default: 96M

#### --b2-disable-checksum

Disable checksums for large (> upload cutoff) files

- Config: disable_checksum
- Env Var: RCLONE_B2_DISABLE_CHECKSUM
- Type: bool
- Default: false

<!--- autogenerated options stop -->
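As a minimal sketch of how these two b2 options fit together in practice (the remote name `b2remote` and the credential values are hypothetical placeholders; the same settings can be given on the command line as `--b2-chunk-size`/`--b2-disable-checksum` or via the `RCLONE_B2_*` environment variables):

```ini
# Hypothetical rclone.conf fragment for a B2 remote
[b2remote]
type = b2
# Placeholder Application Key ID and Application Key
account = 0123456789ab
key = K001xxxxxxxxxxxxxxxxxxxx
# Raise the chunk size from the 96M default; chunks are buffered
# in memory, up to --transfers of them at once
chunk_size = 128M
# Skip the SHA-1 on files above the upload cutoff, for speed
disable_checksum = true
```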


@@ -1,11 +1,140 @@
---
title: "Documentation"
description: "Rclone Changelog"
date: "2019-02-09"
---

# Changelog
## v1.46 - 2019-02-09
* New backends
* Support Alibaba Cloud (Aliyun) OSS via the s3 backend (Nick Craig-Wood)
* New commands
    * serve dlna: serves a remote via DLNA for the local network (nicolov)
* New Features
* copy, move: Restore deprecated `--no-traverse` flag (Nick Craig-Wood)
* This is useful for when transferring a small number of files into a large destination
* genautocomplete: Add remote path completion for bash completion (Christopher Peterson & Danil Semelenov)
* Buffer memory handling reworked to return memory to the OS better (Nick Craig-Wood)
* Buffer recycling library to replace sync.Pool
* Optionally use memory mapped memory for better memory shrinking
* Enable with `--use-mmap` if having memory problems - not default yet
* Parallelise reading of files specified by `--files-from` (Nick Craig-Wood)
* check: Add stats showing total files matched. (Dario Guzik)
* Allow rename/delete open files under Windows (Nick Craig-Wood)
* lsjson: Use exactly the correct number of decimal places in the seconds (Nick Craig-Wood)
* Add cookie support with cmdline switch `--use-cookies` for all HTTP based remotes (qip)
* Warn if `--checksum` is set but there are no hashes available (Nick Craig-Wood)
* Rework rate limiting (pacer) to be more accurate and allow bursting (Nick Craig-Wood)
* Improve error reporting for too many/few arguments in commands (Nick Craig-Wood)
* listremotes: Remove `-l` short flag as it conflicts with the new global flag (weetmuts)
* Make http serving with auth generate INFO messages on auth fail (Nick Craig-Wood)
* Bug Fixes
* Fix layout of stats (Nick Craig-Wood)
* Fix `--progress` crash under Windows Jenkins (Nick Craig-Wood)
* Fix transfer of google/onedrive docs by calling Rcat in Copy when size is -1 (Cnly)
* copyurl: Fix checking of `--dry-run` (Denis Skovpen)
* Mount
* Check that mountpoint and local directory to mount don't overlap (Nick Craig-Wood)
* Fix mount size under 32 bit Windows (Nick Craig-Wood)
* VFS
* Implement renaming of directories for backends without DirMove (Nick Craig-Wood)
* now all backends except b2 support renaming directories
* Implement `--vfs-cache-max-size` to limit the total size of the cache (Nick Craig-Wood)
* Add `--dir-perms` and `--file-perms` flags to set default permissions (Nick Craig-Wood)
* Fix deadlock on concurrent operations on a directory (Nick Craig-Wood)
* Fix deadlock between RWFileHandle.close and File.Remove (Nick Craig-Wood)
* Fix renaming/deleting open files with cache mode "writes" under Windows (Nick Craig-Wood)
* Fix panic on rename with `--dry-run` set (Nick Craig-Wood)
* Fix vfs/refresh with recurse=true needing the `--fast-list` flag
* Local
* Add support for `-l`/`--links` (symbolic link translation) (yair@unicorn)
* this works by showing links as `link.rclonelink` - see local backend docs for more info
* this errors if used with `-L`/`--copy-links`
* Fix renaming/deleting open files on Windows (Nick Craig-Wood)
* Crypt
* Check for maximum length before decrypting filename to fix panic (Garry McNulty)
* Azure Blob
* Allow building azureblob backend on *BSD (themylogin)
* Use the rclone HTTP client to support `--dump headers`, `--tpslimit` etc (Nick Craig-Wood)
* Use the s3 pacer for 0 delay in non error conditions (Nick Craig-Wood)
* Ignore directory markers (Nick Craig-Wood)
* Stop Mkdir attempting to create existing containers (Nick Craig-Wood)
* B2
* cleanup: will remove unfinished large files >24hrs old (Garry McNulty)
* For a bucket limited application key check the bucket name (Nick Craig-Wood)
* before this, rclone would use the authorised bucket regardless of what you put on the command line
* Added `--b2-disable-checksum` flag (Wojciech Smigielski)
* this enables large files to be uploaded without a SHA-1 hash for speed reasons
* Drive
* Set default pacer to 100ms for 10 tps (Nick Craig-Wood)
* This fits the Google defaults much better and reduces the 403 errors massively
* Add `--drive-pacer-min-sleep` and `--drive-pacer-burst` to control the pacer
* Improve ChangeNotify support for items with multiple parents (Fabian Möller)
* Fix ListR for items with multiple parents - this fixes oddities with `vfs/refresh` (Fabian Möller)
* Fix using `--drive-impersonate` and appfolders (Nick Craig-Wood)
* Fix google docs in rclone mount for some (not all) applications (Nick Craig-Wood)
* Dropbox
* Retry-After support for Dropbox backend (Mathieu Carbou)
* FTP
* Wait for 60 seconds for a connection to Close then declare it dead (Nick Craig-Wood)
* helps with indefinite hangs on some FTP servers
* Google Cloud Storage
* Update google cloud storage endpoints (weetmuts)
* HTTP
* Add an example with username and password which is supported but wasn't documented (Nick Craig-Wood)
* Fix backend with `--files-from` and non-existent files (Nick Craig-Wood)
* Hubic
* Make error message more informative if authentication fails (Nick Craig-Wood)
* Jottacloud
* Resume and deduplication support (Oliver Heyme)
    * Use token auth for all API requests. Don't store the password anymore (Sebastian Bünger)
    * Add support for 2-factor authentication (Sebastian Bünger)
* Mega
* Implement v2 account login which fixes logins for newer Mega accounts (Nick Craig-Wood)
* Return error if an unknown length file is attempted to be uploaded (Nick Craig-Wood)
* Add new error codes for better error reporting (Nick Craig-Wood)
* Onedrive
* Fix broken support for "shared with me" folders (Alex Chen)
* Fix root ID not normalised (Cnly)
* Return err instead of panic on unknown-sized uploads (Cnly)
* Qingstor
* Fix go routine leak on multipart upload errors (Nick Craig-Wood)
* Add upload chunk size/concurrency/cutoff control (Nick Craig-Wood)
* Default `--qingstor-upload-concurrency` to 1 to work around bug (Nick Craig-Wood)
* S3
* Implement `--s3-upload-cutoff` for single part uploads below this (Nick Craig-Wood)
    * Change `--s3-upload-concurrency` default to 4 to increase performance (Nick Craig-Wood)
* Add `--s3-bucket-acl` to control bucket ACL (Nick Craig-Wood)
* Auto detect region for buckets on operation failure (Nick Craig-Wood)
* Add GLACIER storage class (William Cocker)
* Add Scaleway to s3 documentation (Rémy Léone)
* Add AWS endpoint eu-north-1 (weetmuts)
* SFTP
* Add support for PEM encrypted private keys (Fabian Möller)
* Add option to force the usage of an ssh-agent (Fabian Möller)
* Perform environment variable expansion on key-file (Fabian Möller)
* Fix rmdir on Windows based servers (eg CrushFTP) (Nick Craig-Wood)
* Fix rmdir deleting directory contents on some SFTP servers (Nick Craig-Wood)
* Fix error on dangling symlinks (Nick Craig-Wood)
* Swift
* Add `--swift-no-chunk` to disable segmented uploads in rcat/mount (Nick Craig-Wood)
* Introduce application credential auth support (kayrus)
* Fix memory usage by slimming Object (Nick Craig-Wood)
* Fix extra requests on upload (Nick Craig-Wood)
* Fix reauth on big files (Nick Craig-Wood)
* Union
* Fix poll-interval not working (Nick Craig-Wood)
* WebDAV
* Support About which means rclone mount will show the correct disk size (Nick Craig-Wood)
* Support MD5 and SHA1 hashes with Owncloud and Nextcloud (Nick Craig-Wood)
* Fail soft on time parsing errors (Nick Craig-Wood)
* Fix infinite loop on failed directory creation (Nick Craig-Wood)
* Fix identification of directories for Bitrix Site Manager (Nick Craig-Wood)
* Fix upload of 0 length files on some servers (Nick Craig-Wood)
* Fix if MKCOL fails with 423 Locked assume the directory exists (Nick Craig-Wood)
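Two of the new features above combine naturally: `--files-from` (whose reading is now parallelised) and the restored `--no-traverse`. A files-from list is just newline-separated source-relative paths; the sketch below uses hypothetical paths, and `remote:` stands in for any configured remote:

```shell
# Build a hypothetical --files-from list: one source-relative
# path per line (literal paths, not filter patterns).
cat > /tmp/file-list.txt <<'EOF'
docs/readme.txt
photos/2019/january.jpg
EOF

# For a small list like this, --no-traverse avoids listing the
# whole destination before copying ("remote:" is a placeholder):
#   rclone copy --files-from /tmp/file-list.txt --no-traverse /srcdir remote:dstdir

wc -l < /tmp/file-list.txt   # count the entries
```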
## v1.45 - 2018-11-24
* New backends


@@ -1,5 +1,5 @@
---
date: 2019-02-09T10:42:18Z
title: "rclone"
slug: rclone
url: /commands/rclone/
@@ -26,283 +26,301 @@ rclone [flags]

### Options

```
--acd-auth-url string Auth server URL.
--acd-client-id string Amazon Application Client ID.
--acd-client-secret string Amazon Application Client Secret.
--acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-token-url string Token server url.
--acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--alias-remote string Remote or path to alias.
--ask-password Allow prompt for password for encrypted configuration. (default true)
--auto-confirm If enabled, do not request console confirmation.
--azureblob-access-tier string Access tier of blob: hot, cool or archive.
--azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
--azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M)
--azureblob-endpoint string Endpoint for the service
--azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
--azureblob-list-chunk int Size of blob list. (default 5000)
--azureblob-sas-url string SAS URL for container level access only
--azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M)
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--b2-disable-checksum Disable checksums for large (> upload cutoff) files
--b2-endpoint string Endpoint for the service.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-key string Application Key
--b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
--b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M)
--b2-versions Include old versions in directory listings.
--backup-dir string Make backups into hierarchy based in DIR.
--bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
--box-client-id string Box App Client Id.
--box-client-secret string Box App Client Secret
--box-commit-retries int Max number of times to try committing a multipart file. (default 100)
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
--buffer-size SizeSuffix In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
--cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend")
--cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M)
--cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G)
--cache-db-path string Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend")
--cache-db-purge Clear all the cached data for this remote on start.
--cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string Directory rclone will use for caching. (default "$HOME/.cache/rclone")
--cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
--cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server
--cache-plex-password string The password of the Plex user
--cache-plex-url string The URL of the Plex server
--cache-plex-username string The username of the Plex user
--cache-read-retries int How many times to retry a read from a cache storage. (default 10)
--cache-remote string Remote to cache.
--cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1)
--cache-tmp-upload-path string Directory to keep temporary files until they are uploaded.
--cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
--cache-workers int How many workers should run in parallel to download chunks. (default 4)
--cache-writes Cache file data on writes through the FS
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum (if available) & size, not mod-time & size
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
-L, --copy-links Follow symlinks and copy the pointed to item.
--cpuprofile string Write cpu profile to file
--crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--crypt-password string Password or pass phrase for encryption.
--crypt-password2 string Password or pass phrase for salt. Optional but recommended.
--crypt-remote string Remote to encrypt/decrypt.
--crypt-show-mapping For all files listed show how the names encrypt.
--delete-after When synchronizing, delete files on destination after transferring (default)
--delete-before When synchronizing, delete files on destination before transferring
--delete-during When synchronizing, delete files during transfer
--delete-excluded Delete files on dest excluded from sync
--disable string Disable a comma separated list of features. Use help to see a list.
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
--drive-alternate-export Use alternate export URLs for google documents export.
--drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-client-id string Google Application Client Id
--drive-client-secret string Google Application Client Secret
--drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-formats string Deprecated: see export_formats
--drive-impersonate string Impersonate this user when using a service account.
--drive-import-formats string Comma separated list of preferred formats for uploading Google docs.
--drive-keep-revision-forever Keep new head revision of each file forever.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-pacer-burst int Number of API calls to allow without sleeping. (default 100)
--drive-pacer-min-sleep Duration Minimum time to sleep between API calls. (default 100ms)
--drive-root-folder-id string ID of the root folder
--drive-scope string Scope that rclone should use when requesting access from drive.
--drive-service-account-credentials string Service Account Credentials JSON blob
--drive-service-account-file string Service Account Credentials JSON file path
--drive-shared-with-me Only show files that are shared with me.
--drive-skip-gdocs Skip google documents in all listings.
--drive-team-drive string ID of the Team Drive
--drive-trashed-only Only show files that are in the trash.
--drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--drive-use-created-date Use file created date instead of modified date.
--drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off)
--dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M)
--dropbox-client-id string Dropbox App Client Id
--dropbox-client-secret string Dropbox App Client Secret
--dropbox-impersonate string Impersonate this user when using a business account.
-n, --dry-run Do a trial run with no permanent changes
--dump DumpFlags List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--dump-headers Dump HTTP headers - may contain sensitive info
--exclude stringArray Exclude files matching pattern
--exclude-from stringArray Read exclude patterns from file
--exclude-if-present string Exclude directories if filename is present
--fast-list Use recursive list if available. Uses more memory but fewer transactions.
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
--ftp-host string FTP host to connect to
--ftp-pass string FTP password
--ftp-port string FTP port, leave blank to use default (21)
--ftp-user string FTP username, leave blank for current username, $USER
--gcs-bucket-acl string Access Control List for new buckets.
--gcs-client-id string Google Application Client Id
--gcs-client-secret string Google Application Client Secret
--gcs-location string Location for the newly created buckets.
--gcs-object-acl string Access Control List for new objects.
--gcs-project-number string Project number.
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
-h, --help help for rclone
--http-url string URL of http host to connect to
--hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--hubic-client-id string Hubic Client Id
--hubic-client-secret string Hubic Client Secret
--hubic-no-chunk Don't chunk files during streaming upload.
--ignore-case Ignore case in filters (case insensitive)
--ignore-checksum Skip post copy check of checksums.
--ignore-errors Delete even if there are I/O errors
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping; use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
--immutable Do not modify files. Fail if existing files have been modified.
--include stringArray Include files matching pattern
--include-from stringArray Read include patterns from file
--jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--jottacloud-mountpoint string The mountpoint to use.
--jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
--jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fails. (default 10M)
--jottacloud-user string User Name
-l, --links Translate symlinks to/from regular files with a '.rclonelink' extension
--local-no-check-updated Don't check to see if the files change during upload
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
--local-nounc string Disable UNC (long path names) conversion on Windows
--log-file string Log everything to this file
--log-format string Comma separated list of log format options (default "date,time")
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
--max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--max-delete int When synchronizing, limit the number of deletes (default -1)
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size SizeSuffix Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--max-transfer SizeSuffix Maximum size of data to transfer. (default off)
--mega-debug Output more debug from Mega.
--mega-hard-delete Delete files permanently rather than putting them into the trash.
--mega-pass string Password.
--mega-user string User name
--memprofile string Write memory profile to file
--min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--no-update-modtime Don't update destination mod-time if files identical. --min-size SizeSuffix Only transfer files bigger than this in k or suffix b|k|M|G (default off)
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only). --modify-window duration Max time diff to be considered the same (default 1ns)
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M) --no-check-certificate Do not verify the server SSL certificate. Insecure.
--onedrive-client-id string Microsoft App Client Id --no-gzip-encoding Don't set Accept-Encoding: gzip.
--onedrive-client-secret string Microsoft App Client Secret --no-traverse Don't traverse destination file system on copy.
--onedrive-drive-id string The ID of the drive to use --no-update-modtime Don't update destination mod-time if files identical.
--onedrive-drive-type string The type of the drive ( personal | business | documentLibrary ) -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-expose-onenote-files Set to make OneNote files show up in directory listings. --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--opendrive-password string Password. --onedrive-client-id string Microsoft App Client Id
--opendrive-username string Username --onedrive-client-secret string Microsoft App Client Secret
--pcloud-client-id string Pcloud App Client Id --onedrive-drive-id string The ID of the drive to use
--pcloud-client-secret string Pcloud App Client Secret --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
-P, --progress Show progress during transfer. --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
--qingstor-access-key-id string QingStor Access Key ID --opendrive-password string Password.
--qingstor-connection-retries int Number of connection retries. (default 3) --opendrive-username string Username
--qingstor-endpoint string Enter a endpoint URL to connection QingStor API. --pcloud-client-id string Pcloud App Client Id
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank. --pcloud-client-secret string Pcloud App Client Secret
--qingstor-secret-access-key string QingStor Secret Access Key (password) -P, --progress Show progress during transfer.
--qingstor-zone string Zone to connect to. --qingstor-access-key-id string QingStor Access Key ID
-q, --quiet Print as little stuff as possible --qingstor-chunk-size SizeSuffix Chunk size to use for uploading. (default 4M)
--rc Enable the remote control server. --qingstor-connection-retries int Number of connection retries. (default 3)
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") --qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate) --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
--rc-client-ca string Client certificate authority to verify clients with --qingstor-secret-access-key string QingStor Secret Access Key (password)
--rc-files string Path to local files to serve on the HTTP server. --qingstor-upload-concurrency int Concurrency for multipart uploads. (default 1)
--rc-htpasswd string htpasswd file - if not provided no authentication is done --qingstor-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M)
--rc-key string SSL PEM Private key --qingstor-zone string Zone to connect to.
--rc-max-header-bytes int Maximum size of request header (default 4096) -q, --quiet Print as little stuff as possible
--rc-no-auth Don't require auth for certain methods. --rc Enable the remote control server.
--rc-pass string Password for authentication. --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
--rc-realm string realm for authentication (default "rclone") --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--rc-serve Enable the serving of remote objects. --rc-client-ca string Client certificate authority to verify clients with
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) --rc-files string Path to local files to serve on the HTTP server.
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) --rc-htpasswd string htpasswd file - if not provided no authentication is done
--rc-user string User name for authentication. --rc-key string SSL PEM Private key
--retries int Retry operations this many times if they fail (default 3) --rc-max-header-bytes int Maximum size of request header (default 4096)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) --rc-no-auth Don't require auth for certain methods.
--s3-access-key-id string AWS Access Key ID. --rc-pass string Password for authentication.
--s3-acl string Canned ACL used when creating buckets and storing or copying objects. --rc-realm string realm for authentication (default "rclone")
--s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M) --rc-serve Enable the serving of remote objects.
--s3-disable-checksum Don't store MD5 checksum with object metadata --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--s3-endpoint string Endpoint for S3 API. --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). --rc-user string User name for authentication.
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true) --retries int Retry operations this many times if they fail (default 3)
--s3-location-constraint string Location constraint - must be set to match the Region. --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--s3-provider string Choose your S3 provider. --s3-access-key-id string AWS Access Key ID.
--s3-region string Region to connect to. --s3-acl string Canned ACL used when creating buckets and storing or copying objects.
--s3-secret-access-key string AWS Secret Access Key (password) --s3-bucket-acl string Canned ACL used when creating buckets.
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
--s3-session-token string An AWS session token --s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. --s3-endpoint string Endpoint for S3 API.
--s3-storage-class string The storage class to use when storing new objects in S3. --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2) --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--s3-v2-auth If true use v2 authentication. --s3-location-constraint string Location constraint - must be set to match the Region.
--sftp-ask-password Allow asking for SFTP password when needed. --s3-provider string Choose your S3 provider.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. --s3-region string Region to connect to.
--sftp-host string SSH host to connect to --s3-secret-access-key string AWS Secret Access Key (password)
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent. --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--sftp-pass string SSH password, leave blank to use ssh-agent. --s3-session-token string An AWS session token
--sftp-path-override string Override path used by SSH connection. --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--sftp-port string SSH port, leave blank to use default (22) --s3-storage-class string The storage class to use when storing new objects in S3.
--sftp-set-modtime Set the modified time on the remote if set. (default true) --s3-upload-concurrency int Concurrency for multipart uploads. (default 4)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker. --s3-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M)
--sftp-user string SSH username, leave blank for current username, ncw --s3-v2-auth If true use v2 authentication.
--size-only Skip based on size only, not mod-time or checksum --sftp-ask-password Allow asking for SFTP password when needed.
--skip-links Don't warn about skipped symlinks. --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) --sftp-host string SSH host to connect to
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40) --sftp-key-file string Path to PEM-encoded private key file, leave blank or set key-use-agent to use ssh-agent.
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") --sftp-key-file-pass string The passphrase to decrypt the PEM-encoded private key file.
--stats-one-line Make the stats fit on one line. --sftp-key-use-agent When set forces the usage of the ssh-agent.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") --sftp-pass string SSH password, leave blank to use ssh-agent.
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) --sftp-path-override string Override path used by SSH connection.
--suffix string Suffix for use with --backup-dir. --sftp-port string SSH port, leave blank to use default (22)
--swift-auth string Authentication URL for server (OS_AUTH_URL). --sftp-set-modtime Set the modified time on the remote if set. (default true)
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) --sftp-user string SSH username, leave blank for current username, ncw
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) --size-only Skip based on size only, not mod-time or checksum
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) --skip-links Don't warn about skipped symlinks.
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form. --stats-file-name-length int Max file name length in stats. 0 for no limit (default 45)
--swift-key string API key or password (OS_PASSWORD). --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--swift-region string Region name - optional (OS_REGION_NAME) --stats-one-line Make the stats fit on one line.
--swift-storage-policy string The storage policy to use when creating a new container --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL) --streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) --suffix string Suffix for use with --backup-dir.
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) --swift-application-credential-id string Application Credential ID (OS_APPLICATION_CREDENTIAL_ID)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) --swift-application-credential-name string Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME)
--swift-user string User name to log in (OS_USERNAME). --swift-application-credential-secret string Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET)
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). --swift-auth string Authentication URL for server (OS_AUTH_URL).
--syslog Use Syslog for logging --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--timeout duration IO idle timeout (default 5m0s) --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--tpslimit float Limit HTTP transactions per second to this. --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--track-renames When synchronizing, track file renames and do a server side move if possible --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--transfers int Number of file transfers to run in parallel. (default 4) --swift-key string API key or password (OS_PASSWORD).
--union-remotes string List of space separated remotes. --swift-no-chunk Don't chunk files during streaming upload.
-u, --update Skip files that are newer on the destination. --swift-region string Region name - optional (OS_REGION_NAME)
--use-server-modtime Use server modified time instead of object metadata --swift-storage-policy string The storage policy to use when creating a new container
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.45") --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
-v, --verbose count Print lots more stuff (repeat for more) --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
-V, --version Print the version number --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon) --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--webdav-pass string Password. --swift-user string User name to log in (OS_USERNAME).
--webdav-url string URL of http host to connect to --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--webdav-user string User name --syslog Use Syslog for logging
--webdav-vendor string Name of the Webdav site/service/software you are using --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--yandex-client-id string Yandex Client Id --timeout duration IO idle timeout (default 5m0s)
--yandex-client-secret string Yandex Client Secret --tpslimit float Limit HTTP transactions per second to this.
--yandex-unlink Remove existing public link to file/folder with link command rather than creating. --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
--union-remotes string List of space separated remotes.
-u, --update Skip files that are newer on the destination.
--use-cookies Enable session cookiejar.
--use-mmap Use mmap allocator (see docs).
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46")
-v, --verbose count Print lots more stuff (repeat for more)
-V, --version Print the version number
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
--yandex-unlink Remove existing public link to file/folder with link command rather than creating.
```
### SEE ALSO
@ -355,4 +373,4 @@ rclone [flags]
* [rclone tree](/commands/rclone_tree/) - List the contents of the remote in a tree like fashion.
* [rclone version](/commands/rclone_version/) - Show the version number.
###### Auto generated by spf13/cobra on 9-Feb-2019

View File

@ -1,5 +1,5 @@
---
date: 2019-02-09T10:42:18Z
title: "rclone about"
slug: rclone_about
url: /commands/rclone_about/
@ -69,285 +69,303 @@ rclone about remote: [flags]
### Options inherited from parent commands
```
--acd-auth-url string Auth server URL.
--acd-client-id string Amazon Application Client ID.
--acd-client-secret string Amazon Application Client Secret.
--acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-token-url string Token server url.
--acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--alias-remote string Remote or path to alias.
--ask-password Allow prompt for password for encrypted configuration. (default true)
--auto-confirm If enabled, do not request console confirmation.
--azureblob-access-tier string Access tier of blob: hot, cool or archive.
--azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
--azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M)
--azureblob-endpoint string Endpoint for the service
--azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
--azureblob-list-chunk int Size of blob list. (default 5000)
--azureblob-sas-url string SAS URL for container level access only
--azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M)
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--b2-disable-checksum Disable checksums for large (> upload cutoff) files
--b2-endpoint string Endpoint for the service.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-key string Application Key
--b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
--b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M)
--b2-versions Include old versions in directory listings.
--backup-dir string Make backups into hierarchy based in DIR.
--bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
--box-client-id string Box App Client Id.
--box-client-secret string Box App Client Secret
--box-commit-retries int Max number of times to try committing a multipart file. (default 100)
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
--buffer-size SizeSuffix In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
--cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend")
--cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M)
--cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G)
--cache-db-path string Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend")
--cache-db-purge Clear all the cached data for this remote on start.
--cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string Directory rclone will use for caching. (default "$HOME/.cache/rclone")
--cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
--cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server
--cache-plex-password string The password of the Plex user
--cache-plex-url string The URL of the Plex server
--cache-plex-username string The username of the Plex user
--cache-read-retries int How many times to retry a read from a cache storage. (default 10)
--cache-remote string Remote to cache.
--cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1)
--cache-tmp-upload-path string Directory to keep temporary files until they are uploaded.
--cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
--cache-workers int How many workers should run in parallel to download chunks. (default 4)
--cache-writes Cache file data on writes through the FS
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum (if available) & size, not mod-time & size
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
-L, --copy-links Follow symlinks and copy the pointed to item.
--cpuprofile string Write cpu profile to file
--crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--crypt-password string Password or pass phrase for encryption.
--crypt-password2 string Password or pass phrase for salt. Optional but recommended.
--crypt-remote string Remote to encrypt/decrypt.
--crypt-show-mapping For all files listed show how the names encrypt.
--delete-after When synchronizing, delete files on destination after transferring (default)
--delete-before When synchronizing, delete files on destination before transferring
--delete-during When synchronizing, delete files during transfer
--delete-excluded Delete files on dest excluded from sync
--disable string Disable a comma separated list of features. Use help to see a list.
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
--drive-alternate-export Use alternate export URLs for google documents export.,
--drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M)
--drive-client-id string Google Application Client Id
--drive-client-secret string Google Application Client Secret
--drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-formats string Deprecated: see export_formats
--drive-impersonate string Impersonate this user when using a service account.
--drive-import-formats string Comma separated list of preferred formats for uploading Google docs.
--drive-keep-revision-forever Keep new head revision of each file forever.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-pacer-burst int Number of API calls to allow without sleeping. (default 100)
--drive-pacer-min-sleep Duration Minimum time to sleep between API calls. (default 100ms)
--drive-root-folder-id string ID of the root folder
--drive-shared-with-me Only show files that are shared with me. --drive-scope string Scope that rclone should use when requesting access from drive.
--drive-skip-gdocs Skip google documents in all listings. --drive-service-account-credentials string Service Account Credentials JSON blob
--drive-team-drive string ID of the Team Drive --drive-service-account-file string Service Account Credentials JSON file path
--drive-trashed-only Only show files that are in the trash. --drive-shared-with-me Only show files that are shared with me.
--drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M) --drive-skip-gdocs Skip google documents in all listings.
--drive-use-created-date Use file created date instead of modified date., --drive-team-drive string ID of the Team Drive
--drive-use-trash Send files to the trash instead of deleting permanently. (default true) --drive-trashed-only Only show files that are in the trash.
--drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off) --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M) --drive-use-created-date Use file created date instead of modified date.,
--dropbox-client-id string Dropbox App Client Id --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--dropbox-client-secret string Dropbox App Client Secret --drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off)
--dropbox-impersonate string Impersonate this user when using a business account. --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M)
-n, --dry-run Do a trial run with no permanent changes --dropbox-client-id string Dropbox App Client Id
--dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles --dropbox-client-secret string Dropbox App Client Secret
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info --dropbox-impersonate string Impersonate this user when using a business account.
--dump-headers Dump HTTP bodies - may contain sensitive info -n, --dry-run Do a trial run with no permanent changes
--exclude stringArray Exclude files matching pattern --dump DumpFlags List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--exclude-from stringArray Read exclude patterns from file --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--exclude-if-present string Exclude directories if filename is present --dump-headers Dump HTTP bodies - may contain sensitive info
--fast-list Use recursive list if available. Uses more memory but fewer transactions. --exclude stringArray Exclude files matching pattern
--files-from stringArray Read list of source-file names from file --exclude-from stringArray Read exclude patterns from file
-f, --filter stringArray Add a file-filtering rule --exclude-if-present string Exclude directories if filename is present
--filter-from stringArray Read filtering patterns from a file --fast-list Use recursive list if available. Uses more memory but fewer transactions.
--ftp-host string FTP host to connect to --files-from stringArray Read list of source-file names from file
--ftp-pass string FTP password -f, --filter stringArray Add a file-filtering rule
--ftp-port string FTP port, leave blank to use default (21) --filter-from stringArray Read filtering patterns from a file
--ftp-user string FTP username, leave blank for current username, $USER --ftp-host string FTP host to connect to
--gcs-bucket-acl string Access Control List for new buckets. --ftp-pass string FTP password
--gcs-client-id string Google Application Client Id --ftp-port string FTP port, leave blank to use default (21)
--gcs-client-secret string Google Application Client Secret --ftp-user string FTP username, leave blank for current username, $USER
--gcs-location string Location for the newly created buckets. --gcs-bucket-acl string Access Control List for new buckets.
--gcs-object-acl string Access Control List for new objects. --gcs-client-id string Google Application Client Id
--gcs-project-number string Project number. --gcs-client-secret string Google Application Client Secret
--gcs-service-account-file string Service Account Credentials JSON file path --gcs-location string Location for the newly created buckets.
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage. --gcs-object-acl string Access Control List for new objects.
--http-url string URL of http host to connect to --gcs-project-number string Project number.
--hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) --gcs-service-account-file string Service Account Credentials JSON file path
--hubic-client-id string Hubic Client Id --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--hubic-client-secret string Hubic Client Secret --http-url string URL of http host to connect to
--ignore-case Ignore case in filters (case insensitive) --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--ignore-checksum Skip post copy check of checksums. --hubic-client-id string Hubic Client Id
--ignore-errors delete even if there are I/O errors --hubic-client-secret string Hubic Client Secret
--ignore-existing Skip all files that exist on destination --hubic-no-chunk Don't chunk files during streaming upload.
--ignore-size Ignore size when skipping use mod-time or checksum. --ignore-case Ignore case in filters (case insensitive)
-I, --ignore-times Don't skip files that match size and time - transfer all files --ignore-checksum Skip post copy check of checksums.
--immutable Do not modify files. Fail if existing files have been modified. --ignore-errors delete even if there are I/O errors
--include stringArray Include files matching pattern --ignore-existing Skip all files that exist on destination
--include-from stringArray Read include patterns from file --ignore-size Ignore size when skipping use mod-time or checksum.
--jottacloud-hard-delete Delete files permanently rather than putting them into the trash. -I, --ignore-times Don't skip files that match size and time - transfer all files
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M) --immutable Do not modify files. Fail if existing files have been modified.
--jottacloud-mountpoint string The mountpoint to use. --include stringArray Include files matching pattern
--jottacloud-pass string Password. --include-from stringArray Read include patterns from file
--jottacloud-unlink Remove existing public link to file/folder with link command rather than creating. --jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
--jottacloud-user string User Name --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--local-no-check-updated Don't check to see if the files change during upload --jottacloud-mountpoint string The mountpoint to use.
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated) --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
--local-nounc string Disable UNC (long path names) conversion on Windows --jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fail's. (default 10M)
--log-file string Log everything to this file --jottacloud-user string User Name:
--log-format string Comma separated list of log format options (default "date,time") -l, --links Translate symlinks to/from regular files with a '.rclonelink' extension
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") --local-no-check-updated Don't check to see if the files change during upload
--low-level-retries int Number of low level retries to do. (default 10) --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) --local-nounc string Disable UNC (long path names) conversion on Windows
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000) --log-file string Log everything to this file
--max-delete int When synchronizing, limit the number of deletes (default -1) --log-format string Comma separated list of log format options (default "date,time")
--max-depth int If set limits the recursion depth to this. (default -1) --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) --low-level-retries int Number of low level retries to do. (default 10)
--max-transfer int Maximum size of data to transfer. (default off) --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--mega-debug Output more debug from Mega. --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--mega-hard-delete Delete files permanently rather than putting them into the trash. --max-delete int When synchronizing, limit the number of deletes (default -1)
--mega-pass string Password. --max-depth int If set limits the recursion depth to this. (default -1)
--mega-user string User name --max-size SizeSuffix Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--memprofile string Write memory profile to file --max-transfer SizeSuffix Maximum size of data to transfer. (default off)
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) --mega-debug Output more debug from Mega.
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) --mega-hard-delete Delete files permanently rather than putting them into the trash.
--modify-window duration Max time diff to be considered the same (default 1ns) --mega-pass string Password.
--no-check-certificate Do not verify the server SSL certificate. Insecure. --mega-user string User name
--no-gzip-encoding Don't set Accept-Encoding: gzip. --memprofile string Write memory profile to file
--no-traverse Obsolete - does nothing. --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--no-update-modtime Don't update destination mod-time if files identical. --min-size SizeSuffix Only transfer files bigger than this in k or suffix b|k|M|G (default off)
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only). --modify-window duration Max time diff to be considered the same (default 1ns)
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M) --no-check-certificate Do not verify the server SSL certificate. Insecure.
--onedrive-client-id string Microsoft App Client Id --no-gzip-encoding Don't set Accept-Encoding: gzip.
--onedrive-client-secret string Microsoft App Client Secret --no-traverse Don't traverse destination file system on copy.
--onedrive-drive-id string The ID of the drive to use --no-update-modtime Don't update destination mod-time if files identical.
--onedrive-drive-type string The type of the drive ( personal | business | documentLibrary ) -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-expose-onenote-files Set to make OneNote files show up in directory listings. --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--opendrive-password string Password. --onedrive-client-id string Microsoft App Client Id
--opendrive-username string Username --onedrive-client-secret string Microsoft App Client Secret
--pcloud-client-id string Pcloud App Client Id --onedrive-drive-id string The ID of the drive to use
--pcloud-client-secret string Pcloud App Client Secret --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
-P, --progress Show progress during transfer. --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
--qingstor-access-key-id string QingStor Access Key ID --opendrive-password string Password.
--qingstor-connection-retries int Number of connection retries. (default 3) --opendrive-username string Username
--qingstor-endpoint string Enter a endpoint URL to connection QingStor API. --pcloud-client-id string Pcloud App Client Id
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank. --pcloud-client-secret string Pcloud App Client Secret
--qingstor-secret-access-key string QingStor Secret Access Key (password) -P, --progress Show progress during transfer.
--qingstor-zone string Zone to connect to. --qingstor-access-key-id string QingStor Access Key ID
-q, --quiet Print as little stuff as possible --qingstor-chunk-size SizeSuffix Chunk size to use for uploading. (default 4M)
--rc Enable the remote control server. --qingstor-connection-retries int Number of connection retries. (default 3)
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") --qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate) --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
--rc-client-ca string Client certificate authority to verify clients with --qingstor-secret-access-key string QingStor Secret Access Key (password)
--rc-files string Path to local files to serve on the HTTP server. --qingstor-upload-concurrency int Concurrency for multipart uploads. (default 1)
--rc-htpasswd string htpasswd file - if not provided no authentication is done --qingstor-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M)
--rc-key string SSL PEM Private key --qingstor-zone string Zone to connect to.
--rc-max-header-bytes int Maximum size of request header (default 4096) -q, --quiet Print as little stuff as possible
--rc-no-auth Don't require auth for certain methods. --rc Enable the remote control server.
--rc-pass string Password for authentication. --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
--rc-realm string realm for authentication (default "rclone") --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--rc-serve Enable the serving of remote objects. --rc-client-ca string Client certificate authority to verify clients with
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) --rc-files string Path to local files to serve on the HTTP server.
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) --rc-htpasswd string htpasswd file - if not provided no authentication is done
--rc-user string User name for authentication. --rc-key string SSL PEM Private key
--retries int Retry operations this many times if they fail (default 3) --rc-max-header-bytes int Maximum size of request header (default 4096)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) --rc-no-auth Don't require auth for certain methods.
--s3-access-key-id string AWS Access Key ID. --rc-pass string Password for authentication.
--s3-acl string Canned ACL used when creating buckets and storing or copying objects. --rc-realm string realm for authentication (default "rclone")
--s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M) --rc-serve Enable the serving of remote objects.
--s3-disable-checksum Don't store MD5 checksum with object metadata --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--s3-endpoint string Endpoint for S3 API. --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). --rc-user string User name for authentication.
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true) --retries int Retry operations this many times if they fail (default 3)
--s3-location-constraint string Location constraint - must be set to match the Region. --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--s3-provider string Choose your S3 provider. --s3-access-key-id string AWS Access Key ID.
--s3-region string Region to connect to. --s3-acl string Canned ACL used when creating buckets and storing or copying objects.
--s3-secret-access-key string AWS Secret Access Key (password) --s3-bucket-acl string Canned ACL used when creating buckets.
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
--s3-session-token string An AWS session token --s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. --s3-endpoint string Endpoint for S3 API.
--s3-storage-class string The storage class to use when storing new objects in S3. --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2) --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--s3-v2-auth If true use v2 authentication. --s3-location-constraint string Location constraint - must be set to match the Region.
--sftp-ask-password Allow asking for SFTP password when needed. --s3-provider string Choose your S3 provider.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. --s3-region string Region to connect to.
--sftp-host string SSH host to connect to --s3-secret-access-key string AWS Secret Access Key (password)
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent. --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--sftp-pass string SSH password, leave blank to use ssh-agent. --s3-session-token string An AWS session token
--sftp-path-override string Override path used by SSH connection. --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--sftp-port string SSH port, leave blank to use default (22) --s3-storage-class string The storage class to use when storing new objects in S3.
--sftp-set-modtime Set the modified time on the remote if set. (default true) --s3-upload-concurrency int Concurrency for multipart uploads. (default 4)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker. --s3-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M)
--sftp-user string SSH username, leave blank for current username, ncw --s3-v2-auth If true use v2 authentication.
--size-only Skip based on size only, not mod-time or checksum --sftp-ask-password Allow asking for SFTP password when needed.
--skip-links Don't warn about skipped symlinks. --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) --sftp-host string SSH host to connect to
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40) --sftp-key-file string Path to PEM-encoded private key file, leave blank or set key-use-agent to use ssh-agent.
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") --sftp-key-file-pass string The passphrase to decrypt the PEM-encoded private key file.
--stats-one-line Make the stats fit on one line. --sftp-key-use-agent When set forces the usage of the ssh-agent.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") --sftp-pass string SSH password, leave blank to use ssh-agent.
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) --sftp-path-override string Override path used by SSH connection.
--suffix string Suffix for use with --backup-dir. --sftp-port string SSH port, leave blank to use default (22)
--swift-auth string Authentication URL for server (OS_AUTH_URL). --sftp-set-modtime Set the modified time on the remote if set. (default true)
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) --sftp-user string SSH username, leave blank for current username, ncw
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) --size-only Skip based on size only, not mod-time or checksum
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) --skip-links Don't warn about skipped symlinks.
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form. --stats-file-name-length int Max file name length in stats. 0 for no limit (default 45)
--swift-key string API key or password (OS_PASSWORD). --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--swift-region string Region name - optional (OS_REGION_NAME) --stats-one-line Make the stats fit on one line.
--swift-storage-policy string The storage policy to use when creating a new container --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL) --streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) --suffix string Suffix for use with --backup-dir.
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) --swift-application-credential-id string Application Credential ID (OS_APPLICATION_CREDENTIAL_ID)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) --swift-application-credential-name string Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME)
--swift-user string User name to log in (OS_USERNAME). --swift-application-credential-secret string Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET)
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). --swift-auth string Authentication URL for server (OS_AUTH_URL).
--syslog Use Syslog for logging --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--timeout duration IO idle timeout (default 5m0s) --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--tpslimit float Limit HTTP transactions per second to this. --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--track-renames When synchronizing, track file renames and do a server side move if possible --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--transfers int Number of file transfers to run in parallel. (default 4) --swift-key string API key or password (OS_PASSWORD).
--union-remotes string List of space separated remotes. --swift-no-chunk Don't chunk files during streaming upload.
-u, --update Skip files that are newer on the destination. --swift-region string Region name - optional (OS_REGION_NAME)
--use-server-modtime Use server modified time instead of object metadata --swift-storage-policy string The storage policy to use when creating a new container
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.45") --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
-v, --verbose count Print lots more stuff (repeat for more) --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon) --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--webdav-pass string Password. --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--webdav-url string URL of http host to connect to --swift-user string User name to log in (OS_USERNAME).
--webdav-user string User name --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--webdav-vendor string Name of the Webdav site/service/software you are using --syslog Use Syslog for logging
--yandex-client-id string Yandex Client Id --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--yandex-client-secret string Yandex Client Secret --timeout duration IO idle timeout (default 5m0s)
--yandex-unlink Remove existing public link to file/folder with link command rather than creating. --tpslimit float Limit HTTP transactions per second to this.
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
--union-remotes string List of space separated remotes.
-u, --update Skip files that are newer on the destination.
--use-cookies Enable session cookiejar.
--use-mmap Use mmap allocator (see docs).
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
--yandex-unlink Remove existing public link to file/folder with link command rather than creating.
```
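For context, here is a hedged sketch of how a few of the global flags above combine in practice. It assumes rclone v1.46 is installed and a remote named `remote:` is already configured; the paths and remote name are hypothetical, not from this commit.

```
# Trial-run a copy with no changes made (--dry-run); drop that flag
# once the planned transfers look right.
rclone copy /home/user/photos remote:photos-backup \
  --max-age 30d --transfers 8 --progress --dry-run
```

Note that `--dry-run` composes with almost every other flag, which makes it a cheap way to check what a flag combination will actually do before committing to it.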
### SEE ALSO
* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
###### Auto generated by spf13/cobra on 9-Feb-2019

---
date: 2019-02-09T10:42:18Z
title: "rclone authorize"
slug: rclone_authorize
url: /commands/rclone_authorize/
### Options inherited from parent commands
```
      --acd-auth-url string  Auth server URL.
      --acd-client-id string  Amazon Application Client ID.
      --acd-client-secret string  Amazon Application Client Secret.
      --acd-templink-threshold SizeSuffix  Files >= this size will be downloaded via their tempLink. (default 9G)
      --acd-token-url string  Token server url.
      --acd-upload-wait-per-gb Duration  Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
      --alias-remote string  Remote or path to alias.
      --ask-password  Allow prompt for password for encrypted configuration. (default true)
      --auto-confirm  If enabled, do not request console confirmation.
      --azureblob-access-tier string  Access tier of blob: hot, cool or archive.
      --azureblob-account string  Storage Account Name (leave blank to use connection string or SAS URL)
      --azureblob-chunk-size SizeSuffix  Upload chunk size (<= 100MB). (default 4M)
      --azureblob-endpoint string  Endpoint for the service
      --azureblob-key string  Storage Account Key (leave blank to use connection string or SAS URL)
      --azureblob-list-chunk int  Size of blob list. (default 5000)
      --azureblob-sas-url string  SAS URL for container level access only
      --azureblob-upload-cutoff SizeSuffix  Cutoff for switching to chunked upload (<= 256MB). (default 256M)
      --b2-account string  Account ID or Application Key ID
      --b2-chunk-size SizeSuffix  Upload chunk size. Must fit in memory. (default 96M)
      --b2-disable-checksum  Disable checksums for large (> upload cutoff) files
      --b2-endpoint string  Endpoint for the service.
      --b2-hard-delete  Permanently delete files on remote removal, otherwise hide files.
      --b2-key string  Application Key
      --b2-test-mode string  A flag string for X-Bz-Test-Mode header for debugging.
--b2-versions Include old versions in directory listings. --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M)
--backup-dir string Make backups into hierarchy based in DIR. --b2-versions Include old versions in directory listings.
--bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name. --backup-dir string Make backups into hierarchy based in DIR.
--box-client-id string Box App Client Id. --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
--box-client-secret string Box App Client Secret --box-client-id string Box App Client Id.
--box-commit-retries int Max number of times to try committing a multipart file. (default 100) --box-client-secret string Box App Client Secret
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M) --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
--buffer-size int In memory buffer size when reading files for each --transfer. (default 16M) --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable. --buffer-size SizeSuffix In memory buffer size when reading files for each --transfer. (default 16M)
--cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s) --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming. --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
--cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend") --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
--cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M) --cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend")
--cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G) --cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M)
--cache-db-path string Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend") --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G)
--cache-db-purge Clear all the cached data for this remote on start. --cache-db-path string Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend")
--cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s) --cache-db-purge Clear all the cached data for this remote on start.
--cache-dir string Directory rclone will use for caching. (default "$HOME/.cache/rclone") --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s) --cache-dir string Directory rclone will use for caching. (default "$HOME/.cache/rclone")
--cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
--cache-plex-password string The password of the Plex user --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server
--cache-plex-url string The URL of the Plex server --cache-plex-password string The password of the Plex user
--cache-plex-username string The username of the Plex user --cache-plex-url string The URL of the Plex server
--cache-read-retries int How many times to retry a read from a cache storage. (default 10) --cache-plex-username string The username of the Plex user
--cache-remote string Remote to cache. --cache-read-retries int How many times to retry a read from a cache storage. (default 10)
--cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1) --cache-remote string Remote to cache.
--cache-tmp-upload-path string Directory to keep temporary files until they are uploaded. --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1)
--cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s) --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded.
--cache-workers int How many workers should run in parallel to download chunks. (default 4) --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
--cache-writes Cache file data on writes through the FS --cache-workers int How many workers should run in parallel to download chunks. (default 4)
--checkers int Number of checkers to run in parallel. (default 8) --cache-writes Cache file data on writes through the FS
-c, --checksum Skip based on checksum & size, not mod-time & size --checkers int Number of checkers to run in parallel. (default 8)
--config string Config file. (default "/home/ncw/.rclone.conf") -c, --checksum Skip based on checksum (if available) & size, not mod-time & size
--contimeout duration Connect timeout (default 1m0s) --config string Config file. (default "/home/ncw/.rclone.conf")
-L, --copy-links Follow symlinks and copy the pointed to item. --contimeout duration Connect timeout (default 1m0s)
--cpuprofile string Write cpu profile to file -L, --copy-links Follow symlinks and copy the pointed to item.
--crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true) --cpuprofile string Write cpu profile to file
--crypt-filename-encryption string How to encrypt the filenames. (default "standard") --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--crypt-password string Password or pass phrase for encryption. --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--crypt-password2 string Password or pass phrase for salt. Optional but recommended. --crypt-password string Password or pass phrase for encryption.
--crypt-remote string Remote to encrypt/decrypt. --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
--crypt-show-mapping For all files listed show how the names encrypt. --crypt-remote string Remote to encrypt/decrypt.
--delete-after When synchronizing, delete files on destination after transferring (default) --crypt-show-mapping For all files listed show how the names encrypt.
--delete-before When synchronizing, delete files on destination before transferring --delete-after When synchronizing, delete files on destination after transferring (default)
--delete-during When synchronizing, delete files during transfer --delete-before When synchronizing, delete files on destination before transferring
--delete-excluded Delete files on dest excluded from sync --delete-during When synchronizing, delete files during transfer
--disable string Disable a comma separated list of features. Use help to see a list. --delete-excluded Delete files on dest excluded from sync
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded. --disable string Disable a comma separated list of features. Use help to see a list.
--drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time. --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--drive-alternate-export Use alternate export URLs for google documents export., --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
--drive-auth-owner-only Only consider files owned by the authenticated user. --drive-alternate-export Use alternate export URLs for google documents export.,
--drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M) --drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-client-id string Google Application Client Id --drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M)
--drive-client-secret string Google Application Client Secret --drive-client-id string Google Application Client Id
--drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") --drive-client-secret string Google Application Client Secret
--drive-formats string Deprecated: see export_formats --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-impersonate string Impersonate this user when using a service account. --drive-formats string Deprecated: see export_formats
--drive-import-formats string Comma separated list of preferred formats for uploading Google docs. --drive-impersonate string Impersonate this user when using a service account.
--drive-keep-revision-forever Keep new head revision of each file forever. --drive-import-formats string Comma separated list of preferred formats for uploading Google docs.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000) --drive-keep-revision-forever Keep new head revision of each file forever.
--drive-root-folder-id string ID of the root folder --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-scope string Scope that rclone should use when requesting access from drive. --drive-pacer-burst int Number of API calls to allow without sleeping. (default 100)
--drive-service-account-credentials string Service Account Credentials JSON blob --drive-pacer-min-sleep Duration Minimum time to sleep between API calls. (default 100ms)
--drive-service-account-file string Service Account Credentials JSON file path --drive-root-folder-id string ID of the root folder
--drive-shared-with-me Only show files that are shared with me. --drive-scope string Scope that rclone should use when requesting access from drive.
--drive-skip-gdocs Skip google documents in all listings. --drive-service-account-credentials string Service Account Credentials JSON blob
--drive-team-drive string ID of the Team Drive --drive-service-account-file string Service Account Credentials JSON file path
--drive-trashed-only Only show files that are in the trash. --drive-shared-with-me Only show files that are shared with me.
--drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M) --drive-skip-gdocs Skip google documents in all listings.
--drive-use-created-date Use file created date instead of modified date., --drive-team-drive string ID of the Team Drive
--drive-use-trash Send files to the trash instead of deleting permanently. (default true) --drive-trashed-only Only show files that are in the trash.
--drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off) --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M) --drive-use-created-date Use file created date instead of modified date.,
--dropbox-client-id string Dropbox App Client Id --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--dropbox-client-secret string Dropbox App Client Secret --drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off)
--dropbox-impersonate string Impersonate this user when using a business account. --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M)
-n, --dry-run Do a trial run with no permanent changes --dropbox-client-id string Dropbox App Client Id
--dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles --dropbox-client-secret string Dropbox App Client Secret
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info --dropbox-impersonate string Impersonate this user when using a business account.
--dump-headers Dump HTTP bodies - may contain sensitive info -n, --dry-run Do a trial run with no permanent changes
--exclude stringArray Exclude files matching pattern --dump DumpFlags List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--exclude-from stringArray Read exclude patterns from file --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--exclude-if-present string Exclude directories if filename is present --dump-headers Dump HTTP bodies - may contain sensitive info
--fast-list Use recursive list if available. Uses more memory but fewer transactions. --exclude stringArray Exclude files matching pattern
--files-from stringArray Read list of source-file names from file --exclude-from stringArray Read exclude patterns from file
-f, --filter stringArray Add a file-filtering rule --exclude-if-present string Exclude directories if filename is present
--filter-from stringArray Read filtering patterns from a file --fast-list Use recursive list if available. Uses more memory but fewer transactions.
--ftp-host string FTP host to connect to --files-from stringArray Read list of source-file names from file
--ftp-pass string FTP password -f, --filter stringArray Add a file-filtering rule
--ftp-port string FTP port, leave blank to use default (21) --filter-from stringArray Read filtering patterns from a file
--ftp-user string FTP username, leave blank for current username, $USER --ftp-host string FTP host to connect to
--gcs-bucket-acl string Access Control List for new buckets. --ftp-pass string FTP password
--gcs-client-id string Google Application Client Id --ftp-port string FTP port, leave blank to use default (21)
--gcs-client-secret string Google Application Client Secret --ftp-user string FTP username, leave blank for current username, $USER
--gcs-location string Location for the newly created buckets. --gcs-bucket-acl string Access Control List for new buckets.
--gcs-object-acl string Access Control List for new objects. --gcs-client-id string Google Application Client Id
--gcs-project-number string Project number. --gcs-client-secret string Google Application Client Secret
--gcs-service-account-file string Service Account Credentials JSON file path --gcs-location string Location for the newly created buckets.
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage. --gcs-object-acl string Access Control List for new objects.
--http-url string URL of http host to connect to --gcs-project-number string Project number.
--hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) --gcs-service-account-file string Service Account Credentials JSON file path
--hubic-client-id string Hubic Client Id --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--hubic-client-secret string Hubic Client Secret --http-url string URL of http host to connect to
--ignore-case Ignore case in filters (case insensitive) --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--ignore-checksum Skip post copy check of checksums. --hubic-client-id string Hubic Client Id
--ignore-errors delete even if there are I/O errors --hubic-client-secret string Hubic Client Secret
--ignore-existing Skip all files that exist on destination --hubic-no-chunk Don't chunk files during streaming upload.
--ignore-size Ignore size when skipping use mod-time or checksum. --ignore-case Ignore case in filters (case insensitive)
-I, --ignore-times Don't skip files that match size and time - transfer all files --ignore-checksum Skip post copy check of checksums.
--immutable Do not modify files. Fail if existing files have been modified. --ignore-errors delete even if there are I/O errors
--include stringArray Include files matching pattern --ignore-existing Skip all files that exist on destination
--include-from stringArray Read include patterns from file --ignore-size Ignore size when skipping use mod-time or checksum.
--jottacloud-hard-delete Delete files permanently rather than putting them into the trash. -I, --ignore-times Don't skip files that match size and time - transfer all files
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M) --immutable Do not modify files. Fail if existing files have been modified.
--jottacloud-mountpoint string The mountpoint to use. --include stringArray Include files matching pattern
--jottacloud-pass string Password. --include-from stringArray Read include patterns from file
--jottacloud-unlink Remove existing public link to file/folder with link command rather than creating. --jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
--jottacloud-user string User Name --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--local-no-check-updated Don't check to see if the files change during upload --jottacloud-mountpoint string The mountpoint to use.
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated) --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
--local-nounc string Disable UNC (long path names) conversion on Windows --jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fail's. (default 10M)
--log-file string Log everything to this file --jottacloud-user string User Name:
--log-format string Comma separated list of log format options (default "date,time") -l, --links Translate symlinks to/from regular files with a '.rclonelink' extension
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") --local-no-check-updated Don't check to see if the files change during upload
--low-level-retries int Number of low level retries to do. (default 10) --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) --local-nounc string Disable UNC (long path names) conversion on Windows
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000) --log-file string Log everything to this file
--max-delete int When synchronizing, limit the number of deletes (default -1) --log-format string Comma separated list of log format options (default "date,time")
--max-depth int If set limits the recursion depth to this. (default -1) --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) --low-level-retries int Number of low level retries to do. (default 10)
--max-transfer int Maximum size of data to transfer. (default off) --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--mega-debug Output more debug from Mega. --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--mega-hard-delete Delete files permanently rather than putting them into the trash. --max-delete int When synchronizing, limit the number of deletes (default -1)
--mega-pass string Password. --max-depth int If set limits the recursion depth to this. (default -1)
--mega-user string User name --max-size SizeSuffix Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--memprofile string Write memory profile to file --max-transfer SizeSuffix Maximum size of data to transfer. (default off)
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) --mega-debug Output more debug from Mega.
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) --mega-hard-delete Delete files permanently rather than putting them into the trash.
--modify-window duration Max time diff to be considered the same (default 1ns) --mega-pass string Password.
--no-check-certificate Do not verify the server SSL certificate. Insecure. --mega-user string User name
--no-gzip-encoding Don't set Accept-Encoding: gzip. --memprofile string Write memory profile to file
--no-traverse Obsolete - does nothing. --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--no-update-modtime Don't update destination mod-time if files identical. --min-size SizeSuffix Only transfer files bigger than this in k or suffix b|k|M|G (default off)
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only). --modify-window duration Max time diff to be considered the same (default 1ns)
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M) --no-check-certificate Do not verify the server SSL certificate. Insecure.
--onedrive-client-id string Microsoft App Client Id --no-gzip-encoding Don't set Accept-Encoding: gzip.
--onedrive-client-secret string Microsoft App Client Secret --no-traverse Don't traverse destination file system on copy.
--onedrive-drive-id string The ID of the drive to use --no-update-modtime Don't update destination mod-time if files identical.
--onedrive-drive-type string The type of the drive ( personal | business | documentLibrary ) -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-expose-onenote-files Set to make OneNote files show up in directory listings. --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--opendrive-password string Password. --onedrive-client-id string Microsoft App Client Id
--opendrive-username string Username --onedrive-client-secret string Microsoft App Client Secret
--pcloud-client-id string Pcloud App Client Id --onedrive-drive-id string The ID of the drive to use
--pcloud-client-secret string Pcloud App Client Secret --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
-P, --progress Show progress during transfer. --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
--qingstor-access-key-id string QingStor Access Key ID --opendrive-password string Password.
--qingstor-connection-retries int Number of connection retries. (default 3) --opendrive-username string Username
--qingstor-endpoint string Enter a endpoint URL to connection QingStor API. --pcloud-client-id string Pcloud App Client Id
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank. --pcloud-client-secret string Pcloud App Client Secret
--qingstor-secret-access-key string QingStor Secret Access Key (password) -P, --progress Show progress during transfer.
--qingstor-zone string Zone to connect to. --qingstor-access-key-id string QingStor Access Key ID
-q, --quiet Print as little stuff as possible --qingstor-chunk-size SizeSuffix Chunk size to use for uploading. (default 4M)
--rc Enable the remote control server. --qingstor-connection-retries int Number of connection retries. (default 3)
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") --qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate) --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
--rc-client-ca string Client certificate authority to verify clients with --qingstor-secret-access-key string QingStor Secret Access Key (password)
--rc-files string Path to local files to serve on the HTTP server. --qingstor-upload-concurrency int Concurrency for multipart uploads. (default 1)
--rc-htpasswd string htpasswd file - if not provided no authentication is done --qingstor-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M)
--rc-key string SSL PEM Private key --qingstor-zone string Zone to connect to.
--rc-max-header-bytes int Maximum size of request header (default 4096) -q, --quiet Print as little stuff as possible
--rc-no-auth Don't require auth for certain methods. --rc Enable the remote control server.
--rc-pass string Password for authentication. --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
--rc-realm string realm for authentication (default "rclone") --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--rc-serve Enable the serving of remote objects. --rc-client-ca string Client certificate authority to verify clients with
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) --rc-files string Path to local files to serve on the HTTP server.
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) --rc-htpasswd string htpasswd file - if not provided no authentication is done
--rc-user string User name for authentication. --rc-key string SSL PEM Private key
--retries int Retry operations this many times if they fail (default 3) --rc-max-header-bytes int Maximum size of request header (default 4096)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) --rc-no-auth Don't require auth for certain methods.
--s3-access-key-id string AWS Access Key ID. --rc-pass string Password for authentication.
--s3-acl string Canned ACL used when creating buckets and storing or copying objects. --rc-realm string realm for authentication (default "rclone")
--s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M) --rc-serve Enable the serving of remote objects.
--s3-disable-checksum Don't store MD5 checksum with object metadata --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--s3-endpoint string Endpoint for S3 API. --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). --rc-user string User name for authentication.
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true) --retries int Retry operations this many times if they fail (default 3)
--s3-location-constraint string Location constraint - must be set to match the Region. --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--s3-provider string Choose your S3 provider. --s3-access-key-id string AWS Access Key ID.
--s3-region string Region to connect to. --s3-acl string Canned ACL used when creating buckets and storing or copying objects.
--s3-secret-access-key string AWS Secret Access Key (password) --s3-bucket-acl string Canned ACL used when creating buckets.
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
      --s3-disable-checksum                          Don't store MD5 checksum with object metadata
      --s3-endpoint string                           Endpoint for S3 API.
      --s3-env-auth                                  Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
      --s3-force-path-style                          If true use path style access if false use virtual hosted style. (default true)
      --s3-location-constraint string                Location constraint - must be set to match the Region.
      --s3-provider string                           Choose your S3 provider.
      --s3-region string                             Region to connect to.
      --s3-secret-access-key string                  AWS Secret Access Key (password)
      --s3-server-side-encryption string             The server-side encryption algorithm used when storing this object in S3.
      --s3-session-token string                      An AWS session token
      --s3-sse-kms-key-id string                     If using KMS ID you must provide the ARN of Key.
      --s3-storage-class string                      The storage class to use when storing new objects in S3.
      --s3-upload-concurrency int                    Concurrency for multipart uploads. (default 4)
      --s3-upload-cutoff SizeSuffix                  Cutoff for switching to chunked upload (default 200M)
      --s3-v2-auth                                   If true use v2 authentication.
      --sftp-ask-password                            Allow asking for SFTP password when needed.
      --sftp-disable-hashcheck                       Disable the execution of SSH commands to determine if remote file hashing is available.
      --sftp-host string                             SSH host to connect to
      --sftp-key-file string                         Path to PEM-encoded private key file, leave blank or set key-use-agent to use ssh-agent.
      --sftp-key-file-pass string                    The passphrase to decrypt the PEM-encoded private key file.
      --sftp-key-use-agent                           When set forces the usage of the ssh-agent.
      --sftp-pass string                             SSH password, leave blank to use ssh-agent.
      --sftp-path-override string                    Override path used by SSH connection.
      --sftp-port string                             SSH port, leave blank to use default (22)
      --sftp-set-modtime                             Set the modified time on the remote if set. (default true)
      --sftp-use-insecure-cipher                     Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
      --sftp-user string                             SSH username, leave blank for current username, ncw
      --size-only                                    Skip based on size only, not mod-time or checksum
      --skip-links                                   Don't warn about skipped symlinks.
      --stats duration                               Interval between printing stats, e.g. 500ms, 60s, 5m. (0 to disable) (default 1m0s)
      --stats-file-name-length int                   Max file name length in stats. 0 for no limit (default 45)
      --stats-log-level string                       Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
      --stats-one-line                               Make the stats fit on one line.
      --stats-unit string                            Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
      --streaming-upload-cutoff SizeSuffix           Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
      --suffix string                                Suffix for use with --backup-dir.
      --swift-application-credential-id string       Application Credential ID (OS_APPLICATION_CREDENTIAL_ID)
      --swift-application-credential-name string     Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME)
      --swift-application-credential-secret string   Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET)
      --swift-auth string                            Authentication URL for server (OS_AUTH_URL).
      --swift-auth-token string                      Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
      --swift-auth-version int                       AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
      --swift-chunk-size SizeSuffix                  Above this size files will be chunked into a _segments container. (default 5G)
      --swift-domain string                          User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
      --swift-endpoint-type string                   Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
      --swift-env-auth                               Get swift credentials from environment variables in standard OpenStack form.
      --swift-key string                             API key or password (OS_PASSWORD).
      --swift-no-chunk                               Don't chunk files during streaming upload.
      --swift-region string                          Region name - optional (OS_REGION_NAME)
      --swift-storage-policy string                  The storage policy to use when creating a new container
      --swift-storage-url string                     Storage URL - optional (OS_STORAGE_URL)
      --swift-tenant string                          Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
      --swift-tenant-domain string                   Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
      --swift-tenant-id string                       Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
      --swift-user string                            User name to log in (OS_USERNAME).
      --swift-user-id string                         User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
      --syslog                                       Use Syslog for logging
      --syslog-facility string                       Facility for syslog, eg KERN,USER,... (default "DAEMON")
      --timeout duration                             IO idle timeout (default 5m0s)
      --tpslimit float                               Limit HTTP transactions per second to this.
      --tpslimit-burst int                           Max burst of transactions for --tpslimit. (default 1)
      --track-renames                                When synchronizing, track file renames and do a server side move if possible
      --transfers int                                Number of file transfers to run in parallel. (default 4)
      --union-remotes string                         List of space separated remotes.
  -u, --update                                       Skip files that are newer on the destination.
      --use-cookies                                  Enable session cookiejar.
      --use-mmap                                     Use mmap allocator (see docs).
      --use-server-modtime                           Use server modified time instead of object metadata
      --user-agent string                            Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46")
  -v, --verbose count                                Print lots more stuff (repeat for more)
      --webdav-bearer-token string                   Bearer token instead of user/pass (eg a Macaroon)
      --webdav-pass string                           Password.
      --webdav-url string                            URL of http host to connect to
      --webdav-user string                           User name
      --webdav-vendor string                         Name of the Webdav site/service/software you are using
      --yandex-client-id string                      Yandex Client Id
      --yandex-client-secret string                  Yandex Client Secret
      --yandex-unlink                                Remove existing public link to file/folder with link command rather than creating.
```
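As a quick illustration of how a few of the global flags listed above combine on the command line, here is a hypothetical invocation; the remote name `myremote:` and the local path are placeholders, not remotes defined in this manual:

```shell
# Hypothetical sync using a handful of the flags above.
# "myremote:" is a placeholder remote name configured with `rclone config`.
rclone sync /home/user/photos myremote:photos \
  --transfers 8 \
  --bwlimit "08:00,512k 19:00,off" \
  --size-only \
  --stats 30s \
  --stats-one-line
```

Note that `--bwlimit` accepts either a plain rate (e.g. `512k`) or a full timetable as shown, and `--size-only` skips the mod-time comparison, which can save metadata requests on some remotes.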
### SEE ALSO

* [rclone](/commands/rclone/)	 - Show help for rclone commands, flags and backends.

###### Auto generated by spf13/cobra on 9-Feb-2019
View File
@@ -1,5 +1,5 @@
---
date: 2019-02-09T10:42:18Z
title: "rclone cachestats"
slug: rclone_cachestats
url: /commands/rclone_cachestats/
@@ -27,285 +27,303 @@ rclone cachestats source: [flags]
### Options inherited from parent commands
```
      --acd-auth-url string                          Auth server URL.
      --acd-client-id string                         Amazon Application Client ID.
      --acd-client-secret string                     Amazon Application Client Secret.
      --acd-templink-threshold SizeSuffix            Files >= this size will be downloaded via their tempLink. (default 9G)
      --acd-token-url string                         Token server url.
      --acd-upload-wait-per-gb Duration              Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
      --alias-remote string                          Remote or path to alias.
      --ask-password                                 Allow prompt for password for encrypted configuration. (default true)
      --auto-confirm                                 If enabled, do not request console confirmation.
      --azureblob-access-tier string                 Access tier of blob: hot, cool or archive.
      --azureblob-account string                     Storage Account Name (leave blank to use connection string or SAS URL)
      --azureblob-chunk-size SizeSuffix              Upload chunk size (<= 100MB). (default 4M)
      --azureblob-endpoint string                    Endpoint for the service
      --azureblob-key string                         Storage Account Key (leave blank to use connection string or SAS URL)
      --azureblob-list-chunk int                     Size of blob list. (default 5000)
      --azureblob-sas-url string                     SAS URL for container level access only
      --azureblob-upload-cutoff SizeSuffix           Cutoff for switching to chunked upload (<= 256MB). (default 256M)
      --b2-account string                            Account ID or Application Key ID
      --b2-chunk-size SizeSuffix                     Upload chunk size. Must fit in memory. (default 96M)
      --b2-disable-checksum                          Disable checksums for large (> upload cutoff) files
      --b2-endpoint string                           Endpoint for the service.
      --b2-hard-delete                               Permanently delete files on remote removal, otherwise hide files.
      --b2-key string                                Application Key
      --b2-test-mode string                          A flag string for X-Bz-Test-Mode header for debugging.
      --b2-upload-cutoff SizeSuffix                  Cutoff for switching to chunked upload. (default 200M)
      --b2-versions                                  Include old versions in directory listings.
      --backup-dir string                            Make backups into hierarchy based in DIR.
      --bind string                                  Local address to bind to for outgoing connections, IPv4, IPv6 or name.
      --box-client-id string                         Box App Client Id.
      --box-client-secret string                     Box App Client Secret
      --box-commit-retries int                       Max number of times to try committing a multipart file. (default 100)
      --box-upload-cutoff SizeSuffix                 Cutoff for switching to multipart upload (>= 50MB). (default 50M)
      --buffer-size SizeSuffix                       In memory buffer size when reading files for each --transfer. (default 16M)
      --bwlimit BwTimetable                          Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
      --cache-chunk-clean-interval Duration          How often should the cache perform cleanups of the chunk storage. (default 1m0s)
      --cache-chunk-no-memory                        Disable the in-memory cache for storing chunks during streaming.
      --cache-chunk-path string                      Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend")
      --cache-chunk-size SizeSuffix                  The size of a chunk (partial file data). (default 5M)
      --cache-chunk-total-size SizeSuffix            The total size that the chunks can take up on the local disk. (default 10G)
      --cache-db-path string                         Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend")
      --cache-db-purge                               Clear all the cached data for this remote on start.
      --cache-db-wait-time Duration                  How long to wait for the DB to be available - 0 is unlimited (default 1s)
      --cache-dir string                             Directory rclone will use for caching. (default "$HOME/.cache/rclone")
      --cache-info-age Duration                      How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
      --cache-plex-insecure string                   Skip all certificate verifications when connecting to the Plex server
      --cache-plex-password string                   The password of the Plex user
      --cache-plex-url string                        The URL of the Plex server
      --cache-plex-username string                   The username of the Plex user
      --cache-read-retries int                       How many times to retry a read from a cache storage. (default 10)
      --cache-remote string                          Remote to cache.
      --cache-rps int                                Limits the number of requests per second to the source FS (-1 to disable) (default -1)
      --cache-tmp-upload-path string                 Directory to keep temporary files until they are uploaded.
      --cache-tmp-wait-time Duration                 How long should files be stored in local cache before being uploaded (default 15s)
      --cache-workers int                            How many workers should run in parallel to download chunks. (default 4)
      --cache-writes                                 Cache file data on writes through the FS
      --checkers int                                 Number of checkers to run in parallel. (default 8)
  -c, --checksum                                     Skip based on checksum (if available) & size, not mod-time & size
      --config string                                Config file. (default "/home/ncw/.rclone.conf")
      --contimeout duration                          Connect timeout (default 1m0s)
  -L, --copy-links                                   Follow symlinks and copy the pointed to item.
      --cpuprofile string                            Write cpu profile to file
      --crypt-directory-name-encryption              Option to either encrypt directory names or leave them intact. (default true)
      --crypt-filename-encryption string             How to encrypt the filenames. (default "standard")
      --crypt-password string                        Password or pass phrase for encryption.
      --crypt-password2 string                       Password or pass phrase for salt. Optional but recommended.
      --crypt-remote string                          Remote to encrypt/decrypt.
      --crypt-show-mapping                           For all files listed show how the names encrypt.
      --delete-after                                 When synchronizing, delete files on destination after transferring (default)
      --delete-before                                When synchronizing, delete files on destination before transferring
      --delete-during                                When synchronizing, delete files during transfer
      --delete-excluded                              Delete files on dest excluded from sync
      --disable string                               Disable a comma separated list of features. Use help to see a list.
      --drive-acknowledge-abuse                      Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
      --drive-allow-import-name-change               Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
      --drive-alternate-export                       Use alternate export URLs for google documents export.
      --drive-auth-owner-only                        Only consider files owned by the authenticated user.
      --drive-chunk-size SizeSuffix                  Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
      --drive-client-id string                       Google Application Client Id
      --drive-client-secret string                   Google Application Client Secret
      --drive-export-formats string                  Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
      --drive-formats string                         Deprecated: see export_formats
      --drive-impersonate string                     Impersonate this user when using a service account.
      --drive-import-formats string                  Comma separated list of preferred formats for uploading Google docs.
      --drive-keep-revision-forever                  Keep new head revision of each file forever.
      --drive-list-chunk int                         Size of listing chunk 100-1000. 0 to disable. (default 1000)
      --drive-pacer-burst int                        Number of API calls to allow without sleeping. (default 100)
      --drive-pacer-min-sleep Duration               Minimum time to sleep between API calls. (default 100ms)
      --drive-root-folder-id string                  ID of the root folder
      --drive-scope string                           Scope that rclone should use when requesting access from drive.
      --drive-service-account-credentials string     Service Account Credentials JSON blob
      --drive-service-account-file string            Service Account Credentials JSON file path
      --drive-shared-with-me                         Only show files that are shared with me.
      --drive-skip-gdocs                             Skip google documents in all listings.
      --drive-team-drive string                      ID of the Team Drive
      --drive-trashed-only                           Only show files that are in the trash.
      --drive-upload-cutoff SizeSuffix               Cutoff for switching to chunked upload (default 8M)
      --drive-use-created-date                       Use file created date instead of modified date.
      --drive-use-trash                              Send files to the trash instead of deleting permanently. (default true)
      --drive-v2-download-min-size SizeSuffix        If Objects are greater, use drive v2 API to download. (default off)
      --dropbox-chunk-size SizeSuffix                Upload chunk size. (< 150M). (default 48M)
      --dropbox-client-id string                     Dropbox App Client Id
      --dropbox-client-secret string                 Dropbox App Client Secret
      --dropbox-impersonate string                   Impersonate this user when using a business account.
  -n, --dry-run                                      Do a trial run with no permanent changes
      --dump DumpFlags                               List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
      --dump-bodies                                  Dump HTTP headers and bodies - may contain sensitive info
      --dump-headers                                 Dump HTTP bodies - may contain sensitive info
      --exclude stringArray                          Exclude files matching pattern
      --exclude-from stringArray                     Read exclude patterns from file
      --exclude-if-present string                    Exclude directories if filename is present
      --fast-list                                    Use recursive list if available. Uses more memory but fewer transactions.
      --files-from stringArray                       Read list of source-file names from file
  -f, --filter stringArray                           Add a file-filtering rule
      --filter-from stringArray                      Read filtering patterns from a file
      --ftp-host string                              FTP host to connect to
      --ftp-pass string                              FTP password
      --ftp-port string                              FTP port, leave blank to use default (21)
      --ftp-user string                              FTP username, leave blank for current username, $USER
      --gcs-bucket-acl string                        Access Control List for new buckets.
      --gcs-client-id string                         Google Application Client Id
      --gcs-client-secret string                     Google Application Client Secret
      --gcs-location string                          Location for the newly created buckets.
      --gcs-object-acl string                        Access Control List for new objects.
      --gcs-project-number string                    Project number.
      --gcs-service-account-file string              Service Account Credentials JSON file path
      --gcs-storage-class string                     The storage class to use when storing objects in Google Cloud Storage.
      --http-url string                              URL of http host to connect to
      --hubic-chunk-size SizeSuffix                  Above this size files will be chunked into a _segments container. (default 5G)
      --hubic-client-id string                       Hubic Client Id
      --hubic-client-secret string                   Hubic Client Secret
      --hubic-no-chunk                               Don't chunk files during streaming upload.
      --ignore-case                                  Ignore case in filters (case insensitive)
      --ignore-checksum                              Skip post copy check of checksums.
      --ignore-errors                                Delete even if there are I/O errors
      --ignore-existing                              Skip all files that exist on destination
      --ignore-size                                  Ignore size when skipping use mod-time or checksum.
  -I, --ignore-times                                 Don't skip files that match size and time - transfer all files
      --immutable                                    Do not modify files. Fail if existing files have been modified.
      --include stringArray                          Include files matching pattern
      --include-from stringArray                     Read include patterns from file
      --jottacloud-hard-delete                       Delete files permanently rather than putting them into the trash.
      --jottacloud-md5-memory-limit SizeSuffix       Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
      --jottacloud-mountpoint string                 The mountpoint to use.
      --jottacloud-unlink                            Remove existing public link to file/folder with link command rather than creating.
      --jottacloud-upload-resume-limit SizeSuffix    Files bigger than this can be resumed if the upload fails. (default 10M)
      --jottacloud-user string                       User Name
  -l, --links                                        Translate symlinks to/from regular files with a '.rclonelink' extension
      --local-no-check-updated                       Don't check to see if the files change during upload
      --local-no-unicode-normalization               Don't apply unicode normalization to paths and filenames (Deprecated)
      --local-nounc string                           Disable UNC (long path names) conversion on Windows
      --log-file string                              Log everything to this file
      --log-format string                            Comma separated list of log format options (default "date,time")
      --log-level string                             Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
      --low-level-retries int                        Number of low level retries to do. (default 10)
      --max-age Duration                             Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
      --max-backlog int                              Maximum number of objects in sync or check backlog. (default 10000)
      --max-delete int                               When synchronizing, limit the number of deletes (default -1)
      --max-depth int                                If set limits the recursion depth to this. (default -1)
      --max-size SizeSuffix                          Only transfer files smaller than this in k or suffix b|k|M|G (default off)
      --max-transfer SizeSuffix                      Maximum size of data to transfer. (default off)
      --mega-debug                                   Output more debug from Mega.
      --mega-hard-delete                             Delete files permanently rather than putting them into the trash.
      --mega-pass string                             Password.
      --mega-user string                             User name
      --memprofile string                            Write memory profile to file
      --min-age Duration                             Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
      --min-size SizeSuffix                          Only transfer files bigger than this in k or suffix b|k|M|G (default off)
      --modify-window duration                       Max time diff to be considered the same (default 1ns)
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M) --no-check-certificate Do not verify the server SSL certificate. Insecure.
--onedrive-client-id string Microsoft App Client Id --no-gzip-encoding Don't set Accept-Encoding: gzip.
--onedrive-client-secret string Microsoft App Client Secret --no-traverse Don't traverse destination file system on copy.
--onedrive-drive-id string The ID of the drive to use --no-update-modtime Don't update destination mod-time if files identical.
--onedrive-drive-type string The type of the drive ( personal | business | documentLibrary ) -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-expose-onenote-files Set to make OneNote files show up in directory listings. --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--opendrive-password string Password. --onedrive-client-id string Microsoft App Client Id
--opendrive-username string Username --onedrive-client-secret string Microsoft App Client Secret
--pcloud-client-id string Pcloud App Client Id --onedrive-drive-id string The ID of the drive to use
--pcloud-client-secret string Pcloud App Client Secret --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
-P, --progress Show progress during transfer. --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
--qingstor-access-key-id string QingStor Access Key ID --opendrive-password string Password.
--qingstor-connection-retries int Number of connection retries. (default 3) --opendrive-username string Username
--qingstor-endpoint string Enter a endpoint URL to connection QingStor API. --pcloud-client-id string Pcloud App Client Id
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank. --pcloud-client-secret string Pcloud App Client Secret
--qingstor-secret-access-key string QingStor Secret Access Key (password) -P, --progress Show progress during transfer.
--qingstor-zone string Zone to connect to. --qingstor-access-key-id string QingStor Access Key ID
-q, --quiet Print as little stuff as possible --qingstor-chunk-size SizeSuffix Chunk size to use for uploading. (default 4M)
--rc Enable the remote control server. --qingstor-connection-retries int Number of connection retries. (default 3)
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") --qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate) --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
--rc-client-ca string Client certificate authority to verify clients with --qingstor-secret-access-key string QingStor Secret Access Key (password)
--rc-files string Path to local files to serve on the HTTP server. --qingstor-upload-concurrency int Concurrency for multipart uploads. (default 1)
--rc-htpasswd string htpasswd file - if not provided no authentication is done --qingstor-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M)
--rc-key string SSL PEM Private key --qingstor-zone string Zone to connect to.
--rc-max-header-bytes int Maximum size of request header (default 4096) -q, --quiet Print as little stuff as possible
--rc-no-auth Don't require auth for certain methods. --rc Enable the remote control server.
--rc-pass string Password for authentication. --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
--rc-realm string realm for authentication (default "rclone") --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--rc-serve Enable the serving of remote objects. --rc-client-ca string Client certificate authority to verify clients with
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) --rc-files string Path to local files to serve on the HTTP server.
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) --rc-htpasswd string htpasswd file - if not provided no authentication is done
--rc-user string User name for authentication. --rc-key string SSL PEM Private key
--retries int Retry operations this many times if they fail (default 3) --rc-max-header-bytes int Maximum size of request header (default 4096)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) --rc-no-auth Don't require auth for certain methods.
--s3-access-key-id string AWS Access Key ID. --rc-pass string Password for authentication.
--s3-acl string Canned ACL used when creating buckets and storing or copying objects. --rc-realm string realm for authentication (default "rclone")
--s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M) --rc-serve Enable the serving of remote objects.
--s3-disable-checksum Don't store MD5 checksum with object metadata --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--s3-endpoint string Endpoint for S3 API. --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). --rc-user string User name for authentication.
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true) --retries int Retry operations this many times if they fail (default 3)
--s3-location-constraint string Location constraint - must be set to match the Region. --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--s3-provider string Choose your S3 provider. --s3-access-key-id string AWS Access Key ID.
--s3-region string Region to connect to. --s3-acl string Canned ACL used when creating buckets and storing or copying objects.
--s3-secret-access-key string AWS Secret Access Key (password) --s3-bucket-acl string Canned ACL used when creating buckets.
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
--s3-session-token string An AWS session token --s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. --s3-endpoint string Endpoint for S3 API.
--s3-storage-class string The storage class to use when storing new objects in S3. --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2) --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--s3-v2-auth If true use v2 authentication. --s3-location-constraint string Location constraint - must be set to match the Region.
--sftp-ask-password Allow asking for SFTP password when needed. --s3-provider string Choose your S3 provider.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. --s3-region string Region to connect to.
--sftp-host string SSH host to connect to --s3-secret-access-key string AWS Secret Access Key (password)
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent. --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--sftp-pass string SSH password, leave blank to use ssh-agent. --s3-session-token string An AWS session token
--sftp-path-override string Override path used by SSH connection. --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--sftp-port string SSH port, leave blank to use default (22) --s3-storage-class string The storage class to use when storing new objects in S3.
--sftp-set-modtime Set the modified time on the remote if set. (default true) --s3-upload-concurrency int Concurrency for multipart uploads. (default 4)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker. --s3-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M)
--sftp-user string SSH username, leave blank for current username, ncw --s3-v2-auth If true use v2 authentication.
--size-only Skip based on size only, not mod-time or checksum --sftp-ask-password Allow asking for SFTP password when needed.
--skip-links Don't warn about skipped symlinks. --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) --sftp-host string SSH host to connect to
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40) --sftp-key-file string Path to PEM-encoded private key file, leave blank or set key-use-agent to use ssh-agent.
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") --sftp-key-file-pass string The passphrase to decrypt the PEM-encoded private key file.
--stats-one-line Make the stats fit on one line. --sftp-key-use-agent When set forces the usage of the ssh-agent.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") --sftp-pass string SSH password, leave blank to use ssh-agent.
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) --sftp-path-override string Override path used by SSH connection.
--suffix string Suffix for use with --backup-dir. --sftp-port string SSH port, leave blank to use default (22)
--swift-auth string Authentication URL for server (OS_AUTH_URL). --sftp-set-modtime Set the modified time on the remote if set. (default true)
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) --sftp-user string SSH username, leave blank for current username, ncw
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) --size-only Skip based on size only, not mod-time or checksum
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) --skip-links Don't warn about skipped symlinks.
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form. --stats-file-name-length int Max file name length in stats. 0 for no limit (default 45)
--swift-key string API key or password (OS_PASSWORD). --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--swift-region string Region name - optional (OS_REGION_NAME) --stats-one-line Make the stats fit on one line.
--swift-storage-policy string The storage policy to use when creating a new container --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL) --streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) --suffix string Suffix for use with --backup-dir.
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) --swift-application-credential-id string Application Credential ID (OS_APPLICATION_CREDENTIAL_ID)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) --swift-application-credential-name string Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME)
--swift-user string User name to log in (OS_USERNAME). --swift-application-credential-secret string Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET)
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). --swift-auth string Authentication URL for server (OS_AUTH_URL).
--syslog Use Syslog for logging --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--timeout duration IO idle timeout (default 5m0s) --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--tpslimit float Limit HTTP transactions per second to this. --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--track-renames When synchronizing, track file renames and do a server side move if possible --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--transfers int Number of file transfers to run in parallel. (default 4) --swift-key string API key or password (OS_PASSWORD).
--union-remotes string List of space separated remotes. --swift-no-chunk Don't chunk files during streaming upload.
-u, --update Skip files that are newer on the destination. --swift-region string Region name - optional (OS_REGION_NAME)
--use-server-modtime Use server modified time instead of object metadata --swift-storage-policy string The storage policy to use when creating a new container
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.45") --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
-v, --verbose count Print lots more stuff (repeat for more) --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon) --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--webdav-pass string Password. --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--webdav-url string URL of http host to connect to --swift-user string User name to log in (OS_USERNAME).
--webdav-user string User name --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--webdav-vendor string Name of the Webdav site/service/software you are using --syslog Use Syslog for logging
--yandex-client-id string Yandex Client Id --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--yandex-client-secret string Yandex Client Secret --timeout duration IO idle timeout (default 5m0s)
--yandex-unlink Remove existing public link to file/folder with link command rather than creating. --tpslimit float Limit HTTP transactions per second to this.
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
--union-remotes string List of space separated remotes.
-u, --update Skip files that are newer on the destination.
--use-cookies Enable session cookiejar.
--use-mmap Use mmap allocator (see docs).
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
--yandex-unlink Remove existing public link to file/folder with link command rather than creating.
```
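A concrete invocation makes the flag dump above easier to digest. This is an illustrative sketch, not a prescribed workflow: the remote name `remote:` and both paths are hypothetical, and `--bwlimit` and `--dry-run` are standard rclone global flags assumed from elsewhere in these docs rather than from the listing above.

```shell
# Preview a sync that uses 8 parallel transfers, caps bandwidth at 1 MByte/s,
# skips files that are newer on the destination, and logs at INFO level to a file.
# --dry-run reports what would change without writing anything.
rclone sync /home/user/docs remote:backup/docs \
  --transfers 8 \
  --bwlimit 1M \
  --update \
  --log-level INFO \
  --log-file /tmp/rclone.log \
  --dry-run
```

Dropping `--dry-run` performs the sync for real; the other flags keep their meaning.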
### SEE ALSO

* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.

###### Auto generated by spf13/cobra on 9-Feb-2019


---
date: 2019-02-09T10:42:18Z
title: "rclone cat"
slug: rclone_cat
url: /commands/rclone_cat/
---
`rclone cat remote:path [flags]`
### Options inherited from parent commands

```
--acd-auth-url string Auth server URL.
--acd-client-id string Amazon Application Client ID.
--acd-client-secret string Amazon Application Client Secret.
--acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-token-url string Token server url.
--acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--alias-remote string Remote or path to alias.
--ask-password Allow prompt for password for encrypted configuration. (default true)
--auto-confirm If enabled, do not request console confirmation.
--azureblob-access-tier string Access tier of blob: hot, cool or archive.
--azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
--azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M)
--azureblob-endpoint string Endpoint for the service
--azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
--azureblob-list-chunk int Size of blob list. (default 5000)
--azureblob-sas-url string SAS URL for container level access only
--azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M)
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--b2-disable-checksum Disable checksums for large (> upload cutoff) files
--b2-endpoint string Endpoint for the service.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-key string Application Key
--b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
--b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M)
--b2-versions Include old versions in directory listings.
--backup-dir string Make backups into hierarchy based in DIR.
--bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
--box-client-id string Box App Client Id.
--box-client-secret string Box App Client Secret
--box-commit-retries int Max number of times to try committing a multipart file. (default 100)
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
--buffer-size SizeSuffix In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
--cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend")
--cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M)
--cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G)
--cache-db-path string Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend")
--cache-db-purge Clear all the cached data for this remote on start.
--cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string Directory rclone will use for caching. (default "$HOME/.cache/rclone")
--cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
--cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server
--cache-plex-password string The password of the Plex user
--cache-plex-url string The URL of the Plex server
--cache-plex-username string The username of the Plex user
--cache-read-retries int How many times to retry a read from a cache storage. (default 10)
--cache-remote string Remote to cache.
--cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1)
--cache-tmp-upload-path string Directory to keep temporary files until they are uploaded.
--cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
--cache-workers int How many workers should run in parallel to download chunks. (default 4)
--cache-writes Cache file data on writes through the FS
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum (if available) & size, not mod-time & size
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
-L, --copy-links Follow symlinks and copy the pointed to item.
--cpuprofile string Write cpu profile to file
--crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--crypt-password string Password or pass phrase for encryption.
--crypt-password2 string Password or pass phrase for salt. Optional but recommended.
--crypt-remote string Remote to encrypt/decrypt.
--crypt-show-mapping For all files listed show how the names encrypt.
--delete-after When synchronizing, delete files on destination after transferring (default)
--delete-before When synchronizing, delete files on destination before transferring
--delete-during When synchronizing, delete files during transfer
--delete-excluded Delete files on dest excluded from sync
--disable string Disable a comma separated list of features. Use help to see a list.
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
--drive-alternate-export Use alternate export URLs for google documents export.
--drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-client-id string Google Application Client Id
--drive-client-secret string Google Application Client Secret
--drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-formats string Deprecated: see export_formats
--drive-impersonate string Impersonate this user when using a service account.
--drive-import-formats string Comma separated list of preferred formats for uploading Google docs.
--drive-keep-revision-forever Keep new head revision of each file forever.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-pacer-burst int Number of API calls to allow without sleeping. (default 100)
--drive-pacer-min-sleep Duration Minimum time to sleep between API calls. (default 100ms)
--drive-root-folder-id string ID of the root folder
--drive-scope string Scope that rclone should use when requesting access from drive.
--drive-service-account-credentials string Service Account Credentials JSON blob
--drive-service-account-file string Service Account Credentials JSON file path
--drive-shared-with-me Only show files that are shared with me.
--drive-skip-gdocs Skip google documents in all listings.
--drive-use-created-date Use file created date instead of modified date., --drive-team-drive string ID of the Team Drive
--drive-use-trash Send files to the trash instead of deleting permanently. (default true) --drive-trashed-only Only show files that are in the trash.
--drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off) --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M) --drive-use-created-date Use file created date instead of modified date.,
--dropbox-client-id string Dropbox App Client Id --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--dropbox-client-secret string Dropbox App Client Secret --drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off)
--dropbox-impersonate string Impersonate this user when using a business account. --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M)
-n, --dry-run Do a trial run with no permanent changes --dropbox-client-id string Dropbox App Client Id
--dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles --dropbox-client-secret string Dropbox App Client Secret
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info --dropbox-impersonate string Impersonate this user when using a business account.
--dump-headers Dump HTTP bodies - may contain sensitive info -n, --dry-run Do a trial run with no permanent changes
--exclude stringArray Exclude files matching pattern --dump DumpFlags List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--exclude-from stringArray Read exclude patterns from file --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--exclude-if-present string Exclude directories if filename is present --dump-headers Dump HTTP bodies - may contain sensitive info
--fast-list Use recursive list if available. Uses more memory but fewer transactions. --exclude stringArray Exclude files matching pattern
--files-from stringArray Read list of source-file names from file --exclude-from stringArray Read exclude patterns from file
-f, --filter stringArray Add a file-filtering rule --exclude-if-present string Exclude directories if filename is present
--filter-from stringArray Read filtering patterns from a file --fast-list Use recursive list if available. Uses more memory but fewer transactions.
--ftp-host string FTP host to connect to --files-from stringArray Read list of source-file names from file
--ftp-pass string FTP password -f, --filter stringArray Add a file-filtering rule
--ftp-port string FTP port, leave blank to use default (21) --filter-from stringArray Read filtering patterns from a file
--ftp-user string FTP username, leave blank for current username, $USER --ftp-host string FTP host to connect to
--gcs-bucket-acl string Access Control List for new buckets. --ftp-pass string FTP password
--gcs-client-id string Google Application Client Id --ftp-port string FTP port, leave blank to use default (21)
--gcs-client-secret string Google Application Client Secret --ftp-user string FTP username, leave blank for current username, $USER
--gcs-location string Location for the newly created buckets. --gcs-bucket-acl string Access Control List for new buckets.
--gcs-object-acl string Access Control List for new objects. --gcs-client-id string Google Application Client Id
--gcs-project-number string Project number. --gcs-client-secret string Google Application Client Secret
--gcs-service-account-file string Service Account Credentials JSON file path --gcs-location string Location for the newly created buckets.
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage. --gcs-object-acl string Access Control List for new objects.
--http-url string URL of http host to connect to --gcs-project-number string Project number.
--hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) --gcs-service-account-file string Service Account Credentials JSON file path
--hubic-client-id string Hubic Client Id --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--hubic-client-secret string Hubic Client Secret --http-url string URL of http host to connect to
--ignore-case Ignore case in filters (case insensitive) --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--ignore-checksum Skip post copy check of checksums. --hubic-client-id string Hubic Client Id
--ignore-errors delete even if there are I/O errors --hubic-client-secret string Hubic Client Secret
--ignore-existing Skip all files that exist on destination --hubic-no-chunk Don't chunk files during streaming upload.
--ignore-size Ignore size when skipping use mod-time or checksum. --ignore-case Ignore case in filters (case insensitive)
-I, --ignore-times Don't skip files that match size and time - transfer all files --ignore-checksum Skip post copy check of checksums.
--immutable Do not modify files. Fail if existing files have been modified. --ignore-errors delete even if there are I/O errors
--include stringArray Include files matching pattern --ignore-existing Skip all files that exist on destination
--include-from stringArray Read include patterns from file --ignore-size Ignore size when skipping use mod-time or checksum.
--jottacloud-hard-delete Delete files permanently rather than putting them into the trash. -I, --ignore-times Don't skip files that match size and time - transfer all files
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M) --immutable Do not modify files. Fail if existing files have been modified.
--jottacloud-mountpoint string The mountpoint to use. --include stringArray Include files matching pattern
--jottacloud-pass string Password. --include-from stringArray Read include patterns from file
--jottacloud-unlink Remove existing public link to file/folder with link command rather than creating. --jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
--jottacloud-user string User Name --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--local-no-check-updated Don't check to see if the files change during upload --jottacloud-mountpoint string The mountpoint to use.
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated) --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
--local-nounc string Disable UNC (long path names) conversion on Windows --jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fail's. (default 10M)
--log-file string Log everything to this file --jottacloud-user string User Name:
--log-format string Comma separated list of log format options (default "date,time") -l, --links Translate symlinks to/from regular files with a '.rclonelink' extension
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") --local-no-check-updated Don't check to see if the files change during upload
--low-level-retries int Number of low level retries to do. (default 10) --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) --local-nounc string Disable UNC (long path names) conversion on Windows
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000) --log-file string Log everything to this file
--max-delete int When synchronizing, limit the number of deletes (default -1) --log-format string Comma separated list of log format options (default "date,time")
--max-depth int If set limits the recursion depth to this. (default -1) --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) --low-level-retries int Number of low level retries to do. (default 10)
--max-transfer int Maximum size of data to transfer. (default off) --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--mega-debug Output more debug from Mega. --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--mega-hard-delete Delete files permanently rather than putting them into the trash. --max-delete int When synchronizing, limit the number of deletes (default -1)
--mega-pass string Password. --max-depth int If set limits the recursion depth to this. (default -1)
--mega-user string User name --max-size SizeSuffix Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--memprofile string Write memory profile to file --max-transfer SizeSuffix Maximum size of data to transfer. (default off)
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) --mega-debug Output more debug from Mega.
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) --mega-hard-delete Delete files permanently rather than putting them into the trash.
--modify-window duration Max time diff to be considered the same (default 1ns) --mega-pass string Password.
--no-check-certificate Do not verify the server SSL certificate. Insecure. --mega-user string User name
--no-gzip-encoding Don't set Accept-Encoding: gzip. --memprofile string Write memory profile to file
--no-traverse Obsolete - does nothing. --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--no-update-modtime Don't update destination mod-time if files identical. --min-size SizeSuffix Only transfer files bigger than this in k or suffix b|k|M|G (default off)
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only). --modify-window duration Max time diff to be considered the same (default 1ns)
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M) --no-check-certificate Do not verify the server SSL certificate. Insecure.
--onedrive-client-id string Microsoft App Client Id --no-gzip-encoding Don't set Accept-Encoding: gzip.
--onedrive-client-secret string Microsoft App Client Secret --no-traverse Don't traverse destination file system on copy.
--onedrive-drive-id string The ID of the drive to use --no-update-modtime Don't update destination mod-time if files identical.
--onedrive-drive-type string The type of the drive ( personal | business | documentLibrary ) -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-expose-onenote-files Set to make OneNote files show up in directory listings. --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--opendrive-password string Password. --onedrive-client-id string Microsoft App Client Id
--opendrive-username string Username --onedrive-client-secret string Microsoft App Client Secret
--pcloud-client-id string Pcloud App Client Id --onedrive-drive-id string The ID of the drive to use
--pcloud-client-secret string Pcloud App Client Secret --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
-P, --progress Show progress during transfer. --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
--qingstor-access-key-id string QingStor Access Key ID --opendrive-password string Password.
--qingstor-connection-retries int Number of connection retries. (default 3) --opendrive-username string Username
--qingstor-endpoint string Enter a endpoint URL to connection QingStor API. --pcloud-client-id string Pcloud App Client Id
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank. --pcloud-client-secret string Pcloud App Client Secret
--qingstor-secret-access-key string QingStor Secret Access Key (password) -P, --progress Show progress during transfer.
--qingstor-zone string Zone to connect to. --qingstor-access-key-id string QingStor Access Key ID
-q, --quiet Print as little stuff as possible --qingstor-chunk-size SizeSuffix Chunk size to use for uploading. (default 4M)
--rc Enable the remote control server. --qingstor-connection-retries int Number of connection retries. (default 3)
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") --qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate) --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
--rc-client-ca string Client certificate authority to verify clients with --qingstor-secret-access-key string QingStor Secret Access Key (password)
--rc-files string Path to local files to serve on the HTTP server. --qingstor-upload-concurrency int Concurrency for multipart uploads. (default 1)
--rc-htpasswd string htpasswd file - if not provided no authentication is done --qingstor-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M)
--rc-key string SSL PEM Private key --qingstor-zone string Zone to connect to.
--rc-max-header-bytes int Maximum size of request header (default 4096) -q, --quiet Print as little stuff as possible
--rc-no-auth Don't require auth for certain methods. --rc Enable the remote control server.
--rc-pass string Password for authentication. --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
--rc-realm string realm for authentication (default "rclone") --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--rc-serve Enable the serving of remote objects. --rc-client-ca string Client certificate authority to verify clients with
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) --rc-files string Path to local files to serve on the HTTP server.
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) --rc-htpasswd string htpasswd file - if not provided no authentication is done
--rc-user string User name for authentication. --rc-key string SSL PEM Private key
--retries int Retry operations this many times if they fail (default 3) --rc-max-header-bytes int Maximum size of request header (default 4096)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) --rc-no-auth Don't require auth for certain methods.
--s3-access-key-id string AWS Access Key ID. --rc-pass string Password for authentication.
--s3-acl string Canned ACL used when creating buckets and storing or copying objects. --rc-realm string realm for authentication (default "rclone")
--s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M) --rc-serve Enable the serving of remote objects.
--s3-disable-checksum Don't store MD5 checksum with object metadata --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--s3-endpoint string Endpoint for S3 API. --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). --rc-user string User name for authentication.
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true) --retries int Retry operations this many times if they fail (default 3)
--s3-location-constraint string Location constraint - must be set to match the Region. --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--s3-provider string Choose your S3 provider. --s3-access-key-id string AWS Access Key ID.
--s3-region string Region to connect to. --s3-acl string Canned ACL used when creating buckets and storing or copying objects.
--s3-secret-access-key string AWS Secret Access Key (password) --s3-bucket-acl string Canned ACL used when creating buckets.
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
--s3-session-token string An AWS session token --s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. --s3-endpoint string Endpoint for S3 API.
--s3-storage-class string The storage class to use when storing new objects in S3. --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2) --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--s3-v2-auth If true use v2 authentication. --s3-location-constraint string Location constraint - must be set to match the Region.
--sftp-ask-password Allow asking for SFTP password when needed. --s3-provider string Choose your S3 provider.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. --s3-region string Region to connect to.
--sftp-host string SSH host to connect to --s3-secret-access-key string AWS Secret Access Key (password)
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent. --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--sftp-pass string SSH password, leave blank to use ssh-agent. --s3-session-token string An AWS session token
--sftp-path-override string Override path used by SSH connection. --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--sftp-port string SSH port, leave blank to use default (22) --s3-storage-class string The storage class to use when storing new objects in S3.
--sftp-set-modtime Set the modified time on the remote if set. (default true) --s3-upload-concurrency int Concurrency for multipart uploads. (default 4)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker. --s3-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M)
--sftp-user string SSH username, leave blank for current username, ncw --s3-v2-auth If true use v2 authentication.
--size-only Skip based on size only, not mod-time or checksum --sftp-ask-password Allow asking for SFTP password when needed.
--skip-links Don't warn about skipped symlinks. --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) --sftp-host string SSH host to connect to
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40) --sftp-key-file string Path to PEM-encoded private key file, leave blank or set key-use-agent to use ssh-agent.
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") --sftp-key-file-pass string The passphrase to decrypt the PEM-encoded private key file.
--stats-one-line Make the stats fit on one line. --sftp-key-use-agent When set forces the usage of the ssh-agent.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") --sftp-pass string SSH password, leave blank to use ssh-agent.
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) --sftp-path-override string Override path used by SSH connection.
--suffix string Suffix for use with --backup-dir. --sftp-port string SSH port, leave blank to use default (22)
--swift-auth string Authentication URL for server (OS_AUTH_URL). --sftp-set-modtime Set the modified time on the remote if set. (default true)
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) --sftp-user string SSH username, leave blank for current username, ncw
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) --size-only Skip based on size only, not mod-time or checksum
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) --skip-links Don't warn about skipped symlinks.
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form. --stats-file-name-length int Max file name length in stats. 0 for no limit (default 45)
--swift-key string API key or password (OS_PASSWORD). --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--swift-region string Region name - optional (OS_REGION_NAME) --stats-one-line Make the stats fit on one line.
--swift-storage-policy string The storage policy to use when creating a new container --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL) --streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) --suffix string Suffix for use with --backup-dir.
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) --swift-application-credential-id string Application Credential ID (OS_APPLICATION_CREDENTIAL_ID)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) --swift-application-credential-name string Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME)
--swift-user string User name to log in (OS_USERNAME). --swift-application-credential-secret string Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET)
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). --swift-auth string Authentication URL for server (OS_AUTH_URL).
--syslog Use Syslog for logging --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--timeout duration IO idle timeout (default 5m0s) --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--tpslimit float Limit HTTP transactions per second to this. --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--track-renames When synchronizing, track file renames and do a server side move if possible --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--transfers int Number of file transfers to run in parallel. (default 4) --swift-key string API key or password (OS_PASSWORD).
--union-remotes string List of space separated remotes. --swift-no-chunk Don't chunk files during streaming upload.
-u, --update Skip files that are newer on the destination. --swift-region string Region name - optional (OS_REGION_NAME)
--use-server-modtime Use server modified time instead of object metadata --swift-storage-policy string The storage policy to use when creating a new container
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.45") --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
-v, --verbose count Print lots more stuff (repeat for more) --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon) --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--webdav-pass string Password. --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--webdav-url string URL of http host to connect to --swift-user string User name to log in (OS_USERNAME).
--webdav-user string User name --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--webdav-vendor string Name of the Webdav site/service/software you are using --syslog Use Syslog for logging
--yandex-client-id string Yandex Client Id --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--yandex-client-secret string Yandex Client Secret --timeout duration IO idle timeout (default 5m0s)
--yandex-unlink Remove existing public link to file/folder with link command rather than creating. --tpslimit float Limit HTTP transactions per second to this.
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
--union-remotes string List of space separated remotes.
-u, --update Skip files that are newer on the destination.
--use-cookies Enable session cookiejar.
--use-mmap Use mmap allocator (see docs).
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
--yandex-unlink Remove existing public link to file/folder with link command rather than creating.
```
### SEE ALSO
* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
###### Auto generated by spf13/cobra on 9-Feb-2019
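The global flags above are reference material, but two of the v1.46 changes in this diff are worth a concrete illustration. The commands below are a hypothetical sketch: the remote name `remote:` and the local paths are assumptions for illustration, not taken from the diff, and the commands are only echoed here rather than executed (drop the `echo` to run them against a real rclone v1.46).

```shell
# Sketch only: remote name and paths are assumed, commands are echoed.

# --no-traverse was restored in v1.46 for copying a small number of
# files into a large destination without listing the whole destination.
echo rclone copy --no-traverse /path/to/new-files remote:backup

# The s3 backend gained upload tuning flags in this release; these set
# the multipart cutoff and per-file upload concurrency.
echo rclone copy --s3-upload-cutoff 200M --s3-upload-concurrency 4 \
    /path/to/big-files remote:bucket
```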


@@ -1,5 +1,5 @@
---
date: 2019-02-09T10:42:18Z
title: "rclone check"
slug: rclone_check
url: /commands/rclone_check/
@@ -43,285 +43,303 @@ rclone check source:path dest:path [flags]
### Options inherited from parent commands
```
      --acd-auth-url string  Auth server URL.
      --acd-client-id string  Amazon Application Client ID.
      --acd-client-secret string  Amazon Application Client Secret.
      --acd-templink-threshold SizeSuffix  Files >= this size will be downloaded via their tempLink. (default 9G)
      --acd-token-url string  Token server url.
      --acd-upload-wait-per-gb Duration  Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
      --alias-remote string  Remote or path to alias.
      --ask-password  Allow prompt for password for encrypted configuration. (default true)
      --auto-confirm  If enabled, do not request console confirmation.
      --azureblob-access-tier string  Access tier of blob: hot, cool or archive.
      --azureblob-account string  Storage Account Name (leave blank to use connection string or SAS URL)
      --azureblob-chunk-size SizeSuffix  Upload chunk size (<= 100MB). (default 4M)
      --azureblob-endpoint string  Endpoint for the service
      --azureblob-key string  Storage Account Key (leave blank to use connection string or SAS URL)
      --azureblob-list-chunk int  Size of blob list. (default 5000)
      --azureblob-sas-url string  SAS URL for container level access only
      --azureblob-upload-cutoff SizeSuffix  Cutoff for switching to chunked upload (<= 256MB). (default 256M)
      --b2-account string  Account ID or Application Key ID
      --b2-chunk-size SizeSuffix  Upload chunk size. Must fit in memory. (default 96M)
      --b2-disable-checksum  Disable checksums for large (> upload cutoff) files
      --b2-endpoint string  Endpoint for the service.
      --b2-hard-delete  Permanently delete files on remote removal, otherwise hide files.
      --b2-key string  Application Key
      --b2-test-mode string  A flag string for X-Bz-Test-Mode header for debugging.
      --b2-upload-cutoff SizeSuffix  Cutoff for switching to chunked upload. (default 200M)
      --b2-versions  Include old versions in directory listings.
      --backup-dir string  Make backups into hierarchy based in DIR.
      --bind string  Local address to bind to for outgoing connections, IPv4, IPv6 or name.
      --box-client-id string  Box App Client Id.
      --box-client-secret string  Box App Client Secret
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M) --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
--buffer-size int In memory buffer size when reading files for each --transfer. (default 16M) --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable. --buffer-size SizeSuffix In memory buffer size when reading files for each --transfer. (default 16M)
--cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s) --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming. --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
--cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend") --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
--cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M) --cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend")
--cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G) --cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M)
--cache-db-path string Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend") --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G)
--cache-db-purge Clear all the cached data for this remote on start. --cache-db-path string Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend")
--cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s) --cache-db-purge Clear all the cached data for this remote on start.
--cache-dir string Directory rclone will use for caching. (default "$HOME/.cache/rclone") --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s) --cache-dir string Directory rclone will use for caching. (default "$HOME/.cache/rclone")
--cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
--cache-plex-password string The password of the Plex user --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server
--cache-plex-url string The URL of the Plex server --cache-plex-password string The password of the Plex user
--cache-plex-username string The username of the Plex user --cache-plex-url string The URL of the Plex server
--cache-read-retries int How many times to retry a read from a cache storage. (default 10) --cache-plex-username string The username of the Plex user
--cache-remote string Remote to cache. --cache-read-retries int How many times to retry a read from a cache storage. (default 10)
--cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1) --cache-remote string Remote to cache.
--cache-tmp-upload-path string Directory to keep temporary files until they are uploaded. --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1)
--cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s) --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded.
--cache-workers int How many workers should run in parallel to download chunks. (default 4) --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
--cache-writes Cache file data on writes through the FS --cache-workers int How many workers should run in parallel to download chunks. (default 4)
--checkers int Number of checkers to run in parallel. (default 8) --cache-writes Cache file data on writes through the FS
-c, --checksum Skip based on checksum & size, not mod-time & size --checkers int Number of checkers to run in parallel. (default 8)
--config string Config file. (default "/home/ncw/.rclone.conf") -c, --checksum Skip based on checksum (if available) & size, not mod-time & size
--contimeout duration Connect timeout (default 1m0s) --config string Config file. (default "/home/ncw/.rclone.conf")
-L, --copy-links Follow symlinks and copy the pointed to item. --contimeout duration Connect timeout (default 1m0s)
--cpuprofile string Write cpu profile to file -L, --copy-links Follow symlinks and copy the pointed to item.
--crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true) --cpuprofile string Write cpu profile to file
--crypt-filename-encryption string How to encrypt the filenames. (default "standard") --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--crypt-password string Password or pass phrase for encryption. --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--crypt-password2 string Password or pass phrase for salt. Optional but recommended. --crypt-password string Password or pass phrase for encryption.
--crypt-remote string Remote to encrypt/decrypt. --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
--crypt-show-mapping For all files listed show how the names encrypt. --crypt-remote string Remote to encrypt/decrypt.
--delete-after When synchronizing, delete files on destination after transferring (default) --crypt-show-mapping For all files listed show how the names encrypt.
--delete-before When synchronizing, delete files on destination before transferring --delete-after When synchronizing, delete files on destination after transferring (default)
--delete-during When synchronizing, delete files during transfer --delete-before When synchronizing, delete files on destination before transferring
--delete-excluded Delete files on dest excluded from sync --delete-during When synchronizing, delete files during transfer
--disable string Disable a comma separated list of features. Use help to see a list. --delete-excluded Delete files on dest excluded from sync
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded. --disable string Disable a comma separated list of features. Use help to see a list.
--drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time. --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--drive-alternate-export Use alternate export URLs for google documents export., --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
--drive-auth-owner-only Only consider files owned by the authenticated user. --drive-alternate-export Use alternate export URLs for google documents export.,
--drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M) --drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-client-id string Google Application Client Id --drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M)
--drive-client-secret string Google Application Client Secret --drive-client-id string Google Application Client Id
--drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") --drive-client-secret string Google Application Client Secret
--drive-formats string Deprecated: see export_formats --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-impersonate string Impersonate this user when using a service account. --drive-formats string Deprecated: see export_formats
--drive-import-formats string Comma separated list of preferred formats for uploading Google docs. --drive-impersonate string Impersonate this user when using a service account.
--drive-keep-revision-forever Keep new head revision of each file forever. --drive-import-formats string Comma separated list of preferred formats for uploading Google docs.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000) --drive-keep-revision-forever Keep new head revision of each file forever.
--drive-root-folder-id string ID of the root folder --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-scope string Scope that rclone should use when requesting access from drive. --drive-pacer-burst int Number of API calls to allow without sleeping. (default 100)
--drive-service-account-credentials string Service Account Credentials JSON blob --drive-pacer-min-sleep Duration Minimum time to sleep between API calls. (default 100ms)
--drive-service-account-file string Service Account Credentials JSON file path --drive-root-folder-id string ID of the root folder
--drive-shared-with-me Only show files that are shared with me. --drive-scope string Scope that rclone should use when requesting access from drive.
--drive-skip-gdocs Skip google documents in all listings. --drive-service-account-credentials string Service Account Credentials JSON blob
--drive-team-drive string ID of the Team Drive --drive-service-account-file string Service Account Credentials JSON file path
--drive-trashed-only Only show files that are in the trash. --drive-shared-with-me Only show files that are shared with me.
--drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M) --drive-skip-gdocs Skip google documents in all listings.
--drive-use-created-date Use file created date instead of modified date., --drive-team-drive string ID of the Team Drive
--drive-use-trash Send files to the trash instead of deleting permanently. (default true) --drive-trashed-only Only show files that are in the trash.
--drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off) --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M) --drive-use-created-date Use file created date instead of modified date.,
--dropbox-client-id string Dropbox App Client Id --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--dropbox-client-secret string Dropbox App Client Secret --drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off)
--dropbox-impersonate string Impersonate this user when using a business account. --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M)
-n, --dry-run Do a trial run with no permanent changes --dropbox-client-id string Dropbox App Client Id
--dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles --dropbox-client-secret string Dropbox App Client Secret
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info --dropbox-impersonate string Impersonate this user when using a business account.
--dump-headers Dump HTTP bodies - may contain sensitive info -n, --dry-run Do a trial run with no permanent changes
--exclude stringArray Exclude files matching pattern --dump DumpFlags List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--exclude-from stringArray Read exclude patterns from file --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--exclude-if-present string Exclude directories if filename is present --dump-headers Dump HTTP bodies - may contain sensitive info
--fast-list Use recursive list if available. Uses more memory but fewer transactions. --exclude stringArray Exclude files matching pattern
--files-from stringArray Read list of source-file names from file --exclude-from stringArray Read exclude patterns from file
-f, --filter stringArray Add a file-filtering rule --exclude-if-present string Exclude directories if filename is present
--filter-from stringArray Read filtering patterns from a file --fast-list Use recursive list if available. Uses more memory but fewer transactions.
--ftp-host string FTP host to connect to --files-from stringArray Read list of source-file names from file
--ftp-pass string FTP password -f, --filter stringArray Add a file-filtering rule
--ftp-port string FTP port, leave blank to use default (21) --filter-from stringArray Read filtering patterns from a file
--ftp-user string FTP username, leave blank for current username, $USER --ftp-host string FTP host to connect to
--gcs-bucket-acl string Access Control List for new buckets. --ftp-pass string FTP password
--gcs-client-id string Google Application Client Id --ftp-port string FTP port, leave blank to use default (21)
--gcs-client-secret string Google Application Client Secret --ftp-user string FTP username, leave blank for current username, $USER
--gcs-location string Location for the newly created buckets. --gcs-bucket-acl string Access Control List for new buckets.
--gcs-object-acl string Access Control List for new objects. --gcs-client-id string Google Application Client Id
--gcs-project-number string Project number. --gcs-client-secret string Google Application Client Secret
--gcs-service-account-file string Service Account Credentials JSON file path --gcs-location string Location for the newly created buckets.
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage. --gcs-object-acl string Access Control List for new objects.
--http-url string URL of http host to connect to --gcs-project-number string Project number.
--hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) --gcs-service-account-file string Service Account Credentials JSON file path
--hubic-client-id string Hubic Client Id --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--hubic-client-secret string Hubic Client Secret --http-url string URL of http host to connect to
--ignore-case Ignore case in filters (case insensitive) --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--ignore-checksum Skip post copy check of checksums. --hubic-client-id string Hubic Client Id
--ignore-errors delete even if there are I/O errors --hubic-client-secret string Hubic Client Secret
--ignore-existing Skip all files that exist on destination --hubic-no-chunk Don't chunk files during streaming upload.
--ignore-size Ignore size when skipping use mod-time or checksum. --ignore-case Ignore case in filters (case insensitive)
-I, --ignore-times Don't skip files that match size and time - transfer all files --ignore-checksum Skip post copy check of checksums.
--immutable Do not modify files. Fail if existing files have been modified. --ignore-errors delete even if there are I/O errors
--include stringArray Include files matching pattern --ignore-existing Skip all files that exist on destination
--include-from stringArray Read include patterns from file --ignore-size Ignore size when skipping use mod-time or checksum.
--jottacloud-hard-delete Delete files permanently rather than putting them into the trash. -I, --ignore-times Don't skip files that match size and time - transfer all files
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M) --immutable Do not modify files. Fail if existing files have been modified.
--jottacloud-mountpoint string The mountpoint to use. --include stringArray Include files matching pattern
--jottacloud-pass string Password. --include-from stringArray Read include patterns from file
--jottacloud-unlink Remove existing public link to file/folder with link command rather than creating. --jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
--jottacloud-user string User Name --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--local-no-check-updated Don't check to see if the files change during upload --jottacloud-mountpoint string The mountpoint to use.
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated) --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
--local-nounc string Disable UNC (long path names) conversion on Windows --jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fail's. (default 10M)
--log-file string Log everything to this file --jottacloud-user string User Name:
--log-format string Comma separated list of log format options (default "date,time") -l, --links Translate symlinks to/from regular files with a '.rclonelink' extension
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") --local-no-check-updated Don't check to see if the files change during upload
--low-level-retries int Number of low level retries to do. (default 10) --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) --local-nounc string Disable UNC (long path names) conversion on Windows
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000) --log-file string Log everything to this file
--max-delete int When synchronizing, limit the number of deletes (default -1) --log-format string Comma separated list of log format options (default "date,time")
--max-depth int If set limits the recursion depth to this. (default -1) --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) --low-level-retries int Number of low level retries to do. (default 10)
--max-transfer int Maximum size of data to transfer. (default off) --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--mega-debug Output more debug from Mega. --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--mega-hard-delete Delete files permanently rather than putting them into the trash. --max-delete int When synchronizing, limit the number of deletes (default -1)
--mega-pass string Password. --max-depth int If set limits the recursion depth to this. (default -1)
--mega-user string User name --max-size SizeSuffix Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--memprofile string Write memory profile to file --max-transfer SizeSuffix Maximum size of data to transfer. (default off)
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) --mega-debug Output more debug from Mega.
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) --mega-hard-delete Delete files permanently rather than putting them into the trash.
--modify-window duration Max time diff to be considered the same (default 1ns) --mega-pass string Password.
--no-check-certificate Do not verify the server SSL certificate. Insecure. --mega-user string User name
--no-gzip-encoding Don't set Accept-Encoding: gzip. --memprofile string Write memory profile to file
--no-traverse Obsolete - does nothing. --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--no-update-modtime Don't update destination mod-time if files identical. --min-size SizeSuffix Only transfer files bigger than this in k or suffix b|k|M|G (default off)
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only). --modify-window duration Max time diff to be considered the same (default 1ns)
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M) --no-check-certificate Do not verify the server SSL certificate. Insecure.
--onedrive-client-id string Microsoft App Client Id --no-gzip-encoding Don't set Accept-Encoding: gzip.
--onedrive-client-secret string Microsoft App Client Secret --no-traverse Don't traverse destination file system on copy.
--onedrive-drive-id string The ID of the drive to use --no-update-modtime Don't update destination mod-time if files identical.
--onedrive-drive-type string The type of the drive ( personal | business | documentLibrary ) -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-expose-onenote-files Set to make OneNote files show up in directory listings. --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--opendrive-password string Password. --onedrive-client-id string Microsoft App Client Id
--opendrive-username string Username --onedrive-client-secret string Microsoft App Client Secret
--pcloud-client-id string Pcloud App Client Id --onedrive-drive-id string The ID of the drive to use
--pcloud-client-secret string Pcloud App Client Secret --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
-P, --progress Show progress during transfer. --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
--qingstor-access-key-id string QingStor Access Key ID --opendrive-password string Password.
--qingstor-connection-retries int Number of connection retries. (default 3) --opendrive-username string Username
--qingstor-endpoint string Enter a endpoint URL to connection QingStor API. --pcloud-client-id string Pcloud App Client Id
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank. --pcloud-client-secret string Pcloud App Client Secret
--qingstor-secret-access-key string QingStor Secret Access Key (password) -P, --progress Show progress during transfer.
--qingstor-zone string Zone to connect to. --qingstor-access-key-id string QingStor Access Key ID
-q, --quiet Print as little stuff as possible --qingstor-chunk-size SizeSuffix Chunk size to use for uploading. (default 4M)
--rc Enable the remote control server. --qingstor-connection-retries int Number of connection retries. (default 3)
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") --qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate) --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
--rc-client-ca string Client certificate authority to verify clients with --qingstor-secret-access-key string QingStor Secret Access Key (password)
--rc-files string Path to local files to serve on the HTTP server. --qingstor-upload-concurrency int Concurrency for multipart uploads. (default 1)
--rc-htpasswd string htpasswd file - if not provided no authentication is done --qingstor-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M)
--rc-key string SSL PEM Private key --qingstor-zone string Zone to connect to.
--rc-max-header-bytes int Maximum size of request header (default 4096) -q, --quiet Print as little stuff as possible
--rc-no-auth Don't require auth for certain methods. --rc Enable the remote control server.
--rc-pass string Password for authentication. --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
--rc-realm string realm for authentication (default "rclone") --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--rc-serve Enable the serving of remote objects. --rc-client-ca string Client certificate authority to verify clients with
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) --rc-files string Path to local files to serve on the HTTP server.
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) --rc-htpasswd string htpasswd file - if not provided no authentication is done
--rc-user string User name for authentication. --rc-key string SSL PEM Private key
--retries int Retry operations this many times if they fail (default 3) --rc-max-header-bytes int Maximum size of request header (default 4096)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) --rc-no-auth Don't require auth for certain methods.
--s3-access-key-id string AWS Access Key ID. --rc-pass string Password for authentication.
--s3-acl string Canned ACL used when creating buckets and storing or copying objects. --rc-realm string realm for authentication (default "rclone")
--s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M) --rc-serve Enable the serving of remote objects.
--s3-disable-checksum Don't store MD5 checksum with object metadata --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--s3-endpoint string Endpoint for S3 API. --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). --rc-user string User name for authentication.
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true) --retries int Retry operations this many times if they fail (default 3)
--s3-location-constraint string Location constraint - must be set to match the Region. --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--s3-provider string Choose your S3 provider. --s3-access-key-id string AWS Access Key ID.
--s3-region string Region to connect to. --s3-acl string Canned ACL used when creating buckets and storing or copying objects.
--s3-secret-access-key string AWS Secret Access Key (password) --s3-bucket-acl string Canned ACL used when creating buckets.
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
--s3-session-token string An AWS session token --s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. --s3-endpoint string Endpoint for S3 API.
--s3-storage-class string The storage class to use when storing new objects in S3. --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2) --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--s3-v2-auth If true use v2 authentication. --s3-location-constraint string Location constraint - must be set to match the Region.
--sftp-ask-password Allow asking for SFTP password when needed. --s3-provider string Choose your S3 provider.
--s3-region string Region to connect to.
--s3-secret-access-key string AWS Secret Access Key (password)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--s3-session-token string An AWS session token
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--s3-storage-class string The storage class to use when storing new objects in S3.
--s3-upload-concurrency int Concurrency for multipart uploads. (default 4)
--s3-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M)
--s3-v2-auth If true use v2 authentication.
--sftp-ask-password Allow asking for SFTP password when needed.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--sftp-host string SSH host to connect to
--sftp-key-file string Path to PEM-encoded private key file, leave blank or set key-use-agent to use ssh-agent.
--sftp-key-file-pass string The passphrase to decrypt the PEM-encoded private key file.
--sftp-key-use-agent When set forces the usage of the ssh-agent.
--sftp-pass string SSH password, leave blank to use ssh-agent.
--sftp-path-override string Override path used by SSH connection.
--sftp-port string SSH port, leave blank to use default (22)
--sftp-set-modtime Set the modified time on the remote if set. (default true)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--sftp-user string SSH username, leave blank for current username, ncw
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
--stats duration Interval between printing stats, e.g. 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 45)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-one-line Make the stats fit on one line.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string Suffix for use with --backup-dir.
--swift-application-credential-id string Application Credential ID (OS_APPLICATION_CREDENTIAL_ID)
--swift-application-credential-name string Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME)
--swift-application-credential-secret string Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET)
--swift-auth string Authentication URL for server (OS_AUTH_URL).
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--swift-key string API key or password (OS_PASSWORD).
--swift-no-chunk Don't chunk files during streaming upload.
--swift-region string Region name - optional (OS_REGION_NAME)
--swift-storage-policy string The storage policy to use when creating a new container
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--swift-user string User name to log in (OS_USERNAME).
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
--tpslimit float Limit HTTP transactions per second to this.
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
--union-remotes string List of space separated remotes.
-u, --update Skip files that are newer on the destination.
--use-cookies Enable session cookiejar.
--use-mmap Use mmap allocator (see docs).
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
--yandex-unlink Remove existing public link to file/folder with link command rather than creating.
```
### SEE ALSO
* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
###### Auto generated by spf13/cobra on 9-Feb-2019
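Every flag in the listing above also has an `RCLONE_*` environment-variable equivalent (the per-backend docs note these, e.g. `RCLONE_B2_CHUNK_SIZE` for `--b2-chunk-size`). A minimal POSIX sh sketch of that name mapping; `flag_to_env` is a hypothetical helper for illustration, not part of rclone:

```shell
# Derive the RCLONE_* environment variable name for a command line flag:
# strip the leading "--", uppercase, and turn "-" into "_".
flag_to_env() {
  printf 'RCLONE_%s\n' "$(printf '%s' "${1#--}" | tr 'a-z-' 'A-Z_')"
}

flag_to_env --b2-chunk-size     # -> RCLONE_B2_CHUNK_SIZE
flag_to_env --drive-chunk-size  # -> RCLONE_DRIVE_CHUNK_SIZE
```

Setting the environment variable has the same effect as passing the flag, which is handy for wrapping rclone in scripts or systemd units without long command lines.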

View File

@ -1,5 +1,5 @@
---
date: 2019-02-09T10:42:18Z
title: "rclone cleanup"
slug: rclone_cleanup
url: /commands/rclone_cleanup/
@ -28,285 +28,303 @@ rclone cleanup remote:path [flags]
### Options inherited from parent commands
```
--acd-auth-url string Auth server URL.
--acd-client-id string Amazon Application Client ID.
--acd-client-secret string Amazon Application Client Secret.
--acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-token-url string Token server url.
--acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--alias-remote string Remote or path to alias.
--ask-password Allow prompt for password for encrypted configuration. (default true)
--auto-confirm If enabled, do not request console confirmation.
--azureblob-access-tier string Access tier of blob: hot, cool or archive.
--azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
--azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M)
--azureblob-endpoint string Endpoint for the service
--azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
--azureblob-list-chunk int Size of blob list. (default 5000)
--azureblob-sas-url string SAS URL for container level access only
--azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M)
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--b2-disable-checksum Disable checksums for large (> upload cutoff) files
--b2-endpoint string Endpoint for the service.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-key string Application Key
--b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
--b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M)
--b2-versions Include old versions in directory listings.
--backup-dir string Make backups into hierarchy based in DIR.
--bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
--box-client-id string Box App Client Id.
--box-client-secret string Box App Client Secret
--box-commit-retries int Max number of times to try committing a multipart file. (default 100)
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
--buffer-size SizeSuffix In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
--cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend")
--cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M)
--cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G)
--cache-db-path string Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend")
--cache-db-purge Clear all the cached data for this remote on start.
--cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string Directory rclone will use for caching. (default "$HOME/.cache/rclone")
--cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
--cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server
--cache-plex-password string The password of the Plex user
--cache-plex-url string The URL of the Plex server
--cache-plex-username string The username of the Plex user
--cache-read-retries int How many times to retry a read from a cache storage. (default 10)
--cache-remote string Remote to cache.
--cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1)
--cache-tmp-upload-path string Directory to keep temporary files until they are uploaded.
--cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
--cache-workers int How many workers should run in parallel to download chunks. (default 4)
--cache-writes Cache file data on writes through the FS
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum (if available) & size, not mod-time & size
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
-L, --copy-links Follow symlinks and copy the pointed to item.
--cpuprofile string Write cpu profile to file
--crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--crypt-password string Password or pass phrase for encryption.
--crypt-password2 string Password or pass phrase for salt. Optional but recommended.
--crypt-remote string Remote to encrypt/decrypt.
--crypt-show-mapping For all files listed show how the names encrypt.
--delete-after When synchronizing, delete files on destination after transferring (default)
--delete-before When synchronizing, delete files on destination before transferring
--delete-during When synchronizing, delete files during transfer
--delete-excluded Delete files on dest excluded from sync
--disable string Disable a comma separated list of features. Use help to see a list.
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
--drive-alternate-export Use alternate export URLs for google documents export.
--drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-client-id string Google Application Client Id
--drive-client-secret string Google Application Client Secret
--drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-formats string Deprecated: see export_formats
--drive-impersonate string Impersonate this user when using a service account.
--drive-import-formats string Comma separated list of preferred formats for uploading Google docs.
--drive-keep-revision-forever Keep new head revision of each file forever.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-pacer-burst int Number of API calls to allow without sleeping. (default 100)
--drive-pacer-min-sleep Duration Minimum time to sleep between API calls. (default 100ms)
--drive-root-folder-id string ID of the root folder
--drive-scope string Scope that rclone should use when requesting access from drive.
--drive-service-account-credentials string Service Account Credentials JSON blob
--drive-service-account-file string Service Account Credentials JSON file path
--drive-shared-with-me Only show files that are shared with me.
--drive-skip-gdocs Skip google documents in all listings.
--drive-team-drive string ID of the Team Drive
--drive-trashed-only Only show files that are in the trash.
--drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--drive-use-created-date Use file created date instead of modified date.
--drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--drive-v2-download-min-size SizeSuffix If Objects are greater, use drive v2 API to download. (default off)
--dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M)
--dropbox-client-id string Dropbox App Client Id
--dropbox-client-secret string Dropbox App Client Secret
--dropbox-impersonate string Impersonate this user when using a business account.
-n, --dry-run Do a trial run with no permanent changes
--dump DumpFlags List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--dump-headers Dump HTTP headers - may contain sensitive info
--exclude stringArray Exclude files matching pattern
--exclude-from stringArray Read exclude patterns from file
--exclude-if-present string Exclude directories if filename is present
--fast-list Use recursive list if available. Uses more memory but fewer transactions.
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
--ftp-host string FTP host to connect to
--ftp-pass string FTP password
--ftp-port string FTP port, leave blank to use default (21)
--ftp-user string FTP username, leave blank for current username, $USER
--gcs-bucket-acl string Access Control List for new buckets.
--gcs-client-id string Google Application Client Id
--gcs-client-secret string Google Application Client Secret
--gcs-location string Location for the newly created buckets.
--gcs-object-acl string Access Control List for new objects.
--gcs-project-number string Project number.
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-url string URL of http host to connect to
--hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--hubic-client-id string Hubic Client Id
--hubic-client-secret string Hubic Client Secret
--hubic-no-chunk Don't chunk files during streaming upload.
--ignore-case Ignore case in filters (case insensitive)
--ignore-checksum Skip post copy check of checksums.
--ignore-errors delete even if there are I/O errors
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
--immutable Do not modify files. Fail if existing files have been modified.
--include stringArray Include files matching pattern
--include-from stringArray Read include patterns from file
--jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--jottacloud-mountpoint string The mountpoint to use.
--jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
--jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fails. (default 10M)
--jottacloud-user string User Name
-l, --links Translate symlinks to/from regular files with a '.rclonelink' extension
--local-no-check-updated Don't check to see if the files change during upload
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
--local-nounc string Disable UNC (long path names) conversion on Windows
--log-file string Log everything to this file
--log-format string Comma separated list of log format options (default "date,time")
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
--max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--max-delete int When synchronizing, limit the number of deletes (default -1)
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size SizeSuffix Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--max-transfer SizeSuffix Maximum size of data to transfer. (default off)
--mega-debug Output more debug from Mega.
--mega-hard-delete Delete files permanently rather than putting them into the trash.
--mega-pass string Password.
--mega-user string User name
--memprofile string Write memory profile to file
--min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size SizeSuffix Only transfer files bigger than this in k or suffix b|k|M|G (default off)
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Don't traverse destination file system on copy.
--no-update-modtime Don't update destination mod-time if files identical.
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-expose-onenote-files Set to make OneNote files show up in directory listings. --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--opendrive-password string Password. --onedrive-client-id string Microsoft App Client Id
--opendrive-username string Username --onedrive-client-secret string Microsoft App Client Secret
--pcloud-client-id string Pcloud App Client Id --onedrive-drive-id string The ID of the drive to use
--pcloud-client-secret string Pcloud App Client Secret --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
-P, --progress Show progress during transfer. --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
--qingstor-access-key-id string QingStor Access Key ID --opendrive-password string Password.
--qingstor-connection-retries int Number of connection retries. (default 3) --opendrive-username string Username
--qingstor-endpoint string Enter a endpoint URL to connection QingStor API. --pcloud-client-id string Pcloud App Client Id
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank. --pcloud-client-secret string Pcloud App Client Secret
--qingstor-secret-access-key string QingStor Secret Access Key (password) -P, --progress Show progress during transfer.
--qingstor-zone string Zone to connect to. --qingstor-access-key-id string QingStor Access Key ID
-q, --quiet Print as little stuff as possible --qingstor-chunk-size SizeSuffix Chunk size to use for uploading. (default 4M)
--rc Enable the remote control server. --qingstor-connection-retries int Number of connection retries. (default 3)
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") --qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate) --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
--rc-client-ca string Client certificate authority to verify clients with --qingstor-secret-access-key string QingStor Secret Access Key (password)
--rc-files string Path to local files to serve on the HTTP server. --qingstor-upload-concurrency int Concurrency for multipart uploads. (default 1)
--rc-htpasswd string htpasswd file - if not provided no authentication is done --qingstor-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M)
--rc-key string SSL PEM Private key --qingstor-zone string Zone to connect to.
--rc-max-header-bytes int Maximum size of request header (default 4096) -q, --quiet Print as little stuff as possible
--rc-no-auth Don't require auth for certain methods. --rc Enable the remote control server.
--rc-pass string Password for authentication. --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
--rc-realm string realm for authentication (default "rclone") --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--rc-serve Enable the serving of remote objects. --rc-client-ca string Client certificate authority to verify clients with
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) --rc-files string Path to local files to serve on the HTTP server.
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) --rc-htpasswd string htpasswd file - if not provided no authentication is done
--rc-user string User name for authentication. --rc-key string SSL PEM Private key
--retries int Retry operations this many times if they fail (default 3) --rc-max-header-bytes int Maximum size of request header (default 4096)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) --rc-no-auth Don't require auth for certain methods.
--s3-access-key-id string AWS Access Key ID. --rc-pass string Password for authentication.
--s3-acl string Canned ACL used when creating buckets and storing or copying objects. --rc-realm string realm for authentication (default "rclone")
--s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M) --rc-serve Enable the serving of remote objects.
--s3-disable-checksum Don't store MD5 checksum with object metadata --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--s3-endpoint string Endpoint for S3 API. --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). --rc-user string User name for authentication.
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true) --retries int Retry operations this many times if they fail (default 3)
--s3-location-constraint string Location constraint - must be set to match the Region. --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--s3-provider string Choose your S3 provider. --s3-access-key-id string AWS Access Key ID.
--s3-region string Region to connect to. --s3-acl string Canned ACL used when creating buckets and storing or copying objects.
--s3-secret-access-key string AWS Secret Access Key (password) --s3-bucket-acl string Canned ACL used when creating buckets.
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
--s3-session-token string An AWS session token --s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. --s3-endpoint string Endpoint for S3 API.
--s3-storage-class string The storage class to use when storing new objects in S3. --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2) --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--s3-v2-auth If true use v2 authentication. --s3-location-constraint string Location constraint - must be set to match the Region.
--sftp-ask-password Allow asking for SFTP password when needed. --s3-provider string Choose your S3 provider.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. --s3-region string Region to connect to.
--sftp-host string SSH host to connect to --s3-secret-access-key string AWS Secret Access Key (password)
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent. --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--sftp-pass string SSH password, leave blank to use ssh-agent. --s3-session-token string An AWS session token
--sftp-path-override string Override path used by SSH connection. --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--sftp-port string SSH port, leave blank to use default (22) --s3-storage-class string The storage class to use when storing new objects in S3.
--sftp-set-modtime Set the modified time on the remote if set. (default true) --s3-upload-concurrency int Concurrency for multipart uploads. (default 4)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker. --s3-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M)
--sftp-user string SSH username, leave blank for current username, ncw --s3-v2-auth If true use v2 authentication.
--size-only Skip based on size only, not mod-time or checksum --sftp-ask-password Allow asking for SFTP password when needed.
--skip-links Don't warn about skipped symlinks. --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) --sftp-host string SSH host to connect to
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40) --sftp-key-file string Path to PEM-encoded private key file, leave blank or set key-use-agent to use ssh-agent.
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") --sftp-key-file-pass string The passphrase to decrypt the PEM-encoded private key file.
--stats-one-line Make the stats fit on one line. --sftp-key-use-agent When set forces the usage of the ssh-agent.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") --sftp-pass string SSH password, leave blank to use ssh-agent.
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) --sftp-path-override string Override path used by SSH connection.
--suffix string Suffix for use with --backup-dir. --sftp-port string SSH port, leave blank to use default (22)
--swift-auth string Authentication URL for server (OS_AUTH_URL). --sftp-set-modtime Set the modified time on the remote if set. (default true)
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) --sftp-user string SSH username, leave blank for current username, ncw
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) --size-only Skip based on size only, not mod-time or checksum
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) --skip-links Don't warn about skipped symlinks.
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form. --stats-file-name-length int Max file name length in stats. 0 for no limit (default 45)
--swift-key string API key or password (OS_PASSWORD). --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--swift-region string Region name - optional (OS_REGION_NAME) --stats-one-line Make the stats fit on one line.
--swift-storage-policy string The storage policy to use when creating a new container --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL) --streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) --suffix string Suffix for use with --backup-dir.
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) --swift-application-credential-id string Application Credential ID (OS_APPLICATION_CREDENTIAL_ID)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) --swift-application-credential-name string Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME)
--swift-user string User name to log in (OS_USERNAME). --swift-application-credential-secret string Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET)
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). --swift-auth string Authentication URL for server (OS_AUTH_URL).
--syslog Use Syslog for logging --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--timeout duration IO idle timeout (default 5m0s) --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--tpslimit float Limit HTTP transactions per second to this. --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--track-renames When synchronizing, track file renames and do a server side move if possible --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--transfers int Number of file transfers to run in parallel. (default 4) --swift-key string API key or password (OS_PASSWORD).
--union-remotes string List of space separated remotes. --swift-no-chunk Don't chunk files during streaming upload.
-u, --update Skip files that are newer on the destination. --swift-region string Region name - optional (OS_REGION_NAME)
--use-server-modtime Use server modified time instead of object metadata --swift-storage-policy string The storage policy to use when creating a new container
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.45") --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
-v, --verbose count Print lots more stuff (repeat for more) --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon) --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--webdav-pass string Password. --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--webdav-url string URL of http host to connect to --swift-user string User name to log in (OS_USERNAME).
--webdav-user string User name --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--webdav-vendor string Name of the Webdav site/service/software you are using --syslog Use Syslog for logging
--yandex-client-id string Yandex Client Id --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--yandex-client-secret string Yandex Client Secret --timeout duration IO idle timeout (default 5m0s)
--yandex-unlink Remove existing public link to file/folder with link command rather than creating. --tpslimit float Limit HTTP transactions per second to this.
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
--union-remotes string List of space separated remotes.
-u, --update Skip files that are newer on the destination.
--use-cookies Enable session cookiejar.
--use-mmap Use mmap allocator (see docs).
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
--yandex-unlink Remove existing public link to file/folder with link command rather than creating.
```
### SEE ALSO
* [rclone](/commands/rclone/)	 - Show help for rclone commands, flags and backends.
###### Auto generated by spf13/cobra on 9-Feb-2019
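Several of the flags above take a SizeSuffix value (suffix b|k|M|G), which rclone interprets with binary multiples (1k = 1024 bytes), and a bare number defaults to kBytes. As an illustration only — this helper is hypothetical, not rclone's Go implementation — that interpretation can be sketched as:

```python
# Hypothetical sketch of rclone-style SizeSuffix interpretation.
# Binary multiples: 1k = 1024 bytes; a bare number defaults to kBytes.
_MULTIPLIERS = {"b": 1, "k": 1024, "M": 1024 ** 2, "G": 1024 ** 3}


def parse_size_suffix(value: str) -> int:
    """Convert a size string such as '96M' or '100k' to a byte count."""
    if value[-1] in _MULTIPLIERS:
        return int(float(value[:-1]) * _MULTIPLIERS[value[-1]])
    # No suffix: interpret as kBytes, matching rclone's documented default.
    return int(float(value) * 1024)


print(parse_size_suffix("96M"))  # the --b2-chunk-size default, in bytes
```

So `--b2-chunk-size 96M` means 96 × 1024 × 1024 bytes buffered per chunk, and `--bwlimit 10` means 10 kByte/s.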


@@ -1,5 +1,5 @@
---
date: 2019-02-09T10:42:18Z
title: "rclone config"
slug: rclone_config
url: /commands/rclone_config/
@@ -28,281 +28,299 @@ rclone config [flags]
### Options inherited from parent commands
```
      --acd-auth-url string                          Auth server URL.
      --acd-client-id string                         Amazon Application Client ID.
      --acd-client-secret string                     Amazon Application Client Secret.
      --acd-templink-threshold SizeSuffix            Files >= this size will be downloaded via their tempLink. (default 9G)
      --acd-token-url string                         Token server url.
      --acd-upload-wait-per-gb Duration              Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
      --alias-remote string                          Remote or path to alias.
      --ask-password                                 Allow prompt for password for encrypted configuration. (default true)
      --auto-confirm                                 If enabled, do not request console confirmation.
      --azureblob-access-tier string                 Access tier of blob: hot, cool or archive.
      --azureblob-account string                     Storage Account Name (leave blank to use connection string or SAS URL)
      --azureblob-chunk-size SizeSuffix              Upload chunk size (<= 100MB). (default 4M)
      --azureblob-endpoint string                    Endpoint for the service
      --azureblob-key string                         Storage Account Key (leave blank to use connection string or SAS URL)
      --azureblob-list-chunk int                     Size of blob list. (default 5000)
      --azureblob-sas-url string                     SAS URL for container level access only
      --azureblob-upload-cutoff SizeSuffix           Cutoff for switching to chunked upload (<= 256MB). (default 256M)
      --b2-account string                            Account ID or Application Key ID
      --b2-chunk-size SizeSuffix                     Upload chunk size. Must fit in memory. (default 96M)
      --b2-disable-checksum                          Disable checksums for large (> upload cutoff) files
      --b2-endpoint string                           Endpoint for the service.
      --b2-hard-delete                               Permanently delete files on remote removal, otherwise hide files.
      --b2-key string                                Application Key
      --b2-test-mode string                          A flag string for X-Bz-Test-Mode header for debugging.
      --b2-upload-cutoff SizeSuffix                  Cutoff for switching to chunked upload. (default 200M)
      --b2-versions                                  Include old versions in directory listings.
      --backup-dir string                            Make backups into hierarchy based in DIR.
      --bind string                                  Local address to bind to for outgoing connections, IPv4, IPv6 or name.
      --box-client-id string                         Box App Client Id.
      --box-client-secret string                     Box App Client Secret
      --box-commit-retries int                       Max number of times to try committing a multipart file. (default 100)
      --box-upload-cutoff SizeSuffix                 Cutoff for switching to multipart upload (>= 50MB). (default 50M)
      --buffer-size SizeSuffix                       In memory buffer size when reading files for each --transfer. (default 16M)
      --bwlimit BwTimetable                          Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
      --cache-chunk-clean-interval Duration          How often should the cache perform cleanups of the chunk storage. (default 1m0s)
      --cache-chunk-no-memory                        Disable the in-memory cache for storing chunks during streaming.
      --cache-chunk-path string                      Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend")
      --cache-chunk-size SizeSuffix                  The size of a chunk (partial file data). (default 5M)
      --cache-chunk-total-size SizeSuffix            The total size that the chunks can take up on the local disk. (default 10G)
      --cache-db-path string                         Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend")
      --cache-db-purge                               Clear all the cached data for this remote on start.
      --cache-db-wait-time Duration                  How long to wait for the DB to be available - 0 is unlimited (default 1s)
      --cache-dir string                             Directory rclone will use for caching. (default "$HOME/.cache/rclone")
      --cache-info-age Duration                      How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
      --cache-plex-insecure string                   Skip all certificate verifications when connecting to the Plex server
      --cache-plex-password string                   The password of the Plex user
      --cache-plex-url string                        The URL of the Plex server
      --cache-plex-username string                   The username of the Plex user
      --cache-read-retries int                       How many times to retry a read from a cache storage. (default 10)
      --cache-remote string                          Remote to cache.
      --cache-rps int                                Limits the number of requests per second to the source FS (-1 to disable) (default -1)
      --cache-tmp-upload-path string                 Directory to keep temporary files until they are uploaded.
      --cache-tmp-wait-time Duration                 How long should files be stored in local cache before being uploaded (default 15s)
      --cache-workers int                            How many workers should run in parallel to download chunks. (default 4)
      --cache-writes                                 Cache file data on writes through the FS
      --checkers int                                 Number of checkers to run in parallel. (default 8)
  -c, --checksum                                     Skip based on checksum (if available) & size, not mod-time & size
      --config string                                Config file. (default "/home/ncw/.rclone.conf")
      --contimeout duration                          Connect timeout (default 1m0s)
  -L, --copy-links                                   Follow symlinks and copy the pointed to item.
      --cpuprofile string                            Write cpu profile to file
      --crypt-directory-name-encryption              Option to either encrypt directory names or leave them intact. (default true)
      --crypt-filename-encryption string             How to encrypt the filenames. (default "standard")
      --crypt-password string                        Password or pass phrase for encryption.
      --crypt-password2 string                       Password or pass phrase for salt. Optional but recommended.
      --crypt-remote string                          Remote to encrypt/decrypt.
      --crypt-show-mapping                           For all files listed show how the names encrypt.
      --delete-after                                 When synchronizing, delete files on destination after transferring (default)
      --delete-before                                When synchronizing, delete files on destination before transferring
      --delete-during                                When synchronizing, delete files during transfer
      --delete-excluded                              Delete files on dest excluded from sync
      --disable string                               Disable a comma separated list of features. Use help to see a list.
      --drive-acknowledge-abuse                      Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
      --drive-allow-import-name-change               Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
      --drive-alternate-export                       Use alternate export URLs for google documents export.
      --drive-auth-owner-only                        Only consider files owned by the authenticated user.
      --drive-chunk-size SizeSuffix                  Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
      --drive-client-id string                       Google Application Client Id
      --drive-client-secret string                   Google Application Client Secret
      --drive-export-formats string                  Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
      --drive-formats string                         Deprecated: see export_formats
      --drive-impersonate string                     Impersonate this user when using a service account.
      --drive-import-formats string                  Comma separated list of preferred formats for uploading Google docs.
      --drive-keep-revision-forever                  Keep new head revision of each file forever.
      --drive-list-chunk int                         Size of listing chunk 100-1000. 0 to disable. (default 1000)
      --drive-pacer-burst int                        Number of API calls to allow without sleeping. (default 100)
      --drive-pacer-min-sleep Duration               Minimum time to sleep between API calls. (default 100ms)
      --drive-root-folder-id string                  ID of the root folder
      --drive-scope string                           Scope that rclone should use when requesting access from drive.
      --drive-service-account-credentials string     Service Account Credentials JSON blob
      --drive-service-account-file string            Service Account Credentials JSON file path
      --drive-shared-with-me                         Only show files that are shared with me.
      --drive-skip-gdocs                             Skip google documents in all listings.
      --drive-team-drive string                      ID of the Team Drive
      --drive-trashed-only                           Only show files that are in the trash.
      --drive-upload-cutoff SizeSuffix               Cutoff for switching to chunked upload (default 8M)
      --drive-use-created-date                       Use file created date instead of modified date.
--dropbox-client-id string Dropbox App Client Id --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--dropbox-client-secret string Dropbox App Client Secret --drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off)
--dropbox-impersonate string Impersonate this user when using a business account. --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M)
-n, --dry-run Do a trial run with no permanent changes --dropbox-client-id string Dropbox App Client Id
--dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles --dropbox-client-secret string Dropbox App Client Secret
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info --dropbox-impersonate string Impersonate this user when using a business account.
--dump-headers Dump HTTP bodies - may contain sensitive info -n, --dry-run Do a trial run with no permanent changes
--exclude stringArray Exclude files matching pattern --dump DumpFlags List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--exclude-from stringArray Read exclude patterns from file --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--exclude-if-present string Exclude directories if filename is present --dump-headers Dump HTTP bodies - may contain sensitive info
--fast-list Use recursive list if available. Uses more memory but fewer transactions. --exclude stringArray Exclude files matching pattern
--files-from stringArray Read list of source-file names from file --exclude-from stringArray Read exclude patterns from file
-f, --filter stringArray Add a file-filtering rule --exclude-if-present string Exclude directories if filename is present
--filter-from stringArray Read filtering patterns from a file --fast-list Use recursive list if available. Uses more memory but fewer transactions.
--ftp-host string FTP host to connect to --files-from stringArray Read list of source-file names from file
--ftp-pass string FTP password -f, --filter stringArray Add a file-filtering rule
--ftp-port string FTP port, leave blank to use default (21) --filter-from stringArray Read filtering patterns from a file
--ftp-user string FTP username, leave blank for current username, $USER --ftp-host string FTP host to connect to
--gcs-bucket-acl string Access Control List for new buckets. --ftp-pass string FTP password
--gcs-client-id string Google Application Client Id --ftp-port string FTP port, leave blank to use default (21)
--gcs-client-secret string Google Application Client Secret --ftp-user string FTP username, leave blank for current username, $USER
--gcs-location string Location for the newly created buckets. --gcs-bucket-acl string Access Control List for new buckets.
--gcs-object-acl string Access Control List for new objects. --gcs-client-id string Google Application Client Id
--gcs-project-number string Project number. --gcs-client-secret string Google Application Client Secret
--gcs-service-account-file string Service Account Credentials JSON file path --gcs-location string Location for the newly created buckets.
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage. --gcs-object-acl string Access Control List for new objects.
--http-url string URL of http host to connect to --gcs-project-number string Project number.
--hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) --gcs-service-account-file string Service Account Credentials JSON file path
--hubic-client-id string Hubic Client Id --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--hubic-client-secret string Hubic Client Secret --http-url string URL of http host to connect to
--ignore-case Ignore case in filters (case insensitive) --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--ignore-checksum Skip post copy check of checksums. --hubic-client-id string Hubic Client Id
--ignore-errors delete even if there are I/O errors --hubic-client-secret string Hubic Client Secret
--ignore-existing Skip all files that exist on destination --hubic-no-chunk Don't chunk files during streaming upload.
--ignore-size Ignore size when skipping use mod-time or checksum. --ignore-case Ignore case in filters (case insensitive)
-I, --ignore-times Don't skip files that match size and time - transfer all files --ignore-checksum Skip post copy check of checksums.
--immutable Do not modify files. Fail if existing files have been modified. --ignore-errors delete even if there are I/O errors
--include stringArray Include files matching pattern --ignore-existing Skip all files that exist on destination
--include-from stringArray Read include patterns from file --ignore-size Ignore size when skipping use mod-time or checksum.
--jottacloud-hard-delete Delete files permanently rather than putting them into the trash. -I, --ignore-times Don't skip files that match size and time - transfer all files
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M) --immutable Do not modify files. Fail if existing files have been modified.
--jottacloud-mountpoint string The mountpoint to use. --include stringArray Include files matching pattern
--jottacloud-pass string Password. --include-from stringArray Read include patterns from file
--jottacloud-unlink Remove existing public link to file/folder with link command rather than creating. --jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
--jottacloud-user string User Name --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--local-no-check-updated Don't check to see if the files change during upload --jottacloud-mountpoint string The mountpoint to use.
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated) --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
--local-nounc string Disable UNC (long path names) conversion on Windows --jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fail's. (default 10M)
--log-file string Log everything to this file --jottacloud-user string User Name:
--log-format string Comma separated list of log format options (default "date,time") -l, --links Translate symlinks to/from regular files with a '.rclonelink' extension
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") --local-no-check-updated Don't check to see if the files change during upload
--low-level-retries int Number of low level retries to do. (default 10) --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) --local-nounc string Disable UNC (long path names) conversion on Windows
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000) --log-file string Log everything to this file
--max-delete int When synchronizing, limit the number of deletes (default -1) --log-format string Comma separated list of log format options (default "date,time")
--max-depth int If set limits the recursion depth to this. (default -1) --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) --low-level-retries int Number of low level retries to do. (default 10)
--max-transfer int Maximum size of data to transfer. (default off) --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--mega-debug Output more debug from Mega. --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--mega-hard-delete Delete files permanently rather than putting them into the trash. --max-delete int When synchronizing, limit the number of deletes (default -1)
--mega-pass string Password. --max-depth int If set limits the recursion depth to this. (default -1)
--mega-user string User name --max-size SizeSuffix Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--memprofile string Write memory profile to file --max-transfer SizeSuffix Maximum size of data to transfer. (default off)
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) --mega-debug Output more debug from Mega.
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) --mega-hard-delete Delete files permanently rather than putting them into the trash.
--modify-window duration Max time diff to be considered the same (default 1ns) --mega-pass string Password.
--no-check-certificate Do not verify the server SSL certificate. Insecure. --mega-user string User name
--no-gzip-encoding Don't set Accept-Encoding: gzip. --memprofile string Write memory profile to file
--no-traverse Obsolete - does nothing. --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--no-update-modtime Don't update destination mod-time if files identical. --min-size SizeSuffix Only transfer files bigger than this in k or suffix b|k|M|G (default off)
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only). --modify-window duration Max time diff to be considered the same (default 1ns)
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M) --no-check-certificate Do not verify the server SSL certificate. Insecure.
--onedrive-client-id string Microsoft App Client Id --no-gzip-encoding Don't set Accept-Encoding: gzip.
--onedrive-client-secret string Microsoft App Client Secret --no-traverse Don't traverse destination file system on copy.
--onedrive-drive-id string The ID of the drive to use --no-update-modtime Don't update destination mod-time if files identical.
--onedrive-drive-type string The type of the drive ( personal | business | documentLibrary ) -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-expose-onenote-files Set to make OneNote files show up in directory listings. --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--opendrive-password string Password. --onedrive-client-id string Microsoft App Client Id
--opendrive-username string Username --onedrive-client-secret string Microsoft App Client Secret
--pcloud-client-id string Pcloud App Client Id --onedrive-drive-id string The ID of the drive to use
--pcloud-client-secret string Pcloud App Client Secret --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
-P, --progress Show progress during transfer. --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
--qingstor-access-key-id string QingStor Access Key ID --opendrive-password string Password.
--qingstor-connection-retries int Number of connection retries. (default 3) --opendrive-username string Username
--qingstor-endpoint string Enter a endpoint URL to connection QingStor API. --pcloud-client-id string Pcloud App Client Id
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank. --pcloud-client-secret string Pcloud App Client Secret
--qingstor-secret-access-key string QingStor Secret Access Key (password) -P, --progress Show progress during transfer.
--qingstor-zone string Zone to connect to. --qingstor-access-key-id string QingStor Access Key ID
-q, --quiet Print as little stuff as possible --qingstor-chunk-size SizeSuffix Chunk size to use for uploading. (default 4M)
--rc Enable the remote control server. --qingstor-connection-retries int Number of connection retries. (default 3)
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") --qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate) --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
--rc-client-ca string Client certificate authority to verify clients with --qingstor-secret-access-key string QingStor Secret Access Key (password)
--rc-files string Path to local files to serve on the HTTP server. --qingstor-upload-concurrency int Concurrency for multipart uploads. (default 1)
--rc-htpasswd string htpasswd file - if not provided no authentication is done --qingstor-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M)
--rc-key string SSL PEM Private key --qingstor-zone string Zone to connect to.
--rc-max-header-bytes int Maximum size of request header (default 4096) -q, --quiet Print as little stuff as possible
--rc-no-auth Don't require auth for certain methods. --rc Enable the remote control server.
--rc-pass string Password for authentication. --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
--rc-realm string realm for authentication (default "rclone") --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--rc-serve Enable the serving of remote objects. --rc-client-ca string Client certificate authority to verify clients with
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) --rc-files string Path to local files to serve on the HTTP server.
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) --rc-htpasswd string htpasswd file - if not provided no authentication is done
--rc-user string User name for authentication. --rc-key string SSL PEM Private key
--retries int Retry operations this many times if they fail (default 3) --rc-max-header-bytes int Maximum size of request header (default 4096)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) --rc-no-auth Don't require auth for certain methods.
--s3-access-key-id string AWS Access Key ID. --rc-pass string Password for authentication.
--s3-acl string Canned ACL used when creating buckets and storing or copying objects. --rc-realm string realm for authentication (default "rclone")
--s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M) --rc-serve Enable the serving of remote objects.
--s3-disable-checksum Don't store MD5 checksum with object metadata --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--s3-endpoint string Endpoint for S3 API. --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). --rc-user string User name for authentication.
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true) --retries int Retry operations this many times if they fail (default 3)
--s3-location-constraint string Location constraint - must be set to match the Region. --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--s3-provider string Choose your S3 provider. --s3-access-key-id string AWS Access Key ID.
--s3-region string Region to connect to. --s3-acl string Canned ACL used when creating buckets and storing or copying objects.
--s3-secret-access-key string AWS Secret Access Key (password) --s3-bucket-acl string Canned ACL used when creating buckets.
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
--s3-session-token string An AWS session token --s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. --s3-endpoint string Endpoint for S3 API.
--s3-storage-class string The storage class to use when storing new objects in S3. --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2) --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--s3-v2-auth If true use v2 authentication. --s3-location-constraint string Location constraint - must be set to match the Region.
--sftp-ask-password Allow asking for SFTP password when needed. --s3-provider string Choose your S3 provider.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. --s3-region string Region to connect to.
--sftp-host string SSH host to connect to --s3-secret-access-key string AWS Secret Access Key (password)
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent. --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--sftp-pass string SSH password, leave blank to use ssh-agent. --s3-session-token string An AWS session token
--sftp-path-override string Override path used by SSH connection. --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--sftp-port string SSH port, leave blank to use default (22) --s3-storage-class string The storage class to use when storing new objects in S3.
--sftp-set-modtime Set the modified time on the remote if set. (default true) --s3-upload-concurrency int Concurrency for multipart uploads. (default 4)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker. --s3-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M)
--sftp-user string SSH username, leave blank for current username, ncw --s3-v2-auth If true use v2 authentication.
--size-only Skip based on size only, not mod-time or checksum --sftp-ask-password Allow asking for SFTP password when needed.
--skip-links Don't warn about skipped symlinks. --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) --sftp-host string SSH host to connect to
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40) --sftp-key-file string Path to PEM-encoded private key file, leave blank or set key-use-agent to use ssh-agent.
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") --sftp-key-file-pass string The passphrase to decrypt the PEM-encoded private key file.
--stats-one-line Make the stats fit on one line. --sftp-key-use-agent When set forces the usage of the ssh-agent.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") --sftp-pass string SSH password, leave blank to use ssh-agent.
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) --sftp-path-override string Override path used by SSH connection.
--suffix string Suffix for use with --backup-dir. --sftp-port string SSH port, leave blank to use default (22)
--swift-auth string Authentication URL for server (OS_AUTH_URL). --sftp-set-modtime Set the modified time on the remote if set. (default true)
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) --sftp-user string SSH username, leave blank for current username, ncw
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) --size-only Skip based on size only, not mod-time or checksum
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) --skip-links Don't warn about skipped symlinks.
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form. --stats-file-name-length int Max file name length in stats. 0 for no limit (default 45)
--swift-key string API key or password (OS_PASSWORD). --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--swift-region string Region name - optional (OS_REGION_NAME) --stats-one-line Make the stats fit on one line.
--swift-storage-policy string The storage policy to use when creating a new container --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL) --streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) --suffix string Suffix for use with --backup-dir.
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) --swift-application-credential-id string Application Credential ID (OS_APPLICATION_CREDENTIAL_ID)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) --swift-application-credential-name string Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME)
--swift-user string User name to log in (OS_USERNAME). --swift-application-credential-secret string Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET)
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). --swift-auth string Authentication URL for server (OS_AUTH_URL).
--syslog Use Syslog for logging --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--timeout duration IO idle timeout (default 5m0s) --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--tpslimit float Limit HTTP transactions per second to this. --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--track-renames When synchronizing, track file renames and do a server side move if possible --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--transfers int Number of file transfers to run in parallel. (default 4) --swift-key string API key or password (OS_PASSWORD).
--union-remotes string List of space separated remotes. --swift-no-chunk Don't chunk files during streaming upload.
-u, --update Skip files that are newer on the destination. --swift-region string Region name - optional (OS_REGION_NAME)
--use-server-modtime Use server modified time instead of object metadata --swift-storage-policy string The storage policy to use when creating a new container
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.45") --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
-v, --verbose count Print lots more stuff (repeat for more) --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon) --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--webdav-pass string Password. --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--webdav-url string URL of http host to connect to --swift-user string User name to log in (OS_USERNAME).
--webdav-user string User name --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--webdav-vendor string Name of the Webdav site/service/software you are using --syslog Use Syslog for logging
--yandex-client-id string Yandex Client Id --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--yandex-client-secret string Yandex Client Secret --timeout duration IO idle timeout (default 5m0s)
--yandex-unlink Remove existing public link to file/folder with link command rather than creating. --tpslimit float Limit HTTP transactions per second to this.
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
--union-remotes string List of space separated remotes.
-u, --update Skip files that are newer on the destination.
--use-cookies Enable session cookiejar.
--use-mmap Use mmap allocator (see docs).
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
--yandex-unlink Remove existing public link to file/folder with link command rather than creating.
```
### SEE ALSO
@@ -318,4 +336,4 @@ rclone config [flags]
* [rclone config show](/commands/rclone_config_show/) - Print (decrypted) config file, or the config for a single remote.
* [rclone config update](/commands/rclone_config_update/) - Update options in an existing remote.
###### Auto generated by spf13/cobra on 9-Feb-2019
@@ -1,5 +1,5 @@
---
date: 2019-02-09T10:42:18Z
title: "rclone config create"
slug: rclone_config_create
url: /commands/rclone_config_create/
@@ -19,6 +19,15 @@ you would do:
rclone config create myremote swift env_auth true
Note that if the config process would normally ask a question the
default is taken. Each time that happens rclone will print a message
saying how to affect the value taken.
So for example if you wanted to configure a Google Drive remote but
using remote authorization you would do this:
rclone config create mydrive drive config_is_local false
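For reference, a successful `rclone config create` writes a new section into rclone's config file. A minimal sketch of what the swift example above would produce (not authoritative - the exact keys stored depend on the backend and on which key/value pairs you pass):

```ini
; Section written to rclone.conf by:
;   rclone config create myremote swift env_auth true
[myremote]
type = swift
env_auth = true
```

Further key/value pairs given on the command line are stored as additional `key = value` lines in the same section.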
```
rclone config create <name> <type> [<key> <value>]* [flags]
@@ -33,285 +42,303 @@
### Options inherited from parent commands
```
--acd-auth-url string Auth server URL.
--acd-client-id string Amazon Application Client ID.
--acd-client-secret string Amazon Application Client Secret.
--acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-token-url string Token server url.
--acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--alias-remote string Remote or path to alias.
--ask-password Allow prompt for password for encrypted configuration. (default true)
--auto-confirm If enabled, do not request console confirmation.
--azureblob-access-tier string Access tier of blob: hot, cool or archive.
--azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
--azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M)
--azureblob-endpoint string Endpoint for the service
--azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
--azureblob-list-chunk int Size of blob list. (default 5000)
--azureblob-sas-url string SAS URL for container level access only
--azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M)
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--b2-disable-checksum Disable checksums for large (> upload cutoff) files
--b2-endpoint string Endpoint for the service.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-key string Application Key
--b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
--b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M)
--b2-versions Include old versions in directory listings.
--bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name. --backup-dir string Make backups into hierarchy based in DIR.
--box-client-id string Box App Client Id. --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
--box-client-secret string Box App Client Secret --box-client-id string Box App Client Id.
--box-commit-retries int Max number of times to try committing a multipart file. (default 100) --box-client-secret string Box App Client Secret
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M) --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
--buffer-size int In memory buffer size when reading files for each --transfer. (default 16M) --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable. --buffer-size SizeSuffix In memory buffer size when reading files for each --transfer. (default 16M)
--cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s) --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming. --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
--cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend") --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
--cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M) --cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend")
--cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G) --cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M)
--cache-db-path string Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend") --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G)
--cache-db-purge Clear all the cached data for this remote on start. --cache-db-path string Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend")
--cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s) --cache-db-purge Clear all the cached data for this remote on start.
--cache-dir string Directory rclone will use for caching. (default "$HOME/.cache/rclone") --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s) --cache-dir string Directory rclone will use for caching. (default "$HOME/.cache/rclone")
--cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
--cache-plex-password string The password of the Plex user --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server
--cache-plex-url string The URL of the Plex server --cache-plex-password string The password of the Plex user
--cache-plex-username string The username of the Plex user --cache-plex-url string The URL of the Plex server
--cache-read-retries int How many times to retry a read from a cache storage. (default 10) --cache-plex-username string The username of the Plex user
--cache-remote string Remote to cache. --cache-read-retries int How many times to retry a read from a cache storage. (default 10)
--cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1) --cache-remote string Remote to cache.
--cache-tmp-upload-path string Directory to keep temporary files until they are uploaded. --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1)
--cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s) --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded.
--cache-workers int How many workers should run in parallel to download chunks. (default 4) --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
--cache-writes Cache file data on writes through the FS --cache-workers int How many workers should run in parallel to download chunks. (default 4)
--checkers int Number of checkers to run in parallel. (default 8) --cache-writes Cache file data on writes through the FS
-c, --checksum Skip based on checksum & size, not mod-time & size --checkers int Number of checkers to run in parallel. (default 8)
--config string Config file. (default "/home/ncw/.rclone.conf") -c, --checksum Skip based on checksum (if available) & size, not mod-time & size
--contimeout duration Connect timeout (default 1m0s) --config string Config file. (default "/home/ncw/.rclone.conf")
-L, --copy-links Follow symlinks and copy the pointed to item. --contimeout duration Connect timeout (default 1m0s)
--cpuprofile string Write cpu profile to file -L, --copy-links Follow symlinks and copy the pointed to item.
--crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true) --cpuprofile string Write cpu profile to file
--crypt-filename-encryption string How to encrypt the filenames. (default "standard") --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--crypt-password string Password or pass phrase for encryption. --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--crypt-password2 string Password or pass phrase for salt. Optional but recommended. --crypt-password string Password or pass phrase for encryption.
--crypt-remote string Remote to encrypt/decrypt. --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
--crypt-show-mapping For all files listed show how the names encrypt. --crypt-remote string Remote to encrypt/decrypt.
--delete-after When synchronizing, delete files on destination after transferring (default) --crypt-show-mapping For all files listed show how the names encrypt.
--delete-before When synchronizing, delete files on destination before transferring --delete-after When synchronizing, delete files on destination after transferring (default)
--delete-during When synchronizing, delete files during transfer --delete-before When synchronizing, delete files on destination before transferring
--delete-excluded Delete files on dest excluded from sync --delete-during When synchronizing, delete files during transfer
--disable string Disable a comma separated list of features. Use help to see a list. --delete-excluded Delete files on dest excluded from sync
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded. --disable string Disable a comma separated list of features. Use help to see a list.
--drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time. --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--drive-alternate-export Use alternate export URLs for google documents export., --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
--drive-auth-owner-only Only consider files owned by the authenticated user. --drive-alternate-export Use alternate export URLs for google documents export.,
--drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M) --drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-client-id string Google Application Client Id --drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M)
--drive-client-secret string Google Application Client Secret --drive-client-id string Google Application Client Id
--drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") --drive-client-secret string Google Application Client Secret
--drive-formats string Deprecated: see export_formats --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-impersonate string Impersonate this user when using a service account. --drive-formats string Deprecated: see export_formats
--drive-import-formats string Comma separated list of preferred formats for uploading Google docs. --drive-impersonate string Impersonate this user when using a service account.
--drive-keep-revision-forever Keep new head revision of each file forever. --drive-import-formats string Comma separated list of preferred formats for uploading Google docs.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000) --drive-keep-revision-forever Keep new head revision of each file forever.
--drive-root-folder-id string ID of the root folder --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-scope string Scope that rclone should use when requesting access from drive. --drive-pacer-burst int Number of API calls to allow without sleeping. (default 100)
--drive-service-account-credentials string Service Account Credentials JSON blob --drive-pacer-min-sleep Duration Minimum time to sleep between API calls. (default 100ms)
--drive-service-account-file string Service Account Credentials JSON file path --drive-root-folder-id string ID of the root folder
--drive-shared-with-me Only show files that are shared with me. --drive-scope string Scope that rclone should use when requesting access from drive.
--drive-skip-gdocs Skip google documents in all listings. --drive-service-account-credentials string Service Account Credentials JSON blob
--drive-team-drive string ID of the Team Drive --drive-service-account-file string Service Account Credentials JSON file path
--drive-trashed-only Only show files that are in the trash. --drive-shared-with-me Only show files that are shared with me.
--drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M) --drive-skip-gdocs Skip google documents in all listings.
--drive-use-created-date Use file created date instead of modified date., --drive-team-drive string ID of the Team Drive
--drive-use-trash Send files to the trash instead of deleting permanently. (default true) --drive-trashed-only Only show files that are in the trash.
--drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off) --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M) --drive-use-created-date Use file created date instead of modified date.,
--dropbox-client-id string Dropbox App Client Id --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--dropbox-client-secret string Dropbox App Client Secret --drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off)
--dropbox-impersonate string Impersonate this user when using a business account. --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M)
-n, --dry-run Do a trial run with no permanent changes --dropbox-client-id string Dropbox App Client Id
--dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles --dropbox-client-secret string Dropbox App Client Secret
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info --dropbox-impersonate string Impersonate this user when using a business account.
--dump-headers Dump HTTP bodies - may contain sensitive info -n, --dry-run Do a trial run with no permanent changes
--exclude stringArray Exclude files matching pattern --dump DumpFlags List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--exclude-from stringArray Read exclude patterns from file --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--exclude-if-present string Exclude directories if filename is present --dump-headers Dump HTTP bodies - may contain sensitive info
--fast-list Use recursive list if available. Uses more memory but fewer transactions. --exclude stringArray Exclude files matching pattern
--files-from stringArray Read list of source-file names from file --exclude-from stringArray Read exclude patterns from file
-f, --filter stringArray Add a file-filtering rule --exclude-if-present string Exclude directories if filename is present
--filter-from stringArray Read filtering patterns from a file --fast-list Use recursive list if available. Uses more memory but fewer transactions.
--ftp-host string FTP host to connect to --files-from stringArray Read list of source-file names from file
--ftp-pass string FTP password -f, --filter stringArray Add a file-filtering rule
--ftp-port string FTP port, leave blank to use default (21) --filter-from stringArray Read filtering patterns from a file
--ftp-user string FTP username, leave blank for current username, $USER --ftp-host string FTP host to connect to
--gcs-bucket-acl string Access Control List for new buckets. --ftp-pass string FTP password
--gcs-client-id string Google Application Client Id --ftp-port string FTP port, leave blank to use default (21)
--gcs-client-secret string Google Application Client Secret --ftp-user string FTP username, leave blank for current username, $USER
--gcs-location string Location for the newly created buckets. --gcs-bucket-acl string Access Control List for new buckets.
--gcs-object-acl string Access Control List for new objects. --gcs-client-id string Google Application Client Id
--gcs-project-number string Project number. --gcs-client-secret string Google Application Client Secret
--gcs-service-account-file string Service Account Credentials JSON file path --gcs-location string Location for the newly created buckets.
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage. --gcs-object-acl string Access Control List for new objects.
--http-url string URL of http host to connect to --gcs-project-number string Project number.
--hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) --gcs-service-account-file string Service Account Credentials JSON file path
--hubic-client-id string Hubic Client Id --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--hubic-client-secret string Hubic Client Secret --http-url string URL of http host to connect to
--ignore-case Ignore case in filters (case insensitive) --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--ignore-checksum Skip post copy check of checksums. --hubic-client-id string Hubic Client Id
--ignore-errors delete even if there are I/O errors --hubic-client-secret string Hubic Client Secret
--ignore-existing Skip all files that exist on destination --hubic-no-chunk Don't chunk files during streaming upload.
--ignore-size Ignore size when skipping use mod-time or checksum. --ignore-case Ignore case in filters (case insensitive)
-I, --ignore-times Don't skip files that match size and time - transfer all files --ignore-checksum Skip post copy check of checksums.
--immutable Do not modify files. Fail if existing files have been modified. --ignore-errors delete even if there are I/O errors
--include stringArray Include files matching pattern --ignore-existing Skip all files that exist on destination
--include-from stringArray Read include patterns from file --ignore-size Ignore size when skipping use mod-time or checksum.
--jottacloud-hard-delete Delete files permanently rather than putting them into the trash. -I, --ignore-times Don't skip files that match size and time - transfer all files
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M) --immutable Do not modify files. Fail if existing files have been modified.
--jottacloud-mountpoint string The mountpoint to use. --include stringArray Include files matching pattern
--jottacloud-pass string Password. --include-from stringArray Read include patterns from file
--jottacloud-unlink Remove existing public link to file/folder with link command rather than creating. --jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
--jottacloud-user string User Name --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--local-no-check-updated Don't check to see if the files change during upload --jottacloud-mountpoint string The mountpoint to use.
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated) --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
--local-nounc string Disable UNC (long path names) conversion on Windows --jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fail's. (default 10M)
--log-file string Log everything to this file --jottacloud-user string User Name:
--log-format string Comma separated list of log format options (default "date,time") -l, --links Translate symlinks to/from regular files with a '.rclonelink' extension
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") --local-no-check-updated Don't check to see if the files change during upload
--low-level-retries int Number of low level retries to do. (default 10) --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) --local-nounc string Disable UNC (long path names) conversion on Windows
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000) --log-file string Log everything to this file
--max-delete int When synchronizing, limit the number of deletes (default -1) --log-format string Comma separated list of log format options (default "date,time")
--max-depth int If set limits the recursion depth to this. (default -1) --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) --low-level-retries int Number of low level retries to do. (default 10)
--max-transfer int Maximum size of data to transfer. (default off) --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--mega-debug Output more debug from Mega. --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--mega-hard-delete Delete files permanently rather than putting them into the trash. --max-delete int When synchronizing, limit the number of deletes (default -1)
--mega-pass string Password. --max-depth int If set limits the recursion depth to this. (default -1)
--mega-user string User name --max-size SizeSuffix Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--memprofile string Write memory profile to file --max-transfer SizeSuffix Maximum size of data to transfer. (default off)
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) --mega-debug Output more debug from Mega.
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) --mega-hard-delete Delete files permanently rather than putting them into the trash.
--modify-window duration Max time diff to be considered the same (default 1ns) --mega-pass string Password.
--no-check-certificate Do not verify the server SSL certificate. Insecure. --mega-user string User name
--no-gzip-encoding Don't set Accept-Encoding: gzip. --memprofile string Write memory profile to file
--no-traverse Obsolete - does nothing. --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--no-update-modtime Don't update destination mod-time if files identical. --min-size SizeSuffix Only transfer files bigger than this in k or suffix b|k|M|G (default off)
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only). --modify-window duration Max time diff to be considered the same (default 1ns)
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M) --no-check-certificate Do not verify the server SSL certificate. Insecure.
--onedrive-client-id string Microsoft App Client Id --no-gzip-encoding Don't set Accept-Encoding: gzip.
--onedrive-client-secret string Microsoft App Client Secret --no-traverse Don't traverse destination file system on copy.
--onedrive-drive-id string The ID of the drive to use --no-update-modtime Don't update destination mod-time if files identical.
--onedrive-drive-type string The type of the drive ( personal | business | documentLibrary ) -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-expose-onenote-files Set to make OneNote files show up in directory listings. --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--opendrive-password string Password. --onedrive-client-id string Microsoft App Client Id
--opendrive-username string Username --onedrive-client-secret string Microsoft App Client Secret
--pcloud-client-id string Pcloud App Client Id --onedrive-drive-id string The ID of the drive to use
--pcloud-client-secret string Pcloud App Client Secret --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
-P, --progress Show progress during transfer. --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
--qingstor-access-key-id string QingStor Access Key ID --opendrive-password string Password.
--qingstor-connection-retries int Number of connection retries. (default 3) --opendrive-username string Username
--qingstor-endpoint string Enter a endpoint URL to connection QingStor API. --pcloud-client-id string Pcloud App Client Id
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank. --pcloud-client-secret string Pcloud App Client Secret
--qingstor-secret-access-key string QingStor Secret Access Key (password) -P, --progress Show progress during transfer.
--qingstor-zone string Zone to connect to. --qingstor-access-key-id string QingStor Access Key ID
-q, --quiet Print as little stuff as possible --qingstor-chunk-size SizeSuffix Chunk size to use for uploading. (default 4M)
--rc Enable the remote control server. --qingstor-connection-retries int Number of connection retries. (default 3)
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") --qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate) --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
--rc-client-ca string Client certificate authority to verify clients with --qingstor-secret-access-key string QingStor Secret Access Key (password)
--rc-files string Path to local files to serve on the HTTP server. --qingstor-upload-concurrency int Concurrency for multipart uploads. (default 1)
--rc-htpasswd string htpasswd file - if not provided no authentication is done --qingstor-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M)
--rc-key string SSL PEM Private key --qingstor-zone string Zone to connect to.
--rc-max-header-bytes int Maximum size of request header (default 4096) -q, --quiet Print as little stuff as possible
--rc-no-auth Don't require auth for certain methods. --rc Enable the remote control server.
--rc-pass string Password for authentication. --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
--rc-realm string realm for authentication (default "rclone") --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--rc-serve Enable the serving of remote objects. --rc-client-ca string Client certificate authority to verify clients with
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) --rc-files string Path to local files to serve on the HTTP server.
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) --rc-htpasswd string htpasswd file - if not provided no authentication is done
--rc-user string User name for authentication. --rc-key string SSL PEM Private key
--retries int Retry operations this many times if they fail (default 3) --rc-max-header-bytes int Maximum size of request header (default 4096)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) --rc-no-auth Don't require auth for certain methods.
--s3-access-key-id string AWS Access Key ID. --rc-pass string Password for authentication.
--s3-acl string Canned ACL used when creating buckets and storing or copying objects. --rc-realm string realm for authentication (default "rclone")
--s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M) --rc-serve Enable the serving of remote objects.
--s3-disable-checksum Don't store MD5 checksum with object metadata --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--s3-endpoint string Endpoint for S3 API. --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). --rc-user string User name for authentication.
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true) --retries int Retry operations this many times if they fail (default 3)
--s3-location-constraint string Location constraint - must be set to match the Region. --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--s3-provider string Choose your S3 provider. --s3-access-key-id string AWS Access Key ID.
--s3-region string Region to connect to. --s3-acl string Canned ACL used when creating buckets and storing or copying objects.
--s3-secret-access-key string AWS Secret Access Key (password) --s3-bucket-acl string Canned ACL used when creating buckets.
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
--s3-session-token string An AWS session token --s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. --s3-endpoint string Endpoint for S3 API.
--s3-storage-class string The storage class to use when storing new objects in S3. --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--s3-location-constraint string Location constraint - must be set to match the Region.
--s3-provider string Choose your S3 provider.
--s3-region string Region to connect to.
--s3-secret-access-key string AWS Secret Access Key (password)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--s3-session-token string An AWS session token
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--s3-storage-class string The storage class to use when storing new objects in S3.
--s3-upload-concurrency int Concurrency for multipart uploads. (default 4)
--s3-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M)
--s3-v2-auth If true use v2 authentication.
--sftp-ask-password Allow asking for SFTP password when needed.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--sftp-host string SSH host to connect to
--sftp-key-file string Path to PEM-encoded private key file, leave blank or set key-use-agent to use ssh-agent.
--sftp-key-file-pass string The passphrase to decrypt the PEM-encoded private key file.
--sftp-key-use-agent When set forces the usage of the ssh-agent.
--sftp-pass string SSH password, leave blank to use ssh-agent.
--sftp-path-override string Override path used by SSH connection.
--sftp-port string SSH port, leave blank to use default (22)
--sftp-set-modtime Set the modified time on the remote if set. (default true)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--sftp-user string SSH username, leave blank for current username, ncw
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 45)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-one-line Make the stats fit on one line.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string Suffix for use with --backup-dir.
--swift-application-credential-id string Application Credential ID (OS_APPLICATION_CREDENTIAL_ID)
--swift-application-credential-name string Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME)
--swift-application-credential-secret string Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET)
--swift-auth string Authentication URL for server (OS_AUTH_URL).
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--swift-key string API key or password (OS_PASSWORD).
--swift-no-chunk Don't chunk files during streaming upload.
--swift-region string Region name - optional (OS_REGION_NAME)
--swift-storage-policy string The storage policy to use when creating a new container
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--swift-user string User name to log in (OS_USERNAME).
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
--tpslimit float Limit HTTP transactions per second to this.
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
--union-remotes string List of space separated remotes.
-u, --update Skip files that are newer on the destination.
--use-cookies Enable session cookiejar.
--use-mmap Use mmap allocator (see docs).
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
--yandex-unlink Remove existing public link to file/folder with link command rather than creating.
```
### SEE ALSO
* [rclone config](/commands/rclone_config/) - Enter an interactive configuration session.
###### Auto generated by spf13/cobra on 9-Feb-2019
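The inherited options above are rclone's global and backend flags, which can be mixed freely on any command line. As a hedged illustration only (the remote name `myswift:` and the paths are hypothetical, and this assumes rclone v1.46 is installed with that remote already configured), a sync combining a few of the flags listed above might look like:

```shell
# Hypothetical invocation: "myswift:" is an assumed, pre-configured remote.
# --bwlimit caps bandwidth with a SizeSuffix value, --transfers sets the
# number of parallel file transfers, and the --stats flags control how
# progress is reported, as described in the option list above.
rclone sync /srv/data myswift:backup \
  --bwlimit 8M \
  --transfers 8 \
  --stats 30s --stats-one-line \
  --log-level INFO
```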

View File

@ -1,5 +1,5 @@
---
date: 2019-02-09T10:42:18Z
title: "rclone config delete"
slug: rclone_config_delete
url: /commands/rclone_config_delete/
@ -25,285 +25,303 @@ rclone config delete <name> [flags]
### Options inherited from parent commands
```
--acd-auth-url string Auth server URL.
--acd-client-id string Amazon Application Client ID.
--acd-client-secret string Amazon Application Client Secret.
--acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-token-url string Token server url.
--acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--alias-remote string Remote or path to alias.
--ask-password Allow prompt for password for encrypted configuration. (default true)
--auto-confirm If enabled, do not request console confirmation.
--azureblob-access-tier string Access tier of blob: hot, cool or archive.
--azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
--azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M)
--azureblob-endpoint string Endpoint for the service
--azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
--azureblob-list-chunk int Size of blob list. (default 5000)
--azureblob-sas-url string SAS URL for container level access only
--azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M)
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--b2-disable-checksum Disable checksums for large (> upload cutoff) files
--b2-endpoint string Endpoint for the service.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-key string Application Key
--b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
--b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M)
--b2-versions Include old versions in directory listings.
--backup-dir string Make backups into hierarchy based in DIR.
--bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
--box-client-id string Box App Client Id.
--box-client-secret string Box App Client Secret
--box-commit-retries int Max number of times to try committing a multipart file. (default 100)
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
--buffer-size SizeSuffix In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
--cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend")
--cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M)
--cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G)
--cache-db-path string Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend")
--cache-db-purge Clear all the cached data for this remote on start.
--cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string Directory rclone will use for caching. (default "$HOME/.cache/rclone")
--cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
--cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server
--cache-plex-password string The password of the Plex user
--cache-plex-url string The URL of the Plex server
--cache-plex-username string The username of the Plex user
--cache-read-retries int How many times to retry a read from a cache storage. (default 10)
--cache-remote string Remote to cache.
--cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1)
--cache-tmp-upload-path string Directory to keep temporary files until they are uploaded.
--cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
--cache-workers int How many workers should run in parallel to download chunks. (default 4)
--cache-writes Cache file data on writes through the FS
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum (if available) & size, not mod-time & size
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
-L, --copy-links Follow symlinks and copy the pointed to item.
--cpuprofile string Write cpu profile to file
--crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--crypt-password string Password or pass phrase for encryption.
--crypt-password2 string Password or pass phrase for salt. Optional but recommended.
--crypt-remote string Remote to encrypt/decrypt.
--crypt-show-mapping For all files listed show how the names encrypt.
--delete-after When synchronizing, delete files on destination after transferring (default)
--delete-before When synchronizing, delete files on destination before transferring
--delete-during When synchronizing, delete files during transfer
--delete-excluded Delete files on dest excluded from sync
--disable string Disable a comma separated list of features. Use help to see a list.
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
--drive-alternate-export Use alternate export URLs for google documents export.
--drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-client-id string Google Application Client Id
--drive-client-secret string Google Application Client Secret
--drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-formats string Deprecated: see export_formats
--drive-impersonate string Impersonate this user when using a service account.
--drive-import-formats string Comma separated list of preferred formats for uploading Google docs.
--drive-keep-revision-forever Keep new head revision of each file forever.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-pacer-burst int Number of API calls to allow without sleeping. (default 100)
--drive-pacer-min-sleep Duration Minimum time to sleep between API calls. (default 100ms)
--drive-root-folder-id string ID of the root folder
--drive-scope string Scope that rclone should use when requesting access from drive.
--drive-service-account-credentials string Service Account Credentials JSON blob
--drive-service-account-file string Service Account Credentials JSON file path
--drive-shared-with-me Only show files that are shared with me.
--drive-skip-gdocs Skip google documents in all listings.
--drive-team-drive string ID of the Team Drive
--drive-trashed-only Only show files that are in the trash.
--drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--drive-use-created-date Use file created date instead of modified date.
--drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--drive-v2-download-min-size SizeSuffix If Objects are greater, use drive v2 API to download. (default off)
--dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M)
--dropbox-client-id string Dropbox App Client Id
--dropbox-client-secret string Dropbox App Client Secret
--dropbox-impersonate string Impersonate this user when using a business account.
-n, --dry-run Do a trial run with no permanent changes
--dump DumpFlags List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--dump-headers Dump HTTP headers - may contain sensitive info
--exclude stringArray Exclude files matching pattern
--exclude-from stringArray Read exclude patterns from file
--exclude-if-present string Exclude directories if filename is present
--fast-list Use recursive list if available. Uses more memory but fewer transactions.
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
--ftp-host string FTP host to connect to
--ftp-pass string FTP password
--ftp-port string FTP port, leave blank to use default (21)
--ftp-user string FTP username, leave blank for current username, $USER
--gcs-bucket-acl string Access Control List for new buckets.
--gcs-client-id string Google Application Client Id
--gcs-client-secret string Google Application Client Secret
--gcs-location string Location for the newly created buckets.
--gcs-object-acl string Access Control List for new objects.
--gcs-project-number string Project number.
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-url string URL of http host to connect to
--hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--hubic-client-id string Hubic Client Id
--hubic-client-secret string Hubic Client Secret
--hubic-no-chunk Don't chunk files during streaming upload.
--ignore-case Ignore case in filters (case insensitive)
--ignore-checksum Skip post copy check of checksums.
--ignore-errors Delete even if there are I/O errors
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
--immutable Do not modify files. Fail if existing files have been modified.
--include stringArray Include files matching pattern
--include-from stringArray Read include patterns from file
--jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--jottacloud-mountpoint string The mountpoint to use.
--jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
--jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fails. (default 10M)
--jottacloud-user string User Name
-l, --links Translate symlinks to/from regular files with a '.rclonelink' extension
--local-no-check-updated Don't check to see if the files change during upload
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
--local-nounc string Disable UNC (long path names) conversion on Windows
--log-file string Log everything to this file
--log-format string Comma separated list of log format options (default "date,time")
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
--max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--max-delete int When synchronizing, limit the number of deletes (default -1)
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size SizeSuffix Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--max-transfer SizeSuffix Maximum size of data to transfer. (default off)
--mega-debug Output more debug from Mega.
--mega-hard-delete Delete files permanently rather than putting them into the trash.
--mega-pass string Password.
--mega-user string User name
--memprofile string Write memory profile to file
--min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size SizeSuffix Only transfer files bigger than this in k or suffix b|k|M|G (default off)
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--onedrive-client-secret string Microsoft App Client Secret --no-traverse Don't traverse destination file system on copy.
--onedrive-drive-id string The ID of the drive to use --no-update-modtime Don't update destination mod-time if files identical.
--onedrive-drive-type string The type of the drive ( personal | business | documentLibrary ) -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-expose-onenote-files Set to make OneNote files show up in directory listings. --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--opendrive-password string Password. --onedrive-client-id string Microsoft App Client Id
--opendrive-username string Username --onedrive-client-secret string Microsoft App Client Secret
--pcloud-client-id string Pcloud App Client Id --onedrive-drive-id string The ID of the drive to use
--pcloud-client-secret string Pcloud App Client Secret --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
-P, --progress Show progress during transfer. --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
--qingstor-access-key-id string QingStor Access Key ID --opendrive-password string Password.
--qingstor-connection-retries int Number of connection retries. (default 3) --opendrive-username string Username
--qingstor-endpoint string Enter a endpoint URL to connection QingStor API. --pcloud-client-id string Pcloud App Client Id
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank. --pcloud-client-secret string Pcloud App Client Secret
--qingstor-secret-access-key string QingStor Secret Access Key (password) -P, --progress Show progress during transfer.
--qingstor-zone string Zone to connect to. --qingstor-access-key-id string QingStor Access Key ID
-q, --quiet Print as little stuff as possible --qingstor-chunk-size SizeSuffix Chunk size to use for uploading. (default 4M)
--rc Enable the remote control server. --qingstor-connection-retries int Number of connection retries. (default 3)
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") --qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate) --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
--rc-client-ca string Client certificate authority to verify clients with --qingstor-secret-access-key string QingStor Secret Access Key (password)
--rc-files string Path to local files to serve on the HTTP server. --qingstor-upload-concurrency int Concurrency for multipart uploads. (default 1)
--rc-htpasswd string htpasswd file - if not provided no authentication is done --qingstor-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M)
--rc-key string SSL PEM Private key --qingstor-zone string Zone to connect to.
--rc-max-header-bytes int Maximum size of request header (default 4096) -q, --quiet Print as little stuff as possible
--rc-no-auth Don't require auth for certain methods. --rc Enable the remote control server.
--rc-pass string Password for authentication. --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
--rc-realm string realm for authentication (default "rclone") --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--rc-serve Enable the serving of remote objects. --rc-client-ca string Client certificate authority to verify clients with
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) --rc-files string Path to local files to serve on the HTTP server.
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) --rc-htpasswd string htpasswd file - if not provided no authentication is done
--rc-user string User name for authentication. --rc-key string SSL PEM Private key
--retries int Retry operations this many times if they fail (default 3) --rc-max-header-bytes int Maximum size of request header (default 4096)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) --rc-no-auth Don't require auth for certain methods.
--s3-access-key-id string AWS Access Key ID. --rc-pass string Password for authentication.
--s3-acl string Canned ACL used when creating buckets and storing or copying objects. --rc-realm string realm for authentication (default "rclone")
--s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M) --rc-serve Enable the serving of remote objects.
--s3-disable-checksum Don't store MD5 checksum with object metadata --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--s3-endpoint string Endpoint for S3 API. --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). --rc-user string User name for authentication.
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true) --retries int Retry operations this many times if they fail (default 3)
--s3-location-constraint string Location constraint - must be set to match the Region. --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--s3-provider string Choose your S3 provider. --s3-access-key-id string AWS Access Key ID.
--s3-region string Region to connect to. --s3-acl string Canned ACL used when creating buckets and storing or copying objects.
--s3-secret-access-key string AWS Secret Access Key (password) --s3-bucket-acl string Canned ACL used when creating buckets.
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
--s3-session-token string An AWS session token --s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. --s3-endpoint string Endpoint for S3 API.
--s3-storage-class string The storage class to use when storing new objects in S3. --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2) --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--s3-v2-auth If true use v2 authentication. --s3-location-constraint string Location constraint - must be set to match the Region.
--sftp-ask-password Allow asking for SFTP password when needed. --s3-provider string Choose your S3 provider.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. --s3-region string Region to connect to.
--sftp-host string SSH host to connect to --s3-secret-access-key string AWS Secret Access Key (password)
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent. --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--sftp-pass string SSH password, leave blank to use ssh-agent. --s3-session-token string An AWS session token
--sftp-path-override string Override path used by SSH connection. --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--sftp-port string SSH port, leave blank to use default (22) --s3-storage-class string The storage class to use when storing new objects in S3.
--sftp-set-modtime Set the modified time on the remote if set. (default true) --s3-upload-concurrency int Concurrency for multipart uploads. (default 4)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker. --s3-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M)
--sftp-user string SSH username, leave blank for current username, ncw --s3-v2-auth If true use v2 authentication.
--size-only Skip based on size only, not mod-time or checksum --sftp-ask-password Allow asking for SFTP password when needed.
--skip-links Don't warn about skipped symlinks. --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) --sftp-host string SSH host to connect to
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40) --sftp-key-file string Path to PEM-encoded private key file, leave blank or set key-use-agent to use ssh-agent.
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") --sftp-key-file-pass string The passphrase to decrypt the PEM-encoded private key file.
--stats-one-line Make the stats fit on one line. --sftp-key-use-agent When set forces the usage of the ssh-agent.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") --sftp-pass string SSH password, leave blank to use ssh-agent.
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) --sftp-path-override string Override path used by SSH connection.
--suffix string Suffix for use with --backup-dir. --sftp-port string SSH port, leave blank to use default (22)
--swift-auth string Authentication URL for server (OS_AUTH_URL). --sftp-set-modtime Set the modified time on the remote if set. (default true)
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) --sftp-user string SSH username, leave blank for current username, ncw
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) --size-only Skip based on size only, not mod-time or checksum
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) --skip-links Don't warn about skipped symlinks.
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form. --stats-file-name-length int Max file name length in stats. 0 for no limit (default 45)
--swift-key string API key or password (OS_PASSWORD). --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--swift-region string Region name - optional (OS_REGION_NAME) --stats-one-line Make the stats fit on one line.
--swift-storage-policy string The storage policy to use when creating a new container --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL) --streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) --suffix string Suffix for use with --backup-dir.
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) --swift-application-credential-id string Application Credential ID (OS_APPLICATION_CREDENTIAL_ID)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) --swift-application-credential-name string Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME)
--swift-user string User name to log in (OS_USERNAME). --swift-application-credential-secret string Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET)
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). --swift-auth string Authentication URL for server (OS_AUTH_URL).
--syslog Use Syslog for logging --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--timeout duration IO idle timeout (default 5m0s) --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--tpslimit float Limit HTTP transactions per second to this. --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--track-renames When synchronizing, track file renames and do a server side move if possible --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--transfers int Number of file transfers to run in parallel. (default 4) --swift-key string API key or password (OS_PASSWORD).
--union-remotes string List of space separated remotes. --swift-no-chunk Don't chunk files during streaming upload.
-u, --update Skip files that are newer on the destination. --swift-region string Region name - optional (OS_REGION_NAME)
--use-server-modtime Use server modified time instead of object metadata --swift-storage-policy string The storage policy to use when creating a new container
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.45") --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
-v, --verbose count Print lots more stuff (repeat for more) --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon) --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--webdav-pass string Password. --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--webdav-url string URL of http host to connect to --swift-user string User name to log in (OS_USERNAME).
--webdav-user string User name --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--webdav-vendor string Name of the Webdav site/service/software you are using --syslog Use Syslog for logging
--yandex-client-id string Yandex Client Id --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--yandex-client-secret string Yandex Client Secret --timeout duration IO idle timeout (default 5m0s)
--yandex-unlink Remove existing public link to file/folder with link command rather than creating. --tpslimit float Limit HTTP transactions per second to this.
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
--union-remotes string List of space separated remotes.
-u, --update Skip files that are newer on the destination.
--use-cookies Enable session cookiejar.
--use-mmap Use mmap allocator (see docs).
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
--yandex-unlink Remove existing public link to file/folder with link command rather than creating.
```
### SEE ALSO
* [rclone config](/commands/rclone_config/) - Enter an interactive configuration session.
###### Auto generated by spf13/cobra on 9-Feb-2019
@@ -1,5 +1,5 @@
---
date: 2019-02-09T10:42:18Z
title: "rclone config dump"
slug: rclone_config_dump
url: /commands/rclone_config_dump/
@@ -25,285 +25,303 @@ rclone config dump [flags]
### Options inherited from parent commands
```
--acd-auth-url string Auth server URL.
--acd-client-id string Amazon Application Client ID.
--acd-client-secret string Amazon Application Client Secret.
--acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-token-url string Token server url.
--acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--alias-remote string Remote or path to alias.
--ask-password Allow prompt for password for encrypted configuration. (default true)
--auto-confirm If enabled, do not request console confirmation.
--azureblob-access-tier string Access tier of blob: hot, cool or archive.
--azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
--azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M)
--azureblob-endpoint string Endpoint for the service
--azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
--azureblob-list-chunk int Size of blob list. (default 5000)
--azureblob-sas-url string SAS URL for container level access only
--azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M)
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--b2-disable-checksum Disable checksums for large (> upload cutoff) files
--b2-endpoint string Endpoint for the service.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-key string Application Key
--b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
--b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M)
--b2-versions Include old versions in directory listings.
--backup-dir string Make backups into hierarchy based in DIR.
--bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
--box-client-id string Box App Client Id.
--box-client-secret string Box App Client Secret
--box-commit-retries int Max number of times to try committing a multipart file. (default 100)
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
--buffer-size SizeSuffix In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
--cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend")
--cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M)
--cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G)
--cache-db-path string Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend")
--cache-db-purge Clear all the cached data for this remote on start.
--cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string Directory rclone will use for caching. (default "$HOME/.cache/rclone")
--cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
--cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server
--cache-plex-password string The password of the Plex user
--cache-plex-url string The URL of the Plex server
--cache-plex-username string The username of the Plex user
--cache-read-retries int How many times to retry a read from a cache storage. (default 10)
--cache-remote string Remote to cache.
--cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1)
--cache-tmp-upload-path string Directory to keep temporary files until they are uploaded.
--cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
--cache-workers int How many workers should run in parallel to download chunks. (default 4)
--cache-writes Cache file data on writes through the FS
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum (if available) & size, not mod-time & size
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
-L, --copy-links Follow symlinks and copy the pointed to item.
--cpuprofile string Write cpu profile to file
--crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--crypt-password string Password or pass phrase for encryption.
--crypt-password2 string Password or pass phrase for salt. Optional but recommended.
--crypt-remote string Remote to encrypt/decrypt.
--crypt-show-mapping For all files listed show how the names encrypt.
--delete-after When synchronizing, delete files on destination after transferring (default)
--delete-before When synchronizing, delete files on destination before transferring
--delete-during When synchronizing, delete files during transfer
--delete-excluded Delete files on dest excluded from sync
--disable string Disable a comma separated list of features. Use help to see a list.
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
--drive-alternate-export Use alternate export URLs for google documents export.,
--drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M)
--drive-client-id string Google Application Client Id
--drive-client-secret string Google Application Client Secret
--drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-formats string Deprecated: see export_formats
--drive-impersonate string Impersonate this user when using a service account.
--drive-import-formats string Comma separated list of preferred formats for uploading Google docs.
--drive-keep-revision-forever Keep new head revision of each file forever.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-pacer-burst int Number of API calls to allow without sleeping. (default 100)
--drive-pacer-min-sleep Duration Minimum time to sleep between API calls. (default 100ms)
--drive-root-folder-id string ID of the root folder
--drive-scope string Scope that rclone should use when requesting access from drive.
--drive-service-account-credentials string Service Account Credentials JSON blob
--drive-service-account-file string Service Account Credentials JSON file path
--drive-shared-with-me Only show files that are shared with me.
--drive-skip-gdocs Skip google documents in all listings.
--drive-team-drive string ID of the Team Drive
--drive-use-trash Send files to the trash instead of deleting permanently. (default true) --drive-trashed-only Only show files that are in the trash.
--drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off) --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M) --drive-use-created-date Use file created date instead of modified date.,
--dropbox-client-id string Dropbox App Client Id --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--dropbox-client-secret string Dropbox App Client Secret --drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off)
--dropbox-impersonate string Impersonate this user when using a business account. --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M)
-n, --dry-run Do a trial run with no permanent changes --dropbox-client-id string Dropbox App Client Id
--dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles --dropbox-client-secret string Dropbox App Client Secret
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info --dropbox-impersonate string Impersonate this user when using a business account.
--dump-headers Dump HTTP bodies - may contain sensitive info -n, --dry-run Do a trial run with no permanent changes
--exclude stringArray Exclude files matching pattern --dump DumpFlags List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--exclude-from stringArray Read exclude patterns from file --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--exclude-if-present string Exclude directories if filename is present --dump-headers Dump HTTP bodies - may contain sensitive info
--fast-list Use recursive list if available. Uses more memory but fewer transactions. --exclude stringArray Exclude files matching pattern
--files-from stringArray Read list of source-file names from file --exclude-from stringArray Read exclude patterns from file
-f, --filter stringArray Add a file-filtering rule --exclude-if-present string Exclude directories if filename is present
--filter-from stringArray Read filtering patterns from a file --fast-list Use recursive list if available. Uses more memory but fewer transactions.
--ftp-host string FTP host to connect to --files-from stringArray Read list of source-file names from file
--ftp-pass string FTP password -f, --filter stringArray Add a file-filtering rule
--ftp-port string FTP port, leave blank to use default (21) --filter-from stringArray Read filtering patterns from a file
--ftp-user string FTP username, leave blank for current username, $USER --ftp-host string FTP host to connect to
--gcs-bucket-acl string Access Control List for new buckets. --ftp-pass string FTP password
--gcs-client-id string Google Application Client Id --ftp-port string FTP port, leave blank to use default (21)
--gcs-client-secret string Google Application Client Secret --ftp-user string FTP username, leave blank for current username, $USER
--gcs-location string Location for the newly created buckets. --gcs-bucket-acl string Access Control List for new buckets.
--gcs-object-acl string Access Control List for new objects. --gcs-client-id string Google Application Client Id
--gcs-project-number string Project number. --gcs-client-secret string Google Application Client Secret
--gcs-service-account-file string Service Account Credentials JSON file path --gcs-location string Location for the newly created buckets.
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage. --gcs-object-acl string Access Control List for new objects.
--http-url string URL of http host to connect to --gcs-project-number string Project number.
--hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) --gcs-service-account-file string Service Account Credentials JSON file path
--hubic-client-id string Hubic Client Id --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--hubic-client-secret string Hubic Client Secret --http-url string URL of http host to connect to
--ignore-case Ignore case in filters (case insensitive) --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--ignore-checksum Skip post copy check of checksums. --hubic-client-id string Hubic Client Id
--ignore-errors delete even if there are I/O errors --hubic-client-secret string Hubic Client Secret
--ignore-existing Skip all files that exist on destination --hubic-no-chunk Don't chunk files during streaming upload.
--ignore-size Ignore size when skipping use mod-time or checksum. --ignore-case Ignore case in filters (case insensitive)
-I, --ignore-times Don't skip files that match size and time - transfer all files --ignore-checksum Skip post copy check of checksums.
--immutable Do not modify files. Fail if existing files have been modified. --ignore-errors delete even if there are I/O errors
--include stringArray Include files matching pattern --ignore-existing Skip all files that exist on destination
--include-from stringArray Read include patterns from file --ignore-size Ignore size when skipping use mod-time or checksum.
--jottacloud-hard-delete Delete files permanently rather than putting them into the trash. -I, --ignore-times Don't skip files that match size and time - transfer all files
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M) --immutable Do not modify files. Fail if existing files have been modified.
--jottacloud-mountpoint string The mountpoint to use. --include stringArray Include files matching pattern
--jottacloud-pass string Password. --include-from stringArray Read include patterns from file
--jottacloud-unlink Remove existing public link to file/folder with link command rather than creating. --jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
--jottacloud-user string User Name --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--local-no-check-updated Don't check to see if the files change during upload --jottacloud-mountpoint string The mountpoint to use.
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated) --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
--local-nounc string Disable UNC (long path names) conversion on Windows --jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fail's. (default 10M)
--log-file string Log everything to this file --jottacloud-user string User Name:
--log-format string Comma separated list of log format options (default "date,time") -l, --links Translate symlinks to/from regular files with a '.rclonelink' extension
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") --local-no-check-updated Don't check to see if the files change during upload
--low-level-retries int Number of low level retries to do. (default 10) --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) --local-nounc string Disable UNC (long path names) conversion on Windows
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000) --log-file string Log everything to this file
--max-delete int When synchronizing, limit the number of deletes (default -1) --log-format string Comma separated list of log format options (default "date,time")
--max-depth int If set limits the recursion depth to this. (default -1) --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) --low-level-retries int Number of low level retries to do. (default 10)
--max-transfer int Maximum size of data to transfer. (default off) --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--mega-debug Output more debug from Mega. --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--mega-hard-delete Delete files permanently rather than putting them into the trash. --max-delete int When synchronizing, limit the number of deletes (default -1)
--mega-pass string Password. --max-depth int If set limits the recursion depth to this. (default -1)
--mega-user string User name --max-size SizeSuffix Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--memprofile string Write memory profile to file --max-transfer SizeSuffix Maximum size of data to transfer. (default off)
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) --mega-debug Output more debug from Mega.
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) --mega-hard-delete Delete files permanently rather than putting them into the trash.
--modify-window duration Max time diff to be considered the same (default 1ns) --mega-pass string Password.
--no-check-certificate Do not verify the server SSL certificate. Insecure. --mega-user string User name
--no-gzip-encoding Don't set Accept-Encoding: gzip. --memprofile string Write memory profile to file
--no-traverse Obsolete - does nothing. --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--no-update-modtime Don't update destination mod-time if files identical. --min-size SizeSuffix Only transfer files bigger than this in k or suffix b|k|M|G (default off)
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only). --modify-window duration Max time diff to be considered the same (default 1ns)
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M) --no-check-certificate Do not verify the server SSL certificate. Insecure.
--onedrive-client-id string Microsoft App Client Id --no-gzip-encoding Don't set Accept-Encoding: gzip.
--onedrive-client-secret string Microsoft App Client Secret --no-traverse Don't traverse destination file system on copy.
--onedrive-drive-id string The ID of the drive to use --no-update-modtime Don't update destination mod-time if files identical.
--onedrive-drive-type string The type of the drive ( personal | business | documentLibrary ) -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-expose-onenote-files Set to make OneNote files show up in directory listings. --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--opendrive-password string Password. --onedrive-client-id string Microsoft App Client Id
--opendrive-username string Username --onedrive-client-secret string Microsoft App Client Secret
--pcloud-client-id string Pcloud App Client Id --onedrive-drive-id string The ID of the drive to use
--pcloud-client-secret string Pcloud App Client Secret --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
-P, --progress Show progress during transfer. --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
--qingstor-access-key-id string QingStor Access Key ID --opendrive-password string Password.
--qingstor-connection-retries int Number of connection retries. (default 3) --opendrive-username string Username
--qingstor-endpoint string Enter a endpoint URL to connection QingStor API. --pcloud-client-id string Pcloud App Client Id
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank. --pcloud-client-secret string Pcloud App Client Secret
--qingstor-secret-access-key string QingStor Secret Access Key (password) -P, --progress Show progress during transfer.
--qingstor-zone string Zone to connect to. --qingstor-access-key-id string QingStor Access Key ID
-q, --quiet Print as little stuff as possible --qingstor-chunk-size SizeSuffix Chunk size to use for uploading. (default 4M)
--rc Enable the remote control server. --qingstor-connection-retries int Number of connection retries. (default 3)
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") --qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate) --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
--rc-client-ca string Client certificate authority to verify clients with --qingstor-secret-access-key string QingStor Secret Access Key (password)
--rc-files string Path to local files to serve on the HTTP server. --qingstor-upload-concurrency int Concurrency for multipart uploads. (default 1)
--rc-htpasswd string htpasswd file - if not provided no authentication is done --qingstor-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M)
--rc-key string SSL PEM Private key --qingstor-zone string Zone to connect to.
--rc-max-header-bytes int Maximum size of request header (default 4096) -q, --quiet Print as little stuff as possible
--rc-no-auth Don't require auth for certain methods. --rc Enable the remote control server.
--rc-pass string Password for authentication. --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
--rc-realm string realm for authentication (default "rclone") --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--rc-serve Enable the serving of remote objects. --rc-client-ca string Client certificate authority to verify clients with
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) --rc-files string Path to local files to serve on the HTTP server.
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) --rc-htpasswd string htpasswd file - if not provided no authentication is done
--rc-user string User name for authentication. --rc-key string SSL PEM Private key
--retries int Retry operations this many times if they fail (default 3) --rc-max-header-bytes int Maximum size of request header (default 4096)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) --rc-no-auth Don't require auth for certain methods.
--s3-access-key-id string AWS Access Key ID. --rc-pass string Password for authentication.
--s3-acl string Canned ACL used when creating buckets and storing or copying objects. --rc-realm string realm for authentication (default "rclone")
--s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M) --rc-serve Enable the serving of remote objects.
--s3-disable-checksum Don't store MD5 checksum with object metadata --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--s3-endpoint string Endpoint for S3 API. --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). --rc-user string User name for authentication.
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true) --retries int Retry operations this many times if they fail (default 3)
--s3-location-constraint string Location constraint - must be set to match the Region. --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--s3-provider string Choose your S3 provider. --s3-access-key-id string AWS Access Key ID.
--s3-region string Region to connect to. --s3-acl string Canned ACL used when creating buckets and storing or copying objects.
--s3-secret-access-key string AWS Secret Access Key (password) --s3-bucket-acl string Canned ACL used when creating buckets.
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
--s3-session-token string An AWS session token --s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. --s3-endpoint string Endpoint for S3 API.
--s3-storage-class string The storage class to use when storing new objects in S3. --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2) --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--s3-v2-auth If true use v2 authentication. --s3-location-constraint string Location constraint - must be set to match the Region.
--sftp-ask-password Allow asking for SFTP password when needed. --s3-provider string Choose your S3 provider.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. --s3-region string Region to connect to.
--sftp-host string SSH host to connect to --s3-secret-access-key string AWS Secret Access Key (password)
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent. --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--sftp-pass string SSH password, leave blank to use ssh-agent. --s3-session-token string An AWS session token
--sftp-path-override string Override path used by SSH connection. --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--sftp-port string SSH port, leave blank to use default (22) --s3-storage-class string The storage class to use when storing new objects in S3.
--sftp-set-modtime Set the modified time on the remote if set. (default true) --s3-upload-concurrency int Concurrency for multipart uploads. (default 4)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker. --s3-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M)
--sftp-user string SSH username, leave blank for current username, ncw --s3-v2-auth If true use v2 authentication.
--size-only Skip based on size only, not mod-time or checksum --sftp-ask-password Allow asking for SFTP password when needed.
--skip-links Don't warn about skipped symlinks. --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) --sftp-host string SSH host to connect to
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40) --sftp-key-file string Path to PEM-encoded private key file, leave blank or set key-use-agent to use ssh-agent.
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") --sftp-key-file-pass string The passphrase to decrypt the PEM-encoded private key file.
--stats-one-line Make the stats fit on one line. --sftp-key-use-agent When set forces the usage of the ssh-agent.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") --sftp-pass string SSH password, leave blank to use ssh-agent.
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) --sftp-path-override string Override path used by SSH connection.
--suffix string Suffix for use with --backup-dir. --sftp-port string SSH port, leave blank to use default (22)
--swift-auth string Authentication URL for server (OS_AUTH_URL). --sftp-set-modtime Set the modified time on the remote if set. (default true)
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) --sftp-user string SSH username, leave blank for current username, ncw
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) --size-only Skip based on size only, not mod-time or checksum
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) --skip-links Don't warn about skipped symlinks.
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form. --stats-file-name-length int Max file name length in stats. 0 for no limit (default 45)
--swift-key string API key or password (OS_PASSWORD). --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--swift-region string Region name - optional (OS_REGION_NAME) --stats-one-line Make the stats fit on one line.
--swift-storage-policy string The storage policy to use when creating a new container --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL) --streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) --suffix string Suffix for use with --backup-dir.
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) --swift-application-credential-id string Application Credential ID (OS_APPLICATION_CREDENTIAL_ID)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) --swift-application-credential-name string Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME)
--swift-user string User name to log in (OS_USERNAME). --swift-application-credential-secret string Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET)
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). --swift-auth string Authentication URL for server (OS_AUTH_URL).
--syslog Use Syslog for logging --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--timeout duration IO idle timeout (default 5m0s) --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--tpslimit float Limit HTTP transactions per second to this. --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--track-renames When synchronizing, track file renames and do a server side move if possible --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--transfers int Number of file transfers to run in parallel. (default 4) --swift-key string API key or password (OS_PASSWORD).
--union-remotes string List of space separated remotes. --swift-no-chunk Don't chunk files during streaming upload.
-u, --update Skip files that are newer on the destination. --swift-region string Region name - optional (OS_REGION_NAME)
--use-server-modtime Use server modified time instead of object metadata --swift-storage-policy string The storage policy to use when creating a new container
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.45") --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
-v, --verbose count Print lots more stuff (repeat for more) --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon) --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--webdav-pass string Password. --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--webdav-url string URL of http host to connect to --swift-user string User name to log in (OS_USERNAME).
--webdav-user string User name --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--webdav-vendor string Name of the Webdav site/service/software you are using --syslog Use Syslog for logging
--yandex-client-id string Yandex Client Id --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--yandex-client-secret string Yandex Client Secret --timeout duration IO idle timeout (default 5m0s)
--yandex-unlink Remove existing public link to file/folder with link command rather than creating. --tpslimit float Limit HTTP transactions per second to this.
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
--union-remotes string List of space separated remotes.
-u, --update Skip files that are newer on the destination.
--use-cookies Enable session cookiejar.
--use-mmap Use mmap allocator (see docs).
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
--yandex-unlink Remove existing public link to file/folder with link command rather than creating.
```
### SEE ALSO
* [rclone config](/commands/rclone_config/) - Enter an interactive configuration session.
-###### Auto generated by spf13/cobra on 24-Nov-2018
+###### Auto generated by spf13/cobra on 9-Feb-2019
@@ -1,5 +1,5 @@
 ---
-date: 2018-11-24T13:43:29Z
+date: 2019-02-09T10:42:18Z
 title: "rclone config edit"
 slug: rclone_config_edit
 url: /commands/rclone_config_edit/
@@ -28,285 +28,303 @@ rclone config edit [flags]
 ### Options inherited from parent commands
 ```
       --acd-auth-url string                    Auth server URL.
       --acd-client-id string                   Amazon Application Client ID.
       --acd-client-secret string               Amazon Application Client Secret.
       --acd-templink-threshold SizeSuffix      Files >= this size will be downloaded via their tempLink. (default 9G)
       --acd-token-url string                   Token server url.
       --acd-upload-wait-per-gb Duration        Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
       --alias-remote string                    Remote or path to alias.
       --ask-password                           Allow prompt for password for encrypted configuration. (default true)
       --auto-confirm                           If enabled, do not request console confirmation.
       --azureblob-access-tier string           Access tier of blob: hot, cool or archive.
       --azureblob-account string               Storage Account Name (leave blank to use connection string or SAS URL)
       --azureblob-chunk-size SizeSuffix        Upload chunk size (<= 100MB). (default 4M)
       --azureblob-endpoint string              Endpoint for the service
       --azureblob-key string                   Storage Account Key (leave blank to use connection string or SAS URL)
       --azureblob-list-chunk int               Size of blob list. (default 5000)
       --azureblob-sas-url string               SAS URL for container level access only
       --azureblob-upload-cutoff SizeSuffix     Cutoff for switching to chunked upload (<= 256MB). (default 256M)
       --b2-account string                      Account ID or Application Key ID
       --b2-chunk-size SizeSuffix               Upload chunk size. Must fit in memory. (default 96M)
+      --b2-disable-checksum                    Disable checksums for large (> upload cutoff) files
       --b2-endpoint string                     Endpoint for the service.
       --b2-hard-delete                         Permanently delete files on remote removal, otherwise hide files.
       --b2-key string                          Application Key
       --b2-test-mode string                    A flag string for X-Bz-Test-Mode header for debugging.
       --b2-upload-cutoff SizeSuffix            Cutoff for switching to chunked upload. (default 200M)
       --b2-versions                            Include old versions in directory listings.
       --backup-dir string                      Make backups into hierarchy based in DIR.
       --bind string                            Local address to bind to for outgoing connections, IPv4, IPv6 or name.
       --box-client-id string                   Box App Client Id.
       --box-client-secret string               Box App Client Secret
       --box-commit-retries int                 Max number of times to try committing a multipart file. (default 100)
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M) --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
--buffer-size int In memory buffer size when reading files for each --transfer. (default 16M) --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable. --buffer-size SizeSuffix In memory buffer size when reading files for each --transfer. (default 16M)
--cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s) --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming. --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
--cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend") --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
--cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M) --cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend")
--cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G) --cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M)
--cache-db-path string Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend") --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G)
--cache-db-purge Clear all the cached data for this remote on start. --cache-db-path string Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend")
--cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s) --cache-db-purge Clear all the cached data for this remote on start.
--cache-dir string Directory rclone will use for caching. (default "$HOME/.cache/rclone") --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s) --cache-dir string Directory rclone will use for caching. (default "$HOME/.cache/rclone")
--cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
--cache-plex-password string The password of the Plex user --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server
--cache-plex-url string The URL of the Plex server --cache-plex-password string The password of the Plex user
--cache-plex-username string The username of the Plex user --cache-plex-url string The URL of the Plex server
--cache-read-retries int How many times to retry a read from a cache storage. (default 10) --cache-plex-username string The username of the Plex user
--cache-remote string Remote to cache. --cache-read-retries int How many times to retry a read from a cache storage. (default 10)
--cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1) --cache-remote string Remote to cache.
--cache-tmp-upload-path string Directory to keep temporary files until they are uploaded. --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1)
--cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s) --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded.
--cache-workers int How many workers should run in parallel to download chunks. (default 4) --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
--cache-writes Cache file data on writes through the FS --cache-workers int How many workers should run in parallel to download chunks. (default 4)
--checkers int Number of checkers to run in parallel. (default 8) --cache-writes Cache file data on writes through the FS
-c, --checksum Skip based on checksum & size, not mod-time & size --checkers int Number of checkers to run in parallel. (default 8)
--config string Config file. (default "/home/ncw/.rclone.conf") -c, --checksum Skip based on checksum (if available) & size, not mod-time & size
--contimeout duration Connect timeout (default 1m0s) --config string Config file. (default "/home/ncw/.rclone.conf")
-L, --copy-links Follow symlinks and copy the pointed to item. --contimeout duration Connect timeout (default 1m0s)
--cpuprofile string Write cpu profile to file -L, --copy-links Follow symlinks and copy the pointed to item.
--crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true) --cpuprofile string Write cpu profile to file
--crypt-filename-encryption string How to encrypt the filenames. (default "standard") --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--crypt-password string Password or pass phrase for encryption. --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--crypt-password2 string Password or pass phrase for salt. Optional but recommended. --crypt-password string Password or pass phrase for encryption.
--crypt-remote string Remote to encrypt/decrypt. --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
--crypt-show-mapping For all files listed show how the names encrypt. --crypt-remote string Remote to encrypt/decrypt.
--delete-after When synchronizing, delete files on destination after transferring (default) --crypt-show-mapping For all files listed show how the names encrypt.
--delete-before When synchronizing, delete files on destination before transferring --delete-after When synchronizing, delete files on destination after transferring (default)
--delete-during When synchronizing, delete files during transfer --delete-before When synchronizing, delete files on destination before transferring
--delete-excluded Delete files on dest excluded from sync --delete-during When synchronizing, delete files during transfer
--disable string Disable a comma separated list of features. Use help to see a list. --delete-excluded Delete files on dest excluded from sync
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded. --disable string Disable a comma separated list of features. Use help to see a list.
--drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time. --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--drive-alternate-export Use alternate export URLs for google documents export., --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
--drive-auth-owner-only Only consider files owned by the authenticated user. --drive-alternate-export Use alternate export URLs for google documents export.,
--drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M) --drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-client-id string Google Application Client Id --drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M)
--drive-client-secret string Google Application Client Secret --drive-client-id string Google Application Client Id
--drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") --drive-client-secret string Google Application Client Secret
--drive-formats string Deprecated: see export_formats --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-impersonate string Impersonate this user when using a service account. --drive-formats string Deprecated: see export_formats
--drive-import-formats string Comma separated list of preferred formats for uploading Google docs. --drive-impersonate string Impersonate this user when using a service account.
--drive-keep-revision-forever Keep new head revision of each file forever. --drive-import-formats string Comma separated list of preferred formats for uploading Google docs.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000) --drive-keep-revision-forever Keep new head revision of each file forever.
--drive-root-folder-id string ID of the root folder --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-scope string Scope that rclone should use when requesting access from drive. --drive-pacer-burst int Number of API calls to allow without sleeping. (default 100)
--drive-service-account-credentials string Service Account Credentials JSON blob --drive-pacer-min-sleep Duration Minimum time to sleep between API calls. (default 100ms)
--drive-service-account-file string Service Account Credentials JSON file path --drive-root-folder-id string ID of the root folder
--drive-shared-with-me Only show files that are shared with me. --drive-scope string Scope that rclone should use when requesting access from drive.
--drive-skip-gdocs Skip google documents in all listings. --drive-service-account-credentials string Service Account Credentials JSON blob
--drive-team-drive string ID of the Team Drive --drive-service-account-file string Service Account Credentials JSON file path
--drive-trashed-only Only show files that are in the trash. --drive-shared-with-me Only show files that are shared with me.
--drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M) --drive-skip-gdocs Skip google documents in all listings.
--drive-use-created-date Use file created date instead of modified date., --drive-team-drive string ID of the Team Drive
--drive-use-trash Send files to the trash instead of deleting permanently. (default true) --drive-trashed-only Only show files that are in the trash.
--drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off) --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M) --drive-use-created-date Use file created date instead of modified date.,
--dropbox-client-id string Dropbox App Client Id --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--dropbox-client-secret string Dropbox App Client Secret --drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off)
--dropbox-impersonate string Impersonate this user when using a business account. --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M)
-n, --dry-run Do a trial run with no permanent changes --dropbox-client-id string Dropbox App Client Id
--dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles --dropbox-client-secret string Dropbox App Client Secret
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info --dropbox-impersonate string Impersonate this user when using a business account.
--dump-headers Dump HTTP bodies - may contain sensitive info -n, --dry-run Do a trial run with no permanent changes
--exclude stringArray Exclude files matching pattern --dump DumpFlags List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--exclude-from stringArray Read exclude patterns from file --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--exclude-if-present string Exclude directories if filename is present --dump-headers Dump HTTP bodies - may contain sensitive info
--fast-list Use recursive list if available. Uses more memory but fewer transactions. --exclude stringArray Exclude files matching pattern
--files-from stringArray Read list of source-file names from file --exclude-from stringArray Read exclude patterns from file
-f, --filter stringArray Add a file-filtering rule --exclude-if-present string Exclude directories if filename is present
--filter-from stringArray Read filtering patterns from a file --fast-list Use recursive list if available. Uses more memory but fewer transactions.
--ftp-host string FTP host to connect to --files-from stringArray Read list of source-file names from file
--ftp-pass string FTP password -f, --filter stringArray Add a file-filtering rule
--ftp-port string FTP port, leave blank to use default (21) --filter-from stringArray Read filtering patterns from a file
--ftp-user string FTP username, leave blank for current username, $USER --ftp-host string FTP host to connect to
--gcs-bucket-acl string Access Control List for new buckets. --ftp-pass string FTP password
--gcs-client-id string Google Application Client Id --ftp-port string FTP port, leave blank to use default (21)
--gcs-client-secret string Google Application Client Secret --ftp-user string FTP username, leave blank for current username, $USER
--gcs-location string Location for the newly created buckets. --gcs-bucket-acl string Access Control List for new buckets.
--gcs-object-acl string Access Control List for new objects. --gcs-client-id string Google Application Client Id
--gcs-project-number string Project number. --gcs-client-secret string Google Application Client Secret
--gcs-service-account-file string Service Account Credentials JSON file path --gcs-location string Location for the newly created buckets.
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage. --gcs-object-acl string Access Control List for new objects.
--http-url string URL of http host to connect to --gcs-project-number string Project number.
--hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) --gcs-service-account-file string Service Account Credentials JSON file path
--hubic-client-id string Hubic Client Id --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--hubic-client-secret string Hubic Client Secret --http-url string URL of http host to connect to
--ignore-case Ignore case in filters (case insensitive) --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--ignore-checksum Skip post copy check of checksums. --hubic-client-id string Hubic Client Id
--ignore-errors delete even if there are I/O errors --hubic-client-secret string Hubic Client Secret
--ignore-existing Skip all files that exist on destination --hubic-no-chunk Don't chunk files during streaming upload.
--ignore-size Ignore size when skipping use mod-time or checksum. --ignore-case Ignore case in filters (case insensitive)
-I, --ignore-times Don't skip files that match size and time - transfer all files --ignore-checksum Skip post copy check of checksums.
--immutable Do not modify files. Fail if existing files have been modified. --ignore-errors delete even if there are I/O errors
--include stringArray Include files matching pattern --ignore-existing Skip all files that exist on destination
--include-from stringArray Read include patterns from file --ignore-size Ignore size when skipping use mod-time or checksum.
--jottacloud-hard-delete Delete files permanently rather than putting them into the trash. -I, --ignore-times Don't skip files that match size and time - transfer all files
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M) --immutable Do not modify files. Fail if existing files have been modified.
--jottacloud-mountpoint string The mountpoint to use. --include stringArray Include files matching pattern
--jottacloud-pass string Password. --include-from stringArray Read include patterns from file
--jottacloud-unlink Remove existing public link to file/folder with link command rather than creating. --jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
--jottacloud-user string User Name --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--local-no-check-updated Don't check to see if the files change during upload --jottacloud-mountpoint string The mountpoint to use.
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated) --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
--local-nounc string Disable UNC (long path names) conversion on Windows --jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fail's. (default 10M)
--log-file string Log everything to this file --jottacloud-user string User Name:
--log-format string Comma separated list of log format options (default "date,time") -l, --links Translate symlinks to/from regular files with a '.rclonelink' extension
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") --local-no-check-updated Don't check to see if the files change during upload
--low-level-retries int Number of low level retries to do. (default 10) --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) --local-nounc string Disable UNC (long path names) conversion on Windows
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000) --log-file string Log everything to this file
--max-delete int When synchronizing, limit the number of deletes (default -1) --log-format string Comma separated list of log format options (default "date,time")
--max-depth int If set limits the recursion depth to this. (default -1) --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) --low-level-retries int Number of low level retries to do. (default 10)
--max-transfer int Maximum size of data to transfer. (default off) --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--mega-debug Output more debug from Mega. --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--mega-hard-delete Delete files permanently rather than putting them into the trash. --max-delete int When synchronizing, limit the number of deletes (default -1)
--mega-pass string Password. --max-depth int If set limits the recursion depth to this. (default -1)
--mega-user string User name --max-size SizeSuffix Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--memprofile string Write memory profile to file --max-transfer SizeSuffix Maximum size of data to transfer. (default off)
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) --mega-debug Output more debug from Mega.
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) --mega-hard-delete Delete files permanently rather than putting them into the trash.
--modify-window duration Max time diff to be considered the same (default 1ns) --mega-pass string Password.
--no-check-certificate Do not verify the server SSL certificate. Insecure. --mega-user string User name
--no-gzip-encoding Don't set Accept-Encoding: gzip. --memprofile string Write memory profile to file
--no-traverse Obsolete - does nothing. --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--no-update-modtime Don't update destination mod-time if files identical. --min-size SizeSuffix Only transfer files bigger than this in k or suffix b|k|M|G (default off)
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only). --modify-window duration Max time diff to be considered the same (default 1ns)
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M) --no-check-certificate Do not verify the server SSL certificate. Insecure.
--onedrive-client-id string Microsoft App Client Id --no-gzip-encoding Don't set Accept-Encoding: gzip.
--onedrive-client-secret string Microsoft App Client Secret --no-traverse Don't traverse destination file system on copy.
--onedrive-drive-id string The ID of the drive to use --no-update-modtime Don't update destination mod-time if files identical.
--onedrive-drive-type string The type of the drive ( personal | business | documentLibrary ) -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-expose-onenote-files Set to make OneNote files show up in directory listings. --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--opendrive-password string Password. --onedrive-client-id string Microsoft App Client Id
--opendrive-username string Username --onedrive-client-secret string Microsoft App Client Secret
--pcloud-client-id string Pcloud App Client Id --onedrive-drive-id string The ID of the drive to use
--pcloud-client-secret string Pcloud App Client Secret --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
-P, --progress Show progress during transfer. --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
--qingstor-access-key-id string QingStor Access Key ID --opendrive-password string Password.
--qingstor-connection-retries int Number of connection retries. (default 3) --opendrive-username string Username
--qingstor-endpoint string Enter a endpoint URL to connection QingStor API. --pcloud-client-id string Pcloud App Client Id
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank. --pcloud-client-secret string Pcloud App Client Secret
--qingstor-secret-access-key string QingStor Secret Access Key (password) -P, --progress Show progress during transfer.
--qingstor-zone string Zone to connect to. --qingstor-access-key-id string QingStor Access Key ID
-q, --quiet Print as little stuff as possible --qingstor-chunk-size SizeSuffix Chunk size to use for uploading. (default 4M)
--rc Enable the remote control server. --qingstor-connection-retries int Number of connection retries. (default 3)
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") --qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate) --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
--rc-client-ca string Client certificate authority to verify clients with --qingstor-secret-access-key string QingStor Secret Access Key (password)
--rc-files string Path to local files to serve on the HTTP server. --qingstor-upload-concurrency int Concurrency for multipart uploads. (default 1)
--rc-htpasswd string htpasswd file - if not provided no authentication is done --qingstor-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M)
--rc-key string SSL PEM Private key --qingstor-zone string Zone to connect to.
--rc-max-header-bytes int Maximum size of request header (default 4096) -q, --quiet Print as little stuff as possible
--rc-no-auth Don't require auth for certain methods. --rc Enable the remote control server.
--rc-pass string Password for authentication. --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
--rc-realm string realm for authentication (default "rclone") --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--rc-serve Enable the serving of remote objects. --rc-client-ca string Client certificate authority to verify clients with
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) --rc-files string Path to local files to serve on the HTTP server.
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) --rc-htpasswd string htpasswd file - if not provided no authentication is done
--rc-user string User name for authentication. --rc-key string SSL PEM Private key
--retries int Retry operations this many times if they fail (default 3) --rc-max-header-bytes int Maximum size of request header (default 4096)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) --rc-no-auth Don't require auth for certain methods.
--s3-access-key-id string AWS Access Key ID. --rc-pass string Password for authentication.
--s3-acl string Canned ACL used when creating buckets and storing or copying objects. --rc-realm string realm for authentication (default "rclone")
--s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M) --rc-serve Enable the serving of remote objects.
--s3-disable-checksum Don't store MD5 checksum with object metadata --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--s3-endpoint string Endpoint for S3 API. --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). --rc-user string User name for authentication.
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true) --retries int Retry operations this many times if they fail (default 3)
--s3-location-constraint string Location constraint - must be set to match the Region. --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--s3-provider string Choose your S3 provider. --s3-access-key-id string AWS Access Key ID.
--s3-region string Region to connect to. --s3-acl string Canned ACL used when creating buckets and storing or copying objects.
--s3-secret-access-key string AWS Secret Access Key (password) --s3-bucket-acl string Canned ACL used when creating buckets.
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
--s3-session-token string An AWS session token --s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. --s3-endpoint string Endpoint for S3 API.
--s3-storage-class string The storage class to use when storing new objects in S3. --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2) --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--s3-v2-auth If true use v2 authentication. --s3-location-constraint string Location constraint - must be set to match the Region.
--sftp-ask-password Allow asking for SFTP password when needed. --s3-provider string Choose your S3 provider.
      --s3-region string   Region to connect to.
      --s3-secret-access-key string   AWS Secret Access Key (password)
      --s3-server-side-encryption string   The server-side encryption algorithm used when storing this object in S3.
      --s3-session-token string   An AWS session token
      --s3-sse-kms-key-id string   If using KMS ID you must provide the ARN of Key.
      --s3-storage-class string   The storage class to use when storing new objects in S3.
      --s3-upload-concurrency int   Concurrency for multipart uploads. (default 4)
      --s3-upload-cutoff SizeSuffix   Cutoff for switching to chunked upload (default 200M)
      --s3-v2-auth   If true use v2 authentication.
      --sftp-ask-password   Allow asking for SFTP password when needed.
      --sftp-disable-hashcheck   Disable the execution of SSH commands to determine if remote file hashing is available.
      --sftp-host string   SSH host to connect to
      --sftp-key-file string   Path to PEM-encoded private key file, leave blank or set key-use-agent to use ssh-agent.
      --sftp-key-file-pass string   The passphrase to decrypt the PEM-encoded private key file.
      --sftp-key-use-agent   When set forces the usage of the ssh-agent.
      --sftp-pass string   SSH password, leave blank to use ssh-agent.
      --sftp-path-override string   Override path used by SSH connection.
      --sftp-port string   SSH port, leave blank to use default (22)
      --sftp-set-modtime   Set the modified time on the remote if set. (default true)
      --sftp-use-insecure-cipher   Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
      --sftp-user string   SSH username, leave blank for current username, ncw
      --size-only   Skip based on size only, not mod-time or checksum
      --skip-links   Don't warn about skipped symlinks.
      --stats duration   Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
      --stats-file-name-length int   Max file name length in stats. 0 for no limit (default 45)
      --stats-log-level string   Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
      --stats-one-line   Make the stats fit on one line.
      --stats-unit string   Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
      --streaming-upload-cutoff SizeSuffix   Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
      --suffix string   Suffix for use with --backup-dir.
      --swift-application-credential-id string   Application Credential ID (OS_APPLICATION_CREDENTIAL_ID)
      --swift-application-credential-name string   Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME)
      --swift-application-credential-secret string   Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET)
      --swift-auth string   Authentication URL for server (OS_AUTH_URL).
      --swift-auth-token string   Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
      --swift-auth-version int   AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
      --swift-chunk-size SizeSuffix   Above this size files will be chunked into a _segments container. (default 5G)
      --swift-domain string   User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
      --swift-endpoint-type string   Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
      --swift-env-auth   Get swift credentials from environment variables in standard OpenStack form.
      --swift-key string   API key or password (OS_PASSWORD).
      --swift-no-chunk   Don't chunk files during streaming upload.
      --swift-region string   Region name - optional (OS_REGION_NAME)
      --swift-storage-policy string   The storage policy to use when creating a new container
      --swift-storage-url string   Storage URL - optional (OS_STORAGE_URL)
      --swift-tenant string   Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
      --swift-tenant-domain string   Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
      --swift-tenant-id string   Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
      --swift-user string   User name to log in (OS_USERNAME).
      --swift-user-id string   User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
      --syslog   Use Syslog for logging
      --syslog-facility string   Facility for syslog, eg KERN,USER,... (default "DAEMON")
      --timeout duration   IO idle timeout (default 5m0s)
      --tpslimit float   Limit HTTP transactions per second to this.
      --tpslimit-burst int   Max burst of transactions for --tpslimit. (default 1)
      --track-renames   When synchronizing, track file renames and do a server side move if possible
      --transfers int   Number of file transfers to run in parallel. (default 4)
      --union-remotes string   List of space separated remotes.
  -u, --update   Skip files that are newer on the destination.
      --use-cookies   Enable session cookiejar.
      --use-mmap   Use mmap allocator (see docs).
      --use-server-modtime   Use server modified time instead of object metadata
      --user-agent string   Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46")
  -v, --verbose count   Print lots more stuff (repeat for more)
      --webdav-bearer-token string   Bearer token instead of user/pass (eg a Macaroon)
      --webdav-pass string   Password.
      --webdav-url string   URL of http host to connect to
      --webdav-user string   User name
      --webdav-vendor string   Name of the Webdav site/service/software you are using
      --yandex-client-id string   Yandex Client Id
      --yandex-client-secret string   Yandex Client Secret
      --yandex-unlink   Remove existing public link to file/folder with link command rather than creating.
```
### SEE ALSO
* [rclone config](/commands/rclone_config/) - Enter an interactive configuration session.
###### Auto generated by spf13/cobra on 9-Feb-2019

View File

@@ -1,5 +1,5 @@
---
date: 2019-02-09T10:42:18Z
title: "rclone config file"
slug: rclone_config_file
url: /commands/rclone_config_file/
@@ -25,285 +25,303 @@ rclone config file [flags]

### Options inherited from parent commands

```
      --acd-auth-url string   Auth server URL.
      --acd-client-id string   Amazon Application Client ID.
      --acd-client-secret string   Amazon Application Client Secret.
      --acd-templink-threshold SizeSuffix   Files >= this size will be downloaded via their tempLink. (default 9G)
      --acd-token-url string   Token server url.
      --acd-upload-wait-per-gb Duration   Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
      --alias-remote string   Remote or path to alias.
      --ask-password   Allow prompt for password for encrypted configuration. (default true)
      --auto-confirm   If enabled, do not request console confirmation.
      --azureblob-access-tier string   Access tier of blob: hot, cool or archive.
      --azureblob-account string   Storage Account Name (leave blank to use connection string or SAS URL)
      --azureblob-chunk-size SizeSuffix   Upload chunk size (<= 100MB). (default 4M)
      --azureblob-endpoint string   Endpoint for the service
      --azureblob-key string   Storage Account Key (leave blank to use connection string or SAS URL)
      --azureblob-list-chunk int   Size of blob list. (default 5000)
      --azureblob-sas-url string   SAS URL for container level access only
      --azureblob-upload-cutoff SizeSuffix   Cutoff for switching to chunked upload (<= 256MB). (default 256M)
      --b2-account string   Account ID or Application Key ID
      --b2-chunk-size SizeSuffix   Upload chunk size. Must fit in memory. (default 96M)
      --b2-disable-checksum   Disable checksums for large (> upload cutoff) files
      --b2-endpoint string   Endpoint for the service.
      --b2-hard-delete   Permanently delete files on remote removal, otherwise hide files.
      --b2-key string   Application Key
      --b2-test-mode string   A flag string for X-Bz-Test-Mode header for debugging.
      --b2-upload-cutoff SizeSuffix   Cutoff for switching to chunked upload. (default 200M)
      --b2-versions   Include old versions in directory listings.
      --backup-dir string   Make backups into hierarchy based in DIR.
      --bind string   Local address to bind to for outgoing connections, IPv4, IPv6 or name.
      --box-client-id string   Box App Client Id.
      --box-client-secret string   Box App Client Secret
      --box-commit-retries int   Max number of times to try committing a multipart file. (default 100)
      --box-upload-cutoff SizeSuffix   Cutoff for switching to multipart upload (>= 50MB). (default 50M)
      --buffer-size SizeSuffix   In memory buffer size when reading files for each --transfer. (default 16M)
      --bwlimit BwTimetable   Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
      --cache-chunk-clean-interval Duration   How often should the cache perform cleanups of the chunk storage. (default 1m0s)
      --cache-chunk-no-memory   Disable the in-memory cache for storing chunks during streaming.
      --cache-chunk-path string   Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend")
      --cache-chunk-size SizeSuffix   The size of a chunk (partial file data). (default 5M)
      --cache-chunk-total-size SizeSuffix   The total size that the chunks can take up on the local disk. (default 10G)
      --cache-db-path string   Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend")
      --cache-db-purge   Clear all the cached data for this remote on start.
      --cache-db-wait-time Duration   How long to wait for the DB to be available - 0 is unlimited (default 1s)
      --cache-dir string   Directory rclone will use for caching. (default "$HOME/.cache/rclone")
      --cache-info-age Duration   How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
      --cache-plex-insecure string   Skip all certificate verifications when connecting to the Plex server
      --cache-plex-password string   The password of the Plex user
      --cache-plex-url string   The URL of the Plex server
      --cache-plex-username string   The username of the Plex user
      --cache-read-retries int   How many times to retry a read from a cache storage. (default 10)
      --cache-remote string   Remote to cache.
      --cache-rps int   Limits the number of requests per second to the source FS (-1 to disable) (default -1)
      --cache-tmp-upload-path string   Directory to keep temporary files until they are uploaded.
      --cache-tmp-wait-time Duration   How long should files be stored in local cache before being uploaded (default 15s)
      --cache-workers int   How many workers should run in parallel to download chunks. (default 4)
      --cache-writes   Cache file data on writes through the FS
      --checkers int   Number of checkers to run in parallel. (default 8)
  -c, --checksum   Skip based on checksum (if available) & size, not mod-time & size
      --config string   Config file. (default "/home/ncw/.rclone.conf")
      --contimeout duration   Connect timeout (default 1m0s)
  -L, --copy-links   Follow symlinks and copy the pointed to item.
      --cpuprofile string   Write cpu profile to file
      --crypt-directory-name-encryption   Option to either encrypt directory names or leave them intact. (default true)
      --crypt-filename-encryption string   How to encrypt the filenames. (default "standard")
      --crypt-password string   Password or pass phrase for encryption.
      --crypt-password2 string   Password or pass phrase for salt. Optional but recommended.
      --crypt-remote string   Remote to encrypt/decrypt.
      --crypt-show-mapping   For all files listed show how the names encrypt.
      --delete-after   When synchronizing, delete files on destination after transferring (default)
      --delete-before   When synchronizing, delete files on destination before transferring
      --delete-during   When synchronizing, delete files during transfer
      --delete-excluded   Delete files on dest excluded from sync
      --disable string   Disable a comma separated list of features. Use help to see a list.
      --drive-acknowledge-abuse   Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
      --drive-allow-import-name-change   Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
      --drive-alternate-export   Use alternate export URLs for google documents export.,
      --drive-auth-owner-only   Only consider files owned by the authenticated user.
      --drive-chunk-size SizeSuffix   Upload chunk size. Must a power of 2 >= 256k. (default 8M)
      --drive-client-id string   Google Application Client Id
      --drive-client-secret string   Google Application Client Secret
      --drive-export-formats string   Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
      --drive-formats string   Deprecated: see export_formats
      --drive-impersonate string   Impersonate this user when using a service account.
      --drive-import-formats string   Comma separated list of preferred formats for uploading Google docs.
      --drive-keep-revision-forever   Keep new head revision of each file forever.
      --drive-list-chunk int   Size of listing chunk 100-1000. 0 to disable. (default 1000)
      --drive-pacer-burst int   Number of API calls to allow without sleeping. (default 100)
      --drive-pacer-min-sleep Duration   Minimum time to sleep between API calls. (default 100ms)
      --drive-root-folder-id string   ID of the root folder
      --drive-scope string   Scope that rclone should use when requesting access from drive.
      --drive-service-account-credentials string   Service Account Credentials JSON blob
      --drive-service-account-file string   Service Account Credentials JSON file path
      --drive-shared-with-me   Only show files that are shared with me.
      --drive-skip-gdocs   Skip google documents in all listings.
      --drive-team-drive string   ID of the Team Drive
      --drive-trashed-only   Only show files that are in the trash.
      --drive-upload-cutoff SizeSuffix   Cutoff for switching to chunked upload (default 8M)
      --drive-use-created-date   Use file created date instead of modified date.,
      --drive-use-trash   Send files to the trash instead of deleting permanently. (default true)
      --drive-v2-download-min-size SizeSuffix   If Object's are greater, use drive v2 API to download. (default off)
      --dropbox-chunk-size SizeSuffix   Upload chunk size. (< 150M). (default 48M)
      --dropbox-client-id string   Dropbox App Client Id
      --dropbox-client-secret string   Dropbox App Client Secret
      --dropbox-impersonate string   Impersonate this user when using a business account.
  -n, --dry-run   Do a trial run with no permanent changes
      --dump DumpFlags   List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
      --dump-bodies   Dump HTTP headers and bodies - may contain sensitive info
      --dump-headers   Dump HTTP bodies - may contain sensitive info
      --exclude stringArray   Exclude files matching pattern
      --exclude-from stringArray   Read exclude patterns from file
      --exclude-if-present string   Exclude directories if filename is present
      --fast-list   Use recursive list if available. Uses more memory but fewer transactions.
      --files-from stringArray   Read list of source-file names from file
  -f, --filter stringArray   Add a file-filtering rule
      --filter-from stringArray   Read filtering patterns from a file
      --ftp-host string   FTP host to connect to
      --ftp-pass string   FTP password
      --ftp-port string   FTP port, leave blank to use default (21)
      --ftp-user string   FTP username, leave blank for current username, $USER
      --gcs-bucket-acl string   Access Control List for new buckets.
      --gcs-client-id string   Google Application Client Id
      --gcs-client-secret string   Google Application Client Secret
      --gcs-location string   Location for the newly created buckets.
      --gcs-object-acl string   Access Control List for new objects.
      --gcs-project-number string   Project number.
      --gcs-service-account-file string   Service Account Credentials JSON file path
      --gcs-storage-class string   The storage class to use when storing objects in Google Cloud Storage.
      --http-url string   URL of http host to connect to
      --hubic-chunk-size SizeSuffix   Above this size files will be chunked into a _segments container. (default 5G)
      --hubic-client-id string   Hubic Client Id
      --hubic-client-secret string   Hubic Client Secret
      --hubic-no-chunk   Don't chunk files during streaming upload.
      --ignore-case   Ignore case in filters (case insensitive)
      --ignore-checksum   Skip post copy check of checksums.
      --ignore-errors   delete even if there are I/O errors
      --ignore-existing   Skip all files that exist on destination
      --ignore-size   Ignore size when skipping use mod-time or checksum.
  -I, --ignore-times   Don't skip files that match size and time - transfer all files
      --immutable   Do not modify files. Fail if existing files have been modified.
      --include stringArray   Include files matching pattern
      --include-from stringArray   Read include patterns from file
      --jottacloud-hard-delete   Delete files permanently rather than putting them into the trash.
      --jottacloud-md5-memory-limit SizeSuffix   Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
      --jottacloud-mountpoint string   The mountpoint to use.
      --jottacloud-unlink   Remove existing public link to file/folder with link command rather than creating.
      --jottacloud-upload-resume-limit SizeSuffix   Files bigger than this can be resumed if the upload fail's. (default 10M)
      --jottacloud-user string   User Name:
  -l, --links   Translate symlinks to/from regular files with a '.rclonelink' extension
      --local-no-check-updated   Don't check to see if the files change during upload
      --local-no-unicode-normalization   Don't apply unicode normalization to paths and filenames (Deprecated)
      --local-nounc string   Disable UNC (long path names) conversion on Windows
      --log-file string   Log everything to this file
      --log-format string   Comma separated list of log format options (default "date,time")
      --log-level string   Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
      --low-level-retries int   Number of low level retries to do. (default 10)
      --max-age Duration   Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
      --max-backlog int   Maximum number of objects in sync or check backlog. (default 10000)
      --max-delete int   When synchronizing, limit the number of deletes (default -1)
      --max-depth int   If set limits the recursion depth to this. (default -1)
      --max-size SizeSuffix   Only transfer files smaller than this in k or suffix b|k|M|G (default off)
      --max-transfer SizeSuffix   Maximum size of data to transfer. (default off)
      --mega-debug   Output more debug from Mega.
      --mega-hard-delete   Delete files permanently rather than putting them into the trash.
      --mega-pass string   Password.
      --mega-user string   User name
      --memprofile string   Write memory profile to file
      --min-age Duration   Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
      --min-size SizeSuffix   Only transfer files bigger than this in k or suffix b|k|M|G (default off)
      --modify-window duration   Max time diff to be considered the same (default 1ns)
      --no-check-certificate   Do not verify the server SSL certificate. Insecure.
      --no-gzip-encoding   Don't set Accept-Encoding: gzip.
      --no-traverse   Don't traverse destination file system on copy.
      --no-update-modtime   Don't update destination mod-time if files identical.
  -x, --one-file-system   Don't cross filesystem boundaries (unix/macOS only).
--onedrive-expose-onenote-files Set to make OneNote files show up in directory listings. --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--opendrive-password string Password. --onedrive-client-id string Microsoft App Client Id
--opendrive-username string Username --onedrive-client-secret string Microsoft App Client Secret
--pcloud-client-id string Pcloud App Client Id --onedrive-drive-id string The ID of the drive to use
--pcloud-client-secret string Pcloud App Client Secret --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
-P, --progress Show progress during transfer. --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
--qingstor-access-key-id string QingStor Access Key ID --opendrive-password string Password.
--qingstor-connection-retries int Number of connection retries. (default 3) --opendrive-username string Username
--qingstor-endpoint string Enter a endpoint URL to connection QingStor API. --pcloud-client-id string Pcloud App Client Id
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank. --pcloud-client-secret string Pcloud App Client Secret
--qingstor-secret-access-key string QingStor Secret Access Key (password) -P, --progress Show progress during transfer.
--qingstor-zone string Zone to connect to. --qingstor-access-key-id string QingStor Access Key ID
-q, --quiet Print as little stuff as possible --qingstor-chunk-size SizeSuffix Chunk size to use for uploading. (default 4M)
--rc Enable the remote control server. --qingstor-connection-retries int Number of connection retries. (default 3)
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") --qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate) --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
--rc-client-ca string Client certificate authority to verify clients with --qingstor-secret-access-key string QingStor Secret Access Key (password)
--rc-files string Path to local files to serve on the HTTP server. --qingstor-upload-concurrency int Concurrency for multipart uploads. (default 1)
--rc-htpasswd string htpasswd file - if not provided no authentication is done --qingstor-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M)
--rc-key string SSL PEM Private key --qingstor-zone string Zone to connect to.
--rc-max-header-bytes int Maximum size of request header (default 4096) -q, --quiet Print as little stuff as possible
--rc-no-auth Don't require auth for certain methods. --rc Enable the remote control server.
--rc-pass string Password for authentication. --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
--rc-realm string realm for authentication (default "rclone") --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--rc-serve Enable the serving of remote objects. --rc-client-ca string Client certificate authority to verify clients with
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) --rc-files string Path to local files to serve on the HTTP server.
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) --rc-htpasswd string htpasswd file - if not provided no authentication is done
--rc-user string User name for authentication. --rc-key string SSL PEM Private key
--retries int Retry operations this many times if they fail (default 3) --rc-max-header-bytes int Maximum size of request header (default 4096)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) --rc-no-auth Don't require auth for certain methods.
--s3-access-key-id string AWS Access Key ID. --rc-pass string Password for authentication.
--s3-acl string Canned ACL used when creating buckets and storing or copying objects. --rc-realm string realm for authentication (default "rclone")
--s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M) --rc-serve Enable the serving of remote objects.
--s3-disable-checksum Don't store MD5 checksum with object metadata --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--s3-endpoint string Endpoint for S3 API. --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). --rc-user string User name for authentication.
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true) --retries int Retry operations this many times if they fail (default 3)
--s3-location-constraint string Location constraint - must be set to match the Region. --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--s3-provider string Choose your S3 provider. --s3-access-key-id string AWS Access Key ID.
--s3-region string Region to connect to. --s3-acl string Canned ACL used when creating buckets and storing or copying objects.
--s3-secret-access-key string AWS Secret Access Key (password) --s3-bucket-acl string Canned ACL used when creating buckets.
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
--s3-session-token string An AWS session token --s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. --s3-endpoint string Endpoint for S3 API.
--s3-storage-class string The storage class to use when storing new objects in S3. --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2) --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--s3-v2-auth If true use v2 authentication. --s3-location-constraint string Location constraint - must be set to match the Region.
--sftp-ask-password Allow asking for SFTP password when needed. --s3-provider string Choose your S3 provider.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. --s3-region string Region to connect to.
--sftp-host string SSH host to connect to --s3-secret-access-key string AWS Secret Access Key (password)
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent. --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--sftp-pass string SSH password, leave blank to use ssh-agent. --s3-session-token string An AWS session token
--sftp-path-override string Override path used by SSH connection. --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--sftp-port string SSH port, leave blank to use default (22) --s3-storage-class string The storage class to use when storing new objects in S3.
--sftp-set-modtime Set the modified time on the remote if set. (default true) --s3-upload-concurrency int Concurrency for multipart uploads. (default 4)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker. --s3-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M)
--sftp-user string SSH username, leave blank for current username, ncw --s3-v2-auth If true use v2 authentication.
--size-only Skip based on size only, not mod-time or checksum --sftp-ask-password Allow asking for SFTP password when needed.
--skip-links Don't warn about skipped symlinks. --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) --sftp-host string SSH host to connect to
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40) --sftp-key-file string Path to PEM-encoded private key file, leave blank or set key-use-agent to use ssh-agent.
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") --sftp-key-file-pass string The passphrase to decrypt the PEM-encoded private key file.
--stats-one-line Make the stats fit on one line. --sftp-key-use-agent When set forces the usage of the ssh-agent.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") --sftp-pass string SSH password, leave blank to use ssh-agent.
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) --sftp-path-override string Override path used by SSH connection.
--suffix string Suffix for use with --backup-dir. --sftp-port string SSH port, leave blank to use default (22)
--swift-auth string Authentication URL for server (OS_AUTH_URL). --sftp-set-modtime Set the modified time on the remote if set. (default true)
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) --sftp-user string SSH username, leave blank for current username, ncw
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) --size-only Skip based on size only, not mod-time or checksum
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) --skip-links Don't warn about skipped symlinks.
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form. --stats-file-name-length int Max file name length in stats. 0 for no limit (default 45)
--swift-key string API key or password (OS_PASSWORD). --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--swift-region string Region name - optional (OS_REGION_NAME) --stats-one-line Make the stats fit on one line.
--swift-storage-policy string The storage policy to use when creating a new container --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL) --streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) --suffix string Suffix for use with --backup-dir.
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) --swift-application-credential-id string Application Credential ID (OS_APPLICATION_CREDENTIAL_ID)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) --swift-application-credential-name string Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME)
--swift-user string User name to log in (OS_USERNAME). --swift-application-credential-secret string Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET)
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). --swift-auth string Authentication URL for server (OS_AUTH_URL).
--syslog Use Syslog for logging --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--timeout duration IO idle timeout (default 5m0s) --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--tpslimit float Limit HTTP transactions per second to this. --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--track-renames When synchronizing, track file renames and do a server side move if possible --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--transfers int Number of file transfers to run in parallel. (default 4) --swift-key string API key or password (OS_PASSWORD).
--union-remotes string List of space separated remotes. --swift-no-chunk Don't chunk files during streaming upload.
-u, --update Skip files that are newer on the destination. --swift-region string Region name - optional (OS_REGION_NAME)
--use-server-modtime Use server modified time instead of object metadata --swift-storage-policy string The storage policy to use when creating a new container
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.45") --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
-v, --verbose count Print lots more stuff (repeat for more) --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon) --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--webdav-pass string Password. --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--webdav-url string URL of http host to connect to --swift-user string User name to log in (OS_USERNAME).
--webdav-user string User name --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--webdav-vendor string Name of the Webdav site/service/software you are using --syslog Use Syslog for logging
--yandex-client-id string Yandex Client Id --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--yandex-client-secret string Yandex Client Secret --timeout duration IO idle timeout (default 5m0s)
--yandex-unlink Remove existing public link to file/folder with link command rather than creating. --tpslimit float Limit HTTP transactions per second to this.
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
--union-remotes string List of space separated remotes.
-u, --update Skip files that are newer on the destination.
--use-cookies Enable session cookiejar.
--use-mmap Use mmap allocator (see docs).
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
--yandex-unlink Remove existing public link to file/folder with link command rather than creating.
```
### SEE ALSO
* [rclone config](/commands/rclone_config/) - Enter an interactive configuration session.
###### Auto generated by spf13/cobra on 9-Feb-2019
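The inherited flags above combine freely on any rclone command line. As an illustrative sketch only (the remote name `remote:` and the paths are placeholders, not taken from these docs), a copy restricted to recent, mid-sized files with progress reporting might look like:

```sh
# Illustrative only: "remote:" stands for a remote you have already configured.
# --max-age takes a Duration (suffix ms|s|m|h|d|w|M|y); --min-size and
# --max-size take a SizeSuffix (suffix b|k|M|G), per the flag list above.
rclone copy remote:backup /srv/restore \
    --max-age 7d --min-size 10k --max-size 1G \
    --transfers 8 --progress --stats-one-line
```

Every flag shown appears in the list above; raising `--transfers` increases parallelism at the cost of memory, since each transfer gets its own `--buffer-size` buffer.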
@ -1,5 +1,5 @@
---
date: 2019-02-09T10:42:18Z
title: "rclone config password"
slug: rclone_config_password
url: /commands/rclone_config_password/
@ -32,285 +32,303 @@ rclone config password <name> [<key> <value>]+ [flags]
### Options inherited from parent commands
```
      --acd-auth-url string  Auth server URL.
      --acd-client-id string  Amazon Application Client ID.
      --acd-client-secret string  Amazon Application Client Secret.
      --acd-templink-threshold SizeSuffix  Files >= this size will be downloaded via their tempLink. (default 9G)
      --acd-token-url string  Token server url.
      --acd-upload-wait-per-gb Duration  Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
      --alias-remote string  Remote or path to alias.
      --ask-password  Allow prompt for password for encrypted configuration. (default true)
      --auto-confirm  If enabled, do not request console confirmation.
      --azureblob-access-tier string  Access tier of blob: hot, cool or archive.
      --azureblob-account string  Storage Account Name (leave blank to use connection string or SAS URL)
      --azureblob-chunk-size SizeSuffix  Upload chunk size (<= 100MB). (default 4M)
      --azureblob-endpoint string  Endpoint for the service
      --azureblob-key string  Storage Account Key (leave blank to use connection string or SAS URL)
      --azureblob-list-chunk int  Size of blob list. (default 5000)
      --azureblob-sas-url string  SAS URL for container level access only
      --azureblob-upload-cutoff SizeSuffix  Cutoff for switching to chunked upload (<= 256MB). (default 256M)
      --b2-account string  Account ID or Application Key ID
      --b2-chunk-size SizeSuffix  Upload chunk size. Must fit in memory. (default 96M)
      --b2-disable-checksum  Disable checksums for large (> upload cutoff) files
      --b2-endpoint string  Endpoint for the service.
      --b2-hard-delete  Permanently delete files on remote removal, otherwise hide files.
      --b2-key string  Application Key
      --b2-test-mode string  A flag string for X-Bz-Test-Mode header for debugging.
      --b2-upload-cutoff SizeSuffix  Cutoff for switching to chunked upload. (default 200M)
      --b2-versions  Include old versions in directory listings.
      --backup-dir string  Make backups into hierarchy based in DIR.
      --bind string  Local address to bind to for outgoing connections, IPv4, IPv6 or name.
      --box-client-id string  Box App Client Id.
      --box-client-secret string  Box App Client Secret
      --box-commit-retries int  Max number of times to try committing a multipart file. (default 100)
      --box-upload-cutoff SizeSuffix  Cutoff for switching to multipart upload (>= 50MB). (default 50M)
      --buffer-size SizeSuffix  In memory buffer size when reading files for each --transfer. (default 16M)
      --bwlimit BwTimetable  Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
      --cache-chunk-clean-interval Duration  How often should the cache perform cleanups of the chunk storage. (default 1m0s)
      --cache-chunk-no-memory  Disable the in-memory cache for storing chunks during streaming.
      --cache-chunk-path string  Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend")
      --cache-chunk-size SizeSuffix  The size of a chunk (partial file data). (default 5M)
      --cache-chunk-total-size SizeSuffix  The total size that the chunks can take up on the local disk. (default 10G)
      --cache-db-path string  Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend")
      --cache-db-purge  Clear all the cached data for this remote on start.
      --cache-db-wait-time Duration  How long to wait for the DB to be available - 0 is unlimited (default 1s)
      --cache-dir string  Directory rclone will use for caching. (default "$HOME/.cache/rclone")
      --cache-info-age Duration  How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
      --cache-plex-insecure string  Skip all certificate verifications when connecting to the Plex server
      --cache-plex-password string  The password of the Plex user
      --cache-plex-url string  The URL of the Plex server
      --cache-plex-username string  The username of the Plex user
      --cache-read-retries int  How many times to retry a read from a cache storage. (default 10)
      --cache-remote string  Remote to cache.
      --cache-rps int  Limits the number of requests per second to the source FS (-1 to disable) (default -1)
      --cache-tmp-upload-path string  Directory to keep temporary files until they are uploaded.
      --cache-tmp-wait-time Duration  How long should files be stored in local cache before being uploaded (default 15s)
      --cache-workers int  How many workers should run in parallel to download chunks. (default 4)
      --cache-writes  Cache file data on writes through the FS
      --checkers int  Number of checkers to run in parallel. (default 8)
  -c, --checksum  Skip based on checksum (if available) & size, not mod-time & size
      --config string  Config file. (default "/home/ncw/.rclone.conf")
      --contimeout duration  Connect timeout (default 1m0s)
  -L, --copy-links  Follow symlinks and copy the pointed to item.
      --cpuprofile string  Write cpu profile to file
      --crypt-directory-name-encryption  Option to either encrypt directory names or leave them intact. (default true)
      --crypt-filename-encryption string  How to encrypt the filenames. (default "standard")
      --crypt-password string  Password or pass phrase for encryption.
      --crypt-password2 string  Password or pass phrase for salt. Optional but recommended.
      --crypt-remote string  Remote to encrypt/decrypt.
      --crypt-show-mapping  For all files listed show how the names encrypt.
      --delete-after  When synchronizing, delete files on destination after transferring (default)
      --delete-before  When synchronizing, delete files on destination before transferring
      --delete-during  When synchronizing, delete files during transfer
      --delete-excluded  Delete files on dest excluded from sync
      --disable string  Disable a comma separated list of features. Use help to see a list.
      --drive-acknowledge-abuse  Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
      --drive-allow-import-name-change  Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
      --drive-alternate-export  Use alternate export URLs for google documents export.
      --drive-auth-owner-only  Only consider files owned by the authenticated user.
      --drive-chunk-size SizeSuffix  Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
      --drive-client-id string  Google Application Client Id
      --drive-client-secret string  Google Application Client Secret
      --drive-export-formats string  Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
      --drive-formats string  Deprecated: see export_formats
      --drive-impersonate string  Impersonate this user when using a service account.
      --drive-import-formats string  Comma separated list of preferred formats for uploading Google docs.
      --drive-keep-revision-forever  Keep new head revision of each file forever.
      --drive-list-chunk int  Size of listing chunk 100-1000. 0 to disable. (default 1000)
      --drive-pacer-burst int  Number of API calls to allow without sleeping. (default 100)
      --drive-pacer-min-sleep Duration  Minimum time to sleep between API calls. (default 100ms)
      --drive-root-folder-id string  ID of the root folder
      --drive-scope string  Scope that rclone should use when requesting access from drive.
      --drive-service-account-credentials string  Service Account Credentials JSON blob
      --drive-service-account-file string  Service Account Credentials JSON file path
      --drive-shared-with-me  Only show files that are shared with me.
      --drive-skip-gdocs  Skip google documents in all listings.
      --drive-team-drive string  ID of the Team Drive
      --drive-trashed-only  Only show files that are in the trash.
      --drive-upload-cutoff SizeSuffix  Cutoff for switching to chunked upload (default 8M)
--dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M) --drive-use-created-date Use file created date instead of modified date.,
--dropbox-client-id string Dropbox App Client Id --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--dropbox-client-secret string Dropbox App Client Secret --drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off)
--dropbox-impersonate string Impersonate this user when using a business account. --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M)
-n, --dry-run Do a trial run with no permanent changes --dropbox-client-id string Dropbox App Client Id
--dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles --dropbox-client-secret string Dropbox App Client Secret
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info --dropbox-impersonate string Impersonate this user when using a business account.
--dump-headers Dump HTTP bodies - may contain sensitive info -n, --dry-run Do a trial run with no permanent changes
--exclude stringArray Exclude files matching pattern --dump DumpFlags List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--exclude-from stringArray Read exclude patterns from file --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--exclude-if-present string Exclude directories if filename is present --dump-headers Dump HTTP bodies - may contain sensitive info
--fast-list Use recursive list if available. Uses more memory but fewer transactions. --exclude stringArray Exclude files matching pattern
--files-from stringArray Read list of source-file names from file --exclude-from stringArray Read exclude patterns from file
-f, --filter stringArray Add a file-filtering rule --exclude-if-present string Exclude directories if filename is present
--filter-from stringArray Read filtering patterns from a file --fast-list Use recursive list if available. Uses more memory but fewer transactions.
--ftp-host string FTP host to connect to --files-from stringArray Read list of source-file names from file
--ftp-pass string FTP password -f, --filter stringArray Add a file-filtering rule
--ftp-port string FTP port, leave blank to use default (21) --filter-from stringArray Read filtering patterns from a file
--ftp-user string FTP username, leave blank for current username, $USER --ftp-host string FTP host to connect to
--gcs-bucket-acl string Access Control List for new buckets. --ftp-pass string FTP password
--gcs-client-id string Google Application Client Id --ftp-port string FTP port, leave blank to use default (21)
--gcs-client-secret string Google Application Client Secret --ftp-user string FTP username, leave blank for current username, $USER
--gcs-location string Location for the newly created buckets. --gcs-bucket-acl string Access Control List for new buckets.
--gcs-object-acl string Access Control List for new objects. --gcs-client-id string Google Application Client Id
--gcs-project-number string Project number. --gcs-client-secret string Google Application Client Secret
--gcs-service-account-file string Service Account Credentials JSON file path --gcs-location string Location for the newly created buckets.
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage. --gcs-object-acl string Access Control List for new objects.
--http-url string URL of http host to connect to --gcs-project-number string Project number.
--hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) --gcs-service-account-file string Service Account Credentials JSON file path
--hubic-client-id string Hubic Client Id --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--hubic-client-secret string Hubic Client Secret --http-url string URL of http host to connect to
--ignore-case Ignore case in filters (case insensitive) --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--ignore-checksum Skip post copy check of checksums. --hubic-client-id string Hubic Client Id
--ignore-errors delete even if there are I/O errors --hubic-client-secret string Hubic Client Secret
--ignore-existing Skip all files that exist on destination --hubic-no-chunk Don't chunk files during streaming upload.
--ignore-size Ignore size when skipping use mod-time or checksum. --ignore-case Ignore case in filters (case insensitive)
-I, --ignore-times Don't skip files that match size and time - transfer all files --ignore-checksum Skip post copy check of checksums.
--immutable Do not modify files. Fail if existing files have been modified. --ignore-errors delete even if there are I/O errors
--include stringArray Include files matching pattern --ignore-existing Skip all files that exist on destination
--include-from stringArray Read include patterns from file --ignore-size Ignore size when skipping use mod-time or checksum.
--jottacloud-hard-delete Delete files permanently rather than putting them into the trash. -I, --ignore-times Don't skip files that match size and time - transfer all files
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M) --immutable Do not modify files. Fail if existing files have been modified.
--jottacloud-mountpoint string The mountpoint to use. --include stringArray Include files matching pattern
--jottacloud-pass string Password. --include-from stringArray Read include patterns from file
--jottacloud-unlink Remove existing public link to file/folder with link command rather than creating. --jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
--jottacloud-user string User Name --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--local-no-check-updated Don't check to see if the files change during upload --jottacloud-mountpoint string The mountpoint to use.
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated) --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
--local-nounc string Disable UNC (long path names) conversion on Windows --jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fail's. (default 10M)
--log-file string Log everything to this file --jottacloud-user string User Name:
--log-format string Comma separated list of log format options (default "date,time") -l, --links Translate symlinks to/from regular files with a '.rclonelink' extension
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") --local-no-check-updated Don't check to see if the files change during upload
--low-level-retries int Number of low level retries to do. (default 10) --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) --local-nounc string Disable UNC (long path names) conversion on Windows
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000) --log-file string Log everything to this file
--max-delete int When synchronizing, limit the number of deletes (default -1) --log-format string Comma separated list of log format options (default "date,time")
--max-depth int If set limits the recursion depth to this. (default -1) --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) --low-level-retries int Number of low level retries to do. (default 10)
--max-transfer int Maximum size of data to transfer. (default off) --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--mega-debug Output more debug from Mega. --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--mega-hard-delete Delete files permanently rather than putting them into the trash. --max-delete int When synchronizing, limit the number of deletes (default -1)
--mega-pass string Password. --max-depth int If set limits the recursion depth to this. (default -1)
--mega-user string User name --max-size SizeSuffix Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--memprofile string Write memory profile to file --max-transfer SizeSuffix Maximum size of data to transfer. (default off)
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) --mega-debug Output more debug from Mega.
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) --mega-hard-delete Delete files permanently rather than putting them into the trash.
--modify-window duration Max time diff to be considered the same (default 1ns) --mega-pass string Password.
--no-check-certificate Do not verify the server SSL certificate. Insecure. --mega-user string User name
--no-gzip-encoding Don't set Accept-Encoding: gzip. --memprofile string Write memory profile to file
--no-traverse Obsolete - does nothing. --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--no-update-modtime Don't update destination mod-time if files identical. --min-size SizeSuffix Only transfer files bigger than this in k or suffix b|k|M|G (default off)
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only). --modify-window duration Max time diff to be considered the same (default 1ns)
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M) --no-check-certificate Do not verify the server SSL certificate. Insecure.
--onedrive-client-id string Microsoft App Client Id --no-gzip-encoding Don't set Accept-Encoding: gzip.
--onedrive-client-secret string Microsoft App Client Secret --no-traverse Don't traverse destination file system on copy.
--onedrive-drive-id string The ID of the drive to use --no-update-modtime Don't update destination mod-time if files identical.
--onedrive-drive-type string The type of the drive ( personal | business | documentLibrary ) -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-expose-onenote-files Set to make OneNote files show up in directory listings. --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--opendrive-password string Password. --onedrive-client-id string Microsoft App Client Id
--opendrive-username string Username --onedrive-client-secret string Microsoft App Client Secret
--pcloud-client-id string Pcloud App Client Id --onedrive-drive-id string The ID of the drive to use
--pcloud-client-secret string Pcloud App Client Secret --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
-P, --progress Show progress during transfer. --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
--qingstor-access-key-id string QingStor Access Key ID --opendrive-password string Password.
--qingstor-connection-retries int Number of connection retries. (default 3) --opendrive-username string Username
--qingstor-endpoint string Enter a endpoint URL to connection QingStor API. --pcloud-client-id string Pcloud App Client Id
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank. --pcloud-client-secret string Pcloud App Client Secret
--qingstor-secret-access-key string QingStor Secret Access Key (password) -P, --progress Show progress during transfer.
--qingstor-zone string Zone to connect to. --qingstor-access-key-id string QingStor Access Key ID
-q, --quiet Print as little stuff as possible --qingstor-chunk-size SizeSuffix Chunk size to use for uploading. (default 4M)
--rc Enable the remote control server. --qingstor-connection-retries int Number of connection retries. (default 3)
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") --qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate) --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
--rc-client-ca string Client certificate authority to verify clients with --qingstor-secret-access-key string QingStor Secret Access Key (password)
--rc-files string Path to local files to serve on the HTTP server. --qingstor-upload-concurrency int Concurrency for multipart uploads. (default 1)
--rc-htpasswd string htpasswd file - if not provided no authentication is done --qingstor-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M)
--rc-key string SSL PEM Private key --qingstor-zone string Zone to connect to.
--rc-max-header-bytes int Maximum size of request header (default 4096) -q, --quiet Print as little stuff as possible
--rc-no-auth Don't require auth for certain methods. --rc Enable the remote control server.
--rc-pass string Password for authentication. --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
--rc-realm string realm for authentication (default "rclone") --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--rc-serve Enable the serving of remote objects. --rc-client-ca string Client certificate authority to verify clients with
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) --rc-files string Path to local files to serve on the HTTP server.
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) --rc-htpasswd string htpasswd file - if not provided no authentication is done
--rc-user string User name for authentication. --rc-key string SSL PEM Private key
--retries int Retry operations this many times if they fail (default 3) --rc-max-header-bytes int Maximum size of request header (default 4096)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) --rc-no-auth Don't require auth for certain methods.
--s3-access-key-id string AWS Access Key ID. --rc-pass string Password for authentication.
--s3-acl string Canned ACL used when creating buckets and storing or copying objects. --rc-realm string realm for authentication (default "rclone")
--s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M) --rc-serve Enable the serving of remote objects.
--s3-disable-checksum Don't store MD5 checksum with object metadata --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--s3-endpoint string Endpoint for S3 API. --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). --rc-user string User name for authentication.
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true) --retries int Retry operations this many times if they fail (default 3)
--s3-location-constraint string Location constraint - must be set to match the Region. --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--s3-provider string Choose your S3 provider. --s3-access-key-id string AWS Access Key ID.
--s3-region string Region to connect to. --s3-acl string Canned ACL used when creating buckets and storing or copying objects.
--s3-secret-access-key string AWS Secret Access Key (password) --s3-bucket-acl string Canned ACL used when creating buckets.
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
--s3-session-token string An AWS session token --s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. --s3-endpoint string Endpoint for S3 API.
--s3-storage-class string The storage class to use when storing new objects in S3. --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2) --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--s3-v2-auth If true use v2 authentication. --s3-location-constraint string Location constraint - must be set to match the Region.
--sftp-ask-password Allow asking for SFTP password when needed. --s3-provider string Choose your S3 provider.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. --s3-region string Region to connect to.
--sftp-host string SSH host to connect to --s3-secret-access-key string AWS Secret Access Key (password)
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent. --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--sftp-pass string SSH password, leave blank to use ssh-agent. --s3-session-token string An AWS session token
--sftp-path-override string Override path used by SSH connection. --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--sftp-port string SSH port, leave blank to use default (22) --s3-storage-class string The storage class to use when storing new objects in S3.
--sftp-set-modtime Set the modified time on the remote if set. (default true) --s3-upload-concurrency int Concurrency for multipart uploads. (default 4)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker. --s3-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M)
--sftp-user string SSH username, leave blank for current username, ncw --s3-v2-auth If true use v2 authentication.
--size-only Skip based on size only, not mod-time or checksum --sftp-ask-password Allow asking for SFTP password when needed.
--skip-links Don't warn about skipped symlinks. --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) --sftp-host string SSH host to connect to
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40) --sftp-key-file string Path to PEM-encoded private key file, leave blank or set key-use-agent to use ssh-agent.
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") --sftp-key-file-pass string The passphrase to decrypt the PEM-encoded private key file.
--stats-one-line Make the stats fit on one line. --sftp-key-use-agent When set forces the usage of the ssh-agent.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") --sftp-pass string SSH password, leave blank to use ssh-agent.
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) --sftp-path-override string Override path used by SSH connection.
--suffix string Suffix for use with --backup-dir. --sftp-port string SSH port, leave blank to use default (22)
--swift-auth string Authentication URL for server (OS_AUTH_URL). --sftp-set-modtime Set the modified time on the remote if set. (default true)
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) --sftp-user string SSH username, leave blank for current username, ncw
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) --size-only Skip based on size only, not mod-time or checksum
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) --skip-links Don't warn about skipped symlinks.
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form. --stats-file-name-length int Max file name length in stats. 0 for no limit (default 45)
--swift-key string API key or password (OS_PASSWORD). --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--swift-region string Region name - optional (OS_REGION_NAME) --stats-one-line Make the stats fit on one line.
--swift-storage-policy string The storage policy to use when creating a new container --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL) --streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) --suffix string Suffix for use with --backup-dir.
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) --swift-application-credential-id string Application Credential ID (OS_APPLICATION_CREDENTIAL_ID)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) --swift-application-credential-name string Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME)
--swift-user string User name to log in (OS_USERNAME). --swift-application-credential-secret string Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET)
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). --swift-auth string Authentication URL for server (OS_AUTH_URL).
--syslog Use Syslog for logging --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--timeout duration IO idle timeout (default 5m0s) --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--tpslimit float Limit HTTP transactions per second to this. --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--track-renames When synchronizing, track file renames and do a server side move if possible --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--transfers int Number of file transfers to run in parallel. (default 4) --swift-key string API key or password (OS_PASSWORD).
--union-remotes string List of space separated remotes. --swift-no-chunk Don't chunk files during streaming upload.
-u, --update Skip files that are newer on the destination. --swift-region string Region name - optional (OS_REGION_NAME)
--use-server-modtime Use server modified time instead of object metadata --swift-storage-policy string The storage policy to use when creating a new container
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.45") --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
-v, --verbose count Print lots more stuff (repeat for more) --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon) --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--webdav-pass string Password. --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--webdav-url string URL of http host to connect to --swift-user string User name to log in (OS_USERNAME).
--webdav-user string User name --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--webdav-vendor string Name of the Webdav site/service/software you are using --syslog Use Syslog for logging
--yandex-client-id string Yandex Client Id --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--yandex-client-secret string Yandex Client Secret --timeout duration IO idle timeout (default 5m0s)
--yandex-unlink Remove existing public link to file/folder with link command rather than creating. --tpslimit float Limit HTTP transactions per second to this.
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
--union-remotes string List of space separated remotes.
-u, --update Skip files that are newer on the destination.
--use-cookies Enable session cookiejar.
--use-mmap Use mmap allocator (see docs).
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
--yandex-unlink Remove existing public link to file/folder with link command rather than creating.
```
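The backend-specific flags above (for example the `--sftp-*` group) can also be persisted as keys in `rclone.conf`, by dropping the backend prefix and replacing dashes with underscores. As a hedged sketch (the remote name, host, and paths below are invented for illustration, not taken from this commit), an entry using the SFTP key-file options might look like:

```ini
# Hypothetical rclone.conf entry -- remote name, host and paths are examples only.
# Each key corresponds to an --sftp-* flag with the "sftp-" prefix dropped and
# dashes replaced by underscores (e.g. --sftp-key-file -> key_file).
[myserver]
type = sftp
host = sftp.example.com
user = backup
key_file = /home/backup/.ssh/id_rsa
key_use_agent = false
```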
### SEE ALSO
* [rclone config](/commands/rclone_config/) - Enter an interactive configuration session.
###### Auto generated by spf13/cobra on 9-Feb-2019

@@ -1,5 +1,5 @@
---
date: 2019-02-09T10:42:18Z
title: "rclone config providers"
slug: rclone_config_providers
url: /commands/rclone_config_providers/
@@ -25,285 +25,303 @@ rclone config providers [flags]
### Options inherited from parent commands
```
--acd-auth-url string Auth server URL.
--acd-client-id string Amazon Application Client ID.
--acd-client-secret string Amazon Application Client Secret.
--acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-token-url string Token server url.
--acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--alias-remote string Remote or path to alias.
--ask-password Allow prompt for password for encrypted configuration. (default true)
--auto-confirm If enabled, do not request console confirmation.
--azureblob-access-tier string Access tier of blob: hot, cool or archive.
--azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
--azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M)
--azureblob-endpoint string Endpoint for the service
--azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
--azureblob-list-chunk int Size of blob list. (default 5000)
--azureblob-sas-url string SAS URL for container level access only
--azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M)
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--b2-disable-checksum Disable checksums for large (> upload cutoff) files
--b2-endpoint string Endpoint for the service.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-key string Application Key
--b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
--b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M)
--b2-versions Include old versions in directory listings.
--backup-dir string Make backups into hierarchy based in DIR.
--bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
--box-client-id string Box App Client Id.
--box-client-secret string Box App Client Secret
--box-commit-retries int Max number of times to try committing a multipart file. (default 100)
--buffer-size int In memory buffer size when reading files for each --transfer. (default 16M) --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable. --buffer-size SizeSuffix In memory buffer size when reading files for each --transfer. (default 16M)
--cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s) --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming. --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
--cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend") --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
--cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M) --cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend")
--cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G) --cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M)
--cache-db-path string Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend") --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G)
--cache-db-purge Clear all the cached data for this remote on start. --cache-db-path string Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend")
--cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s) --cache-db-purge Clear all the cached data for this remote on start.
--cache-dir string Directory rclone will use for caching. (default "$HOME/.cache/rclone") --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s) --cache-dir string Directory rclone will use for caching. (default "$HOME/.cache/rclone")
--cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
--cache-plex-password string The password of the Plex user --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server
--cache-plex-url string The URL of the Plex server --cache-plex-password string The password of the Plex user
--cache-plex-username string The username of the Plex user --cache-plex-url string The URL of the Plex server
--cache-read-retries int How many times to retry a read from a cache storage. (default 10) --cache-plex-username string The username of the Plex user
--cache-remote string Remote to cache. --cache-read-retries int How many times to retry a read from a cache storage. (default 10)
--cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1) --cache-remote string Remote to cache.
--cache-tmp-upload-path string Directory to keep temporary files until they are uploaded. --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1)
--cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s) --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded.
--cache-workers int How many workers should run in parallel to download chunks. (default 4) --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
--cache-writes Cache file data on writes through the FS --cache-workers int How many workers should run in parallel to download chunks. (default 4)
--checkers int Number of checkers to run in parallel. (default 8) --cache-writes Cache file data on writes through the FS
-c, --checksum Skip based on checksum & size, not mod-time & size --checkers int Number of checkers to run in parallel. (default 8)
--config string Config file. (default "/home/ncw/.rclone.conf") -c, --checksum Skip based on checksum (if available) & size, not mod-time & size
--contimeout duration Connect timeout (default 1m0s) --config string Config file. (default "/home/ncw/.rclone.conf")
-L, --copy-links Follow symlinks and copy the pointed to item. --contimeout duration Connect timeout (default 1m0s)
--cpuprofile string Write cpu profile to file -L, --copy-links Follow symlinks and copy the pointed to item.
--crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true) --cpuprofile string Write cpu profile to file
--crypt-filename-encryption string How to encrypt the filenames. (default "standard") --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--crypt-password string Password or pass phrase for encryption. --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--crypt-password2 string Password or pass phrase for salt. Optional but recommended. --crypt-password string Password or pass phrase for encryption.
--crypt-remote string Remote to encrypt/decrypt. --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
--crypt-show-mapping For all files listed show how the names encrypt. --crypt-remote string Remote to encrypt/decrypt.
--delete-after When synchronizing, delete files on destination after transferring (default) --crypt-show-mapping For all files listed show how the names encrypt.
--delete-before When synchronizing, delete files on destination before transferring --delete-after When synchronizing, delete files on destination after transferring (default)
--delete-during When synchronizing, delete files during transfer --delete-before When synchronizing, delete files on destination before transferring
--delete-excluded Delete files on dest excluded from sync --delete-during When synchronizing, delete files during transfer
--disable string Disable a comma separated list of features. Use help to see a list. --delete-excluded Delete files on dest excluded from sync
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded. --disable string Disable a comma separated list of features. Use help to see a list.
--drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time. --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--drive-alternate-export Use alternate export URLs for google documents export., --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
--drive-auth-owner-only Only consider files owned by the authenticated user. --drive-alternate-export Use alternate export URLs for google documents export.,
--drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M) --drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-client-id string Google Application Client Id --drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M)
--drive-client-secret string Google Application Client Secret --drive-client-id string Google Application Client Id
--drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") --drive-client-secret string Google Application Client Secret
--drive-formats string Deprecated: see export_formats --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-impersonate string Impersonate this user when using a service account. --drive-formats string Deprecated: see export_formats
--drive-import-formats string Comma separated list of preferred formats for uploading Google docs. --drive-impersonate string Impersonate this user when using a service account.
--drive-keep-revision-forever Keep new head revision of each file forever. --drive-import-formats string Comma separated list of preferred formats for uploading Google docs.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000) --drive-keep-revision-forever Keep new head revision of each file forever.
--drive-root-folder-id string ID of the root folder --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-scope string Scope that rclone should use when requesting access from drive. --drive-pacer-burst int Number of API calls to allow without sleeping. (default 100)
--drive-service-account-credentials string Service Account Credentials JSON blob --drive-pacer-min-sleep Duration Minimum time to sleep between API calls. (default 100ms)
--drive-service-account-file string Service Account Credentials JSON file path --drive-root-folder-id string ID of the root folder
--drive-shared-with-me Only show files that are shared with me. --drive-scope string Scope that rclone should use when requesting access from drive.
--drive-skip-gdocs Skip google documents in all listings. --drive-service-account-credentials string Service Account Credentials JSON blob
--drive-team-drive string ID of the Team Drive --drive-service-account-file string Service Account Credentials JSON file path
--drive-trashed-only Only show files that are in the trash. --drive-shared-with-me Only show files that are shared with me.
--drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M) --drive-skip-gdocs Skip google documents in all listings.
--drive-use-created-date Use file created date instead of modified date., --drive-team-drive string ID of the Team Drive
--drive-use-trash Send files to the trash instead of deleting permanently. (default true) --drive-trashed-only Only show files that are in the trash.
--drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off) --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M) --drive-use-created-date Use file created date instead of modified date.,
--dropbox-client-id string Dropbox App Client Id --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--dropbox-client-secret string Dropbox App Client Secret --drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off)
--dropbox-impersonate string Impersonate this user when using a business account. --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M)
-n, --dry-run Do a trial run with no permanent changes --dropbox-client-id string Dropbox App Client Id
--dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles --dropbox-client-secret string Dropbox App Client Secret
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info --dropbox-impersonate string Impersonate this user when using a business account.
--dump-headers Dump HTTP bodies - may contain sensitive info -n, --dry-run Do a trial run with no permanent changes
--exclude stringArray Exclude files matching pattern --dump DumpFlags List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--exclude-from stringArray Read exclude patterns from file --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--exclude-if-present string Exclude directories if filename is present --dump-headers Dump HTTP bodies - may contain sensitive info
--fast-list Use recursive list if available. Uses more memory but fewer transactions. --exclude stringArray Exclude files matching pattern
--files-from stringArray Read list of source-file names from file --exclude-from stringArray Read exclude patterns from file
-f, --filter stringArray Add a file-filtering rule --exclude-if-present string Exclude directories if filename is present
--filter-from stringArray Read filtering patterns from a file --fast-list Use recursive list if available. Uses more memory but fewer transactions.
--ftp-host string FTP host to connect to --files-from stringArray Read list of source-file names from file
--ftp-pass string FTP password -f, --filter stringArray Add a file-filtering rule
--ftp-port string FTP port, leave blank to use default (21) --filter-from stringArray Read filtering patterns from a file
--ftp-user string FTP username, leave blank for current username, $USER --ftp-host string FTP host to connect to
--gcs-bucket-acl string Access Control List for new buckets. --ftp-pass string FTP password
--gcs-client-id string Google Application Client Id --ftp-port string FTP port, leave blank to use default (21)
--gcs-client-secret string Google Application Client Secret --ftp-user string FTP username, leave blank for current username, $USER
--gcs-location string Location for the newly created buckets. --gcs-bucket-acl string Access Control List for new buckets.
--gcs-object-acl string Access Control List for new objects. --gcs-client-id string Google Application Client Id
--gcs-project-number string Project number. --gcs-client-secret string Google Application Client Secret
--gcs-service-account-file string Service Account Credentials JSON file path --gcs-location string Location for the newly created buckets.
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage. --gcs-object-acl string Access Control List for new objects.
--http-url string URL of http host to connect to --gcs-project-number string Project number.
--hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) --gcs-service-account-file string Service Account Credentials JSON file path
--hubic-client-id string Hubic Client Id --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--hubic-client-secret string Hubic Client Secret --http-url string URL of http host to connect to
--ignore-case Ignore case in filters (case insensitive) --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--ignore-checksum Skip post copy check of checksums. --hubic-client-id string Hubic Client Id
--ignore-errors delete even if there are I/O errors --hubic-client-secret string Hubic Client Secret
--ignore-existing Skip all files that exist on destination --hubic-no-chunk Don't chunk files during streaming upload.
--ignore-size Ignore size when skipping use mod-time or checksum. --ignore-case Ignore case in filters (case insensitive)
-I, --ignore-times Don't skip files that match size and time - transfer all files --ignore-checksum Skip post copy check of checksums.
--immutable Do not modify files. Fail if existing files have been modified. --ignore-errors delete even if there are I/O errors
--include stringArray Include files matching pattern --ignore-existing Skip all files that exist on destination
--include-from stringArray Read include patterns from file --ignore-size Ignore size when skipping use mod-time or checksum.
--jottacloud-hard-delete Delete files permanently rather than putting them into the trash. -I, --ignore-times Don't skip files that match size and time - transfer all files
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M) --immutable Do not modify files. Fail if existing files have been modified.
--jottacloud-mountpoint string The mountpoint to use. --include stringArray Include files matching pattern
--jottacloud-pass string Password. --include-from stringArray Read include patterns from file
--jottacloud-unlink Remove existing public link to file/folder with link command rather than creating. --jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
--jottacloud-user string User Name --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--local-no-check-updated Don't check to see if the files change during upload --jottacloud-mountpoint string The mountpoint to use.
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated) --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
--local-nounc string Disable UNC (long path names) conversion on Windows --jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fail's. (default 10M)
--log-file string Log everything to this file --jottacloud-user string User Name:
--log-format string Comma separated list of log format options (default "date,time") -l, --links Translate symlinks to/from regular files with a '.rclonelink' extension
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") --local-no-check-updated Don't check to see if the files change during upload
--low-level-retries int Number of low level retries to do. (default 10) --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) --local-nounc string Disable UNC (long path names) conversion on Windows
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000) --log-file string Log everything to this file
--max-delete int When synchronizing, limit the number of deletes (default -1) --log-format string Comma separated list of log format options (default "date,time")
--max-depth int If set limits the recursion depth to this. (default -1) --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) --low-level-retries int Number of low level retries to do. (default 10)
--max-transfer int Maximum size of data to transfer. (default off) --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--mega-debug Output more debug from Mega. --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--mega-hard-delete Delete files permanently rather than putting them into the trash. --max-delete int When synchronizing, limit the number of deletes (default -1)
--mega-pass string Password. --max-depth int If set limits the recursion depth to this. (default -1)
--mega-user string User name --max-size SizeSuffix Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--memprofile string Write memory profile to file --max-transfer SizeSuffix Maximum size of data to transfer. (default off)
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) --mega-debug Output more debug from Mega.
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) --mega-hard-delete Delete files permanently rather than putting them into the trash.
--modify-window duration Max time diff to be considered the same (default 1ns) --mega-pass string Password.
--no-check-certificate Do not verify the server SSL certificate. Insecure. --mega-user string User name
--no-gzip-encoding Don't set Accept-Encoding: gzip. --memprofile string Write memory profile to file
--no-traverse Obsolete - does nothing. --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--no-update-modtime Don't update destination mod-time if files identical. --min-size SizeSuffix Only transfer files bigger than this in k or suffix b|k|M|G (default off)
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only). --modify-window duration Max time diff to be considered the same (default 1ns)
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M) --no-check-certificate Do not verify the server SSL certificate. Insecure.
--onedrive-client-id string Microsoft App Client Id --no-gzip-encoding Don't set Accept-Encoding: gzip.
--onedrive-client-secret string Microsoft App Client Secret --no-traverse Don't traverse destination file system on copy.
--onedrive-drive-id string The ID of the drive to use --no-update-modtime Don't update destination mod-time if files identical.
--onedrive-drive-type string The type of the drive ( personal | business | documentLibrary ) -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-expose-onenote-files Set to make OneNote files show up in directory listings. --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--opendrive-password string Password. --onedrive-client-id string Microsoft App Client Id
--opendrive-username string Username --onedrive-client-secret string Microsoft App Client Secret
--pcloud-client-id string Pcloud App Client Id --onedrive-drive-id string The ID of the drive to use
--pcloud-client-secret string Pcloud App Client Secret --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
-P, --progress Show progress during transfer. --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
--qingstor-access-key-id string QingStor Access Key ID --opendrive-password string Password.
--qingstor-connection-retries int Number of connection retries. (default 3) --opendrive-username string Username
--qingstor-endpoint string Enter a endpoint URL to connection QingStor API. --pcloud-client-id string Pcloud App Client Id
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank. --pcloud-client-secret string Pcloud App Client Secret
--qingstor-secret-access-key string QingStor Secret Access Key (password) -P, --progress Show progress during transfer.
--qingstor-zone string Zone to connect to. --qingstor-access-key-id string QingStor Access Key ID
-q, --quiet Print as little stuff as possible --qingstor-chunk-size SizeSuffix Chunk size to use for uploading. (default 4M)
--rc Enable the remote control server. --qingstor-connection-retries int Number of connection retries. (default 3)
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") --qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate) --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
--rc-client-ca string Client certificate authority to verify clients with --qingstor-secret-access-key string QingStor Secret Access Key (password)
--rc-files string Path to local files to serve on the HTTP server. --qingstor-upload-concurrency int Concurrency for multipart uploads. (default 1)
--rc-htpasswd string htpasswd file - if not provided no authentication is done --qingstor-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M)
--rc-key string SSL PEM Private key --qingstor-zone string Zone to connect to.
--rc-max-header-bytes int Maximum size of request header (default 4096) -q, --quiet Print as little stuff as possible
--rc-no-auth Don't require auth for certain methods. --rc Enable the remote control server.
--rc-pass string Password for authentication. --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
--rc-realm string realm for authentication (default "rclone") --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--rc-serve Enable the serving of remote objects. --rc-client-ca string Client certificate authority to verify clients with
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) --rc-files string Path to local files to serve on the HTTP server.
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) --rc-htpasswd string htpasswd file - if not provided no authentication is done
--rc-user string User name for authentication. --rc-key string SSL PEM Private key
--retries int Retry operations this many times if they fail (default 3) --rc-max-header-bytes int Maximum size of request header (default 4096)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) --rc-no-auth Don't require auth for certain methods.
--s3-access-key-id string AWS Access Key ID. --rc-pass string Password for authentication.
--s3-acl string Canned ACL used when creating buckets and storing or copying objects. --rc-realm string realm for authentication (default "rclone")
--s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M) --rc-serve Enable the serving of remote objects.
--s3-disable-checksum Don't store MD5 checksum with object metadata --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--s3-endpoint string Endpoint for S3 API. --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). --rc-user string User name for authentication.
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true) --retries int Retry operations this many times if they fail (default 3)
--s3-location-constraint string Location constraint - must be set to match the Region. --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--s3-provider string Choose your S3 provider. --s3-access-key-id string AWS Access Key ID.
--s3-region string Region to connect to. --s3-acl string Canned ACL used when creating buckets and storing or copying objects.
--s3-secret-access-key string AWS Secret Access Key (password) --s3-bucket-acl string Canned ACL used when creating buckets.
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
--s3-session-token string An AWS session token --s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. --s3-endpoint string Endpoint for S3 API.
--s3-storage-class string The storage class to use when storing new objects in S3. --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2) --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--s3-v2-auth If true use v2 authentication. --s3-location-constraint string Location constraint - must be set to match the Region.
--sftp-ask-password Allow asking for SFTP password when needed. --s3-provider string Choose your S3 provider.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. --s3-region string Region to connect to.
--sftp-host string SSH host to connect to --s3-secret-access-key string AWS Secret Access Key (password)
--s3-server-side-encryption string   The server-side encryption algorithm used when storing this object in S3.
--s3-session-token string   An AWS session token
--s3-sse-kms-key-id string   If using KMS ID you must provide the ARN of Key.
--s3-storage-class string   The storage class to use when storing new objects in S3.
--s3-upload-concurrency int   Concurrency for multipart uploads. (default 4)
--s3-upload-cutoff SizeSuffix   Cutoff for switching to chunked upload (default 200M)
--s3-v2-auth   If true use v2 authentication.
--sftp-ask-password   Allow asking for SFTP password when needed.
--sftp-disable-hashcheck   Disable the execution of SSH commands to determine if remote file hashing is available.
--sftp-host string   SSH host to connect to
--sftp-key-file string   Path to PEM-encoded private key file, leave blank or set key-use-agent to use ssh-agent.
--sftp-key-file-pass string   The passphrase to decrypt the PEM-encoded private key file.
--sftp-key-use-agent   When set forces the usage of the ssh-agent.
--sftp-pass string   SSH password, leave blank to use ssh-agent.
--sftp-path-override string   Override path used by SSH connection.
--sftp-port string   SSH port, leave blank to use default (22)
--sftp-set-modtime   Set the modified time on the remote if set. (default true)
--sftp-use-insecure-cipher   Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--sftp-user string   SSH username, leave blank for current username, ncw
--size-only   Skip based on size only, not mod-time or checksum
--skip-links   Don't warn about skipped symlinks.
--stats duration   Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--stats-file-name-length int   Max file name length in stats. 0 for no limit (default 45)
--stats-log-level string   Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-one-line   Make the stats fit on one line.
--stats-unit string   Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff SizeSuffix   Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string   Suffix for use with --backup-dir.
--swift-application-credential-id string   Application Credential ID (OS_APPLICATION_CREDENTIAL_ID)
--swift-application-credential-name string   Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME)
--swift-application-credential-secret string   Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET)
--swift-auth string   Authentication URL for server (OS_AUTH_URL).
--swift-auth-token string   Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--swift-auth-version int   AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--swift-chunk-size SizeSuffix   Above this size files will be chunked into a _segments container. (default 5G)
--swift-domain string   User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--swift-endpoint-type string   Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--swift-env-auth   Get swift credentials from environment variables in standard OpenStack form.
--swift-key string   API key or password (OS_PASSWORD).
--swift-no-chunk   Don't chunk files during streaming upload.
--swift-region string   Region name - optional (OS_REGION_NAME)
--swift-storage-policy string   The storage policy to use when creating a new container
--swift-storage-url string   Storage URL - optional (OS_STORAGE_URL)
--swift-tenant string   Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--swift-tenant-domain string   Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--swift-tenant-id string   Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--swift-user string   User name to log in (OS_USERNAME).
--swift-user-id string   User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--syslog   Use Syslog for logging
--syslog-facility string   Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration   IO idle timeout (default 5m0s)
--tpslimit float   Limit HTTP transactions per second to this.
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
--union-remotes string List of space separated remotes.
-u, --update Skip files that are newer on the destination.
--use-cookies Enable session cookiejar.
--use-mmap Use mmap allocator (see docs).
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
--yandex-unlink Remove existing public link to file/folder with link command rather than creating.
```
### SEE ALSO
* [rclone config](/commands/rclone_config/) - Enter an interactive configuration session.
###### Auto generated by spf13/cobra on 9-Feb-2019

View File

@ -1,5 +1,5 @@
---
date: 2019-02-09T10:42:18Z
title: "rclone config show"
slug: rclone_config_show
url: /commands/rclone_config_show/
@ -25,285 +25,303 @@ rclone config show [<remote>] [flags]
### Options inherited from parent commands
```
--acd-auth-url string   Auth server URL.
--acd-client-id string   Amazon Application Client ID.
--acd-client-secret string   Amazon Application Client Secret.
--acd-templink-threshold SizeSuffix   Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-token-url string   Token server url.
--acd-upload-wait-per-gb Duration   Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--alias-remote string   Remote or path to alias.
--ask-password   Allow prompt for password for encrypted configuration. (default true)
--auto-confirm   If enabled, do not request console confirmation.
--azureblob-access-tier string   Access tier of blob: hot, cool or archive.
--azureblob-account string   Storage Account Name (leave blank to use connection string or SAS URL)
--azureblob-chunk-size SizeSuffix   Upload chunk size (<= 100MB). (default 4M)
--azureblob-endpoint string   Endpoint for the service
--azureblob-key string   Storage Account Key (leave blank to use connection string or SAS URL)
--azureblob-list-chunk int   Size of blob list. (default 5000)
--azureblob-sas-url string   SAS URL for container level access only
--azureblob-upload-cutoff SizeSuffix   Cutoff for switching to chunked upload (<= 256MB). (default 256M)
--b2-account string   Account ID or Application Key ID
--b2-chunk-size SizeSuffix   Upload chunk size. Must fit in memory. (default 96M)
--b2-disable-checksum   Disable checksums for large (> upload cutoff) files
--b2-endpoint string   Endpoint for the service.
--b2-hard-delete   Permanently delete files on remote removal, otherwise hide files.
--b2-key string   Application Key
--b2-test-mode string   A flag string for X-Bz-Test-Mode header for debugging.
--b2-upload-cutoff SizeSuffix   Cutoff for switching to chunked upload. (default 200M)
--b2-versions   Include old versions in directory listings.
--backup-dir string   Make backups into hierarchy based in DIR.
--bind string   Local address to bind to for outgoing connections, IPv4, IPv6 or name.
--box-client-id string   Box App Client Id.
--box-client-secret string   Box App Client Secret
--box-commit-retries int   Max number of times to try committing a multipart file. (default 100)
--box-upload-cutoff SizeSuffix   Cutoff for switching to multipart upload (>= 50MB). (default 50M)
--buffer-size SizeSuffix   In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable   Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--cache-chunk-clean-interval Duration   How often should the cache perform cleanups of the chunk storage. (default 1m0s)
--cache-chunk-no-memory   Disable the in-memory cache for storing chunks during streaming.
--cache-chunk-path string   Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend")
--cache-chunk-size SizeSuffix   The size of a chunk (partial file data). (default 5M)
--cache-chunk-total-size SizeSuffix   The total size that the chunks can take up on the local disk. (default 10G)
--cache-db-path string   Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend")
--cache-db-purge   Clear all the cached data for this remote on start.
--cache-db-wait-time Duration   How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string   Directory rclone will use for caching. (default "$HOME/.cache/rclone")
--cache-info-age Duration   How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
--cache-plex-insecure string   Skip all certificate verifications when connecting to the Plex server
--cache-plex-password string   The password of the Plex user
--cache-plex-url string   The URL of the Plex server
--cache-plex-username string   The username of the Plex user
--cache-read-retries int   How many times to retry a read from a cache storage. (default 10)
--cache-remote string   Remote to cache.
--cache-rps int   Limits the number of requests per second to the source FS (-1 to disable) (default -1)
--cache-tmp-upload-path string   Directory to keep temporary files until they are uploaded.
--cache-tmp-wait-time Duration   How long should files be stored in local cache before being uploaded (default 15s)
--cache-workers int   How many workers should run in parallel to download chunks. (default 4)
--cache-writes   Cache file data on writes through the FS
--checkers int   Number of checkers to run in parallel. (default 8)
-c, --checksum   Skip based on checksum (if available) & size, not mod-time & size
--config string   Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration   Connect timeout (default 1m0s)
-L, --copy-links   Follow symlinks and copy the pointed to item.
--cpuprofile string   Write cpu profile to file
--crypt-directory-name-encryption   Option to either encrypt directory names or leave them intact. (default true)
--crypt-filename-encryption string   How to encrypt the filenames. (default "standard")
--crypt-password string   Password or pass phrase for encryption.
--crypt-password2 string   Password or pass phrase for salt. Optional but recommended.
--crypt-remote string   Remote to encrypt/decrypt.
--crypt-show-mapping   For all files listed show how the names encrypt.
--delete-after   When synchronizing, delete files on destination after transferring (default)
--delete-before   When synchronizing, delete files on destination before transferring
--delete-during   When synchronizing, delete files during transfer
--delete-excluded   Delete files on dest excluded from sync
--disable string   Disable a comma separated list of features. Use help to see a list.
--drive-acknowledge-abuse   Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--drive-allow-import-name-change   Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
--drive-alternate-export   Use alternate export URLs for google documents export.,
--drive-auth-owner-only   Only consider files owned by the authenticated user.
--drive-chunk-size SizeSuffix   Upload chunk size. Must a power of 2 >= 256k. (default 8M)
--drive-client-id string   Google Application Client Id
--drive-client-secret string   Google Application Client Secret
--drive-export-formats string   Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-formats string   Deprecated: see export_formats
--drive-impersonate string   Impersonate this user when using a service account.
--drive-import-formats string   Comma separated list of preferred formats for uploading Google docs.
--drive-keep-revision-forever   Keep new head revision of each file forever.
--drive-list-chunk int   Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-pacer-burst int   Number of API calls to allow without sleeping. (default 100)
--drive-pacer-min-sleep Duration   Minimum time to sleep between API calls. (default 100ms)
--drive-root-folder-id string   ID of the root folder
--drive-scope string   Scope that rclone should use when requesting access from drive.
--drive-service-account-credentials string   Service Account Credentials JSON blob
--drive-service-account-file string   Service Account Credentials JSON file path
--drive-shared-with-me   Only show files that are shared with me.
--drive-skip-gdocs   Skip google documents in all listings.
--drive-team-drive string   ID of the Team Drive
--drive-trashed-only   Only show files that are in the trash.
--drive-upload-cutoff SizeSuffix   Cutoff for switching to chunked upload (default 8M)
--drive-use-created-date   Use file created date instead of modified date.,
--drive-use-trash   Send files to the trash instead of deleting permanently. (default true)
--drive-v2-download-min-size SizeSuffix   If Object's are greater, use drive v2 API to download. (default off)
--dropbox-chunk-size SizeSuffix   Upload chunk size. (< 150M). (default 48M)
--dropbox-client-id string   Dropbox App Client Id
--dropbox-client-secret string   Dropbox App Client Secret
--dropbox-impersonate string   Impersonate this user when using a business account.
-n, --dry-run   Do a trial run with no permanent changes
--dump DumpFlags   List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--dump-bodies   Dump HTTP headers and bodies - may contain sensitive info
--dump-headers   Dump HTTP bodies - may contain sensitive info
--exclude stringArray   Exclude files matching pattern
--exclude-from stringArray   Read exclude patterns from file
--exclude-if-present string   Exclude directories if filename is present
--fast-list   Use recursive list if available. Uses more memory but fewer transactions.
--files-from stringArray   Read list of source-file names from file
-f, --filter stringArray   Add a file-filtering rule
--filter-from stringArray   Read filtering patterns from a file
--ftp-host string   FTP host to connect to
--ftp-pass string   FTP password
--ftp-port string   FTP port, leave blank to use default (21)
--ftp-user string   FTP username, leave blank for current username, $USER
--gcs-bucket-acl string   Access Control List for new buckets.
--gcs-client-id string   Google Application Client Id
--gcs-client-secret string   Google Application Client Secret
--gcs-location string   Location for the newly created buckets.
--gcs-object-acl string   Access Control List for new objects.
--gcs-project-number string   Project number.
--gcs-service-account-file string   Service Account Credentials JSON file path
--gcs-storage-class string   The storage class to use when storing objects in Google Cloud Storage.
--http-url string   URL of http host to connect to
--hubic-chunk-size SizeSuffix   Above this size files will be chunked into a _segments container. (default 5G)
--hubic-client-id string   Hubic Client Id
--hubic-client-secret string   Hubic Client Secret
--hubic-no-chunk   Don't chunk files during streaming upload.
--ignore-case   Ignore case in filters (case insensitive)
--ignore-checksum   Skip post copy check of checksums.
--ignore-errors   delete even if there are I/O errors
--ignore-existing   Skip all files that exist on destination
--ignore-size   Ignore size when skipping use mod-time or checksum.
-I, --ignore-times   Don't skip files that match size and time - transfer all files
--immutable   Do not modify files. Fail if existing files have been modified.
--include stringArray   Include files matching pattern
--include-from stringArray   Read include patterns from file
--jottacloud-hard-delete   Delete files permanently rather than putting them into the trash.
--jottacloud-md5-memory-limit SizeSuffix   Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--jottacloud-mountpoint string   The mountpoint to use.
--jottacloud-unlink   Remove existing public link to file/folder with link command rather than creating.
--jottacloud-upload-resume-limit SizeSuffix   Files bigger than this can be resumed if the upload fail's. (default 10M)
--jottacloud-user string   User Name:
-l, --links   Translate symlinks to/from regular files with a '.rclonelink' extension
--local-no-check-updated   Don't check to see if the files change during upload
--local-no-unicode-normalization   Don't apply unicode normalization to paths and filenames (Deprecated)
--local-nounc string   Disable UNC (long path names) conversion on Windows
--log-file string   Log everything to this file
--log-format string   Comma separated list of log format options (default "date,time")
--log-level string   Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int   Number of low level retries to do. (default 10)
--max-age Duration   Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--max-backlog int   Maximum number of objects in sync or check backlog. (default 10000)
--max-delete int   When synchronizing, limit the number of deletes (default -1)
--max-depth int   If set limits the recursion depth to this. (default -1)
--max-size SizeSuffix   Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--max-transfer SizeSuffix   Maximum size of data to transfer. (default off)
--mega-debug   Output more debug from Mega.
--mega-hard-delete   Delete files permanently rather than putting them into the trash.
--mega-pass string   Password.
--mega-user string   User name
--memprofile string   Write memory profile to file
--min-age Duration   Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size SizeSuffix   Only transfer files bigger than this in k or suffix b|k|M|G (default off)
--modify-window duration   Max time diff to be considered the same (default 1ns)
--no-check-certificate   Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding   Don't set Accept-Encoding: gzip.
--no-traverse   Don't traverse destination file system on copy.
--no-update-modtime   Don't update destination mod-time if files identical.
-x, --one-file-system   Don't cross filesystem boundaries (unix/macOS only).
--onedrive-chunk-size SizeSuffix   Chunk size to upload files with - must be multiple of 320k. (default 10M)
--onedrive-client-id string   Microsoft App Client Id
--opendrive-username string Username --onedrive-client-secret string Microsoft App Client Secret
--pcloud-client-id string Pcloud App Client Id --onedrive-drive-id string The ID of the drive to use
--pcloud-client-secret string Pcloud App Client Secret --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
-P, --progress Show progress during transfer. --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
--qingstor-access-key-id string QingStor Access Key ID --opendrive-password string Password.
--qingstor-connection-retries int Number of connection retries. (default 3) --opendrive-username string Username
--qingstor-endpoint string Enter a endpoint URL to connection QingStor API. --pcloud-client-id string Pcloud App Client Id
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank. --pcloud-client-secret string Pcloud App Client Secret
--qingstor-secret-access-key string QingStor Secret Access Key (password) -P, --progress Show progress during transfer.
--qingstor-zone string Zone to connect to. --qingstor-access-key-id string QingStor Access Key ID
-q, --quiet Print as little stuff as possible --qingstor-chunk-size SizeSuffix Chunk size to use for uploading. (default 4M)
--rc Enable the remote control server. --qingstor-connection-retries int Number of connection retries. (default 3)
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") --qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate) --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
--rc-client-ca string Client certificate authority to verify clients with --qingstor-secret-access-key string QingStor Secret Access Key (password)
--rc-files string Path to local files to serve on the HTTP server. --qingstor-upload-concurrency int Concurrency for multipart uploads. (default 1)
--rc-htpasswd string htpasswd file - if not provided no authentication is done --qingstor-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M)
--rc-key string SSL PEM Private key --qingstor-zone string Zone to connect to.
--rc-max-header-bytes int Maximum size of request header (default 4096) -q, --quiet Print as little stuff as possible
--rc-no-auth Don't require auth for certain methods. --rc Enable the remote control server.
--rc-pass string Password for authentication. --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
--rc-realm string realm for authentication (default "rclone") --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--rc-serve Enable the serving of remote objects. --rc-client-ca string Client certificate authority to verify clients with
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) --rc-files string Path to local files to serve on the HTTP server.
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) --rc-htpasswd string htpasswd file - if not provided no authentication is done
--rc-user string User name for authentication. --rc-key string SSL PEM Private key
--retries int Retry operations this many times if they fail (default 3) --rc-max-header-bytes int Maximum size of request header (default 4096)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) --rc-no-auth Don't require auth for certain methods.
--s3-access-key-id string AWS Access Key ID. --rc-pass string Password for authentication.
--s3-acl string Canned ACL used when creating buckets and storing or copying objects. --rc-realm string realm for authentication (default "rclone")
--s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M) --rc-serve Enable the serving of remote objects.
--s3-disable-checksum Don't store MD5 checksum with object metadata --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--s3-endpoint string Endpoint for S3 API. --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). --rc-user string User name for authentication.
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true) --retries int Retry operations this many times if they fail (default 3)
--s3-location-constraint string Location constraint - must be set to match the Region. --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--s3-provider string Choose your S3 provider. --s3-access-key-id string AWS Access Key ID.
--s3-region string Region to connect to. --s3-acl string Canned ACL used when creating buckets and storing or copying objects.
--s3-secret-access-key string AWS Secret Access Key (password) --s3-bucket-acl string Canned ACL used when creating buckets.
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
--s3-session-token string An AWS session token --s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. --s3-endpoint string Endpoint for S3 API.
--s3-storage-class string The storage class to use when storing new objects in S3. --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2) --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--s3-v2-auth If true use v2 authentication. --s3-location-constraint string Location constraint - must be set to match the Region.
--sftp-ask-password Allow asking for SFTP password when needed. --s3-provider string Choose your S3 provider.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. --s3-region string Region to connect to.
--sftp-host string SSH host to connect to --s3-secret-access-key string AWS Secret Access Key (password)
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent. --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--sftp-pass string SSH password, leave blank to use ssh-agent. --s3-session-token string An AWS session token
--sftp-path-override string Override path used by SSH connection. --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--sftp-port string SSH port, leave blank to use default (22) --s3-storage-class string The storage class to use when storing new objects in S3.
--sftp-set-modtime Set the modified time on the remote if set. (default true) --s3-upload-concurrency int Concurrency for multipart uploads. (default 4)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker. --s3-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M)
--sftp-user string SSH username, leave blank for current username, ncw --s3-v2-auth If true use v2 authentication.
--size-only Skip based on size only, not mod-time or checksum --sftp-ask-password Allow asking for SFTP password when needed.
--skip-links Don't warn about skipped symlinks. --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) --sftp-host string SSH host to connect to
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40) --sftp-key-file string Path to PEM-encoded private key file, leave blank or set key-use-agent to use ssh-agent.
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") --sftp-key-file-pass string The passphrase to decrypt the PEM-encoded private key file.
--stats-one-line Make the stats fit on one line. --sftp-key-use-agent When set forces the usage of the ssh-agent.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") --sftp-pass string SSH password, leave blank to use ssh-agent.
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) --sftp-path-override string Override path used by SSH connection.
--suffix string Suffix for use with --backup-dir. --sftp-port string SSH port, leave blank to use default (22)
--swift-auth string Authentication URL for server (OS_AUTH_URL). --sftp-set-modtime Set the modified time on the remote if set. (default true)
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) --sftp-user string SSH username, leave blank for current username, ncw
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) --size-only Skip based on size only, not mod-time or checksum
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) --skip-links Don't warn about skipped symlinks.
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form. --stats-file-name-length int Max file name length in stats. 0 for no limit (default 45)
--swift-key string API key or password (OS_PASSWORD). --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--swift-region string Region name - optional (OS_REGION_NAME) --stats-one-line Make the stats fit on one line.
--swift-storage-policy string The storage policy to use when creating a new container --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL) --streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) --suffix string Suffix for use with --backup-dir.
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) --swift-application-credential-id string Application Credential ID (OS_APPLICATION_CREDENTIAL_ID)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) --swift-application-credential-name string Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME)
--swift-user string User name to log in (OS_USERNAME). --swift-application-credential-secret string Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET)
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). --swift-auth string Authentication URL for server (OS_AUTH_URL).
--syslog Use Syslog for logging --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--timeout duration IO idle timeout (default 5m0s) --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--tpslimit float Limit HTTP transactions per second to this. --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--track-renames When synchronizing, track file renames and do a server side move if possible --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--transfers int Number of file transfers to run in parallel. (default 4) --swift-key string API key or password (OS_PASSWORD).
--union-remotes string List of space separated remotes. --swift-no-chunk Don't chunk files during streaming upload.
-u, --update Skip files that are newer on the destination. --swift-region string Region name - optional (OS_REGION_NAME)
--use-server-modtime Use server modified time instead of object metadata --swift-storage-policy string The storage policy to use when creating a new container
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.45") --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
-v, --verbose count Print lots more stuff (repeat for more) --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon) --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--webdav-pass string Password. --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--webdav-url string URL of http host to connect to --swift-user string User name to log in (OS_USERNAME).
--webdav-user string User name --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--webdav-vendor string Name of the Webdav site/service/software you are using --syslog Use Syslog for logging
--yandex-client-id string Yandex Client Id --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--yandex-client-secret string Yandex Client Secret --timeout duration IO idle timeout (default 5m0s)
--yandex-unlink Remove existing public link to file/folder with link command rather than creating. --tpslimit float Limit HTTP transactions per second to this.
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
--union-remotes string List of space separated remotes.
-u, --update Skip files that are newer on the destination.
--use-cookies Enable session cookiejar.
--use-mmap Use mmap allocator (see docs).
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
--yandex-unlink Remove existing public link to file/folder with link command rather than creating.
```
### SEE ALSO
* [rclone config](/commands/rclone_config/) - Enter an interactive configuration session.
###### Auto generated by spf13/cobra on 9-Feb-2019
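As a sketch of how a few of the global flags listed above combine on the command line (the remote name `remote:` and the local paths here are placeholders, not taken from this page):

```shell
# Copy only recent, reasonably small files, with visible progress and a log file.
# "remote:" must be a remote already configured in your rclone config.
rclone copy /data remote:backup \
  --max-age 7d \
  --max-size 100M \
  --transfers 4 \
  --progress \
  --log-level INFO --log-file rclone.log
```

Flags not recognised by a given backend are simply ignored for that backend; the backend-specific flags (`--s3-*`, `--swift-*`, etc.) only take effect when the corresponding remote type is in use.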

@@ -1,5 +1,5 @@
---
date: 2019-02-09T10:42:18Z
title: "rclone config update"
slug: rclone_config_update
url: /commands/rclone_config_update/
@@ -18,6 +18,11 @@ For example to update the env_auth field of a remote of name myremote you would
rclone config update myremote swift env_auth true
If the remote uses oauth the token will be updated; if you don't
require this, add an extra parameter thus:
rclone config update myremote swift env_auth true config_refresh_token false
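Taken together, a hypothetical session might look like this (the remote name `myremote` comes from the example above; `rclone config show` is used here only as one way to inspect the result):

```shell
# Update a single field without refreshing the oauth token
# (config_refresh_token false suppresses the token update).
rclone config update myremote swift env_auth true config_refresh_token false

# Print the stored configuration to confirm the change took effect.
rclone config show
```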
```
rclone config update <name> [<key> <value>]+ [flags]
@@ -32,285 +37,303 @@ rclone config update <name> [<key> <value>]+ [flags]
### Options inherited from parent commands
```
--acd-auth-url string  Auth server URL.
--acd-client-id string  Amazon Application Client ID.
--acd-client-secret string  Amazon Application Client Secret.
--acd-templink-threshold SizeSuffix  Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-token-url string  Token server url.
--acd-upload-wait-per-gb Duration  Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--alias-remote string  Remote or path to alias.
--ask-password  Allow prompt for password for encrypted configuration. (default true)
--auto-confirm  If enabled, do not request console confirmation.
--azureblob-access-tier string  Access tier of blob: hot, cool or archive.
--azureblob-account string  Storage Account Name (leave blank to use connection string or SAS URL)
--azureblob-chunk-size SizeSuffix  Upload chunk size (<= 100MB). (default 4M)
--azureblob-endpoint string  Endpoint for the service
--azureblob-key string  Storage Account Key (leave blank to use connection string or SAS URL)
--azureblob-list-chunk int  Size of blob list. (default 5000)
--azureblob-sas-url string  SAS URL for container level access only
--azureblob-upload-cutoff SizeSuffix  Cutoff for switching to chunked upload (<= 256MB). (default 256M)
--b2-account string  Account ID or Application Key ID
--b2-chunk-size SizeSuffix  Upload chunk size. Must fit in memory. (default 96M)
--b2-disable-checksum  Disable checksums for large (> upload cutoff) files
--b2-endpoint string  Endpoint for the service.
--b2-hard-delete  Permanently delete files on remote removal, otherwise hide files.
--b2-key string  Application Key
--b2-test-mode string  A flag string for X-Bz-Test-Mode header for debugging.
--b2-upload-cutoff SizeSuffix  Cutoff for switching to chunked upload. (default 200M)
--b2-versions  Include old versions in directory listings.
--backup-dir string  Make backups into hierarchy based in DIR.
--bind string  Local address to bind to for outgoing connections, IPv4, IPv6 or name.
--box-client-id string  Box App Client Id.
--box-client-secret string  Box App Client Secret
--box-commit-retries int  Max number of times to try committing a multipart file. (default 100)
--box-upload-cutoff SizeSuffix  Cutoff for switching to multipart upload (>= 50MB). (default 50M)
--buffer-size SizeSuffix  In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable  Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--cache-chunk-clean-interval Duration  How often should the cache perform cleanups of the chunk storage. (default 1m0s)
--cache-chunk-no-memory  Disable the in-memory cache for storing chunks during streaming.
--cache-chunk-path string  Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend")
--cache-chunk-size SizeSuffix  The size of a chunk (partial file data). (default 5M)
--cache-chunk-total-size SizeSuffix  The total size that the chunks can take up on the local disk. (default 10G)
--cache-db-path string  Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend")
--cache-db-purge  Clear all the cached data for this remote on start.
--cache-db-wait-time Duration  How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string  Directory rclone will use for caching. (default "$HOME/.cache/rclone")
--cache-info-age Duration  How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
--cache-plex-insecure string  Skip all certificate verifications when connecting to the Plex server
--cache-plex-password string  The password of the Plex user
--cache-plex-url string  The URL of the Plex server
--cache-plex-username string  The username of the Plex user
--cache-read-retries int  How many times to retry a read from a cache storage. (default 10)
--cache-remote string  Remote to cache.
--cache-rps int  Limits the number of requests per second to the source FS (-1 to disable) (default -1)
--cache-tmp-upload-path string  Directory to keep temporary files until they are uploaded.
--cache-tmp-wait-time Duration  How long should files be stored in local cache before being uploaded (default 15s)
--cache-workers int  How many workers should run in parallel to download chunks. (default 4)
--cache-writes  Cache file data on writes through the FS
--checkers int  Number of checkers to run in parallel. (default 8)
-c, --checksum  Skip based on checksum (if available) & size, not mod-time & size
--config string  Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration  Connect timeout (default 1m0s)
-L, --copy-links  Follow symlinks and copy the pointed to item.
--cpuprofile string  Write cpu profile to file
--crypt-directory-name-encryption  Option to either encrypt directory names or leave them intact. (default true)
--crypt-filename-encryption string  How to encrypt the filenames. (default "standard")
--crypt-password string  Password or pass phrase for encryption.
--crypt-password2 string  Password or pass phrase for salt. Optional but recommended.
--crypt-remote string  Remote to encrypt/decrypt.
--crypt-show-mapping  For all files listed show how the names encrypt.
--delete-after  When synchronizing, delete files on destination after transferring (default)
--delete-before  When synchronizing, delete files on destination before transferring
--delete-during  When synchronizing, delete files during transfer
--delete-excluded  Delete files on dest excluded from sync
--disable string  Disable a comma separated list of features. Use help to see a list.
--drive-acknowledge-abuse  Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--drive-allow-import-name-change  Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
--drive-alternate-export  Use alternate export URLs for google documents export.,
--drive-auth-owner-only  Only consider files owned by the authenticated user.
--drive-chunk-size SizeSuffix  Upload chunk size. Must a power of 2 >= 256k. (default 8M)
--drive-client-id string  Google Application Client Id
--drive-client-secret string  Google Application Client Secret
--drive-export-formats string  Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-formats string  Deprecated: see export_formats
--drive-impersonate string  Impersonate this user when using a service account.
--drive-import-formats string  Comma separated list of preferred formats for uploading Google docs.
--drive-keep-revision-forever  Keep new head revision of each file forever.
--drive-list-chunk int  Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-pacer-burst int  Number of API calls to allow without sleeping. (default 100)
--drive-pacer-min-sleep Duration  Minimum time to sleep between API calls. (default 100ms)
--drive-root-folder-id string  ID of the root folder
--drive-scope string  Scope that rclone should use when requesting access from drive.
--drive-service-account-credentials string  Service Account Credentials JSON blob
--drive-service-account-file string  Service Account Credentials JSON file path
--drive-shared-with-me  Only show files that are shared with me.
--drive-skip-gdocs  Skip google documents in all listings.
--drive-team-drive string  ID of the Team Drive
--drive-trashed-only  Only show files that are in the trash.
--drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off) --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M) --drive-use-created-date Use file created date instead of modified date.,
--dropbox-client-id string Dropbox App Client Id --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--dropbox-client-secret string Dropbox App Client Secret --drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off)
--dropbox-impersonate string Impersonate this user when using a business account. --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M)
-n, --dry-run Do a trial run with no permanent changes --dropbox-client-id string Dropbox App Client Id
--dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles --dropbox-client-secret string Dropbox App Client Secret
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info --dropbox-impersonate string Impersonate this user when using a business account.
--dump-headers Dump HTTP bodies - may contain sensitive info -n, --dry-run Do a trial run with no permanent changes
--exclude stringArray Exclude files matching pattern --dump DumpFlags List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--exclude-from stringArray Read exclude patterns from file --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--exclude-if-present string Exclude directories if filename is present --dump-headers Dump HTTP bodies - may contain sensitive info
--fast-list Use recursive list if available. Uses more memory but fewer transactions. --exclude stringArray Exclude files matching pattern
--files-from stringArray Read list of source-file names from file --exclude-from stringArray Read exclude patterns from file
-f, --filter stringArray Add a file-filtering rule --exclude-if-present string Exclude directories if filename is present
--filter-from stringArray Read filtering patterns from a file --fast-list Use recursive list if available. Uses more memory but fewer transactions.
--ftp-host string FTP host to connect to --files-from stringArray Read list of source-file names from file
--ftp-pass string FTP password -f, --filter stringArray Add a file-filtering rule
--ftp-port string FTP port, leave blank to use default (21) --filter-from stringArray Read filtering patterns from a file
--ftp-user string FTP username, leave blank for current username, $USER --ftp-host string FTP host to connect to
--gcs-bucket-acl string Access Control List for new buckets. --ftp-pass string FTP password
--gcs-client-id string Google Application Client Id --ftp-port string FTP port, leave blank to use default (21)
--gcs-client-secret string Google Application Client Secret --ftp-user string FTP username, leave blank for current username, $USER
--gcs-location string Location for the newly created buckets. --gcs-bucket-acl string Access Control List for new buckets.
--gcs-object-acl string Access Control List for new objects. --gcs-client-id string Google Application Client Id
--gcs-project-number string Project number. --gcs-client-secret string Google Application Client Secret
--gcs-service-account-file string Service Account Credentials JSON file path --gcs-location string Location for the newly created buckets.
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage. --gcs-object-acl string Access Control List for new objects.
--http-url string URL of http host to connect to --gcs-project-number string Project number.
--hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) --gcs-service-account-file string Service Account Credentials JSON file path
--hubic-client-id string Hubic Client Id --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--hubic-client-secret string Hubic Client Secret --http-url string URL of http host to connect to
--ignore-case Ignore case in filters (case insensitive) --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--ignore-checksum Skip post copy check of checksums. --hubic-client-id string Hubic Client Id
--ignore-errors delete even if there are I/O errors --hubic-client-secret string Hubic Client Secret
--ignore-existing Skip all files that exist on destination --hubic-no-chunk Don't chunk files during streaming upload.
--ignore-size Ignore size when skipping use mod-time or checksum. --ignore-case Ignore case in filters (case insensitive)
-I, --ignore-times Don't skip files that match size and time - transfer all files --ignore-checksum Skip post copy check of checksums.
--immutable Do not modify files. Fail if existing files have been modified. --ignore-errors delete even if there are I/O errors
--include stringArray Include files matching pattern --ignore-existing Skip all files that exist on destination
--include-from stringArray Read include patterns from file --ignore-size Ignore size when skipping use mod-time or checksum.
--jottacloud-hard-delete Delete files permanently rather than putting them into the trash. -I, --ignore-times Don't skip files that match size and time - transfer all files
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M) --immutable Do not modify files. Fail if existing files have been modified.
--jottacloud-mountpoint string The mountpoint to use. --include stringArray Include files matching pattern
--jottacloud-pass string Password. --include-from stringArray Read include patterns from file
--jottacloud-unlink Remove existing public link to file/folder with link command rather than creating. --jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
--jottacloud-user string User Name --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--local-no-check-updated Don't check to see if the files change during upload --jottacloud-mountpoint string The mountpoint to use.
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated) --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
--local-nounc string Disable UNC (long path names) conversion on Windows --jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fail's. (default 10M)
--log-file string Log everything to this file --jottacloud-user string User Name:
--log-format string Comma separated list of log format options (default "date,time") -l, --links Translate symlinks to/from regular files with a '.rclonelink' extension
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") --local-no-check-updated Don't check to see if the files change during upload
--low-level-retries int Number of low level retries to do. (default 10) --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) --local-nounc string Disable UNC (long path names) conversion on Windows
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000) --log-file string Log everything to this file
--max-delete int When synchronizing, limit the number of deletes (default -1) --log-format string Comma separated list of log format options (default "date,time")
--max-depth int If set limits the recursion depth to this. (default -1) --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) --low-level-retries int Number of low level retries to do. (default 10)
--max-transfer int Maximum size of data to transfer. (default off) --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--mega-debug Output more debug from Mega. --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--mega-hard-delete Delete files permanently rather than putting them into the trash. --max-delete int When synchronizing, limit the number of deletes (default -1)
--mega-pass string Password. --max-depth int If set limits the recursion depth to this. (default -1)
--mega-user string User name --max-size SizeSuffix Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--memprofile string Write memory profile to file --max-transfer SizeSuffix Maximum size of data to transfer. (default off)
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) --mega-debug Output more debug from Mega.
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) --mega-hard-delete Delete files permanently rather than putting them into the trash.
--modify-window duration Max time diff to be considered the same (default 1ns) --mega-pass string Password.
--no-check-certificate Do not verify the server SSL certificate. Insecure. --mega-user string User name
--no-gzip-encoding Don't set Accept-Encoding: gzip. --memprofile string Write memory profile to file
--no-traverse Obsolete - does nothing. --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--no-update-modtime Don't update destination mod-time if files identical. --min-size SizeSuffix Only transfer files bigger than this in k or suffix b|k|M|G (default off)
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only). --modify-window duration Max time diff to be considered the same (default 1ns)
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M) --no-check-certificate Do not verify the server SSL certificate. Insecure.
--onedrive-client-id string Microsoft App Client Id --no-gzip-encoding Don't set Accept-Encoding: gzip.
--onedrive-client-secret string Microsoft App Client Secret --no-traverse Don't traverse destination file system on copy.
--onedrive-drive-id string The ID of the drive to use --no-update-modtime Don't update destination mod-time if files identical.
--onedrive-drive-type string The type of the drive ( personal | business | documentLibrary ) -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-expose-onenote-files Set to make OneNote files show up in directory listings. --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--opendrive-password string Password. --onedrive-client-id string Microsoft App Client Id
--opendrive-username string Username --onedrive-client-secret string Microsoft App Client Secret
--pcloud-client-id string Pcloud App Client Id --onedrive-drive-id string The ID of the drive to use
--pcloud-client-secret string Pcloud App Client Secret --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
-P, --progress Show progress during transfer. --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
--qingstor-access-key-id string QingStor Access Key ID --opendrive-password string Password.
--qingstor-connection-retries int Number of connection retries. (default 3) --opendrive-username string Username
--qingstor-endpoint string Enter a endpoint URL to connection QingStor API. --pcloud-client-id string Pcloud App Client Id
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank. --pcloud-client-secret string Pcloud App Client Secret
--qingstor-secret-access-key string QingStor Secret Access Key (password) -P, --progress Show progress during transfer.
--qingstor-zone string Zone to connect to. --qingstor-access-key-id string QingStor Access Key ID
-q, --quiet Print as little stuff as possible --qingstor-chunk-size SizeSuffix Chunk size to use for uploading. (default 4M)
--rc Enable the remote control server. --qingstor-connection-retries int Number of connection retries. (default 3)
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") --qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate) --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
--rc-client-ca string Client certificate authority to verify clients with --qingstor-secret-access-key string QingStor Secret Access Key (password)
--rc-files string Path to local files to serve on the HTTP server. --qingstor-upload-concurrency int Concurrency for multipart uploads. (default 1)
--rc-htpasswd string htpasswd file - if not provided no authentication is done --qingstor-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M)
--rc-key string SSL PEM Private key --qingstor-zone string Zone to connect to.
--rc-max-header-bytes int Maximum size of request header (default 4096) -q, --quiet Print as little stuff as possible
--rc-no-auth Don't require auth for certain methods. --rc Enable the remote control server.
--rc-pass string Password for authentication. --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
--rc-realm string realm for authentication (default "rclone") --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--rc-serve Enable the serving of remote objects. --rc-client-ca string Client certificate authority to verify clients with
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) --rc-files string Path to local files to serve on the HTTP server.
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) --rc-htpasswd string htpasswd file - if not provided no authentication is done
--rc-user string User name for authentication. --rc-key string SSL PEM Private key
--retries int Retry operations this many times if they fail (default 3) --rc-max-header-bytes int Maximum size of request header (default 4096)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) --rc-no-auth Don't require auth for certain methods.
--s3-access-key-id string AWS Access Key ID. --rc-pass string Password for authentication.
--s3-acl string Canned ACL used when creating buckets and storing or copying objects. --rc-realm string realm for authentication (default "rclone")
--s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M) --rc-serve Enable the serving of remote objects.
--s3-disable-checksum Don't store MD5 checksum with object metadata --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--s3-endpoint string Endpoint for S3 API. --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). --rc-user string User name for authentication.
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true) --retries int Retry operations this many times if they fail (default 3)
--s3-location-constraint string Location constraint - must be set to match the Region. --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--s3-provider string Choose your S3 provider. --s3-access-key-id string AWS Access Key ID.
--s3-region string Region to connect to. --s3-acl string Canned ACL used when creating buckets and storing or copying objects.
--s3-secret-access-key string AWS Secret Access Key (password) --s3-bucket-acl string Canned ACL used when creating buckets.
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
--s3-session-token string An AWS session token --s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. --s3-endpoint string Endpoint for S3 API.
--s3-storage-class string The storage class to use when storing new objects in S3. --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2) --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--s3-v2-auth If true use v2 authentication. --s3-location-constraint string Location constraint - must be set to match the Region.
--sftp-ask-password Allow asking for SFTP password when needed. --s3-provider string Choose your S3 provider.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. --s3-region string Region to connect to.
--sftp-host string SSH host to connect to --s3-secret-access-key string AWS Secret Access Key (password)
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent. --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--sftp-pass string SSH password, leave blank to use ssh-agent. --s3-session-token string An AWS session token
--sftp-path-override string Override path used by SSH connection. --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--sftp-port string SSH port, leave blank to use default (22) --s3-storage-class string The storage class to use when storing new objects in S3.
--sftp-set-modtime Set the modified time on the remote if set. (default true) --s3-upload-concurrency int Concurrency for multipart uploads. (default 4)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker. --s3-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M)
--sftp-user string SSH username, leave blank for current username, ncw --s3-v2-auth If true use v2 authentication.
--size-only Skip based on size only, not mod-time or checksum --sftp-ask-password Allow asking for SFTP password when needed.
--skip-links Don't warn about skipped symlinks. --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) --sftp-host string SSH host to connect to
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40) --sftp-key-file string Path to PEM-encoded private key file, leave blank or set key-use-agent to use ssh-agent.
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") --sftp-key-file-pass string The passphrase to decrypt the PEM-encoded private key file.
--stats-one-line Make the stats fit on one line. --sftp-key-use-agent When set forces the usage of the ssh-agent.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") --sftp-pass string SSH password, leave blank to use ssh-agent.
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) --sftp-path-override string Override path used by SSH connection.
--suffix string Suffix for use with --backup-dir. --sftp-port string SSH port, leave blank to use default (22)
--swift-auth string Authentication URL for server (OS_AUTH_URL). --sftp-set-modtime Set the modified time on the remote if set. (default true)
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) --sftp-user string SSH username, leave blank for current username, ncw
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) --size-only Skip based on size only, not mod-time or checksum
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) --skip-links Don't warn about skipped symlinks.
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form. --stats-file-name-length int Max file name length in stats. 0 for no limit (default 45)
--swift-key string API key or password (OS_PASSWORD). --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--swift-region string Region name - optional (OS_REGION_NAME) --stats-one-line Make the stats fit on one line.
--swift-storage-policy string The storage policy to use when creating a new container --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL) --streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) --suffix string Suffix for use with --backup-dir.
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) --swift-application-credential-id string Application Credential ID (OS_APPLICATION_CREDENTIAL_ID)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) --swift-application-credential-name string Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME)
--swift-user string User name to log in (OS_USERNAME). --swift-application-credential-secret string Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET)
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). --swift-auth string Authentication URL for server (OS_AUTH_URL).
--syslog Use Syslog for logging --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--timeout duration IO idle timeout (default 5m0s) --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--tpslimit float Limit HTTP transactions per second to this. --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--track-renames When synchronizing, track file renames and do a server side move if possible --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--transfers int Number of file transfers to run in parallel. (default 4) --swift-key string API key or password (OS_PASSWORD).
--union-remotes string List of space separated remotes. --swift-no-chunk Don't chunk files during streaming upload.
-u, --update Skip files that are newer on the destination. --swift-region string Region name - optional (OS_REGION_NAME)
--use-server-modtime Use server modified time instead of object metadata --swift-storage-policy string The storage policy to use when creating a new container
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.45") --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
-v, --verbose count Print lots more stuff (repeat for more) --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon) --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--webdav-pass string Password. --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--webdav-url string URL of http host to connect to --swift-user string User name to log in (OS_USERNAME).
--webdav-user string User name --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--webdav-vendor string Name of the Webdav site/service/software you are using --syslog Use Syslog for logging
--yandex-client-id string Yandex Client Id --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--yandex-client-secret string Yandex Client Secret --timeout duration IO idle timeout (default 5m0s)
--yandex-unlink Remove existing public link to file/folder with link command rather than creating. --tpslimit float Limit HTTP transactions per second to this.
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
--union-remotes string List of space separated remotes.
-u, --update Skip files that are newer on the destination.
--use-cookies Enable session cookiejar.
--use-mmap Use mmap allocator (see docs).
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
--yandex-unlink Remove existing public link to file/folder with link command rather than creating.
``` ```
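Several of the global flags above compose on a single command line. As a hedged illustration only (the source path and the remote name `remote:backup` are placeholders, not taken from this document, and require a configured rclone remote):

```shell
# Sync using a few of the global flags documented above.
# --fast-list uses more memory but fewer listing transactions;
# --track-renames turns detected renames into server-side moves
# on backends that support them.
rclone sync --fast-list --track-renames --transfers 8 --log-level INFO \
    /path/to/src remote:backup
```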
### SEE ALSO
* [rclone config](/commands/rclone_config/) - Enter an interactive configuration session.
###### Auto generated by spf13/cobra on 9-Feb-2019
@@ -1,5 +1,5 @@
---
date: 2019-02-09T10:42:18Z
title: "rclone copy"
slug: rclone_copy
url: /commands/rclone_copy/
@@ -47,6 +47,17 @@ written a trailing / - meaning "copy the contents of this directory".
This applies to all commands and whether you are talking about the
source or destination.
See the [--no-traverse](/docs/#no-traverse) option for controlling
whether rclone lists the destination directory or not. Supplying this
option when copying a small number of files into a large destination
can speed transfers up greatly.
For example, if you have many files in /path/to/src but only a few of
them change every day, you can copy all the files which have
changed recently very efficiently like this:
rclone copy --max-age 24h --no-traverse /path/to/src remote:
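A transfer like this can also be previewed before it writes anything; a sketch using the flags documented in this file (assuming a configured remote named `remote:`):

```shell
# Preview which recently-changed files would be copied, without writing anything.
rclone copy --max-age 24h --no-traverse --dry-run -v /path/to/src remote:

# When the listed files look right, run the real transfer with progress stats.
rclone copy --max-age 24h --no-traverse -P /path/to/src remote:
```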
**Note**: Use the `-P`/`--progress` flag to view real-time transfer statistics
@@ -63,285 +74,303 @@ rclone copy source:path dest:path [flags]
### Options inherited from parent commands
```
      --acd-auth-url string                          Auth server URL.
      --acd-client-id string                         Amazon Application Client ID.
      --acd-client-secret string                     Amazon Application Client Secret.
      --acd-templink-threshold SizeSuffix            Files >= this size will be downloaded via their tempLink. (default 9G)
      --acd-token-url string                         Token server url.
      --acd-upload-wait-per-gb Duration              Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
      --alias-remote string                          Remote or path to alias.
      --ask-password                                 Allow prompt for password for encrypted configuration. (default true)
      --auto-confirm                                 If enabled, do not request console confirmation.
      --azureblob-access-tier string                 Access tier of blob: hot, cool or archive.
      --azureblob-account string                     Storage Account Name (leave blank to use connection string or SAS URL)
      --azureblob-chunk-size SizeSuffix              Upload chunk size (<= 100MB). (default 4M)
      --azureblob-endpoint string                    Endpoint for the service
      --azureblob-key string                         Storage Account Key (leave blank to use connection string or SAS URL)
      --azureblob-list-chunk int                     Size of blob list. (default 5000)
      --azureblob-sas-url string                     SAS URL for container level access only
      --azureblob-upload-cutoff SizeSuffix           Cutoff for switching to chunked upload (<= 256MB). (default 256M)
      --b2-account string                            Account ID or Application Key ID
      --b2-chunk-size SizeSuffix                     Upload chunk size. Must fit in memory. (default 96M)
      --b2-disable-checksum                          Disable checksums for large (> upload cutoff) files
      --b2-endpoint string                           Endpoint for the service.
      --b2-hard-delete                               Permanently delete files on remote removal, otherwise hide files.
      --b2-key string                                Application Key
      --b2-test-mode string                          A flag string for X-Bz-Test-Mode header for debugging.
--b2-versions Include old versions in directory listings. --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M)
--backup-dir string Make backups into hierarchy based in DIR. --b2-versions Include old versions in directory listings.
--bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name. --backup-dir string Make backups into hierarchy based in DIR.
--box-client-id string Box App Client Id. --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
--box-client-secret string Box App Client Secret --box-client-id string Box App Client Id.
--box-commit-retries int Max number of times to try committing a multipart file. (default 100) --box-client-secret string Box App Client Secret
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M) --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
--buffer-size int In memory buffer size when reading files for each --transfer. (default 16M) --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable. --buffer-size SizeSuffix In memory buffer size when reading files for each --transfer. (default 16M)
--cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s) --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming. --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
--cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend") --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
--cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M) --cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend")
--cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G) --cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M)
--cache-db-path string Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend") --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G)
--cache-db-purge Clear all the cached data for this remote on start. --cache-db-path string Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend")
--cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s) --cache-db-purge Clear all the cached data for this remote on start.
--cache-dir string Directory rclone will use for caching. (default "$HOME/.cache/rclone") --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s) --cache-dir string Directory rclone will use for caching. (default "$HOME/.cache/rclone")
--cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
--cache-plex-password string The password of the Plex user --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server
--cache-plex-url string The URL of the Plex server --cache-plex-password string The password of the Plex user
--cache-plex-username string The username of the Plex user --cache-plex-url string The URL of the Plex server
--cache-read-retries int How many times to retry a read from a cache storage. (default 10) --cache-plex-username string The username of the Plex user
--cache-remote string Remote to cache. --cache-read-retries int How many times to retry a read from a cache storage. (default 10)
--cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1) --cache-remote string Remote to cache.
--cache-tmp-upload-path string Directory to keep temporary files until they are uploaded. --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1)
--cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s) --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded.
--cache-workers int How many workers should run in parallel to download chunks. (default 4) --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
--cache-writes Cache file data on writes through the FS --cache-workers int How many workers should run in parallel to download chunks. (default 4)
--checkers int Number of checkers to run in parallel. (default 8) --cache-writes Cache file data on writes through the FS
-c, --checksum Skip based on checksum & size, not mod-time & size --checkers int Number of checkers to run in parallel. (default 8)
--config string Config file. (default "/home/ncw/.rclone.conf") -c, --checksum Skip based on checksum (if available) & size, not mod-time & size
--contimeout duration Connect timeout (default 1m0s) --config string Config file. (default "/home/ncw/.rclone.conf")
-L, --copy-links Follow symlinks and copy the pointed to item. --contimeout duration Connect timeout (default 1m0s)
--cpuprofile string Write cpu profile to file -L, --copy-links Follow symlinks and copy the pointed to item.
--crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true) --cpuprofile string Write cpu profile to file
--crypt-filename-encryption string How to encrypt the filenames. (default "standard") --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--crypt-password string Password or pass phrase for encryption. --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--crypt-password2 string Password or pass phrase for salt. Optional but recommended. --crypt-password string Password or pass phrase for encryption.
--crypt-remote string Remote to encrypt/decrypt. --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
--crypt-show-mapping For all files listed show how the names encrypt. --crypt-remote string Remote to encrypt/decrypt.
--delete-after When synchronizing, delete files on destination after transferring (default) --crypt-show-mapping For all files listed show how the names encrypt.
--delete-before When synchronizing, delete files on destination before transferring --delete-after When synchronizing, delete files on destination after transferring (default)
--delete-during When synchronizing, delete files during transfer --delete-before When synchronizing, delete files on destination before transferring
--delete-excluded Delete files on dest excluded from sync --delete-during When synchronizing, delete files during transfer
--disable string Disable a comma separated list of features. Use help to see a list. --delete-excluded Delete files on dest excluded from sync
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded. --disable string Disable a comma separated list of features. Use help to see a list.
--drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time. --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--drive-alternate-export Use alternate export URLs for google documents export., --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
--drive-auth-owner-only Only consider files owned by the authenticated user. --drive-alternate-export Use alternate export URLs for google documents export.,
--drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M) --drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-client-id string Google Application Client Id --drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M)
--drive-client-secret string Google Application Client Secret --drive-client-id string Google Application Client Id
--drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") --drive-client-secret string Google Application Client Secret
--drive-formats string Deprecated: see export_formats --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-impersonate string Impersonate this user when using a service account. --drive-formats string Deprecated: see export_formats
--drive-import-formats string Comma separated list of preferred formats for uploading Google docs. --drive-impersonate string Impersonate this user when using a service account.
--drive-keep-revision-forever Keep new head revision of each file forever. --drive-import-formats string Comma separated list of preferred formats for uploading Google docs.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000) --drive-keep-revision-forever Keep new head revision of each file forever.
--drive-root-folder-id string ID of the root folder --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-scope string Scope that rclone should use when requesting access from drive. --drive-pacer-burst int Number of API calls to allow without sleeping. (default 100)
--drive-service-account-credentials string Service Account Credentials JSON blob --drive-pacer-min-sleep Duration Minimum time to sleep between API calls. (default 100ms)
--drive-service-account-file string Service Account Credentials JSON file path --drive-root-folder-id string ID of the root folder
--drive-shared-with-me Only show files that are shared with me. --drive-scope string Scope that rclone should use when requesting access from drive.
--drive-skip-gdocs Skip google documents in all listings. --drive-service-account-credentials string Service Account Credentials JSON blob
--drive-team-drive string ID of the Team Drive --drive-service-account-file string Service Account Credentials JSON file path
--drive-trashed-only Only show files that are in the trash. --drive-shared-with-me Only show files that are shared with me.
--drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M) --drive-skip-gdocs Skip google documents in all listings.
--drive-use-created-date Use file created date instead of modified date., --drive-team-drive string ID of the Team Drive
--drive-use-trash Send files to the trash instead of deleting permanently. (default true) --drive-trashed-only Only show files that are in the trash.
--drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off) --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M) --drive-use-created-date Use file created date instead of modified date.,
--dropbox-client-id string Dropbox App Client Id --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--dropbox-client-secret string Dropbox App Client Secret --drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off)
--dropbox-impersonate string Impersonate this user when using a business account. --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M)
-n, --dry-run Do a trial run with no permanent changes --dropbox-client-id string Dropbox App Client Id
--dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles --dropbox-client-secret string Dropbox App Client Secret
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info --dropbox-impersonate string Impersonate this user when using a business account.
--dump-headers Dump HTTP bodies - may contain sensitive info -n, --dry-run Do a trial run with no permanent changes
--exclude stringArray Exclude files matching pattern --dump DumpFlags List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--exclude-from stringArray Read exclude patterns from file --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--exclude-if-present string Exclude directories if filename is present --dump-headers Dump HTTP bodies - may contain sensitive info
--fast-list Use recursive list if available. Uses more memory but fewer transactions. --exclude stringArray Exclude files matching pattern
--files-from stringArray Read list of source-file names from file --exclude-from stringArray Read exclude patterns from file
-f, --filter stringArray Add a file-filtering rule --exclude-if-present string Exclude directories if filename is present
--filter-from stringArray Read filtering patterns from a file --fast-list Use recursive list if available. Uses more memory but fewer transactions.
--ftp-host string FTP host to connect to --files-from stringArray Read list of source-file names from file
--ftp-pass string FTP password -f, --filter stringArray Add a file-filtering rule
--ftp-port string FTP port, leave blank to use default (21) --filter-from stringArray Read filtering patterns from a file
--ftp-user string FTP username, leave blank for current username, $USER --ftp-host string FTP host to connect to
--gcs-bucket-acl string Access Control List for new buckets. --ftp-pass string FTP password
--gcs-client-id string Google Application Client Id --ftp-port string FTP port, leave blank to use default (21)
--gcs-client-secret string Google Application Client Secret --ftp-user string FTP username, leave blank for current username, $USER
--gcs-location string Location for the newly created buckets. --gcs-bucket-acl string Access Control List for new buckets.
--gcs-object-acl string Access Control List for new objects. --gcs-client-id string Google Application Client Id
--gcs-project-number string Project number. --gcs-client-secret string Google Application Client Secret
--gcs-service-account-file string Service Account Credentials JSON file path --gcs-location string Location for the newly created buckets.
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage. --gcs-object-acl string Access Control List for new objects.
--http-url string URL of http host to connect to --gcs-project-number string Project number.
--hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) --gcs-service-account-file string Service Account Credentials JSON file path
--hubic-client-id string Hubic Client Id --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--hubic-client-secret string Hubic Client Secret --http-url string URL of http host to connect to
--ignore-case Ignore case in filters (case insensitive) --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--ignore-checksum Skip post copy check of checksums. --hubic-client-id string Hubic Client Id
--ignore-errors delete even if there are I/O errors --hubic-client-secret string Hubic Client Secret
--ignore-existing Skip all files that exist on destination --hubic-no-chunk Don't chunk files during streaming upload.
--ignore-size Ignore size when skipping use mod-time or checksum. --ignore-case Ignore case in filters (case insensitive)
-I, --ignore-times Don't skip files that match size and time - transfer all files --ignore-checksum Skip post copy check of checksums.
--immutable Do not modify files. Fail if existing files have been modified. --ignore-errors delete even if there are I/O errors
--include stringArray Include files matching pattern --ignore-existing Skip all files that exist on destination
--include-from stringArray Read include patterns from file --ignore-size Ignore size when skipping use mod-time or checksum.
--jottacloud-hard-delete Delete files permanently rather than putting them into the trash. -I, --ignore-times Don't skip files that match size and time - transfer all files
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M) --immutable Do not modify files. Fail if existing files have been modified.
--jottacloud-mountpoint string The mountpoint to use. --include stringArray Include files matching pattern
--jottacloud-pass string Password. --include-from stringArray Read include patterns from file
--jottacloud-unlink Remove existing public link to file/folder with link command rather than creating. --jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
--jottacloud-user string User Name --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--local-no-check-updated Don't check to see if the files change during upload --jottacloud-mountpoint string The mountpoint to use.
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated) --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
--local-nounc string Disable UNC (long path names) conversion on Windows --jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fail's. (default 10M)
--log-file string Log everything to this file --jottacloud-user string User Name:
--log-format string Comma separated list of log format options (default "date,time") -l, --links Translate symlinks to/from regular files with a '.rclonelink' extension
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") --local-no-check-updated Don't check to see if the files change during upload
--low-level-retries int Number of low level retries to do. (default 10) --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) --local-nounc string Disable UNC (long path names) conversion on Windows
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000) --log-file string Log everything to this file
--max-delete int When synchronizing, limit the number of deletes (default -1) --log-format string Comma separated list of log format options (default "date,time")
--max-depth int If set limits the recursion depth to this. (default -1) --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) --low-level-retries int Number of low level retries to do. (default 10)
--max-transfer int Maximum size of data to transfer. (default off) --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--mega-debug Output more debug from Mega. --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--mega-hard-delete Delete files permanently rather than putting them into the trash. --max-delete int When synchronizing, limit the number of deletes (default -1)
--mega-pass string Password. --max-depth int If set limits the recursion depth to this. (default -1)
--mega-user string User name --max-size SizeSuffix Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--memprofile string Write memory profile to file --max-transfer SizeSuffix Maximum size of data to transfer. (default off)
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) --mega-debug Output more debug from Mega.
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) --mega-hard-delete Delete files permanently rather than putting them into the trash.
--modify-window duration Max time diff to be considered the same (default 1ns) --mega-pass string Password.
--no-check-certificate Do not verify the server SSL certificate. Insecure. --mega-user string User name
--no-gzip-encoding Don't set Accept-Encoding: gzip. --memprofile string Write memory profile to file
--no-traverse Obsolete - does nothing. --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--no-update-modtime Don't update destination mod-time if files identical. --min-size SizeSuffix Only transfer files bigger than this in k or suffix b|k|M|G (default off)
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only). --modify-window duration Max time diff to be considered the same (default 1ns)
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M) --no-check-certificate Do not verify the server SSL certificate. Insecure.
--onedrive-client-id string Microsoft App Client Id --no-gzip-encoding Don't set Accept-Encoding: gzip.
--onedrive-client-secret string Microsoft App Client Secret --no-traverse Don't traverse destination file system on copy.
--onedrive-drive-id string The ID of the drive to use --no-update-modtime Don't update destination mod-time if files identical.
--onedrive-drive-type string The type of the drive ( personal | business | documentLibrary ) -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-expose-onenote-files Set to make OneNote files show up in directory listings. --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--opendrive-password string Password. --onedrive-client-id string Microsoft App Client Id
--opendrive-username string Username --onedrive-client-secret string Microsoft App Client Secret
--pcloud-client-id string Pcloud App Client Id --onedrive-drive-id string The ID of the drive to use
--pcloud-client-secret string Pcloud App Client Secret --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
-P, --progress Show progress during transfer. --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
--qingstor-access-key-id string QingStor Access Key ID --opendrive-password string Password.
--qingstor-connection-retries int Number of connection retries. (default 3) --opendrive-username string Username
--qingstor-endpoint string Enter a endpoint URL to connection QingStor API. --pcloud-client-id string Pcloud App Client Id
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank. --pcloud-client-secret string Pcloud App Client Secret
--qingstor-secret-access-key string QingStor Secret Access Key (password) -P, --progress Show progress during transfer.
--qingstor-zone string Zone to connect to. --qingstor-access-key-id string QingStor Access Key ID
-q, --quiet Print as little stuff as possible --qingstor-chunk-size SizeSuffix Chunk size to use for uploading. (default 4M)
--rc Enable the remote control server. --qingstor-connection-retries int Number of connection retries. (default 3)
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") --qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate) --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
--rc-client-ca string Client certificate authority to verify clients with --qingstor-secret-access-key string QingStor Secret Access Key (password)
--rc-files string Path to local files to serve on the HTTP server. --qingstor-upload-concurrency int Concurrency for multipart uploads. (default 1)
--rc-htpasswd string htpasswd file - if not provided no authentication is done --qingstor-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M)
--rc-key string SSL PEM Private key --qingstor-zone string Zone to connect to.
--rc-max-header-bytes int Maximum size of request header (default 4096) -q, --quiet Print as little stuff as possible
--rc-no-auth Don't require auth for certain methods. --rc Enable the remote control server.
--rc-pass string Password for authentication. --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
--rc-realm string realm for authentication (default "rclone") --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--rc-serve Enable the serving of remote objects. --rc-client-ca string Client certificate authority to verify clients with
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) --rc-files string Path to local files to serve on the HTTP server.
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) --rc-htpasswd string htpasswd file - if not provided no authentication is done
--rc-user string User name for authentication. --rc-key string SSL PEM Private key
--retries int Retry operations this many times if they fail (default 3) --rc-max-header-bytes int Maximum size of request header (default 4096)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) --rc-no-auth Don't require auth for certain methods.
--s3-access-key-id string AWS Access Key ID. --rc-pass string Password for authentication.
--s3-acl string Canned ACL used when creating buckets and storing or copying objects. --rc-realm string realm for authentication (default "rclone")
--s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M) --rc-serve Enable the serving of remote objects.
--s3-disable-checksum Don't store MD5 checksum with object metadata --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--s3-endpoint string Endpoint for S3 API. --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). --rc-user string User name for authentication.
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true) --retries int Retry operations this many times if they fail (default 3)
--s3-location-constraint string Location constraint - must be set to match the Region. --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--s3-provider string Choose your S3 provider. --s3-access-key-id string AWS Access Key ID.
--s3-region string Region to connect to. --s3-acl string Canned ACL used when creating buckets and storing or copying objects.
--s3-secret-access-key string AWS Secret Access Key (password) --s3-bucket-acl string Canned ACL used when creating buckets.
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
--s3-session-token string An AWS session token --s3-disable-checksum Don't store MD5 checksum with object metadata
      --s3-endpoint string                        Endpoint for S3 API.
      --s3-env-auth                               Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
      --s3-force-path-style                       If true use path style access if false use virtual hosted style. (default true)
      --s3-location-constraint string             Location constraint - must be set to match the Region.
      --s3-provider string                        Choose your S3 provider.
      --s3-region string                          Region to connect to.
      --s3-secret-access-key string               AWS Secret Access Key (password)
      --s3-server-side-encryption string          The server-side encryption algorithm used when storing this object in S3.
      --s3-session-token string                   An AWS session token
      --s3-sse-kms-key-id string                  If using KMS ID you must provide the ARN of Key.
      --s3-storage-class string                   The storage class to use when storing new objects in S3.
      --s3-upload-concurrency int                 Concurrency for multipart uploads. (default 4)
      --s3-upload-cutoff SizeSuffix               Cutoff for switching to chunked upload (default 200M)
      --s3-v2-auth                                If true use v2 authentication.
      --sftp-ask-password                         Allow asking for SFTP password when needed.
      --sftp-disable-hashcheck                    Disable the execution of SSH commands to determine if remote file hashing is available.
      --sftp-host string                          SSH host to connect to
      --sftp-key-file string                      Path to PEM-encoded private key file, leave blank or set key-use-agent to use ssh-agent.
      --sftp-key-file-pass string                 The passphrase to decrypt the PEM-encoded private key file.
      --sftp-key-use-agent                        When set forces the usage of the ssh-agent.
      --sftp-pass string                          SSH password, leave blank to use ssh-agent.
      --sftp-path-override string                 Override path used by SSH connection.
      --sftp-port string                          SSH port, leave blank to use default (22)
      --sftp-set-modtime                          Set the modified time on the remote if set. (default true)
      --sftp-use-insecure-cipher                  Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
      --sftp-user string                          SSH username, leave blank for current username, ncw
      --size-only                                 Skip based on size only, not mod-time or checksum
      --skip-links                                Don't warn about skipped symlinks.
      --stats duration                            Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
      --stats-file-name-length int                Max file name length in stats. 0 for no limit (default 45)
      --stats-log-level string                    Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
      --stats-one-line                            Make the stats fit on one line.
      --stats-unit string                         Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
      --streaming-upload-cutoff SizeSuffix        Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
      --suffix string                             Suffix for use with --backup-dir.
      --swift-application-credential-id string    Application Credential ID (OS_APPLICATION_CREDENTIAL_ID)
      --swift-application-credential-name string  Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME)
      --swift-application-credential-secret string  Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET)
      --swift-auth string                         Authentication URL for server (OS_AUTH_URL).
      --swift-auth-token string                   Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
      --swift-auth-version int                    AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
      --swift-chunk-size SizeSuffix               Above this size files will be chunked into a _segments container. (default 5G)
      --swift-domain string                       User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
      --swift-endpoint-type string                Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
      --swift-env-auth                            Get swift credentials from environment variables in standard OpenStack form.
      --swift-key string                          API key or password (OS_PASSWORD).
      --swift-no-chunk                            Don't chunk files during streaming upload.
      --swift-region string                       Region name - optional (OS_REGION_NAME)
      --swift-storage-policy string               The storage policy to use when creating a new container
      --swift-storage-url string                  Storage URL - optional (OS_STORAGE_URL)
      --swift-tenant string                       Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
      --swift-tenant-domain string                Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
      --swift-tenant-id string                    Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
      --swift-user string                         User name to log in (OS_USERNAME).
      --swift-user-id string                      User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
      --syslog                                    Use Syslog for logging
      --syslog-facility string                    Facility for syslog, eg KERN,USER,... (default "DAEMON")
      --timeout duration                          IO idle timeout (default 5m0s)
      --tpslimit float                            Limit HTTP transactions per second to this.
      --tpslimit-burst int                        Max burst of transactions for --tpslimit. (default 1)
      --track-renames                             When synchronizing, track file renames and do a server side move if possible
      --transfers int                             Number of file transfers to run in parallel. (default 4)
      --union-remotes string                      List of space separated remotes.
  -u, --update                                    Skip files that are newer on the destination.
      --use-cookies                               Enable session cookiejar.
      --use-mmap                                  Use mmap allocator (see docs).
      --use-server-modtime                        Use server modified time instead of object metadata
      --user-agent string                         Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46")
  -v, --verbose count                             Print lots more stuff (repeat for more)
      --webdav-bearer-token string                Bearer token instead of user/pass (eg a Macaroon)
      --webdav-pass string                        Password.
      --webdav-url string                         URL of http host to connect to
      --webdav-user string                        User name
      --webdav-vendor string                      Name of the Webdav site/service/software you are using
      --yandex-client-id string                   Yandex Client Id
      --yandex-client-secret string               Yandex Client Secret
      --yandex-unlink                             Remove existing public link to file/folder with link command rather than creating.
```
### SEE ALSO
* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
###### Auto generated by spf13/cobra on 9-Feb-2019
View File
@ -1,5 +1,5 @@
---
date: 2019-02-09T10:42:18Z
title: "rclone copyto"
slug: rclone_copyto
url: /commands/rclone_copyto/
@ -53,285 +53,303 @@ rclone copyto source:path dest:path [flags]
### Options inherited from parent commands
```
      --acd-auth-url string                       Auth server URL.
      --acd-client-id string                      Amazon Application Client ID.
      --acd-client-secret string                  Amazon Application Client Secret.
      --acd-templink-threshold SizeSuffix         Files >= this size will be downloaded via their tempLink. (default 9G)
      --acd-token-url string                      Token server url.
      --acd-upload-wait-per-gb Duration           Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
      --alias-remote string                       Remote or path to alias.
      --ask-password                              Allow prompt for password for encrypted configuration. (default true)
      --auto-confirm                              If enabled, do not request console confirmation.
      --azureblob-access-tier string              Access tier of blob: hot, cool or archive.
      --azureblob-account string                  Storage Account Name (leave blank to use connection string or SAS URL)
      --azureblob-chunk-size SizeSuffix           Upload chunk size (<= 100MB). (default 4M)
      --azureblob-endpoint string                 Endpoint for the service
      --azureblob-key string                      Storage Account Key (leave blank to use connection string or SAS URL)
      --azureblob-list-chunk int                  Size of blob list. (default 5000)
      --azureblob-sas-url string                  SAS URL for container level access only
      --azureblob-upload-cutoff SizeSuffix        Cutoff for switching to chunked upload (<= 256MB). (default 256M)
      --b2-account string                         Account ID or Application Key ID
      --b2-chunk-size SizeSuffix                  Upload chunk size. Must fit in memory. (default 96M)
      --b2-disable-checksum                       Disable checksums for large (> upload cutoff) files
      --b2-endpoint string                        Endpoint for the service.
      --b2-hard-delete                            Permanently delete files on remote removal, otherwise hide files.
      --b2-key string                             Application Key
      --b2-test-mode string                       A flag string for X-Bz-Test-Mode header for debugging.
      --b2-upload-cutoff SizeSuffix               Cutoff for switching to chunked upload. (default 200M)
      --b2-versions                               Include old versions in directory listings.
      --backup-dir string                         Make backups into hierarchy based in DIR.
      --bind string                               Local address to bind to for outgoing connections, IPv4, IPv6 or name.
      --box-client-id string                      Box App Client Id.
      --box-client-secret string                  Box App Client Secret
      --box-commit-retries int                    Max number of times to try committing a multipart file. (default 100)
      --box-upload-cutoff SizeSuffix              Cutoff for switching to multipart upload (>= 50MB). (default 50M)
      --buffer-size SizeSuffix                    In memory buffer size when reading files for each --transfer. (default 16M)
      --bwlimit BwTimetable                       Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
      --cache-chunk-clean-interval Duration       How often should the cache perform cleanups of the chunk storage. (default 1m0s)
      --cache-chunk-no-memory                     Disable the in-memory cache for storing chunks during streaming.
      --cache-chunk-path string                   Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend")
      --cache-chunk-size SizeSuffix               The size of a chunk (partial file data). (default 5M)
      --cache-chunk-total-size SizeSuffix         The total size that the chunks can take up on the local disk. (default 10G)
      --cache-db-path string                      Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend")
      --cache-db-purge                            Clear all the cached data for this remote on start.
      --cache-db-wait-time Duration               How long to wait for the DB to be available - 0 is unlimited (default 1s)
      --cache-dir string                          Directory rclone will use for caching. (default "$HOME/.cache/rclone")
      --cache-info-age Duration                   How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
      --cache-plex-insecure string                Skip all certificate verifications when connecting to the Plex server
      --cache-plex-password string                The password of the Plex user
      --cache-plex-url string                     The URL of the Plex server
      --cache-plex-username string                The username of the Plex user
      --cache-read-retries int                    How many times to retry a read from a cache storage. (default 10)
      --cache-remote string                       Remote to cache.
      --cache-rps int                             Limits the number of requests per second to the source FS (-1 to disable) (default -1)
      --cache-tmp-upload-path string              Directory to keep temporary files until they are uploaded.
      --cache-tmp-wait-time Duration              How long should files be stored in local cache before being uploaded (default 15s)
      --cache-workers int                         How many workers should run in parallel to download chunks. (default 4)
      --cache-writes                              Cache file data on writes through the FS
      --checkers int                              Number of checkers to run in parallel. (default 8)
  -c, --checksum                                  Skip based on checksum (if available) & size, not mod-time & size
      --config string                             Config file. (default "/home/ncw/.rclone.conf")
      --contimeout duration                       Connect timeout (default 1m0s)
  -L, --copy-links                                Follow symlinks and copy the pointed to item.
      --cpuprofile string                         Write cpu profile to file
      --crypt-directory-name-encryption           Option to either encrypt directory names or leave them intact. (default true)
      --crypt-filename-encryption string          How to encrypt the filenames. (default "standard")
      --crypt-password string                     Password or pass phrase for encryption.
      --crypt-password2 string                    Password or pass phrase for salt. Optional but recommended.
      --crypt-remote string                       Remote to encrypt/decrypt.
      --crypt-show-mapping                        For all files listed show how the names encrypt.
      --delete-after                              When synchronizing, delete files on destination after transferring (default)
      --delete-before                             When synchronizing, delete files on destination before transferring
      --delete-during                             When synchronizing, delete files during transfer
      --delete-excluded                           Delete files on dest excluded from sync
      --disable string                            Disable a comma separated list of features. Use help to see a list.
      --drive-acknowledge-abuse                   Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
      --drive-allow-import-name-change            Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
      --drive-alternate-export                    Use alternate export URLs for google documents export.,
      --drive-auth-owner-only                     Only consider files owned by the authenticated user.
      --drive-chunk-size SizeSuffix               Upload chunk size. Must a power of 2 >= 256k. (default 8M)
      --drive-client-id string                    Google Application Client Id
      --drive-client-secret string                Google Application Client Secret
      --drive-export-formats string               Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
      --drive-formats string                      Deprecated: see export_formats
      --drive-impersonate string                  Impersonate this user when using a service account.
      --drive-import-formats string               Comma separated list of preferred formats for uploading Google docs.
      --drive-keep-revision-forever               Keep new head revision of each file forever.
      --drive-list-chunk int                      Size of listing chunk 100-1000. 0 to disable. (default 1000)
      --drive-pacer-burst int                     Number of API calls to allow without sleeping. (default 100)
      --drive-pacer-min-sleep Duration            Minimum time to sleep between API calls. (default 100ms)
      --drive-root-folder-id string               ID of the root folder
      --drive-scope string                        Scope that rclone should use when requesting access from drive.
      --drive-service-account-credentials string  Service Account Credentials JSON blob
      --drive-service-account-file string         Service Account Credentials JSON file path
      --drive-shared-with-me                      Only show files that are shared with me.
      --drive-skip-gdocs                          Skip google documents in all listings.
      --drive-team-drive string                   ID of the Team Drive
      --drive-trashed-only                        Only show files that are in the trash.
      --drive-upload-cutoff SizeSuffix            Cutoff for switching to chunked upload (default 8M)
      --drive-use-created-date                    Use file created date instead of modified date.,
      --drive-use-trash                           Send files to the trash instead of deleting permanently. (default true)
      --drive-v2-download-min-size SizeSuffix     If Object's are greater, use drive v2 API to download. (default off)
      --dropbox-chunk-size SizeSuffix             Upload chunk size. (< 150M). (default 48M)
      --dropbox-client-id string                  Dropbox App Client Id
      --dropbox-client-secret string              Dropbox App Client Secret
      --dropbox-impersonate string                Impersonate this user when using a business account.
  -n, --dry-run                                   Do a trial run with no permanent changes
      --dump DumpFlags                            List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
      --dump-bodies                               Dump HTTP headers and bodies - may contain sensitive info
      --dump-headers                              Dump HTTP bodies - may contain sensitive info
      --exclude stringArray                       Exclude files matching pattern
      --exclude-from stringArray                  Read exclude patterns from file
      --exclude-if-present string                 Exclude directories if filename is present
      --fast-list                                 Use recursive list if available. Uses more memory but fewer transactions.
      --files-from stringArray                    Read list of source-file names from file
  -f, --filter stringArray                        Add a file-filtering rule
      --filter-from stringArray                   Read filtering patterns from a file
      --ftp-host string                           FTP host to connect to
      --ftp-pass string                           FTP password
      --ftp-port string                           FTP port, leave blank to use default (21)
      --ftp-user string                           FTP username, leave blank for current username, $USER
      --gcs-bucket-acl string                     Access Control List for new buckets.
      --gcs-client-id string                      Google Application Client Id
      --gcs-client-secret string                  Google Application Client Secret
      --gcs-location string                       Location for the newly created buckets.
      --gcs-object-acl string                     Access Control List for new objects.
      --gcs-project-number string                 Project number.
      --gcs-service-account-file string           Service Account Credentials JSON file path
      --gcs-storage-class string                  The storage class to use when storing objects in Google Cloud Storage.
      --http-url string                           URL of http host to connect to
      --hubic-chunk-size SizeSuffix               Above this size files will be chunked into a _segments container. (default 5G)
      --hubic-client-id string                    Hubic Client Id
      --hubic-client-secret string                Hubic Client Secret
      --hubic-no-chunk                            Don't chunk files during streaming upload.
      --ignore-case                               Ignore case in filters (case insensitive)
      --ignore-checksum                           Skip post copy check of checksums.
      --ignore-errors                             delete even if there are I/O errors
      --ignore-existing                           Skip all files that exist on destination
      --ignore-size                               Ignore size when skipping use mod-time or checksum.
  -I, --ignore-times                              Don't skip files that match size and time - transfer all files
      --immutable                                 Do not modify files. Fail if existing files have been modified.
      --include stringArray                       Include files matching pattern
      --include-from stringArray                  Read include patterns from file
      --jottacloud-hard-delete                    Delete files permanently rather than putting them into the trash.
      --jottacloud-md5-memory-limit SizeSuffix    Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
      --jottacloud-mountpoint string              The mountpoint to use.
      --jottacloud-unlink                         Remove existing public link to file/folder with link command rather than creating.
      --jottacloud-upload-resume-limit SizeSuffix  Files bigger than this can be resumed if the upload fail's. (default 10M)
      --jottacloud-user string                    User Name:
  -l, --links                                     Translate symlinks to/from regular files with a '.rclonelink' extension
      --local-no-check-updated                    Don't check to see if the files change during upload
      --local-no-unicode-normalization            Don't apply unicode normalization to paths and filenames (Deprecated)
      --local-nounc string                        Disable UNC (long path names) conversion on Windows
      --log-file string                           Log everything to this file
      --log-format string                         Comma separated list of log format options (default "date,time")
      --log-level string                          Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
      --low-level-retries int                     Number of low level retries to do. (default 10)
      --max-age Duration                          Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
      --max-backlog int                           Maximum number of objects in sync or check backlog. (default 10000)
      --max-delete int                            When synchronizing, limit the number of deletes (default -1)
      --max-depth int                             If set limits the recursion depth to this. (default -1)
      --max-size SizeSuffix                       Only transfer files smaller than this in k or suffix b|k|M|G (default off)
      --max-transfer SizeSuffix                   Maximum size of data to transfer. (default off)
      --mega-debug                                Output more debug from Mega.
      --mega-hard-delete                          Delete files permanently rather than putting them into the trash.
      --mega-pass string                          Password.
      --mega-user string                          User name
      --memprofile string                         Write memory profile to file
      --min-age Duration                          Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
      --min-size SizeSuffix                       Only transfer files bigger than this in k or suffix b|k|M|G (default off)
      --modify-window duration                    Max time diff to be considered the same (default 1ns)
      --no-check-certificate                      Do not verify the server SSL certificate. Insecure.
--onedrive-client-id string Microsoft App Client Id --no-gzip-encoding Don't set Accept-Encoding: gzip.
--onedrive-client-secret string Microsoft App Client Secret --no-traverse Don't traverse destination file system on copy.
--onedrive-drive-id string The ID of the drive to use --no-update-modtime Don't update destination mod-time if files identical.
--onedrive-drive-type string The type of the drive ( personal | business | documentLibrary ) -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-expose-onenote-files Set to make OneNote files show up in directory listings. --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--opendrive-password string Password. --onedrive-client-id string Microsoft App Client Id
--opendrive-username string Username --onedrive-client-secret string Microsoft App Client Secret
--pcloud-client-id string Pcloud App Client Id --onedrive-drive-id string The ID of the drive to use
--pcloud-client-secret string Pcloud App Client Secret --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
-P, --progress Show progress during transfer. --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
--qingstor-access-key-id string QingStor Access Key ID --opendrive-password string Password.
--qingstor-connection-retries int Number of connection retries. (default 3) --opendrive-username string Username
--qingstor-endpoint string Enter a endpoint URL to connection QingStor API. --pcloud-client-id string Pcloud App Client Id
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank. --pcloud-client-secret string Pcloud App Client Secret
--qingstor-secret-access-key string QingStor Secret Access Key (password) -P, --progress Show progress during transfer.
--qingstor-zone string Zone to connect to. --qingstor-access-key-id string QingStor Access Key ID
-q, --quiet Print as little stuff as possible --qingstor-chunk-size SizeSuffix Chunk size to use for uploading. (default 4M)
--rc Enable the remote control server. --qingstor-connection-retries int Number of connection retries. (default 3)
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") --qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate) --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
--rc-client-ca string Client certificate authority to verify clients with --qingstor-secret-access-key string QingStor Secret Access Key (password)
--rc-files string Path to local files to serve on the HTTP server. --qingstor-upload-concurrency int Concurrency for multipart uploads. (default 1)
--rc-htpasswd string htpasswd file - if not provided no authentication is done --qingstor-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M)
--rc-key string SSL PEM Private key --qingstor-zone string Zone to connect to.
--rc-max-header-bytes int Maximum size of request header (default 4096) -q, --quiet Print as little stuff as possible
--rc-no-auth Don't require auth for certain methods. --rc Enable the remote control server.
--rc-pass string Password for authentication. --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
--rc-realm string realm for authentication (default "rclone") --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--rc-serve Enable the serving of remote objects. --rc-client-ca string Client certificate authority to verify clients with
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) --rc-files string Path to local files to serve on the HTTP server.
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) --rc-htpasswd string htpasswd file - if not provided no authentication is done
--rc-user string User name for authentication. --rc-key string SSL PEM Private key
--retries int Retry operations this many times if they fail (default 3) --rc-max-header-bytes int Maximum size of request header (default 4096)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) --rc-no-auth Don't require auth for certain methods.
--s3-access-key-id string AWS Access Key ID. --rc-pass string Password for authentication.
--s3-acl string Canned ACL used when creating buckets and storing or copying objects. --rc-realm string realm for authentication (default "rclone")
--s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M) --rc-serve Enable the serving of remote objects.
--s3-disable-checksum Don't store MD5 checksum with object metadata --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--s3-endpoint string Endpoint for S3 API. --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). --rc-user string User name for authentication.
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true) --retries int Retry operations this many times if they fail (default 3)
--s3-location-constraint string Location constraint - must be set to match the Region. --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--s3-provider string Choose your S3 provider. --s3-access-key-id string AWS Access Key ID.
--s3-region string Region to connect to. --s3-acl string Canned ACL used when creating buckets and storing or copying objects.
--s3-secret-access-key string AWS Secret Access Key (password) --s3-bucket-acl string Canned ACL used when creating buckets.
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
--s3-session-token string An AWS session token --s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. --s3-endpoint string Endpoint for S3 API.
--s3-storage-class string The storage class to use when storing new objects in S3. --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2) --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--s3-v2-auth If true use v2 authentication. --s3-location-constraint string Location constraint - must be set to match the Region.
--sftp-ask-password Allow asking for SFTP password when needed. --s3-provider string Choose your S3 provider.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. --s3-region string Region to connect to.
--sftp-host string SSH host to connect to --s3-secret-access-key string AWS Secret Access Key (password)
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent. --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--sftp-pass string SSH password, leave blank to use ssh-agent. --s3-session-token string An AWS session token
--sftp-path-override string Override path used by SSH connection. --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--sftp-port string SSH port, leave blank to use default (22) --s3-storage-class string The storage class to use when storing new objects in S3.
--sftp-set-modtime Set the modified time on the remote if set. (default true) --s3-upload-concurrency int Concurrency for multipart uploads. (default 4)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker. --s3-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M)
--sftp-user string SSH username, leave blank for current username, ncw --s3-v2-auth If true use v2 authentication.
--size-only Skip based on size only, not mod-time or checksum --sftp-ask-password Allow asking for SFTP password when needed.
--skip-links Don't warn about skipped symlinks. --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) --sftp-host string SSH host to connect to
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40) --sftp-key-file string Path to PEM-encoded private key file, leave blank or set key-use-agent to use ssh-agent.
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") --sftp-key-file-pass string The passphrase to decrypt the PEM-encoded private key file.
--stats-one-line Make the stats fit on one line. --sftp-key-use-agent When set forces the usage of the ssh-agent.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") --sftp-pass string SSH password, leave blank to use ssh-agent.
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) --sftp-path-override string Override path used by SSH connection.
--suffix string Suffix for use with --backup-dir. --sftp-port string SSH port, leave blank to use default (22)
--swift-auth string Authentication URL for server (OS_AUTH_URL). --sftp-set-modtime Set the modified time on the remote if set. (default true)
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) --sftp-user string SSH username, leave blank for current username, ncw
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) --size-only Skip based on size only, not mod-time or checksum
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) --skip-links Don't warn about skipped symlinks.
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form. --stats-file-name-length int Max file name length in stats. 0 for no limit (default 45)
--swift-key string API key or password (OS_PASSWORD). --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--swift-region string Region name - optional (OS_REGION_NAME) --stats-one-line Make the stats fit on one line.
--swift-storage-policy string The storage policy to use when creating a new container --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL) --streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) --suffix string Suffix for use with --backup-dir.
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) --swift-application-credential-id string Application Credential ID (OS_APPLICATION_CREDENTIAL_ID)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) --swift-application-credential-name string Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME)
--swift-user string User name to log in (OS_USERNAME). --swift-application-credential-secret string Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET)
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). --swift-auth string Authentication URL for server (OS_AUTH_URL).
--syslog Use Syslog for logging --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--timeout duration IO idle timeout (default 5m0s) --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--tpslimit float Limit HTTP transactions per second to this. --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--track-renames When synchronizing, track file renames and do a server side move if possible --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--transfers int Number of file transfers to run in parallel. (default 4) --swift-key string API key or password (OS_PASSWORD).
--union-remotes string List of space separated remotes. --swift-no-chunk Don't chunk files during streaming upload.
-u, --update Skip files that are newer on the destination. --swift-region string Region name - optional (OS_REGION_NAME)
--use-server-modtime Use server modified time instead of object metadata --swift-storage-policy string The storage policy to use when creating a new container
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.45") --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
-v, --verbose count Print lots more stuff (repeat for more) --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon) --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--webdav-pass string Password. --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--webdav-url string URL of http host to connect to --swift-user string User name to log in (OS_USERNAME).
--webdav-user string User name --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--webdav-vendor string Name of the Webdav site/service/software you are using --syslog Use Syslog for logging
--yandex-client-id string Yandex Client Id --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--yandex-client-secret string Yandex Client Secret --timeout duration IO idle timeout (default 5m0s)
--yandex-unlink Remove existing public link to file/folder with link command rather than creating. --tpslimit float Limit HTTP transactions per second to this.
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
--union-remotes string List of space separated remotes.
-u, --update Skip files that are newer on the destination.
--use-cookies Enable session cookiejar.
--use-mmap Use mmap allocator (see docs).
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
--yandex-unlink Remove existing public link to file/folder with link command rather than creating.
```
### SEE ALSO
* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
###### Auto generated by spf13/cobra on 9-Feb-2019
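Aside: every backend flag in the long list above can also be set persistently instead of on the command line, either as a key in `rclone.conf` (the flag name with the backend prefix dropped, as shown by the "Config:" lines in the backend docs) or as an environment variable (the "Env Var:" lines, e.g. `RCLONE_B2_DISABLE_CHECKSUM` for the new `--b2-disable-checksum`). A minimal sketch of a config fragment; the remote name `myb2` and the placeholder credentials are hypothetical, not from this commit:

```ini
# Hypothetical rclone.conf fragment -- remote name and credential
# values are placeholders.
[myb2]
type = b2
account = YOUR_ACCOUNT_ID
key = YOUR_APPLICATION_KEY
# Equivalent of passing --b2-disable-checksum on the command line:
disable_checksum = true
# Equivalent of --b2-chunk-size 96M:
chunk_size = 96M
```

The same flag-to-key pattern applies to the other backend options listed above.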
View File
@ -1,5 +1,5 @@
---
date: 2019-02-09T10:42:18Z
title: "rclone copyurl"
slug: rclone_copyurl
url: /commands/rclone_copyurl/
@ -28,285 +28,303 @@ rclone copyurl https://example.com dest:path [flags]
### Options inherited from parent commands
```
--acd-auth-url string Auth server URL.
--acd-client-id string Amazon Application Client ID.
--acd-client-secret string Amazon Application Client Secret.
--acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-token-url string Token server url.
--acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--alias-remote string Remote or path to alias.
--ask-password Allow prompt for password for encrypted configuration. (default true)
--auto-confirm If enabled, do not request console confirmation.
--azureblob-access-tier string Access tier of blob: hot, cool or archive.
--azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
--azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M)
--azureblob-endpoint string Endpoint for the service
--azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
--azureblob-list-chunk int Size of blob list. (default 5000)
--azureblob-sas-url string SAS URL for container level access only
--azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M)
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--b2-disable-checksum Disable checksums for large (> upload cutoff) files
--b2-endpoint string Endpoint for the service.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-key string Application Key
--b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
--b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M)
--b2-versions Include old versions in directory listings.
--backup-dir string Make backups into hierarchy based in DIR.
--bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
--box-client-id string Box App Client Id.
--box-client-secret string Box App Client Secret
--box-commit-retries int Max number of times to try committing a multipart file. (default 100)
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
--buffer-size SizeSuffix In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
--cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend")
--cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M)
--cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G)
--cache-db-path string Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend")
--cache-db-purge Clear all the cached data for this remote on start.
--cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string Directory rclone will use for caching. (default "$HOME/.cache/rclone")
--cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
--cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server
--cache-plex-password string The password of the Plex user
--cache-plex-url string The URL of the Plex server
--cache-plex-username string The username of the Plex user
--cache-read-retries int How many times to retry a read from a cache storage. (default 10)
--cache-remote string Remote to cache.
--cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1)
--cache-tmp-upload-path string Directory to keep temporary files until they are uploaded.
--cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
--cache-workers int How many workers should run in parallel to download chunks. (default 4)
--cache-writes Cache file data on writes through the FS
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum (if available) & size, not mod-time & size
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
-L, --copy-links Follow symlinks and copy the pointed to item.
--cpuprofile string Write cpu profile to file
--crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--crypt-password string Password or pass phrase for encryption.
--crypt-password2 string Password or pass phrase for salt. Optional but recommended.
--crypt-remote string Remote to encrypt/decrypt.
--crypt-show-mapping For all files listed show how the names encrypt.
--delete-after When synchronizing, delete files on destination after transferring (default)
--delete-before When synchronizing, delete files on destination before transferring
--delete-during When synchronizing, delete files during transfer
--delete-excluded Delete files on dest excluded from sync
--disable string Disable a comma separated list of features. Use help to see a list.
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
--drive-alternate-export Use alternate export URLs for google documents export.,
--drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M)
--drive-client-id string Google Application Client Id
--drive-client-secret string Google Application Client Secret
--drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-formats string Deprecated: see export_formats
--drive-impersonate string Impersonate this user when using a service account.
--drive-import-formats string Comma separated list of preferred formats for uploading Google docs.
--drive-keep-revision-forever Keep new head revision of each file forever.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-pacer-burst int Number of API calls to allow without sleeping. (default 100)
--drive-pacer-min-sleep Duration Minimum time to sleep between API calls. (default 100ms)
--drive-root-folder-id string ID of the root folder
--drive-scope string Scope that rclone should use when requesting access from drive.
--drive-service-account-credentials string Service Account Credentials JSON blob
--drive-service-account-file string Service Account Credentials JSON file path
--drive-shared-with-me Only show files that are shared with me.
--drive-skip-gdocs Skip google documents in all listings.
--drive-use-created-date Use file created date instead of modified date., --drive-team-drive string ID of the Team Drive
--drive-use-trash Send files to the trash instead of deleting permanently. (default true) --drive-trashed-only Only show files that are in the trash.
--drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off) --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M) --drive-use-created-date Use file created date instead of modified date.,
--dropbox-client-id string Dropbox App Client Id --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--dropbox-client-secret string Dropbox App Client Secret --drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off)
--dropbox-impersonate string Impersonate this user when using a business account. --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M)
-n, --dry-run Do a trial run with no permanent changes --dropbox-client-id string Dropbox App Client Id
--dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles --dropbox-client-secret string Dropbox App Client Secret
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info --dropbox-impersonate string Impersonate this user when using a business account.
--dump-headers Dump HTTP bodies - may contain sensitive info -n, --dry-run Do a trial run with no permanent changes
--exclude stringArray Exclude files matching pattern --dump DumpFlags List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--exclude-from stringArray Read exclude patterns from file --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--exclude-if-present string Exclude directories if filename is present --dump-headers Dump HTTP bodies - may contain sensitive info
--fast-list Use recursive list if available. Uses more memory but fewer transactions. --exclude stringArray Exclude files matching pattern
--files-from stringArray Read list of source-file names from file --exclude-from stringArray Read exclude patterns from file
-f, --filter stringArray Add a file-filtering rule --exclude-if-present string Exclude directories if filename is present
--filter-from stringArray Read filtering patterns from a file --fast-list Use recursive list if available. Uses more memory but fewer transactions.
--ftp-host string FTP host to connect to --files-from stringArray Read list of source-file names from file
--ftp-pass string FTP password -f, --filter stringArray Add a file-filtering rule
--ftp-port string FTP port, leave blank to use default (21) --filter-from stringArray Read filtering patterns from a file
--ftp-user string FTP username, leave blank for current username, $USER --ftp-host string FTP host to connect to
--gcs-bucket-acl string Access Control List for new buckets. --ftp-pass string FTP password
--gcs-client-id string Google Application Client Id --ftp-port string FTP port, leave blank to use default (21)
--gcs-client-secret string Google Application Client Secret --ftp-user string FTP username, leave blank for current username, $USER
--gcs-location string Location for the newly created buckets. --gcs-bucket-acl string Access Control List for new buckets.
--gcs-object-acl string Access Control List for new objects. --gcs-client-id string Google Application Client Id
--gcs-project-number string Project number. --gcs-client-secret string Google Application Client Secret
--gcs-service-account-file string Service Account Credentials JSON file path --gcs-location string Location for the newly created buckets.
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage. --gcs-object-acl string Access Control List for new objects.
--http-url string URL of http host to connect to --gcs-project-number string Project number.
--hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) --gcs-service-account-file string Service Account Credentials JSON file path
--hubic-client-id string Hubic Client Id --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--hubic-client-secret string Hubic Client Secret --http-url string URL of http host to connect to
--ignore-case Ignore case in filters (case insensitive) --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--ignore-checksum Skip post copy check of checksums. --hubic-client-id string Hubic Client Id
--ignore-errors delete even if there are I/O errors --hubic-client-secret string Hubic Client Secret
--ignore-existing Skip all files that exist on destination --hubic-no-chunk Don't chunk files during streaming upload.
--ignore-size Ignore size when skipping use mod-time or checksum. --ignore-case Ignore case in filters (case insensitive)
-I, --ignore-times Don't skip files that match size and time - transfer all files --ignore-checksum Skip post copy check of checksums.
--immutable Do not modify files. Fail if existing files have been modified. --ignore-errors delete even if there are I/O errors
--include stringArray Include files matching pattern --ignore-existing Skip all files that exist on destination
--include-from stringArray Read include patterns from file --ignore-size Ignore size when skipping use mod-time or checksum.
--jottacloud-hard-delete Delete files permanently rather than putting them into the trash. -I, --ignore-times Don't skip files that match size and time - transfer all files
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M) --immutable Do not modify files. Fail if existing files have been modified.
--jottacloud-mountpoint string The mountpoint to use. --include stringArray Include files matching pattern
--jottacloud-pass string Password. --include-from stringArray Read include patterns from file
--jottacloud-unlink Remove existing public link to file/folder with link command rather than creating. --jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
--jottacloud-user string User Name --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--local-no-check-updated Don't check to see if the files change during upload --jottacloud-mountpoint string The mountpoint to use.
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated) --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
--local-nounc string Disable UNC (long path names) conversion on Windows --jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fail's. (default 10M)
--log-file string Log everything to this file --jottacloud-user string User Name:
--log-format string Comma separated list of log format options (default "date,time") -l, --links Translate symlinks to/from regular files with a '.rclonelink' extension
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") --local-no-check-updated Don't check to see if the files change during upload
--low-level-retries int Number of low level retries to do. (default 10) --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) --local-nounc string Disable UNC (long path names) conversion on Windows
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000) --log-file string Log everything to this file
--max-delete int When synchronizing, limit the number of deletes (default -1) --log-format string Comma separated list of log format options (default "date,time")
--max-depth int If set limits the recursion depth to this. (default -1) --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) --low-level-retries int Number of low level retries to do. (default 10)
--max-transfer int Maximum size of data to transfer. (default off) --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--mega-debug Output more debug from Mega. --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--mega-hard-delete Delete files permanently rather than putting them into the trash. --max-delete int When synchronizing, limit the number of deletes (default -1)
--mega-pass string Password. --max-depth int If set limits the recursion depth to this. (default -1)
--mega-user string User name --max-size SizeSuffix Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--memprofile string Write memory profile to file --max-transfer SizeSuffix Maximum size of data to transfer. (default off)
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) --mega-debug Output more debug from Mega.
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) --mega-hard-delete Delete files permanently rather than putting them into the trash.
--modify-window duration Max time diff to be considered the same (default 1ns) --mega-pass string Password.
--no-check-certificate Do not verify the server SSL certificate. Insecure. --mega-user string User name
--no-gzip-encoding Don't set Accept-Encoding: gzip. --memprofile string Write memory profile to file
--no-traverse Obsolete - does nothing. --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--no-update-modtime Don't update destination mod-time if files identical. --min-size SizeSuffix Only transfer files bigger than this in k or suffix b|k|M|G (default off)
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only). --modify-window duration Max time diff to be considered the same (default 1ns)
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M) --no-check-certificate Do not verify the server SSL certificate. Insecure.
--onedrive-client-id string Microsoft App Client Id --no-gzip-encoding Don't set Accept-Encoding: gzip.
--onedrive-client-secret string Microsoft App Client Secret --no-traverse Don't traverse destination file system on copy.
--onedrive-drive-id string The ID of the drive to use --no-update-modtime Don't update destination mod-time if files identical.
--onedrive-drive-type string The type of the drive ( personal | business | documentLibrary ) -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-expose-onenote-files Set to make OneNote files show up in directory listings. --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--opendrive-password string Password. --onedrive-client-id string Microsoft App Client Id
--opendrive-username string Username --onedrive-client-secret string Microsoft App Client Secret
--pcloud-client-id string Pcloud App Client Id --onedrive-drive-id string The ID of the drive to use
--pcloud-client-secret string Pcloud App Client Secret --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
-P, --progress Show progress during transfer. --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
--qingstor-access-key-id string QingStor Access Key ID --opendrive-password string Password.
--qingstor-connection-retries int Number of connection retries. (default 3) --opendrive-username string Username
--qingstor-endpoint string Enter a endpoint URL to connection QingStor API. --pcloud-client-id string Pcloud App Client Id
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank. --pcloud-client-secret string Pcloud App Client Secret
--qingstor-secret-access-key string QingStor Secret Access Key (password) -P, --progress Show progress during transfer.
--qingstor-zone string Zone to connect to. --qingstor-access-key-id string QingStor Access Key ID
-q, --quiet Print as little stuff as possible --qingstor-chunk-size SizeSuffix Chunk size to use for uploading. (default 4M)
--rc Enable the remote control server. --qingstor-connection-retries int Number of connection retries. (default 3)
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") --qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate) --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
--rc-client-ca string Client certificate authority to verify clients with --qingstor-secret-access-key string QingStor Secret Access Key (password)
--rc-files string Path to local files to serve on the HTTP server. --qingstor-upload-concurrency int Concurrency for multipart uploads. (default 1)
--rc-htpasswd string htpasswd file - if not provided no authentication is done --qingstor-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M)
--rc-key string SSL PEM Private key --qingstor-zone string Zone to connect to.
--rc-max-header-bytes int Maximum size of request header (default 4096) -q, --quiet Print as little stuff as possible
--rc-no-auth Don't require auth for certain methods. --rc Enable the remote control server.
--rc-pass string Password for authentication. --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
--rc-realm string realm for authentication (default "rclone") --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--rc-serve Enable the serving of remote objects. --rc-client-ca string Client certificate authority to verify clients with
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) --rc-files string Path to local files to serve on the HTTP server.
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) --rc-htpasswd string htpasswd file - if not provided no authentication is done
--rc-user string User name for authentication. --rc-key string SSL PEM Private key
--retries int Retry operations this many times if they fail (default 3) --rc-max-header-bytes int Maximum size of request header (default 4096)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) --rc-no-auth Don't require auth for certain methods.
--s3-access-key-id string AWS Access Key ID. --rc-pass string Password for authentication.
--s3-acl string Canned ACL used when creating buckets and storing or copying objects. --rc-realm string realm for authentication (default "rclone")
--s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M) --rc-serve Enable the serving of remote objects.
--s3-disable-checksum Don't store MD5 checksum with object metadata --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--s3-endpoint string Endpoint for S3 API. --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). --rc-user string User name for authentication.
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true) --retries int Retry operations this many times if they fail (default 3)
--s3-location-constraint string Location constraint - must be set to match the Region. --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--s3-provider string Choose your S3 provider. --s3-access-key-id string AWS Access Key ID.
--s3-region string Region to connect to. --s3-acl string Canned ACL used when creating buckets and storing or copying objects.
--s3-secret-access-key string AWS Secret Access Key (password) --s3-bucket-acl string Canned ACL used when creating buckets.
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
--s3-session-token string An AWS session token --s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. --s3-endpoint string Endpoint for S3 API.
--s3-storage-class string The storage class to use when storing new objects in S3. --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2) --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--s3-v2-auth If true use v2 authentication. --s3-location-constraint string Location constraint - must be set to match the Region.
--sftp-ask-password Allow asking for SFTP password when needed. --s3-provider string Choose your S3 provider.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. --s3-region string Region to connect to.
--sftp-host string SSH host to connect to --s3-secret-access-key string AWS Secret Access Key (password)
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent. --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--sftp-pass string SSH password, leave blank to use ssh-agent. --s3-session-token string An AWS session token
--sftp-path-override string Override path used by SSH connection. --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--sftp-port string SSH port, leave blank to use default (22) --s3-storage-class string The storage class to use when storing new objects in S3.
--sftp-set-modtime Set the modified time on the remote if set. (default true) --s3-upload-concurrency int Concurrency for multipart uploads. (default 4)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker. --s3-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M)
--sftp-user string SSH username, leave blank for current username, ncw --s3-v2-auth If true use v2 authentication.
--size-only Skip based on size only, not mod-time or checksum --sftp-ask-password Allow asking for SFTP password when needed.
--skip-links Don't warn about skipped symlinks. --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) --sftp-host string SSH host to connect to
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40) --sftp-key-file string Path to PEM-encoded private key file, leave blank or set key-use-agent to use ssh-agent.
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") --sftp-key-file-pass string The passphrase to decrypt the PEM-encoded private key file.
--stats-one-line Make the stats fit on one line. --sftp-key-use-agent When set forces the usage of the ssh-agent.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") --sftp-pass string SSH password, leave blank to use ssh-agent.
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) --sftp-path-override string Override path used by SSH connection.
--suffix string Suffix for use with --backup-dir. --sftp-port string SSH port, leave blank to use default (22)
--swift-auth string Authentication URL for server (OS_AUTH_URL). --sftp-set-modtime Set the modified time on the remote if set. (default true)
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) --sftp-user string SSH username, leave blank for current username, ncw
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) --size-only Skip based on size only, not mod-time or checksum
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) --skip-links Don't warn about skipped symlinks.
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form. --stats-file-name-length int Max file name length in stats. 0 for no limit (default 45)
--swift-key string API key or password (OS_PASSWORD). --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--swift-region string Region name - optional (OS_REGION_NAME) --stats-one-line Make the stats fit on one line.
--swift-storage-policy string The storage policy to use when creating a new container --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL) --streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) --suffix string Suffix for use with --backup-dir.
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) --swift-application-credential-id string Application Credential ID (OS_APPLICATION_CREDENTIAL_ID)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) --swift-application-credential-name string Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME)
--swift-user string User name to log in (OS_USERNAME). --swift-application-credential-secret string Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET)
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). --swift-auth string Authentication URL for server (OS_AUTH_URL).
--syslog Use Syslog for logging --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--timeout duration IO idle timeout (default 5m0s) --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--tpslimit float Limit HTTP transactions per second to this. --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--track-renames When synchronizing, track file renames and do a server side move if possible --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--transfers int Number of file transfers to run in parallel. (default 4) --swift-key string API key or password (OS_PASSWORD).
--union-remotes string List of space separated remotes. --swift-no-chunk Don't chunk files during streaming upload.
-u, --update Skip files that are newer on the destination. --swift-region string Region name - optional (OS_REGION_NAME)
--use-server-modtime Use server modified time instead of object metadata --swift-storage-policy string The storage policy to use when creating a new container
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.45") --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
-v, --verbose count Print lots more stuff (repeat for more) --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon) --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--webdav-pass string Password. --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--webdav-url string URL of http host to connect to --swift-user string User name to log in (OS_USERNAME).
--webdav-user string User name --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--webdav-vendor string Name of the Webdav site/service/software you are using --syslog Use Syslog for logging
--yandex-client-id string Yandex Client Id --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--yandex-client-secret string Yandex Client Secret --timeout duration IO idle timeout (default 5m0s)
--yandex-unlink Remove existing public link to file/folder with link command rather than creating. --tpslimit float Limit HTTP transactions per second to this.
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
--union-remotes string List of space separated remotes.
-u, --update Skip files that are newer on the destination.
--use-cookies Enable session cookiejar.
--use-mmap Use mmap allocator (see docs).
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
--yandex-unlink Remove existing public link to file/folder with link command rather than creating.
```
### SEE ALSO
* [rclone](/commands/rclone/)	 - Show help for rclone commands, flags and backends.
###### Auto generated by spf13/cobra on 9-Feb-2019


@@ -1,5 +1,5 @@
---
date: 2019-02-09T10:42:18Z
title: "rclone cryptcheck"
slug: rclone_cryptcheck
url: /commands/rclone_cryptcheck/
@@ -53,285 +53,303 @@ rclone cryptcheck remote:path cryptedremote:path [flags]
### Options inherited from parent commands
```
      --acd-auth-url string                   Auth server URL.
      --acd-client-id string                  Amazon Application Client ID.
      --acd-client-secret string              Amazon Application Client Secret.
      --acd-templink-threshold SizeSuffix     Files >= this size will be downloaded via their tempLink. (default 9G)
      --acd-token-url string                  Token server url.
      --acd-upload-wait-per-gb Duration       Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
      --alias-remote string                   Remote or path to alias.
      --ask-password                          Allow prompt for password for encrypted configuration. (default true)
      --auto-confirm                          If enabled, do not request console confirmation.
      --azureblob-access-tier string          Access tier of blob: hot, cool or archive.
      --azureblob-account string              Storage Account Name (leave blank to use connection string or SAS URL)
      --azureblob-chunk-size SizeSuffix       Upload chunk size (<= 100MB). (default 4M)
      --azureblob-endpoint string             Endpoint for the service
      --azureblob-key string                  Storage Account Key (leave blank to use connection string or SAS URL)
      --azureblob-list-chunk int              Size of blob list. (default 5000)
      --azureblob-sas-url string              SAS URL for container level access only
      --azureblob-upload-cutoff SizeSuffix    Cutoff for switching to chunked upload (<= 256MB). (default 256M)
      --b2-account string                     Account ID or Application Key ID
      --b2-chunk-size SizeSuffix              Upload chunk size. Must fit in memory. (default 96M)
      --b2-disable-checksum                   Disable checksums for large (> upload cutoff) files
      --b2-endpoint string                    Endpoint for the service.
      --b2-hard-delete                        Permanently delete files on remote removal, otherwise hide files.
      --b2-key string                         Application Key
      --b2-test-mode string                   A flag string for X-Bz-Test-Mode header for debugging.
      --b2-upload-cutoff SizeSuffix           Cutoff for switching to chunked upload. (default 200M)
      --b2-versions                           Include old versions in directory listings.
      --backup-dir string                     Make backups into hierarchy based in DIR.
      --bind string                           Local address to bind to for outgoing connections, IPv4, IPv6 or name.
--box-client-secret string Box App Client Secret --box-client-id string Box App Client Id.
--box-commit-retries int Max number of times to try committing a multipart file. (default 100) --box-client-secret string Box App Client Secret
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M) --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
--buffer-size int In memory buffer size when reading files for each --transfer. (default 16M) --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable. --buffer-size SizeSuffix In memory buffer size when reading files for each --transfer. (default 16M)
--cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s) --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming. --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
--cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend") --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
--cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M) --cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend")
--cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G) --cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M)
--cache-db-path string Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend") --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G)
--cache-db-purge Clear all the cached data for this remote on start. --cache-db-path string Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend")
--cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s) --cache-db-purge Clear all the cached data for this remote on start.
--cache-dir string Directory rclone will use for caching. (default "$HOME/.cache/rclone") --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s) --cache-dir string Directory rclone will use for caching. (default "$HOME/.cache/rclone")
--cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
--cache-plex-password string The password of the Plex user --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server
--cache-plex-url string The URL of the Plex server --cache-plex-password string The password of the Plex user
--cache-plex-username string The username of the Plex user --cache-plex-url string The URL of the Plex server
--cache-read-retries int How many times to retry a read from a cache storage. (default 10) --cache-plex-username string The username of the Plex user
--cache-remote string Remote to cache. --cache-read-retries int How many times to retry a read from a cache storage. (default 10)
--cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1) --cache-remote string Remote to cache.
--cache-tmp-upload-path string Directory to keep temporary files until they are uploaded. --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1)
--cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s) --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded.
--cache-workers int How many workers should run in parallel to download chunks. (default 4) --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
--cache-writes Cache file data on writes through the FS --cache-workers int How many workers should run in parallel to download chunks. (default 4)
--checkers int Number of checkers to run in parallel. (default 8) --cache-writes Cache file data on writes through the FS
-c, --checksum Skip based on checksum & size, not mod-time & size --checkers int Number of checkers to run in parallel. (default 8)
--config string Config file. (default "/home/ncw/.rclone.conf") -c, --checksum Skip based on checksum (if available) & size, not mod-time & size
--contimeout duration Connect timeout (default 1m0s) --config string Config file. (default "/home/ncw/.rclone.conf")
-L, --copy-links Follow symlinks and copy the pointed to item. --contimeout duration Connect timeout (default 1m0s)
--cpuprofile string Write cpu profile to file -L, --copy-links Follow symlinks and copy the pointed to item.
--crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true) --cpuprofile string Write cpu profile to file
--crypt-filename-encryption string How to encrypt the filenames. (default "standard") --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--crypt-password string Password or pass phrase for encryption. --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--crypt-password2 string Password or pass phrase for salt. Optional but recommended. --crypt-password string Password or pass phrase for encryption.
--crypt-remote string Remote to encrypt/decrypt. --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
--crypt-show-mapping For all files listed show how the names encrypt. --crypt-remote string Remote to encrypt/decrypt.
--delete-after When synchronizing, delete files on destination after transferring (default) --crypt-show-mapping For all files listed show how the names encrypt.
--delete-before When synchronizing, delete files on destination before transferring --delete-after When synchronizing, delete files on destination after transferring (default)
--delete-during When synchronizing, delete files during transfer --delete-before When synchronizing, delete files on destination before transferring
--delete-excluded Delete files on dest excluded from sync --delete-during When synchronizing, delete files during transfer
--disable string Disable a comma separated list of features. Use help to see a list. --delete-excluded Delete files on dest excluded from sync
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded. --disable string Disable a comma separated list of features. Use help to see a list.
--drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time. --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--drive-alternate-export Use alternate export URLs for google documents export., --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
--drive-auth-owner-only Only consider files owned by the authenticated user. --drive-alternate-export Use alternate export URLs for google documents export.,
--drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M) --drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-client-id string Google Application Client Id --drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M)
--drive-client-secret string Google Application Client Secret --drive-client-id string Google Application Client Id
--drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") --drive-client-secret string Google Application Client Secret
--drive-formats string Deprecated: see export_formats --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-impersonate string Impersonate this user when using a service account. --drive-formats string Deprecated: see export_formats
--drive-import-formats string Comma separated list of preferred formats for uploading Google docs. --drive-impersonate string Impersonate this user when using a service account.
--drive-keep-revision-forever Keep new head revision of each file forever. --drive-import-formats string Comma separated list of preferred formats for uploading Google docs.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000) --drive-keep-revision-forever Keep new head revision of each file forever.
--drive-root-folder-id string ID of the root folder --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-scope string Scope that rclone should use when requesting access from drive. --drive-pacer-burst int Number of API calls to allow without sleeping. (default 100)
--drive-service-account-credentials string Service Account Credentials JSON blob --drive-pacer-min-sleep Duration Minimum time to sleep between API calls. (default 100ms)
--drive-service-account-file string Service Account Credentials JSON file path --drive-root-folder-id string ID of the root folder
--drive-shared-with-me Only show files that are shared with me. --drive-scope string Scope that rclone should use when requesting access from drive.
--drive-skip-gdocs Skip google documents in all listings. --drive-service-account-credentials string Service Account Credentials JSON blob
--drive-team-drive string ID of the Team Drive --drive-service-account-file string Service Account Credentials JSON file path
--drive-trashed-only Only show files that are in the trash. --drive-shared-with-me Only show files that are shared with me.
--drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M) --drive-skip-gdocs Skip google documents in all listings.
--drive-use-created-date Use file created date instead of modified date., --drive-team-drive string ID of the Team Drive
--drive-use-trash Send files to the trash instead of deleting permanently. (default true) --drive-trashed-only Only show files that are in the trash.
--drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off) --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M) --drive-use-created-date Use file created date instead of modified date.,
--dropbox-client-id string Dropbox App Client Id --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--dropbox-client-secret string Dropbox App Client Secret --drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off)
--dropbox-impersonate string Impersonate this user when using a business account. --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M)
-n, --dry-run Do a trial run with no permanent changes --dropbox-client-id string Dropbox App Client Id
--dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles --dropbox-client-secret string Dropbox App Client Secret
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info --dropbox-impersonate string Impersonate this user when using a business account.
--dump-headers Dump HTTP bodies - may contain sensitive info -n, --dry-run Do a trial run with no permanent changes
--exclude stringArray Exclude files matching pattern --dump DumpFlags List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--exclude-from stringArray Read exclude patterns from file --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--exclude-if-present string Exclude directories if filename is present --dump-headers Dump HTTP bodies - may contain sensitive info
--fast-list Use recursive list if available. Uses more memory but fewer transactions. --exclude stringArray Exclude files matching pattern
--files-from stringArray Read list of source-file names from file --exclude-from stringArray Read exclude patterns from file
-f, --filter stringArray Add a file-filtering rule --exclude-if-present string Exclude directories if filename is present
--filter-from stringArray Read filtering patterns from a file --fast-list Use recursive list if available. Uses more memory but fewer transactions.
--ftp-host string FTP host to connect to --files-from stringArray Read list of source-file names from file
--ftp-pass string FTP password -f, --filter stringArray Add a file-filtering rule
--ftp-port string FTP port, leave blank to use default (21) --filter-from stringArray Read filtering patterns from a file
--ftp-user string FTP username, leave blank for current username, $USER --ftp-host string FTP host to connect to
--gcs-bucket-acl string Access Control List for new buckets. --ftp-pass string FTP password
--gcs-client-id string Google Application Client Id --ftp-port string FTP port, leave blank to use default (21)
--gcs-client-secret string Google Application Client Secret --ftp-user string FTP username, leave blank for current username, $USER
--gcs-location string Location for the newly created buckets. --gcs-bucket-acl string Access Control List for new buckets.
--gcs-object-acl string Access Control List for new objects. --gcs-client-id string Google Application Client Id
--gcs-project-number string Project number. --gcs-client-secret string Google Application Client Secret
--gcs-service-account-file string Service Account Credentials JSON file path --gcs-location string Location for the newly created buckets.
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage. --gcs-object-acl string Access Control List for new objects.
--http-url string URL of http host to connect to --gcs-project-number string Project number.
--hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) --gcs-service-account-file string Service Account Credentials JSON file path
--hubic-client-id string Hubic Client Id --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--hubic-client-secret string Hubic Client Secret --http-url string URL of http host to connect to
--ignore-case Ignore case in filters (case insensitive) --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--ignore-checksum Skip post copy check of checksums. --hubic-client-id string Hubic Client Id
--ignore-errors delete even if there are I/O errors --hubic-client-secret string Hubic Client Secret
--ignore-existing Skip all files that exist on destination --hubic-no-chunk Don't chunk files during streaming upload.
--ignore-size Ignore size when skipping use mod-time or checksum. --ignore-case Ignore case in filters (case insensitive)
-I, --ignore-times Don't skip files that match size and time - transfer all files --ignore-checksum Skip post copy check of checksums.
--immutable Do not modify files. Fail if existing files have been modified. --ignore-errors delete even if there are I/O errors
--include stringArray Include files matching pattern --ignore-existing Skip all files that exist on destination
--include-from stringArray Read include patterns from file --ignore-size Ignore size when skipping use mod-time or checksum.
--jottacloud-hard-delete Delete files permanently rather than putting them into the trash. -I, --ignore-times Don't skip files that match size and time - transfer all files
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M) --immutable Do not modify files. Fail if existing files have been modified.
--jottacloud-mountpoint string The mountpoint to use. --include stringArray Include files matching pattern
--jottacloud-pass string Password. --include-from stringArray Read include patterns from file
--jottacloud-unlink Remove existing public link to file/folder with link command rather than creating. --jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
--jottacloud-user string User Name --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--local-no-check-updated Don't check to see if the files change during upload --jottacloud-mountpoint string The mountpoint to use.
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated) --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
--local-nounc string Disable UNC (long path names) conversion on Windows --jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fail's. (default 10M)
--log-file string Log everything to this file --jottacloud-user string User Name:
--log-format string Comma separated list of log format options (default "date,time") -l, --links Translate symlinks to/from regular files with a '.rclonelink' extension
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") --local-no-check-updated Don't check to see if the files change during upload
--low-level-retries int Number of low level retries to do. (default 10) --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) --local-nounc string Disable UNC (long path names) conversion on Windows
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000) --log-file string Log everything to this file
--max-delete int When synchronizing, limit the number of deletes (default -1) --log-format string Comma separated list of log format options (default "date,time")
--max-depth int If set limits the recursion depth to this. (default -1) --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) --low-level-retries int Number of low level retries to do. (default 10)
--max-transfer int Maximum size of data to transfer. (default off) --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--mega-debug Output more debug from Mega. --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--mega-hard-delete Delete files permanently rather than putting them into the trash. --max-delete int When synchronizing, limit the number of deletes (default -1)
--mega-pass string Password. --max-depth int If set limits the recursion depth to this. (default -1)
--mega-user string User name --max-size SizeSuffix Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--memprofile string Write memory profile to file --max-transfer SizeSuffix Maximum size of data to transfer. (default off)
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) --mega-debug Output more debug from Mega.
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) --mega-hard-delete Delete files permanently rather than putting them into the trash.
--modify-window duration Max time diff to be considered the same (default 1ns) --mega-pass string Password.
--no-check-certificate Do not verify the server SSL certificate. Insecure. --mega-user string User name
--no-gzip-encoding Don't set Accept-Encoding: gzip. --memprofile string Write memory profile to file
--no-traverse Obsolete - does nothing. --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--no-update-modtime Don't update destination mod-time if files identical. --min-size SizeSuffix Only transfer files bigger than this in k or suffix b|k|M|G (default off)
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only). --modify-window duration Max time diff to be considered the same (default 1ns)
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M) --no-check-certificate Do not verify the server SSL certificate. Insecure.
--onedrive-client-id string Microsoft App Client Id --no-gzip-encoding Don't set Accept-Encoding: gzip.
--onedrive-client-secret string Microsoft App Client Secret --no-traverse Don't traverse destination file system on copy.
--onedrive-drive-id string The ID of the drive to use --no-update-modtime Don't update destination mod-time if files identical.
--onedrive-drive-type string The type of the drive ( personal | business | documentLibrary ) -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-expose-onenote-files Set to make OneNote files show up in directory listings. --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--opendrive-password string Password. --onedrive-client-id string Microsoft App Client Id
--opendrive-username string Username --onedrive-client-secret string Microsoft App Client Secret
--pcloud-client-id string Pcloud App Client Id --onedrive-drive-id string The ID of the drive to use
--pcloud-client-secret string Pcloud App Client Secret --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
-P, --progress Show progress during transfer. --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
--qingstor-access-key-id string QingStor Access Key ID --opendrive-password string Password.
--qingstor-connection-retries int Number of connection retries. (default 3) --opendrive-username string Username
--qingstor-endpoint string Enter a endpoint URL to connection QingStor API. --pcloud-client-id string Pcloud App Client Id
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank. --pcloud-client-secret string Pcloud App Client Secret
--qingstor-secret-access-key string QingStor Secret Access Key (password) -P, --progress Show progress during transfer.
--qingstor-zone string Zone to connect to. --qingstor-access-key-id string QingStor Access Key ID
-q, --quiet Print as little stuff as possible --qingstor-chunk-size SizeSuffix Chunk size to use for uploading. (default 4M)
--rc Enable the remote control server. --qingstor-connection-retries int Number of connection retries. (default 3)
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") --qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate) --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
--rc-client-ca string Client certificate authority to verify clients with --qingstor-secret-access-key string QingStor Secret Access Key (password)
--rc-files string Path to local files to serve on the HTTP server. --qingstor-upload-concurrency int Concurrency for multipart uploads. (default 1)
--rc-htpasswd string htpasswd file - if not provided no authentication is done --qingstor-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M)
--rc-key string SSL PEM Private key --qingstor-zone string Zone to connect to.
--rc-max-header-bytes int Maximum size of request header (default 4096) -q, --quiet Print as little stuff as possible
--rc-no-auth Don't require auth for certain methods. --rc Enable the remote control server.
--rc-pass string Password for authentication. --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
--rc-realm string realm for authentication (default "rclone") --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--rc-serve Enable the serving of remote objects. --rc-client-ca string Client certificate authority to verify clients with
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) --rc-files string Path to local files to serve on the HTTP server.
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) --rc-htpasswd string htpasswd file - if not provided no authentication is done
--rc-user string User name for authentication. --rc-key string SSL PEM Private key
--retries int Retry operations this many times if they fail (default 3) --rc-max-header-bytes int Maximum size of request header (default 4096)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) --rc-no-auth Don't require auth for certain methods.
--s3-access-key-id string AWS Access Key ID. --rc-pass string Password for authentication.
--s3-acl string Canned ACL used when creating buckets and storing or copying objects. --rc-realm string realm for authentication (default "rclone")
--s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M) --rc-serve Enable the serving of remote objects.
--s3-disable-checksum Don't store MD5 checksum with object metadata --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--s3-endpoint string Endpoint for S3 API. --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). --rc-user string User name for authentication.
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true) --retries int Retry operations this many times if they fail (default 3)
--s3-location-constraint string Location constraint - must be set to match the Region. --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--s3-provider string Choose your S3 provider. --s3-access-key-id string AWS Access Key ID.
--s3-region string Region to connect to. --s3-acl string Canned ACL used when creating buckets and storing or copying objects.
--s3-secret-access-key string AWS Secret Access Key (password) --s3-bucket-acl string Canned ACL used when creating buckets.
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
--s3-session-token string An AWS session token --s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. --s3-endpoint string Endpoint for S3 API.
--s3-storage-class string The storage class to use when storing new objects in S3. --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2) --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
      --s3-location-constraint string  Location constraint - must be set to match the Region.
      --s3-provider string  Choose your S3 provider.
      --s3-region string  Region to connect to.
      --s3-secret-access-key string  AWS Secret Access Key (password)
      --s3-server-side-encryption string  The server-side encryption algorithm used when storing this object in S3.
      --s3-session-token string  An AWS session token
      --s3-sse-kms-key-id string  If using KMS ID you must provide the ARN of Key.
      --s3-storage-class string  The storage class to use when storing new objects in S3.
      --s3-upload-concurrency int  Concurrency for multipart uploads. (default 4)
      --s3-upload-cutoff SizeSuffix  Cutoff for switching to chunked upload (default 200M)
      --s3-v2-auth  If true use v2 authentication.
      --sftp-ask-password  Allow asking for SFTP password when needed.
      --sftp-disable-hashcheck  Disable the execution of SSH commands to determine if remote file hashing is available.
      --sftp-host string  SSH host to connect to
      --sftp-key-file string  Path to PEM-encoded private key file, leave blank or set key-use-agent to use ssh-agent.
      --sftp-key-file-pass string  The passphrase to decrypt the PEM-encoded private key file.
      --sftp-key-use-agent  When set forces the usage of the ssh-agent.
      --sftp-pass string  SSH password, leave blank to use ssh-agent.
      --sftp-path-override string  Override path used by SSH connection.
      --sftp-port string  SSH port, leave blank to use default (22)
      --sftp-set-modtime  Set the modified time on the remote if set. (default true)
      --sftp-use-insecure-cipher  Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
      --sftp-user string  SSH username, leave blank for current username, ncw
      --size-only  Skip based on size only, not mod-time or checksum
      --skip-links  Don't warn about skipped symlinks.
      --stats duration  Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
      --stats-file-name-length int  Max file name length in stats. 0 for no limit (default 45)
      --stats-log-level string  Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
      --stats-one-line  Make the stats fit on one line.
      --stats-unit string  Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
      --streaming-upload-cutoff SizeSuffix  Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
      --suffix string  Suffix for use with --backup-dir.
      --swift-application-credential-id string  Application Credential ID (OS_APPLICATION_CREDENTIAL_ID)
      --swift-application-credential-name string  Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME)
      --swift-application-credential-secret string  Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET)
      --swift-auth string  Authentication URL for server (OS_AUTH_URL).
      --swift-auth-token string  Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
      --swift-auth-version int  AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
      --swift-chunk-size SizeSuffix  Above this size files will be chunked into a _segments container. (default 5G)
      --swift-domain string  User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
      --swift-endpoint-type string  Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
      --swift-env-auth  Get swift credentials from environment variables in standard OpenStack form.
      --swift-key string  API key or password (OS_PASSWORD).
      --swift-no-chunk  Don't chunk files during streaming upload.
      --swift-region string  Region name - optional (OS_REGION_NAME)
      --swift-storage-policy string  The storage policy to use when creating a new container
      --swift-storage-url string  Storage URL - optional (OS_STORAGE_URL)
      --swift-tenant string  Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
      --swift-tenant-domain string  Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
      --swift-tenant-id string  Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
      --swift-user string  User name to log in (OS_USERNAME).
      --swift-user-id string  User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
      --syslog  Use Syslog for logging
      --syslog-facility string  Facility for syslog, eg KERN,USER,... (default "DAEMON")
      --timeout duration  IO idle timeout (default 5m0s)
      --tpslimit float  Limit HTTP transactions per second to this.
      --tpslimit-burst int  Max burst of transactions for --tpslimit. (default 1)
      --track-renames  When synchronizing, track file renames and do a server side move if possible
      --transfers int  Number of file transfers to run in parallel. (default 4)
      --union-remotes string  List of space separated remotes.
  -u, --update  Skip files that are newer on the destination.
      --use-cookies  Enable session cookiejar.
      --use-mmap  Use mmap allocator (see docs).
      --use-server-modtime  Use server modified time instead of object metadata
      --user-agent string  Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46")
  -v, --verbose count  Print lots more stuff (repeat for more)
      --webdav-bearer-token string  Bearer token instead of user/pass (eg a Macaroon)
      --webdav-pass string  Password.
      --webdav-url string  URL of http host to connect to
      --webdav-user string  User name
      --webdav-vendor string  Name of the Webdav site/service/software you are using
      --yandex-client-id string  Yandex Client Id
      --yandex-client-secret string  Yandex Client Secret
      --yandex-unlink  Remove existing public link to file/folder with link command rather than creating.
```

### SEE ALSO

* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.

###### Auto generated by spf13/cobra on 9-Feb-2019
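The global flags above combine freely on any transfer command. As an illustrative sketch only (the remote names and paths here are invented, not from this manual), a few of them might be used together like this:

```sh
# Hypothetical remotes "source:" and "dest:" -- substitute your own.
# --transfers, --bwlimit and the --stats* flags are global options
# from the listing above.
rclone copy source:bucket dest:backup \
  --transfers 4 \
  --bwlimit 10M \
  --stats 60s --stats-one-line
```

`--stats-one-line` keeps the periodic progress report compact, which is convenient when the output is being written to a log file via `--log-file`.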
View File
@ -1,5 +1,5 @@
---
date: 2019-02-09T10:42:18Z
title: "rclone cryptdecode"
slug: rclone_cryptdecode
url: /commands/rclone_cryptdecode/
@ -37,285 +37,303 @@ rclone cryptdecode encryptedremote: encryptedfilename [flags]
### Options inherited from parent commands
```
      --acd-auth-url string  Auth server URL.
      --acd-client-id string  Amazon Application Client ID.
      --acd-client-secret string  Amazon Application Client Secret.
      --acd-templink-threshold SizeSuffix  Files >= this size will be downloaded via their tempLink. (default 9G)
      --acd-token-url string  Token server url.
      --acd-upload-wait-per-gb Duration  Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
      --alias-remote string  Remote or path to alias.
      --ask-password  Allow prompt for password for encrypted configuration. (default true)
      --auto-confirm  If enabled, do not request console confirmation.
      --azureblob-access-tier string  Access tier of blob: hot, cool or archive.
      --azureblob-account string  Storage Account Name (leave blank to use connection string or SAS URL)
      --azureblob-chunk-size SizeSuffix  Upload chunk size (<= 100MB). (default 4M)
      --azureblob-endpoint string  Endpoint for the service
      --azureblob-key string  Storage Account Key (leave blank to use connection string or SAS URL)
      --azureblob-list-chunk int  Size of blob list. (default 5000)
      --azureblob-sas-url string  SAS URL for container level access only
      --azureblob-upload-cutoff SizeSuffix  Cutoff for switching to chunked upload (<= 256MB). (default 256M)
      --b2-account string  Account ID or Application Key ID
      --b2-chunk-size SizeSuffix  Upload chunk size. Must fit in memory. (default 96M)
      --b2-disable-checksum  Disable checksums for large (> upload cutoff) files
      --b2-endpoint string  Endpoint for the service.
      --b2-hard-delete  Permanently delete files on remote removal, otherwise hide files.
      --b2-key string  Application Key
      --b2-test-mode string  A flag string for X-Bz-Test-Mode header for debugging.
      --b2-upload-cutoff SizeSuffix  Cutoff for switching to chunked upload. (default 200M)
      --b2-versions  Include old versions in directory listings.
      --backup-dir string  Make backups into hierarchy based in DIR.
      --bind string  Local address to bind to for outgoing connections, IPv4, IPv6 or name.
      --box-client-id string  Box App Client Id.
      --box-client-secret string  Box App Client Secret
      --box-commit-retries int  Max number of times to try committing a multipart file. (default 100)
      --box-upload-cutoff SizeSuffix  Cutoff for switching to multipart upload (>= 50MB). (default 50M)
      --buffer-size SizeSuffix  In memory buffer size when reading files for each --transfer. (default 16M)
      --bwlimit BwTimetable  Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
      --cache-chunk-clean-interval Duration  How often should the cache perform cleanups of the chunk storage. (default 1m0s)
      --cache-chunk-no-memory  Disable the in-memory cache for storing chunks during streaming.
      --cache-chunk-path string  Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend")
      --cache-chunk-size SizeSuffix  The size of a chunk (partial file data). (default 5M)
      --cache-chunk-total-size SizeSuffix  The total size that the chunks can take up on the local disk. (default 10G)
      --cache-db-path string  Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend")
      --cache-db-purge  Clear all the cached data for this remote on start.
      --cache-db-wait-time Duration  How long to wait for the DB to be available - 0 is unlimited (default 1s)
      --cache-dir string  Directory rclone will use for caching. (default "$HOME/.cache/rclone")
      --cache-info-age Duration  How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
      --cache-plex-insecure string  Skip all certificate verifications when connecting to the Plex server
      --cache-plex-password string  The password of the Plex user
      --cache-plex-url string  The URL of the Plex server
      --cache-plex-username string  The username of the Plex user
      --cache-read-retries int  How many times to retry a read from a cache storage. (default 10)
      --cache-remote string  Remote to cache.
      --cache-rps int  Limits the number of requests per second to the source FS (-1 to disable) (default -1)
      --cache-tmp-upload-path string  Directory to keep temporary files until they are uploaded.
      --cache-tmp-wait-time Duration  How long should files be stored in local cache before being uploaded (default 15s)
      --cache-workers int  How many workers should run in parallel to download chunks. (default 4)
      --cache-writes  Cache file data on writes through the FS
      --checkers int  Number of checkers to run in parallel. (default 8)
  -c, --checksum  Skip based on checksum (if available) & size, not mod-time & size
      --config string  Config file. (default "/home/ncw/.rclone.conf")
      --contimeout duration  Connect timeout (default 1m0s)
  -L, --copy-links  Follow symlinks and copy the pointed to item.
      --cpuprofile string  Write cpu profile to file
      --crypt-directory-name-encryption  Option to either encrypt directory names or leave them intact. (default true)
      --crypt-filename-encryption string  How to encrypt the filenames. (default "standard")
      --crypt-password string  Password or pass phrase for encryption.
      --crypt-password2 string  Password or pass phrase for salt. Optional but recommended.
      --crypt-remote string  Remote to encrypt/decrypt.
      --crypt-show-mapping  For all files listed show how the names encrypt.
      --delete-after  When synchronizing, delete files on destination after transferring (default)
      --delete-before  When synchronizing, delete files on destination before transferring
      --delete-during  When synchronizing, delete files during transfer
      --delete-excluded  Delete files on dest excluded from sync
      --disable string  Disable a comma separated list of features. Use help to see a list.
      --drive-acknowledge-abuse  Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
      --drive-allow-import-name-change  Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
      --drive-alternate-export  Use alternate export URLs for google documents export.
      --drive-auth-owner-only  Only consider files owned by the authenticated user.
      --drive-chunk-size SizeSuffix  Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
      --drive-client-id string  Google Application Client Id
      --drive-client-secret string  Google Application Client Secret
      --drive-export-formats string  Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
      --drive-formats string  Deprecated: see export_formats
      --drive-impersonate string  Impersonate this user when using a service account.
      --drive-import-formats string  Comma separated list of preferred formats for uploading Google docs.
      --drive-keep-revision-forever  Keep new head revision of each file forever.
      --drive-list-chunk int  Size of listing chunk 100-1000. 0 to disable. (default 1000)
      --drive-pacer-burst int  Number of API calls to allow without sleeping. (default 100)
      --drive-pacer-min-sleep Duration  Minimum time to sleep between API calls. (default 100ms)
      --drive-root-folder-id string  ID of the root folder
      --drive-scope string  Scope that rclone should use when requesting access from drive.
      --drive-service-account-credentials string  Service Account Credentials JSON blob
      --drive-service-account-file string  Service Account Credentials JSON file path
      --drive-shared-with-me  Only show files that are shared with me.
      --drive-skip-gdocs  Skip google documents in all listings.
      --drive-team-drive string  ID of the Team Drive
      --drive-trashed-only  Only show files that are in the trash.
      --drive-upload-cutoff SizeSuffix  Cutoff for switching to chunked upload (default 8M)
      --drive-use-created-date  Use file created date instead of modified date.
      --drive-use-trash  Send files to the trash instead of deleting permanently. (default true)
      --drive-v2-download-min-size SizeSuffix  If Object's are greater, use drive v2 API to download. (default off)
      --dropbox-chunk-size SizeSuffix  Upload chunk size. (< 150M). (default 48M)
      --dropbox-client-id string  Dropbox App Client Id
      --dropbox-client-secret string  Dropbox App Client Secret
      --dropbox-impersonate string  Impersonate this user when using a business account.
  -n, --dry-run  Do a trial run with no permanent changes
      --dump DumpFlags  List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
      --dump-bodies  Dump HTTP headers and bodies - may contain sensitive info
      --dump-headers  Dump HTTP headers - may contain sensitive info
      --exclude stringArray  Exclude files matching pattern
      --exclude-from stringArray  Read exclude patterns from file
      --exclude-if-present string  Exclude directories if filename is present
      --fast-list  Use recursive list if available. Uses more memory but fewer transactions.
      --files-from stringArray  Read list of source-file names from file
  -f, --filter stringArray  Add a file-filtering rule
      --filter-from stringArray  Read filtering patterns from a file
      --ftp-host string  FTP host to connect to
      --ftp-pass string  FTP password
      --ftp-port string  FTP port, leave blank to use default (21)
      --ftp-user string  FTP username, leave blank for current username, $USER
      --gcs-bucket-acl string  Access Control List for new buckets.
      --gcs-client-id string  Google Application Client Id
      --gcs-client-secret string  Google Application Client Secret
      --gcs-location string  Location for the newly created buckets.
      --gcs-object-acl string  Access Control List for new objects.
      --gcs-project-number string  Project number.
      --gcs-service-account-file string  Service Account Credentials JSON file path
      --gcs-storage-class string  The storage class to use when storing objects in Google Cloud Storage.
      --http-url string  URL of http host to connect to
      --hubic-chunk-size SizeSuffix  Above this size files will be chunked into a _segments container. (default 5G)
      --hubic-client-id string  Hubic Client Id
      --hubic-client-secret string  Hubic Client Secret
      --hubic-no-chunk  Don't chunk files during streaming upload.
      --ignore-case  Ignore case in filters (case insensitive)
      --ignore-checksum  Skip post copy check of checksums.
      --ignore-errors  delete even if there are I/O errors
      --ignore-existing  Skip all files that exist on destination
      --ignore-size  Ignore size when skipping use mod-time or checksum.
  -I, --ignore-times  Don't skip files that match size and time - transfer all files
      --immutable  Do not modify files. Fail if existing files have been modified.
      --include stringArray  Include files matching pattern
      --include-from stringArray  Read include patterns from file
      --jottacloud-hard-delete  Delete files permanently rather than putting them into the trash.
      --jottacloud-md5-memory-limit SizeSuffix  Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
      --jottacloud-mountpoint string  The mountpoint to use.
      --jottacloud-unlink  Remove existing public link to file/folder with link command rather than creating.
      --jottacloud-upload-resume-limit SizeSuffix  Files bigger than this can be resumed if the upload fails. (default 10M)
      --jottacloud-user string  User Name
  -l, --links  Translate symlinks to/from regular files with a '.rclonelink' extension
      --local-no-check-updated  Don't check to see if the files change during upload
      --local-no-unicode-normalization  Don't apply unicode normalization to paths and filenames (Deprecated)
      --local-nounc string  Disable UNC (long path names) conversion on Windows
      --log-file string  Log everything to this file
      --log-format string  Comma separated list of log format options (default "date,time")
      --log-level string  Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
      --low-level-retries int  Number of low level retries to do. (default 10)
      --max-age Duration  Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
      --max-backlog int  Maximum number of objects in sync or check backlog. (default 10000)
      --max-delete int  When synchronizing, limit the number of deletes (default -1)
      --max-depth int  If set limits the recursion depth to this. (default -1)
      --max-size SizeSuffix  Only transfer files smaller than this in k or suffix b|k|M|G (default off)
      --max-transfer SizeSuffix  Maximum size of data to transfer. (default off)
      --mega-debug  Output more debug from Mega.
      --mega-hard-delete  Delete files permanently rather than putting them into the trash.
      --mega-pass string  Password.
      --mega-user string  User name
      --memprofile string  Write memory profile to file
      --min-age Duration  Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
      --min-size SizeSuffix  Only transfer files bigger than this in k or suffix b|k|M|G (default off)
      --modify-window duration  Max time diff to be considered the same (default 1ns)
      --no-check-certificate  Do not verify the server SSL certificate. Insecure.
      --no-gzip-encoding  Don't set Accept-Encoding: gzip.
      --no-traverse  Don't traverse destination file system on copy.
--onedrive-drive-id string The ID of the drive to use --no-update-modtime Don't update destination mod-time if files identical.
--onedrive-drive-type string The type of the drive ( personal | business | documentLibrary ) -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-expose-onenote-files Set to make OneNote files show up in directory listings. --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--opendrive-password string Password. --onedrive-client-id string Microsoft App Client Id
--opendrive-username string Username --onedrive-client-secret string Microsoft App Client Secret
--pcloud-client-id string Pcloud App Client Id --onedrive-drive-id string The ID of the drive to use
--pcloud-client-secret string Pcloud App Client Secret --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
-P, --progress Show progress during transfer. --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
--qingstor-access-key-id string QingStor Access Key ID --opendrive-password string Password.
--qingstor-connection-retries int Number of connection retries. (default 3) --opendrive-username string Username
--qingstor-endpoint string Enter a endpoint URL to connection QingStor API. --pcloud-client-id string Pcloud App Client Id
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank. --pcloud-client-secret string Pcloud App Client Secret
--qingstor-secret-access-key string QingStor Secret Access Key (password) -P, --progress Show progress during transfer.
--qingstor-zone string Zone to connect to. --qingstor-access-key-id string QingStor Access Key ID
-q, --quiet Print as little stuff as possible --qingstor-chunk-size SizeSuffix Chunk size to use for uploading. (default 4M)
--rc Enable the remote control server. --qingstor-connection-retries int Number of connection retries. (default 3)
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") --qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate) --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
--rc-client-ca string Client certificate authority to verify clients with --qingstor-secret-access-key string QingStor Secret Access Key (password)
--rc-files string Path to local files to serve on the HTTP server. --qingstor-upload-concurrency int Concurrency for multipart uploads. (default 1)
--rc-htpasswd string htpasswd file - if not provided no authentication is done --qingstor-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M)
--rc-key string SSL PEM Private key --qingstor-zone string Zone to connect to.
--rc-max-header-bytes int Maximum size of request header (default 4096) -q, --quiet Print as little stuff as possible
--rc-no-auth Don't require auth for certain methods. --rc Enable the remote control server.
--rc-pass string Password for authentication. --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
--rc-realm string realm for authentication (default "rclone") --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--rc-serve Enable the serving of remote objects. --rc-client-ca string Client certificate authority to verify clients with
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) --rc-files string Path to local files to serve on the HTTP server.
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) --rc-htpasswd string htpasswd file - if not provided no authentication is done
--rc-user string User name for authentication. --rc-key string SSL PEM Private key
--retries int Retry operations this many times if they fail (default 3) --rc-max-header-bytes int Maximum size of request header (default 4096)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) --rc-no-auth Don't require auth for certain methods.
--s3-access-key-id string AWS Access Key ID. --rc-pass string Password for authentication.
--s3-acl string Canned ACL used when creating buckets and storing or copying objects. --rc-realm string realm for authentication (default "rclone")
--s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M) --rc-serve Enable the serving of remote objects.
--s3-disable-checksum Don't store MD5 checksum with object metadata --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--s3-endpoint string Endpoint for S3 API. --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). --rc-user string User name for authentication.
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true) --retries int Retry operations this many times if they fail (default 3)
--s3-location-constraint string Location constraint - must be set to match the Region. --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--s3-provider string Choose your S3 provider. --s3-access-key-id string AWS Access Key ID.
--s3-region string Region to connect to. --s3-acl string Canned ACL used when creating buckets and storing or copying objects.
--s3-secret-access-key string AWS Secret Access Key (password) --s3-bucket-acl string Canned ACL used when creating buckets.
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
--s3-session-token string An AWS session token --s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. --s3-endpoint string Endpoint for S3 API.
--s3-storage-class string The storage class to use when storing new objects in S3. --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2) --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--s3-v2-auth If true use v2 authentication. --s3-location-constraint string Location constraint - must be set to match the Region.
--sftp-ask-password Allow asking for SFTP password when needed. --s3-provider string Choose your S3 provider.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. --s3-region string Region to connect to.
--sftp-host string SSH host to connect to --s3-secret-access-key string AWS Secret Access Key (password)
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent. --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--sftp-pass string SSH password, leave blank to use ssh-agent. --s3-session-token string An AWS session token
--sftp-path-override string Override path used by SSH connection. --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--sftp-port string SSH port, leave blank to use default (22) --s3-storage-class string The storage class to use when storing new objects in S3.
--sftp-set-modtime Set the modified time on the remote if set. (default true) --s3-upload-concurrency int Concurrency for multipart uploads. (default 4)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker. --s3-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M)
--sftp-user string SSH username, leave blank for current username, ncw --s3-v2-auth If true use v2 authentication.
--size-only Skip based on size only, not mod-time or checksum --sftp-ask-password Allow asking for SFTP password when needed.
--skip-links Don't warn about skipped symlinks. --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) --sftp-host string SSH host to connect to
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40) --sftp-key-file string Path to PEM-encoded private key file, leave blank or set key-use-agent to use ssh-agent.
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") --sftp-key-file-pass string The passphrase to decrypt the PEM-encoded private key file.
--stats-one-line Make the stats fit on one line. --sftp-key-use-agent When set forces the usage of the ssh-agent.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") --sftp-pass string SSH password, leave blank to use ssh-agent.
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) --sftp-path-override string Override path used by SSH connection.
--suffix string Suffix for use with --backup-dir. --sftp-port string SSH port, leave blank to use default (22)
--swift-auth string Authentication URL for server (OS_AUTH_URL). --sftp-set-modtime Set the modified time on the remote if set. (default true)
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) --sftp-user string SSH username, leave blank for current username, ncw
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) --size-only Skip based on size only, not mod-time or checksum
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) --skip-links Don't warn about skipped symlinks.
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form. --stats-file-name-length int Max file name length in stats. 0 for no limit (default 45)
--swift-key string API key or password (OS_PASSWORD). --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--swift-region string Region name - optional (OS_REGION_NAME) --stats-one-line Make the stats fit on one line.
--swift-storage-policy string The storage policy to use when creating a new container --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL) --streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) --suffix string Suffix for use with --backup-dir.
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) --swift-application-credential-id string Application Credential ID (OS_APPLICATION_CREDENTIAL_ID)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) --swift-application-credential-name string Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME)
--swift-user string User name to log in (OS_USERNAME). --swift-application-credential-secret string Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET)
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). --swift-auth string Authentication URL for server (OS_AUTH_URL).
--syslog Use Syslog for logging --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--timeout duration IO idle timeout (default 5m0s) --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--tpslimit float Limit HTTP transactions per second to this. --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--track-renames When synchronizing, track file renames and do a server side move if possible --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--transfers int Number of file transfers to run in parallel. (default 4) --swift-key string API key or password (OS_PASSWORD).
--union-remotes string List of space separated remotes. --swift-no-chunk Don't chunk files during streaming upload.
-u, --update Skip files that are newer on the destination. --swift-region string Region name - optional (OS_REGION_NAME)
--use-server-modtime Use server modified time instead of object metadata --swift-storage-policy string The storage policy to use when creating a new container
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.45") --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
-v, --verbose count Print lots more stuff (repeat for more) --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon) --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--webdav-pass string Password. --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--webdav-url string URL of http host to connect to --swift-user string User name to log in (OS_USERNAME).
--webdav-user string User name --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--webdav-vendor string Name of the Webdav site/service/software you are using --syslog Use Syslog for logging
--yandex-client-id string Yandex Client Id --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--yandex-client-secret string Yandex Client Secret --timeout duration IO idle timeout (default 5m0s)
--yandex-unlink Remove existing public link to file/folder with link command rather than creating. --tpslimit float Limit HTTP transactions per second to this.
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
--union-remotes string List of space separated remotes.
-u, --update Skip files that are newer on the destination.
--use-cookies Enable session cookiejar.
--use-mmap Use mmap allocator (see docs).
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
--yandex-unlink Remove existing public link to file/folder with link command rather than creating.
```
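Since the flag reference above is dense, here is a small, hedged sketch of how a few of those flags combine on a command line. The remote name and paths are placeholders, not taken from this page, and the commands are printed rather than executed so the sketch is safe to run even without rclone installed.

```shell
#!/bin/sh
# Illustrative rclone invocations built from flags in the list above.
# "remote:" and the local paths are placeholders -- substitute your own.

# Copy a few new files into a large destination without listing the whole
# destination first; --no-traverse looks up each file individually instead.
CMD_COPY="rclone copy --no-traverse --progress ./photos remote:backup/photos"

# Move only files older than a week, capping bandwidth at 1 MByte/s.
CMD_MOVE="rclone move --min-age 1w --bwlimit 1M ./logs remote:archive/logs"

# Print the commands instead of running them.
echo "$CMD_COPY"
echo "$CMD_MOVE"
```

Duration flags such as `--min-age` accept the suffixes listed above (ms|s|m|h|d|w|M|y), and size flags such as `--bwlimit` accept b|k|M|G.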
### SEE ALSO
* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
###### Auto generated by spf13/cobra on 9-Feb-2019

View File

@ -1,5 +1,5 @@
---
date: 2019-02-09T10:42:18Z
title: "rclone dbhashsum"
slug: rclone_dbhashsum
url: /commands/rclone_dbhashsum/
@ -30,285 +30,303 @@ rclone dbhashsum remote:path [flags]
### Options inherited from parent commands
```
--acd-auth-url string Auth server URL.
--acd-client-id string Amazon Application Client ID.
--acd-client-secret string Amazon Application Client Secret.
--acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-token-url string Token server url.
--acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--alias-remote string Remote or path to alias.
--ask-password Allow prompt for password for encrypted configuration. (default true)
--auto-confirm If enabled, do not request console confirmation.
--azureblob-access-tier string Access tier of blob: hot, cool or archive.
--azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
--azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M)
--azureblob-endpoint string Endpoint for the service
--azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
--azureblob-list-chunk int Size of blob list. (default 5000)
--azureblob-sas-url string SAS URL for container level access only
--azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M)
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--b2-disable-checksum Disable checksums for large (> upload cutoff) files
--b2-endpoint string Endpoint for the service.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-key string Application Key
--b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
--b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M)
--b2-versions Include old versions in directory listings.
--backup-dir string Make backups into hierarchy based in DIR.
--bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
--box-client-id string Box App Client Id.
--box-client-secret string Box App Client Secret
--box-commit-retries int Max number of times to try committing a multipart file. (default 100)
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
--buffer-size SizeSuffix In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
--cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend")
--cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M)
--cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G)
--cache-db-path string Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend")
--cache-db-purge Clear all the cached data for this remote on start.
--cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string Directory rclone will use for caching. (default "$HOME/.cache/rclone")
--cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
--cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server
--cache-plex-password string The password of the Plex user
--cache-plex-url string The URL of the Plex server
--cache-plex-username string The username of the Plex user
--cache-read-retries int How many times to retry a read from a cache storage. (default 10)
--cache-remote string Remote to cache.
--cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1)
--cache-tmp-upload-path string Directory to keep temporary files until they are uploaded.
--cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
--cache-workers int How many workers should run in parallel to download chunks. (default 4)
--cache-writes Cache file data on writes through the FS
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum (if available) & size, not mod-time & size
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
-L, --copy-links Follow symlinks and copy the pointed to item.
--cpuprofile string Write cpu profile to file
--crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--crypt-password string Password or pass phrase for encryption.
--crypt-password2 string Password or pass phrase for salt. Optional but recommended.
--crypt-remote string Remote to encrypt/decrypt.
--crypt-show-mapping For all files listed show how the names encrypt.
--delete-after When synchronizing, delete files on destination after transferring (default)
--delete-before When synchronizing, delete files on destination before transferring
--delete-during When synchronizing, delete files during transfer
--delete-excluded Delete files on dest excluded from sync
--disable string Disable a comma separated list of features. Use help to see a list.
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
--drive-alternate-export Use alternate export URLs for google documents export.,
--drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M)
--drive-client-id string Google Application Client Id
--drive-client-secret string Google Application Client Secret
--drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-formats string Deprecated: see export_formats
--drive-impersonate string Impersonate this user when using a service account.
--drive-import-formats string Comma separated list of preferred formats for uploading Google docs.
--drive-keep-revision-forever Keep new head revision of each file forever.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-pacer-burst int Number of API calls to allow without sleeping. (default 100)
--drive-pacer-min-sleep Duration Minimum time to sleep between API calls. (default 100ms)
--drive-root-folder-id string ID of the root folder
--drive-scope string Scope that rclone should use when requesting access from drive.
--drive-service-account-credentials string Service Account Credentials JSON blob
--drive-service-account-file string Service Account Credentials JSON file path
--drive-shared-with-me Only show files that are shared with me.
--drive-skip-gdocs Skip google documents in all listings.
--drive-team-drive string ID of the Team Drive
--drive-trashed-only Only show files that are in the trash.
--drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off) --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M) --drive-use-created-date Use file created date instead of modified date.,
--dropbox-client-id string Dropbox App Client Id --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--dropbox-client-secret string Dropbox App Client Secret --drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off)
--dropbox-impersonate string Impersonate this user when using a business account. --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M)
-n, --dry-run Do a trial run with no permanent changes --dropbox-client-id string Dropbox App Client Id
--dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles --dropbox-client-secret string Dropbox App Client Secret
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info --dropbox-impersonate string Impersonate this user when using a business account.
--dump-headers Dump HTTP bodies - may contain sensitive info -n, --dry-run Do a trial run with no permanent changes
--exclude stringArray Exclude files matching pattern --dump DumpFlags List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--exclude-from stringArray Read exclude patterns from file --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--exclude-if-present string Exclude directories if filename is present --dump-headers Dump HTTP bodies - may contain sensitive info
--fast-list Use recursive list if available. Uses more memory but fewer transactions. --exclude stringArray Exclude files matching pattern
--files-from stringArray Read list of source-file names from file --exclude-from stringArray Read exclude patterns from file
-f, --filter stringArray Add a file-filtering rule --exclude-if-present string Exclude directories if filename is present
--filter-from stringArray Read filtering patterns from a file --fast-list Use recursive list if available. Uses more memory but fewer transactions.
--ftp-host string FTP host to connect to --files-from stringArray Read list of source-file names from file
--ftp-pass string FTP password -f, --filter stringArray Add a file-filtering rule
--ftp-port string FTP port, leave blank to use default (21) --filter-from stringArray Read filtering patterns from a file
--ftp-user string FTP username, leave blank for current username, $USER --ftp-host string FTP host to connect to
--gcs-bucket-acl string Access Control List for new buckets. --ftp-pass string FTP password
--gcs-client-id string Google Application Client Id --ftp-port string FTP port, leave blank to use default (21)
--gcs-client-secret string Google Application Client Secret --ftp-user string FTP username, leave blank for current username, $USER
--gcs-location string Location for the newly created buckets. --gcs-bucket-acl string Access Control List for new buckets.
--gcs-object-acl string Access Control List for new objects. --gcs-client-id string Google Application Client Id
--gcs-project-number string Project number. --gcs-client-secret string Google Application Client Secret
--gcs-service-account-file string Service Account Credentials JSON file path --gcs-location string Location for the newly created buckets.
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage. --gcs-object-acl string Access Control List for new objects.
--http-url string URL of http host to connect to --gcs-project-number string Project number.
--hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) --gcs-service-account-file string Service Account Credentials JSON file path
--hubic-client-id string Hubic Client Id --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--hubic-client-secret string Hubic Client Secret --http-url string URL of http host to connect to
--ignore-case Ignore case in filters (case insensitive) --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--ignore-checksum Skip post copy check of checksums. --hubic-client-id string Hubic Client Id
--ignore-errors delete even if there are I/O errors --hubic-client-secret string Hubic Client Secret
--ignore-existing Skip all files that exist on destination --hubic-no-chunk Don't chunk files during streaming upload.
--ignore-size Ignore size when skipping use mod-time or checksum. --ignore-case Ignore case in filters (case insensitive)
-I, --ignore-times Don't skip files that match size and time - transfer all files --ignore-checksum Skip post copy check of checksums.
--immutable Do not modify files. Fail if existing files have been modified. --ignore-errors delete even if there are I/O errors
--include stringArray Include files matching pattern --ignore-existing Skip all files that exist on destination
--include-from stringArray Read include patterns from file --ignore-size Ignore size when skipping use mod-time or checksum.
--jottacloud-hard-delete Delete files permanently rather than putting them into the trash. -I, --ignore-times Don't skip files that match size and time - transfer all files
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M) --immutable Do not modify files. Fail if existing files have been modified.
--jottacloud-mountpoint string The mountpoint to use. --include stringArray Include files matching pattern
--jottacloud-pass string Password. --include-from stringArray Read include patterns from file
--jottacloud-unlink Remove existing public link to file/folder with link command rather than creating. --jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
--jottacloud-user string User Name --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--local-no-check-updated Don't check to see if the files change during upload --jottacloud-mountpoint string The mountpoint to use.
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated) --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
--local-nounc string Disable UNC (long path names) conversion on Windows --jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fail's. (default 10M)
--log-file string Log everything to this file --jottacloud-user string User Name:
--log-format string Comma separated list of log format options (default "date,time") -l, --links Translate symlinks to/from regular files with a '.rclonelink' extension
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") --local-no-check-updated Don't check to see if the files change during upload
--low-level-retries int Number of low level retries to do. (default 10) --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) --local-nounc string Disable UNC (long path names) conversion on Windows
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000) --log-file string Log everything to this file
--max-delete int When synchronizing, limit the number of deletes (default -1) --log-format string Comma separated list of log format options (default "date,time")
--max-depth int If set limits the recursion depth to this. (default -1) --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) --low-level-retries int Number of low level retries to do. (default 10)
--max-transfer int Maximum size of data to transfer. (default off) --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--mega-debug Output more debug from Mega. --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--mega-hard-delete Delete files permanently rather than putting them into the trash. --max-delete int When synchronizing, limit the number of deletes (default -1)
--mega-pass string Password. --max-depth int If set limits the recursion depth to this. (default -1)
--mega-user string User name --max-size SizeSuffix Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--memprofile string Write memory profile to file --max-transfer SizeSuffix Maximum size of data to transfer. (default off)
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) --mega-debug Output more debug from Mega.
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) --mega-hard-delete Delete files permanently rather than putting them into the trash.
--modify-window duration Max time diff to be considered the same (default 1ns) --mega-pass string Password.
--no-check-certificate Do not verify the server SSL certificate. Insecure. --mega-user string User name
--no-gzip-encoding Don't set Accept-Encoding: gzip. --memprofile string Write memory profile to file
--no-traverse Obsolete - does nothing. --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--no-update-modtime Don't update destination mod-time if files identical. --min-size SizeSuffix Only transfer files bigger than this in k or suffix b|k|M|G (default off)
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only). --modify-window duration Max time diff to be considered the same (default 1ns)
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M) --no-check-certificate Do not verify the server SSL certificate. Insecure.
--onedrive-client-id string Microsoft App Client Id --no-gzip-encoding Don't set Accept-Encoding: gzip.
--onedrive-client-secret string Microsoft App Client Secret --no-traverse Don't traverse destination file system on copy.
--onedrive-drive-id string The ID of the drive to use --no-update-modtime Don't update destination mod-time if files identical.
--onedrive-drive-type string The type of the drive ( personal | business | documentLibrary ) -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-expose-onenote-files Set to make OneNote files show up in directory listings. --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--opendrive-password string Password. --onedrive-client-id string Microsoft App Client Id
--opendrive-username string Username --onedrive-client-secret string Microsoft App Client Secret
--pcloud-client-id string Pcloud App Client Id --onedrive-drive-id string The ID of the drive to use
--pcloud-client-secret string Pcloud App Client Secret --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
-P, --progress Show progress during transfer. --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
--qingstor-access-key-id string QingStor Access Key ID --opendrive-password string Password.
--qingstor-connection-retries int Number of connection retries. (default 3) --opendrive-username string Username
--qingstor-endpoint string Enter a endpoint URL to connection QingStor API. --pcloud-client-id string Pcloud App Client Id
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank. --pcloud-client-secret string Pcloud App Client Secret
--qingstor-secret-access-key string QingStor Secret Access Key (password) -P, --progress Show progress during transfer.
--qingstor-zone string Zone to connect to. --qingstor-access-key-id string QingStor Access Key ID
-q, --quiet Print as little stuff as possible --qingstor-chunk-size SizeSuffix Chunk size to use for uploading. (default 4M)
--rc Enable the remote control server. --qingstor-connection-retries int Number of connection retries. (default 3)
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") --qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate) --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
--rc-client-ca string Client certificate authority to verify clients with --qingstor-secret-access-key string QingStor Secret Access Key (password)
--rc-files string Path to local files to serve on the HTTP server. --qingstor-upload-concurrency int Concurrency for multipart uploads. (default 1)
--rc-htpasswd string htpasswd file - if not provided no authentication is done --qingstor-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M)
--rc-key string SSL PEM Private key --qingstor-zone string Zone to connect to.
--rc-max-header-bytes int Maximum size of request header (default 4096) -q, --quiet Print as little stuff as possible
--rc-no-auth Don't require auth for certain methods. --rc Enable the remote control server.
--rc-pass string Password for authentication. --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
--rc-realm string realm for authentication (default "rclone") --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--rc-serve Enable the serving of remote objects. --rc-client-ca string Client certificate authority to verify clients with
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) --rc-files string Path to local files to serve on the HTTP server.
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) --rc-htpasswd string htpasswd file - if not provided no authentication is done
--rc-user string User name for authentication. --rc-key string SSL PEM Private key
--retries int Retry operations this many times if they fail (default 3) --rc-max-header-bytes int Maximum size of request header (default 4096)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) --rc-no-auth Don't require auth for certain methods.
--s3-access-key-id string AWS Access Key ID. --rc-pass string Password for authentication.
--s3-acl string Canned ACL used when creating buckets and storing or copying objects. --rc-realm string realm for authentication (default "rclone")
--s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M) --rc-serve Enable the serving of remote objects.
--s3-disable-checksum Don't store MD5 checksum with object metadata --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--s3-endpoint string Endpoint for S3 API. --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). --rc-user string User name for authentication.
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true) --retries int Retry operations this many times if they fail (default 3)
--s3-location-constraint string Location constraint - must be set to match the Region. --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--s3-provider string Choose your S3 provider. --s3-access-key-id string AWS Access Key ID.
--s3-region string Region to connect to. --s3-acl string Canned ACL used when creating buckets and storing or copying objects.
--s3-secret-access-key string AWS Secret Access Key (password) --s3-bucket-acl string Canned ACL used when creating buckets.
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
--s3-session-token string An AWS session token --s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. --s3-endpoint string Endpoint for S3 API.
--s3-storage-class string The storage class to use when storing new objects in S3. --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2) --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--s3-v2-auth If true use v2 authentication. --s3-location-constraint string Location constraint - must be set to match the Region.
--sftp-ask-password Allow asking for SFTP password when needed. --s3-provider string Choose your S3 provider.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. --s3-region string Region to connect to.
--sftp-host string SSH host to connect to --s3-secret-access-key string AWS Secret Access Key (password)
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent. --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--sftp-pass string SSH password, leave blank to use ssh-agent. --s3-session-token string An AWS session token
--sftp-path-override string Override path used by SSH connection. --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--sftp-port string SSH port, leave blank to use default (22) --s3-storage-class string The storage class to use when storing new objects in S3.
--sftp-set-modtime Set the modified time on the remote if set. (default true) --s3-upload-concurrency int Concurrency for multipart uploads. (default 4)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker. --s3-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M)
--sftp-user string SSH username, leave blank for current username, ncw --s3-v2-auth If true use v2 authentication.
--size-only Skip based on size only, not mod-time or checksum --sftp-ask-password Allow asking for SFTP password when needed.
--skip-links Don't warn about skipped symlinks. --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) --sftp-host string SSH host to connect to
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40) --sftp-key-file string Path to PEM-encoded private key file, leave blank or set key-use-agent to use ssh-agent.
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") --sftp-key-file-pass string The passphrase to decrypt the PEM-encoded private key file.
--stats-one-line Make the stats fit on one line. --sftp-key-use-agent When set forces the usage of the ssh-agent.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") --sftp-pass string SSH password, leave blank to use ssh-agent.
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) --sftp-path-override string Override path used by SSH connection.
--suffix string Suffix for use with --backup-dir. --sftp-port string SSH port, leave blank to use default (22)
--swift-auth string Authentication URL for server (OS_AUTH_URL). --sftp-set-modtime Set the modified time on the remote if set. (default true)
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) --sftp-user string SSH username, leave blank for current username, ncw
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) --size-only Skip based on size only, not mod-time or checksum
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) --skip-links Don't warn about skipped symlinks.
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form. --stats-file-name-length int Max file name length in stats. 0 for no limit (default 45)
--swift-key string API key or password (OS_PASSWORD). --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--swift-region string Region name - optional (OS_REGION_NAME) --stats-one-line Make the stats fit on one line.
--swift-storage-policy string The storage policy to use when creating a new container --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL) --streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) --suffix string Suffix for use with --backup-dir.
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) --swift-application-credential-id string Application Credential ID (OS_APPLICATION_CREDENTIAL_ID)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) --swift-application-credential-name string Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME)
--swift-user string User name to log in (OS_USERNAME). --swift-application-credential-secret string Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET)
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). --swift-auth string Authentication URL for server (OS_AUTH_URL).
--syslog Use Syslog for logging --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--timeout duration IO idle timeout (default 5m0s) --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--tpslimit float Limit HTTP transactions per second to this. --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--track-renames When synchronizing, track file renames and do a server side move if possible --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--transfers int Number of file transfers to run in parallel. (default 4) --swift-key string API key or password (OS_PASSWORD).
--union-remotes string List of space separated remotes. --swift-no-chunk Don't chunk files during streaming upload.
-u, --update Skip files that are newer on the destination. --swift-region string Region name - optional (OS_REGION_NAME)
--use-server-modtime Use server modified time instead of object metadata --swift-storage-policy string The storage policy to use when creating a new container
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.45") --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
-v, --verbose count Print lots more stuff (repeat for more) --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon) --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--webdav-pass string Password. --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--webdav-url string URL of http host to connect to --swift-user string User name to log in (OS_USERNAME).
--webdav-user string User name --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--webdav-vendor string Name of the Webdav site/service/software you are using --syslog Use Syslog for logging
--yandex-client-id string Yandex Client Id --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--yandex-client-secret string Yandex Client Secret --timeout duration IO idle timeout (default 5m0s)
--yandex-unlink Remove existing public link to file/folder with link command rather than creating. --tpslimit float Limit HTTP transactions per second to this.
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
--union-remotes string List of space separated remotes.
-u, --update Skip files that are newer on the destination.
--use-cookies Enable session cookiejar.
--use-mmap Use mmap allocator (see docs).
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
--yandex-unlink Remove existing public link to file/folder with link command rather than creating.
```
### SEE ALSO
* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
###### Auto generated by spf13/cobra on 9-Feb-2019
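The global flags listed above can be combined freely on any rclone command. As a hedged sketch (the remote name `remote:backup` and the source path are placeholders, not taken from this commit), the `--no-traverse` flag restored in this v1.46 release is typically paired with the global filter and transfer flags like so:

```
# Copy only files modified in the last day into a large destination.
# --no-traverse avoids listing the whole destination first, which is the
# use case the v1.46 changelog describes (a few files into a big remote).
# --dry-run previews the transfer; -P shows live progress.
rclone copy --no-traverse --max-age 24h --transfers 8 --dry-run -P /path/to/src remote:backup
```

Dropping `--dry-run` performs the copy for real; every flag used here appears in the global flag list.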
@@ -1,5 +1,5 @@
---
date: 2019-02-09T10:42:18Z
title: "rclone dedupe"
slug: rclone_dedupe
url: /commands/rclone_dedupe/
@@ -106,285 +106,303 @@ rclone dedupe [mode] remote:path [flags]
### Options inherited from parent commands
```
      --acd-auth-url string  Auth server URL.
      --acd-client-id string  Amazon Application Client ID.
      --acd-client-secret string  Amazon Application Client Secret.
      --acd-templink-threshold SizeSuffix  Files >= this size will be downloaded via their tempLink. (default 9G)
      --acd-token-url string  Token server url.
      --acd-upload-wait-per-gb Duration  Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
      --alias-remote string  Remote or path to alias.
      --ask-password  Allow prompt for password for encrypted configuration. (default true)
      --auto-confirm  If enabled, do not request console confirmation.
      --azureblob-access-tier string  Access tier of blob: hot, cool or archive.
      --azureblob-account string  Storage Account Name (leave blank to use connection string or SAS URL)
      --azureblob-chunk-size SizeSuffix  Upload chunk size (<= 100MB). (default 4M)
      --azureblob-endpoint string  Endpoint for the service
      --azureblob-key string  Storage Account Key (leave blank to use connection string or SAS URL)
      --azureblob-list-chunk int  Size of blob list. (default 5000)
      --azureblob-sas-url string  SAS URL for container level access only
      --azureblob-upload-cutoff SizeSuffix  Cutoff for switching to chunked upload (<= 256MB). (default 256M)
      --b2-account string  Account ID or Application Key ID
      --b2-chunk-size SizeSuffix  Upload chunk size. Must fit in memory. (default 96M)
      --b2-disable-checksum  Disable checksums for large (> upload cutoff) files
      --b2-endpoint string  Endpoint for the service.
      --b2-hard-delete  Permanently delete files on remote removal, otherwise hide files.
      --b2-key string  Application Key
      --b2-test-mode string  A flag string for X-Bz-Test-Mode header for debugging.
      --b2-upload-cutoff SizeSuffix  Cutoff for switching to chunked upload. (default 200M)
      --b2-versions  Include old versions in directory listings.
      --backup-dir string  Make backups into hierarchy based in DIR.
      --bind string  Local address to bind to for outgoing connections, IPv4, IPv6 or name.
      --box-client-id string  Box App Client Id.
      --box-client-secret string  Box App Client Secret
      --box-commit-retries int  Max number of times to try committing a multipart file. (default 100)
--buffer-size int In memory buffer size when reading files for each --transfer. (default 16M) --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable. --buffer-size SizeSuffix In memory buffer size when reading files for each --transfer. (default 16M)
--cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s) --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming. --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
--cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend") --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
--cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M) --cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend")
--cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G) --cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M)
--cache-db-path string Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend") --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G)
--cache-db-purge Clear all the cached data for this remote on start. --cache-db-path string Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend")
--cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s) --cache-db-purge Clear all the cached data for this remote on start.
--cache-dir string Directory rclone will use for caching. (default "$HOME/.cache/rclone") --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s) --cache-dir string Directory rclone will use for caching. (default "$HOME/.cache/rclone")
--cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
--cache-plex-password string The password of the Plex user --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server
--cache-plex-url string The URL of the Plex server --cache-plex-password string The password of the Plex user
--cache-plex-username string The username of the Plex user --cache-plex-url string The URL of the Plex server
--cache-read-retries int How many times to retry a read from a cache storage. (default 10) --cache-plex-username string The username of the Plex user
--cache-remote string Remote to cache. --cache-read-retries int How many times to retry a read from a cache storage. (default 10)
--cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1) --cache-remote string Remote to cache.
--cache-tmp-upload-path string Directory to keep temporary files until they are uploaded. --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1)
--cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s) --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded.
--cache-workers int How many workers should run in parallel to download chunks. (default 4) --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
--cache-writes Cache file data on writes through the FS --cache-workers int How many workers should run in parallel to download chunks. (default 4)
--checkers int Number of checkers to run in parallel. (default 8) --cache-writes Cache file data on writes through the FS
-c, --checksum Skip based on checksum & size, not mod-time & size --checkers int Number of checkers to run in parallel. (default 8)
--config string Config file. (default "/home/ncw/.rclone.conf") -c, --checksum Skip based on checksum (if available) & size, not mod-time & size
--contimeout duration Connect timeout (default 1m0s) --config string Config file. (default "/home/ncw/.rclone.conf")
-L, --copy-links Follow symlinks and copy the pointed to item. --contimeout duration Connect timeout (default 1m0s)
--cpuprofile string Write cpu profile to file -L, --copy-links Follow symlinks and copy the pointed to item.
--crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true) --cpuprofile string Write cpu profile to file
--crypt-filename-encryption string How to encrypt the filenames. (default "standard") --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--crypt-password string Password or pass phrase for encryption. --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--crypt-password2 string Password or pass phrase for salt. Optional but recommended. --crypt-password string Password or pass phrase for encryption.
--crypt-remote string Remote to encrypt/decrypt. --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
--crypt-show-mapping For all files listed show how the names encrypt. --crypt-remote string Remote to encrypt/decrypt.
--delete-after When synchronizing, delete files on destination after transferring (default) --crypt-show-mapping For all files listed show how the names encrypt.
--delete-before When synchronizing, delete files on destination before transferring --delete-after When synchronizing, delete files on destination after transferring (default)
--delete-during When synchronizing, delete files during transfer --delete-before When synchronizing, delete files on destination before transferring
--delete-excluded Delete files on dest excluded from sync --delete-during When synchronizing, delete files during transfer
--disable string Disable a comma separated list of features. Use help to see a list. --delete-excluded Delete files on dest excluded from sync
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded. --disable string Disable a comma separated list of features. Use help to see a list.
--drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time. --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--drive-alternate-export Use alternate export URLs for google documents export., --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
--drive-auth-owner-only Only consider files owned by the authenticated user. --drive-alternate-export Use alternate export URLs for google documents export.,
--drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M) --drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-client-id string Google Application Client Id --drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M)
--drive-client-secret string Google Application Client Secret --drive-client-id string Google Application Client Id
--drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") --drive-client-secret string Google Application Client Secret
--drive-formats string Deprecated: see export_formats --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-impersonate string Impersonate this user when using a service account. --drive-formats string Deprecated: see export_formats
--drive-import-formats string Comma separated list of preferred formats for uploading Google docs. --drive-impersonate string Impersonate this user when using a service account.
--drive-keep-revision-forever Keep new head revision of each file forever. --drive-import-formats string Comma separated list of preferred formats for uploading Google docs.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000) --drive-keep-revision-forever Keep new head revision of each file forever.
--drive-root-folder-id string ID of the root folder --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-scope string Scope that rclone should use when requesting access from drive. --drive-pacer-burst int Number of API calls to allow without sleeping. (default 100)
--drive-service-account-credentials string Service Account Credentials JSON blob --drive-pacer-min-sleep Duration Minimum time to sleep between API calls. (default 100ms)
--drive-service-account-file string Service Account Credentials JSON file path --drive-root-folder-id string ID of the root folder
--drive-shared-with-me Only show files that are shared with me. --drive-scope string Scope that rclone should use when requesting access from drive.
--drive-skip-gdocs Skip google documents in all listings. --drive-service-account-credentials string Service Account Credentials JSON blob
--drive-team-drive string ID of the Team Drive --drive-service-account-file string Service Account Credentials JSON file path
--drive-trashed-only Only show files that are in the trash. --drive-shared-with-me Only show files that are shared with me.
--drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M) --drive-skip-gdocs Skip google documents in all listings.
--drive-use-created-date Use file created date instead of modified date., --drive-team-drive string ID of the Team Drive
--drive-use-trash Send files to the trash instead of deleting permanently. (default true) --drive-trashed-only Only show files that are in the trash.
--drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off) --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M) --drive-use-created-date Use file created date instead of modified date.,
--dropbox-client-id string Dropbox App Client Id --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--dropbox-client-secret string Dropbox App Client Secret --drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off)
--dropbox-impersonate string Impersonate this user when using a business account. --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M)
-n, --dry-run Do a trial run with no permanent changes --dropbox-client-id string Dropbox App Client Id
--dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles --dropbox-client-secret string Dropbox App Client Secret
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info --dropbox-impersonate string Impersonate this user when using a business account.
--dump-headers Dump HTTP bodies - may contain sensitive info -n, --dry-run Do a trial run with no permanent changes
--exclude stringArray Exclude files matching pattern --dump DumpFlags List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--exclude-from stringArray Read exclude patterns from file --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--exclude-if-present string Exclude directories if filename is present --dump-headers Dump HTTP bodies - may contain sensitive info
--fast-list Use recursive list if available. Uses more memory but fewer transactions. --exclude stringArray Exclude files matching pattern
--files-from stringArray Read list of source-file names from file --exclude-from stringArray Read exclude patterns from file
-f, --filter stringArray Add a file-filtering rule --exclude-if-present string Exclude directories if filename is present
--filter-from stringArray Read filtering patterns from a file --fast-list Use recursive list if available. Uses more memory but fewer transactions.
--ftp-host string FTP host to connect to --files-from stringArray Read list of source-file names from file
--ftp-pass string FTP password -f, --filter stringArray Add a file-filtering rule
--ftp-port string FTP port, leave blank to use default (21) --filter-from stringArray Read filtering patterns from a file
--ftp-user string FTP username, leave blank for current username, $USER --ftp-host string FTP host to connect to
--gcs-bucket-acl string Access Control List for new buckets. --ftp-pass string FTP password
--gcs-client-id string Google Application Client Id --ftp-port string FTP port, leave blank to use default (21)
--gcs-client-secret string Google Application Client Secret --ftp-user string FTP username, leave blank for current username, $USER
--gcs-location string Location for the newly created buckets. --gcs-bucket-acl string Access Control List for new buckets.
--gcs-object-acl string Access Control List for new objects. --gcs-client-id string Google Application Client Id
--gcs-project-number string Project number. --gcs-client-secret string Google Application Client Secret
--gcs-service-account-file string Service Account Credentials JSON file path --gcs-location string Location for the newly created buckets.
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage. --gcs-object-acl string Access Control List for new objects.
--http-url string URL of http host to connect to --gcs-project-number string Project number.
--hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) --gcs-service-account-file string Service Account Credentials JSON file path
--hubic-client-id string Hubic Client Id --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--hubic-client-secret string Hubic Client Secret --http-url string URL of http host to connect to
--ignore-case Ignore case in filters (case insensitive) --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--ignore-checksum Skip post copy check of checksums. --hubic-client-id string Hubic Client Id
--ignore-errors delete even if there are I/O errors --hubic-client-secret string Hubic Client Secret
--ignore-existing Skip all files that exist on destination --hubic-no-chunk Don't chunk files during streaming upload.
--ignore-size Ignore size when skipping use mod-time or checksum. --ignore-case Ignore case in filters (case insensitive)
-I, --ignore-times Don't skip files that match size and time - transfer all files --ignore-checksum Skip post copy check of checksums.
--immutable Do not modify files. Fail if existing files have been modified. --ignore-errors delete even if there are I/O errors
--include stringArray Include files matching pattern --ignore-existing Skip all files that exist on destination
--include-from stringArray Read include patterns from file --ignore-size Ignore size when skipping use mod-time or checksum.
--jottacloud-hard-delete Delete files permanently rather than putting them into the trash. -I, --ignore-times Don't skip files that match size and time - transfer all files
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M) --immutable Do not modify files. Fail if existing files have been modified.
--jottacloud-mountpoint string The mountpoint to use. --include stringArray Include files matching pattern
--jottacloud-pass string Password. --include-from stringArray Read include patterns from file
--jottacloud-unlink Remove existing public link to file/folder with link command rather than creating. --jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
--jottacloud-user string User Name --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--local-no-check-updated Don't check to see if the files change during upload --jottacloud-mountpoint string The mountpoint to use.
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated) --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
--local-nounc string Disable UNC (long path names) conversion on Windows --jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fail's. (default 10M)
--log-file string Log everything to this file --jottacloud-user string User Name:
--log-format string Comma separated list of log format options (default "date,time") -l, --links Translate symlinks to/from regular files with a '.rclonelink' extension
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") --local-no-check-updated Don't check to see if the files change during upload
--low-level-retries int Number of low level retries to do. (default 10) --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) --local-nounc string Disable UNC (long path names) conversion on Windows
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000) --log-file string Log everything to this file
--max-delete int When synchronizing, limit the number of deletes (default -1) --log-format string Comma separated list of log format options (default "date,time")
--max-depth int If set limits the recursion depth to this. (default -1) --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) --low-level-retries int Number of low level retries to do. (default 10)
--max-transfer int Maximum size of data to transfer. (default off) --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--mega-debug Output more debug from Mega. --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--mega-hard-delete Delete files permanently rather than putting them into the trash. --max-delete int When synchronizing, limit the number of deletes (default -1)
--mega-pass string Password. --max-depth int If set limits the recursion depth to this. (default -1)
--mega-user string User name --max-size SizeSuffix Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--memprofile string Write memory profile to file --max-transfer SizeSuffix Maximum size of data to transfer. (default off)
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) --mega-debug Output more debug from Mega.
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) --mega-hard-delete Delete files permanently rather than putting them into the trash.
--modify-window duration Max time diff to be considered the same (default 1ns) --mega-pass string Password.
--no-check-certificate Do not verify the server SSL certificate. Insecure. --mega-user string User name
--no-gzip-encoding Don't set Accept-Encoding: gzip. --memprofile string Write memory profile to file
--no-traverse Obsolete - does nothing. --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--no-update-modtime Don't update destination mod-time if files identical. --min-size SizeSuffix Only transfer files bigger than this in k or suffix b|k|M|G (default off)
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only). --modify-window duration Max time diff to be considered the same (default 1ns)
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M) --no-check-certificate Do not verify the server SSL certificate. Insecure.
--onedrive-client-id string Microsoft App Client Id --no-gzip-encoding Don't set Accept-Encoding: gzip.
--onedrive-client-secret string Microsoft App Client Secret --no-traverse Don't traverse destination file system on copy.
--onedrive-drive-id string The ID of the drive to use --no-update-modtime Don't update destination mod-time if files identical.
--onedrive-drive-type string The type of the drive ( personal | business | documentLibrary ) -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-expose-onenote-files Set to make OneNote files show up in directory listings. --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--opendrive-password string Password. --onedrive-client-id string Microsoft App Client Id
--opendrive-username string Username --onedrive-client-secret string Microsoft App Client Secret
--pcloud-client-id string Pcloud App Client Id --onedrive-drive-id string The ID of the drive to use
--pcloud-client-secret string Pcloud App Client Secret --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
-P, --progress Show progress during transfer. --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
--qingstor-access-key-id string QingStor Access Key ID --opendrive-password string Password.
--qingstor-connection-retries int Number of connection retries. (default 3) --opendrive-username string Username
--qingstor-endpoint string Enter a endpoint URL to connection QingStor API. --pcloud-client-id string Pcloud App Client Id
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank. --pcloud-client-secret string Pcloud App Client Secret
--qingstor-secret-access-key string QingStor Secret Access Key (password) -P, --progress Show progress during transfer.
--qingstor-zone string Zone to connect to. --qingstor-access-key-id string QingStor Access Key ID
-q, --quiet Print as little stuff as possible --qingstor-chunk-size SizeSuffix Chunk size to use for uploading. (default 4M)
--rc Enable the remote control server. --qingstor-connection-retries int Number of connection retries. (default 3)
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") --qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate) --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
--rc-client-ca string Client certificate authority to verify clients with --qingstor-secret-access-key string QingStor Secret Access Key (password)
--rc-files string Path to local files to serve on the HTTP server. --qingstor-upload-concurrency int Concurrency for multipart uploads. (default 1)
--rc-htpasswd string htpasswd file - if not provided no authentication is done --qingstor-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M)
--rc-key string SSL PEM Private key --qingstor-zone string Zone to connect to.
--rc-max-header-bytes int Maximum size of request header (default 4096) -q, --quiet Print as little stuff as possible
--rc-no-auth Don't require auth for certain methods. --rc Enable the remote control server.
--rc-pass string Password for authentication. --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
--rc-realm string realm for authentication (default "rclone") --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--rc-serve Enable the serving of remote objects. --rc-client-ca string Client certificate authority to verify clients with
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) --rc-files string Path to local files to serve on the HTTP server.
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) --rc-htpasswd string htpasswd file - if not provided no authentication is done
--rc-user string User name for authentication. --rc-key string SSL PEM Private key
--retries int Retry operations this many times if they fail (default 3) --rc-max-header-bytes int Maximum size of request header (default 4096)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) --rc-no-auth Don't require auth for certain methods.
--s3-access-key-id string AWS Access Key ID. --rc-pass string Password for authentication.
--s3-acl string Canned ACL used when creating buckets and storing or copying objects. --rc-realm string realm for authentication (default "rclone")
--s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M) --rc-serve Enable the serving of remote objects.
--s3-disable-checksum Don't store MD5 checksum with object metadata --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--s3-endpoint string Endpoint for S3 API. --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). --rc-user string User name for authentication.
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true) --retries int Retry operations this many times if they fail (default 3)
--s3-location-constraint string Location constraint - must be set to match the Region. --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--s3-provider string Choose your S3 provider. --s3-access-key-id string AWS Access Key ID.
--s3-region string Region to connect to. --s3-acl string Canned ACL used when creating buckets and storing or copying objects.
--s3-secret-access-key string AWS Secret Access Key (password) --s3-bucket-acl string Canned ACL used when creating buckets.
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
--s3-session-token string An AWS session token --s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. --s3-endpoint string Endpoint for S3 API.
--s3-storage-class string The storage class to use when storing new objects in S3. --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2) --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--s3-v2-auth If true use v2 authentication. --s3-location-constraint string Location constraint - must be set to match the Region.
--sftp-ask-password Allow asking for SFTP password when needed. --s3-provider string Choose your S3 provider.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. --s3-region string Region to connect to.
--sftp-host string SSH host to connect to --s3-secret-access-key string AWS Secret Access Key (password)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--s3-session-token string An AWS session token
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--s3-storage-class string The storage class to use when storing new objects in S3.
--s3-upload-concurrency int Concurrency for multipart uploads. (default 4)
--s3-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M)
--s3-v2-auth If true use v2 authentication.
--sftp-ask-password Allow asking for SFTP password when needed.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--sftp-host string SSH host to connect to
--sftp-key-file string Path to PEM-encoded private key file, leave blank or set key-use-agent to use ssh-agent.
--sftp-key-file-pass string The passphrase to decrypt the PEM-encoded private key file.
--sftp-key-use-agent When set forces the usage of the ssh-agent.
--sftp-pass string SSH password, leave blank to use ssh-agent.
--sftp-path-override string Override path used by SSH connection.
--sftp-port string SSH port, leave blank to use default (22)
--sftp-set-modtime Set the modified time on the remote if set. (default true)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--sftp-user string SSH username, leave blank for current username, ncw
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 45)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-one-line Make the stats fit on one line.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string Suffix for use with --backup-dir.
--swift-application-credential-id string Application Credential ID (OS_APPLICATION_CREDENTIAL_ID)
--swift-application-credential-name string Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME)
--swift-application-credential-secret string Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET)
--swift-auth string Authentication URL for server (OS_AUTH_URL).
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--swift-key string API key or password (OS_PASSWORD).
--swift-no-chunk Don't chunk files during streaming upload.
--swift-region string Region name - optional (OS_REGION_NAME)
--swift-storage-policy string The storage policy to use when creating a new container
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--swift-user string User name to log in (OS_USERNAME).
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
--tpslimit float Limit HTTP transactions per second to this.
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
--union-remotes string List of space separated remotes.
-u, --update Skip files that are newer on the destination.
--use-cookies Enable session cookiejar.
--use-mmap Use mmap allocator (see docs).
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
--yandex-unlink Remove existing public link to file/folder with link command rather than creating.
```
### SEE ALSO
* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
###### Auto generated by spf13/cobra on 9-Feb-2019
View File
@@ -1,5 +1,5 @@
---
date: 2019-02-09T10:42:18Z
title: "rclone delete"
slug: rclone_delete
url: /commands/rclone_delete/
@@ -46,285 +46,303 @@ rclone delete remote:path [flags]
### Options inherited from parent commands
```
--acd-auth-url string Auth server URL.
--acd-client-id string Amazon Application Client ID.
--acd-client-secret string Amazon Application Client Secret.
--acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-token-url string Token server url.
--acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--alias-remote string Remote or path to alias.
--ask-password Allow prompt for password for encrypted configuration. (default true)
--auto-confirm If enabled, do not request console confirmation.
--azureblob-access-tier string Access tier of blob: hot, cool or archive.
--azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
--azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M)
--azureblob-endpoint string Endpoint for the service
--azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
--azureblob-list-chunk int Size of blob list. (default 5000)
--azureblob-sas-url string SAS URL for container level access only
--azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M)
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--b2-disable-checksum Disable checksums for large (> upload cutoff) files
--b2-endpoint string Endpoint for the service.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-key string Application Key
--b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
--b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M)
--b2-versions Include old versions in directory listings.
--backup-dir string Make backups into hierarchy based in DIR.
--bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
--box-client-id string Box App Client Id.
--box-client-secret string Box App Client Secret
--box-commit-retries int Max number of times to try committing a multipart file. (default 100)
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
--buffer-size SizeSuffix In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
--cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend")
--cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M)
--cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G)
--cache-db-path string Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend")
--cache-db-purge Clear all the cached data for this remote on start.
--cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string Directory rclone will use for caching. (default "$HOME/.cache/rclone")
--cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
--cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server
--cache-plex-password string The password of the Plex user
--cache-plex-url string The URL of the Plex server
--cache-plex-username string The username of the Plex user
--cache-read-retries int How many times to retry a read from a cache storage. (default 10)
--cache-remote string Remote to cache.
--cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1)
--cache-tmp-upload-path string Directory to keep temporary files until they are uploaded.
--cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
--cache-workers int How many workers should run in parallel to download chunks. (default 4)
--cache-writes Cache file data on writes through the FS
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum (if available) & size, not mod-time & size
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
-L, --copy-links Follow symlinks and copy the pointed to item.
--cpuprofile string Write cpu profile to file
--crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--crypt-password string Password or pass phrase for encryption.
--crypt-password2 string Password or pass phrase for salt. Optional but recommended.
--crypt-remote string Remote to encrypt/decrypt.
--crypt-show-mapping For all files listed show how the names encrypt.
--delete-after When synchronizing, delete files on destination after transferring (default)
--delete-before When synchronizing, delete files on destination before transferring
--delete-during When synchronizing, delete files during transfer
--delete-excluded Delete files on dest excluded from sync
--disable string Disable a comma separated list of features. Use help to see a list.
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
--drive-alternate-export Use alternate export URLs for google documents export.,
--drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M)
--drive-client-id string Google Application Client Id
--drive-client-secret string Google Application Client Secret
--drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-formats string Deprecated: see export_formats
--drive-impersonate string Impersonate this user when using a service account.
--drive-import-formats string Comma separated list of preferred formats for uploading Google docs.
--drive-keep-revision-forever Keep new head revision of each file forever.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-pacer-burst int Number of API calls to allow without sleeping. (default 100)
--drive-pacer-min-sleep Duration Minimum time to sleep between API calls. (default 100ms)
--drive-root-folder-id string ID of the root folder
--drive-scope string Scope that rclone should use when requesting access from drive.
--drive-service-account-credentials string Service Account Credentials JSON blob
--drive-service-account-file string Service Account Credentials JSON file path
--drive-shared-with-me Only show files that are shared with me.
--drive-skip-gdocs Skip google documents in all listings.
--drive-team-drive string ID of the Team Drive
--drive-trashed-only Only show files that are in the trash.
--drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--drive-use-created-date Use file created date instead of modified date.,
--drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off)
--dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M)
--dropbox-client-id string Dropbox App Client Id
--dropbox-client-secret string Dropbox App Client Secret
--dropbox-impersonate string Impersonate this user when using a business account.
-n, --dry-run Do a trial run with no permanent changes
--dump DumpFlags List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--dump-headers Dump HTTP bodies - may contain sensitive info
--exclude stringArray Exclude files matching pattern
--exclude-from stringArray Read exclude patterns from file
--exclude-if-present string Exclude directories if filename is present
--fast-list Use recursive list if available. Uses more memory but fewer transactions.
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
--ftp-host string FTP host to connect to
--ftp-pass string FTP password
--ftp-port string FTP port, leave blank to use default (21)
--ftp-user string FTP username, leave blank for current username, $USER
--gcs-bucket-acl string Access Control List for new buckets.
--gcs-client-id string Google Application Client Id
--gcs-client-secret string Google Application Client Secret
--gcs-location string Location for the newly created buckets.
--gcs-object-acl string Access Control List for new objects.
--gcs-project-number string Project number.
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-url string URL of http host to connect to
--hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--hubic-client-id string Hubic Client Id
--hubic-client-secret string Hubic Client Secret
--hubic-no-chunk Don't chunk files during streaming upload.
--ignore-case Ignore case in filters (case insensitive)
--ignore-checksum Skip post copy check of checksums.
--ignore-errors delete even if there are I/O errors
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
--immutable Do not modify files. Fail if existing files have been modified.
--include stringArray Include files matching pattern
--include-from stringArray Read include patterns from file
--jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--jottacloud-mountpoint string The mountpoint to use.
--jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
--jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fail's. (default 10M)
--jottacloud-user string User Name:
-l, --links Translate symlinks to/from regular files with a '.rclonelink' extension
--local-no-check-updated Don't check to see if the files change during upload
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
--local-nounc string Disable UNC (long path names) conversion on Windows
--log-file string Log everything to this file
--log-format string Comma separated list of log format options (default "date,time")
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
--max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--max-delete int When synchronizing, limit the number of deletes (default -1)
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size SizeSuffix Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--max-transfer SizeSuffix Maximum size of data to transfer. (default off)
--mega-debug Output more debug from Mega.
--mega-hard-delete Delete files permanently rather than putting them into the trash.
--mega-pass string Password.
--mega-user string User name
--memprofile string Write memory profile to file
--min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size SizeSuffix Only transfer files bigger than this in k or suffix b|k|M|G (default off)
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Don't traverse destination file system on copy.
--no-update-modtime Don't update destination mod-time if files identical.
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--onedrive-client-id string Microsoft App Client Id
--onedrive-client-secret string Microsoft App Client Secret
--pcloud-client-id string Pcloud App Client Id --onedrive-drive-id string The ID of the drive to use
--pcloud-client-secret string Pcloud App Client Secret --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
-P, --progress Show progress during transfer. --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
--qingstor-access-key-id string QingStor Access Key ID --opendrive-password string Password.
--qingstor-connection-retries int Number of connection retries. (default 3) --opendrive-username string Username
--qingstor-endpoint string Enter a endpoint URL to connection QingStor API. --pcloud-client-id string Pcloud App Client Id
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank. --pcloud-client-secret string Pcloud App Client Secret
--qingstor-secret-access-key string QingStor Secret Access Key (password) -P, --progress Show progress during transfer.
--qingstor-zone string Zone to connect to. --qingstor-access-key-id string QingStor Access Key ID
-q, --quiet Print as little stuff as possible --qingstor-chunk-size SizeSuffix Chunk size to use for uploading. (default 4M)
--rc Enable the remote control server. --qingstor-connection-retries int Number of connection retries. (default 3)
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") --qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate) --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
--rc-client-ca string Client certificate authority to verify clients with --qingstor-secret-access-key string QingStor Secret Access Key (password)
--rc-files string Path to local files to serve on the HTTP server. --qingstor-upload-concurrency int Concurrency for multipart uploads. (default 1)
--rc-htpasswd string htpasswd file - if not provided no authentication is done --qingstor-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M)
--rc-key string SSL PEM Private key --qingstor-zone string Zone to connect to.
--rc-max-header-bytes int Maximum size of request header (default 4096) -q, --quiet Print as little stuff as possible
--rc-no-auth Don't require auth for certain methods. --rc Enable the remote control server.
--rc-pass string Password for authentication. --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
--rc-realm string realm for authentication (default "rclone") --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--rc-serve Enable the serving of remote objects. --rc-client-ca string Client certificate authority to verify clients with
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) --rc-files string Path to local files to serve on the HTTP server.
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) --rc-htpasswd string htpasswd file - if not provided no authentication is done
--rc-user string User name for authentication. --rc-key string SSL PEM Private key
--retries int Retry operations this many times if they fail (default 3) --rc-max-header-bytes int Maximum size of request header (default 4096)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) --rc-no-auth Don't require auth for certain methods.
--s3-access-key-id string AWS Access Key ID. --rc-pass string Password for authentication.
--s3-acl string Canned ACL used when creating buckets and storing or copying objects. --rc-realm string realm for authentication (default "rclone")
--s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M) --rc-serve Enable the serving of remote objects.
--s3-disable-checksum Don't store MD5 checksum with object metadata --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--s3-endpoint string Endpoint for S3 API. --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). --rc-user string User name for authentication.
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true) --retries int Retry operations this many times if they fail (default 3)
--s3-location-constraint string Location constraint - must be set to match the Region. --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--s3-provider string Choose your S3 provider. --s3-access-key-id string AWS Access Key ID.
--s3-region string Region to connect to. --s3-acl string Canned ACL used when creating buckets and storing or copying objects.
--s3-secret-access-key string AWS Secret Access Key (password) --s3-bucket-acl string Canned ACL used when creating buckets.
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
--s3-session-token string An AWS session token --s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. --s3-endpoint string Endpoint for S3 API.
--s3-storage-class string The storage class to use when storing new objects in S3. --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2) --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--s3-v2-auth If true use v2 authentication. --s3-location-constraint string Location constraint - must be set to match the Region.
--sftp-ask-password Allow asking for SFTP password when needed. --s3-provider string Choose your S3 provider.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. --s3-region string Region to connect to.
--sftp-host string SSH host to connect to --s3-secret-access-key string AWS Secret Access Key (password)
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent. --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--sftp-pass string SSH password, leave blank to use ssh-agent. --s3-session-token string An AWS session token
--sftp-path-override string Override path used by SSH connection. --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--sftp-port string SSH port, leave blank to use default (22) --s3-storage-class string The storage class to use when storing new objects in S3.
--sftp-set-modtime Set the modified time on the remote if set. (default true) --s3-upload-concurrency int Concurrency for multipart uploads. (default 4)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker. --s3-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M)
--sftp-user string SSH username, leave blank for current username, ncw --s3-v2-auth If true use v2 authentication.
--size-only Skip based on size only, not mod-time or checksum --sftp-ask-password Allow asking for SFTP password when needed.
--skip-links Don't warn about skipped symlinks. --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) --sftp-host string SSH host to connect to
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40) --sftp-key-file string Path to PEM-encoded private key file, leave blank or set key-use-agent to use ssh-agent.
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") --sftp-key-file-pass string The passphrase to decrypt the PEM-encoded private key file.
--stats-one-line Make the stats fit on one line. --sftp-key-use-agent When set forces the usage of the ssh-agent.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") --sftp-pass string SSH password, leave blank to use ssh-agent.
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) --sftp-path-override string Override path used by SSH connection.
--suffix string Suffix for use with --backup-dir. --sftp-port string SSH port, leave blank to use default (22)
--swift-auth string Authentication URL for server (OS_AUTH_URL). --sftp-set-modtime Set the modified time on the remote if set. (default true)
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) --sftp-user string SSH username, leave blank for current username, ncw
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) --size-only Skip based on size only, not mod-time or checksum
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) --skip-links Don't warn about skipped symlinks.
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form. --stats-file-name-length int Max file name length in stats. 0 for no limit (default 45)
--swift-key string API key or password (OS_PASSWORD). --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--swift-region string Region name - optional (OS_REGION_NAME) --stats-one-line Make the stats fit on one line.
--swift-storage-policy string The storage policy to use when creating a new container --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL) --streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) --suffix string Suffix for use with --backup-dir.
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) --swift-application-credential-id string Application Credential ID (OS_APPLICATION_CREDENTIAL_ID)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) --swift-application-credential-name string Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME)
--swift-user string User name to log in (OS_USERNAME). --swift-application-credential-secret string Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET)
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). --swift-auth string Authentication URL for server (OS_AUTH_URL).
--syslog Use Syslog for logging --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--timeout duration IO idle timeout (default 5m0s) --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--tpslimit float Limit HTTP transactions per second to this. --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--track-renames When synchronizing, track file renames and do a server side move if possible --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--transfers int Number of file transfers to run in parallel. (default 4) --swift-key string API key or password (OS_PASSWORD).
--union-remotes string List of space separated remotes. --swift-no-chunk Don't chunk files during streaming upload.
-u, --update Skip files that are newer on the destination. --swift-region string Region name - optional (OS_REGION_NAME)
--use-server-modtime Use server modified time instead of object metadata --swift-storage-policy string The storage policy to use when creating a new container
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.45") --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
-v, --verbose count Print lots more stuff (repeat for more) --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon) --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--webdav-pass string Password. --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--webdav-url string URL of http host to connect to --swift-user string User name to log in (OS_USERNAME).
--webdav-user string User name --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--webdav-vendor string Name of the Webdav site/service/software you are using --syslog Use Syslog for logging
--yandex-client-id string Yandex Client Id --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--yandex-client-secret string Yandex Client Secret --timeout duration IO idle timeout (default 5m0s)
--yandex-unlink Remove existing public link to file/folder with link command rather than creating. --tpslimit float Limit HTTP transactions per second to this.
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
--union-remotes string List of space separated remotes.
-u, --update Skip files that are newer on the destination.
--use-cookies Enable session cookiejar.
--use-mmap Use mmap allocator (see docs).
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
--yandex-unlink Remove existing public link to file/folder with link command rather than creating.
```
### SEE ALSO
* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
###### Auto generated by spf13/cobra on 9-Feb-2019
@@ -1,5 +1,5 @@
---
date: 2019-02-09T10:42:18Z
title: "rclone deletefile"
slug: rclone_deletefile
url: /commands/rclone_deletefile/
@@ -29,285 +29,303 @@ rclone deletefile remote:path [flags]
### Options inherited from parent commands
```
      --acd-auth-url string   Auth server URL.
      --acd-client-id string   Amazon Application Client ID.
      --acd-client-secret string   Amazon Application Client Secret.
      --acd-templink-threshold SizeSuffix   Files >= this size will be downloaded via their tempLink. (default 9G)
      --acd-token-url string   Token server url.
      --acd-upload-wait-per-gb Duration   Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
      --alias-remote string   Remote or path to alias.
      --ask-password   Allow prompt for password for encrypted configuration. (default true)
      --auto-confirm   If enabled, do not request console confirmation.
      --azureblob-access-tier string   Access tier of blob: hot, cool or archive.
      --azureblob-account string   Storage Account Name (leave blank to use connection string or SAS URL)
      --azureblob-chunk-size SizeSuffix   Upload chunk size (<= 100MB). (default 4M)
      --azureblob-endpoint string   Endpoint for the service
      --azureblob-key string   Storage Account Key (leave blank to use connection string or SAS URL)
      --azureblob-list-chunk int   Size of blob list. (default 5000)
      --azureblob-sas-url string   SAS URL for container level access only
      --azureblob-upload-cutoff SizeSuffix   Cutoff for switching to chunked upload (<= 256MB). (default 256M)
      --b2-account string   Account ID or Application Key ID
      --b2-chunk-size SizeSuffix   Upload chunk size. Must fit in memory. (default 96M)
      --b2-disable-checksum   Disable checksums for large (> upload cutoff) files
      --b2-endpoint string   Endpoint for the service.
      --b2-hard-delete   Permanently delete files on remote removal, otherwise hide files.
      --b2-key string   Application Key
      --b2-test-mode string   A flag string for X-Bz-Test-Mode header for debugging.
      --b2-upload-cutoff SizeSuffix   Cutoff for switching to chunked upload. (default 200M)
      --b2-versions   Include old versions in directory listings.
      --backup-dir string   Make backups into hierarchy based in DIR.
      --bind string   Local address to bind to for outgoing connections, IPv4, IPv6 or name.
      --box-client-id string   Box App Client Id.
      --box-client-secret string   Box App Client Secret
      --box-commit-retries int   Max number of times to try committing a multipart file. (default 100)
      --box-upload-cutoff SizeSuffix   Cutoff for switching to multipart upload (>= 50MB). (default 50M)
      --buffer-size SizeSuffix   In memory buffer size when reading files for each --transfer. (default 16M)
      --bwlimit BwTimetable   Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
      --cache-chunk-clean-interval Duration   How often should the cache perform cleanups of the chunk storage. (default 1m0s)
      --cache-chunk-no-memory   Disable the in-memory cache for storing chunks during streaming.
      --cache-chunk-path string   Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend")
      --cache-chunk-size SizeSuffix   The size of a chunk (partial file data). (default 5M)
      --cache-chunk-total-size SizeSuffix   The total size that the chunks can take up on the local disk. (default 10G)
      --cache-db-path string   Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend")
      --cache-db-purge   Clear all the cached data for this remote on start.
      --cache-db-wait-time Duration   How long to wait for the DB to be available - 0 is unlimited (default 1s)
      --cache-dir string   Directory rclone will use for caching. (default "$HOME/.cache/rclone")
      --cache-info-age Duration   How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
      --cache-plex-insecure string   Skip all certificate verifications when connecting to the Plex server
      --cache-plex-password string   The password of the Plex user
      --cache-plex-url string   The URL of the Plex server
      --cache-plex-username string   The username of the Plex user
      --cache-read-retries int   How many times to retry a read from a cache storage. (default 10)
      --cache-remote string   Remote to cache.
      --cache-rps int   Limits the number of requests per second to the source FS (-1 to disable) (default -1)
      --cache-tmp-upload-path string   Directory to keep temporary files until they are uploaded.
      --cache-tmp-wait-time Duration   How long should files be stored in local cache before being uploaded (default 15s)
      --cache-workers int   How many workers should run in parallel to download chunks. (default 4)
      --cache-writes   Cache file data on writes through the FS
      --checkers int   Number of checkers to run in parallel. (default 8)
  -c, --checksum   Skip based on checksum (if available) & size, not mod-time & size
      --config string   Config file. (default "/home/ncw/.rclone.conf")
      --contimeout duration   Connect timeout (default 1m0s)
  -L, --copy-links   Follow symlinks and copy the pointed to item.
      --cpuprofile string   Write cpu profile to file
      --crypt-directory-name-encryption   Option to either encrypt directory names or leave them intact. (default true)
      --crypt-filename-encryption string   How to encrypt the filenames. (default "standard")
      --crypt-password string   Password or pass phrase for encryption.
      --crypt-password2 string   Password or pass phrase for salt. Optional but recommended.
      --crypt-remote string   Remote to encrypt/decrypt.
      --crypt-show-mapping   For all files listed show how the names encrypt.
      --delete-after   When synchronizing, delete files on destination after transferring (default)
      --delete-before   When synchronizing, delete files on destination before transferring
      --delete-during   When synchronizing, delete files during transfer
      --delete-excluded   Delete files on dest excluded from sync
      --disable string   Disable a comma separated list of features. Use help to see a list.
      --drive-acknowledge-abuse   Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
      --drive-allow-import-name-change   Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
      --drive-alternate-export   Use alternate export URLs for google documents export.,
      --drive-auth-owner-only   Only consider files owned by the authenticated user.
      --drive-chunk-size SizeSuffix   Upload chunk size. Must a power of 2 >= 256k. (default 8M)
      --drive-client-id string   Google Application Client Id
      --drive-client-secret string   Google Application Client Secret
      --drive-export-formats string   Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
      --drive-formats string   Deprecated: see export_formats
      --drive-impersonate string   Impersonate this user when using a service account.
      --drive-import-formats string   Comma separated list of preferred formats for uploading Google docs.
      --drive-keep-revision-forever   Keep new head revision of each file forever.
      --drive-list-chunk int   Size of listing chunk 100-1000. 0 to disable. (default 1000)
      --drive-pacer-burst int   Number of API calls to allow without sleeping. (default 100)
      --drive-pacer-min-sleep Duration   Minimum time to sleep between API calls. (default 100ms)
      --drive-root-folder-id string   ID of the root folder
      --drive-scope string   Scope that rclone should use when requesting access from drive.
      --drive-service-account-credentials string   Service Account Credentials JSON blob
      --drive-service-account-file string   Service Account Credentials JSON file path
      --drive-shared-with-me   Only show files that are shared with me.
      --drive-skip-gdocs   Skip google documents in all listings.
      --drive-team-drive string   ID of the Team Drive
      --drive-trashed-only   Only show files that are in the trash.
      --drive-upload-cutoff SizeSuffix   Cutoff for switching to chunked upload (default 8M)
      --drive-use-created-date   Use file created date instead of modified date.,
      --drive-use-trash   Send files to the trash instead of deleting permanently. (default true)
      --drive-v2-download-min-size SizeSuffix   If Object's are greater, use drive v2 API to download. (default off)
--dropbox-impersonate string Impersonate this user when using a business account. --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M)
-n, --dry-run Do a trial run with no permanent changes --dropbox-client-id string Dropbox App Client Id
--dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles --dropbox-client-secret string Dropbox App Client Secret
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info --dropbox-impersonate string Impersonate this user when using a business account.
--dump-headers Dump HTTP bodies - may contain sensitive info -n, --dry-run Do a trial run with no permanent changes
--exclude stringArray Exclude files matching pattern --dump DumpFlags List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--exclude-from stringArray Read exclude patterns from file --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--exclude-if-present string Exclude directories if filename is present --dump-headers Dump HTTP bodies - may contain sensitive info
--fast-list Use recursive list if available. Uses more memory but fewer transactions. --exclude stringArray Exclude files matching pattern
--files-from stringArray Read list of source-file names from file --exclude-from stringArray Read exclude patterns from file
-f, --filter stringArray Add a file-filtering rule --exclude-if-present string Exclude directories if filename is present
--filter-from stringArray Read filtering patterns from a file --fast-list Use recursive list if available. Uses more memory but fewer transactions.
--ftp-host string FTP host to connect to --files-from stringArray Read list of source-file names from file
--ftp-pass string FTP password -f, --filter stringArray Add a file-filtering rule
--ftp-port string FTP port, leave blank to use default (21) --filter-from stringArray Read filtering patterns from a file
--ftp-user string FTP username, leave blank for current username, $USER --ftp-host string FTP host to connect to
--gcs-bucket-acl string Access Control List for new buckets. --ftp-pass string FTP password
--gcs-client-id string Google Application Client Id --ftp-port string FTP port, leave blank to use default (21)
--gcs-client-secret string Google Application Client Secret --ftp-user string FTP username, leave blank for current username, $USER
--gcs-location string Location for the newly created buckets. --gcs-bucket-acl string Access Control List for new buckets.
--gcs-object-acl string Access Control List for new objects. --gcs-client-id string Google Application Client Id
--gcs-project-number string Project number. --gcs-client-secret string Google Application Client Secret
--gcs-service-account-file string Service Account Credentials JSON file path --gcs-location string Location for the newly created buckets.
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage. --gcs-object-acl string Access Control List for new objects.
--http-url string URL of http host to connect to --gcs-project-number string Project number.
--hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) --gcs-service-account-file string Service Account Credentials JSON file path
--hubic-client-id string Hubic Client Id --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--hubic-client-secret string Hubic Client Secret --http-url string URL of http host to connect to
--ignore-case Ignore case in filters (case insensitive) --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--ignore-checksum Skip post copy check of checksums. --hubic-client-id string Hubic Client Id
--ignore-errors delete even if there are I/O errors --hubic-client-secret string Hubic Client Secret
--ignore-existing Skip all files that exist on destination --hubic-no-chunk Don't chunk files during streaming upload.
--ignore-size Ignore size when skipping use mod-time or checksum. --ignore-case Ignore case in filters (case insensitive)
-I, --ignore-times Don't skip files that match size and time - transfer all files --ignore-checksum Skip post copy check of checksums.
--immutable Do not modify files. Fail if existing files have been modified. --ignore-errors delete even if there are I/O errors
--include stringArray Include files matching pattern --ignore-existing Skip all files that exist on destination
--include-from stringArray Read include patterns from file --ignore-size Ignore size when skipping use mod-time or checksum.
--jottacloud-hard-delete Delete files permanently rather than putting them into the trash. -I, --ignore-times Don't skip files that match size and time - transfer all files
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M) --immutable Do not modify files. Fail if existing files have been modified.
--jottacloud-mountpoint string The mountpoint to use. --include stringArray Include files matching pattern
--jottacloud-pass string Password. --include-from stringArray Read include patterns from file
--jottacloud-unlink Remove existing public link to file/folder with link command rather than creating. --jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
--jottacloud-user string User Name --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--local-no-check-updated Don't check to see if the files change during upload --jottacloud-mountpoint string The mountpoint to use.
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated) --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
--local-nounc string Disable UNC (long path names) conversion on Windows --jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fail's. (default 10M)
--log-file string Log everything to this file --jottacloud-user string User Name:
--log-format string Comma separated list of log format options (default "date,time") -l, --links Translate symlinks to/from regular files with a '.rclonelink' extension
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") --local-no-check-updated Don't check to see if the files change during upload
--low-level-retries int Number of low level retries to do. (default 10) --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) --local-nounc string Disable UNC (long path names) conversion on Windows
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000) --log-file string Log everything to this file
--max-delete int When synchronizing, limit the number of deletes (default -1) --log-format string Comma separated list of log format options (default "date,time")
--max-depth int If set limits the recursion depth to this. (default -1) --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) --low-level-retries int Number of low level retries to do. (default 10)
--max-transfer int Maximum size of data to transfer. (default off) --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--mega-debug Output more debug from Mega. --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--mega-hard-delete Delete files permanently rather than putting them into the trash. --max-delete int When synchronizing, limit the number of deletes (default -1)
--mega-pass string Password. --max-depth int If set limits the recursion depth to this. (default -1)
--mega-user string User name --max-size SizeSuffix Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--memprofile string Write memory profile to file --max-transfer SizeSuffix Maximum size of data to transfer. (default off)
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) --mega-debug Output more debug from Mega.
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) --mega-hard-delete Delete files permanently rather than putting them into the trash.
--modify-window duration Max time diff to be considered the same (default 1ns) --mega-pass string Password.
--no-check-certificate Do not verify the server SSL certificate. Insecure. --mega-user string User name
--no-gzip-encoding Don't set Accept-Encoding: gzip. --memprofile string Write memory profile to file
--no-traverse Obsolete - does nothing. --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--no-update-modtime Don't update destination mod-time if files identical. --min-size SizeSuffix Only transfer files bigger than this in k or suffix b|k|M|G (default off)
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only). --modify-window duration Max time diff to be considered the same (default 1ns)
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M) --no-check-certificate Do not verify the server SSL certificate. Insecure.
--onedrive-client-id string Microsoft App Client Id --no-gzip-encoding Don't set Accept-Encoding: gzip.
--onedrive-client-secret string Microsoft App Client Secret --no-traverse Don't traverse destination file system on copy.
--onedrive-drive-id string The ID of the drive to use --no-update-modtime Don't update destination mod-time if files identical.
--onedrive-drive-type string The type of the drive ( personal | business | documentLibrary ) -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-expose-onenote-files Set to make OneNote files show up in directory listings. --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--opendrive-password string Password. --onedrive-client-id string Microsoft App Client Id
--opendrive-username string Username --onedrive-client-secret string Microsoft App Client Secret
--pcloud-client-id string Pcloud App Client Id --onedrive-drive-id string The ID of the drive to use
--pcloud-client-secret string Pcloud App Client Secret --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
-P, --progress Show progress during transfer. --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
--qingstor-access-key-id string QingStor Access Key ID --opendrive-password string Password.
--qingstor-connection-retries int Number of connection retries. (default 3) --opendrive-username string Username
--qingstor-endpoint string Enter a endpoint URL to connection QingStor API. --pcloud-client-id string Pcloud App Client Id
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank. --pcloud-client-secret string Pcloud App Client Secret
--qingstor-secret-access-key string QingStor Secret Access Key (password) -P, --progress Show progress during transfer.
--qingstor-zone string Zone to connect to. --qingstor-access-key-id string QingStor Access Key ID
-q, --quiet Print as little stuff as possible --qingstor-chunk-size SizeSuffix Chunk size to use for uploading. (default 4M)
--rc Enable the remote control server. --qingstor-connection-retries int Number of connection retries. (default 3)
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") --qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate) --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
--rc-client-ca string Client certificate authority to verify clients with --qingstor-secret-access-key string QingStor Secret Access Key (password)
--rc-files string Path to local files to serve on the HTTP server. --qingstor-upload-concurrency int Concurrency for multipart uploads. (default 1)
--rc-htpasswd string htpasswd file - if not provided no authentication is done --qingstor-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M)
--rc-key string SSL PEM Private key --qingstor-zone string Zone to connect to.
--rc-max-header-bytes int Maximum size of request header (default 4096) -q, --quiet Print as little stuff as possible
--rc-no-auth Don't require auth for certain methods. --rc Enable the remote control server.
--rc-pass string Password for authentication. --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
--rc-realm string realm for authentication (default "rclone") --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--rc-serve Enable the serving of remote objects. --rc-client-ca string Client certificate authority to verify clients with
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) --rc-files string Path to local files to serve on the HTTP server.
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) --rc-htpasswd string htpasswd file - if not provided no authentication is done
--rc-user string User name for authentication. --rc-key string SSL PEM Private key
--retries int Retry operations this many times if they fail (default 3) --rc-max-header-bytes int Maximum size of request header (default 4096)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) --rc-no-auth Don't require auth for certain methods.
--s3-access-key-id string AWS Access Key ID. --rc-pass string Password for authentication.
--s3-acl string Canned ACL used when creating buckets and storing or copying objects. --rc-realm string realm for authentication (default "rclone")
--s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M) --rc-serve Enable the serving of remote objects.
--s3-disable-checksum Don't store MD5 checksum with object metadata --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--s3-endpoint string Endpoint for S3 API. --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). --rc-user string User name for authentication.
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true) --retries int Retry operations this many times if they fail (default 3)
--s3-location-constraint string Location constraint - must be set to match the Region. --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--s3-provider string Choose your S3 provider. --s3-access-key-id string AWS Access Key ID.
--s3-region string Region to connect to. --s3-acl string Canned ACL used when creating buckets and storing or copying objects.
--s3-secret-access-key string AWS Secret Access Key (password) --s3-bucket-acl string Canned ACL used when creating buckets.
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
--s3-session-token string An AWS session token --s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. --s3-endpoint string Endpoint for S3 API.
--s3-storage-class string The storage class to use when storing new objects in S3. --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2) --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--s3-v2-auth If true use v2 authentication. --s3-location-constraint string Location constraint - must be set to match the Region.
--sftp-ask-password Allow asking for SFTP password when needed. --s3-provider string Choose your S3 provider.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. --s3-region string Region to connect to.
--sftp-host string SSH host to connect to --s3-secret-access-key string AWS Secret Access Key (password)
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent. --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--sftp-pass string SSH password, leave blank to use ssh-agent. --s3-session-token string An AWS session token
--sftp-path-override string Override path used by SSH connection. --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--sftp-port string SSH port, leave blank to use default (22) --s3-storage-class string The storage class to use when storing new objects in S3.
--sftp-set-modtime Set the modified time on the remote if set. (default true) --s3-upload-concurrency int Concurrency for multipart uploads. (default 4)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker. --s3-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M)
--sftp-user string SSH username, leave blank for current username, ncw --s3-v2-auth If true use v2 authentication.
--size-only Skip based on size only, not mod-time or checksum --sftp-ask-password Allow asking for SFTP password when needed.
--skip-links Don't warn about skipped symlinks. --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) --sftp-host string SSH host to connect to
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40) --sftp-key-file string Path to PEM-encoded private key file, leave blank or set key-use-agent to use ssh-agent.
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") --sftp-key-file-pass string The passphrase to decrypt the PEM-encoded private key file.
--stats-one-line Make the stats fit on one line. --sftp-key-use-agent When set forces the usage of the ssh-agent.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") --sftp-pass string SSH password, leave blank to use ssh-agent.
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) --sftp-path-override string Override path used by SSH connection.
--suffix string Suffix for use with --backup-dir. --sftp-port string SSH port, leave blank to use default (22)
--swift-auth string Authentication URL for server (OS_AUTH_URL). --sftp-set-modtime Set the modified time on the remote if set. (default true)
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) --sftp-user string SSH username, leave blank for current username, ncw
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) --size-only Skip based on size only, not mod-time or checksum
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) --skip-links Don't warn about skipped symlinks.
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form. --stats-file-name-length int Max file name length in stats. 0 for no limit (default 45)
--swift-key string API key or password (OS_PASSWORD). --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--swift-region string Region name - optional (OS_REGION_NAME) --stats-one-line Make the stats fit on one line.
--swift-storage-policy string The storage policy to use when creating a new container --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL) --streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) --suffix string Suffix for use with --backup-dir.
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) --swift-application-credential-id string Application Credential ID (OS_APPLICATION_CREDENTIAL_ID)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) --swift-application-credential-name string Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME)
--swift-user string User name to log in (OS_USERNAME). --swift-application-credential-secret string Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET)
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). --swift-auth string Authentication URL for server (OS_AUTH_URL).
--syslog Use Syslog for logging --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--timeout duration IO idle timeout (default 5m0s) --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--tpslimit float Limit HTTP transactions per second to this. --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--track-renames When synchronizing, track file renames and do a server side move if possible --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--transfers int Number of file transfers to run in parallel. (default 4) --swift-key string API key or password (OS_PASSWORD).
--union-remotes string List of space separated remotes. --swift-no-chunk Don't chunk files during streaming upload.
-u, --update Skip files that are newer on the destination. --swift-region string Region name - optional (OS_REGION_NAME)
--use-server-modtime Use server modified time instead of object metadata --swift-storage-policy string The storage policy to use when creating a new container
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.45") --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
-v, --verbose count Print lots more stuff (repeat for more) --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon) --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--webdav-pass string Password. --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--webdav-url string URL of http host to connect to --swift-user string User name to log in (OS_USERNAME).
--webdav-user string User name --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--webdav-vendor string Name of the Webdav site/service/software you are using --syslog Use Syslog for logging
--yandex-client-id string Yandex Client Id --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--yandex-client-secret string Yandex Client Secret --timeout duration IO idle timeout (default 5m0s)
--yandex-unlink Remove existing public link to file/folder with link command rather than creating. --tpslimit float Limit HTTP transactions per second to this.
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
--union-remotes string List of space separated remotes.
-u, --update Skip files that are newer on the destination.
--use-cookies Enable session cookiejar.
--use-mmap Use mmap allocator (see docs).
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
--yandex-unlink Remove existing public link to file/folder with link command rather than creating.
``` ```
### SEE ALSO ### SEE ALSO
* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends. * [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
###### Auto generated by spf13/cobra on 24-Nov-2018 ###### Auto generated by spf13/cobra on 9-Feb-2019
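The global flags above combine freely on one command line. As an illustrative sketch (the remote name `remote:` and the paths are placeholders, not part of this commit), this builds a copy invocation that transfers only files modified in the last week, uses recursive listing to save transactions, runs 8 parallel transfers, and shows progress:

```shell
# Compose an rclone command line from the global flags documented above.
# "remote:" and "/data" are hypothetical; substitute your own remote and path.
cmd='rclone copy /data remote:backup --max-age 7d --fast-list --transfers 8 -P'
echo "$cmd"
```

Adding `-n` (`--dry-run`) to such a command reports what would be transferred without making any permanent changes.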
@@ -1,5 +1,5 @@
---
date: 2019-02-09T10:42:18Z
title: "rclone genautocomplete"
slug: rclone_genautocomplete
url: /commands/rclone_genautocomplete/
@@ -24,281 +24,299 @@ Run with --help to list the supported shells.
### Options inherited from parent commands

```
      --acd-auth-url string  Auth server URL.
      --acd-client-id string  Amazon Application Client ID.
      --acd-client-secret string  Amazon Application Client Secret.
      --acd-templink-threshold SizeSuffix  Files >= this size will be downloaded via their tempLink. (default 9G)
      --acd-token-url string  Token server url.
      --acd-upload-wait-per-gb Duration  Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
      --alias-remote string  Remote or path to alias.
      --ask-password  Allow prompt for password for encrypted configuration. (default true)
      --auto-confirm  If enabled, do not request console confirmation.
      --azureblob-access-tier string  Access tier of blob: hot, cool or archive.
      --azureblob-account string  Storage Account Name (leave blank to use connection string or SAS URL)
      --azureblob-chunk-size SizeSuffix  Upload chunk size (<= 100MB). (default 4M)
      --azureblob-endpoint string  Endpoint for the service
      --azureblob-key string  Storage Account Key (leave blank to use connection string or SAS URL)
      --azureblob-list-chunk int  Size of blob list. (default 5000)
      --azureblob-sas-url string  SAS URL for container level access only
      --azureblob-upload-cutoff SizeSuffix  Cutoff for switching to chunked upload (<= 256MB). (default 256M)
      --b2-account string  Account ID or Application Key ID
      --b2-chunk-size SizeSuffix  Upload chunk size. Must fit in memory. (default 96M)
      --b2-disable-checksum  Disable checksums for large (> upload cutoff) files
      --b2-endpoint string  Endpoint for the service.
      --b2-hard-delete  Permanently delete files on remote removal, otherwise hide files.
      --b2-key string  Application Key
      --b2-test-mode string  A flag string for X-Bz-Test-Mode header for debugging.
      --b2-upload-cutoff SizeSuffix  Cutoff for switching to chunked upload. (default 200M)
      --b2-versions  Include old versions in directory listings.
      --backup-dir string  Make backups into hierarchy based in DIR.
      --bind string  Local address to bind to for outgoing connections, IPv4, IPv6 or name.
      --box-client-id string  Box App Client Id.
      --box-client-secret string  Box App Client Secret
      --box-commit-retries int  Max number of times to try committing a multipart file. (default 100)
      --box-upload-cutoff SizeSuffix  Cutoff for switching to multipart upload (>= 50MB). (default 50M)
      --buffer-size SizeSuffix  In memory buffer size when reading files for each --transfer. (default 16M)
      --bwlimit BwTimetable  Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming. --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
--cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend") --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
--cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M) --cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend")
--cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G) --cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M)
--cache-db-path string Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend") --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G)
--cache-db-purge Clear all the cached data for this remote on start. --cache-db-path string Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend")
--cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s) --cache-db-purge Clear all the cached data for this remote on start.
--cache-dir string Directory rclone will use for caching. (default "$HOME/.cache/rclone") --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s) --cache-dir string Directory rclone will use for caching. (default "$HOME/.cache/rclone")
--cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
--cache-plex-password string The password of the Plex user --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server
--cache-plex-url string The URL of the Plex server --cache-plex-password string The password of the Plex user
--cache-plex-username string The username of the Plex user --cache-plex-url string The URL of the Plex server
--cache-read-retries int How many times to retry a read from a cache storage. (default 10) --cache-plex-username string The username of the Plex user
--cache-remote string Remote to cache. --cache-read-retries int How many times to retry a read from a cache storage. (default 10)
--cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1) --cache-remote string Remote to cache.
--cache-tmp-upload-path string Directory to keep temporary files until they are uploaded. --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1)
--cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s) --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded.
--cache-workers int How many workers should run in parallel to download chunks. (default 4) --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
--cache-writes Cache file data on writes through the FS --cache-workers int How many workers should run in parallel to download chunks. (default 4)
--checkers int Number of checkers to run in parallel. (default 8) --cache-writes Cache file data on writes through the FS
-c, --checksum Skip based on checksum & size, not mod-time & size --checkers int Number of checkers to run in parallel. (default 8)
--config string Config file. (default "/home/ncw/.rclone.conf") -c, --checksum Skip based on checksum (if available) & size, not mod-time & size
--contimeout duration Connect timeout (default 1m0s) --config string Config file. (default "/home/ncw/.rclone.conf")
-L, --copy-links Follow symlinks and copy the pointed to item. --contimeout duration Connect timeout (default 1m0s)
--cpuprofile string Write cpu profile to file -L, --copy-links Follow symlinks and copy the pointed to item.
--crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true) --cpuprofile string Write cpu profile to file
--crypt-filename-encryption string How to encrypt the filenames. (default "standard") --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--crypt-password string Password or pass phrase for encryption. --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--crypt-password2 string Password or pass phrase for salt. Optional but recommended. --crypt-password string Password or pass phrase for encryption.
--crypt-remote string Remote to encrypt/decrypt. --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
--crypt-show-mapping For all files listed show how the names encrypt. --crypt-remote string Remote to encrypt/decrypt.
--delete-after When synchronizing, delete files on destination after transferring (default) --crypt-show-mapping For all files listed show how the names encrypt.
--delete-before When synchronizing, delete files on destination before transferring --delete-after When synchronizing, delete files on destination after transferring (default)
--delete-during When synchronizing, delete files during transfer --delete-before When synchronizing, delete files on destination before transferring
--delete-excluded Delete files on dest excluded from sync --delete-during When synchronizing, delete files during transfer
--disable string Disable a comma separated list of features. Use help to see a list. --delete-excluded Delete files on dest excluded from sync
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded. --disable string Disable a comma separated list of features. Use help to see a list.
--drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time. --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--drive-alternate-export Use alternate export URLs for google documents export., --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
--drive-auth-owner-only Only consider files owned by the authenticated user. --drive-alternate-export Use alternate export URLs for google documents export.,
--drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M) --drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-client-id string Google Application Client Id --drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M)
--drive-client-secret string Google Application Client Secret --drive-client-id string Google Application Client Id
--drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") --drive-client-secret string Google Application Client Secret
--drive-formats string Deprecated: see export_formats --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-impersonate string Impersonate this user when using a service account. --drive-formats string Deprecated: see export_formats
--drive-import-formats string Comma separated list of preferred formats for uploading Google docs. --drive-impersonate string Impersonate this user when using a service account.
--drive-keep-revision-forever Keep new head revision of each file forever. --drive-import-formats string Comma separated list of preferred formats for uploading Google docs.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000) --drive-keep-revision-forever Keep new head revision of each file forever.
--drive-root-folder-id string ID of the root folder --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-scope string Scope that rclone should use when requesting access from drive. --drive-pacer-burst int Number of API calls to allow without sleeping. (default 100)
--drive-service-account-credentials string Service Account Credentials JSON blob --drive-pacer-min-sleep Duration Minimum time to sleep between API calls. (default 100ms)
--drive-service-account-file string Service Account Credentials JSON file path --drive-root-folder-id string ID of the root folder
--drive-shared-with-me Only show files that are shared with me. --drive-scope string Scope that rclone should use when requesting access from drive.
--drive-skip-gdocs Skip google documents in all listings. --drive-service-account-credentials string Service Account Credentials JSON blob
--drive-team-drive string ID of the Team Drive --drive-service-account-file string Service Account Credentials JSON file path
--drive-trashed-only Only show files that are in the trash. --drive-shared-with-me Only show files that are shared with me.
--drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M) --drive-skip-gdocs Skip google documents in all listings.
--drive-use-created-date Use file created date instead of modified date., --drive-team-drive string ID of the Team Drive
--drive-use-trash Send files to the trash instead of deleting permanently. (default true) --drive-trashed-only Only show files that are in the trash.
--drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off) --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M) --drive-use-created-date Use file created date instead of modified date.,
--dropbox-client-id string Dropbox App Client Id --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--dropbox-client-secret string Dropbox App Client Secret --drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off)
--dropbox-impersonate string Impersonate this user when using a business account. --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M)
-n, --dry-run Do a trial run with no permanent changes --dropbox-client-id string Dropbox App Client Id
--dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles --dropbox-client-secret string Dropbox App Client Secret
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info --dropbox-impersonate string Impersonate this user when using a business account.
--dump-headers Dump HTTP bodies - may contain sensitive info -n, --dry-run Do a trial run with no permanent changes
--exclude stringArray Exclude files matching pattern --dump DumpFlags List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--exclude-from stringArray Read exclude patterns from file --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--exclude-if-present string Exclude directories if filename is present --dump-headers Dump HTTP bodies - may contain sensitive info
--fast-list Use recursive list if available. Uses more memory but fewer transactions. --exclude stringArray Exclude files matching pattern
--files-from stringArray Read list of source-file names from file --exclude-from stringArray Read exclude patterns from file
-f, --filter stringArray Add a file-filtering rule --exclude-if-present string Exclude directories if filename is present
--filter-from stringArray Read filtering patterns from a file --fast-list Use recursive list if available. Uses more memory but fewer transactions.
--ftp-host string FTP host to connect to --files-from stringArray Read list of source-file names from file
--ftp-pass string FTP password -f, --filter stringArray Add a file-filtering rule
--ftp-port string FTP port, leave blank to use default (21) --filter-from stringArray Read filtering patterns from a file
--ftp-user string FTP username, leave blank for current username, $USER --ftp-host string FTP host to connect to
--gcs-bucket-acl string Access Control List for new buckets. --ftp-pass string FTP password
--gcs-client-id string Google Application Client Id --ftp-port string FTP port, leave blank to use default (21)
--gcs-client-secret string Google Application Client Secret --ftp-user string FTP username, leave blank for current username, $USER
--gcs-location string Location for the newly created buckets. --gcs-bucket-acl string Access Control List for new buckets.
--gcs-object-acl string Access Control List for new objects. --gcs-client-id string Google Application Client Id
--gcs-project-number string Project number. --gcs-client-secret string Google Application Client Secret
--gcs-service-account-file string Service Account Credentials JSON file path --gcs-location string Location for the newly created buckets.
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage. --gcs-object-acl string Access Control List for new objects.
--http-url string URL of http host to connect to --gcs-project-number string Project number.
--hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) --gcs-service-account-file string Service Account Credentials JSON file path
--hubic-client-id string Hubic Client Id --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--hubic-client-secret string Hubic Client Secret --http-url string URL of http host to connect to
--ignore-case Ignore case in filters (case insensitive) --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--ignore-checksum Skip post copy check of checksums. --hubic-client-id string Hubic Client Id
--ignore-errors delete even if there are I/O errors --hubic-client-secret string Hubic Client Secret
--ignore-existing Skip all files that exist on destination --hubic-no-chunk Don't chunk files during streaming upload.
--ignore-size Ignore size when skipping use mod-time or checksum. --ignore-case Ignore case in filters (case insensitive)
-I, --ignore-times Don't skip files that match size and time - transfer all files --ignore-checksum Skip post copy check of checksums.
--immutable Do not modify files. Fail if existing files have been modified. --ignore-errors delete even if there are I/O errors
--include stringArray Include files matching pattern --ignore-existing Skip all files that exist on destination
--include-from stringArray Read include patterns from file --ignore-size Ignore size when skipping use mod-time or checksum.
--jottacloud-hard-delete Delete files permanently rather than putting them into the trash. -I, --ignore-times Don't skip files that match size and time - transfer all files
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M) --immutable Do not modify files. Fail if existing files have been modified.
--jottacloud-mountpoint string The mountpoint to use. --include stringArray Include files matching pattern
--jottacloud-pass string Password. --include-from stringArray Read include patterns from file
--jottacloud-unlink Remove existing public link to file/folder with link command rather than creating. --jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
--jottacloud-user string User Name --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--local-no-check-updated Don't check to see if the files change during upload --jottacloud-mountpoint string The mountpoint to use.
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated) --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
--local-nounc string Disable UNC (long path names) conversion on Windows --jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fail's. (default 10M)
--log-file string Log everything to this file --jottacloud-user string User Name:
--log-format string Comma separated list of log format options (default "date,time") -l, --links Translate symlinks to/from regular files with a '.rclonelink' extension
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") --local-no-check-updated Don't check to see if the files change during upload
--low-level-retries int Number of low level retries to do. (default 10) --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) --local-nounc string Disable UNC (long path names) conversion on Windows
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000) --log-file string Log everything to this file
--max-delete int When synchronizing, limit the number of deletes (default -1) --log-format string Comma separated list of log format options (default "date,time")
--max-depth int If set limits the recursion depth to this. (default -1) --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) --low-level-retries int Number of low level retries to do. (default 10)
--max-transfer int Maximum size of data to transfer. (default off) --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--mega-debug Output more debug from Mega. --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--mega-hard-delete Delete files permanently rather than putting them into the trash. --max-delete int When synchronizing, limit the number of deletes (default -1)
--mega-pass string Password. --max-depth int If set limits the recursion depth to this. (default -1)
--mega-user string User name --max-size SizeSuffix Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--memprofile string Write memory profile to file --max-transfer SizeSuffix Maximum size of data to transfer. (default off)
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) --mega-debug Output more debug from Mega.
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) --mega-hard-delete Delete files permanently rather than putting them into the trash.
--modify-window duration Max time diff to be considered the same (default 1ns) --mega-pass string Password.
--no-check-certificate Do not verify the server SSL certificate. Insecure. --mega-user string User name
--no-gzip-encoding Don't set Accept-Encoding: gzip. --memprofile string Write memory profile to file
--no-traverse Obsolete - does nothing. --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--no-update-modtime Don't update destination mod-time if files identical. --min-size SizeSuffix Only transfer files bigger than this in k or suffix b|k|M|G (default off)
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only). --modify-window duration Max time diff to be considered the same (default 1ns)
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M) --no-check-certificate Do not verify the server SSL certificate. Insecure.
--onedrive-client-id string Microsoft App Client Id --no-gzip-encoding Don't set Accept-Encoding: gzip.
--onedrive-client-secret string Microsoft App Client Secret --no-traverse Don't traverse destination file system on copy.
--onedrive-drive-id string The ID of the drive to use --no-update-modtime Don't update destination mod-time if files identical.
--onedrive-drive-type string The type of the drive ( personal | business | documentLibrary ) -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-expose-onenote-files Set to make OneNote files show up in directory listings. --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--opendrive-password string Password. --onedrive-client-id string Microsoft App Client Id
--opendrive-username string Username --onedrive-client-secret string Microsoft App Client Secret
--pcloud-client-id string Pcloud App Client Id --onedrive-drive-id string The ID of the drive to use
--pcloud-client-secret string Pcloud App Client Secret --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
-P, --progress Show progress during transfer. --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
--qingstor-access-key-id string QingStor Access Key ID --opendrive-password string Password.
--qingstor-connection-retries int Number of connection retries. (default 3) --opendrive-username string Username
--qingstor-endpoint string Enter a endpoint URL to connection QingStor API. --pcloud-client-id string Pcloud App Client Id
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank. --pcloud-client-secret string Pcloud App Client Secret
--qingstor-secret-access-key string QingStor Secret Access Key (password) -P, --progress Show progress during transfer.
--qingstor-zone string Zone to connect to. --qingstor-access-key-id string QingStor Access Key ID
-q, --quiet Print as little stuff as possible --qingstor-chunk-size SizeSuffix Chunk size to use for uploading. (default 4M)
--rc Enable the remote control server. --qingstor-connection-retries int Number of connection retries. (default 3)
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") --qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate) --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
--rc-client-ca string Client certificate authority to verify clients with --qingstor-secret-access-key string QingStor Secret Access Key (password)
--rc-files string Path to local files to serve on the HTTP server. --qingstor-upload-concurrency int Concurrency for multipart uploads. (default 1)
--rc-htpasswd string htpasswd file - if not provided no authentication is done --qingstor-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M)
--rc-key string SSL PEM Private key --qingstor-zone string Zone to connect to.
--rc-max-header-bytes int Maximum size of request header (default 4096) -q, --quiet Print as little stuff as possible
--rc-no-auth Don't require auth for certain methods. --rc Enable the remote control server.
--rc-pass string Password for authentication. --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
--rc-realm string realm for authentication (default "rclone") --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--rc-serve Enable the serving of remote objects. --rc-client-ca string Client certificate authority to verify clients with
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) --rc-files string Path to local files to serve on the HTTP server.
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) --rc-htpasswd string htpasswd file - if not provided no authentication is done
--rc-user string User name for authentication. --rc-key string SSL PEM Private key
--retries int Retry operations this many times if they fail (default 3) --rc-max-header-bytes int Maximum size of request header (default 4096)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) --rc-no-auth Don't require auth for certain methods.
--s3-access-key-id string AWS Access Key ID. --rc-pass string Password for authentication.
--s3-acl string Canned ACL used when creating buckets and storing or copying objects. --rc-realm string realm for authentication (default "rclone")
--s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M) --rc-serve Enable the serving of remote objects.
--s3-disable-checksum Don't store MD5 checksum with object metadata --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--s3-endpoint string Endpoint for S3 API. --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). --rc-user string User name for authentication.
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true) --retries int Retry operations this many times if they fail (default 3)
--s3-location-constraint string Location constraint - must be set to match the Region. --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--s3-provider string Choose your S3 provider. --s3-access-key-id string AWS Access Key ID.
--s3-region string Region to connect to. --s3-acl string Canned ACL used when creating buckets and storing or copying objects.
--s3-secret-access-key string AWS Secret Access Key (password) --s3-bucket-acl string Canned ACL used when creating buckets.
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
--s3-session-token string An AWS session token --s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. --s3-endpoint string Endpoint for S3 API.
--s3-storage-class string The storage class to use when storing new objects in S3. --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2) --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--s3-v2-auth If true use v2 authentication. --s3-location-constraint string Location constraint - must be set to match the Region.
--sftp-ask-password Allow asking for SFTP password when needed. --s3-provider string Choose your S3 provider.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. --s3-region string Region to connect to.
--sftp-host string SSH host to connect to --s3-secret-access-key string AWS Secret Access Key (password)
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent. --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--sftp-pass string SSH password, leave blank to use ssh-agent. --s3-session-token string An AWS session token
--sftp-path-override string Override path used by SSH connection. --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
      --s3-storage-class string                The storage class to use when storing new objects in S3.
      --s3-upload-concurrency int              Concurrency for multipart uploads. (default 4)
      --s3-upload-cutoff SizeSuffix            Cutoff for switching to chunked upload (default 200M)
      --s3-v2-auth                             If true use v2 authentication.
      --sftp-ask-password                      Allow asking for SFTP password when needed.
      --sftp-disable-hashcheck                 Disable the execution of SSH commands to determine if remote file hashing is available.
      --sftp-host string                       SSH host to connect to
      --sftp-key-file string                   Path to PEM-encoded private key file, leave blank or set key-use-agent to use ssh-agent.
      --sftp-key-file-pass string              The passphrase to decrypt the PEM-encoded private key file.
      --sftp-key-use-agent                     When set forces the usage of the ssh-agent.
      --sftp-pass string                       SSH password, leave blank to use ssh-agent.
      --sftp-path-override string              Override path used by SSH connection.
      --sftp-port string                       SSH port, leave blank to use default (22)
      --sftp-set-modtime                       Set the modified time on the remote if set. (default true)
      --sftp-use-insecure-cipher               Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
      --sftp-user string                       SSH username, leave blank for current username, ncw
      --size-only                              Skip based on size only, not mod-time or checksum
      --skip-links                             Don't warn about skipped symlinks.
      --stats duration                         Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
      --stats-file-name-length int             Max file name length in stats. 0 for no limit (default 45)
      --stats-log-level string                 Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
      --stats-one-line                         Make the stats fit on one line.
      --stats-unit string                      Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
      --streaming-upload-cutoff SizeSuffix     Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
      --suffix string                          Suffix for use with --backup-dir.
      --swift-application-credential-id string Application Credential ID (OS_APPLICATION_CREDENTIAL_ID)
      --swift-application-credential-name string Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME)
      --swift-application-credential-secret string Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET)
      --swift-auth string                      Authentication URL for server (OS_AUTH_URL).
      --swift-auth-token string                Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
      --swift-auth-version int                 AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
      --swift-chunk-size SizeSuffix            Above this size files will be chunked into a _segments container. (default 5G)
      --swift-domain string                    User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
      --swift-endpoint-type string             Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
      --swift-env-auth                         Get swift credentials from environment variables in standard OpenStack form.
      --swift-key string                       API key or password (OS_PASSWORD).
      --swift-no-chunk                         Don't chunk files during streaming upload.
      --swift-region string                    Region name - optional (OS_REGION_NAME)
      --swift-storage-policy string            The storage policy to use when creating a new container
      --swift-storage-url string               Storage URL - optional (OS_STORAGE_URL)
      --swift-tenant string                    Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
      --swift-tenant-domain string             Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
      --swift-tenant-id string                 Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
      --swift-user string                      User name to log in (OS_USERNAME).
      --swift-user-id string                   User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
      --syslog                                 Use Syslog for logging
      --syslog-facility string                 Facility for syslog, eg KERN,USER,... (default "DAEMON")
      --timeout duration                       IO idle timeout (default 5m0s)
      --tpslimit float                         Limit HTTP transactions per second to this.
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
--union-remotes string List of space separated remotes.
-u, --update Skip files that are newer on the destination.
--use-cookies Enable session cookiejar.
--use-mmap Use mmap allocator (see docs).
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
--yandex-unlink Remove existing public link to file/folder with link command rather than creating.
```
### SEE ALSO
@@ -307,4 +325,4 @@ Run with --help to list the supported shells.
* [rclone genautocomplete bash](/commands/rclone_genautocomplete_bash/)	 - Output bash completion script for rclone.
* [rclone genautocomplete zsh](/commands/rclone_genautocomplete_zsh/)	 - Output zsh completion script for rclone.
###### Auto generated by spf13/cobra on 9-Feb-2019
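The page above documents `rclone genautocomplete bash`, which this release extends with remote-path completion. As a hedged sketch of a typical per-user install (the file name `~/.rclone-completion.bash` and the `.bashrc` hook are illustrative choices, not from this commit; the script only acts if rclone is on PATH):

```shell
# Install rclone's bash completion for the current user, if rclone is available.
if command -v rclone >/dev/null 2>&1; then
    # Write the generated completion script to a user-owned file (path is an assumption).
    rclone genautocomplete bash "$HOME/.rclone-completion.bash"
    # Source it from ~/.bashrc so new shells pick it up (append only once).
    grep -qs 'rclone-completion.bash' "$HOME/.bashrc" || \
        printf '. "$HOME/.rclone-completion.bash"\n' >> "$HOME/.bashrc"
else
    echo "rclone not found on PATH; skipping completion install"
fi
```

System-wide installs usually write to `/etc/bash_completion.d/` instead, which requires root.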
@@ -1,5 +1,5 @@
---
date: 2019-02-09T10:42:18Z
title: "rclone genautocomplete bash"
slug: rclone_genautocomplete_bash
url: /commands/rclone_genautocomplete_bash/
@@ -40,285 +40,303 @@ rclone genautocomplete bash [output_file] [flags]
### Options inherited from parent commands
```
      --acd-auth-url string                    Auth server URL.
      --acd-client-id string                   Amazon Application Client ID.
      --acd-client-secret string               Amazon Application Client Secret.
      --acd-templink-threshold SizeSuffix      Files >= this size will be downloaded via their tempLink. (default 9G)
      --acd-token-url string                   Token server url.
      --acd-upload-wait-per-gb Duration        Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
      --alias-remote string                    Remote or path to alias.
      --ask-password                           Allow prompt for password for encrypted configuration. (default true)
      --auto-confirm                           If enabled, do not request console confirmation.
      --azureblob-access-tier string           Access tier of blob: hot, cool or archive.
      --azureblob-account string               Storage Account Name (leave blank to use connection string or SAS URL)
      --azureblob-chunk-size SizeSuffix        Upload chunk size (<= 100MB). (default 4M)
      --azureblob-endpoint string              Endpoint for the service
      --azureblob-key string                   Storage Account Key (leave blank to use connection string or SAS URL)
      --azureblob-list-chunk int               Size of blob list. (default 5000)
      --azureblob-sas-url string               SAS URL for container level access only
      --azureblob-upload-cutoff SizeSuffix     Cutoff for switching to chunked upload (<= 256MB). (default 256M)
      --b2-account string                      Account ID or Application Key ID
      --b2-chunk-size SizeSuffix               Upload chunk size. Must fit in memory. (default 96M)
      --b2-disable-checksum                    Disable checksums for large (> upload cutoff) files
      --b2-endpoint string                     Endpoint for the service.
      --b2-hard-delete                         Permanently delete files on remote removal, otherwise hide files.
      --b2-key string                          Application Key
      --b2-test-mode string                    A flag string for X-Bz-Test-Mode header for debugging.
      --b2-upload-cutoff SizeSuffix            Cutoff for switching to chunked upload. (default 200M)
      --b2-versions                            Include old versions in directory listings.
      --backup-dir string                      Make backups into hierarchy based in DIR.
      --bind string                            Local address to bind to for outgoing connections, IPv4, IPv6 or name.
      --box-client-id string                   Box App Client Id.
      --box-client-secret string               Box App Client Secret
      --box-commit-retries int                 Max number of times to try committing a multipart file. (default 100)
      --box-upload-cutoff SizeSuffix           Cutoff for switching to multipart upload (>= 50MB). (default 50M)
      --buffer-size SizeSuffix                 In memory buffer size when reading files for each --transfer. (default 16M)
      --bwlimit BwTimetable                    Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
      --cache-chunk-clean-interval Duration    How often should the cache perform cleanups of the chunk storage. (default 1m0s)
      --cache-chunk-no-memory                  Disable the in-memory cache for storing chunks during streaming.
      --cache-chunk-path string                Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend")
      --cache-chunk-size SizeSuffix            The size of a chunk (partial file data). (default 5M)
      --cache-chunk-total-size SizeSuffix      The total size that the chunks can take up on the local disk. (default 10G)
      --cache-db-path string                   Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend")
      --cache-db-purge                         Clear all the cached data for this remote on start.
      --cache-db-wait-time Duration            How long to wait for the DB to be available - 0 is unlimited (default 1s)
      --cache-dir string                       Directory rclone will use for caching. (default "$HOME/.cache/rclone")
      --cache-info-age Duration                How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
      --cache-plex-insecure string             Skip all certificate verifications when connecting to the Plex server
      --cache-plex-password string             The password of the Plex user
      --cache-plex-url string                  The URL of the Plex server
      --cache-plex-username string             The username of the Plex user
      --cache-read-retries int                 How many times to retry a read from a cache storage. (default 10)
      --cache-remote string                    Remote to cache.
      --cache-rps int                          Limits the number of requests per second to the source FS (-1 to disable) (default -1)
      --cache-tmp-upload-path string           Directory to keep temporary files until they are uploaded.
      --cache-tmp-wait-time Duration           How long should files be stored in local cache before being uploaded (default 15s)
      --cache-workers int                      How many workers should run in parallel to download chunks. (default 4)
      --cache-writes                           Cache file data on writes through the FS
      --checkers int                           Number of checkers to run in parallel. (default 8)
  -c, --checksum                               Skip based on checksum (if available) & size, not mod-time & size
      --config string                          Config file. (default "/home/ncw/.rclone.conf")
      --contimeout duration                    Connect timeout (default 1m0s)
  -L, --copy-links                             Follow symlinks and copy the pointed to item.
      --cpuprofile string                      Write cpu profile to file
      --crypt-directory-name-encryption        Option to either encrypt directory names or leave them intact. (default true)
      --crypt-filename-encryption string       How to encrypt the filenames. (default "standard")
      --crypt-password string                  Password or pass phrase for encryption.
      --crypt-password2 string                 Password or pass phrase for salt. Optional but recommended.
      --crypt-remote string                    Remote to encrypt/decrypt.
      --crypt-show-mapping                     For all files listed show how the names encrypt.
      --delete-after                           When synchronizing, delete files on destination after transferring (default)
      --delete-before                          When synchronizing, delete files on destination before transferring
      --delete-during                          When synchronizing, delete files during transfer
      --delete-excluded                        Delete files on dest excluded from sync
      --disable string                         Disable a comma separated list of features. Use help to see a list.
      --drive-acknowledge-abuse                Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
      --drive-allow-import-name-change         Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
      --drive-alternate-export                 Use alternate export URLs for google documents export.,
      --drive-auth-owner-only                  Only consider files owned by the authenticated user.
      --drive-chunk-size SizeSuffix            Upload chunk size. Must a power of 2 >= 256k. (default 8M)
      --drive-client-id string                 Google Application Client Id
      --drive-client-secret string             Google Application Client Secret
      --drive-export-formats string            Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
      --drive-formats string                   Deprecated: see export_formats
      --drive-impersonate string               Impersonate this user when using a service account.
      --drive-import-formats string            Comma separated list of preferred formats for uploading Google docs.
      --drive-keep-revision-forever            Keep new head revision of each file forever.
      --drive-list-chunk int                   Size of listing chunk 100-1000. 0 to disable. (default 1000)
      --drive-pacer-burst int                  Number of API calls to allow without sleeping. (default 100)
      --drive-pacer-min-sleep Duration         Minimum time to sleep between API calls. (default 100ms)
      --drive-root-folder-id string            ID of the root folder
      --drive-scope string                     Scope that rclone should use when requesting access from drive.
      --drive-service-account-credentials string Service Account Credentials JSON blob
      --drive-service-account-file string      Service Account Credentials JSON file path
      --drive-shared-with-me                   Only show files that are shared with me.
      --drive-skip-gdocs                       Skip google documents in all listings.
      --drive-team-drive string                ID of the Team Drive
      --drive-trashed-only                     Only show files that are in the trash.
      --drive-upload-cutoff SizeSuffix         Cutoff for switching to chunked upload (default 8M)
      --drive-use-created-date                 Use file created date instead of modified date.,
      --drive-use-trash                        Send files to the trash instead of deleting permanently. (default true)
      --drive-v2-download-min-size SizeSuffix  If Object's are greater, use drive v2 API to download. (default off)
      --dropbox-chunk-size SizeSuffix          Upload chunk size. (< 150M). (default 48M)
      --dropbox-client-id string               Dropbox App Client Id
      --dropbox-client-secret string           Dropbox App Client Secret
      --dropbox-impersonate string             Impersonate this user when using a business account.
  -n, --dry-run                                Do a trial run with no permanent changes
      --dump DumpFlags                         List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
      --dump-bodies                            Dump HTTP headers and bodies - may contain sensitive info
      --dump-headers                           Dump HTTP bodies - may contain sensitive info
      --exclude stringArray                    Exclude files matching pattern
      --exclude-from stringArray               Read exclude patterns from file
      --exclude-if-present string              Exclude directories if filename is present
      --fast-list                              Use recursive list if available. Uses more memory but fewer transactions.
      --files-from stringArray                 Read list of source-file names from file
  -f, --filter stringArray                     Add a file-filtering rule
      --filter-from stringArray                Read filtering patterns from a file
      --ftp-host string                        FTP host to connect to
      --ftp-pass string                        FTP password
      --ftp-port string                        FTP port, leave blank to use default (21)
      --ftp-user string                        FTP username, leave blank for current username, $USER
      --gcs-bucket-acl string                  Access Control List for new buckets.
      --gcs-client-id string                   Google Application Client Id
      --gcs-client-secret string               Google Application Client Secret
      --gcs-location string                    Location for the newly created buckets.
      --gcs-object-acl string                  Access Control List for new objects.
      --gcs-project-number string              Project number.
      --gcs-service-account-file string        Service Account Credentials JSON file path
      --gcs-storage-class string               The storage class to use when storing objects in Google Cloud Storage.
      --http-url string                        URL of http host to connect to
      --hubic-chunk-size SizeSuffix            Above this size files will be chunked into a _segments container. (default 5G)
      --hubic-client-id string                 Hubic Client Id
      --hubic-client-secret string             Hubic Client Secret
      --hubic-no-chunk                         Don't chunk files during streaming upload.
      --ignore-case                            Ignore case in filters (case insensitive)
      --ignore-checksum                        Skip post copy check of checksums.
      --ignore-errors                          delete even if there are I/O errors
      --ignore-existing                        Skip all files that exist on destination
      --ignore-size                            Ignore size when skipping use mod-time or checksum.
  -I, --ignore-times                           Don't skip files that match size and time - transfer all files
      --immutable                              Do not modify files. Fail if existing files have been modified.
      --include stringArray                    Include files matching pattern
      --include-from stringArray               Read include patterns from file
      --jottacloud-hard-delete                 Delete files permanently rather than putting them into the trash.
      --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
      --jottacloud-mountpoint string           The mountpoint to use.
      --jottacloud-unlink                      Remove existing public link to file/folder with link command rather than creating.
      --jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fail's. (default 10M)
      --jottacloud-user string                 User Name:
  -l, --links                                  Translate symlinks to/from regular files with a '.rclonelink' extension
      --local-no-check-updated                 Don't check to see if the files change during upload
      --local-no-unicode-normalization         Don't apply unicode normalization to paths and filenames (Deprecated)
      --local-nounc string                     Disable UNC (long path names) conversion on Windows
      --log-file string                        Log everything to this file
      --log-format string                      Comma separated list of log format options (default "date,time")
      --log-level string                       Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
      --low-level-retries int                  Number of low level retries to do. (default 10)
      --max-age Duration                       Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
      --max-backlog int                        Maximum number of objects in sync or check backlog. (default 10000)
      --max-delete int                         When synchronizing, limit the number of deletes (default -1)
      --max-depth int                          If set limits the recursion depth to this. (default -1)
      --max-size SizeSuffix                    Only transfer files smaller than this in k or suffix b|k|M|G (default off)
      --max-transfer SizeSuffix                Maximum size of data to transfer. (default off)
      --mega-debug                             Output more debug from Mega.
      --mega-hard-delete                       Delete files permanently rather than putting them into the trash.
      --mega-pass string                       Password.
      --mega-user string                       User name
      --memprofile string                      Write memory profile to file
      --min-age Duration                       Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
      --min-size SizeSuffix                    Only transfer files bigger than this in k or suffix b|k|M|G (default off)
      --modify-window duration                 Max time diff to be considered the same (default 1ns)
      --no-check-certificate                   Do not verify the server SSL certificate. Insecure.
      --no-gzip-encoding                       Don't set Accept-Encoding: gzip.
      --no-traverse                            Don't traverse destination file system on copy.
      --no-update-modtime                      Don't update destination mod-time if files identical.
  -x, --one-file-system                        Don't cross filesystem boundaries (unix/macOS only).
      --onedrive-chunk-size SizeSuffix         Chunk size to upload files with - must be multiple of 320k. (default 10M)
      --onedrive-client-id string              Microsoft App Client Id
      --onedrive-client-secret string          Microsoft App Client Secret
--pcloud-client-id string Pcloud App Client Id --onedrive-drive-id string The ID of the drive to use
--pcloud-client-secret string Pcloud App Client Secret --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
-P, --progress Show progress during transfer. --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
--qingstor-access-key-id string QingStor Access Key ID --opendrive-password string Password.
--qingstor-connection-retries int Number of connection retries. (default 3) --opendrive-username string Username
--qingstor-endpoint string Enter a endpoint URL to connection QingStor API. --pcloud-client-id string Pcloud App Client Id
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank. --pcloud-client-secret string Pcloud App Client Secret
--qingstor-secret-access-key string QingStor Secret Access Key (password) -P, --progress Show progress during transfer.
--qingstor-zone string Zone to connect to. --qingstor-access-key-id string QingStor Access Key ID
-q, --quiet Print as little stuff as possible --qingstor-chunk-size SizeSuffix Chunk size to use for uploading. (default 4M)
--rc Enable the remote control server. --qingstor-connection-retries int Number of connection retries. (default 3)
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") --qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate) --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
--rc-client-ca string Client certificate authority to verify clients with --qingstor-secret-access-key string QingStor Secret Access Key (password)
--rc-files string Path to local files to serve on the HTTP server. --qingstor-upload-concurrency int Concurrency for multipart uploads. (default 1)
--rc-htpasswd string htpasswd file - if not provided no authentication is done --qingstor-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M)
--rc-key string SSL PEM Private key --qingstor-zone string Zone to connect to.
--rc-max-header-bytes int Maximum size of request header (default 4096) -q, --quiet Print as little stuff as possible
--rc-no-auth Don't require auth for certain methods. --rc Enable the remote control server.
--rc-pass string Password for authentication. --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
--rc-realm string realm for authentication (default "rclone") --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--rc-serve Enable the serving of remote objects. --rc-client-ca string Client certificate authority to verify clients with
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) --rc-files string Path to local files to serve on the HTTP server.
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) --rc-htpasswd string htpasswd file - if not provided no authentication is done
--rc-user string User name for authentication. --rc-key string SSL PEM Private key
--retries int Retry operations this many times if they fail (default 3) --rc-max-header-bytes int Maximum size of request header (default 4096)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) --rc-no-auth Don't require auth for certain methods.
--s3-access-key-id string AWS Access Key ID. --rc-pass string Password for authentication.
--s3-acl string Canned ACL used when creating buckets and storing or copying objects. --rc-realm string realm for authentication (default "rclone")
--s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M) --rc-serve Enable the serving of remote objects.
--s3-disable-checksum Don't store MD5 checksum with object metadata --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--s3-endpoint string Endpoint for S3 API. --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). --rc-user string User name for authentication.
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true) --retries int Retry operations this many times if they fail (default 3)
--s3-location-constraint string Location constraint - must be set to match the Region. --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--s3-provider string Choose your S3 provider. --s3-access-key-id string AWS Access Key ID.
--s3-region string Region to connect to. --s3-acl string Canned ACL used when creating buckets and storing or copying objects.
--s3-secret-access-key string AWS Secret Access Key (password) --s3-bucket-acl string Canned ACL used when creating buckets.
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
--s3-session-token string An AWS session token --s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. --s3-endpoint string Endpoint for S3 API.
--s3-storage-class string The storage class to use when storing new objects in S3. --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2) --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--s3-v2-auth If true use v2 authentication. --s3-location-constraint string Location constraint - must be set to match the Region.
--sftp-ask-password Allow asking for SFTP password when needed. --s3-provider string Choose your S3 provider.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. --s3-region string Region to connect to.
--sftp-host string SSH host to connect to --s3-secret-access-key string AWS Secret Access Key (password)
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent. --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--sftp-pass string SSH password, leave blank to use ssh-agent. --s3-session-token string An AWS session token
--sftp-path-override string Override path used by SSH connection. --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--sftp-port string SSH port, leave blank to use default (22) --s3-storage-class string The storage class to use when storing new objects in S3.
--sftp-set-modtime Set the modified time on the remote if set. (default true) --s3-upload-concurrency int Concurrency for multipart uploads. (default 4)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker. --s3-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M)
--sftp-user string SSH username, leave blank for current username, ncw --s3-v2-auth If true use v2 authentication.
--size-only Skip based on size only, not mod-time or checksum --sftp-ask-password Allow asking for SFTP password when needed.
--skip-links Don't warn about skipped symlinks. --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) --sftp-host string SSH host to connect to
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40) --sftp-key-file string Path to PEM-encoded private key file, leave blank or set key-use-agent to use ssh-agent.
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") --sftp-key-file-pass string The passphrase to decrypt the PEM-encoded private key file.
--stats-one-line Make the stats fit on one line. --sftp-key-use-agent When set forces the usage of the ssh-agent.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") --sftp-pass string SSH password, leave blank to use ssh-agent.
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) --sftp-path-override string Override path used by SSH connection.
--suffix string Suffix for use with --backup-dir. --sftp-port string SSH port, leave blank to use default (22)
--swift-auth string Authentication URL for server (OS_AUTH_URL). --sftp-set-modtime Set the modified time on the remote if set. (default true)
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) --sftp-user string SSH username, leave blank for current username, ncw
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) --size-only Skip based on size only, not mod-time or checksum
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) --skip-links Don't warn about skipped symlinks.
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form. --stats-file-name-length int Max file name length in stats. 0 for no limit (default 45)
--swift-key string API key or password (OS_PASSWORD). --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--swift-region string Region name - optional (OS_REGION_NAME) --stats-one-line Make the stats fit on one line.
--swift-storage-policy string The storage policy to use when creating a new container --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL) --streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) --suffix string Suffix for use with --backup-dir.
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) --swift-application-credential-id string Application Credential ID (OS_APPLICATION_CREDENTIAL_ID)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) --swift-application-credential-name string Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME)
--swift-user string User name to log in (OS_USERNAME). --swift-application-credential-secret string Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET)
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). --swift-auth string Authentication URL for server (OS_AUTH_URL).
--syslog Use Syslog for logging --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--timeout duration IO idle timeout (default 5m0s) --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--tpslimit float Limit HTTP transactions per second to this. --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--track-renames When synchronizing, track file renames and do a server side move if possible --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--transfers int Number of file transfers to run in parallel. (default 4) --swift-key string API key or password (OS_PASSWORD).
--union-remotes string List of space separated remotes. --swift-no-chunk Don't chunk files during streaming upload.
-u, --update Skip files that are newer on the destination. --swift-region string Region name - optional (OS_REGION_NAME)
--use-server-modtime Use server modified time instead of object metadata --swift-storage-policy string The storage policy to use when creating a new container
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.45") --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
-v, --verbose count Print lots more stuff (repeat for more) --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon) --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--webdav-pass string Password. --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--webdav-url string URL of http host to connect to --swift-user string User name to log in (OS_USERNAME).
--webdav-user string User name --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--webdav-vendor string Name of the Webdav site/service/software you are using --syslog Use Syslog for logging
--yandex-client-id string Yandex Client Id --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--yandex-client-secret string Yandex Client Secret --timeout duration IO idle timeout (default 5m0s)
--yandex-unlink Remove existing public link to file/folder with link command rather than creating. --tpslimit float Limit HTTP transactions per second to this.
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
--union-remotes string List of space separated remotes.
-u, --update Skip files that are newer on the destination.
--use-cookies Enable session cookiejar.
--use-mmap Use mmap allocator (see docs).
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
--yandex-unlink Remove existing public link to file/folder with link command rather than creating.
``` ```
### SEE ALSO ### SEE ALSO
* [rclone genautocomplete](/commands/rclone_genautocomplete/) - Output completion script for a given shell. * [rclone genautocomplete](/commands/rclone_genautocomplete/) - Output completion script for a given shell.
###### Auto generated by spf13/cobra on 24-Nov-2018 ###### Auto generated by spf13/cobra on 9-Feb-2019
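
Many of the flags above take `SizeSuffix` values ("in k or suffix b|k|M|G") or `Duration` values (suffix `ms|s|m|h|d|w|M|y`). As a rough sketch of the size-suffix convention only (this is an illustrative Python snippet, not rclone's actual Go parser), a bare number is read as kBytes and a letter suffix scales by powers of 1024:

```python
# Illustrative sketch of SizeSuffix-style parsing, as accepted by flags
# such as --max-size and --buffer-size. NOT rclone's implementation --
# just a demonstration of the b/k/M/G binary-suffix convention.

def parse_size_suffix(s: str) -> int:
    """Parse values like '10M', '100k', '1G', '200b' into a byte count.

    A bare number is interpreted as kBytes, matching the docs'
    'in k or suffix b|k|M|G' wording.
    """
    multipliers = {"b": 1, "k": 1024, "M": 1024**2, "G": 1024**3}
    if s and s[-1] in multipliers:
        return int(float(s[:-1]) * multipliers[s[-1]])
    return int(float(s) * 1024)  # no suffix: treat as kBytes

print(parse_size_suffix("10M"))   # 10485760
print(parse_size_suffix("100k"))  # 102400
```

So, for example, `--buffer-size 16M` corresponds to 16 × 1024² bytes.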
View File

@ -1,5 +1,5 @@
---
date: 2019-02-09T10:42:18Z
title: "rclone genautocomplete zsh"
slug: rclone_genautocomplete_zsh
url: /commands/rclone_genautocomplete_zsh/
@ -40,285 +40,303 @@ rclone genautocomplete zsh [output_file] [flags]
### Options inherited from parent commands
```
      --acd-auth-url string   Auth server URL.
      --acd-client-id string   Amazon Application Client ID.
      --acd-client-secret string   Amazon Application Client Secret.
      --acd-templink-threshold SizeSuffix   Files >= this size will be downloaded via their tempLink. (default 9G)
      --acd-token-url string   Token server url.
      --acd-upload-wait-per-gb Duration   Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
      --alias-remote string   Remote or path to alias.
      --ask-password   Allow prompt for password for encrypted configuration. (default true)
      --auto-confirm   If enabled, do not request console confirmation.
      --azureblob-access-tier string   Access tier of blob: hot, cool or archive.
      --azureblob-account string   Storage Account Name (leave blank to use connection string or SAS URL)
      --azureblob-chunk-size SizeSuffix   Upload chunk size (<= 100MB). (default 4M)
      --azureblob-endpoint string   Endpoint for the service
      --azureblob-key string   Storage Account Key (leave blank to use connection string or SAS URL)
      --azureblob-list-chunk int   Size of blob list. (default 5000)
      --azureblob-sas-url string   SAS URL for container level access only
      --azureblob-upload-cutoff SizeSuffix   Cutoff for switching to chunked upload (<= 256MB). (default 256M)
      --b2-account string   Account ID or Application Key ID
      --b2-chunk-size SizeSuffix   Upload chunk size. Must fit in memory. (default 96M)
      --b2-disable-checksum   Disable checksums for large (> upload cutoff) files
      --b2-endpoint string   Endpoint for the service.
      --b2-hard-delete   Permanently delete files on remote removal, otherwise hide files.
      --b2-key string   Application Key
      --b2-test-mode string   A flag string for X-Bz-Test-Mode header for debugging.
      --b2-upload-cutoff SizeSuffix   Cutoff for switching to chunked upload. (default 200M)
      --b2-versions   Include old versions in directory listings.
      --backup-dir string   Make backups into hierarchy based in DIR.
      --bind string   Local address to bind to for outgoing connections, IPv4, IPv6 or name.
      --box-client-id string   Box App Client Id.
      --box-client-secret string   Box App Client Secret
      --box-commit-retries int   Max number of times to try committing a multipart file. (default 100)
      --box-upload-cutoff SizeSuffix   Cutoff for switching to multipart upload (>= 50MB). (default 50M)
      --buffer-size SizeSuffix   In memory buffer size when reading files for each --transfer. (default 16M)
      --bwlimit BwTimetable   Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
      --cache-chunk-clean-interval Duration   How often should the cache perform cleanups of the chunk storage. (default 1m0s)
      --cache-chunk-no-memory   Disable the in-memory cache for storing chunks during streaming.
      --cache-chunk-path string   Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend")
      --cache-chunk-size SizeSuffix   The size of a chunk (partial file data). (default 5M)
      --cache-chunk-total-size SizeSuffix   The total size that the chunks can take up on the local disk. (default 10G)
      --cache-db-path string   Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend")
      --cache-db-purge   Clear all the cached data for this remote on start.
      --cache-db-wait-time Duration   How long to wait for the DB to be available - 0 is unlimited (default 1s)
      --cache-dir string   Directory rclone will use for caching. (default "$HOME/.cache/rclone")
      --cache-info-age Duration   How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
      --cache-plex-insecure string   Skip all certificate verifications when connecting to the Plex server
      --cache-plex-password string   The password of the Plex user
      --cache-plex-url string   The URL of the Plex server
      --cache-plex-username string   The username of the Plex user
      --cache-read-retries int   How many times to retry a read from a cache storage. (default 10)
      --cache-remote string   Remote to cache.
      --cache-rps int   Limits the number of requests per second to the source FS (-1 to disable) (default -1)
      --cache-tmp-upload-path string   Directory to keep temporary files until they are uploaded.
      --cache-tmp-wait-time Duration   How long should files be stored in local cache before being uploaded (default 15s)
      --cache-workers int   How many workers should run in parallel to download chunks. (default 4)
      --cache-writes   Cache file data on writes through the FS
      --checkers int   Number of checkers to run in parallel. (default 8)
  -c, --checksum   Skip based on checksum (if available) & size, not mod-time & size
      --config string   Config file. (default "/home/ncw/.rclone.conf")
      --contimeout duration   Connect timeout (default 1m0s)
  -L, --copy-links   Follow symlinks and copy the pointed to item.
      --cpuprofile string   Write cpu profile to file
      --crypt-directory-name-encryption   Option to either encrypt directory names or leave them intact. (default true)
      --crypt-filename-encryption string   How to encrypt the filenames. (default "standard")
      --crypt-password string   Password or pass phrase for encryption.
      --crypt-password2 string   Password or pass phrase for salt. Optional but recommended.
      --crypt-remote string   Remote to encrypt/decrypt.
      --crypt-show-mapping   For all files listed show how the names encrypt.
      --delete-after   When synchronizing, delete files on destination after transferring (default)
      --delete-before   When synchronizing, delete files on destination before transferring
      --delete-during   When synchronizing, delete files during transfer
      --delete-excluded   Delete files on dest excluded from sync
      --disable string   Disable a comma separated list of features. Use help to see a list.
      --drive-acknowledge-abuse   Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
      --drive-allow-import-name-change   Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
      --drive-alternate-export   Use alternate export URLs for google documents export.,
      --drive-auth-owner-only   Only consider files owned by the authenticated user.
      --drive-chunk-size SizeSuffix   Upload chunk size. Must a power of 2 >= 256k. (default 8M)
      --drive-client-id string   Google Application Client Id
      --drive-client-secret string   Google Application Client Secret
      --drive-export-formats string   Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
      --drive-formats string   Deprecated: see export_formats
      --drive-impersonate string   Impersonate this user when using a service account.
      --drive-import-formats string   Comma separated list of preferred formats for uploading Google docs.
      --drive-keep-revision-forever   Keep new head revision of each file forever.
      --drive-list-chunk int   Size of listing chunk 100-1000. 0 to disable. (default 1000)
      --drive-pacer-burst int   Number of API calls to allow without sleeping. (default 100)
      --drive-pacer-min-sleep Duration   Minimum time to sleep between API calls. (default 100ms)
      --drive-root-folder-id string   ID of the root folder
      --drive-scope string   Scope that rclone should use when requesting access from drive.
      --drive-service-account-credentials string   Service Account Credentials JSON blob
      --drive-service-account-file string   Service Account Credentials JSON file path
      --drive-shared-with-me   Only show files that are shared with me.
      --drive-skip-gdocs   Skip google documents in all listings.
      --drive-team-drive string   ID of the Team Drive
      --drive-trashed-only   Only show files that are in the trash.
      --drive-upload-cutoff SizeSuffix   Cutoff for switching to chunked upload (default 8M)
      --drive-use-created-date   Use file created date instead of modified date.,
      --drive-use-trash   Send files to the trash instead of deleting permanently. (default true)
--dropbox-client-secret string Dropbox App Client Secret --drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off)
--dropbox-impersonate string Impersonate this user when using a business account. --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M)
-n, --dry-run Do a trial run with no permanent changes --dropbox-client-id string Dropbox App Client Id
--dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles --dropbox-client-secret string Dropbox App Client Secret
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info --dropbox-impersonate string Impersonate this user when using a business account.
--dump-headers Dump HTTP bodies - may contain sensitive info -n, --dry-run Do a trial run with no permanent changes
--exclude stringArray Exclude files matching pattern --dump DumpFlags List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--exclude-from stringArray Read exclude patterns from file --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--exclude-if-present string Exclude directories if filename is present --dump-headers Dump HTTP bodies - may contain sensitive info
--fast-list Use recursive list if available. Uses more memory but fewer transactions. --exclude stringArray Exclude files matching pattern
--files-from stringArray Read list of source-file names from file --exclude-from stringArray Read exclude patterns from file
-f, --filter stringArray Add a file-filtering rule --exclude-if-present string Exclude directories if filename is present
--filter-from stringArray Read filtering patterns from a file --fast-list Use recursive list if available. Uses more memory but fewer transactions.
--ftp-host string FTP host to connect to --files-from stringArray Read list of source-file names from file
--ftp-pass string FTP password -f, --filter stringArray Add a file-filtering rule
--ftp-port string FTP port, leave blank to use default (21) --filter-from stringArray Read filtering patterns from a file
--ftp-user string FTP username, leave blank for current username, $USER --ftp-host string FTP host to connect to
--gcs-bucket-acl string Access Control List for new buckets. --ftp-pass string FTP password
--gcs-client-id string Google Application Client Id --ftp-port string FTP port, leave blank to use default (21)
--gcs-client-secret string Google Application Client Secret --ftp-user string FTP username, leave blank for current username, $USER
--gcs-location string Location for the newly created buckets. --gcs-bucket-acl string Access Control List for new buckets.
--gcs-object-acl string Access Control List for new objects. --gcs-client-id string Google Application Client Id
--gcs-project-number string Project number. --gcs-client-secret string Google Application Client Secret
--gcs-service-account-file string Service Account Credentials JSON file path --gcs-location string Location for the newly created buckets.
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage. --gcs-object-acl string Access Control List for new objects.
--http-url string URL of http host to connect to --gcs-project-number string Project number.
--hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) --gcs-service-account-file string Service Account Credentials JSON file path
--hubic-client-id string Hubic Client Id --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--hubic-client-secret string Hubic Client Secret --http-url string URL of http host to connect to
--ignore-case Ignore case in filters (case insensitive) --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--ignore-checksum Skip post copy check of checksums. --hubic-client-id string Hubic Client Id
--ignore-errors delete even if there are I/O errors --hubic-client-secret string Hubic Client Secret
--ignore-existing Skip all files that exist on destination --hubic-no-chunk Don't chunk files during streaming upload.
--ignore-size Ignore size when skipping use mod-time or checksum. --ignore-case Ignore case in filters (case insensitive)
-I, --ignore-times Don't skip files that match size and time - transfer all files --ignore-checksum Skip post copy check of checksums.
--immutable Do not modify files. Fail if existing files have been modified. --ignore-errors delete even if there are I/O errors
--include stringArray Include files matching pattern --ignore-existing Skip all files that exist on destination
--include-from stringArray Read include patterns from file --ignore-size Ignore size when skipping use mod-time or checksum.
--jottacloud-hard-delete Delete files permanently rather than putting them into the trash. -I, --ignore-times Don't skip files that match size and time - transfer all files
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M) --immutable Do not modify files. Fail if existing files have been modified.
--jottacloud-mountpoint string The mountpoint to use. --include stringArray Include files matching pattern
--jottacloud-pass string Password. --include-from stringArray Read include patterns from file
--jottacloud-unlink Remove existing public link to file/folder with link command rather than creating. --jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
--jottacloud-user string User Name --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--local-no-check-updated Don't check to see if the files change during upload --jottacloud-mountpoint string The mountpoint to use.
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated) --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
--local-nounc string Disable UNC (long path names) conversion on Windows --jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fail's. (default 10M)
--log-file string Log everything to this file --jottacloud-user string User Name:
--log-format string Comma separated list of log format options (default "date,time") -l, --links Translate symlinks to/from regular files with a '.rclonelink' extension
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") --local-no-check-updated Don't check to see if the files change during upload
--low-level-retries int Number of low level retries to do. (default 10) --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) --local-nounc string Disable UNC (long path names) conversion on Windows
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000) --log-file string Log everything to this file
--max-delete int When synchronizing, limit the number of deletes (default -1) --log-format string Comma separated list of log format options (default "date,time")
--max-depth int If set limits the recursion depth to this. (default -1) --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) --low-level-retries int Number of low level retries to do. (default 10)
--max-transfer int Maximum size of data to transfer. (default off) --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--mega-debug Output more debug from Mega. --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--mega-hard-delete Delete files permanently rather than putting them into the trash. --max-delete int When synchronizing, limit the number of deletes (default -1)
--mega-pass string Password. --max-depth int If set limits the recursion depth to this. (default -1)
--mega-user string User name --max-size SizeSuffix Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--memprofile string Write memory profile to file --max-transfer SizeSuffix Maximum size of data to transfer. (default off)
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) --mega-debug Output more debug from Mega.
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) --mega-hard-delete Delete files permanently rather than putting them into the trash.
--modify-window duration Max time diff to be considered the same (default 1ns) --mega-pass string Password.
--no-check-certificate Do not verify the server SSL certificate. Insecure. --mega-user string User name
--no-gzip-encoding Don't set Accept-Encoding: gzip. --memprofile string Write memory profile to file
--no-traverse Obsolete - does nothing. --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--no-update-modtime Don't update destination mod-time if files identical. --min-size SizeSuffix Only transfer files bigger than this in k or suffix b|k|M|G (default off)
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only). --modify-window duration Max time diff to be considered the same (default 1ns)
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M) --no-check-certificate Do not verify the server SSL certificate. Insecure.
--onedrive-client-id string Microsoft App Client Id --no-gzip-encoding Don't set Accept-Encoding: gzip.
--onedrive-client-secret string Microsoft App Client Secret --no-traverse Don't traverse destination file system on copy.
--onedrive-drive-id string The ID of the drive to use --no-update-modtime Don't update destination mod-time if files identical.
--onedrive-drive-type string The type of the drive ( personal | business | documentLibrary ) -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-expose-onenote-files Set to make OneNote files show up in directory listings. --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--opendrive-password string Password. --onedrive-client-id string Microsoft App Client Id
--opendrive-username string Username --onedrive-client-secret string Microsoft App Client Secret
--pcloud-client-id string Pcloud App Client Id --onedrive-drive-id string The ID of the drive to use
--pcloud-client-secret string Pcloud App Client Secret --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
-P, --progress Show progress during transfer. --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
--qingstor-access-key-id string QingStor Access Key ID --opendrive-password string Password.
--qingstor-connection-retries int Number of connection retries. (default 3) --opendrive-username string Username
--qingstor-endpoint string Enter a endpoint URL to connection QingStor API. --pcloud-client-id string Pcloud App Client Id
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank. --pcloud-client-secret string Pcloud App Client Secret
--qingstor-secret-access-key string QingStor Secret Access Key (password) -P, --progress Show progress during transfer.
--qingstor-zone string Zone to connect to. --qingstor-access-key-id string QingStor Access Key ID
-q, --quiet Print as little stuff as possible --qingstor-chunk-size SizeSuffix Chunk size to use for uploading. (default 4M)
--rc Enable the remote control server. --qingstor-connection-retries int Number of connection retries. (default 3)
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") --qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate) --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
--rc-client-ca string Client certificate authority to verify clients with --qingstor-secret-access-key string QingStor Secret Access Key (password)
--rc-files string Path to local files to serve on the HTTP server. --qingstor-upload-concurrency int Concurrency for multipart uploads. (default 1)
--rc-htpasswd string htpasswd file - if not provided no authentication is done --qingstor-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M)
--rc-key string SSL PEM Private key --qingstor-zone string Zone to connect to.
--rc-max-header-bytes int Maximum size of request header (default 4096) -q, --quiet Print as little stuff as possible
--rc-no-auth Don't require auth for certain methods. --rc Enable the remote control server.
--rc-pass string Password for authentication. --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
--rc-realm string realm for authentication (default "rclone") --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--rc-serve Enable the serving of remote objects. --rc-client-ca string Client certificate authority to verify clients with
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) --rc-files string Path to local files to serve on the HTTP server.
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) --rc-htpasswd string htpasswd file - if not provided no authentication is done
--rc-user string User name for authentication. --rc-key string SSL PEM Private key
--retries int Retry operations this many times if they fail (default 3) --rc-max-header-bytes int Maximum size of request header (default 4096)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) --rc-no-auth Don't require auth for certain methods.
--s3-access-key-id string AWS Access Key ID. --rc-pass string Password for authentication.
--s3-acl string Canned ACL used when creating buckets and storing or copying objects. --rc-realm string realm for authentication (default "rclone")
--s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M) --rc-serve Enable the serving of remote objects.
--s3-disable-checksum Don't store MD5 checksum with object metadata --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--s3-endpoint string Endpoint for S3 API. --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). --rc-user string User name for authentication.
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true) --retries int Retry operations this many times if they fail (default 3)
--s3-location-constraint string Location constraint - must be set to match the Region. --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--s3-provider string Choose your S3 provider. --s3-access-key-id string AWS Access Key ID.
--s3-region string Region to connect to. --s3-acl string Canned ACL used when creating buckets and storing or copying objects.
--s3-secret-access-key string AWS Secret Access Key (password) --s3-bucket-acl string Canned ACL used when creating buckets.
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
--s3-session-token string An AWS session token --s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. --s3-endpoint string Endpoint for S3 API.
--s3-storage-class string The storage class to use when storing new objects in S3. --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2) --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--s3-v2-auth If true use v2 authentication. --s3-location-constraint string Location constraint - must be set to match the Region.
--sftp-ask-password Allow asking for SFTP password when needed. --s3-provider string Choose your S3 provider.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. --s3-region string Region to connect to.
--sftp-host string SSH host to connect to --s3-secret-access-key string AWS Secret Access Key (password)
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent. --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--sftp-pass string SSH password, leave blank to use ssh-agent. --s3-session-token string An AWS session token
--sftp-path-override string Override path used by SSH connection. --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--sftp-port string SSH port, leave blank to use default (22) --s3-storage-class string The storage class to use when storing new objects in S3.
--sftp-set-modtime Set the modified time on the remote if set. (default true) --s3-upload-concurrency int Concurrency for multipart uploads. (default 4)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker. --s3-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M)
--sftp-user string SSH username, leave blank for current username, ncw --s3-v2-auth If true use v2 authentication.
--size-only Skip based on size only, not mod-time or checksum --sftp-ask-password Allow asking for SFTP password when needed.
--skip-links Don't warn about skipped symlinks. --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) --sftp-host string SSH host to connect to
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40) --sftp-key-file string Path to PEM-encoded private key file, leave blank or set key-use-agent to use ssh-agent.
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") --sftp-key-file-pass string The passphrase to decrypt the PEM-encoded private key file.
--stats-one-line Make the stats fit on one line. --sftp-key-use-agent When set forces the usage of the ssh-agent.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") --sftp-pass string SSH password, leave blank to use ssh-agent.
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) --sftp-path-override string Override path used by SSH connection.
--suffix string Suffix for use with --backup-dir. --sftp-port string SSH port, leave blank to use default (22)
--swift-auth string Authentication URL for server (OS_AUTH_URL). --sftp-set-modtime Set the modified time on the remote if set. (default true)
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) --sftp-user string SSH username, leave blank for current username, ncw
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) --size-only Skip based on size only, not mod-time or checksum
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) --skip-links Don't warn about skipped symlinks.
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form. --stats-file-name-length int Max file name length in stats. 0 for no limit (default 45)
--swift-key string API key or password (OS_PASSWORD). --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--swift-region string Region name - optional (OS_REGION_NAME) --stats-one-line Make the stats fit on one line.
--swift-storage-policy string The storage policy to use when creating a new container --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL) --streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) --suffix string Suffix for use with --backup-dir.
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) --swift-application-credential-id string Application Credential ID (OS_APPLICATION_CREDENTIAL_ID)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) --swift-application-credential-name string Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME)
--swift-user string User name to log in (OS_USERNAME). --swift-application-credential-secret string Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET)
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). --swift-auth string Authentication URL for server (OS_AUTH_URL).
--syslog Use Syslog for logging --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--timeout duration IO idle timeout (default 5m0s) --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--tpslimit float Limit HTTP transactions per second to this. --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--track-renames When synchronizing, track file renames and do a server side move if possible --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--transfers int Number of file transfers to run in parallel. (default 4) --swift-key string API key or password (OS_PASSWORD).
--union-remotes string List of space separated remotes. --swift-no-chunk Don't chunk files during streaming upload.
-u, --update Skip files that are newer on the destination. --swift-region string Region name - optional (OS_REGION_NAME)
--use-server-modtime Use server modified time instead of object metadata --swift-storage-policy string The storage policy to use when creating a new container
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.45") --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
-v, --verbose count Print lots more stuff (repeat for more) --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon) --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--webdav-pass string Password. --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--webdav-url string URL of http host to connect to --swift-user string User name to log in (OS_USERNAME).
--webdav-user string User name --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--webdav-vendor string Name of the Webdav site/service/software you are using --syslog Use Syslog for logging
--yandex-client-id string Yandex Client Id --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--yandex-client-secret string Yandex Client Secret --timeout duration IO idle timeout (default 5m0s)
--yandex-unlink Remove existing public link to file/folder with link command rather than creating. --tpslimit float Limit HTTP transactions per second to this.
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
--union-remotes string List of space separated remotes.
-u, --update Skip files that are newer on the destination.
--use-cookies Enable session cookiejar.
--use-mmap Use mmap allocator (see docs).
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
--yandex-unlink Remove existing public link to file/folder with link command rather than creating.
``` ```
### SEE ALSO
* [rclone genautocomplete](/commands/rclone_genautocomplete/) - Output completion script for a given shell.
###### Auto generated by spf13/cobra on 9-Feb-2019
@ -1,5 +1,5 @@
---
date: 2019-02-09T10:42:18Z
title: "rclone gendocs"
slug: rclone_gendocs
url: /commands/rclone_gendocs/
@ -28,285 +28,303 @@ rclone gendocs output_directory [flags]
### Options inherited from parent commands
```
      --acd-auth-url string                          Auth server URL.
      --acd-client-id string                         Amazon Application Client ID.
      --acd-client-secret string                     Amazon Application Client Secret.
      --acd-templink-threshold SizeSuffix            Files >= this size will be downloaded via their tempLink. (default 9G)
      --acd-token-url string                         Token server url.
      --acd-upload-wait-per-gb Duration              Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
      --alias-remote string                          Remote or path to alias.
      --ask-password                                 Allow prompt for password for encrypted configuration. (default true)
      --auto-confirm                                 If enabled, do not request console confirmation.
      --azureblob-access-tier string                 Access tier of blob: hot, cool or archive.
      --azureblob-account string                     Storage Account Name (leave blank to use connection string or SAS URL)
      --azureblob-chunk-size SizeSuffix              Upload chunk size (<= 100MB). (default 4M)
      --azureblob-endpoint string                    Endpoint for the service
      --azureblob-key string                         Storage Account Key (leave blank to use connection string or SAS URL)
      --azureblob-list-chunk int                     Size of blob list. (default 5000)
      --azureblob-sas-url string                     SAS URL for container level access only
      --azureblob-upload-cutoff SizeSuffix           Cutoff for switching to chunked upload (<= 256MB). (default 256M)
      --b2-account string                            Account ID or Application Key ID
      --b2-chunk-size SizeSuffix                     Upload chunk size. Must fit in memory. (default 96M)
      --b2-disable-checksum                          Disable checksums for large (> upload cutoff) files
      --b2-endpoint string                           Endpoint for the service.
      --b2-hard-delete                               Permanently delete files on remote removal, otherwise hide files.
      --b2-key string                                Application Key
      --b2-test-mode string                          A flag string for X-Bz-Test-Mode header for debugging.
      --b2-upload-cutoff SizeSuffix                  Cutoff for switching to chunked upload. (default 200M)
      --b2-versions                                  Include old versions in directory listings.
      --backup-dir string                            Make backups into hierarchy based in DIR.
      --bind string                                  Local address to bind to for outgoing connections, IPv4, IPv6 or name.
      --box-client-id string                         Box App Client Id.
      --box-client-secret string                     Box App Client Secret
      --box-commit-retries int                       Max number of times to try committing a multipart file. (default 100)
      --box-upload-cutoff SizeSuffix                 Cutoff for switching to multipart upload (>= 50MB). (default 50M)
      --buffer-size SizeSuffix                       In memory buffer size when reading files for each --transfer. (default 16M)
      --bwlimit BwTimetable                          Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming. --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
--cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend") --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
--cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M) --cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend")
--cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G) --cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M)
--cache-db-path string Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend") --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G)
--cache-db-purge Clear all the cached data for this remote on start. --cache-db-path string Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend")
--cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s) --cache-db-purge Clear all the cached data for this remote on start.
--cache-dir string Directory rclone will use for caching. (default "$HOME/.cache/rclone") --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s) --cache-dir string Directory rclone will use for caching. (default "$HOME/.cache/rclone")
--cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
--cache-plex-password string The password of the Plex user --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server
--cache-plex-url string The URL of the Plex server --cache-plex-password string The password of the Plex user
--cache-plex-username string The username of the Plex user --cache-plex-url string The URL of the Plex server
--cache-read-retries int How many times to retry a read from a cache storage. (default 10) --cache-plex-username string The username of the Plex user
--cache-remote string Remote to cache. --cache-read-retries int How many times to retry a read from a cache storage. (default 10)
--cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1) --cache-remote string Remote to cache.
--cache-tmp-upload-path string Directory to keep temporary files until they are uploaded. --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1)
--cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s) --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded.
--cache-workers int How many workers should run in parallel to download chunks. (default 4) --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
--cache-writes Cache file data on writes through the FS --cache-workers int How many workers should run in parallel to download chunks. (default 4)
--checkers int Number of checkers to run in parallel. (default 8) --cache-writes Cache file data on writes through the FS
-c, --checksum Skip based on checksum & size, not mod-time & size --checkers int Number of checkers to run in parallel. (default 8)
--config string Config file. (default "/home/ncw/.rclone.conf") -c, --checksum Skip based on checksum (if available) & size, not mod-time & size
--contimeout duration Connect timeout (default 1m0s) --config string Config file. (default "/home/ncw/.rclone.conf")
-L, --copy-links Follow symlinks and copy the pointed to item. --contimeout duration Connect timeout (default 1m0s)
--cpuprofile string Write cpu profile to file -L, --copy-links Follow symlinks and copy the pointed to item.
--crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true) --cpuprofile string Write cpu profile to file
--crypt-filename-encryption string How to encrypt the filenames. (default "standard") --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--crypt-password string Password or pass phrase for encryption. --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--crypt-password2 string Password or pass phrase for salt. Optional but recommended. --crypt-password string Password or pass phrase for encryption.
--crypt-remote string Remote to encrypt/decrypt. --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
--crypt-show-mapping For all files listed show how the names encrypt. --crypt-remote string Remote to encrypt/decrypt.
--delete-after When synchronizing, delete files on destination after transferring (default) --crypt-show-mapping For all files listed show how the names encrypt.
--delete-before When synchronizing, delete files on destination before transferring --delete-after When synchronizing, delete files on destination after transferring (default)
--delete-during When synchronizing, delete files during transfer --delete-before When synchronizing, delete files on destination before transferring
--delete-excluded Delete files on dest excluded from sync --delete-during When synchronizing, delete files during transfer
--disable string Disable a comma separated list of features. Use help to see a list. --delete-excluded Delete files on dest excluded from sync
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded. --disable string Disable a comma separated list of features. Use help to see a list.
--drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time. --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--drive-alternate-export Use alternate export URLs for google documents export., --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
--drive-auth-owner-only Only consider files owned by the authenticated user. --drive-alternate-export Use alternate export URLs for google documents export.,
--drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M) --drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-client-id string Google Application Client Id --drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M)
--drive-client-secret string Google Application Client Secret --drive-client-id string Google Application Client Id
--drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") --drive-client-secret string Google Application Client Secret
--drive-formats string Deprecated: see export_formats --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-impersonate string Impersonate this user when using a service account. --drive-formats string Deprecated: see export_formats
--drive-import-formats string Comma separated list of preferred formats for uploading Google docs. --drive-impersonate string Impersonate this user when using a service account.
--drive-keep-revision-forever Keep new head revision of each file forever. --drive-import-formats string Comma separated list of preferred formats for uploading Google docs.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000) --drive-keep-revision-forever Keep new head revision of each file forever.
--drive-root-folder-id string ID of the root folder --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-scope string Scope that rclone should use when requesting access from drive. --drive-pacer-burst int Number of API calls to allow without sleeping. (default 100)
--drive-service-account-credentials string Service Account Credentials JSON blob --drive-pacer-min-sleep Duration Minimum time to sleep between API calls. (default 100ms)
--drive-service-account-file string Service Account Credentials JSON file path --drive-root-folder-id string ID of the root folder
--drive-shared-with-me Only show files that are shared with me. --drive-scope string Scope that rclone should use when requesting access from drive.
--drive-skip-gdocs Skip google documents in all listings. --drive-service-account-credentials string Service Account Credentials JSON blob
--drive-team-drive string ID of the Team Drive --drive-service-account-file string Service Account Credentials JSON file path
--drive-trashed-only Only show files that are in the trash. --drive-shared-with-me Only show files that are shared with me.
--drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M) --drive-skip-gdocs Skip google documents in all listings.
--drive-use-created-date Use file created date instead of modified date., --drive-team-drive string ID of the Team Drive
--drive-use-trash Send files to the trash instead of deleting permanently. (default true) --drive-trashed-only Only show files that are in the trash.
--drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off) --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M) --drive-use-created-date Use file created date instead of modified date.,
--dropbox-client-id string Dropbox App Client Id --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--dropbox-client-secret string Dropbox App Client Secret --drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off)
--dropbox-impersonate string Impersonate this user when using a business account. --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M)
-n, --dry-run Do a trial run with no permanent changes --dropbox-client-id string Dropbox App Client Id
--dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles --dropbox-client-secret string Dropbox App Client Secret
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info --dropbox-impersonate string Impersonate this user when using a business account.
--dump-headers Dump HTTP bodies - may contain sensitive info -n, --dry-run Do a trial run with no permanent changes
--exclude stringArray Exclude files matching pattern --dump DumpFlags List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--exclude-from stringArray Read exclude patterns from file --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--exclude-if-present string Exclude directories if filename is present --dump-headers Dump HTTP bodies - may contain sensitive info
--fast-list Use recursive list if available. Uses more memory but fewer transactions. --exclude stringArray Exclude files matching pattern
--files-from stringArray Read list of source-file names from file --exclude-from stringArray Read exclude patterns from file
-f, --filter stringArray Add a file-filtering rule --exclude-if-present string Exclude directories if filename is present
--filter-from stringArray Read filtering patterns from a file --fast-list Use recursive list if available. Uses more memory but fewer transactions.
--ftp-host string FTP host to connect to --files-from stringArray Read list of source-file names from file
--ftp-pass string FTP password -f, --filter stringArray Add a file-filtering rule
--ftp-port string FTP port, leave blank to use default (21) --filter-from stringArray Read filtering patterns from a file
--ftp-user string FTP username, leave blank for current username, $USER --ftp-host string FTP host to connect to
--gcs-bucket-acl string Access Control List for new buckets. --ftp-pass string FTP password
--gcs-client-id string Google Application Client Id --ftp-port string FTP port, leave blank to use default (21)
--gcs-client-secret string Google Application Client Secret --ftp-user string FTP username, leave blank for current username, $USER
--gcs-location string Location for the newly created buckets. --gcs-bucket-acl string Access Control List for new buckets.
--gcs-object-acl string Access Control List for new objects. --gcs-client-id string Google Application Client Id
--gcs-project-number string Project number. --gcs-client-secret string Google Application Client Secret
--gcs-service-account-file string Service Account Credentials JSON file path --gcs-location string Location for the newly created buckets.
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage. --gcs-object-acl string Access Control List for new objects.
--http-url string URL of http host to connect to --gcs-project-number string Project number.
--hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) --gcs-service-account-file string Service Account Credentials JSON file path
--hubic-client-id string Hubic Client Id --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--hubic-client-secret string Hubic Client Secret --http-url string URL of http host to connect to
--ignore-case Ignore case in filters (case insensitive) --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--ignore-checksum Skip post copy check of checksums. --hubic-client-id string Hubic Client Id
--ignore-errors delete even if there are I/O errors --hubic-client-secret string Hubic Client Secret
--ignore-existing Skip all files that exist on destination --hubic-no-chunk Don't chunk files during streaming upload.
--ignore-size Ignore size when skipping use mod-time or checksum. --ignore-case Ignore case in filters (case insensitive)
-I, --ignore-times Don't skip files that match size and time - transfer all files --ignore-checksum Skip post copy check of checksums.
--immutable Do not modify files. Fail if existing files have been modified. --ignore-errors delete even if there are I/O errors
--include stringArray Include files matching pattern --ignore-existing Skip all files that exist on destination
--include-from stringArray Read include patterns from file --ignore-size Ignore size when skipping use mod-time or checksum.
--jottacloud-hard-delete Delete files permanently rather than putting them into the trash. -I, --ignore-times Don't skip files that match size and time - transfer all files
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M) --immutable Do not modify files. Fail if existing files have been modified.
--jottacloud-mountpoint string The mountpoint to use. --include stringArray Include files matching pattern
--jottacloud-pass string Password. --include-from stringArray Read include patterns from file
--jottacloud-unlink Remove existing public link to file/folder with link command rather than creating. --jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
--jottacloud-user string User Name --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--local-no-check-updated Don't check to see if the files change during upload --jottacloud-mountpoint string The mountpoint to use.
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated) --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
--local-nounc string Disable UNC (long path names) conversion on Windows --jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fail's. (default 10M)
--log-file string Log everything to this file --jottacloud-user string User Name:
--log-format string Comma separated list of log format options (default "date,time") -l, --links Translate symlinks to/from regular files with a '.rclonelink' extension
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") --local-no-check-updated Don't check to see if the files change during upload
--low-level-retries int Number of low level retries to do. (default 10) --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) --local-nounc string Disable UNC (long path names) conversion on Windows
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000) --log-file string Log everything to this file
--max-delete int When synchronizing, limit the number of deletes (default -1) --log-format string Comma separated list of log format options (default "date,time")
--max-depth int If set limits the recursion depth to this. (default -1) --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) --low-level-retries int Number of low level retries to do. (default 10)
--max-transfer int Maximum size of data to transfer. (default off) --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--mega-debug Output more debug from Mega. --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--mega-hard-delete Delete files permanently rather than putting them into the trash. --max-delete int When synchronizing, limit the number of deletes (default -1)
--mega-pass string Password. --max-depth int If set limits the recursion depth to this. (default -1)
--mega-user string User name --max-size SizeSuffix Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--memprofile string Write memory profile to file --max-transfer SizeSuffix Maximum size of data to transfer. (default off)
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) --mega-debug Output more debug from Mega.
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) --mega-hard-delete Delete files permanently rather than putting them into the trash.
--modify-window duration Max time diff to be considered the same (default 1ns) --mega-pass string Password.
--no-check-certificate Do not verify the server SSL certificate. Insecure. --mega-user string User name
--no-gzip-encoding Don't set Accept-Encoding: gzip. --memprofile string Write memory profile to file
--no-traverse Obsolete - does nothing. --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--no-update-modtime Don't update destination mod-time if files identical. --min-size SizeSuffix Only transfer files bigger than this in k or suffix b|k|M|G (default off)
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only). --modify-window duration Max time diff to be considered the same (default 1ns)
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M) --no-check-certificate Do not verify the server SSL certificate. Insecure.
--onedrive-client-id string Microsoft App Client Id --no-gzip-encoding Don't set Accept-Encoding: gzip.
--onedrive-client-secret string Microsoft App Client Secret --no-traverse Don't traverse destination file system on copy.
--onedrive-drive-id string The ID of the drive to use --no-update-modtime Don't update destination mod-time if files identical.
--onedrive-drive-type string The type of the drive ( personal | business | documentLibrary ) -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-expose-onenote-files Set to make OneNote files show up in directory listings. --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--opendrive-password string Password. --onedrive-client-id string Microsoft App Client Id
--opendrive-username string Username --onedrive-client-secret string Microsoft App Client Secret
--pcloud-client-id string Pcloud App Client Id --onedrive-drive-id string The ID of the drive to use
--pcloud-client-secret string Pcloud App Client Secret --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
-P, --progress Show progress during transfer. --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
--qingstor-access-key-id string QingStor Access Key ID --opendrive-password string Password.
--qingstor-connection-retries int Number of connection retries. (default 3) --opendrive-username string Username
--qingstor-endpoint string Enter a endpoint URL to connection QingStor API. --pcloud-client-id string Pcloud App Client Id
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank. --pcloud-client-secret string Pcloud App Client Secret
--qingstor-secret-access-key string QingStor Secret Access Key (password) -P, --progress Show progress during transfer.
--qingstor-zone string Zone to connect to. --qingstor-access-key-id string QingStor Access Key ID
-q, --quiet Print as little stuff as possible --qingstor-chunk-size SizeSuffix Chunk size to use for uploading. (default 4M)
--rc Enable the remote control server. --qingstor-connection-retries int Number of connection retries. (default 3)
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") --qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate) --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
--rc-client-ca string Client certificate authority to verify clients with --qingstor-secret-access-key string QingStor Secret Access Key (password)
--rc-files string Path to local files to serve on the HTTP server. --qingstor-upload-concurrency int Concurrency for multipart uploads. (default 1)
--rc-htpasswd string htpasswd file - if not provided no authentication is done --qingstor-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M)
--rc-key string SSL PEM Private key --qingstor-zone string Zone to connect to.
--rc-max-header-bytes int Maximum size of request header (default 4096) -q, --quiet Print as little stuff as possible
--rc-no-auth Don't require auth for certain methods. --rc Enable the remote control server.
--rc-pass string Password for authentication. --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
--rc-realm string realm for authentication (default "rclone") --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--rc-serve Enable the serving of remote objects. --rc-client-ca string Client certificate authority to verify clients with
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) --rc-files string Path to local files to serve on the HTTP server.
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) --rc-htpasswd string htpasswd file - if not provided no authentication is done
--rc-user string User name for authentication. --rc-key string SSL PEM Private key
--retries int Retry operations this many times if they fail (default 3) --rc-max-header-bytes int Maximum size of request header (default 4096)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) --rc-no-auth Don't require auth for certain methods.
--s3-access-key-id string AWS Access Key ID. --rc-pass string Password for authentication.
--s3-acl string Canned ACL used when creating buckets and storing or copying objects. --rc-realm string realm for authentication (default "rclone")
--s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M) --rc-serve Enable the serving of remote objects.
--s3-disable-checksum Don't store MD5 checksum with object metadata --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--s3-endpoint string Endpoint for S3 API. --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). --rc-user string User name for authentication.
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true) --retries int Retry operations this many times if they fail (default 3)
--s3-location-constraint string Location constraint - must be set to match the Region. --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--s3-provider string Choose your S3 provider. --s3-access-key-id string AWS Access Key ID.
--s3-region string Region to connect to. --s3-acl string Canned ACL used when creating buckets and storing or copying objects.
--s3-secret-access-key string AWS Secret Access Key (password) --s3-bucket-acl string Canned ACL used when creating buckets.
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
--s3-session-token string An AWS session token --s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. --s3-endpoint string Endpoint for S3 API.
--s3-storage-class string The storage class to use when storing new objects in S3. --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2) --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--s3-v2-auth If true use v2 authentication. --s3-location-constraint string Location constraint - must be set to match the Region.
--sftp-ask-password Allow asking for SFTP password when needed. --s3-provider string Choose your S3 provider.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. --s3-region string Region to connect to.
--sftp-host string SSH host to connect to --s3-secret-access-key string AWS Secret Access Key (password)
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent. --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--sftp-pass string SSH password, leave blank to use ssh-agent. --s3-session-token string An AWS session token
--sftp-path-override string Override path used by SSH connection. --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
      --s3-storage-class string                   The storage class to use when storing new objects in S3.
      --s3-upload-concurrency int                 Concurrency for multipart uploads. (default 4)
      --s3-upload-cutoff SizeSuffix               Cutoff for switching to chunked upload (default 200M)
      --s3-v2-auth                                If true use v2 authentication.
      --sftp-ask-password                         Allow asking for SFTP password when needed.
      --sftp-disable-hashcheck                    Disable the execution of SSH commands to determine if remote file hashing is available.
      --sftp-host string                          SSH host to connect to
      --sftp-key-file string                      Path to PEM-encoded private key file, leave blank or set key-use-agent to use ssh-agent.
      --sftp-key-file-pass string                 The passphrase to decrypt the PEM-encoded private key file.
      --sftp-key-use-agent                        When set forces the usage of the ssh-agent.
      --sftp-pass string                          SSH password, leave blank to use ssh-agent.
      --sftp-path-override string                 Override path used by SSH connection.
      --sftp-port string                          SSH port, leave blank to use default (22)
      --sftp-set-modtime                          Set the modified time on the remote if set. (default true)
      --sftp-use-insecure-cipher                  Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
      --sftp-user string                          SSH username, leave blank for current username, ncw
      --size-only                                 Skip based on size only, not mod-time or checksum
      --skip-links                                Don't warn about skipped symlinks.
      --stats duration                            Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
      --stats-file-name-length int                Max file name length in stats. 0 for no limit (default 45)
      --stats-log-level string                    Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
      --stats-one-line                            Make the stats fit on one line.
      --stats-unit string                         Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
      --streaming-upload-cutoff SizeSuffix        Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
      --suffix string                             Suffix for use with --backup-dir.
      --swift-application-credential-id string    Application Credential ID (OS_APPLICATION_CREDENTIAL_ID)
      --swift-application-credential-name string  Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME)
      --swift-application-credential-secret string  Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET)
      --swift-auth string                         Authentication URL for server (OS_AUTH_URL).
      --swift-auth-token string                   Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
      --swift-auth-version int                    AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
      --swift-chunk-size SizeSuffix               Above this size files will be chunked into a _segments container. (default 5G)
      --swift-domain string                       User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
      --swift-endpoint-type string                Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
      --swift-env-auth                            Get swift credentials from environment variables in standard OpenStack form.
      --swift-key string                          API key or password (OS_PASSWORD).
      --swift-no-chunk                            Don't chunk files during streaming upload.
      --swift-region string                       Region name - optional (OS_REGION_NAME)
      --swift-storage-policy string               The storage policy to use when creating a new container
      --swift-storage-url string                  Storage URL - optional (OS_STORAGE_URL)
      --swift-tenant string                       Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
      --swift-tenant-domain string                Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
      --swift-tenant-id string                    Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
      --swift-user string                         User name to log in (OS_USERNAME).
      --swift-user-id string                      User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
      --syslog                                    Use Syslog for logging
      --syslog-facility string                    Facility for syslog, eg KERN,USER,... (default "DAEMON")
      --timeout duration                          IO idle timeout (default 5m0s)
      --tpslimit float                            Limit HTTP transactions per second to this.
      --tpslimit-burst int                        Max burst of transactions for --tpslimit. (default 1)
      --track-renames                             When synchronizing, track file renames and do a server side move if possible
      --transfers int                             Number of file transfers to run in parallel. (default 4)
      --union-remotes string                      List of space separated remotes.
  -u, --update                                    Skip files that are newer on the destination.
      --use-cookies                               Enable session cookiejar.
      --use-mmap                                  Use mmap allocator (see docs).
      --use-server-modtime                        Use server modified time instead of object metadata
      --user-agent string                         Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46")
  -v, --verbose count                             Print lots more stuff (repeat for more)
      --webdav-bearer-token string                Bearer token instead of user/pass (eg a Macaroon)
      --webdav-pass string                        Password.
      --webdav-url string                         URL of http host to connect to
      --webdav-user string                        User name
      --webdav-vendor string                      Name of the Webdav site/service/software you are using
      --yandex-client-id string                   Yandex Client Id
      --yandex-client-secret string               Yandex Client Secret
      --yandex-unlink                             Remove existing public link to file/folder with link command rather than creating.
```

### SEE ALSO

* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.

###### Auto generated by spf13/cobra on 9-Feb-2019


@@ -1,5 +1,5 @@
---
date: 2019-02-09T10:42:18Z
title: "rclone hashsum"
slug: rclone_hashsum
url: /commands/rclone_hashsum/
@@ -42,285 +42,303 @@ rclone hashsum <hash> remote:path [flags]

### Options inherited from parent commands

```
      --acd-auth-url string                       Auth server URL.
      --acd-client-id string                      Amazon Application Client ID.
      --acd-client-secret string                  Amazon Application Client Secret.
      --acd-templink-threshold SizeSuffix         Files >= this size will be downloaded via their tempLink. (default 9G)
      --acd-token-url string                      Token server url.
      --acd-upload-wait-per-gb Duration           Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
      --alias-remote string                       Remote or path to alias.
      --ask-password                              Allow prompt for password for encrypted configuration. (default true)
      --auto-confirm                              If enabled, do not request console confirmation.
      --azureblob-access-tier string              Access tier of blob: hot, cool or archive.
      --azureblob-account string                  Storage Account Name (leave blank to use connection string or SAS URL)
      --azureblob-chunk-size SizeSuffix           Upload chunk size (<= 100MB). (default 4M)
      --azureblob-endpoint string                 Endpoint for the service
      --azureblob-key string                      Storage Account Key (leave blank to use connection string or SAS URL)
      --azureblob-list-chunk int                  Size of blob list. (default 5000)
      --azureblob-sas-url string                  SAS URL for container level access only
      --azureblob-upload-cutoff SizeSuffix        Cutoff for switching to chunked upload (<= 256MB). (default 256M)
      --b2-account string                         Account ID or Application Key ID
      --b2-chunk-size SizeSuffix                  Upload chunk size. Must fit in memory. (default 96M)
      --b2-disable-checksum                       Disable checksums for large (> upload cutoff) files
      --b2-endpoint string                        Endpoint for the service.
      --b2-hard-delete                            Permanently delete files on remote removal, otherwise hide files.
      --b2-key string                             Application Key
      --b2-test-mode string                       A flag string for X-Bz-Test-Mode header for debugging.
      --b2-upload-cutoff SizeSuffix               Cutoff for switching to chunked upload. (default 200M)
      --b2-versions                               Include old versions in directory listings.
      --backup-dir string                         Make backups into hierarchy based in DIR.
      --bind string                               Local address to bind to for outgoing connections, IPv4, IPv6 or name.
      --box-client-id string                      Box App Client Id.
      --box-client-secret string                  Box App Client Secret
      --box-commit-retries int                    Max number of times to try committing a multipart file. (default 100)
      --box-upload-cutoff SizeSuffix              Cutoff for switching to multipart upload (>= 50MB). (default 50M)
      --buffer-size SizeSuffix                    In memory buffer size when reading files for each --transfer. (default 16M)
      --bwlimit BwTimetable                       Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
      --cache-chunk-clean-interval Duration       How often should the cache perform cleanups of the chunk storage. (default 1m0s)
      --cache-chunk-no-memory                     Disable the in-memory cache for storing chunks during streaming.
      --cache-chunk-path string                   Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend")
      --cache-chunk-size SizeSuffix               The size of a chunk (partial file data). (default 5M)
      --cache-chunk-total-size SizeSuffix         The total size that the chunks can take up on the local disk. (default 10G)
      --cache-db-path string                      Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend")
      --cache-db-purge                            Clear all the cached data for this remote on start.
      --cache-db-wait-time Duration               How long to wait for the DB to be available - 0 is unlimited (default 1s)
      --cache-dir string                          Directory rclone will use for caching. (default "$HOME/.cache/rclone")
      --cache-info-age Duration                   How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
      --cache-plex-insecure string                Skip all certificate verifications when connecting to the Plex server
      --cache-plex-password string                The password of the Plex user
      --cache-plex-url string                     The URL of the Plex server
      --cache-plex-username string                The username of the Plex user
      --cache-read-retries int                    How many times to retry a read from a cache storage. (default 10)
      --cache-remote string                       Remote to cache.
      --cache-rps int                             Limits the number of requests per second to the source FS (-1 to disable) (default -1)
      --cache-tmp-upload-path string              Directory to keep temporary files until they are uploaded.
      --cache-tmp-wait-time Duration              How long should files be stored in local cache before being uploaded (default 15s)
      --cache-workers int                         How many workers should run in parallel to download chunks. (default 4)
      --cache-writes                              Cache file data on writes through the FS
      --checkers int                              Number of checkers to run in parallel. (default 8)
  -c, --checksum                                  Skip based on checksum (if available) & size, not mod-time & size
      --config string                             Config file. (default "/home/ncw/.rclone.conf")
      --contimeout duration                       Connect timeout (default 1m0s)
  -L, --copy-links                                Follow symlinks and copy the pointed to item.
      --cpuprofile string                         Write cpu profile to file
      --crypt-directory-name-encryption           Option to either encrypt directory names or leave them intact. (default true)
      --crypt-filename-encryption string          How to encrypt the filenames. (default "standard")
      --crypt-password string                     Password or pass phrase for encryption.
      --crypt-password2 string                    Password or pass phrase for salt. Optional but recommended.
      --crypt-remote string                       Remote to encrypt/decrypt.
      --crypt-show-mapping                        For all files listed show how the names encrypt.
      --delete-after                              When synchronizing, delete files on destination after transferring (default)
      --delete-before                             When synchronizing, delete files on destination before transferring
      --delete-during                             When synchronizing, delete files during transfer
      --delete-excluded                           Delete files on dest excluded from sync
      --disable string                            Disable a comma separated list of features. Use help to see a list.
      --drive-acknowledge-abuse                   Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
      --drive-allow-import-name-change            Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
      --drive-alternate-export                    Use alternate export URLs for google documents export.,
      --drive-auth-owner-only                     Only consider files owned by the authenticated user.
      --drive-chunk-size SizeSuffix               Upload chunk size. Must a power of 2 >= 256k. (default 8M)
      --drive-client-id string                    Google Application Client Id
      --drive-client-secret string                Google Application Client Secret
      --drive-export-formats string               Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
      --drive-formats string                      Deprecated: see export_formats
      --drive-impersonate string                  Impersonate this user when using a service account.
      --drive-import-formats string               Comma separated list of preferred formats for uploading Google docs.
      --drive-keep-revision-forever               Keep new head revision of each file forever.
      --drive-list-chunk int                      Size of listing chunk 100-1000. 0 to disable. (default 1000)
      --drive-pacer-burst int                     Number of API calls to allow without sleeping. (default 100)
      --drive-pacer-min-sleep Duration            Minimum time to sleep between API calls. (default 100ms)
      --drive-root-folder-id string               ID of the root folder
      --drive-scope string                        Scope that rclone should use when requesting access from drive.
      --drive-service-account-credentials string  Service Account Credentials JSON blob
      --drive-service-account-file string         Service Account Credentials JSON file path
      --drive-shared-with-me                      Only show files that are shared with me.
      --drive-skip-gdocs                          Skip google documents in all listings.
      --drive-team-drive string                   ID of the Team Drive
      --drive-trashed-only                        Only show files that are in the trash.
      --drive-upload-cutoff SizeSuffix            Cutoff for switching to chunked upload (default 8M)
      --drive-use-created-date                    Use file created date instead of modified date.,
      --drive-use-trash                           Send files to the trash instead of deleting permanently. (default true)
      --drive-v2-download-min-size SizeSuffix     If Object's are greater, use drive v2 API to download. (default off)
      --dropbox-chunk-size SizeSuffix             Upload chunk size. (< 150M). (default 48M)
      --dropbox-client-id string                  Dropbox App Client Id
      --dropbox-client-secret string              Dropbox App Client Secret
      --dropbox-impersonate string                Impersonate this user when using a business account.
  -n, --dry-run                                   Do a trial run with no permanent changes
      --dump DumpFlags                            List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
      --dump-bodies                               Dump HTTP headers and bodies - may contain sensitive info
      --dump-headers                              Dump HTTP bodies - may contain sensitive info
      --exclude stringArray                       Exclude files matching pattern
      --exclude-from stringArray                  Read exclude patterns from file
      --exclude-if-present string                 Exclude directories if filename is present
      --fast-list                                 Use recursive list if available. Uses more memory but fewer transactions.
      --files-from stringArray                    Read list of source-file names from file
  -f, --filter stringArray                        Add a file-filtering rule
      --filter-from stringArray                   Read filtering patterns from a file
      --ftp-host string                           FTP host to connect to
      --ftp-pass string                           FTP password
      --ftp-port string                           FTP port, leave blank to use default (21)
      --ftp-user string                           FTP username, leave blank for current username, $USER
      --gcs-bucket-acl string                     Access Control List for new buckets.
      --gcs-client-id string                      Google Application Client Id
      --gcs-client-secret string                  Google Application Client Secret
      --gcs-location string                       Location for the newly created buckets.
      --gcs-object-acl string                     Access Control List for new objects.
      --gcs-project-number string                 Project number.
      --gcs-service-account-file string           Service Account Credentials JSON file path
      --gcs-storage-class string                  The storage class to use when storing objects in Google Cloud Storage.
      --http-url string                           URL of http host to connect to
      --hubic-chunk-size SizeSuffix               Above this size files will be chunked into a _segments container. (default 5G)
      --hubic-client-id string                    Hubic Client Id
      --hubic-client-secret string                Hubic Client Secret
      --hubic-no-chunk                            Don't chunk files during streaming upload.
      --ignore-case                               Ignore case in filters (case insensitive)
      --ignore-checksum                           Skip post copy check of checksums.
      --ignore-errors                             delete even if there are I/O errors
      --ignore-existing                           Skip all files that exist on destination
      --ignore-size                               Ignore size when skipping use mod-time or checksum.
  -I, --ignore-times                              Don't skip files that match size and time - transfer all files
      --immutable                                 Do not modify files. Fail if existing files have been modified.
      --include stringArray                       Include files matching pattern
      --include-from stringArray                  Read include patterns from file
      --jottacloud-hard-delete                    Delete files permanently rather than putting them into the trash.
      --jottacloud-md5-memory-limit SizeSuffix    Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
      --jottacloud-mountpoint string              The mountpoint to use.
      --jottacloud-unlink                         Remove existing public link to file/folder with link command rather than creating.
      --jottacloud-upload-resume-limit SizeSuffix  Files bigger than this can be resumed if the upload fail's. (default 10M)
      --jottacloud-user string                    User Name:
  -l, --links                                     Translate symlinks to/from regular files with a '.rclonelink' extension
      --local-no-check-updated                    Don't check to see if the files change during upload
      --local-no-unicode-normalization            Don't apply unicode normalization to paths and filenames (Deprecated)
      --local-nounc string                        Disable UNC (long path names) conversion on Windows
      --log-file string                           Log everything to this file
      --log-format string                         Comma separated list of log format options (default "date,time")
      --log-level string                          Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
      --low-level-retries int                     Number of low level retries to do. (default 10)
      --max-age Duration                          Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
      --max-backlog int                           Maximum number of objects in sync or check backlog. (default 10000)
      --max-delete int                            When synchronizing, limit the number of deletes (default -1)
      --max-depth int                             If set limits the recursion depth to this. (default -1)
      --max-size SizeSuffix                       Only transfer files smaller than this in k or suffix b|k|M|G (default off)
      --max-transfer SizeSuffix                   Maximum size of data to transfer. (default off)
      --mega-debug                                Output more debug from Mega.
      --mega-hard-delete                          Delete files permanently rather than putting them into the trash.
      --mega-pass string                          Password.
      --mega-user string                          User name
      --memprofile string                         Write memory profile to file
      --min-age Duration                          Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
      --min-size SizeSuffix                       Only transfer files bigger than this in k or suffix b|k|M|G (default off)
      --modify-window duration                    Max time diff to be considered the same (default 1ns)
      --no-check-certificate                      Do not verify the server SSL certificate. Insecure.
      --no-gzip-encoding                          Don't set Accept-Encoding: gzip.
      --no-traverse                               Don't traverse destination file system on copy.
      --no-update-modtime                         Don't update destination mod-time if files identical.
  -x, --one-file-system                           Don't cross filesystem boundaries (unix/macOS only).
      --onedrive-chunk-size SizeSuffix            Chunk size to upload files with - must be multiple of 320k. (default 10M)
      --onedrive-client-id string                 Microsoft App Client Id
      --onedrive-client-secret string             Microsoft App Client Secret
      --onedrive-drive-id string                  The ID of the drive to use
      --onedrive-drive-type string                The type of the drive ( personal | business | documentLibrary )
      --onedrive-expose-onenote-files             Set to make OneNote files show up in directory listings.
--qingstor-access-key-id string QingStor Access Key ID --opendrive-password string Password.
--qingstor-connection-retries int Number of connection retries. (default 3) --opendrive-username string Username
--qingstor-endpoint string Enter a endpoint URL to connection QingStor API. --pcloud-client-id string Pcloud App Client Id
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank. --pcloud-client-secret string Pcloud App Client Secret
--qingstor-secret-access-key string QingStor Secret Access Key (password) -P, --progress Show progress during transfer.
--qingstor-zone string Zone to connect to. --qingstor-access-key-id string QingStor Access Key ID
-q, --quiet Print as little stuff as possible --qingstor-chunk-size SizeSuffix Chunk size to use for uploading. (default 4M)
--rc Enable the remote control server. --qingstor-connection-retries int Number of connection retries. (default 3)
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") --qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate) --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
--rc-client-ca string Client certificate authority to verify clients with --qingstor-secret-access-key string QingStor Secret Access Key (password)
--rc-files string Path to local files to serve on the HTTP server. --qingstor-upload-concurrency int Concurrency for multipart uploads. (default 1)
--rc-htpasswd string htpasswd file - if not provided no authentication is done --qingstor-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M)
--rc-key string SSL PEM Private key --qingstor-zone string Zone to connect to.
--rc-max-header-bytes int Maximum size of request header (default 4096) -q, --quiet Print as little stuff as possible
--rc-no-auth Don't require auth for certain methods. --rc Enable the remote control server.
--rc-pass string Password for authentication. --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
--rc-realm string realm for authentication (default "rclone") --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--rc-serve Enable the serving of remote objects. --rc-client-ca string Client certificate authority to verify clients with
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) --rc-files string Path to local files to serve on the HTTP server.
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) --rc-htpasswd string htpasswd file - if not provided no authentication is done
--rc-user string User name for authentication. --rc-key string SSL PEM Private key
--retries int Retry operations this many times if they fail (default 3) --rc-max-header-bytes int Maximum size of request header (default 4096)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) --rc-no-auth Don't require auth for certain methods.
--s3-access-key-id string AWS Access Key ID. --rc-pass string Password for authentication.
--s3-acl string Canned ACL used when creating buckets and storing or copying objects. --rc-realm string realm for authentication (default "rclone")
--s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M) --rc-serve Enable the serving of remote objects.
--s3-disable-checksum Don't store MD5 checksum with object metadata --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--s3-endpoint string Endpoint for S3 API. --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). --rc-user string User name for authentication.
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true) --retries int Retry operations this many times if they fail (default 3)
--s3-location-constraint string Location constraint - must be set to match the Region. --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--s3-provider string Choose your S3 provider. --s3-access-key-id string AWS Access Key ID.
--s3-region string Region to connect to. --s3-acl string Canned ACL used when creating buckets and storing or copying objects.
--s3-secret-access-key string AWS Secret Access Key (password) --s3-bucket-acl string Canned ACL used when creating buckets.
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
--s3-session-token string An AWS session token --s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. --s3-endpoint string Endpoint for S3 API.
--s3-storage-class string The storage class to use when storing new objects in S3. --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2) --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--s3-v2-auth If true use v2 authentication. --s3-location-constraint string Location constraint - must be set to match the Region.
--sftp-ask-password Allow asking for SFTP password when needed. --s3-provider string Choose your S3 provider.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. --s3-region string Region to connect to.
--sftp-host string SSH host to connect to --s3-secret-access-key string AWS Secret Access Key (password)
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent. --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--sftp-pass string SSH password, leave blank to use ssh-agent. --s3-session-token string An AWS session token
--sftp-path-override string Override path used by SSH connection. --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--sftp-port string SSH port, leave blank to use default (22) --s3-storage-class string The storage class to use when storing new objects in S3.
--sftp-set-modtime Set the modified time on the remote if set. (default true) --s3-upload-concurrency int Concurrency for multipart uploads. (default 4)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker. --s3-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M)
--sftp-user string SSH username, leave blank for current username, ncw --s3-v2-auth If true use v2 authentication.
--size-only Skip based on size only, not mod-time or checksum --sftp-ask-password Allow asking for SFTP password when needed.
--skip-links Don't warn about skipped symlinks. --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) --sftp-host string SSH host to connect to
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40) --sftp-key-file string Path to PEM-encoded private key file, leave blank or set key-use-agent to use ssh-agent.
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") --sftp-key-file-pass string The passphrase to decrypt the PEM-encoded private key file.
--stats-one-line Make the stats fit on one line. --sftp-key-use-agent When set forces the usage of the ssh-agent.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") --sftp-pass string SSH password, leave blank to use ssh-agent.
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) --sftp-path-override string Override path used by SSH connection.
--suffix string Suffix for use with --backup-dir. --sftp-port string SSH port, leave blank to use default (22)
--swift-auth string Authentication URL for server (OS_AUTH_URL). --sftp-set-modtime Set the modified time on the remote if set. (default true)
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) --sftp-user string SSH username, leave blank for current username, ncw
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) --size-only Skip based on size only, not mod-time or checksum
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) --skip-links Don't warn about skipped symlinks.
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form. --stats-file-name-length int Max file name length in stats. 0 for no limit (default 45)
--swift-key string API key or password (OS_PASSWORD). --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--swift-region string Region name - optional (OS_REGION_NAME) --stats-one-line Make the stats fit on one line.
--swift-storage-policy string The storage policy to use when creating a new container --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL) --streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) --suffix string Suffix for use with --backup-dir.
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) --swift-application-credential-id string Application Credential ID (OS_APPLICATION_CREDENTIAL_ID)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) --swift-application-credential-name string Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME)
--swift-user string User name to log in (OS_USERNAME). --swift-application-credential-secret string Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET)
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). --swift-auth string Authentication URL for server (OS_AUTH_URL).
--syslog Use Syslog for logging --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--timeout duration IO idle timeout (default 5m0s) --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--tpslimit float Limit HTTP transactions per second to this. --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--track-renames When synchronizing, track file renames and do a server side move if possible --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--transfers int Number of file transfers to run in parallel. (default 4) --swift-key string API key or password (OS_PASSWORD).
--union-remotes string List of space separated remotes. --swift-no-chunk Don't chunk files during streaming upload.
-u, --update Skip files that are newer on the destination. --swift-region string Region name - optional (OS_REGION_NAME)
--use-server-modtime Use server modified time instead of object metadata --swift-storage-policy string The storage policy to use when creating a new container
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.45") --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
-v, --verbose count Print lots more stuff (repeat for more) --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon) --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--webdav-pass string Password. --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--webdav-url string URL of http host to connect to --swift-user string User name to log in (OS_USERNAME).
--webdav-user string User name --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--webdav-vendor string Name of the Webdav site/service/software you are using --syslog Use Syslog for logging
--yandex-client-id string Yandex Client Id --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--yandex-client-secret string Yandex Client Secret --timeout duration IO idle timeout (default 5m0s)
--yandex-unlink Remove existing public link to file/folder with link command rather than creating. --tpslimit float Limit HTTP transactions per second to this.
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
--union-remotes string List of space separated remotes.
-u, --update Skip files that are newer on the destination.
--use-cookies Enable session cookiejar.
--use-mmap Use mmap allocator (see docs).
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
--yandex-unlink Remove existing public link to file/folder with link command rather than creating.
```
### SEE ALSO
* [rclone](/commands/rclone/)	 - Show help for rclone commands, flags and backends.
###### Auto generated by spf13/cobra on 9-Feb-2019
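As a usage sketch combining a few of the global flags documented above — the source path and the remote name `remote:backup` are placeholders for illustration, not from the original text:

```shell
# Trial-run a copy of recent files with progress reporting.
# Every flag used here is in the flags list above; the paths are hypothetical.
rclone copy /path/to/src remote:backup \
  --max-age 7d \      # only transfer files younger than a week
  --min-size 100k \   # skip files smaller than 100 kBytes
  --transfers 8 \     # 8 parallel file transfers (default 4)
  --progress \        # show live transfer stats
  --dry-run           # report what would be copied without changing anything
```

Dropping `--dry-run` performs the copy for real; `--max-age`/`--min-size` act as filters, so files outside the window are simply not considered.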

---
date: 2019-02-09T10:42:18Z
title: "rclone link"
slug: rclone_link
url: /commands/rclone_link/
---
### Options inherited from parent commands
```
      --acd-auth-url string                 Auth server URL.
      --acd-client-id string                Amazon Application Client ID.
      --acd-client-secret string            Amazon Application Client Secret.
      --acd-templink-threshold SizeSuffix   Files >= this size will be downloaded via their tempLink. (default 9G)
      --acd-token-url string                Token server url.
      --acd-upload-wait-per-gb Duration     Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
      --alias-remote string                 Remote or path to alias.
      --ask-password                        Allow prompt for password for encrypted configuration. (default true)
      --auto-confirm                        If enabled, do not request console confirmation.
      --azureblob-access-tier string        Access tier of blob: hot, cool or archive.
      --azureblob-account string            Storage Account Name (leave blank to use connection string or SAS URL)
      --azureblob-chunk-size SizeSuffix     Upload chunk size (<= 100MB). (default 4M)
      --azureblob-endpoint string           Endpoint for the service
      --azureblob-key string                Storage Account Key (leave blank to use connection string or SAS URL)
      --azureblob-list-chunk int            Size of blob list. (default 5000)
      --azureblob-sas-url string            SAS URL for container level access only
      --azureblob-upload-cutoff SizeSuffix  Cutoff for switching to chunked upload (<= 256MB). (default 256M)
      --b2-account string                   Account ID or Application Key ID
      --b2-chunk-size SizeSuffix            Upload chunk size. Must fit in memory. (default 96M)
      --b2-disable-checksum                 Disable checksums for large (> upload cutoff) files
      --b2-endpoint string                  Endpoint for the service.
      --b2-hard-delete                      Permanently delete files on remote removal, otherwise hide files.
      --b2-key string                       Application Key
      --b2-test-mode string                 A flag string for X-Bz-Test-Mode header for debugging.
      --b2-upload-cutoff SizeSuffix         Cutoff for switching to chunked upload. (default 200M)
      --b2-versions                         Include old versions in directory listings.
      --backup-dir string                   Make backups into hierarchy based in DIR.
      --bind string                         Local address to bind to for outgoing connections, IPv4, IPv6 or name.
      --box-client-id string                Box App Client Id.
      --box-client-secret string            Box App Client Secret
      --box-commit-retries int              Max number of times to try committing a multipart file. (default 100)
      --box-upload-cutoff SizeSuffix        Cutoff for switching to multipart upload (>= 50MB). (default 50M)
      --buffer-size SizeSuffix              In memory buffer size when reading files for each --transfer. (default 16M)
      --bwlimit BwTimetable                 Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
      --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
      --cache-chunk-no-memory               Disable the in-memory cache for storing chunks during streaming.
      --cache-chunk-path string             Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend")
      --cache-chunk-size SizeSuffix         The size of a chunk (partial file data). (default 5M)
      --cache-chunk-total-size SizeSuffix   The total size that the chunks can take up on the local disk. (default 10G)
      --cache-db-path string                Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend")
      --cache-db-purge                      Clear all the cached data for this remote on start.
      --cache-db-wait-time Duration         How long to wait for the DB to be available - 0 is unlimited (default 1s)
      --cache-dir string                    Directory rclone will use for caching. (default "$HOME/.cache/rclone")
      --cache-info-age Duration             How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
      --cache-plex-insecure string          Skip all certificate verifications when connecting to the Plex server
      --cache-plex-password string          The password of the Plex user
      --cache-plex-url string               The URL of the Plex server
      --cache-plex-username string          The username of the Plex user
      --cache-read-retries int              How many times to retry a read from a cache storage. (default 10)
      --cache-remote string                 Remote to cache.
      --cache-rps int                       Limits the number of requests per second to the source FS (-1 to disable) (default -1)
      --cache-tmp-upload-path string        Directory to keep temporary files until they are uploaded.
      --cache-tmp-wait-time Duration        How long should files be stored in local cache before being uploaded (default 15s)
      --cache-workers int                   How many workers should run in parallel to download chunks. (default 4)
      --cache-writes                        Cache file data on writes through the FS
      --checkers int                        Number of checkers to run in parallel. (default 8)
  -c, --checksum                            Skip based on checksum (if available) & size, not mod-time & size
      --config string                       Config file. (default "/home/ncw/.rclone.conf")
      --contimeout duration                 Connect timeout (default 1m0s)
  -L, --copy-links                          Follow symlinks and copy the pointed to item.
      --cpuprofile string                   Write cpu profile to file
      --crypt-directory-name-encryption     Option to either encrypt directory names or leave them intact. (default true)
      --crypt-filename-encryption string    How to encrypt the filenames. (default "standard")
      --crypt-password string               Password or pass phrase for encryption.
      --crypt-password2 string              Password or pass phrase for salt. Optional but recommended.
      --crypt-remote string                 Remote to encrypt/decrypt.
      --crypt-show-mapping                  For all files listed show how the names encrypt.
      --delete-after                        When synchronizing, delete files on destination after transferring (default)
      --delete-before                       When synchronizing, delete files on destination before transferring
      --delete-during                       When synchronizing, delete files during transfer
      --delete-excluded                     Delete files on dest excluded from sync
      --disable string                      Disable a comma separated list of features. Use help to see a list.
      --drive-acknowledge-abuse             Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
      --drive-allow-import-name-change      Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
      --drive-alternate-export              Use alternate export URLs for google documents export.
      --drive-auth-owner-only               Only consider files owned by the authenticated user.
      --drive-chunk-size SizeSuffix         Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
      --drive-client-id string              Google Application Client Id
      --drive-client-secret string          Google Application Client Secret
      --drive-export-formats string         Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
      --drive-formats string                Deprecated: see export_formats
      --drive-impersonate string            Impersonate this user when using a service account.
      --drive-import-formats string         Comma separated list of preferred formats for uploading Google docs.
      --drive-keep-revision-forever         Keep new head revision of each file forever.
      --drive-list-chunk int                Size of listing chunk 100-1000. 0 to disable. (default 1000)
      --drive-pacer-burst int               Number of API calls to allow without sleeping. (default 100)
      --drive-pacer-min-sleep Duration      Minimum time to sleep between API calls. (default 100ms)
      --drive-root-folder-id string         ID of the root folder
      --drive-scope string                  Scope that rclone should use when requesting access from drive.
      --drive-service-account-credentials string   Service Account Credentials JSON blob
      --drive-service-account-file string   Service Account Credentials JSON file path
      --drive-shared-with-me                Only show files that are shared with me.
      --drive-skip-gdocs                    Skip google documents in all listings.
      --drive-team-drive string             ID of the Team Drive
      --drive-trashed-only                  Only show files that are in the trash.
      --drive-upload-cutoff SizeSuffix      Cutoff for switching to chunked upload (default 8M)
      --drive-use-created-date              Use file created date instead of modified date.
      --drive-use-trash                     Send files to the trash instead of deleting permanently. (default true)
      --drive-v2-download-min-size SizeSuffix   If Objects are greater, use drive v2 API to download. (default off)
      --dropbox-chunk-size SizeSuffix       Upload chunk size. (< 150M). (default 48M)
      --dropbox-client-id string            Dropbox App Client Id
      --dropbox-client-secret string        Dropbox App Client Secret
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info --dropbox-impersonate string Impersonate this user when using a business account.
--dump-headers Dump HTTP bodies - may contain sensitive info -n, --dry-run Do a trial run with no permanent changes
--exclude stringArray Exclude files matching pattern --dump DumpFlags List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--exclude-from stringArray Read exclude patterns from file --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--exclude-if-present string Exclude directories if filename is present --dump-headers Dump HTTP bodies - may contain sensitive info
--fast-list Use recursive list if available. Uses more memory but fewer transactions. --exclude stringArray Exclude files matching pattern
--files-from stringArray Read list of source-file names from file --exclude-from stringArray Read exclude patterns from file
-f, --filter stringArray Add a file-filtering rule --exclude-if-present string Exclude directories if filename is present
--filter-from stringArray Read filtering patterns from a file --fast-list Use recursive list if available. Uses more memory but fewer transactions.
--ftp-host string FTP host to connect to --files-from stringArray Read list of source-file names from file
--ftp-pass string FTP password -f, --filter stringArray Add a file-filtering rule
--ftp-port string FTP port, leave blank to use default (21) --filter-from stringArray Read filtering patterns from a file
--ftp-user string FTP username, leave blank for current username, $USER --ftp-host string FTP host to connect to
--gcs-bucket-acl string Access Control List for new buckets. --ftp-pass string FTP password
--gcs-client-id string Google Application Client Id --ftp-port string FTP port, leave blank to use default (21)
--gcs-client-secret string Google Application Client Secret --ftp-user string FTP username, leave blank for current username, $USER
--gcs-location string Location for the newly created buckets. --gcs-bucket-acl string Access Control List for new buckets.
--gcs-object-acl string Access Control List for new objects. --gcs-client-id string Google Application Client Id
--gcs-project-number string Project number. --gcs-client-secret string Google Application Client Secret
--gcs-service-account-file string Service Account Credentials JSON file path --gcs-location string Location for the newly created buckets.
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage. --gcs-object-acl string Access Control List for new objects.
--http-url string URL of http host to connect to --gcs-project-number string Project number.
--hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) --gcs-service-account-file string Service Account Credentials JSON file path
--hubic-client-id string Hubic Client Id --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--hubic-client-secret string Hubic Client Secret --http-url string URL of http host to connect to
--ignore-case Ignore case in filters (case insensitive) --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--ignore-checksum Skip post copy check of checksums. --hubic-client-id string Hubic Client Id
--ignore-errors delete even if there are I/O errors --hubic-client-secret string Hubic Client Secret
--ignore-existing Skip all files that exist on destination --hubic-no-chunk Don't chunk files during streaming upload.
--ignore-size Ignore size when skipping use mod-time or checksum. --ignore-case Ignore case in filters (case insensitive)
-I, --ignore-times Don't skip files that match size and time - transfer all files --ignore-checksum Skip post copy check of checksums.
--immutable Do not modify files. Fail if existing files have been modified. --ignore-errors delete even if there are I/O errors
--include stringArray Include files matching pattern --ignore-existing Skip all files that exist on destination
--include-from stringArray Read include patterns from file --ignore-size Ignore size when skipping use mod-time or checksum.
--jottacloud-hard-delete Delete files permanently rather than putting them into the trash. -I, --ignore-times Don't skip files that match size and time - transfer all files
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M) --immutable Do not modify files. Fail if existing files have been modified.
--jottacloud-mountpoint string The mountpoint to use. --include stringArray Include files matching pattern
--jottacloud-pass string Password. --include-from stringArray Read include patterns from file
--jottacloud-unlink Remove existing public link to file/folder with link command rather than creating. --jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
--jottacloud-user string User Name --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--local-no-check-updated Don't check to see if the files change during upload --jottacloud-mountpoint string The mountpoint to use.
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated) --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
--local-nounc string Disable UNC (long path names) conversion on Windows --jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fail's. (default 10M)
--log-file string Log everything to this file --jottacloud-user string User Name:
-l, --links Translate symlinks to/from regular files with a '.rclonelink' extension
--local-no-check-updated Don't check to see if the files change during upload
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
--local-nounc string Disable UNC (long path names) conversion on Windows
--log-file string Log everything to this file
--log-format string Comma separated list of log format options (default "date,time")
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
--max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--max-delete int When synchronizing, limit the number of deletes (default -1)
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size SizeSuffix Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--max-transfer SizeSuffix Maximum size of data to transfer. (default off)
--mega-debug Output more debug from Mega.
--mega-hard-delete Delete files permanently rather than putting them into the trash.
--mega-pass string Password.
--mega-user string User name
--memprofile string Write memory profile to file
--min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size SizeSuffix Only transfer files bigger than this in k or suffix b|k|M|G (default off)
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Don't traverse destination file system on copy.
--no-update-modtime Don't update destination mod-time if files identical.
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--onedrive-client-id string Microsoft App Client Id
--onedrive-client-secret string Microsoft App Client Secret
--onedrive-drive-id string The ID of the drive to use
--onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
--onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
--opendrive-password string Password.
--opendrive-username string Username
--pcloud-client-id string Pcloud App Client Id
--pcloud-client-secret string Pcloud App Client Secret
-P, --progress Show progress during transfer.
--qingstor-access-key-id string QingStor Access Key ID
--qingstor-chunk-size SizeSuffix Chunk size to use for uploading. (default 4M)
--qingstor-connection-retries int Number of connection retries. (default 3)
--qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API.
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank.
--qingstor-secret-access-key string QingStor Secret Access Key (password)
--qingstor-upload-concurrency int Concurrency for multipart uploads. (default 1)
--qingstor-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M)
--qingstor-zone string Zone to connect to.
-q, --quiet Print as little stuff as possible
--rc Enable the remote control server.
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--rc-client-ca string Client certificate authority to verify clients with
--rc-files string Path to local files to serve on the HTTP server.
--rc-htpasswd string htpasswd file - if not provided no authentication is done
--rc-key string SSL PEM Private key
--rc-max-header-bytes int Maximum size of request header (default 4096)
--rc-no-auth Don't require auth for certain methods.
--rc-pass string Password for authentication.
--rc-realm string realm for authentication (default "rclone")
--rc-serve Enable the serving of remote objects.
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--s3-access-key-id string AWS Access Key ID.
--s3-acl string Canned ACL used when creating buckets and storing or copying objects.
--s3-bucket-acl string Canned ACL used when creating buckets.
--s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
--s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-endpoint string Endpoint for S3 API.
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--s3-location-constraint string Location constraint - must be set to match the Region.
--s3-provider string Choose your S3 provider.
--s3-region string Region to connect to.
--s3-secret-access-key string AWS Secret Access Key (password)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--s3-session-token string An AWS session token
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--s3-storage-class string The storage class to use when storing new objects in S3.
--s3-upload-concurrency int Concurrency for multipart uploads. (default 4)
--s3-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M)
--s3-v2-auth If true use v2 authentication.
--sftp-ask-password Allow asking for SFTP password when needed.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--sftp-host string SSH host to connect to
--sftp-key-file string Path to PEM-encoded private key file, leave blank or set key-use-agent to use ssh-agent.
--sftp-key-file-pass string The passphrase to decrypt the PEM-encoded private key file.
--sftp-key-use-agent When set forces the usage of the ssh-agent.
--sftp-pass string SSH password, leave blank to use ssh-agent.
--sftp-path-override string Override path used by SSH connection.
--sftp-port string SSH port, leave blank to use default (22)
--sftp-set-modtime Set the modified time on the remote if set. (default true)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--sftp-user string SSH username, leave blank for current username, ncw
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 45)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-one-line Make the stats fit on one line.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string Suffix for use with --backup-dir.
--swift-application-credential-id string Application Credential ID (OS_APPLICATION_CREDENTIAL_ID)
--swift-application-credential-name string Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME)
--swift-application-credential-secret string Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET)
--swift-auth string Authentication URL for server (OS_AUTH_URL).
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--swift-key string API key or password (OS_PASSWORD).
--swift-no-chunk Don't chunk files during streaming upload.
--swift-region string Region name - optional (OS_REGION_NAME)
--swift-storage-policy string The storage policy to use when creating a new container
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--swift-user string User name to log in (OS_USERNAME).
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
--tpslimit float Limit HTTP transactions per second to this.
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
--union-remotes string List of space separated remotes.
-u, --update Skip files that are newer on the destination.
--use-cookies Enable session cookiejar.
--use-mmap Use mmap allocator (see docs).
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
--yandex-unlink Remove existing public link to file/folder with link command rather than creating.
```
### SEE ALSO
* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
###### Auto generated by spf13/cobra on 9-Feb-2019
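The global flags in the list above combine freely on any transfer command. As an illustrative sketch only (the remote name `remote:` and both paths are placeholders, not taken from this manual), a bandwidth-limited trial sync might look like:

```
rclone sync /local/photos remote:photos \
    --dry-run \
    --bwlimit 1M \
    --transfers 8 \
    --exclude "*.tmp" \
    --log-level INFO
```

Dropping `--dry-run` performs the transfer for real; each of the other flags appears in the option list above with its type and default.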

View File

@ -1,5 +1,5 @@
---
date: 2019-02-09T10:42:18Z
title: "rclone listremotes"
slug: rclone_listremotes
url: /commands/rclone_listremotes/
@ -24,291 +24,309 @@ rclone listremotes [flags]
```
-h, --help help for listremotes
--long Show the type as well as names.
```
### Options inherited from parent commands
```
--acd-auth-url string Auth server URL.
--acd-client-id string Amazon Application Client ID.
--acd-client-secret string Amazon Application Client Secret.
--acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-token-url string Token server url.
--acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--alias-remote string Remote or path to alias.
--ask-password Allow prompt for password for encrypted configuration. (default true)
--auto-confirm If enabled, do not request console confirmation.
--azureblob-access-tier string Access tier of blob: hot, cool or archive.
--azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
--azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M)
--azureblob-endpoint string Endpoint for the service
--azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
--azureblob-list-chunk int Size of blob list. (default 5000)
--azureblob-sas-url string SAS URL for container level access only
--azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M)
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--b2-disable-checksum Disable checksums for large (> upload cutoff) files
--b2-endpoint string Endpoint for the service.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-key string Application Key
--b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
--b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M)
--b2-versions Include old versions in directory listings.
--backup-dir string Make backups into hierarchy based in DIR.
--bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
--box-client-id string Box App Client Id.
--box-client-secret string Box App Client Secret
--box-commit-retries int Max number of times to try committing a multipart file. (default 100)
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
--buffer-size SizeSuffix In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
--cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend") --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
--cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M) --cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend")
--cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G) --cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M)
--cache-db-path string Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend") --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G)
--cache-db-purge Clear all the cached data for this remote on start. --cache-db-path string Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend")
--cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s) --cache-db-purge Clear all the cached data for this remote on start.
--cache-dir string Directory rclone will use for caching. (default "$HOME/.cache/rclone") --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s) --cache-dir string Directory rclone will use for caching. (default "$HOME/.cache/rclone")
--cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
--cache-plex-password string The password of the Plex user --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server
--cache-plex-url string The URL of the Plex server --cache-plex-password string The password of the Plex user
--cache-plex-username string The username of the Plex user --cache-plex-url string The URL of the Plex server
--cache-read-retries int How many times to retry a read from a cache storage. (default 10) --cache-plex-username string The username of the Plex user
--cache-remote string Remote to cache. --cache-read-retries int How many times to retry a read from a cache storage. (default 10)
--cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1) --cache-remote string Remote to cache.
--cache-tmp-upload-path string Directory to keep temporary files until they are uploaded. --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1)
--cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s) --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded.
--cache-workers int How many workers should run in parallel to download chunks. (default 4) --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
--cache-writes Cache file data on writes through the FS --cache-workers int How many workers should run in parallel to download chunks. (default 4)
--checkers int Number of checkers to run in parallel. (default 8) --cache-writes Cache file data on writes through the FS
-c, --checksum Skip based on checksum & size, not mod-time & size --checkers int Number of checkers to run in parallel. (default 8)
--config string Config file. (default "/home/ncw/.rclone.conf") -c, --checksum Skip based on checksum (if available) & size, not mod-time & size
--contimeout duration Connect timeout (default 1m0s) --config string Config file. (default "/home/ncw/.rclone.conf")
-L, --copy-links Follow symlinks and copy the pointed to item. --contimeout duration Connect timeout (default 1m0s)
--cpuprofile string Write cpu profile to file -L, --copy-links Follow symlinks and copy the pointed to item.
--crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true) --cpuprofile string Write cpu profile to file
--crypt-filename-encryption string How to encrypt the filenames. (default "standard") --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--crypt-password string Password or pass phrase for encryption. --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--crypt-password2 string Password or pass phrase for salt. Optional but recommended. --crypt-password string Password or pass phrase for encryption.
--crypt-remote string Remote to encrypt/decrypt. --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
--crypt-show-mapping For all files listed show how the names encrypt. --crypt-remote string Remote to encrypt/decrypt.
--delete-after When synchronizing, delete files on destination after transferring (default) --crypt-show-mapping For all files listed show how the names encrypt.
--delete-before When synchronizing, delete files on destination before transferring --delete-after When synchronizing, delete files on destination after transferring (default)
--delete-during When synchronizing, delete files during transfer --delete-before When synchronizing, delete files on destination before transferring
--delete-excluded Delete files on dest excluded from sync --delete-during When synchronizing, delete files during transfer
--disable string Disable a comma separated list of features. Use help to see a list. --delete-excluded Delete files on dest excluded from sync
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded. --disable string Disable a comma separated list of features. Use help to see a list.
--drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time. --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--drive-alternate-export Use alternate export URLs for google documents export., --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
--drive-auth-owner-only Only consider files owned by the authenticated user. --drive-alternate-export Use alternate export URLs for google documents export.,
--drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M) --drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-client-id string Google Application Client Id --drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M)
--drive-client-secret string Google Application Client Secret --drive-client-id string Google Application Client Id
--drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") --drive-client-secret string Google Application Client Secret
--drive-formats string Deprecated: see export_formats --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-impersonate string Impersonate this user when using a service account. --drive-formats string Deprecated: see export_formats
--drive-import-formats string Comma separated list of preferred formats for uploading Google docs. --drive-impersonate string Impersonate this user when using a service account.
--drive-keep-revision-forever Keep new head revision of each file forever. --drive-import-formats string Comma separated list of preferred formats for uploading Google docs.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000) --drive-keep-revision-forever Keep new head revision of each file forever.
--drive-root-folder-id string ID of the root folder --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-scope string Scope that rclone should use when requesting access from drive. --drive-pacer-burst int Number of API calls to allow without sleeping. (default 100)
--drive-service-account-credentials string Service Account Credentials JSON blob --drive-pacer-min-sleep Duration Minimum time to sleep between API calls. (default 100ms)
--drive-service-account-file string Service Account Credentials JSON file path --drive-root-folder-id string ID of the root folder
--drive-shared-with-me Only show files that are shared with me. --drive-scope string Scope that rclone should use when requesting access from drive.
--drive-skip-gdocs Skip google documents in all listings. --drive-service-account-credentials string Service Account Credentials JSON blob
--drive-team-drive string ID of the Team Drive --drive-service-account-file string Service Account Credentials JSON file path
--drive-trashed-only Only show files that are in the trash. --drive-shared-with-me Only show files that are shared with me.
--drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M) --drive-skip-gdocs Skip google documents in all listings.
--drive-use-created-date Use file created date instead of modified date., --drive-team-drive string ID of the Team Drive
--drive-use-trash Send files to the trash instead of deleting permanently. (default true) --drive-trashed-only Only show files that are in the trash.
--drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off) --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M) --drive-use-created-date Use file created date instead of modified date.,
--dropbox-client-id string Dropbox App Client Id --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--dropbox-client-secret string Dropbox App Client Secret --drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off)
--dropbox-impersonate string Impersonate this user when using a business account. --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M)
-n, --dry-run Do a trial run with no permanent changes --dropbox-client-id string Dropbox App Client Id
--dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles --dropbox-client-secret string Dropbox App Client Secret
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info --dropbox-impersonate string Impersonate this user when using a business account.
--dump-headers Dump HTTP bodies - may contain sensitive info -n, --dry-run Do a trial run with no permanent changes
--exclude stringArray Exclude files matching pattern --dump DumpFlags List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--exclude-from stringArray Read exclude patterns from file --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--exclude-if-present string Exclude directories if filename is present --dump-headers Dump HTTP bodies - may contain sensitive info
--fast-list Use recursive list if available. Uses more memory but fewer transactions. --exclude stringArray Exclude files matching pattern
--files-from stringArray Read list of source-file names from file --exclude-from stringArray Read exclude patterns from file
-f, --filter stringArray Add a file-filtering rule --exclude-if-present string Exclude directories if filename is present
--filter-from stringArray Read filtering patterns from a file --fast-list Use recursive list if available. Uses more memory but fewer transactions.
--ftp-host string FTP host to connect to --files-from stringArray Read list of source-file names from file
--ftp-pass string FTP password -f, --filter stringArray Add a file-filtering rule
--ftp-port string FTP port, leave blank to use default (21) --filter-from stringArray Read filtering patterns from a file
--ftp-user string FTP username, leave blank for current username, $USER --ftp-host string FTP host to connect to
--gcs-bucket-acl string Access Control List for new buckets. --ftp-pass string FTP password
--gcs-client-id string Google Application Client Id --ftp-port string FTP port, leave blank to use default (21)
--gcs-client-secret string Google Application Client Secret --ftp-user string FTP username, leave blank for current username, $USER
--gcs-location string Location for the newly created buckets. --gcs-bucket-acl string Access Control List for new buckets.
--gcs-object-acl string Access Control List for new objects. --gcs-client-id string Google Application Client Id
--gcs-project-number string Project number. --gcs-client-secret string Google Application Client Secret
--gcs-service-account-file string Service Account Credentials JSON file path --gcs-location string Location for the newly created buckets.
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage. --gcs-object-acl string Access Control List for new objects.
--http-url string URL of http host to connect to --gcs-project-number string Project number.
--hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) --gcs-service-account-file string Service Account Credentials JSON file path
--hubic-client-id string Hubic Client Id --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--hubic-client-secret string Hubic Client Secret --http-url string URL of http host to connect to
--ignore-case Ignore case in filters (case insensitive) --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--ignore-checksum Skip post copy check of checksums. --hubic-client-id string Hubic Client Id
--ignore-errors delete even if there are I/O errors --hubic-client-secret string Hubic Client Secret
--ignore-existing Skip all files that exist on destination --hubic-no-chunk Don't chunk files during streaming upload.
--ignore-size Ignore size when skipping use mod-time or checksum. --ignore-case Ignore case in filters (case insensitive)
-I, --ignore-times Don't skip files that match size and time - transfer all files --ignore-checksum Skip post copy check of checksums.
--immutable Do not modify files. Fail if existing files have been modified. --ignore-errors delete even if there are I/O errors
--include stringArray Include files matching pattern --ignore-existing Skip all files that exist on destination
--include-from stringArray Read include patterns from file --ignore-size Ignore size when skipping use mod-time or checksum.
--jottacloud-hard-delete Delete files permanently rather than putting them into the trash. -I, --ignore-times Don't skip files that match size and time - transfer all files
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M) --immutable Do not modify files. Fail if existing files have been modified.
--jottacloud-mountpoint string The mountpoint to use. --include stringArray Include files matching pattern
--jottacloud-pass string Password. --include-from stringArray Read include patterns from file
--jottacloud-unlink Remove existing public link to file/folder with link command rather than creating. --jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
--jottacloud-user string User Name --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--local-no-check-updated Don't check to see if the files change during upload --jottacloud-mountpoint string The mountpoint to use.
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated) --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
--local-nounc string Disable UNC (long path names) conversion on Windows --jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fail's. (default 10M)
--log-file string Log everything to this file --jottacloud-user string User Name:
--log-format string Comma separated list of log format options (default "date,time") -l, --links Translate symlinks to/from regular files with a '.rclonelink' extension
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") --local-no-check-updated Don't check to see if the files change during upload
--low-level-retries int Number of low level retries to do. (default 10) --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) --local-nounc string Disable UNC (long path names) conversion on Windows
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000) --log-file string Log everything to this file
--max-delete int When synchronizing, limit the number of deletes (default -1) --log-format string Comma separated list of log format options (default "date,time")
--max-depth int If set limits the recursion depth to this. (default -1) --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) --low-level-retries int Number of low level retries to do. (default 10)
--max-transfer int Maximum size of data to transfer. (default off) --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--mega-debug Output more debug from Mega. --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--mega-hard-delete Delete files permanently rather than putting them into the trash. --max-delete int When synchronizing, limit the number of deletes (default -1)
--mega-pass string Password. --max-depth int If set limits the recursion depth to this. (default -1)
--mega-user string User name --max-size SizeSuffix Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--memprofile string Write memory profile to file --max-transfer SizeSuffix Maximum size of data to transfer. (default off)
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) --mega-debug Output more debug from Mega.
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) --mega-hard-delete Delete files permanently rather than putting them into the trash.
--modify-window duration Max time diff to be considered the same (default 1ns) --mega-pass string Password.
--no-check-certificate Do not verify the server SSL certificate. Insecure. --mega-user string User name
--no-gzip-encoding Don't set Accept-Encoding: gzip. --memprofile string Write memory profile to file
--no-traverse Obsolete - does nothing. --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--no-update-modtime Don't update destination mod-time if files identical. --min-size SizeSuffix Only transfer files bigger than this in k or suffix b|k|M|G (default off)
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only). --modify-window duration Max time diff to be considered the same (default 1ns)
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M) --no-check-certificate Do not verify the server SSL certificate. Insecure.
--onedrive-client-id string Microsoft App Client Id --no-gzip-encoding Don't set Accept-Encoding: gzip.
--onedrive-client-secret string Microsoft App Client Secret --no-traverse Don't traverse destination file system on copy.
--onedrive-drive-id string The ID of the drive to use --no-update-modtime Don't update destination mod-time if files identical.
--onedrive-drive-type string The type of the drive ( personal | business | documentLibrary ) -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-expose-onenote-files Set to make OneNote files show up in directory listings. --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--opendrive-password string Password. --onedrive-client-id string Microsoft App Client Id
--opendrive-username string Username --onedrive-client-secret string Microsoft App Client Secret
--pcloud-client-id string Pcloud App Client Id --onedrive-drive-id string The ID of the drive to use
--pcloud-client-secret string Pcloud App Client Secret --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
-P, --progress Show progress during transfer. --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
--qingstor-access-key-id string QingStor Access Key ID --opendrive-password string Password.
--qingstor-connection-retries int Number of connection retries. (default 3) --opendrive-username string Username
--qingstor-endpoint string Enter a endpoint URL to connection QingStor API. --pcloud-client-id string Pcloud App Client Id
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank. --pcloud-client-secret string Pcloud App Client Secret
--qingstor-secret-access-key string QingStor Secret Access Key (password) -P, --progress Show progress during transfer.
--qingstor-zone string Zone to connect to. --qingstor-access-key-id string QingStor Access Key ID
-q, --quiet Print as little stuff as possible --qingstor-chunk-size SizeSuffix Chunk size to use for uploading. (default 4M)
--rc Enable the remote control server. --qingstor-connection-retries int Number of connection retries. (default 3)
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") --qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate) --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
--rc-client-ca string Client certificate authority to verify clients with --qingstor-secret-access-key string QingStor Secret Access Key (password)
--rc-files string Path to local files to serve on the HTTP server. --qingstor-upload-concurrency int Concurrency for multipart uploads. (default 1)
--rc-htpasswd string htpasswd file - if not provided no authentication is done --qingstor-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M)
--rc-key string SSL PEM Private key --qingstor-zone string Zone to connect to.
--rc-max-header-bytes int Maximum size of request header (default 4096) -q, --quiet Print as little stuff as possible
--rc-no-auth Don't require auth for certain methods. --rc Enable the remote control server.
--rc-pass string Password for authentication. --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
--rc-realm string realm for authentication (default "rclone") --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--rc-serve Enable the serving of remote objects. --rc-client-ca string Client certificate authority to verify clients with
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) --rc-files string Path to local files to serve on the HTTP server.
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) --rc-htpasswd string htpasswd file - if not provided no authentication is done
--rc-user string User name for authentication. --rc-key string SSL PEM Private key
--retries int Retry operations this many times if they fail (default 3) --rc-max-header-bytes int Maximum size of request header (default 4096)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) --rc-no-auth Don't require auth for certain methods.
--s3-access-key-id string AWS Access Key ID. --rc-pass string Password for authentication.
--s3-acl string Canned ACL used when creating buckets and storing or copying objects. --rc-realm string realm for authentication (default "rclone")
--s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M) --rc-serve Enable the serving of remote objects.
--s3-disable-checksum Don't store MD5 checksum with object metadata --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--s3-endpoint string Endpoint for S3 API. --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). --rc-user string User name for authentication.
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true) --retries int Retry operations this many times if they fail (default 3)
--s3-location-constraint string Location constraint - must be set to match the Region. --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--s3-provider string Choose your S3 provider. --s3-access-key-id string AWS Access Key ID.
--s3-region string Region to connect to. --s3-acl string Canned ACL used when creating buckets and storing or copying objects.
--s3-secret-access-key string AWS Secret Access Key (password) --s3-bucket-acl string Canned ACL used when creating buckets.
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
--s3-session-token string An AWS session token --s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. --s3-endpoint string Endpoint for S3 API.
--s3-storage-class string The storage class to use when storing new objects in S3. --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2) --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--s3-v2-auth If true use v2 authentication. --s3-location-constraint string Location constraint - must be set to match the Region.
--sftp-ask-password Allow asking for SFTP password when needed. --s3-provider string Choose your S3 provider.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. --s3-region string Region to connect to.
--sftp-host string SSH host to connect to --s3-secret-access-key string AWS Secret Access Key (password)
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent. --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--sftp-pass string SSH password, leave blank to use ssh-agent. --s3-session-token string An AWS session token
--sftp-path-override string Override path used by SSH connection. --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--sftp-port string SSH port, leave blank to use default (22) --s3-storage-class string The storage class to use when storing new objects in S3.
--sftp-set-modtime Set the modified time on the remote if set. (default true) --s3-upload-concurrency int Concurrency for multipart uploads. (default 4)
      --s3-upload-cutoff SizeSuffix   Cutoff for switching to chunked upload (default 200M)
      --s3-v2-auth   If true use v2 authentication.
      --sftp-ask-password   Allow asking for SFTP password when needed.
      --sftp-disable-hashcheck   Disable the execution of SSH commands to determine if remote file hashing is available.
      --sftp-host string   SSH host to connect to
      --sftp-key-file string   Path to PEM-encoded private key file, leave blank or set key-use-agent to use ssh-agent.
      --sftp-key-file-pass string   The passphrase to decrypt the PEM-encoded private key file.
      --sftp-key-use-agent   When set forces the usage of the ssh-agent.
      --sftp-pass string   SSH password, leave blank to use ssh-agent.
      --sftp-path-override string   Override path used by SSH connection.
      --sftp-port string   SSH port, leave blank to use default (22)
      --sftp-set-modtime   Set the modified time on the remote if set. (default true)
      --sftp-use-insecure-cipher   Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
      --sftp-user string   SSH username, leave blank for current username, ncw
      --size-only   Skip based on size only, not mod-time or checksum
      --skip-links   Don't warn about skipped symlinks.
      --stats duration   Interval between printing stats, e.g. 500ms, 60s, 5m. (0 to disable) (default 1m0s)
      --stats-file-name-length int   Max file name length in stats. 0 for no limit (default 45)
      --stats-log-level string   Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
      --stats-one-line   Make the stats fit on one line.
      --stats-unit string   Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
      --streaming-upload-cutoff SizeSuffix   Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
      --suffix string   Suffix for use with --backup-dir.
      --swift-application-credential-id string   Application Credential ID (OS_APPLICATION_CREDENTIAL_ID)
      --swift-application-credential-name string   Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME)
      --swift-application-credential-secret string   Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET)
      --swift-auth string   Authentication URL for server (OS_AUTH_URL).
      --swift-auth-token string   Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
      --swift-auth-version int   AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
      --swift-chunk-size SizeSuffix   Above this size files will be chunked into a _segments container. (default 5G)
      --swift-domain string   User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
      --swift-endpoint-type string   Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
      --swift-env-auth   Get swift credentials from environment variables in standard OpenStack form.
      --swift-key string   API key or password (OS_PASSWORD).
      --swift-no-chunk   Don't chunk files during streaming upload.
      --swift-region string   Region name - optional (OS_REGION_NAME)
      --swift-storage-policy string   The storage policy to use when creating a new container
      --swift-storage-url string   Storage URL - optional (OS_STORAGE_URL)
      --swift-tenant string   Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
      --swift-tenant-domain string   Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
      --swift-tenant-id string   Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
      --swift-user string   User name to log in (OS_USERNAME).
      --swift-user-id string   User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
      --syslog   Use Syslog for logging
      --syslog-facility string   Facility for syslog, eg KERN,USER,... (default "DAEMON")
      --timeout duration   IO idle timeout (default 5m0s)
      --tpslimit float   Limit HTTP transactions per second to this.
      --tpslimit-burst int   Max burst of transactions for --tpslimit. (default 1)
      --track-renames   When synchronizing, track file renames and do a server side move if possible
      --transfers int   Number of file transfers to run in parallel. (default 4)
      --union-remotes string   List of space separated remotes.
  -u, --update   Skip files that are newer on the destination.
      --use-cookies   Enable session cookiejar.
      --use-mmap   Use mmap allocator (see docs).
      --use-server-modtime   Use server modified time instead of object metadata
      --user-agent string   Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46")
  -v, --verbose count   Print lots more stuff (repeat for more)
      --webdav-bearer-token string   Bearer token instead of user/pass (eg a Macaroon)
      --webdav-pass string   Password.
      --webdav-url string   URL of http host to connect to
      --webdav-user string   User name
      --webdav-vendor string   Name of the Webdav site/service/software you are using
      --yandex-client-id string   Yandex Client Id
      --yandex-client-secret string   Yandex Client Secret
      --yandex-unlink   Remove existing public link to file/folder with link command rather than creating.
```
### SEE ALSO
* [rclone](/commands/rclone/)	 - Show help for rclone commands, flags and backends.
###### Auto generated by spf13/cobra on 9-Feb-2019
View File
@ -1,5 +1,5 @@
---
date: 2019-02-09T10:42:18Z
title: "rclone ls"
slug: rclone_ls
url: /commands/rclone_ls/
@ -59,285 +59,303 @@ rclone ls remote:path [flags]
### Options inherited from parent commands
```
      --acd-auth-url string   Auth server URL.
      --acd-client-id string   Amazon Application Client ID.
      --acd-client-secret string   Amazon Application Client Secret.
      --acd-templink-threshold SizeSuffix   Files >= this size will be downloaded via their tempLink. (default 9G)
      --acd-token-url string   Token server url.
      --acd-upload-wait-per-gb Duration   Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
      --alias-remote string   Remote or path to alias.
      --ask-password   Allow prompt for password for encrypted configuration. (default true)
      --auto-confirm   If enabled, do not request console confirmation.
      --azureblob-access-tier string   Access tier of blob: hot, cool or archive.
      --azureblob-account string   Storage Account Name (leave blank to use connection string or SAS URL)
      --azureblob-chunk-size SizeSuffix   Upload chunk size (<= 100MB). (default 4M)
      --azureblob-endpoint string   Endpoint for the service
      --azureblob-key string   Storage Account Key (leave blank to use connection string or SAS URL)
      --azureblob-list-chunk int   Size of blob list. (default 5000)
      --azureblob-sas-url string   SAS URL for container level access only
      --azureblob-upload-cutoff SizeSuffix   Cutoff for switching to chunked upload (<= 256MB). (default 256M)
      --b2-account string   Account ID or Application Key ID
      --b2-chunk-size SizeSuffix   Upload chunk size. Must fit in memory. (default 96M)
      --b2-disable-checksum   Disable checksums for large (> upload cutoff) files
      --b2-endpoint string   Endpoint for the service.
      --b2-hard-delete   Permanently delete files on remote removal, otherwise hide files.
      --b2-key string   Application Key
      --b2-test-mode string   A flag string for X-Bz-Test-Mode header for debugging.
      --b2-upload-cutoff SizeSuffix   Cutoff for switching to chunked upload. (default 200M)
      --b2-versions   Include old versions in directory listings.
      --backup-dir string   Make backups into hierarchy based in DIR.
      --bind string   Local address to bind to for outgoing connections, IPv4, IPv6 or name.
      --box-client-id string   Box App Client Id.
      --box-client-secret string   Box App Client Secret
      --box-commit-retries int   Max number of times to try committing a multipart file. (default 100)
      --box-upload-cutoff SizeSuffix   Cutoff for switching to multipart upload (>= 50MB). (default 50M)
      --buffer-size SizeSuffix   In memory buffer size when reading files for each --transfer. (default 16M)
      --bwlimit BwTimetable   Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
      --cache-chunk-clean-interval Duration   How often should the cache perform cleanups of the chunk storage. (default 1m0s)
      --cache-chunk-no-memory   Disable the in-memory cache for storing chunks during streaming.
      --cache-chunk-path string   Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend")
      --cache-chunk-size SizeSuffix   The size of a chunk (partial file data). (default 5M)
      --cache-chunk-total-size SizeSuffix   The total size that the chunks can take up on the local disk. (default 10G)
      --cache-db-path string   Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend")
      --cache-db-purge   Clear all the cached data for this remote on start.
      --cache-db-wait-time Duration   How long to wait for the DB to be available - 0 is unlimited (default 1s)
      --cache-dir string   Directory rclone will use for caching. (default "$HOME/.cache/rclone")
      --cache-info-age Duration   How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
      --cache-plex-insecure string   Skip all certificate verifications when connecting to the Plex server
      --cache-plex-password string   The password of the Plex user
      --cache-plex-url string   The URL of the Plex server
      --cache-plex-username string   The username of the Plex user
      --cache-read-retries int   How many times to retry a read from a cache storage. (default 10)
      --cache-remote string   Remote to cache.
      --cache-rps int   Limits the number of requests per second to the source FS (-1 to disable) (default -1)
      --cache-tmp-upload-path string   Directory to keep temporary files until they are uploaded.
      --cache-tmp-wait-time Duration   How long should files be stored in local cache before being uploaded (default 15s)
      --cache-workers int   How many workers should run in parallel to download chunks. (default 4)
      --cache-writes   Cache file data on writes through the FS
      --checkers int   Number of checkers to run in parallel. (default 8)
  -c, --checksum   Skip based on checksum (if available) & size, not mod-time & size
      --config string   Config file. (default "/home/ncw/.rclone.conf")
      --contimeout duration   Connect timeout (default 1m0s)
  -L, --copy-links   Follow symlinks and copy the pointed to item.
      --cpuprofile string   Write cpu profile to file
      --crypt-directory-name-encryption   Option to either encrypt directory names or leave them intact. (default true)
      --crypt-filename-encryption string   How to encrypt the filenames. (default "standard")
      --crypt-password string   Password or pass phrase for encryption.
      --crypt-password2 string   Password or pass phrase for salt. Optional but recommended.
      --crypt-remote string   Remote to encrypt/decrypt.
      --crypt-show-mapping   For all files listed show how the names encrypt.
      --delete-after   When synchronizing, delete files on destination after transferring (default)
      --delete-before   When synchronizing, delete files on destination before transferring
      --delete-during   When synchronizing, delete files during transfer
      --delete-excluded   Delete files on dest excluded from sync
      --disable string   Disable a comma separated list of features. Use help to see a list.
      --drive-acknowledge-abuse   Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
      --drive-allow-import-name-change   Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
      --drive-alternate-export   Use alternate export URLs for google documents export.
      --drive-auth-owner-only   Only consider files owned by the authenticated user.
      --drive-chunk-size SizeSuffix   Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
      --drive-client-id string   Google Application Client Id
      --drive-client-secret string   Google Application Client Secret
      --drive-export-formats string   Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
      --drive-formats string   Deprecated: see export_formats
      --drive-impersonate string   Impersonate this user when using a service account.
      --drive-import-formats string   Comma separated list of preferred formats for uploading Google docs.
      --drive-keep-revision-forever   Keep new head revision of each file forever.
      --drive-list-chunk int   Size of listing chunk 100-1000. 0 to disable. (default 1000)
      --drive-pacer-burst int   Number of API calls to allow without sleeping. (default 100)
      --drive-pacer-min-sleep Duration   Minimum time to sleep between API calls. (default 100ms)
      --drive-root-folder-id string   ID of the root folder
      --drive-scope string   Scope that rclone should use when requesting access from drive.
      --drive-service-account-credentials string   Service Account Credentials JSON blob
      --drive-service-account-file string   Service Account Credentials JSON file path
      --drive-shared-with-me   Only show files that are shared with me.
      --drive-skip-gdocs   Skip google documents in all listings.
      --drive-team-drive string   ID of the Team Drive
      --drive-trashed-only   Only show files that are in the trash.
      --drive-upload-cutoff SizeSuffix   Cutoff for switching to chunked upload (default 8M)
      --drive-use-created-date   Use file created date instead of modified date.
      --drive-use-trash   Send files to the trash instead of deleting permanently. (default true)
      --drive-v2-download-min-size SizeSuffix   If Object's are greater, use drive v2 API to download. (default off)
      --dropbox-chunk-size SizeSuffix   Upload chunk size. (< 150M). (default 48M)
      --dropbox-client-id string   Dropbox App Client Id
      --dropbox-client-secret string   Dropbox App Client Secret
      --dropbox-impersonate string   Impersonate this user when using a business account.
  -n, --dry-run   Do a trial run with no permanent changes
      --dump DumpFlags   List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
      --dump-bodies   Dump HTTP headers and bodies - may contain sensitive info
      --dump-headers   Dump HTTP headers - may contain sensitive info
      --exclude stringArray   Exclude files matching pattern
      --exclude-from stringArray   Read exclude patterns from file
      --exclude-if-present string   Exclude directories if filename is present
      --fast-list   Use recursive list if available. Uses more memory but fewer transactions.
      --files-from stringArray   Read list of source-file names from file
  -f, --filter stringArray   Add a file-filtering rule
      --filter-from stringArray   Read filtering patterns from a file
      --ftp-host string   FTP host to connect to
      --ftp-pass string   FTP password
      --ftp-port string   FTP port, leave blank to use default (21)
      --ftp-user string   FTP username, leave blank for current username, $USER
      --gcs-bucket-acl string   Access Control List for new buckets.
      --gcs-client-id string   Google Application Client Id
      --gcs-client-secret string   Google Application Client Secret
      --gcs-location string   Location for the newly created buckets.
      --gcs-object-acl string   Access Control List for new objects.
      --gcs-project-number string   Project number.
      --gcs-service-account-file string   Service Account Credentials JSON file path
      --gcs-storage-class string   The storage class to use when storing objects in Google Cloud Storage.
      --http-url string   URL of http host to connect to
      --hubic-chunk-size SizeSuffix   Above this size files will be chunked into a _segments container. (default 5G)
      --hubic-client-id string   Hubic Client Id
      --hubic-client-secret string   Hubic Client Secret
      --hubic-no-chunk   Don't chunk files during streaming upload.
      --ignore-case   Ignore case in filters (case insensitive)
      --ignore-checksum   Skip post copy check of checksums.
      --ignore-errors   Delete even if there are I/O errors
      --ignore-existing   Skip all files that exist on destination
      --ignore-size   Ignore size when skipping use mod-time or checksum.
  -I, --ignore-times   Don't skip files that match size and time - transfer all files
      --immutable   Do not modify files. Fail if existing files have been modified.
      --include stringArray   Include files matching pattern
      --include-from stringArray   Read include patterns from file
      --jottacloud-hard-delete   Delete files permanently rather than putting them into the trash.
      --jottacloud-md5-memory-limit SizeSuffix   Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
      --jottacloud-mountpoint string   The mountpoint to use.
      --jottacloud-unlink   Remove existing public link to file/folder with link command rather than creating.
      --jottacloud-upload-resume-limit SizeSuffix   Files bigger than this can be resumed if the upload fails. (default 10M)
      --jottacloud-user string   User Name
  -l, --links   Translate symlinks to/from regular files with a '.rclonelink' extension
      --local-no-check-updated   Don't check to see if the files change during upload
      --local-no-unicode-normalization   Don't apply unicode normalization to paths and filenames (Deprecated)
      --local-nounc string   Disable UNC (long path names) conversion on Windows
      --log-file string   Log everything to this file
      --log-format string   Comma separated list of log format options (default "date,time")
      --log-level string   Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
      --low-level-retries int   Number of low level retries to do. (default 10)
      --max-age Duration   Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
      --max-backlog int   Maximum number of objects in sync or check backlog. (default 10000)
      --max-delete int   When synchronizing, limit the number of deletes (default -1)
      --max-depth int   If set limits the recursion depth to this. (default -1)
      --max-size SizeSuffix   Only transfer files smaller than this in k or suffix b|k|M|G (default off)
      --max-transfer SizeSuffix   Maximum size of data to transfer. (default off)
      --mega-debug   Output more debug from Mega.
      --mega-hard-delete   Delete files permanently rather than putting them into the trash.
      --mega-pass string   Password.
      --mega-user string   User name
      --memprofile string   Write memory profile to file
      --min-age Duration   Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
      --min-size SizeSuffix   Only transfer files bigger than this in k or suffix b|k|M|G (default off)
      --modify-window duration   Max time diff to be considered the same (default 1ns)
      --no-check-certificate   Do not verify the server SSL certificate. Insecure.
      --no-gzip-encoding   Don't set Accept-Encoding: gzip.
      --no-traverse   Don't traverse destination file system on copy.
      --no-update-modtime   Don't update destination mod-time if files identical.
  -x, --one-file-system   Don't cross filesystem boundaries (unix/macOS only).
      --onedrive-chunk-size SizeSuffix   Chunk size to upload files with - must be multiple of 320k. (default 10M)
      --onedrive-client-id string   Microsoft App Client Id
      --onedrive-client-secret string   Microsoft App Client Secret
      --onedrive-drive-id string   The ID of the drive to use
      --onedrive-drive-type string   The type of the drive ( personal | business | documentLibrary )
      --onedrive-expose-onenote-files   Set to make OneNote files show up in directory listings.
      --opendrive-password string   Password.
      --opendrive-username string   Username
      --pcloud-client-id string   Pcloud App Client Id
      --pcloud-client-secret string   Pcloud App Client Secret
--qingstor-secret-access-key string QingStor Secret Access Key (password) -P, --progress Show progress during transfer.
--qingstor-zone string Zone to connect to. --qingstor-access-key-id string QingStor Access Key ID
-q, --quiet Print as little stuff as possible --qingstor-chunk-size SizeSuffix Chunk size to use for uploading. (default 4M)
--rc Enable the remote control server. --qingstor-connection-retries int Number of connection retries. (default 3)
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") --qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate) --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
--rc-client-ca string Client certificate authority to verify clients with --qingstor-secret-access-key string QingStor Secret Access Key (password)
--rc-files string Path to local files to serve on the HTTP server. --qingstor-upload-concurrency int Concurrency for multipart uploads. (default 1)
--rc-htpasswd string htpasswd file - if not provided no authentication is done --qingstor-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M)
--rc-key string SSL PEM Private key --qingstor-zone string Zone to connect to.
--rc-max-header-bytes int Maximum size of request header (default 4096) -q, --quiet Print as little stuff as possible
--rc-no-auth Don't require auth for certain methods. --rc Enable the remote control server.
--rc-pass string Password for authentication. --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
--rc-realm string realm for authentication (default "rclone") --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--rc-serve Enable the serving of remote objects. --rc-client-ca string Client certificate authority to verify clients with
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) --rc-files string Path to local files to serve on the HTTP server.
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) --rc-htpasswd string htpasswd file - if not provided no authentication is done
--rc-user string User name for authentication. --rc-key string SSL PEM Private key
--retries int Retry operations this many times if they fail (default 3) --rc-max-header-bytes int Maximum size of request header (default 4096)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) --rc-no-auth Don't require auth for certain methods.
--s3-access-key-id string AWS Access Key ID. --rc-pass string Password for authentication.
--s3-acl string Canned ACL used when creating buckets and storing or copying objects. --rc-realm string realm for authentication (default "rclone")
--s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M) --rc-serve Enable the serving of remote objects.
--s3-disable-checksum Don't store MD5 checksum with object metadata --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--s3-endpoint string Endpoint for S3 API. --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). --rc-user string User name for authentication.
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true) --retries int Retry operations this many times if they fail (default 3)
--s3-location-constraint string Location constraint - must be set to match the Region. --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--s3-provider string Choose your S3 provider. --s3-access-key-id string AWS Access Key ID.
--s3-region string Region to connect to. --s3-acl string Canned ACL used when creating buckets and storing or copying objects.
--s3-secret-access-key string AWS Secret Access Key (password) --s3-bucket-acl string Canned ACL used when creating buckets.
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
--s3-session-token string An AWS session token --s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. --s3-endpoint string Endpoint for S3 API.
--s3-storage-class string The storage class to use when storing new objects in S3. --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2) --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--s3-v2-auth If true use v2 authentication. --s3-location-constraint string Location constraint - must be set to match the Region.
--sftp-ask-password Allow asking for SFTP password when needed. --s3-provider string Choose your S3 provider.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. --s3-region string Region to connect to.
--sftp-host string SSH host to connect to --s3-secret-access-key string AWS Secret Access Key (password)
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent. --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--sftp-pass string SSH password, leave blank to use ssh-agent. --s3-session-token string An AWS session token
--sftp-path-override string Override path used by SSH connection. --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--sftp-port string SSH port, leave blank to use default (22) --s3-storage-class string The storage class to use when storing new objects in S3.
--sftp-set-modtime Set the modified time on the remote if set. (default true) --s3-upload-concurrency int Concurrency for multipart uploads. (default 4)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker. --s3-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M)
--sftp-user string SSH username, leave blank for current username, ncw --s3-v2-auth If true use v2 authentication.
--size-only Skip based on size only, not mod-time or checksum --sftp-ask-password Allow asking for SFTP password when needed.
--skip-links Don't warn about skipped symlinks. --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) --sftp-host string SSH host to connect to
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40) --sftp-key-file string Path to PEM-encoded private key file, leave blank or set key-use-agent to use ssh-agent.
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") --sftp-key-file-pass string The passphrase to decrypt the PEM-encoded private key file.
--stats-one-line Make the stats fit on one line. --sftp-key-use-agent When set forces the usage of the ssh-agent.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") --sftp-pass string SSH password, leave blank to use ssh-agent.
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) --sftp-path-override string Override path used by SSH connection.
--suffix string Suffix for use with --backup-dir. --sftp-port string SSH port, leave blank to use default (22)
--swift-auth string Authentication URL for server (OS_AUTH_URL). --sftp-set-modtime Set the modified time on the remote if set. (default true)
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) --sftp-user string SSH username, leave blank for current username, ncw
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) --size-only Skip based on size only, not mod-time or checksum
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) --skip-links Don't warn about skipped symlinks.
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form. --stats-file-name-length int Max file name length in stats. 0 for no limit (default 45)
--swift-key string API key or password (OS_PASSWORD). --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--swift-region string Region name - optional (OS_REGION_NAME) --stats-one-line Make the stats fit on one line.
--swift-storage-policy string The storage policy to use when creating a new container --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL) --streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) --suffix string Suffix for use with --backup-dir.
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) --swift-application-credential-id string Application Credential ID (OS_APPLICATION_CREDENTIAL_ID)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) --swift-application-credential-name string Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME)
--swift-user string User name to log in (OS_USERNAME). --swift-application-credential-secret string Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET)
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). --swift-auth string Authentication URL for server (OS_AUTH_URL).
--syslog Use Syslog for logging --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--timeout duration IO idle timeout (default 5m0s) --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--tpslimit float Limit HTTP transactions per second to this. --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--track-renames When synchronizing, track file renames and do a server side move if possible --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--transfers int Number of file transfers to run in parallel. (default 4) --swift-key string API key or password (OS_PASSWORD).
--union-remotes string List of space separated remotes. --swift-no-chunk Don't chunk files during streaming upload.
-u, --update Skip files that are newer on the destination. --swift-region string Region name - optional (OS_REGION_NAME)
--use-server-modtime Use server modified time instead of object metadata --swift-storage-policy string The storage policy to use when creating a new container
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.45") --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
-v, --verbose count Print lots more stuff (repeat for more) --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon) --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--webdav-pass string Password. --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--webdav-url string URL of http host to connect to --swift-user string User name to log in (OS_USERNAME).
--webdav-user string User name --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--webdav-vendor string Name of the Webdav site/service/software you are using --syslog Use Syslog for logging
--yandex-client-id string Yandex Client Id --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--yandex-client-secret string Yandex Client Secret --timeout duration IO idle timeout (default 5m0s)
--yandex-unlink Remove existing public link to file/folder with link command rather than creating. --tpslimit float Limit HTTP transactions per second to this.
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
--union-remotes string List of space separated remotes.
-u, --update Skip files that are newer on the destination.
--use-cookies Enable session cookiejar.
--use-mmap Use mmap allocator (see docs).
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
--yandex-unlink Remove existing public link to file/folder with link command rather than creating.
``` ```
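Each flag in the list above also has an environment-variable form: the backend docs in this same commit note, for example, that `--b2-chunk-size` corresponds to `RCLONE_B2_CHUNK_SIZE` (config `chunk_size`, env var `RCLONE_B2_CHUNK_SIZE`). As a rough sketch of that naming convention, the helper below derives the variable name from a flag; it is an illustrative shell function for this document, not part of rclone itself:

```shell
# Derive the RCLONE_* environment variable name for a given long flag,
# following the convention shown in the backend docs: drop the leading
# "--", uppercase, and turn "-" into "_", then prefix with RCLONE_.
flag_to_env() {
    # ${1#--} strips the leading dashes; tr uppercases and maps '-' to '_'
    echo "RCLONE_$(echo "${1#--}" | tr 'a-z-' 'A-Z_')"
}

flag_to_env --b2-chunk-size   # prints RCLONE_B2_CHUNK_SIZE
flag_to_env --transfers       # prints RCLONE_TRANSFERS
```

So `rclone copy --transfers 8 ...` and `RCLONE_TRANSFERS=8 rclone copy ...` configure the same option.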
### SEE ALSO
* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
###### Auto generated by spf13/cobra on 9-Feb-2019


@@ -1,5 +1,5 @@
---
date: 2019-02-09T10:42:18Z
title: "rclone lsd"
slug: rclone_lsd
url: /commands/rclone_lsd/
@@ -70,285 +70,303 @@ rclone lsd remote:path [flags]
### Options inherited from parent commands
```
      --acd-auth-url string  Auth server URL.
      --acd-client-id string  Amazon Application Client ID.
      --acd-client-secret string  Amazon Application Client Secret.
      --acd-templink-threshold SizeSuffix  Files >= this size will be downloaded via their tempLink. (default 9G)
      --acd-token-url string  Token server url.
      --acd-upload-wait-per-gb Duration  Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
      --alias-remote string  Remote or path to alias.
      --ask-password  Allow prompt for password for encrypted configuration. (default true)
      --auto-confirm  If enabled, do not request console confirmation.
      --azureblob-access-tier string  Access tier of blob: hot, cool or archive.
      --azureblob-account string  Storage Account Name (leave blank to use connection string or SAS URL)
      --azureblob-chunk-size SizeSuffix  Upload chunk size (<= 100MB). (default 4M)
      --azureblob-endpoint string  Endpoint for the service
      --azureblob-key string  Storage Account Key (leave blank to use connection string or SAS URL)
      --azureblob-list-chunk int  Size of blob list. (default 5000)
      --azureblob-sas-url string  SAS URL for container level access only
      --azureblob-upload-cutoff SizeSuffix  Cutoff for switching to chunked upload (<= 256MB). (default 256M)
      --b2-account string  Account ID or Application Key ID
      --b2-chunk-size SizeSuffix  Upload chunk size. Must fit in memory. (default 96M)
      --b2-disable-checksum  Disable checksums for large (> upload cutoff) files
      --b2-endpoint string  Endpoint for the service.
      --b2-hard-delete  Permanently delete files on remote removal, otherwise hide files.
      --b2-key string  Application Key
      --b2-test-mode string  A flag string for X-Bz-Test-Mode header for debugging.
      --b2-upload-cutoff SizeSuffix  Cutoff for switching to chunked upload. (default 200M)
      --b2-versions  Include old versions in directory listings.
      --backup-dir string  Make backups into hierarchy based in DIR.
      --bind string  Local address to bind to for outgoing connections, IPv4, IPv6 or name.
      --box-client-id string  Box App Client Id.
      --box-client-secret string  Box App Client Secret
      --box-commit-retries int  Max number of times to try committing a multipart file. (default 100)
      --box-upload-cutoff SizeSuffix  Cutoff for switching to multipart upload (>= 50MB). (default 50M)
      --buffer-size SizeSuffix  In memory buffer size when reading files for each --transfer. (default 16M)
      --bwlimit BwTimetable  Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
      --cache-chunk-clean-interval Duration  How often should the cache perform cleanups of the chunk storage. (default 1m0s)
      --cache-chunk-no-memory  Disable the in-memory cache for storing chunks during streaming.
      --cache-chunk-path string  Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend")
      --cache-chunk-size SizeSuffix  The size of a chunk (partial file data). (default 5M)
      --cache-chunk-total-size SizeSuffix  The total size that the chunks can take up on the local disk. (default 10G)
      --cache-db-path string  Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend")
      --cache-db-purge  Clear all the cached data for this remote on start.
      --cache-db-wait-time Duration  How long to wait for the DB to be available - 0 is unlimited (default 1s)
      --cache-dir string  Directory rclone will use for caching. (default "$HOME/.cache/rclone")
      --cache-info-age Duration  How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
      --cache-plex-insecure string  Skip all certificate verifications when connecting to the Plex server
      --cache-plex-password string  The password of the Plex user
      --cache-plex-url string  The URL of the Plex server
      --cache-plex-username string  The username of the Plex user
      --cache-read-retries int  How many times to retry a read from a cache storage. (default 10)
      --cache-remote string  Remote to cache.
      --cache-rps int  Limits the number of requests per second to the source FS (-1 to disable) (default -1)
      --cache-tmp-upload-path string  Directory to keep temporary files until they are uploaded.
      --cache-tmp-wait-time Duration  How long should files be stored in local cache before being uploaded (default 15s)
      --cache-workers int  How many workers should run in parallel to download chunks. (default 4)
      --cache-writes  Cache file data on writes through the FS
      --checkers int  Number of checkers to run in parallel. (default 8)
  -c, --checksum  Skip based on checksum (if available) & size, not mod-time & size
      --config string  Config file. (default "/home/ncw/.rclone.conf")
      --contimeout duration  Connect timeout (default 1m0s)
  -L, --copy-links  Follow symlinks and copy the pointed to item.
      --cpuprofile string  Write cpu profile to file
      --crypt-directory-name-encryption  Option to either encrypt directory names or leave them intact. (default true)
      --crypt-filename-encryption string  How to encrypt the filenames. (default "standard")
      --crypt-password string  Password or pass phrase for encryption.
      --crypt-password2 string  Password or pass phrase for salt. Optional but recommended.
      --crypt-remote string  Remote to encrypt/decrypt.
      --crypt-show-mapping  For all files listed show how the names encrypt.
      --delete-after  When synchronizing, delete files on destination after transferring (default)
      --delete-before  When synchronizing, delete files on destination before transferring
      --delete-during  When synchronizing, delete files during transfer
      --delete-excluded  Delete files on dest excluded from sync
      --disable string  Disable a comma separated list of features. Use help to see a list.
      --drive-acknowledge-abuse  Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
      --drive-allow-import-name-change  Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
      --drive-alternate-export  Use alternate export URLs for google documents export.,
      --drive-auth-owner-only  Only consider files owned by the authenticated user.
      --drive-chunk-size SizeSuffix  Upload chunk size. Must a power of 2 >= 256k. (default 8M)
      --drive-client-id string  Google Application Client Id
      --drive-client-secret string  Google Application Client Secret
      --drive-export-formats string  Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
      --drive-formats string  Deprecated: see export_formats
      --drive-impersonate string  Impersonate this user when using a service account.
      --drive-import-formats string  Comma separated list of preferred formats for uploading Google docs.
      --drive-keep-revision-forever  Keep new head revision of each file forever.
      --drive-list-chunk int  Size of listing chunk 100-1000. 0 to disable. (default 1000)
      --drive-pacer-burst int  Number of API calls to allow without sleeping. (default 100)
      --drive-pacer-min-sleep Duration  Minimum time to sleep between API calls. (default 100ms)
      --drive-root-folder-id string  ID of the root folder
      --drive-scope string  Scope that rclone should use when requesting access from drive.
      --drive-service-account-credentials string  Service Account Credentials JSON blob
      --drive-service-account-file string  Service Account Credentials JSON file path
      --drive-shared-with-me  Only show files that are shared with me.
      --drive-skip-gdocs  Skip google documents in all listings.
      --drive-team-drive string  ID of the Team Drive
      --drive-trashed-only  Only show files that are in the trash.
      --drive-upload-cutoff SizeSuffix  Cutoff for switching to chunked upload (default 8M)
      --drive-use-created-date  Use file created date instead of modified date.,
      --drive-use-trash  Send files to the trash instead of deleting permanently. (default true)
      --drive-v2-download-min-size SizeSuffix  If Object's are greater, use drive v2 API to download. (default off)
      --dropbox-chunk-size SizeSuffix  Upload chunk size. (< 150M). (default 48M)
      --dropbox-client-id string  Dropbox App Client Id
      --dropbox-client-secret string  Dropbox App Client Secret
      --dropbox-impersonate string  Impersonate this user when using a business account.
  -n, --dry-run  Do a trial run with no permanent changes
      --dump DumpFlags  List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--exclude-from stringArray Read exclude patterns from file --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--exclude-if-present string Exclude directories if filename is present --dump-headers Dump HTTP bodies - may contain sensitive info
--fast-list Use recursive list if available. Uses more memory but fewer transactions. --exclude stringArray Exclude files matching pattern
--files-from stringArray Read list of source-file names from file --exclude-from stringArray Read exclude patterns from file
-f, --filter stringArray Add a file-filtering rule --exclude-if-present string Exclude directories if filename is present
--filter-from stringArray Read filtering patterns from a file --fast-list Use recursive list if available. Uses more memory but fewer transactions.
--ftp-host string FTP host to connect to --files-from stringArray Read list of source-file names from file
--ftp-pass string FTP password -f, --filter stringArray Add a file-filtering rule
--ftp-port string FTP port, leave blank to use default (21) --filter-from stringArray Read filtering patterns from a file
--ftp-user string FTP username, leave blank for current username, $USER --ftp-host string FTP host to connect to
--gcs-bucket-acl string Access Control List for new buckets. --ftp-pass string FTP password
--gcs-client-id string Google Application Client Id --ftp-port string FTP port, leave blank to use default (21)
--gcs-client-secret string Google Application Client Secret --ftp-user string FTP username, leave blank for current username, $USER
--gcs-location string Location for the newly created buckets. --gcs-bucket-acl string Access Control List for new buckets.
--gcs-object-acl string Access Control List for new objects. --gcs-client-id string Google Application Client Id
--gcs-project-number string Project number. --gcs-client-secret string Google Application Client Secret
--gcs-service-account-file string Service Account Credentials JSON file path --gcs-location string Location for the newly created buckets.
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage. --gcs-object-acl string Access Control List for new objects.
--http-url string URL of http host to connect to --gcs-project-number string Project number.
--hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) --gcs-service-account-file string Service Account Credentials JSON file path
--hubic-client-id string Hubic Client Id --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--hubic-client-secret string Hubic Client Secret --http-url string URL of http host to connect to
--ignore-case Ignore case in filters (case insensitive) --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--ignore-checksum Skip post copy check of checksums. --hubic-client-id string Hubic Client Id
--ignore-errors delete even if there are I/O errors --hubic-client-secret string Hubic Client Secret
--ignore-existing Skip all files that exist on destination --hubic-no-chunk Don't chunk files during streaming upload.
--ignore-size Ignore size when skipping use mod-time or checksum. --ignore-case Ignore case in filters (case insensitive)
-I, --ignore-times Don't skip files that match size and time - transfer all files --ignore-checksum Skip post copy check of checksums.
--immutable Do not modify files. Fail if existing files have been modified. --ignore-errors delete even if there are I/O errors
--include stringArray Include files matching pattern --ignore-existing Skip all files that exist on destination
--include-from stringArray Read include patterns from file --ignore-size Ignore size when skipping use mod-time or checksum.
--jottacloud-hard-delete Delete files permanently rather than putting them into the trash. -I, --ignore-times Don't skip files that match size and time - transfer all files
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M) --immutable Do not modify files. Fail if existing files have been modified.
--jottacloud-mountpoint string The mountpoint to use. --include stringArray Include files matching pattern
--jottacloud-pass string Password. --include-from stringArray Read include patterns from file
--jottacloud-unlink Remove existing public link to file/folder with link command rather than creating. --jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
--jottacloud-user string User Name --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--local-no-check-updated Don't check to see if the files change during upload --jottacloud-mountpoint string The mountpoint to use.
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated) --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
--local-nounc string Disable UNC (long path names) conversion on Windows --jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fail's. (default 10M)
--log-file string Log everything to this file --jottacloud-user string User Name:
--log-format string Comma separated list of log format options (default "date,time") -l, --links Translate symlinks to/from regular files with a '.rclonelink' extension
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") --local-no-check-updated Don't check to see if the files change during upload
--low-level-retries int Number of low level retries to do. (default 10) --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) --local-nounc string Disable UNC (long path names) conversion on Windows
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000) --log-file string Log everything to this file
--max-delete int When synchronizing, limit the number of deletes (default -1) --log-format string Comma separated list of log format options (default "date,time")
--max-depth int If set limits the recursion depth to this. (default -1) --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) --low-level-retries int Number of low level retries to do. (default 10)
--max-transfer int Maximum size of data to transfer. (default off) --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--mega-debug Output more debug from Mega. --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--mega-hard-delete Delete files permanently rather than putting them into the trash. --max-delete int When synchronizing, limit the number of deletes (default -1)
--mega-pass string Password. --max-depth int If set limits the recursion depth to this. (default -1)
--mega-user string User name --max-size SizeSuffix Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--memprofile string Write memory profile to file --max-transfer SizeSuffix Maximum size of data to transfer. (default off)
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) --mega-debug Output more debug from Mega.
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) --mega-hard-delete Delete files permanently rather than putting them into the trash.
--modify-window duration Max time diff to be considered the same (default 1ns) --mega-pass string Password.
--no-check-certificate Do not verify the server SSL certificate. Insecure. --mega-user string User name
--no-gzip-encoding Don't set Accept-Encoding: gzip. --memprofile string Write memory profile to file
--no-traverse Obsolete - does nothing. --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--no-update-modtime Don't update destination mod-time if files identical. --min-size SizeSuffix Only transfer files bigger than this in k or suffix b|k|M|G (default off)
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only). --modify-window duration Max time diff to be considered the same (default 1ns)
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M) --no-check-certificate Do not verify the server SSL certificate. Insecure.
--onedrive-client-id string Microsoft App Client Id --no-gzip-encoding Don't set Accept-Encoding: gzip.
--onedrive-client-secret string Microsoft App Client Secret --no-traverse Don't traverse destination file system on copy.
--onedrive-drive-id string The ID of the drive to use --no-update-modtime Don't update destination mod-time if files identical.
--onedrive-drive-type string The type of the drive ( personal | business | documentLibrary ) -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-expose-onenote-files Set to make OneNote files show up in directory listings. --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--opendrive-password string Password. --onedrive-client-id string Microsoft App Client Id
--opendrive-username string Username --onedrive-client-secret string Microsoft App Client Secret
--pcloud-client-id string Pcloud App Client Id --onedrive-drive-id string The ID of the drive to use
--pcloud-client-secret string Pcloud App Client Secret --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
-P, --progress Show progress during transfer. --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
--qingstor-access-key-id string QingStor Access Key ID --opendrive-password string Password.
--qingstor-connection-retries int Number of connection retries. (default 3) --opendrive-username string Username
--qingstor-endpoint string Enter a endpoint URL to connection QingStor API. --pcloud-client-id string Pcloud App Client Id
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank. --pcloud-client-secret string Pcloud App Client Secret
--qingstor-secret-access-key string QingStor Secret Access Key (password) -P, --progress Show progress during transfer.
--qingstor-zone string Zone to connect to. --qingstor-access-key-id string QingStor Access Key ID
-q, --quiet Print as little stuff as possible --qingstor-chunk-size SizeSuffix Chunk size to use for uploading. (default 4M)
--rc Enable the remote control server. --qingstor-connection-retries int Number of connection retries. (default 3)
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") --qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate) --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
--rc-client-ca string Client certificate authority to verify clients with --qingstor-secret-access-key string QingStor Secret Access Key (password)
--rc-files string Path to local files to serve on the HTTP server. --qingstor-upload-concurrency int Concurrency for multipart uploads. (default 1)
--rc-htpasswd string htpasswd file - if not provided no authentication is done --qingstor-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M)
--rc-key string SSL PEM Private key --qingstor-zone string Zone to connect to.
--rc-max-header-bytes int Maximum size of request header (default 4096) -q, --quiet Print as little stuff as possible
--rc-no-auth Don't require auth for certain methods. --rc Enable the remote control server.
--rc-pass string Password for authentication. --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
--rc-realm string realm for authentication (default "rclone") --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--rc-serve Enable the serving of remote objects. --rc-client-ca string Client certificate authority to verify clients with
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) --rc-files string Path to local files to serve on the HTTP server.
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) --rc-htpasswd string htpasswd file - if not provided no authentication is done
--rc-user string User name for authentication. --rc-key string SSL PEM Private key
--retries int Retry operations this many times if they fail (default 3) --rc-max-header-bytes int Maximum size of request header (default 4096)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) --rc-no-auth Don't require auth for certain methods.
--s3-access-key-id string AWS Access Key ID. --rc-pass string Password for authentication.
--s3-acl string Canned ACL used when creating buckets and storing or copying objects. --rc-realm string realm for authentication (default "rclone")
--s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M) --rc-serve Enable the serving of remote objects.
--s3-disable-checksum Don't store MD5 checksum with object metadata --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--s3-endpoint string Endpoint for S3 API. --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). --rc-user string User name for authentication.
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true) --retries int Retry operations this many times if they fail (default 3)
--s3-location-constraint string Location constraint - must be set to match the Region. --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--s3-provider string Choose your S3 provider. --s3-access-key-id string AWS Access Key ID.
--s3-region string Region to connect to. --s3-acl string Canned ACL used when creating buckets and storing or copying objects.
--s3-secret-access-key string AWS Secret Access Key (password) --s3-bucket-acl string Canned ACL used when creating buckets.
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
--s3-session-token string An AWS session token --s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. --s3-endpoint string Endpoint for S3 API.
--s3-storage-class string The storage class to use when storing new objects in S3. --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2) --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--s3-v2-auth If true use v2 authentication. --s3-location-constraint string Location constraint - must be set to match the Region.
--sftp-ask-password Allow asking for SFTP password when needed. --s3-provider string Choose your S3 provider.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. --s3-region string Region to connect to.
--sftp-host string SSH host to connect to --s3-secret-access-key string AWS Secret Access Key (password)
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent. --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--sftp-pass string SSH password, leave blank to use ssh-agent. --s3-session-token string An AWS session token
--sftp-path-override string Override path used by SSH connection. --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--sftp-port string SSH port, leave blank to use default (22) --s3-storage-class string The storage class to use when storing new objects in S3.
--sftp-set-modtime Set the modified time on the remote if set. (default true) --s3-upload-concurrency int Concurrency for multipart uploads. (default 4)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker. --s3-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M)
--sftp-user string SSH username, leave blank for current username, ncw --s3-v2-auth If true use v2 authentication.
--size-only Skip based on size only, not mod-time or checksum --sftp-ask-password Allow asking for SFTP password when needed.
--skip-links Don't warn about skipped symlinks. --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) --sftp-host string SSH host to connect to
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40) --sftp-key-file string Path to PEM-encoded private key file, leave blank or set key-use-agent to use ssh-agent.
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") --sftp-key-file-pass string The passphrase to decrypt the PEM-encoded private key file.
--stats-one-line Make the stats fit on one line. --sftp-key-use-agent When set forces the usage of the ssh-agent.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") --sftp-pass string SSH password, leave blank to use ssh-agent.
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) --sftp-path-override string Override path used by SSH connection.
--suffix string Suffix for use with --backup-dir. --sftp-port string SSH port, leave blank to use default (22)
--swift-auth string Authentication URL for server (OS_AUTH_URL). --sftp-set-modtime Set the modified time on the remote if set. (default true)
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) --sftp-user string SSH username, leave blank for current username, ncw
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) --size-only Skip based on size only, not mod-time or checksum
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) --skip-links Don't warn about skipped symlinks.
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form. --stats-file-name-length int Max file name length in stats. 0 for no limit (default 45)
--swift-key string API key or password (OS_PASSWORD). --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--swift-region string Region name - optional (OS_REGION_NAME) --stats-one-line Make the stats fit on one line.
--swift-storage-policy string The storage policy to use when creating a new container --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL) --streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) --suffix string Suffix for use with --backup-dir.
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) --swift-application-credential-id string Application Credential ID (OS_APPLICATION_CREDENTIAL_ID)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) --swift-application-credential-name string Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME)
--swift-user string User name to log in (OS_USERNAME). --swift-application-credential-secret string Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET)
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). --swift-auth string Authentication URL for server (OS_AUTH_URL).
--syslog Use Syslog for logging --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--timeout duration IO idle timeout (default 5m0s) --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--tpslimit float Limit HTTP transactions per second to this. --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--track-renames When synchronizing, track file renames and do a server side move if possible --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--transfers int Number of file transfers to run in parallel. (default 4) --swift-key string API key or password (OS_PASSWORD).
--union-remotes string List of space separated remotes. --swift-no-chunk Don't chunk files during streaming upload.
-u, --update Skip files that are newer on the destination. --swift-region string Region name - optional (OS_REGION_NAME)
--use-server-modtime Use server modified time instead of object metadata --swift-storage-policy string The storage policy to use when creating a new container
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.45") --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
-v, --verbose count Print lots more stuff (repeat for more) --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon) --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--webdav-pass string Password. --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--webdav-url string URL of http host to connect to --swift-user string User name to log in (OS_USERNAME).
--webdav-user string User name --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--webdav-vendor string Name of the Webdav site/service/software you are using --syslog Use Syslog for logging
--yandex-client-id string Yandex Client Id --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--yandex-client-secret string Yandex Client Secret --timeout duration IO idle timeout (default 5m0s)
--yandex-unlink Remove existing public link to file/folder with link command rather than creating. --tpslimit float Limit HTTP transactions per second to this.
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
--union-remotes string List of space separated remotes.
-u, --update Skip files that are newer on the destination.
--use-cookies Enable session cookiejar.
--use-mmap Use mmap allocator (see docs).
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
--yandex-unlink Remove existing public link to file/folder with link command rather than creating.
```
### SEE ALSO
* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
###### Auto generated by spf13/cobra on 9-Feb-2019
View File
@@ -1,5 +1,5 @@
---
date: 2019-02-09T10:42:18Z
title: "rclone lsf"
slug: rclone_lsf
url: /commands/rclone_lsf/
@@ -148,285 +148,303 @@ rclone lsf remote:path [flags]
### Options inherited from parent commands
```
--acd-auth-url string Auth server URL.
--acd-client-id string Amazon Application Client ID.
--acd-client-secret string Amazon Application Client Secret.
--acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-token-url string Token server url.
--acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--alias-remote string Remote or path to alias.
--ask-password Allow prompt for password for encrypted configuration. (default true)
--auto-confirm If enabled, do not request console confirmation.
--azureblob-access-tier string Access tier of blob: hot, cool or archive.
--azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
--azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M)
--azureblob-endpoint string Endpoint for the service
--azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
--azureblob-list-chunk int Size of blob list. (default 5000)
--azureblob-sas-url string SAS URL for container level access only
--azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M)
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--b2-disable-checksum Disable checksums for large (> upload cutoff) files
--b2-endpoint string Endpoint for the service.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-key string Application Key
--b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
--b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M)
--b2-versions Include old versions in directory listings.
--backup-dir string Make backups into hierarchy based in DIR.
--bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
--box-client-id string Box App Client Id.
--box-client-secret string Box App Client Secret
--box-commit-retries int Max number of times to try committing a multipart file. (default 100)
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
--buffer-size SizeSuffix In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
--cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend")
--cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M)
--cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G)
--cache-db-purge Clear all the cached data for this remote on start. --cache-db-path string Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend")
--cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s) --cache-db-purge Clear all the cached data for this remote on start.
--cache-dir string Directory rclone will use for caching. (default "$HOME/.cache/rclone") --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s) --cache-dir string Directory rclone will use for caching. (default "$HOME/.cache/rclone")
--cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
--cache-plex-password string The password of the Plex user --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server
--cache-plex-url string The URL of the Plex server --cache-plex-password string The password of the Plex user
--cache-plex-username string The username of the Plex user --cache-plex-url string The URL of the Plex server
--cache-read-retries int How many times to retry a read from a cache storage. (default 10) --cache-plex-username string The username of the Plex user
--cache-remote string Remote to cache. --cache-read-retries int How many times to retry a read from a cache storage. (default 10)
--cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1) --cache-remote string Remote to cache.
--cache-tmp-upload-path string Directory to keep temporary files until they are uploaded. --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1)
--cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s) --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded.
--cache-workers int How many workers should run in parallel to download chunks. (default 4) --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
--cache-writes Cache file data on writes through the FS --cache-workers int How many workers should run in parallel to download chunks. (default 4)
--checkers int Number of checkers to run in parallel. (default 8) --cache-writes Cache file data on writes through the FS
-c, --checksum Skip based on checksum & size, not mod-time & size --checkers int Number of checkers to run in parallel. (default 8)
--config string Config file. (default "/home/ncw/.rclone.conf") -c, --checksum Skip based on checksum (if available) & size, not mod-time & size
--contimeout duration Connect timeout (default 1m0s) --config string Config file. (default "/home/ncw/.rclone.conf")
-L, --copy-links Follow symlinks and copy the pointed to item. --contimeout duration Connect timeout (default 1m0s)
--cpuprofile string Write cpu profile to file -L, --copy-links Follow symlinks and copy the pointed to item.
--crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true) --cpuprofile string Write cpu profile to file
--crypt-filename-encryption string How to encrypt the filenames. (default "standard") --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--crypt-password string Password or pass phrase for encryption. --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--crypt-password2 string Password or pass phrase for salt. Optional but recommended. --crypt-password string Password or pass phrase for encryption.
--crypt-remote string Remote to encrypt/decrypt. --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
--crypt-show-mapping For all files listed show how the names encrypt. --crypt-remote string Remote to encrypt/decrypt.
--delete-after When synchronizing, delete files on destination after transferring (default) --crypt-show-mapping For all files listed show how the names encrypt.
--delete-before When synchronizing, delete files on destination before transferring --delete-after When synchronizing, delete files on destination after transferring (default)
--delete-during When synchronizing, delete files during transfer --delete-before When synchronizing, delete files on destination before transferring
--delete-excluded Delete files on dest excluded from sync --delete-during When synchronizing, delete files during transfer
--disable string Disable a comma separated list of features. Use help to see a list. --delete-excluded Delete files on dest excluded from sync
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded. --disable string Disable a comma separated list of features. Use help to see a list.
--drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time. --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--drive-alternate-export Use alternate export URLs for google documents export., --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
--drive-auth-owner-only Only consider files owned by the authenticated user. --drive-alternate-export Use alternate export URLs for google documents export.,
--drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M) --drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-client-id string Google Application Client Id --drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M)
--drive-client-secret string Google Application Client Secret --drive-client-id string Google Application Client Id
--drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") --drive-client-secret string Google Application Client Secret
--drive-formats string Deprecated: see export_formats --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-impersonate string Impersonate this user when using a service account. --drive-formats string Deprecated: see export_formats
--drive-import-formats string Comma separated list of preferred formats for uploading Google docs. --drive-impersonate string Impersonate this user when using a service account.
--drive-keep-revision-forever Keep new head revision of each file forever. --drive-import-formats string Comma separated list of preferred formats for uploading Google docs.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000) --drive-keep-revision-forever Keep new head revision of each file forever.
--drive-root-folder-id string ID of the root folder --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-scope string Scope that rclone should use when requesting access from drive. --drive-pacer-burst int Number of API calls to allow without sleeping. (default 100)
--drive-service-account-credentials string Service Account Credentials JSON blob --drive-pacer-min-sleep Duration Minimum time to sleep between API calls. (default 100ms)
--drive-service-account-file string Service Account Credentials JSON file path --drive-root-folder-id string ID of the root folder
--drive-shared-with-me Only show files that are shared with me. --drive-scope string Scope that rclone should use when requesting access from drive.
--drive-skip-gdocs Skip google documents in all listings. --drive-service-account-credentials string Service Account Credentials JSON blob
--drive-team-drive string ID of the Team Drive --drive-service-account-file string Service Account Credentials JSON file path
--drive-trashed-only Only show files that are in the trash. --drive-shared-with-me Only show files that are shared with me.
--drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M) --drive-skip-gdocs Skip google documents in all listings.
--drive-use-created-date Use file created date instead of modified date., --drive-team-drive string ID of the Team Drive
--drive-use-trash Send files to the trash instead of deleting permanently. (default true) --drive-trashed-only Only show files that are in the trash.
--drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off) --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M) --drive-use-created-date Use file created date instead of modified date.,
--dropbox-client-id string Dropbox App Client Id --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--dropbox-client-secret string Dropbox App Client Secret --drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off)
--dropbox-impersonate string Impersonate this user when using a business account. --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M)
-n, --dry-run Do a trial run with no permanent changes --dropbox-client-id string Dropbox App Client Id
--dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles --dropbox-client-secret string Dropbox App Client Secret
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info --dropbox-impersonate string Impersonate this user when using a business account.
--dump-headers Dump HTTP bodies - may contain sensitive info -n, --dry-run Do a trial run with no permanent changes
--exclude stringArray Exclude files matching pattern --dump DumpFlags List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--exclude-from stringArray Read exclude patterns from file --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--exclude-if-present string Exclude directories if filename is present --dump-headers Dump HTTP bodies - may contain sensitive info
--fast-list Use recursive list if available. Uses more memory but fewer transactions. --exclude stringArray Exclude files matching pattern
--files-from stringArray Read list of source-file names from file --exclude-from stringArray Read exclude patterns from file
-f, --filter stringArray Add a file-filtering rule --exclude-if-present string Exclude directories if filename is present
--filter-from stringArray Read filtering patterns from a file --fast-list Use recursive list if available. Uses more memory but fewer transactions.
--ftp-host string FTP host to connect to --files-from stringArray Read list of source-file names from file
--ftp-pass string FTP password -f, --filter stringArray Add a file-filtering rule
--ftp-port string FTP port, leave blank to use default (21) --filter-from stringArray Read filtering patterns from a file
--ftp-user string FTP username, leave blank for current username, $USER --ftp-host string FTP host to connect to
--gcs-bucket-acl string Access Control List for new buckets. --ftp-pass string FTP password
--gcs-client-id string Google Application Client Id --ftp-port string FTP port, leave blank to use default (21)
--gcs-client-secret string Google Application Client Secret --ftp-user string FTP username, leave blank for current username, $USER
--gcs-location string Location for the newly created buckets. --gcs-bucket-acl string Access Control List for new buckets.
--gcs-object-acl string Access Control List for new objects. --gcs-client-id string Google Application Client Id
--gcs-project-number string Project number. --gcs-client-secret string Google Application Client Secret
--gcs-service-account-file string Service Account Credentials JSON file path --gcs-location string Location for the newly created buckets.
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage. --gcs-object-acl string Access Control List for new objects.
--http-url string URL of http host to connect to --gcs-project-number string Project number.
--hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) --gcs-service-account-file string Service Account Credentials JSON file path
--hubic-client-id string Hubic Client Id --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--hubic-client-secret string Hubic Client Secret --http-url string URL of http host to connect to
--ignore-case Ignore case in filters (case insensitive) --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--ignore-checksum Skip post copy check of checksums. --hubic-client-id string Hubic Client Id
--ignore-errors delete even if there are I/O errors --hubic-client-secret string Hubic Client Secret
--ignore-existing Skip all files that exist on destination --hubic-no-chunk Don't chunk files during streaming upload.
--ignore-size Ignore size when skipping use mod-time or checksum. --ignore-case Ignore case in filters (case insensitive)
-I, --ignore-times Don't skip files that match size and time - transfer all files --ignore-checksum Skip post copy check of checksums.
--immutable Do not modify files. Fail if existing files have been modified. --ignore-errors delete even if there are I/O errors
--include stringArray Include files matching pattern --ignore-existing Skip all files that exist on destination
--include-from stringArray Read include patterns from file --ignore-size Ignore size when skipping use mod-time or checksum.
--jottacloud-hard-delete Delete files permanently rather than putting them into the trash. -I, --ignore-times Don't skip files that match size and time - transfer all files
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M) --immutable Do not modify files. Fail if existing files have been modified.
--jottacloud-mountpoint string The mountpoint to use. --include stringArray Include files matching pattern
--jottacloud-pass string Password. --include-from stringArray Read include patterns from file
--jottacloud-unlink Remove existing public link to file/folder with link command rather than creating. --jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
--jottacloud-user string User Name --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--local-no-check-updated Don't check to see if the files change during upload --jottacloud-mountpoint string The mountpoint to use.
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated) --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
--local-nounc string Disable UNC (long path names) conversion on Windows --jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fail's. (default 10M)
--log-file string Log everything to this file --jottacloud-user string User Name:
--log-format string Comma separated list of log format options (default "date,time") -l, --links Translate symlinks to/from regular files with a '.rclonelink' extension
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") --local-no-check-updated Don't check to see if the files change during upload
--low-level-retries int Number of low level retries to do. (default 10) --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) --local-nounc string Disable UNC (long path names) conversion on Windows
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000) --log-file string Log everything to this file
--max-delete int When synchronizing, limit the number of deletes (default -1) --log-format string Comma separated list of log format options (default "date,time")
--max-depth int If set limits the recursion depth to this. (default -1) --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) --low-level-retries int Number of low level retries to do. (default 10)
--max-transfer int Maximum size of data to transfer. (default off) --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--mega-debug Output more debug from Mega. --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--mega-hard-delete Delete files permanently rather than putting them into the trash. --max-delete int When synchronizing, limit the number of deletes (default -1)
--mega-pass string Password. --max-depth int If set limits the recursion depth to this. (default -1)
--mega-user string User name --max-size SizeSuffix Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--memprofile string Write memory profile to file --max-transfer SizeSuffix Maximum size of data to transfer. (default off)
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) --mega-debug Output more debug from Mega.
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) --mega-hard-delete Delete files permanently rather than putting them into the trash.
--modify-window duration Max time diff to be considered the same (default 1ns) --mega-pass string Password.
--no-check-certificate Do not verify the server SSL certificate. Insecure. --mega-user string User name
--no-gzip-encoding Don't set Accept-Encoding: gzip. --memprofile string Write memory profile to file
--no-traverse Obsolete - does nothing. --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--no-update-modtime Don't update destination mod-time if files identical. --min-size SizeSuffix Only transfer files bigger than this in k or suffix b|k|M|G (default off)
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only). --modify-window duration Max time diff to be considered the same (default 1ns)
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M) --no-check-certificate Do not verify the server SSL certificate. Insecure.
--onedrive-client-id string Microsoft App Client Id --no-gzip-encoding Don't set Accept-Encoding: gzip.
--onedrive-client-secret string Microsoft App Client Secret --no-traverse Don't traverse destination file system on copy.
--onedrive-drive-id string The ID of the drive to use --no-update-modtime Don't update destination mod-time if files identical.
--onedrive-drive-type string The type of the drive ( personal | business | documentLibrary ) -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-expose-onenote-files Set to make OneNote files show up in directory listings. --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--opendrive-password string Password. --onedrive-client-id string Microsoft App Client Id
--opendrive-username string Username --onedrive-client-secret string Microsoft App Client Secret
--pcloud-client-id string Pcloud App Client Id --onedrive-drive-id string The ID of the drive to use
--pcloud-client-secret string Pcloud App Client Secret --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
-P, --progress Show progress during transfer. --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
--qingstor-access-key-id string QingStor Access Key ID --opendrive-password string Password.
--qingstor-connection-retries int Number of connection retries. (default 3) --opendrive-username string Username
--qingstor-endpoint string Enter a endpoint URL to connection QingStor API. --pcloud-client-id string Pcloud App Client Id
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank. --pcloud-client-secret string Pcloud App Client Secret
--qingstor-secret-access-key string QingStor Secret Access Key (password) -P, --progress Show progress during transfer.
--qingstor-zone string Zone to connect to. --qingstor-access-key-id string QingStor Access Key ID
-q, --quiet Print as little stuff as possible --qingstor-chunk-size SizeSuffix Chunk size to use for uploading. (default 4M)
--rc Enable the remote control server. --qingstor-connection-retries int Number of connection retries. (default 3)
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") --qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate) --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
--rc-client-ca string Client certificate authority to verify clients with --qingstor-secret-access-key string QingStor Secret Access Key (password)
--rc-files string Path to local files to serve on the HTTP server. --qingstor-upload-concurrency int Concurrency for multipart uploads. (default 1)
--rc-htpasswd string htpasswd file - if not provided no authentication is done --qingstor-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M)
--rc-key string SSL PEM Private key --qingstor-zone string Zone to connect to.
--rc-max-header-bytes int Maximum size of request header (default 4096) -q, --quiet Print as little stuff as possible
--rc-no-auth Don't require auth for certain methods. --rc Enable the remote control server.
--rc-pass string Password for authentication. --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
--rc-realm string realm for authentication (default "rclone") --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--rc-serve Enable the serving of remote objects. --rc-client-ca string Client certificate authority to verify clients with
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) --rc-files string Path to local files to serve on the HTTP server.
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) --rc-htpasswd string htpasswd file - if not provided no authentication is done
--rc-user string User name for authentication. --rc-key string SSL PEM Private key
--retries int Retry operations this many times if they fail (default 3) --rc-max-header-bytes int Maximum size of request header (default 4096)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) --rc-no-auth Don't require auth for certain methods.
--s3-access-key-id string AWS Access Key ID. --rc-pass string Password for authentication.
--s3-acl string Canned ACL used when creating buckets and storing or copying objects. --rc-realm string realm for authentication (default "rclone")
--s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M) --rc-serve Enable the serving of remote objects.
--s3-disable-checksum Don't store MD5 checksum with object metadata --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--s3-endpoint string Endpoint for S3 API. --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). --rc-user string User name for authentication.
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true) --retries int Retry operations this many times if they fail (default 3)
--s3-location-constraint string Location constraint - must be set to match the Region. --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--s3-provider string Choose your S3 provider. --s3-access-key-id string AWS Access Key ID.
--s3-region string Region to connect to. --s3-acl string Canned ACL used when creating buckets and storing or copying objects.
--s3-secret-access-key string AWS Secret Access Key (password) --s3-bucket-acl string Canned ACL used when creating buckets.
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
--s3-session-token string An AWS session token --s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. --s3-endpoint string Endpoint for S3 API.
--s3-storage-class string The storage class to use when storing new objects in S3. --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2) --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--s3-v2-auth If true use v2 authentication. --s3-location-constraint string Location constraint - must be set to match the Region.
--sftp-ask-password Allow asking for SFTP password when needed. --s3-provider string Choose your S3 provider.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. --s3-region string Region to connect to.
--sftp-host string SSH host to connect to --s3-secret-access-key string AWS Secret Access Key (password)
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent. --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--sftp-pass string SSH password, leave blank to use ssh-agent. --s3-session-token string An AWS session token
--sftp-path-override string Override path used by SSH connection. --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--sftp-port string SSH port, leave blank to use default (22) --s3-storage-class string The storage class to use when storing new objects in S3.
--sftp-set-modtime Set the modified time on the remote if set. (default true) --s3-upload-concurrency int Concurrency for multipart uploads. (default 4)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker. --s3-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M)
--sftp-user string SSH username, leave blank for current username, ncw --s3-v2-auth If true use v2 authentication.
--size-only Skip based on size only, not mod-time or checksum --sftp-ask-password Allow asking for SFTP password when needed.
--skip-links Don't warn about skipped symlinks. --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) --sftp-host string SSH host to connect to
      --sftp-key-file string   Path to PEM-encoded private key file, leave blank or set key-use-agent to use ssh-agent.
      --sftp-key-file-pass string   The passphrase to decrypt the PEM-encoded private key file.
      --sftp-key-use-agent   When set forces the usage of the ssh-agent.
      --sftp-pass string   SSH password, leave blank to use ssh-agent.
      --sftp-path-override string   Override path used by SSH connection.
      --sftp-port string   SSH port, leave blank to use default (22)
      --sftp-set-modtime   Set the modified time on the remote if set. (default true)
      --sftp-use-insecure-cipher   Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
      --sftp-user string   SSH username, leave blank for current username, ncw
      --size-only   Skip based on size only, not mod-time or checksum
      --skip-links   Don't warn about skipped symlinks.
      --stats duration   Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
      --stats-file-name-length int   Max file name length in stats. 0 for no limit (default 45)
      --stats-log-level string   Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
      --stats-one-line   Make the stats fit on one line.
      --stats-unit string   Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
      --streaming-upload-cutoff SizeSuffix   Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
      --suffix string   Suffix for use with --backup-dir.
      --swift-application-credential-id string   Application Credential ID (OS_APPLICATION_CREDENTIAL_ID)
      --swift-application-credential-name string   Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME)
      --swift-application-credential-secret string   Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET)
      --swift-auth string   Authentication URL for server (OS_AUTH_URL).
      --swift-auth-token string   Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
      --swift-auth-version int   AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
      --swift-chunk-size SizeSuffix   Above this size files will be chunked into a _segments container. (default 5G)
      --swift-domain string   User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
      --swift-endpoint-type string   Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
      --swift-env-auth   Get swift credentials from environment variables in standard OpenStack form.
      --swift-key string   API key or password (OS_PASSWORD).
      --swift-no-chunk   Don't chunk files during streaming upload.
      --swift-region string   Region name - optional (OS_REGION_NAME)
      --swift-storage-policy string   The storage policy to use when creating a new container
      --swift-storage-url string   Storage URL - optional (OS_STORAGE_URL)
      --swift-tenant string   Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
      --swift-tenant-domain string   Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
      --swift-tenant-id string   Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
      --swift-user string   User name to log in (OS_USERNAME).
      --swift-user-id string   User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
      --syslog   Use Syslog for logging
      --syslog-facility string   Facility for syslog, eg KERN,USER,... (default "DAEMON")
      --timeout duration   IO idle timeout (default 5m0s)
      --tpslimit float   Limit HTTP transactions per second to this.
      --tpslimit-burst int   Max burst of transactions for --tpslimit. (default 1)
      --track-renames   When synchronizing, track file renames and do a server side move if possible
      --transfers int   Number of file transfers to run in parallel. (default 4)
      --union-remotes string   List of space separated remotes.
  -u, --update   Skip files that are newer on the destination.
      --use-cookies   Enable session cookiejar.
      --use-mmap   Use mmap allocator (see docs).
      --use-server-modtime   Use server modified time instead of object metadata
      --user-agent string   Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46")
  -v, --verbose count   Print lots more stuff (repeat for more)
      --webdav-bearer-token string   Bearer token instead of user/pass (eg a Macaroon)
      --webdav-pass string   Password.
      --webdav-url string   URL of http host to connect to
      --webdav-user string   User name
      --webdav-vendor string   Name of the Webdav site/service/software you are using
      --yandex-client-id string   Yandex Client Id
      --yandex-client-secret string   Yandex Client Secret
      --yandex-unlink   Remove existing public link to file/folder with link command rather than creating.
```
### SEE ALSO
* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
###### Auto generated by spf13/cobra on 9-Feb-2019


@@ -1,5 +1,5 @@
---
date: 2019-02-09T10:42:18Z
title: "rclone lsjson"
slug: rclone_lsjson
url: /commands/rclone_lsjson/
@@ -42,7 +42,13 @@ If "remote:path" contains the file "subfolder/file.txt", the Path for "file.txt"
will be "subfolder/file.txt", not "remote:path/subfolder/file.txt".
When used without --recursive the Path will always be the same as Name.

The time is in RFC3339 format with up to nanosecond precision. The
number of decimal digits in the seconds will depend on the precision
that the remote can hold the times, so if times are accurate to the
nearest millisecond (eg Google Drive) then 3 digits will always be
shown ("2017-05-31T16:15:57.034+01:00") whereas if the times are
accurate to the nearest second (Dropbox, Box, WebDav etc) no digits
will be shown ("2017-05-31T16:15:57+01:00").

The whole output can be processed as a JSON blob, or alternatively it
can be processed line by line as each item is written one to a line.
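As a minimal sketch of consuming that line-by-line output (not part of the rclone docs; the entries below are made-up examples, not real `rclone lsjson` output, and Python 3.7+ is assumed for `datetime.fromisoformat`):

```python
import json
from datetime import datetime

# Illustrative lines in the shape `rclone lsjson` emits, one item per line;
# these entries are invented for the example.
lines = [
    '{"Path":"file.txt","Name":"file.txt","Size":5,"ModTime":"2017-05-31T16:15:57.034+01:00","IsDir":false},',
    '{"Path":"subdir","Name":"subdir","Size":-1,"ModTime":"2017-05-31T16:15:57+01:00","IsDir":true}',
]

for line in lines:
    # Items inside the surrounding JSON array are comma-separated, so strip
    # array brackets and trailing commas before parsing each line on its own.
    item = json.loads(line.strip("[]").rstrip(","))
    # fromisoformat accepts both second- and millisecond-precision stamps
    # with a numeric UTC offset, so either ModTime form above parses.
    mod = datetime.fromisoformat(item["ModTime"])
    print(item["Path"], mod.isoformat())
```

Whether the fractional digits appear depends only on the remote's timestamp precision, so any consumer should accept both forms.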
@@ -88,285 +94,303 @@ rclone lsjson remote:path [flags]
### Options inherited from parent commands
```
      --acd-auth-url string   Auth server URL.
      --acd-client-id string   Amazon Application Client ID.
      --acd-client-secret string   Amazon Application Client Secret.
      --acd-templink-threshold SizeSuffix   Files >= this size will be downloaded via their tempLink. (default 9G)
      --acd-token-url string   Token server url.
      --acd-upload-wait-per-gb Duration   Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
      --alias-remote string   Remote or path to alias.
      --ask-password   Allow prompt for password for encrypted configuration. (default true)
      --auto-confirm   If enabled, do not request console confirmation.
      --azureblob-access-tier string   Access tier of blob: hot, cool or archive.
      --azureblob-account string   Storage Account Name (leave blank to use connection string or SAS URL)
      --azureblob-chunk-size SizeSuffix   Upload chunk size (<= 100MB). (default 4M)
      --azureblob-endpoint string   Endpoint for the service
      --azureblob-key string   Storage Account Key (leave blank to use connection string or SAS URL)
      --azureblob-list-chunk int   Size of blob list. (default 5000)
      --azureblob-sas-url string   SAS URL for container level access only
      --azureblob-upload-cutoff SizeSuffix   Cutoff for switching to chunked upload (<= 256MB). (default 256M)
      --b2-account string   Account ID or Application Key ID
      --b2-chunk-size SizeSuffix   Upload chunk size. Must fit in memory. (default 96M)
      --b2-disable-checksum   Disable checksums for large (> upload cutoff) files
      --b2-endpoint string   Endpoint for the service.
      --b2-hard-delete   Permanently delete files on remote removal, otherwise hide files.
      --b2-key string   Application Key
      --b2-test-mode string   A flag string for X-Bz-Test-Mode header for debugging.
      --b2-upload-cutoff SizeSuffix   Cutoff for switching to chunked upload. (default 200M)
      --b2-versions   Include old versions in directory listings.
      --backup-dir string   Make backups into hierarchy based in DIR.
      --bind string   Local address to bind to for outgoing connections, IPv4, IPv6 or name.
      --box-client-id string   Box App Client Id.
      --box-client-secret string   Box App Client Secret
      --box-commit-retries int   Max number of times to try committing a multipart file. (default 100)
      --box-upload-cutoff SizeSuffix   Cutoff for switching to multipart upload (>= 50MB). (default 50M)
      --buffer-size SizeSuffix   In memory buffer size when reading files for each --transfer. (default 16M)
      --bwlimit BwTimetable   Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
      --cache-chunk-clean-interval Duration   How often should the cache perform cleanups of the chunk storage. (default 1m0s)
      --cache-chunk-no-memory   Disable the in-memory cache for storing chunks during streaming.
      --cache-chunk-path string   Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend")
      --cache-chunk-size SizeSuffix   The size of a chunk (partial file data). (default 5M)
      --cache-chunk-total-size SizeSuffix   The total size that the chunks can take up on the local disk. (default 10G)
      --cache-db-path string   Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend")
      --cache-db-purge   Clear all the cached data for this remote on start.
      --cache-db-wait-time Duration   How long to wait for the DB to be available - 0 is unlimited (default 1s)
      --cache-dir string   Directory rclone will use for caching. (default "$HOME/.cache/rclone")
      --cache-info-age Duration   How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
      --cache-plex-insecure string   Skip all certificate verifications when connecting to the Plex server
      --cache-plex-password string   The password of the Plex user
      --cache-plex-url string   The URL of the Plex server
      --cache-plex-username string   The username of the Plex user
      --cache-read-retries int   How many times to retry a read from a cache storage. (default 10)
      --cache-remote string   Remote to cache.
      --cache-rps int   Limits the number of requests per second to the source FS (-1 to disable) (default -1)
      --cache-tmp-upload-path string   Directory to keep temporary files until they are uploaded.
      --cache-tmp-wait-time Duration   How long should files be stored in local cache before being uploaded (default 15s)
      --cache-workers int   How many workers should run in parallel to download chunks. (default 4)
      --cache-writes   Cache file data on writes through the FS
      --checkers int   Number of checkers to run in parallel. (default 8)
  -c, --checksum   Skip based on checksum (if available) & size, not mod-time & size
      --config string   Config file. (default "/home/ncw/.rclone.conf")
      --contimeout duration   Connect timeout (default 1m0s)
  -L, --copy-links   Follow symlinks and copy the pointed to item.
      --cpuprofile string   Write cpu profile to file
      --crypt-directory-name-encryption   Option to either encrypt directory names or leave them intact. (default true)
      --crypt-filename-encryption string   How to encrypt the filenames. (default "standard")
      --crypt-password string   Password or pass phrase for encryption.
      --crypt-password2 string   Password or pass phrase for salt. Optional but recommended.
      --crypt-remote string   Remote to encrypt/decrypt.
      --crypt-show-mapping   For all files listed show how the names encrypt.
      --delete-after   When synchronizing, delete files on destination after transferring (default)
      --delete-before   When synchronizing, delete files on destination before transferring
      --delete-during   When synchronizing, delete files during transfer
      --delete-excluded   Delete files on dest excluded from sync
      --disable string   Disable a comma separated list of features. Use help to see a list.
      --drive-acknowledge-abuse   Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
      --drive-allow-import-name-change   Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
      --drive-alternate-export   Use alternate export URLs for google documents export.,
      --drive-auth-owner-only   Only consider files owned by the authenticated user.
      --drive-chunk-size SizeSuffix   Upload chunk size. Must a power of 2 >= 256k. (default 8M)
      --drive-client-id string   Google Application Client Id
      --drive-client-secret string   Google Application Client Secret
      --drive-export-formats string   Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
      --drive-formats string   Deprecated: see export_formats
      --drive-impersonate string   Impersonate this user when using a service account.
      --drive-import-formats string   Comma separated list of preferred formats for uploading Google docs.
      --drive-keep-revision-forever   Keep new head revision of each file forever.
      --drive-list-chunk int   Size of listing chunk 100-1000. 0 to disable. (default 1000)
      --drive-pacer-burst int   Number of API calls to allow without sleeping. (default 100)
      --drive-pacer-min-sleep Duration   Minimum time to sleep between API calls. (default 100ms)
      --drive-root-folder-id string   ID of the root folder
      --drive-scope string   Scope that rclone should use when requesting access from drive.
      --drive-service-account-credentials string   Service Account Credentials JSON blob
      --drive-service-account-file string   Service Account Credentials JSON file path
      --drive-shared-with-me   Only show files that are shared with me.
      --drive-skip-gdocs   Skip google documents in all listings.
      --drive-team-drive string   ID of the Team Drive
      --drive-trashed-only   Only show files that are in the trash.
      --drive-upload-cutoff SizeSuffix   Cutoff for switching to chunked upload (default 8M)
      --drive-use-created-date   Use file created date instead of modified date.,
      --drive-use-trash   Send files to the trash instead of deleting permanently. (default true)
      --drive-v2-download-min-size SizeSuffix   If Object's are greater, use drive v2 API to download. (default off)
      --dropbox-chunk-size SizeSuffix   Upload chunk size. (< 150M). (default 48M)
      --dropbox-client-id string   Dropbox App Client Id
      --dropbox-client-secret string   Dropbox App Client Secret
      --dropbox-impersonate string   Impersonate this user when using a business account.
  -n, --dry-run   Do a trial run with no permanent changes
      --dump DumpFlags   List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
      --dump-bodies   Dump HTTP headers and bodies - may contain sensitive info
      --dump-headers   Dump HTTP bodies - may contain sensitive info
      --exclude stringArray   Exclude files matching pattern
      --exclude-from stringArray   Read exclude patterns from file
      --exclude-if-present string   Exclude directories if filename is present
      --fast-list   Use recursive list if available. Uses more memory but fewer transactions.
      --files-from stringArray   Read list of source-file names from file
  -f, --filter stringArray   Add a file-filtering rule
      --filter-from stringArray   Read filtering patterns from a file
      --ftp-host string   FTP host to connect to
      --ftp-pass string   FTP password
      --ftp-port string   FTP port, leave blank to use default (21)
      --ftp-user string   FTP username, leave blank for current username, $USER
      --gcs-bucket-acl string   Access Control List for new buckets.
      --gcs-client-id string   Google Application Client Id
      --gcs-client-secret string   Google Application Client Secret
      --gcs-location string   Location for the newly created buckets.
      --gcs-object-acl string   Access Control List for new objects.
      --gcs-project-number string   Project number.
      --gcs-service-account-file string   Service Account Credentials JSON file path
      --gcs-storage-class string   The storage class to use when storing objects in Google Cloud Storage.
      --http-url string   URL of http host to connect to
      --hubic-chunk-size SizeSuffix   Above this size files will be chunked into a _segments container. (default 5G)
      --hubic-client-id string   Hubic Client Id
      --hubic-client-secret string   Hubic Client Secret
      --hubic-no-chunk   Don't chunk files during streaming upload.
      --ignore-case   Ignore case in filters (case insensitive)
      --ignore-checksum   Skip post copy check of checksums.
      --ignore-errors   delete even if there are I/O errors
      --ignore-existing   Skip all files that exist on destination
      --ignore-size   Ignore size when skipping use mod-time or checksum.
  -I, --ignore-times   Don't skip files that match size and time - transfer all files
      --immutable   Do not modify files. Fail if existing files have been modified.
      --include stringArray   Include files matching pattern
      --include-from stringArray   Read include patterns from file
      --jottacloud-hard-delete   Delete files permanently rather than putting them into the trash.
      --jottacloud-md5-memory-limit SizeSuffix   Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
      --jottacloud-mountpoint string   The mountpoint to use.
      --jottacloud-unlink   Remove existing public link to file/folder with link command rather than creating.
      --jottacloud-upload-resume-limit SizeSuffix   Files bigger than this can be resumed if the upload fail's. (default 10M)
      --jottacloud-user string   User Name:
  -l, --links   Translate symlinks to/from regular files with a '.rclonelink' extension
      --local-no-check-updated   Don't check to see if the files change during upload
      --local-no-unicode-normalization   Don't apply unicode normalization to paths and filenames (Deprecated)
      --local-nounc string   Disable UNC (long path names) conversion on Windows
      --log-file string   Log everything to this file
      --log-format string   Comma separated list of log format options (default "date,time")
      --log-level string   Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
      --low-level-retries int   Number of low level retries to do. (default 10)
      --max-age Duration   Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
      --max-backlog int   Maximum number of objects in sync or check backlog. (default 10000)
      --max-delete int   When synchronizing, limit the number of deletes (default -1)
      --max-depth int   If set limits the recursion depth to this. (default -1)
      --max-size SizeSuffix   Only transfer files smaller than this in k or suffix b|k|M|G (default off)
      --max-transfer SizeSuffix   Maximum size of data to transfer. (default off)
      --mega-debug   Output more debug from Mega.
      --mega-hard-delete   Delete files permanently rather than putting them into the trash.
      --mega-pass string   Password.
      --mega-user string   User name
      --memprofile string   Write memory profile to file
      --min-age Duration   Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
      --min-size SizeSuffix   Only transfer files bigger than this in k or suffix b|k|M|G (default off)
      --modify-window duration   Max time diff to be considered the same (default 1ns)
      --no-check-certificate   Do not verify the server SSL certificate. Insecure.
      --no-gzip-encoding   Don't set Accept-Encoding: gzip.
      --no-traverse   Don't traverse destination file system on copy.
      --no-update-modtime   Don't update destination mod-time if files identical.
  -x, --one-file-system   Don't cross filesystem boundaries (unix/macOS only).
      --onedrive-chunk-size SizeSuffix   Chunk size to upload files with - must be multiple of 320k. (default 10M)
      --onedrive-client-id string   Microsoft App Client Id
      --onedrive-client-secret string   Microsoft App Client Secret
      --onedrive-drive-id string   The ID of the drive to use
      --onedrive-drive-type string   The type of the drive ( personal | business | documentLibrary )
      --onedrive-expose-onenote-files   Set to make OneNote files show up in directory listings.
--qingstor-access-key-id string QingStor Access Key ID --opendrive-password string Password.
--qingstor-connection-retries int Number of connection retries. (default 3) --opendrive-username string Username
--qingstor-endpoint string Enter a endpoint URL to connection QingStor API. --pcloud-client-id string Pcloud App Client Id
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank. --pcloud-client-secret string Pcloud App Client Secret
--qingstor-secret-access-key string QingStor Secret Access Key (password) -P, --progress Show progress during transfer.
--qingstor-zone string Zone to connect to. --qingstor-access-key-id string QingStor Access Key ID
-q, --quiet Print as little stuff as possible --qingstor-chunk-size SizeSuffix Chunk size to use for uploading. (default 4M)
--rc Enable the remote control server. --qingstor-connection-retries int Number of connection retries. (default 3)
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") --qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate) --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
--rc-client-ca string Client certificate authority to verify clients with --qingstor-secret-access-key string QingStor Secret Access Key (password)
--rc-files string Path to local files to serve on the HTTP server. --qingstor-upload-concurrency int Concurrency for multipart uploads. (default 1)
--rc-htpasswd string htpasswd file - if not provided no authentication is done --qingstor-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M)
--rc-key string SSL PEM Private key --qingstor-zone string Zone to connect to.
--rc-max-header-bytes int Maximum size of request header (default 4096) -q, --quiet Print as little stuff as possible
--rc-no-auth Don't require auth for certain methods. --rc Enable the remote control server.
--rc-pass string Password for authentication. --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
--rc-realm string realm for authentication (default "rclone") --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--rc-serve Enable the serving of remote objects. --rc-client-ca string Client certificate authority to verify clients with
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) --rc-files string Path to local files to serve on the HTTP server.
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) --rc-htpasswd string htpasswd file - if not provided no authentication is done
--rc-user string User name for authentication. --rc-key string SSL PEM Private key
--retries int Retry operations this many times if they fail (default 3) --rc-max-header-bytes int Maximum size of request header (default 4096)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) --rc-no-auth Don't require auth for certain methods.
--s3-access-key-id string AWS Access Key ID. --rc-pass string Password for authentication.
--s3-acl string Canned ACL used when creating buckets and storing or copying objects. --rc-realm string realm for authentication (default "rclone")
--s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M) --rc-serve Enable the serving of remote objects.
--s3-disable-checksum Don't store MD5 checksum with object metadata --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--s3-endpoint string Endpoint for S3 API. --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). --rc-user string User name for authentication.
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true) --retries int Retry operations this many times if they fail (default 3)
--s3-location-constraint string Location constraint - must be set to match the Region. --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--s3-provider string Choose your S3 provider. --s3-access-key-id string AWS Access Key ID.
--s3-region string Region to connect to. --s3-acl string Canned ACL used when creating buckets and storing or copying objects.
--s3-secret-access-key string AWS Secret Access Key (password) --s3-bucket-acl string Canned ACL used when creating buckets.
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
--s3-session-token string An AWS session token --s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. --s3-endpoint string Endpoint for S3 API.
--s3-storage-class string The storage class to use when storing new objects in S3. --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2) --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--s3-v2-auth If true use v2 authentication. --s3-location-constraint string Location constraint - must be set to match the Region.
--sftp-ask-password Allow asking for SFTP password when needed. --s3-provider string Choose your S3 provider.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. --s3-region string Region to connect to.
--sftp-host string SSH host to connect to --s3-secret-access-key string AWS Secret Access Key (password)
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent. --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--sftp-pass string SSH password, leave blank to use ssh-agent. --s3-session-token string An AWS session token
--sftp-path-override string Override path used by SSH connection. --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--sftp-port string SSH port, leave blank to use default (22) --s3-storage-class string The storage class to use when storing new objects in S3.
--sftp-set-modtime Set the modified time on the remote if set. (default true) --s3-upload-concurrency int Concurrency for multipart uploads. (default 4)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker. --s3-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M)
--sftp-user string SSH username, leave blank for current username, ncw --s3-v2-auth If true use v2 authentication.
--size-only Skip based on size only, not mod-time or checksum --sftp-ask-password Allow asking for SFTP password when needed.
--skip-links Don't warn about skipped symlinks. --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) --sftp-host string SSH host to connect to
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40) --sftp-key-file string Path to PEM-encoded private key file, leave blank or set key-use-agent to use ssh-agent.
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") --sftp-key-file-pass string The passphrase to decrypt the PEM-encoded private key file.
--stats-one-line Make the stats fit on one line. --sftp-key-use-agent When set forces the usage of the ssh-agent.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") --sftp-pass string SSH password, leave blank to use ssh-agent.
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) --sftp-path-override string Override path used by SSH connection.
--suffix string Suffix for use with --backup-dir. --sftp-port string SSH port, leave blank to use default (22)
--swift-auth string Authentication URL for server (OS_AUTH_URL). --sftp-set-modtime Set the modified time on the remote if set. (default true)
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) --sftp-user string SSH username, leave blank for current username, ncw
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) --size-only Skip based on size only, not mod-time or checksum
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) --skip-links Don't warn about skipped symlinks.
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form. --stats-file-name-length int Max file name length in stats. 0 for no limit (default 45)
--swift-key string API key or password (OS_PASSWORD). --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--swift-region string Region name - optional (OS_REGION_NAME) --stats-one-line Make the stats fit on one line.
--swift-storage-policy string The storage policy to use when creating a new container --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL) --streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) --suffix string Suffix for use with --backup-dir.
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) --swift-application-credential-id string Application Credential ID (OS_APPLICATION_CREDENTIAL_ID)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) --swift-application-credential-name string Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME)
--swift-user string User name to log in (OS_USERNAME). --swift-application-credential-secret string Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET)
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). --swift-auth string Authentication URL for server (OS_AUTH_URL).
--syslog Use Syslog for logging --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--timeout duration IO idle timeout (default 5m0s) --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--tpslimit float Limit HTTP transactions per second to this. --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--track-renames When synchronizing, track file renames and do a server side move if possible --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--transfers int Number of file transfers to run in parallel. (default 4) --swift-key string API key or password (OS_PASSWORD).
--union-remotes string List of space separated remotes. --swift-no-chunk Don't chunk files during streaming upload.
-u, --update Skip files that are newer on the destination. --swift-region string Region name - optional (OS_REGION_NAME)
--use-server-modtime Use server modified time instead of object metadata --swift-storage-policy string The storage policy to use when creating a new container
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.45") --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
-v, --verbose count Print lots more stuff (repeat for more) --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon) --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--webdav-pass string Password. --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--webdav-url string URL of http host to connect to --swift-user string User name to log in (OS_USERNAME).
--webdav-user string User name --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--webdav-vendor string Name of the Webdav site/service/software you are using --syslog Use Syslog for logging
--yandex-client-id string Yandex Client Id --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--yandex-client-secret string Yandex Client Secret --timeout duration IO idle timeout (default 5m0s)
--yandex-unlink Remove existing public link to file/folder with link command rather than creating. --tpslimit float Limit HTTP transactions per second to this.
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
--union-remotes string List of space separated remotes.
-u, --update Skip files that are newer on the destination.
--use-cookies Enable session cookiejar.
--use-mmap Use mmap allocator (see docs).
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
--yandex-unlink Remove existing public link to file/folder with link command rather than creating.
```
### SEE ALSO
* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
###### Auto generated by spf13/cobra on 9-Feb-2019
View File

@@ -1,5 +1,5 @@
---
date: 2019-02-09T10:42:18Z
title: "rclone lsl"
slug: rclone_lsl
url: /commands/rclone_lsl/
@@ -59,285 +59,303 @@ rclone lsl remote:path [flags]
### Options inherited from parent commands
```
--acd-auth-url string Auth server URL.
--acd-client-id string Amazon Application Client ID.
--acd-client-secret string Amazon Application Client Secret.
--acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-token-url string Token server url.
--acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--alias-remote string Remote or path to alias.
--ask-password Allow prompt for password for encrypted configuration. (default true)
--auto-confirm If enabled, do not request console confirmation.
--azureblob-access-tier string Access tier of blob: hot, cool or archive.
--azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
--azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M)
--azureblob-endpoint string Endpoint for the service
--azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
--azureblob-list-chunk int Size of blob list. (default 5000)
--azureblob-sas-url string SAS URL for container level access only
--azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M)
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--b2-disable-checksum Disable checksums for large (> upload cutoff) files
--b2-endpoint string Endpoint for the service.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-key string Application Key
--b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
--b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M)
--b2-versions Include old versions in directory listings.
--backup-dir string Make backups into hierarchy based in DIR.
--bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
--box-client-id string Box App Client Id.
--box-client-secret string Box App Client Secret
--box-commit-retries int Max number of times to try committing a multipart file. (default 100)
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
--buffer-size SizeSuffix In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
--cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend")
--cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M)
--cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G)
--cache-db-path string Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend")
--cache-db-purge Clear all the cached data for this remote on start.
--cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string Directory rclone will use for caching. (default "$HOME/.cache/rclone")
--cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
--cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server
--cache-plex-password string The password of the Plex user
--cache-plex-url string The URL of the Plex server
--cache-plex-username string The username of the Plex user
--cache-read-retries int How many times to retry a read from a cache storage. (default 10)
--cache-remote string Remote to cache.
--cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1)
--cache-tmp-upload-path string Directory to keep temporary files until they are uploaded.
--cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
--cache-workers int How many workers should run in parallel to download chunks. (default 4)
--cache-writes Cache file data on writes through the FS
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum (if available) & size, not mod-time & size
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
-L, --copy-links Follow symlinks and copy the pointed to item.
--cpuprofile string Write cpu profile to file
--crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--crypt-password string Password or pass phrase for encryption.
--crypt-password2 string Password or pass phrase for salt. Optional but recommended.
--crypt-remote string Remote to encrypt/decrypt.
--crypt-show-mapping For all files listed show how the names encrypt.
--delete-after When synchronizing, delete files on destination after transferring (default)
--delete-before When synchronizing, delete files on destination before transferring
--delete-during When synchronizing, delete files during transfer
--delete-excluded Delete files on dest excluded from sync
--disable string Disable a comma separated list of features. Use help to see a list.
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
--drive-alternate-export Use alternate export URLs for google documents export.,
--drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M)
--drive-client-id string Google Application Client Id
--drive-client-secret string Google Application Client Secret
--drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-formats string Deprecated: see export_formats
--drive-impersonate string Impersonate this user when using a service account.
--drive-import-formats string Comma separated list of preferred formats for uploading Google docs.
--drive-keep-revision-forever Keep new head revision of each file forever.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-pacer-burst int Number of API calls to allow without sleeping. (default 100)
--drive-pacer-min-sleep Duration Minimum time to sleep between API calls. (default 100ms)
--drive-root-folder-id string ID of the root folder
--drive-scope string Scope that rclone should use when requesting access from drive.
--drive-service-account-credentials string Service Account Credentials JSON blob
--drive-service-account-file string Service Account Credentials JSON file path
--drive-shared-with-me Only show files that are shared with me.
--drive-skip-gdocs Skip google documents in all listings.
--drive-team-drive string ID of the Team Drive
--drive-trashed-only Only show files that are in the trash.
--drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--drive-use-created-date Use file created date instead of modified date.,
--drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off)
--dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M)
--dropbox-client-id string Dropbox App Client Id
--dropbox-client-secret string Dropbox App Client Secret
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info --dropbox-impersonate string Impersonate this user when using a business account.
--dump-headers Dump HTTP bodies - may contain sensitive info -n, --dry-run Do a trial run with no permanent changes
--exclude stringArray Exclude files matching pattern --dump DumpFlags List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--exclude-from stringArray Read exclude patterns from file --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--exclude-if-present string Exclude directories if filename is present --dump-headers Dump HTTP bodies - may contain sensitive info
--fast-list Use recursive list if available. Uses more memory but fewer transactions. --exclude stringArray Exclude files matching pattern
--files-from stringArray Read list of source-file names from file --exclude-from stringArray Read exclude patterns from file
-f, --filter stringArray Add a file-filtering rule --exclude-if-present string Exclude directories if filename is present
--filter-from stringArray Read filtering patterns from a file --fast-list Use recursive list if available. Uses more memory but fewer transactions.
--ftp-host string FTP host to connect to --files-from stringArray Read list of source-file names from file
--ftp-pass string FTP password -f, --filter stringArray Add a file-filtering rule
--ftp-port string FTP port, leave blank to use default (21) --filter-from stringArray Read filtering patterns from a file
--ftp-user string FTP username, leave blank for current username, $USER --ftp-host string FTP host to connect to
--gcs-bucket-acl string Access Control List for new buckets. --ftp-pass string FTP password
--gcs-client-id string Google Application Client Id --ftp-port string FTP port, leave blank to use default (21)
--gcs-client-secret string Google Application Client Secret --ftp-user string FTP username, leave blank for current username, $USER
--gcs-location string Location for the newly created buckets. --gcs-bucket-acl string Access Control List for new buckets.
--gcs-object-acl string Access Control List for new objects. --gcs-client-id string Google Application Client Id
--gcs-project-number string Project number. --gcs-client-secret string Google Application Client Secret
--gcs-service-account-file string Service Account Credentials JSON file path --gcs-location string Location for the newly created buckets.
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage. --gcs-object-acl string Access Control List for new objects.
--http-url string URL of http host to connect to --gcs-project-number string Project number.
--hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) --gcs-service-account-file string Service Account Credentials JSON file path
--hubic-client-id string Hubic Client Id --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--hubic-client-secret string Hubic Client Secret --http-url string URL of http host to connect to
--ignore-case Ignore case in filters (case insensitive) --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--ignore-checksum Skip post copy check of checksums. --hubic-client-id string Hubic Client Id
--ignore-errors delete even if there are I/O errors --hubic-client-secret string Hubic Client Secret
--ignore-existing Skip all files that exist on destination --hubic-no-chunk Don't chunk files during streaming upload.
--ignore-size Ignore size when skipping use mod-time or checksum. --ignore-case Ignore case in filters (case insensitive)
-I, --ignore-times Don't skip files that match size and time - transfer all files --ignore-checksum Skip post copy check of checksums.
--immutable Do not modify files. Fail if existing files have been modified. --ignore-errors delete even if there are I/O errors
--include stringArray Include files matching pattern --ignore-existing Skip all files that exist on destination
--include-from stringArray Read include patterns from file --ignore-size Ignore size when skipping use mod-time or checksum.
--jottacloud-hard-delete Delete files permanently rather than putting them into the trash. -I, --ignore-times Don't skip files that match size and time - transfer all files
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M) --immutable Do not modify files. Fail if existing files have been modified.
--jottacloud-mountpoint string The mountpoint to use. --include stringArray Include files matching pattern
--jottacloud-pass string Password. --include-from stringArray Read include patterns from file
--jottacloud-unlink Remove existing public link to file/folder with link command rather than creating. --jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
--jottacloud-user string User Name --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--local-no-check-updated Don't check to see if the files change during upload --jottacloud-mountpoint string The mountpoint to use.
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated) --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
--local-nounc string Disable UNC (long path names) conversion on Windows --jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fail's. (default 10M)
--log-file string Log everything to this file --jottacloud-user string User Name:
--log-format string Comma separated list of log format options (default "date,time") -l, --links Translate symlinks to/from regular files with a '.rclonelink' extension
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") --local-no-check-updated Don't check to see if the files change during upload
--low-level-retries int Number of low level retries to do. (default 10) --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) --local-nounc string Disable UNC (long path names) conversion on Windows
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000) --log-file string Log everything to this file
--max-delete int When synchronizing, limit the number of deletes (default -1) --log-format string Comma separated list of log format options (default "date,time")
--max-depth int If set limits the recursion depth to this. (default -1) --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) --low-level-retries int Number of low level retries to do. (default 10)
--max-transfer int Maximum size of data to transfer. (default off) --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--mega-debug Output more debug from Mega. --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--mega-hard-delete Delete files permanently rather than putting them into the trash. --max-delete int When synchronizing, limit the number of deletes (default -1)
--mega-pass string Password. --max-depth int If set limits the recursion depth to this. (default -1)
--mega-user string User name --max-size SizeSuffix Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--memprofile string Write memory profile to file --max-transfer SizeSuffix Maximum size of data to transfer. (default off)
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) --mega-debug Output more debug from Mega.
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) --mega-hard-delete Delete files permanently rather than putting them into the trash.
--modify-window duration Max time diff to be considered the same (default 1ns) --mega-pass string Password.
--no-check-certificate Do not verify the server SSL certificate. Insecure. --mega-user string User name
--no-gzip-encoding Don't set Accept-Encoding: gzip. --memprofile string Write memory profile to file
--no-traverse Obsolete - does nothing. --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--no-update-modtime Don't update destination mod-time if files identical. --min-size SizeSuffix Only transfer files bigger than this in k or suffix b|k|M|G (default off)
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only). --modify-window duration Max time diff to be considered the same (default 1ns)
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M) --no-check-certificate Do not verify the server SSL certificate. Insecure.
--onedrive-client-id string Microsoft App Client Id --no-gzip-encoding Don't set Accept-Encoding: gzip.
--onedrive-client-secret string Microsoft App Client Secret --no-traverse Don't traverse destination file system on copy.
--onedrive-drive-id string The ID of the drive to use --no-update-modtime Don't update destination mod-time if files identical.
--onedrive-drive-type string The type of the drive ( personal | business | documentLibrary ) -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-expose-onenote-files Set to make OneNote files show up in directory listings. --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--opendrive-password string Password. --onedrive-client-id string Microsoft App Client Id
--opendrive-username string Username --onedrive-client-secret string Microsoft App Client Secret
--pcloud-client-id string Pcloud App Client Id --onedrive-drive-id string The ID of the drive to use
--pcloud-client-secret string Pcloud App Client Secret --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
-P, --progress Show progress during transfer. --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
--qingstor-access-key-id string QingStor Access Key ID --opendrive-password string Password.
--qingstor-connection-retries int Number of connection retries. (default 3) --opendrive-username string Username
--qingstor-endpoint string Enter a endpoint URL to connection QingStor API. --pcloud-client-id string Pcloud App Client Id
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank. --pcloud-client-secret string Pcloud App Client Secret
--qingstor-secret-access-key string QingStor Secret Access Key (password) -P, --progress Show progress during transfer.
--qingstor-zone string Zone to connect to. --qingstor-access-key-id string QingStor Access Key ID
-q, --quiet Print as little stuff as possible --qingstor-chunk-size SizeSuffix Chunk size to use for uploading. (default 4M)
--rc Enable the remote control server. --qingstor-connection-retries int Number of connection retries. (default 3)
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") --qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate) --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
--rc-client-ca string Client certificate authority to verify clients with --qingstor-secret-access-key string QingStor Secret Access Key (password)
--rc-files string Path to local files to serve on the HTTP server. --qingstor-upload-concurrency int Concurrency for multipart uploads. (default 1)
--rc-htpasswd string htpasswd file - if not provided no authentication is done --qingstor-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M)
--rc-key string SSL PEM Private key --qingstor-zone string Zone to connect to.
--rc-max-header-bytes int Maximum size of request header (default 4096) -q, --quiet Print as little stuff as possible
--rc-no-auth Don't require auth for certain methods. --rc Enable the remote control server.
--rc-pass string Password for authentication. --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
--rc-realm string realm for authentication (default "rclone") --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--rc-serve Enable the serving of remote objects. --rc-client-ca string Client certificate authority to verify clients with
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) --rc-files string Path to local files to serve on the HTTP server.
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) --rc-htpasswd string htpasswd file - if not provided no authentication is done
--rc-user string User name for authentication. --rc-key string SSL PEM Private key
--retries int Retry operations this many times if they fail (default 3) --rc-max-header-bytes int Maximum size of request header (default 4096)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) --rc-no-auth Don't require auth for certain methods.
--s3-access-key-id string AWS Access Key ID. --rc-pass string Password for authentication.
--s3-acl string Canned ACL used when creating buckets and storing or copying objects. --rc-realm string realm for authentication (default "rclone")
--s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M) --rc-serve Enable the serving of remote objects.
--s3-disable-checksum Don't store MD5 checksum with object metadata --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--s3-endpoint string Endpoint for S3 API. --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). --rc-user string User name for authentication.
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true) --retries int Retry operations this many times if they fail (default 3)
--s3-location-constraint string Location constraint - must be set to match the Region. --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--s3-provider string Choose your S3 provider. --s3-access-key-id string AWS Access Key ID.
--s3-region string Region to connect to. --s3-acl string Canned ACL used when creating buckets and storing or copying objects.
--s3-secret-access-key string AWS Secret Access Key (password) --s3-bucket-acl string Canned ACL used when creating buckets.
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
--s3-session-token string An AWS session token --s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. --s3-endpoint string Endpoint for S3 API.
--s3-storage-class string The storage class to use when storing new objects in S3. --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2) --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--s3-v2-auth If true use v2 authentication. --s3-location-constraint string Location constraint - must be set to match the Region.
--sftp-ask-password Allow asking for SFTP password when needed. --s3-provider string Choose your S3 provider.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. --s3-region string Region to connect to.
--sftp-host string SSH host to connect to --s3-secret-access-key string AWS Secret Access Key (password)
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent. --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--sftp-pass string SSH password, leave blank to use ssh-agent. --s3-session-token string An AWS session token
--sftp-path-override string Override path used by SSH connection. --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--sftp-port string SSH port, leave blank to use default (22) --s3-storage-class string The storage class to use when storing new objects in S3.
--sftp-set-modtime Set the modified time on the remote if set. (default true) --s3-upload-concurrency int Concurrency for multipart uploads. (default 4)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker. --s3-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M)
--sftp-user string SSH username, leave blank for current username, ncw --s3-v2-auth If true use v2 authentication.
--size-only Skip based on size only, not mod-time or checksum --sftp-ask-password Allow asking for SFTP password when needed.
--skip-links Don't warn about skipped symlinks. --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) --sftp-host string SSH host to connect to
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40) --sftp-key-file string Path to PEM-encoded private key file, leave blank or set key-use-agent to use ssh-agent.
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") --sftp-key-file-pass string The passphrase to decrypt the PEM-encoded private key file.
--stats-one-line Make the stats fit on one line. --sftp-key-use-agent When set forces the usage of the ssh-agent.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") --sftp-pass string SSH password, leave blank to use ssh-agent.
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) --sftp-path-override string Override path used by SSH connection.
--suffix string Suffix for use with --backup-dir. --sftp-port string SSH port, leave blank to use default (22)
--swift-auth string Authentication URL for server (OS_AUTH_URL). --sftp-set-modtime Set the modified time on the remote if set. (default true)
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) --sftp-user string SSH username, leave blank for current username, ncw
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) --size-only Skip based on size only, not mod-time or checksum
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) --skip-links Don't warn about skipped symlinks.
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form. --stats-file-name-length int Max file name length in stats. 0 for no limit (default 45)
--swift-key string API key or password (OS_PASSWORD). --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--swift-region string Region name - optional (OS_REGION_NAME) --stats-one-line Make the stats fit on one line.
--swift-storage-policy string The storage policy to use when creating a new container --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL) --streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) --suffix string Suffix for use with --backup-dir.
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) --swift-application-credential-id string Application Credential ID (OS_APPLICATION_CREDENTIAL_ID)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) --swift-application-credential-name string Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME)
--swift-user string User name to log in (OS_USERNAME). --swift-application-credential-secret string Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET)
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). --swift-auth string Authentication URL for server (OS_AUTH_URL).
--syslog Use Syslog for logging --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--timeout duration IO idle timeout (default 5m0s) --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--tpslimit float Limit HTTP transactions per second to this. --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--track-renames When synchronizing, track file renames and do a server side move if possible --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--transfers int Number of file transfers to run in parallel. (default 4) --swift-key string API key or password (OS_PASSWORD).
--union-remotes string List of space separated remotes. --swift-no-chunk Don't chunk files during streaming upload.
-u, --update Skip files that are newer on the destination. --swift-region string Region name - optional (OS_REGION_NAME)
--use-server-modtime Use server modified time instead of object metadata --swift-storage-policy string The storage policy to use when creating a new container
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.45") --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
-v, --verbose count Print lots more stuff (repeat for more) --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon) --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--webdav-pass string Password. --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--webdav-url string URL of http host to connect to --swift-user string User name to log in (OS_USERNAME).
--webdav-user string User name --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--webdav-vendor string Name of the Webdav site/service/software you are using --syslog Use Syslog for logging
--yandex-client-id string Yandex Client Id --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--yandex-client-secret string Yandex Client Secret --timeout duration IO idle timeout (default 5m0s)
--yandex-unlink Remove existing public link to file/folder with link command rather than creating. --tpslimit float Limit HTTP transactions per second to this.
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
--union-remotes string List of space separated remotes.
-u, --update Skip files that are newer on the destination.
--use-cookies Enable session cookiejar.
--use-mmap Use mmap allocator (see docs).
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
--yandex-unlink Remove existing public link to file/folder with link command rather than creating.
```
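Many of the flags above take rclone's `SizeSuffix` type (for example `--b2-chunk-size 96M` or `--swift-chunk-size 5G`), where the `k`, `M` and `G` suffixes are binary multiples of 1024. As a rough illustration of how such values map to byte counts (`parse_size_suffix` below is a hypothetical helper written for this sketch, not rclone's actual parser):

```python
import re

# Binary multipliers, matching the 1024-based k|M|G suffixes used by
# rclone's SizeSuffix flags; a bare number is taken as bytes.
_MULTIPLIERS = {"b": 1, "k": 1024, "m": 1024 ** 2, "g": 1024 ** 3}

def parse_size_suffix(text: str) -> int:
    """Convert a value like '96M' or '100k' into a number of bytes."""
    match = re.fullmatch(r"(\d+(?:\.\d+)?)([bkMG]?)", text, flags=re.IGNORECASE)
    if not match:
        raise ValueError(f"invalid size: {text!r}")
    number, suffix = match.groups()
    multiplier = _MULTIPLIERS[suffix.lower() or "b"]
    return int(float(number) * multiplier)

print(parse_size_suffix("96M"))   # 100663296
print(parse_size_suffix("5G"))    # 5368709120
print(parse_size_suffix("100k"))  # 102400
```

The same suffix letters appear in `--min-size`/`--max-size` and `--bwlimit`, so one mental model covers all of them.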
### SEE ALSO
* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
###### Auto generated by spf13/cobra on 9-Feb-2019
View File
@ -1,5 +1,5 @@
---
date: 2019-02-09T10:42:18Z
title: "rclone md5sum"
slug: rclone_md5sum
url: /commands/rclone_md5sum/
@ -28,285 +28,303 @@ rclone md5sum remote:path [flags]
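As the usage line above shows, `rclone md5sum remote:path` prints one line per file, in the same `<md5-hex-digest>  <name>` shape as the Unix `md5sum` tool. A minimal local sketch of producing that output with Python's standard `hashlib` (the file names and contents here are made-up in-memory stand-ins, not data from a real remote):

```python
import hashlib

# Hypothetical stand-ins for objects on a remote; rclone would read the
# real file contents (or ask the backend for a stored hash) instead.
files = {
    "hello.txt": b"hello\n",
    "empty.bin": b"",
}

for name, data in sorted(files.items()):
    digest = hashlib.md5(data).hexdigest()
    # Two spaces between digest and name, as in md5sum output.
    print(f"{digest}  {name}")
```

Checking such a listing against a local `md5sum` run is a quick way to verify a transfer out-of-band.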
### Options inherited from parent commands
```
--acd-auth-url string Auth server URL.
--acd-client-id string Amazon Application Client ID.
--acd-client-secret string Amazon Application Client Secret.
--acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-token-url string Token server url.
--acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--alias-remote string Remote or path to alias.
--ask-password Allow prompt for password for encrypted configuration. (default true)
--auto-confirm If enabled, do not request console confirmation.
--azureblob-access-tier string Access tier of blob: hot, cool or archive.
--azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
--azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M)
--azureblob-endpoint string Endpoint for the service
--azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
--azureblob-list-chunk int Size of blob list. (default 5000)
--azureblob-sas-url string SAS URL for container level access only
--azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M)
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--b2-disable-checksum Disable checksums for large (> upload cutoff) files
--b2-endpoint string Endpoint for the service.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-key string Application Key
--b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
--b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M)
--b2-versions Include old versions in directory listings.
--backup-dir string Make backups into hierarchy based in DIR.
--bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
--box-client-id string Box App Client Id.
--box-client-secret string Box App Client Secret
--box-commit-retries int Max number of times to try committing a multipart file. (default 100)
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
--buffer-size SizeSuffix In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
--cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend")
--cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G) --cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M)
--cache-db-path string Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend") --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G)
--cache-db-purge Clear all the cached data for this remote on start. --cache-db-path string Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend")
--cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s) --cache-db-purge Clear all the cached data for this remote on start.
--cache-dir string Directory rclone will use for caching. (default "$HOME/.cache/rclone") --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s) --cache-dir string Directory rclone will use for caching. (default "$HOME/.cache/rclone")
--cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
--cache-plex-password string The password of the Plex user --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server
--cache-plex-url string The URL of the Plex server --cache-plex-password string The password of the Plex user
--cache-plex-username string The username of the Plex user --cache-plex-url string The URL of the Plex server
--cache-read-retries int How many times to retry a read from a cache storage. (default 10) --cache-plex-username string The username of the Plex user
--cache-remote string Remote to cache. --cache-read-retries int How many times to retry a read from a cache storage. (default 10)
--cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1) --cache-remote string Remote to cache.
--cache-tmp-upload-path string Directory to keep temporary files until they are uploaded. --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1)
--cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s) --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded.
--cache-workers int How many workers should run in parallel to download chunks. (default 4) --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
--cache-writes Cache file data on writes through the FS --cache-workers int How many workers should run in parallel to download chunks. (default 4)
--checkers int Number of checkers to run in parallel. (default 8) --cache-writes Cache file data on writes through the FS
-c, --checksum Skip based on checksum & size, not mod-time & size --checkers int Number of checkers to run in parallel. (default 8)
--config string Config file. (default "/home/ncw/.rclone.conf") -c, --checksum Skip based on checksum (if available) & size, not mod-time & size
--contimeout duration Connect timeout (default 1m0s) --config string Config file. (default "/home/ncw/.rclone.conf")
-L, --copy-links Follow symlinks and copy the pointed to item. --contimeout duration Connect timeout (default 1m0s)
--cpuprofile string Write cpu profile to file -L, --copy-links Follow symlinks and copy the pointed to item.
--crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true) --cpuprofile string Write cpu profile to file
--crypt-filename-encryption string How to encrypt the filenames. (default "standard") --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--crypt-password string Password or pass phrase for encryption. --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--crypt-password2 string Password or pass phrase for salt. Optional but recommended. --crypt-password string Password or pass phrase for encryption.
--crypt-remote string Remote to encrypt/decrypt. --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
--crypt-show-mapping For all files listed show how the names encrypt. --crypt-remote string Remote to encrypt/decrypt.
--delete-after When synchronizing, delete files on destination after transferring (default) --crypt-show-mapping For all files listed show how the names encrypt.
--delete-before When synchronizing, delete files on destination before transferring --delete-after When synchronizing, delete files on destination after transferring (default)
--delete-during When synchronizing, delete files during transfer --delete-before When synchronizing, delete files on destination before transferring
--delete-excluded Delete files on dest excluded from sync --delete-during When synchronizing, delete files during transfer
--disable string Disable a comma separated list of features. Use help to see a list. --delete-excluded Delete files on dest excluded from sync
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded. --disable string Disable a comma separated list of features. Use help to see a list.
--drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time. --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--drive-alternate-export Use alternate export URLs for google documents export., --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
--drive-auth-owner-only Only consider files owned by the authenticated user. --drive-alternate-export Use alternate export URLs for google documents export.,
--drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M) --drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-client-id string Google Application Client Id --drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M)
--drive-client-secret string Google Application Client Secret --drive-client-id string Google Application Client Id
--drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") --drive-client-secret string Google Application Client Secret
--drive-formats string Deprecated: see export_formats --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-impersonate string Impersonate this user when using a service account. --drive-formats string Deprecated: see export_formats
--drive-import-formats string Comma separated list of preferred formats for uploading Google docs. --drive-impersonate string Impersonate this user when using a service account.
--drive-keep-revision-forever Keep new head revision of each file forever. --drive-import-formats string Comma separated list of preferred formats for uploading Google docs.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000) --drive-keep-revision-forever Keep new head revision of each file forever.
--drive-root-folder-id string ID of the root folder --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-scope string Scope that rclone should use when requesting access from drive. --drive-pacer-burst int Number of API calls to allow without sleeping. (default 100)
--drive-service-account-credentials string Service Account Credentials JSON blob --drive-pacer-min-sleep Duration Minimum time to sleep between API calls. (default 100ms)
--drive-service-account-file string Service Account Credentials JSON file path --drive-root-folder-id string ID of the root folder
--drive-shared-with-me Only show files that are shared with me. --drive-scope string Scope that rclone should use when requesting access from drive.
--drive-skip-gdocs Skip google documents in all listings. --drive-service-account-credentials string Service Account Credentials JSON blob
--drive-team-drive string ID of the Team Drive --drive-service-account-file string Service Account Credentials JSON file path
--drive-trashed-only Only show files that are in the trash. --drive-shared-with-me Only show files that are shared with me.
--drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M) --drive-skip-gdocs Skip google documents in all listings.
--drive-use-created-date Use file created date instead of modified date., --drive-team-drive string ID of the Team Drive
--drive-use-trash Send files to the trash instead of deleting permanently. (default true) --drive-trashed-only Only show files that are in the trash.
--drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off) --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M) --drive-use-created-date Use file created date instead of modified date.,
--dropbox-client-id string Dropbox App Client Id --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--dropbox-client-secret string Dropbox App Client Secret --drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off)
--dropbox-impersonate string Impersonate this user when using a business account. --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M)
-n, --dry-run Do a trial run with no permanent changes --dropbox-client-id string Dropbox App Client Id
--dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles --dropbox-client-secret string Dropbox App Client Secret
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info --dropbox-impersonate string Impersonate this user when using a business account.
--dump-headers Dump HTTP bodies - may contain sensitive info -n, --dry-run Do a trial run with no permanent changes
--exclude stringArray Exclude files matching pattern --dump DumpFlags List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--exclude-from stringArray Read exclude patterns from file --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--exclude-if-present string Exclude directories if filename is present --dump-headers Dump HTTP bodies - may contain sensitive info
--fast-list Use recursive list if available. Uses more memory but fewer transactions. --exclude stringArray Exclude files matching pattern
--files-from stringArray Read list of source-file names from file --exclude-from stringArray Read exclude patterns from file
-f, --filter stringArray Add a file-filtering rule --exclude-if-present string Exclude directories if filename is present
--filter-from stringArray Read filtering patterns from a file --fast-list Use recursive list if available. Uses more memory but fewer transactions.
--ftp-host string FTP host to connect to --files-from stringArray Read list of source-file names from file
--ftp-pass string FTP password -f, --filter stringArray Add a file-filtering rule
--ftp-port string FTP port, leave blank to use default (21) --filter-from stringArray Read filtering patterns from a file
--ftp-user string FTP username, leave blank for current username, $USER --ftp-host string FTP host to connect to
--gcs-bucket-acl string Access Control List for new buckets. --ftp-pass string FTP password
--gcs-client-id string Google Application Client Id --ftp-port string FTP port, leave blank to use default (21)
--gcs-client-secret string Google Application Client Secret --ftp-user string FTP username, leave blank for current username, $USER
--gcs-location string Location for the newly created buckets. --gcs-bucket-acl string Access Control List for new buckets.
--gcs-object-acl string Access Control List for new objects. --gcs-client-id string Google Application Client Id
--gcs-project-number string Project number. --gcs-client-secret string Google Application Client Secret
--gcs-service-account-file string Service Account Credentials JSON file path --gcs-location string Location for the newly created buckets.
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage. --gcs-object-acl string Access Control List for new objects.
--http-url string URL of http host to connect to --gcs-project-number string Project number.
--hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) --gcs-service-account-file string Service Account Credentials JSON file path
--hubic-client-id string Hubic Client Id --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--hubic-client-secret string Hubic Client Secret --http-url string URL of http host to connect to
--ignore-case Ignore case in filters (case insensitive) --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--ignore-checksum Skip post copy check of checksums. --hubic-client-id string Hubic Client Id
--ignore-errors delete even if there are I/O errors --hubic-client-secret string Hubic Client Secret
--ignore-existing Skip all files that exist on destination --hubic-no-chunk Don't chunk files during streaming upload.
--ignore-size Ignore size when skipping use mod-time or checksum. --ignore-case Ignore case in filters (case insensitive)
-I, --ignore-times Don't skip files that match size and time - transfer all files --ignore-checksum Skip post copy check of checksums.
--immutable Do not modify files. Fail if existing files have been modified. --ignore-errors delete even if there are I/O errors
--include stringArray Include files matching pattern --ignore-existing Skip all files that exist on destination
--include-from stringArray Read include patterns from file --ignore-size Ignore size when skipping use mod-time or checksum.
--jottacloud-hard-delete Delete files permanently rather than putting them into the trash. -I, --ignore-times Don't skip files that match size and time - transfer all files
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M) --immutable Do not modify files. Fail if existing files have been modified.
--jottacloud-mountpoint string The mountpoint to use. --include stringArray Include files matching pattern
--jottacloud-pass string Password. --include-from stringArray Read include patterns from file
--jottacloud-unlink Remove existing public link to file/folder with link command rather than creating. --jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
--jottacloud-user string User Name --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--local-no-check-updated Don't check to see if the files change during upload --jottacloud-mountpoint string The mountpoint to use.
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated) --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
--local-nounc string Disable UNC (long path names) conversion on Windows --jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fail's. (default 10M)
--log-file string Log everything to this file --jottacloud-user string User Name:
--log-format string Comma separated list of log format options (default "date,time") -l, --links Translate symlinks to/from regular files with a '.rclonelink' extension
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") --local-no-check-updated Don't check to see if the files change during upload
--low-level-retries int Number of low level retries to do. (default 10) --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) --local-nounc string Disable UNC (long path names) conversion on Windows
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000) --log-file string Log everything to this file
--max-delete int When synchronizing, limit the number of deletes (default -1) --log-format string Comma separated list of log format options (default "date,time")
--max-depth int If set limits the recursion depth to this. (default -1) --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) --low-level-retries int Number of low level retries to do. (default 10)
--max-transfer int Maximum size of data to transfer. (default off) --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--mega-debug Output more debug from Mega. --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--mega-hard-delete Delete files permanently rather than putting them into the trash. --max-delete int When synchronizing, limit the number of deletes (default -1)
--mega-pass string Password. --max-depth int If set limits the recursion depth to this. (default -1)
--mega-user string User name --max-size SizeSuffix Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--memprofile string Write memory profile to file --max-transfer SizeSuffix Maximum size of data to transfer. (default off)
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) --mega-debug Output more debug from Mega.
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) --mega-hard-delete Delete files permanently rather than putting them into the trash.
--modify-window duration Max time diff to be considered the same (default 1ns) --mega-pass string Password.
--no-check-certificate Do not verify the server SSL certificate. Insecure. --mega-user string User name
--no-gzip-encoding Don't set Accept-Encoding: gzip. --memprofile string Write memory profile to file
--no-traverse Obsolete - does nothing. --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--no-update-modtime Don't update destination mod-time if files identical. --min-size SizeSuffix Only transfer files bigger than this in k or suffix b|k|M|G (default off)
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only). --modify-window duration Max time diff to be considered the same (default 1ns)
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M) --no-check-certificate Do not verify the server SSL certificate. Insecure.
--onedrive-client-id string Microsoft App Client Id --no-gzip-encoding Don't set Accept-Encoding: gzip.
--onedrive-client-secret string Microsoft App Client Secret --no-traverse Don't traverse destination file system on copy.
--onedrive-drive-id string The ID of the drive to use --no-update-modtime Don't update destination mod-time if files identical.
--onedrive-drive-type string The type of the drive ( personal | business | documentLibrary ) -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-expose-onenote-files Set to make OneNote files show up in directory listings. --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--opendrive-password string Password. --onedrive-client-id string Microsoft App Client Id
--opendrive-username string Username --onedrive-client-secret string Microsoft App Client Secret
--pcloud-client-id string Pcloud App Client Id --onedrive-drive-id string The ID of the drive to use
--pcloud-client-secret string Pcloud App Client Secret --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
-P, --progress Show progress during transfer. --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
--qingstor-access-key-id string QingStor Access Key ID --opendrive-password string Password.
--qingstor-connection-retries int Number of connection retries. (default 3) --opendrive-username string Username
--qingstor-endpoint string Enter a endpoint URL to connection QingStor API. --pcloud-client-id string Pcloud App Client Id
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank. --pcloud-client-secret string Pcloud App Client Secret
--qingstor-secret-access-key string QingStor Secret Access Key (password) -P, --progress Show progress during transfer.
--qingstor-zone string Zone to connect to. --qingstor-access-key-id string QingStor Access Key ID
-q, --quiet Print as little stuff as possible --qingstor-chunk-size SizeSuffix Chunk size to use for uploading. (default 4M)
--rc Enable the remote control server. --qingstor-connection-retries int Number of connection retries. (default 3)
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") --qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate) --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
--rc-client-ca string Client certificate authority to verify clients with --qingstor-secret-access-key string QingStor Secret Access Key (password)
--rc-files string Path to local files to serve on the HTTP server. --qingstor-upload-concurrency int Concurrency for multipart uploads. (default 1)
--rc-htpasswd string htpasswd file - if not provided no authentication is done --qingstor-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M)
--rc-key string SSL PEM Private key --qingstor-zone string Zone to connect to.
--rc-max-header-bytes int Maximum size of request header (default 4096) -q, --quiet Print as little stuff as possible
--rc-no-auth Don't require auth for certain methods. --rc Enable the remote control server.
--rc-pass string Password for authentication. --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
--rc-realm string realm for authentication (default "rclone") --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--rc-serve Enable the serving of remote objects. --rc-client-ca string Client certificate authority to verify clients with
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) --rc-files string Path to local files to serve on the HTTP server.
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) --rc-htpasswd string htpasswd file - if not provided no authentication is done
--rc-user string User name for authentication. --rc-key string SSL PEM Private key
--retries int Retry operations this many times if they fail (default 3) --rc-max-header-bytes int Maximum size of request header (default 4096)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) --rc-no-auth Don't require auth for certain methods.
--s3-access-key-id string AWS Access Key ID. --rc-pass string Password for authentication.
--s3-acl string Canned ACL used when creating buckets and storing or copying objects. --rc-realm string realm for authentication (default "rclone")
--s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M) --rc-serve Enable the serving of remote objects.
--s3-disable-checksum Don't store MD5 checksum with object metadata --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--s3-endpoint string Endpoint for S3 API. --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). --rc-user string User name for authentication.
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true) --retries int Retry operations this many times if they fail (default 3)
--s3-location-constraint string Location constraint - must be set to match the Region. --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--s3-provider string Choose your S3 provider. --s3-access-key-id string AWS Access Key ID.
--s3-region string Region to connect to. --s3-acl string Canned ACL used when creating buckets and storing or copying objects.
--s3-secret-access-key string AWS Secret Access Key (password) --s3-bucket-acl string Canned ACL used when creating buckets.
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
--s3-session-token string An AWS session token --s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. --s3-endpoint string Endpoint for S3 API.
--s3-storage-class string The storage class to use when storing new objects in S3. --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2) --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--s3-v2-auth If true use v2 authentication. --s3-location-constraint string Location constraint - must be set to match the Region.
--sftp-ask-password Allow asking for SFTP password when needed. --s3-provider string Choose your S3 provider.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. --s3-region string Region to connect to.
--sftp-host string SSH host to connect to --s3-secret-access-key string AWS Secret Access Key (password)
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent. --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--sftp-pass string SSH password, leave blank to use ssh-agent. --s3-session-token string An AWS session token
--sftp-path-override string Override path used by SSH connection. --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--sftp-port string SSH port, leave blank to use default (22) --s3-storage-class string The storage class to use when storing new objects in S3.
--sftp-set-modtime Set the modified time on the remote if set. (default true) --s3-upload-concurrency int Concurrency for multipart uploads. (default 4)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker. --s3-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M)
--sftp-user string SSH username, leave blank for current username, ncw --s3-v2-auth If true use v2 authentication.
--sftp-ask-password Allow asking for SFTP password when needed.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--sftp-host string SSH host to connect to
--sftp-key-file string Path to PEM-encoded private key file, leave blank or set key-use-agent to use ssh-agent.
--sftp-key-file-pass string The passphrase to decrypt the PEM-encoded private key file.
--sftp-key-use-agent When set forces the usage of the ssh-agent.
--sftp-pass string SSH password, leave blank to use ssh-agent.
--sftp-path-override string Override path used by SSH connection.
--sftp-port string SSH port, leave blank to use default (22)
--sftp-set-modtime Set the modified time on the remote if set. (default true)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--sftp-user string SSH username, leave blank for current username, ncw
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 45)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-one-line Make the stats fit on one line.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string Suffix for use with --backup-dir.
--swift-application-credential-id string Application Credential ID (OS_APPLICATION_CREDENTIAL_ID)
--swift-application-credential-name string Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME)
--swift-application-credential-secret string Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET)
--swift-auth string Authentication URL for server (OS_AUTH_URL).
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--swift-key string API key or password (OS_PASSWORD).
--swift-no-chunk Don't chunk files during streaming upload.
--swift-region string Region name - optional (OS_REGION_NAME)
--swift-storage-policy string The storage policy to use when creating a new container
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--swift-user string User name to log in (OS_USERNAME).
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
--tpslimit float Limit HTTP transactions per second to this.
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
--union-remotes string List of space separated remotes.
-u, --update Skip files that are newer on the destination.
--use-cookies Enable session cookiejar.
--use-mmap Use mmap allocator (see docs).
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
--yandex-unlink Remove existing public link to file/folder with link command rather than creating.
```
### SEE ALSO
* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
###### Auto generated by spf13/cobra on 9-Feb-2019
View File

@@ -1,5 +1,5 @@
---
-date: 2018-11-24T13:43:29Z
+date: 2019-02-09T10:42:18Z
title: "rclone mkdir"
slug: rclone_mkdir
url: /commands/rclone_mkdir/
@@ -25,285 +25,303 @@ rclone mkdir remote:path [flags]
### Options inherited from parent commands
```
--acd-auth-url string Auth server URL.
--acd-client-id string Amazon Application Client ID.
--acd-client-secret string Amazon Application Client Secret.
--acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-token-url string Token server url.
--acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--alias-remote string Remote or path to alias.
--ask-password Allow prompt for password for encrypted configuration. (default true)
--auto-confirm If enabled, do not request console confirmation.
--azureblob-access-tier string Access tier of blob: hot, cool or archive.
--azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
--azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M)
--azureblob-endpoint string Endpoint for the service
--azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
--azureblob-list-chunk int Size of blob list. (default 5000)
--azureblob-sas-url string SAS URL for container level access only
--azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M)
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--b2-disable-checksum Disable checksums for large (> upload cutoff) files
--b2-endpoint string Endpoint for the service.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-key string Application Key
--b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
--b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M)
--b2-versions Include old versions in directory listings.
--backup-dir string Make backups into hierarchy based in DIR.
--bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
--box-client-id string Box App Client Id.
--box-client-secret string Box App Client Secret
--box-commit-retries int Max number of times to try committing a multipart file. (default 100)
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
--buffer-size SizeSuffix In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
--cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend")
--cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M)
--cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G)
--cache-db-path string Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend")
--cache-db-purge Clear all the cached data for this remote on start.
--cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string Directory rclone will use for caching. (default "$HOME/.cache/rclone")
--cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
--cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server
--cache-plex-password string The password of the Plex user
--cache-plex-url string The URL of the Plex server
--cache-plex-username string The username of the Plex user
--cache-read-retries int How many times to retry a read from a cache storage. (default 10)
--cache-remote string Remote to cache.
--cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1)
--cache-tmp-upload-path string Directory to keep temporary files until they are uploaded.
--cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
--cache-workers int How many workers should run in parallel to download chunks. (default 4)
--cache-writes Cache file data on writes through the FS
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum (if available) & size, not mod-time & size
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
-L, --copy-links Follow symlinks and copy the pointed to item.
--cpuprofile string Write cpu profile to file
--crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--crypt-password string Password or pass phrase for encryption.
--crypt-password2 string Password or pass phrase for salt. Optional but recommended.
--crypt-remote string Remote to encrypt/decrypt.
--crypt-show-mapping For all files listed show how the names encrypt.
--delete-after When synchronizing, delete files on destination after transferring (default)
--delete-before When synchronizing, delete files on destination before transferring
--delete-during When synchronizing, delete files during transfer
--delete-excluded Delete files on dest excluded from sync
--disable string Disable a comma separated list of features. Use help to see a list.
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
--drive-alternate-export Use alternate export URLs for google documents export.,
--drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M)
--drive-client-id string Google Application Client Id
--drive-client-secret string Google Application Client Secret
--drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-formats string Deprecated: see export_formats
--drive-impersonate string Impersonate this user when using a service account.
--drive-import-formats string Comma separated list of preferred formats for uploading Google docs.
--drive-keep-revision-forever Keep new head revision of each file forever.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-pacer-burst int Number of API calls to allow without sleeping. (default 100)
--drive-pacer-min-sleep Duration Minimum time to sleep between API calls. (default 100ms)
--drive-root-folder-id string ID of the root folder
--drive-scope string Scope that rclone should use when requesting access from drive.
--drive-service-account-credentials string Service Account Credentials JSON blob
--drive-service-account-file string Service Account Credentials JSON file path
--drive-shared-with-me Only show files that are shared with me.
--drive-skip-gdocs Skip google documents in all listings.
--drive-team-drive string ID of the Team Drive
--drive-trashed-only Only show files that are in the trash.
--drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--drive-use-created-date Use file created date instead of modified date.,
--drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off)
--dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M)
--dropbox-client-id string Dropbox App Client Id
--dropbox-client-secret string Dropbox App Client Secret
--dropbox-impersonate string Impersonate this user when using a business account.
-n, --dry-run Do a trial run with no permanent changes
--dump DumpFlags List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--dump-headers Dump HTTP bodies - may contain sensitive info
--exclude stringArray Exclude files matching pattern
--exclude-from stringArray Read exclude patterns from file
--exclude-if-present string Exclude directories if filename is present
--fast-list Use recursive list if available. Uses more memory but fewer transactions.
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
--ftp-host string FTP host to connect to
--ftp-pass string FTP password
--ftp-port string FTP port, leave blank to use default (21)
--ftp-user string FTP username, leave blank for current username, $USER
--gcs-bucket-acl string Access Control List for new buckets.
--gcs-client-id string Google Application Client Id
--gcs-client-secret string Google Application Client Secret
--gcs-location string Location for the newly created buckets.
--gcs-object-acl string Access Control List for new objects.
--gcs-project-number string Project number.
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-url string URL of http host to connect to
--hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--hubic-client-id string Hubic Client Id
--hubic-client-secret string Hubic Client Secret
--hubic-no-chunk Don't chunk files during streaming upload.
--ignore-case Ignore case in filters (case insensitive)
--ignore-checksum Skip post copy check of checksums.
--ignore-errors delete even if there are I/O errors
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
--immutable Do not modify files. Fail if existing files have been modified.
--include stringArray Include files matching pattern
--include-from stringArray Read include patterns from file
--jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--jottacloud-mountpoint string The mountpoint to use.
--jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
--jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fail's. (default 10M)
--jottacloud-user string User Name:
-l, --links Translate symlinks to/from regular files with a '.rclonelink' extension
--local-no-check-updated Don't check to see if the files change during upload
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
--local-nounc string Disable UNC (long path names) conversion on Windows
--log-file string Log everything to this file
--log-format string Comma separated list of log format options (default "date,time")
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
--max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--max-delete int When synchronizing, limit the number of deletes (default -1)
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size SizeSuffix Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--max-transfer SizeSuffix Maximum size of data to transfer. (default off)
--mega-debug Output more debug from Mega.
--mega-hard-delete Delete files permanently rather than putting them into the trash.
--mega-pass string Password.
--mega-user string User name
--memprofile string Write memory profile to file
--min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size SizeSuffix Only transfer files bigger than this in k or suffix b|k|M|G (default off)
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Don't traverse destination file system on copy.
--no-update-modtime Don't update destination mod-time if files identical.
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--onedrive-client-id string Microsoft App Client Id
--onedrive-client-secret string Microsoft App Client Secret
--onedrive-drive-id string The ID of the drive to use
--onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
--onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
--opendrive-password string Password.
--opendrive-username string Username
--pcloud-client-id string Pcloud App Client Id
--pcloud-client-secret string Pcloud App Client Secret
-P, --progress Show progress during transfer.
--qingstor-access-key-id string QingStor Access Key ID
-q, --quiet Print as little stuff as possible --qingstor-chunk-size SizeSuffix Chunk size to use for uploading. (default 4M)
--rc Enable the remote control server. --qingstor-connection-retries int Number of connection retries. (default 3)
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") --qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate) --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
--rc-client-ca string Client certificate authority to verify clients with --qingstor-secret-access-key string QingStor Secret Access Key (password)
--rc-files string Path to local files to serve on the HTTP server. --qingstor-upload-concurrency int Concurrency for multipart uploads. (default 1)
--rc-htpasswd string htpasswd file - if not provided no authentication is done --qingstor-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M)
--rc-key string SSL PEM Private key --qingstor-zone string Zone to connect to.
--rc-max-header-bytes int Maximum size of request header (default 4096) -q, --quiet Print as little stuff as possible
--rc-no-auth Don't require auth for certain methods. --rc Enable the remote control server.
--rc-pass string Password for authentication. --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
--rc-realm string realm for authentication (default "rclone") --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--rc-serve Enable the serving of remote objects. --rc-client-ca string Client certificate authority to verify clients with
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) --rc-files string Path to local files to serve on the HTTP server.
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) --rc-htpasswd string htpasswd file - if not provided no authentication is done
--rc-user string User name for authentication. --rc-key string SSL PEM Private key
--retries int Retry operations this many times if they fail (default 3) --rc-max-header-bytes int Maximum size of request header (default 4096)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) --rc-no-auth Don't require auth for certain methods.
--s3-access-key-id string AWS Access Key ID. --rc-pass string Password for authentication.
--s3-acl string Canned ACL used when creating buckets and storing or copying objects. --rc-realm string realm for authentication (default "rclone")
--s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M) --rc-serve Enable the serving of remote objects.
--s3-disable-checksum Don't store MD5 checksum with object metadata --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--s3-endpoint string Endpoint for S3 API. --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). --rc-user string User name for authentication.
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true) --retries int Retry operations this many times if they fail (default 3)
--s3-location-constraint string Location constraint - must be set to match the Region. --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--s3-provider string Choose your S3 provider. --s3-access-key-id string AWS Access Key ID.
--s3-region string Region to connect to. --s3-acl string Canned ACL used when creating buckets and storing or copying objects.
--s3-secret-access-key string AWS Secret Access Key (password) --s3-bucket-acl string Canned ACL used when creating buckets.
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
--s3-session-token string An AWS session token --s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. --s3-endpoint string Endpoint for S3 API.
--s3-storage-class string The storage class to use when storing new objects in S3. --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2) --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--s3-v2-auth If true use v2 authentication. --s3-location-constraint string Location constraint - must be set to match the Region.
--sftp-ask-password Allow asking for SFTP password when needed. --s3-provider string Choose your S3 provider.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. --s3-region string Region to connect to.
--sftp-host string SSH host to connect to --s3-secret-access-key string AWS Secret Access Key (password)
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent. --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--sftp-pass string SSH password, leave blank to use ssh-agent. --s3-session-token string An AWS session token
--sftp-path-override string Override path used by SSH connection. --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--sftp-port string SSH port, leave blank to use default (22) --s3-storage-class string The storage class to use when storing new objects in S3.
--sftp-set-modtime Set the modified time on the remote if set. (default true) --s3-upload-concurrency int Concurrency for multipart uploads. (default 4)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker. --s3-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M)
--sftp-user string SSH username, leave blank for current username, ncw --s3-v2-auth If true use v2 authentication.
--size-only Skip based on size only, not mod-time or checksum --sftp-ask-password Allow asking for SFTP password when needed.
--skip-links Don't warn about skipped symlinks. --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) --sftp-host string SSH host to connect to
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40) --sftp-key-file string Path to PEM-encoded private key file, leave blank or set key-use-agent to use ssh-agent.
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") --sftp-key-file-pass string The passphrase to decrypt the PEM-encoded private key file.
--stats-one-line Make the stats fit on one line. --sftp-key-use-agent When set forces the usage of the ssh-agent.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") --sftp-pass string SSH password, leave blank to use ssh-agent.
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) --sftp-path-override string Override path used by SSH connection.
--suffix string Suffix for use with --backup-dir. --sftp-port string SSH port, leave blank to use default (22)
--swift-auth string Authentication URL for server (OS_AUTH_URL). --sftp-set-modtime Set the modified time on the remote if set. (default true)
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) --sftp-user string SSH username, leave blank for current username, ncw
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) --size-only Skip based on size only, not mod-time or checksum
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) --skip-links Don't warn about skipped symlinks.
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form. --stats-file-name-length int Max file name length in stats. 0 for no limit (default 45)
--swift-key string API key or password (OS_PASSWORD). --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--swift-region string Region name - optional (OS_REGION_NAME) --stats-one-line Make the stats fit on one line.
--swift-storage-policy string The storage policy to use when creating a new container --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL) --streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) --suffix string Suffix for use with --backup-dir.
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) --swift-application-credential-id string Application Credential ID (OS_APPLICATION_CREDENTIAL_ID)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) --swift-application-credential-name string Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME)
--swift-user string User name to log in (OS_USERNAME). --swift-application-credential-secret string Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET)
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). --swift-auth string Authentication URL for server (OS_AUTH_URL).
--syslog Use Syslog for logging --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--timeout duration IO idle timeout (default 5m0s) --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--tpslimit float Limit HTTP transactions per second to this. --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--track-renames When synchronizing, track file renames and do a server side move if possible --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--transfers int Number of file transfers to run in parallel. (default 4) --swift-key string API key or password (OS_PASSWORD).
--union-remotes string List of space separated remotes. --swift-no-chunk Don't chunk files during streaming upload.
-u, --update Skip files that are newer on the destination. --swift-region string Region name - optional (OS_REGION_NAME)
--use-server-modtime Use server modified time instead of object metadata --swift-storage-policy string The storage policy to use when creating a new container
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.45") --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
-v, --verbose count Print lots more stuff (repeat for more) --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon) --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--webdav-pass string Password. --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--webdav-url string URL of http host to connect to --swift-user string User name to log in (OS_USERNAME).
--webdav-user string User name --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--webdav-vendor string Name of the Webdav site/service/software you are using --syslog Use Syslog for logging
--yandex-client-id string Yandex Client Id --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--yandex-client-secret string Yandex Client Secret --timeout duration IO idle timeout (default 5m0s)
--yandex-unlink Remove existing public link to file/folder with link command rather than creating. --tpslimit float Limit HTTP transactions per second to this.
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
--union-remotes string List of space separated remotes.
-u, --update Skip files that are newer on the destination.
--use-cookies Enable session cookiejar.
--use-mmap Use mmap allocator (see docs).
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
--yandex-unlink Remove existing public link to file/folder with link command rather than creating.
```
### SEE ALSO

* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.

###### Auto generated by spf13/cobra on 9-Feb-2019


@@ -1,5 +1,5 @@
---
date: 2019-02-09T10:42:18Z
title: "rclone mount"
slug: rclone_mount
url: /commands/rclone_mount/
@@ -213,6 +213,7 @@ may find that you need one or the other or both.
--vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s)
--vfs-cache-mode string Cache mode off|minimal|writes|full (default "off")
--vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s)
--vfs-cache-max-size int Max total size of objects in the cache. (default off)
If run with `-vv` rclone will print the location of the file cache. The
files are stored in the user cache file area which is OS dependent but
@@ -228,6 +229,11 @@ closed so if rclone is quit or dies with open files then these won't
get written back to the remote. However they will still be in the on
disk cache.
If using --vfs-cache-max-size note that the cache may exceed this size
for two reasons. Firstly because it is only checked every
--vfs-cache-poll-interval. Secondly because open files cannot be
evicted from the cache.
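As an illustration of the flags discussed above, a mount that bounds the on-disk cache might look like the following sketch (the remote name and mount point here are hypothetical):

```shell
# Mount "remote:path" with a 10G cache cap, re-checked every minute.
# As noted above, the cache can still exceed --vfs-cache-max-size
# between polls and while files are held open.
rclone mount remote:path /mnt/remote \
    --vfs-cache-mode writes \
    --vfs-cache-max-size 10G \
    --vfs-cache-max-age 1h \
    --vfs-cache-poll-interval 1m
```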
#### --vfs-cache-mode off

In this mode the cache will read directly from the remote and write
@@ -292,318 +298,339 @@ rclone mount remote:path /path/to/mountpoint [flags]
### Options

```
--allow-non-empty Allow mounting over a non-empty directory.
--allow-other Allow access to other users.
--allow-root Allow access to root user.
--attr-timeout duration Time for which file/directory attributes are cached. (default 1s)
--daemon Run mount as a daemon (background mode).
--daemon-timeout duration Time limit for rclone to respond to kernel (not supported by all OSes).
--debug-fuse Debug the FUSE internals - needs -v.
--default-permissions Makes kernel enforce access control based on the file mode.
--dir-cache-time duration Time to cache directory entries for. (default 5m0s)
--dir-perms FileMode Directory permissions (default 0777)
--file-perms FileMode File permissions (default 0666)
--fuse-flag stringArray Flags or arguments to be passed direct to libfuse/WinFsp. Repeat if required.
--gid uint32 Override the gid field set by the filesystem. (default 502)
-h, --help help for mount
--max-read-ahead SizeSuffix The number of bytes that can be prefetched for sequential reads. (default 128k)
--no-checksum Don't compare checksums on up/download.
--no-modtime Don't read/write the modification time (can speed things up).
--no-seek Don't allow seeking in files.
-o, --option stringArray Option for libfuse/WinFsp. Repeat if required.
--poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable. (default 1m0s)
--read-only Mount read-only.
--uid uint32 Override the uid field set by the filesystem. (default 502)
--umask int Override the permission bits set by the filesystem.
--vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s)
--vfs-cache-max-size SizeSuffix Max total size of objects in the cache. (default off)
--vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
--vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s)
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks. (default 128M)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached. 'off' is unlimited. (default off)
--volname string Set the volume name (not supported by all OSes).
--write-back-cache Makes kernel buffer writes before sending them to rclone. Without this, writethrough caching is used.
```
### Options inherited from parent commands

```
--acd-auth-url string Auth server URL.
--acd-client-id string Amazon Application Client ID.
--acd-client-secret string Amazon Application Client Secret.
--acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-token-url string Token server url.
--acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--alias-remote string Remote or path to alias.
--ask-password Allow prompt for password for encrypted configuration. (default true)
--auto-confirm If enabled, do not request console confirmation.
--azureblob-access-tier string Access tier of blob: hot, cool or archive.
--azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
--azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M)
--azureblob-endpoint string Endpoint for the service
--azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
--azureblob-list-chunk int Size of blob list. (default 5000)
--azureblob-sas-url string SAS URL for container level access only
--azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M)
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--b2-disable-checksum Disable checksums for large (> upload cutoff) files
--b2-endpoint string Endpoint for the service.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-key string Application Key
--b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
--b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M)
--b2-versions Include old versions in directory listings.
--backup-dir string Make backups into hierarchy based in DIR.
--bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
--box-client-id string Box App Client Id.
--box-client-secret string Box App Client Secret
--box-commit-retries int Max number of times to try committing a multipart file. (default 100)
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
--buffer-size SizeSuffix In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
--cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend")
--cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M)
--cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G)
--cache-db-path string Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend")
--cache-db-purge Clear all the cached data for this remote on start.
--cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string Directory rclone will use for caching. (default "$HOME/.cache/rclone")
--cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
--cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server
--cache-plex-password string The password of the Plex user
--cache-plex-url string The URL of the Plex server
--cache-plex-username string The username of the Plex user
--cache-read-retries int How many times to retry a read from a cache storage. (default 10)
--cache-remote string Remote to cache.
--cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1)
--cache-tmp-upload-path string Directory to keep temporary files until they are uploaded.
--cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
--cache-workers int How many workers should run in parallel to download chunks. (default 4)
--cache-writes Cache file data on writes through the FS
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum (if available) & size, not mod-time & size
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
-L, --copy-links Follow symlinks and copy the pointed to item.
--cpuprofile string Write cpu profile to file
--crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--crypt-password string Password or pass phrase for encryption.
--crypt-password2 string Password or pass phrase for salt. Optional but recommended.
--crypt-remote string Remote to encrypt/decrypt.
--crypt-show-mapping For all files listed show how the names encrypt.
--delete-after When synchronizing, delete files on destination after transferring (default)
--delete-during When synchronizing, delete files during transfer --delete-before When synchronizing, delete files on destination before transferring
--delete-excluded Delete files on dest excluded from sync --delete-during When synchronizing, delete files during transfer
--disable string Disable a comma separated list of features. Use help to see a list. --delete-excluded Delete files on dest excluded from sync
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded. --disable string Disable a comma separated list of features. Use help to see a list.
--drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time. --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--drive-alternate-export Use alternate export URLs for google documents export., --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
--drive-auth-owner-only Only consider files owned by the authenticated user. --drive-alternate-export Use alternate export URLs for google documents export.,
--drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M) --drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-client-id string Google Application Client Id --drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M)
--drive-client-secret string Google Application Client Secret --drive-client-id string Google Application Client Id
--drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") --drive-client-secret string Google Application Client Secret
--drive-formats string Deprecated: see export_formats --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-impersonate string Impersonate this user when using a service account. --drive-formats string Deprecated: see export_formats
--drive-import-formats string Comma separated list of preferred formats for uploading Google docs. --drive-impersonate string Impersonate this user when using a service account.
--drive-keep-revision-forever Keep new head revision of each file forever. --drive-import-formats string Comma separated list of preferred formats for uploading Google docs.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000) --drive-keep-revision-forever Keep new head revision of each file forever.
--drive-root-folder-id string ID of the root folder --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-scope string Scope that rclone should use when requesting access from drive. --drive-pacer-burst int Number of API calls to allow without sleeping. (default 100)
--drive-service-account-credentials string Service Account Credentials JSON blob --drive-pacer-min-sleep Duration Minimum time to sleep between API calls. (default 100ms)
--drive-service-account-file string Service Account Credentials JSON file path --drive-root-folder-id string ID of the root folder
--drive-shared-with-me Only show files that are shared with me. --drive-scope string Scope that rclone should use when requesting access from drive.
--drive-skip-gdocs Skip google documents in all listings. --drive-service-account-credentials string Service Account Credentials JSON blob
--drive-team-drive string ID of the Team Drive --drive-service-account-file string Service Account Credentials JSON file path
--drive-trashed-only Only show files that are in the trash. --drive-shared-with-me Only show files that are shared with me.
--drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M) --drive-skip-gdocs Skip google documents in all listings.
--drive-use-created-date Use file created date instead of modified date., --drive-team-drive string ID of the Team Drive
--drive-use-trash Send files to the trash instead of deleting permanently. (default true) --drive-trashed-only Only show files that are in the trash.
--drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off) --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M) --drive-use-created-date Use file created date instead of modified date.,
--dropbox-client-id string Dropbox App Client Id --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--dropbox-client-secret string Dropbox App Client Secret --drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off)
--dropbox-impersonate string Impersonate this user when using a business account. --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M)
-n, --dry-run Do a trial run with no permanent changes --dropbox-client-id string Dropbox App Client Id
--dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles --dropbox-client-secret string Dropbox App Client Secret
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info --dropbox-impersonate string Impersonate this user when using a business account.
--dump-headers Dump HTTP bodies - may contain sensitive info -n, --dry-run Do a trial run with no permanent changes
--exclude stringArray Exclude files matching pattern --dump DumpFlags List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--exclude-from stringArray Read exclude patterns from file --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--exclude-if-present string Exclude directories if filename is present --dump-headers Dump HTTP bodies - may contain sensitive info
--fast-list Use recursive list if available. Uses more memory but fewer transactions. --exclude stringArray Exclude files matching pattern
--files-from stringArray Read list of source-file names from file --exclude-from stringArray Read exclude patterns from file
-f, --filter stringArray Add a file-filtering rule --exclude-if-present string Exclude directories if filename is present
--filter-from stringArray Read filtering patterns from a file --fast-list Use recursive list if available. Uses more memory but fewer transactions.
--ftp-host string FTP host to connect to --files-from stringArray Read list of source-file names from file
--ftp-pass string FTP password -f, --filter stringArray Add a file-filtering rule
--ftp-port string FTP port, leave blank to use default (21) --filter-from stringArray Read filtering patterns from a file
--ftp-user string FTP username, leave blank for current username, $USER --ftp-host string FTP host to connect to
--gcs-bucket-acl string Access Control List for new buckets. --ftp-pass string FTP password
--gcs-client-id string Google Application Client Id --ftp-port string FTP port, leave blank to use default (21)
--gcs-client-secret string Google Application Client Secret --ftp-user string FTP username, leave blank for current username, $USER
--gcs-location string Location for the newly created buckets. --gcs-bucket-acl string Access Control List for new buckets.
--gcs-object-acl string Access Control List for new objects. --gcs-client-id string Google Application Client Id
--gcs-project-number string Project number. --gcs-client-secret string Google Application Client Secret
--gcs-service-account-file string Service Account Credentials JSON file path --gcs-location string Location for the newly created buckets.
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage. --gcs-object-acl string Access Control List for new objects.
--http-url string URL of http host to connect to --gcs-project-number string Project number.
--hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) --gcs-service-account-file string Service Account Credentials JSON file path
--hubic-client-id string Hubic Client Id --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--hubic-client-secret string Hubic Client Secret --http-url string URL of http host to connect to
--ignore-case Ignore case in filters (case insensitive) --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--ignore-checksum Skip post copy check of checksums. --hubic-client-id string Hubic Client Id
--ignore-errors delete even if there are I/O errors --hubic-client-secret string Hubic Client Secret
--ignore-existing Skip all files that exist on destination --hubic-no-chunk Don't chunk files during streaming upload.
--ignore-size Ignore size when skipping use mod-time or checksum. --ignore-case Ignore case in filters (case insensitive)
-I, --ignore-times Don't skip files that match size and time - transfer all files --ignore-checksum Skip post copy check of checksums.
--immutable Do not modify files. Fail if existing files have been modified. --ignore-errors delete even if there are I/O errors
--include stringArray Include files matching pattern --ignore-existing Skip all files that exist on destination
--include-from stringArray Read include patterns from file --ignore-size Ignore size when skipping use mod-time or checksum.
--jottacloud-hard-delete Delete files permanently rather than putting them into the trash. -I, --ignore-times Don't skip files that match size and time - transfer all files
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M) --immutable Do not modify files. Fail if existing files have been modified.
--jottacloud-mountpoint string The mountpoint to use. --include stringArray Include files matching pattern
--jottacloud-pass string Password. --include-from stringArray Read include patterns from file
--jottacloud-unlink Remove existing public link to file/folder with link command rather than creating. --jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
--jottacloud-user string User Name --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--local-no-check-updated Don't check to see if the files change during upload --jottacloud-mountpoint string The mountpoint to use.
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated) --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
--local-nounc string Disable UNC (long path names) conversion on Windows --jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fail's. (default 10M)
--log-file string Log everything to this file --jottacloud-user string User Name:
--log-format string Comma separated list of log format options (default "date,time") -l, --links Translate symlinks to/from regular files with a '.rclonelink' extension
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") --local-no-check-updated Don't check to see if the files change during upload
--low-level-retries int Number of low level retries to do. (default 10) --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) --local-nounc string Disable UNC (long path names) conversion on Windows
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000) --log-file string Log everything to this file
--max-delete int When synchronizing, limit the number of deletes (default -1) --log-format string Comma separated list of log format options (default "date,time")
--max-depth int If set limits the recursion depth to this. (default -1) --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) --low-level-retries int Number of low level retries to do. (default 10)
--max-transfer int Maximum size of data to transfer. (default off) --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--mega-debug Output more debug from Mega. --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--mega-hard-delete Delete files permanently rather than putting them into the trash. --max-delete int When synchronizing, limit the number of deletes (default -1)
--mega-pass string Password. --max-depth int If set limits the recursion depth to this. (default -1)
--mega-user string User name --max-size SizeSuffix Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--memprofile string Write memory profile to file --max-transfer SizeSuffix Maximum size of data to transfer. (default off)
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) --mega-debug Output more debug from Mega.
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) --mega-hard-delete Delete files permanently rather than putting them into the trash.
--modify-window duration Max time diff to be considered the same (default 1ns) --mega-pass string Password.
--no-check-certificate Do not verify the server SSL certificate. Insecure. --mega-user string User name
--no-gzip-encoding Don't set Accept-Encoding: gzip. --memprofile string Write memory profile to file
--no-traverse Obsolete - does nothing. --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--no-update-modtime Don't update destination mod-time if files identical. --min-size SizeSuffix Only transfer files bigger than this in k or suffix b|k|M|G (default off)
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only). --modify-window duration Max time diff to be considered the same (default 1ns)
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M) --no-check-certificate Do not verify the server SSL certificate. Insecure.
--onedrive-client-id string Microsoft App Client Id --no-gzip-encoding Don't set Accept-Encoding: gzip.
--onedrive-client-secret string Microsoft App Client Secret --no-traverse Don't traverse destination file system on copy.
--onedrive-drive-id string The ID of the drive to use --no-update-modtime Don't update destination mod-time if files identical.
--onedrive-drive-type string The type of the drive ( personal | business | documentLibrary ) -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-expose-onenote-files Set to make OneNote files show up in directory listings. --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--opendrive-password string Password. --onedrive-client-id string Microsoft App Client Id
--opendrive-username string Username --onedrive-client-secret string Microsoft App Client Secret
--pcloud-client-id string Pcloud App Client Id --onedrive-drive-id string The ID of the drive to use
--pcloud-client-secret string Pcloud App Client Secret --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
-P, --progress Show progress during transfer. --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
--qingstor-access-key-id string QingStor Access Key ID --opendrive-password string Password.
--qingstor-connection-retries int Number of connection retries. (default 3) --opendrive-username string Username
--qingstor-endpoint string Enter a endpoint URL to connection QingStor API. --pcloud-client-id string Pcloud App Client Id
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank. --pcloud-client-secret string Pcloud App Client Secret
--qingstor-secret-access-key string QingStor Secret Access Key (password) -P, --progress Show progress during transfer.
--qingstor-zone string Zone to connect to. --qingstor-access-key-id string QingStor Access Key ID
-q, --quiet Print as little stuff as possible --qingstor-chunk-size SizeSuffix Chunk size to use for uploading. (default 4M)
--rc Enable the remote control server. --qingstor-connection-retries int Number of connection retries. (default 3)
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") --qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate) --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
--rc-client-ca string Client certificate authority to verify clients with --qingstor-secret-access-key string QingStor Secret Access Key (password)
--rc-files string Path to local files to serve on the HTTP server. --qingstor-upload-concurrency int Concurrency for multipart uploads. (default 1)
--rc-htpasswd string htpasswd file - if not provided no authentication is done --qingstor-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M)
--rc-key string SSL PEM Private key --qingstor-zone string Zone to connect to.
--rc-max-header-bytes int Maximum size of request header (default 4096) -q, --quiet Print as little stuff as possible
--rc-no-auth Don't require auth for certain methods. --rc Enable the remote control server.
--rc-pass string Password for authentication. --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
--rc-realm string realm for authentication (default "rclone") --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--rc-serve Enable the serving of remote objects. --rc-client-ca string Client certificate authority to verify clients with
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) --rc-files string Path to local files to serve on the HTTP server.
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) --rc-htpasswd string htpasswd file - if not provided no authentication is done
--rc-user string User name for authentication. --rc-key string SSL PEM Private key
--retries int Retry operations this many times if they fail (default 3) --rc-max-header-bytes int Maximum size of request header (default 4096)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) --rc-no-auth Don't require auth for certain methods.
--s3-access-key-id string AWS Access Key ID. --rc-pass string Password for authentication.
--s3-acl string Canned ACL used when creating buckets and storing or copying objects. --rc-realm string realm for authentication (default "rclone")
--s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M) --rc-serve Enable the serving of remote objects.
--s3-disable-checksum Don't store MD5 checksum with object metadata --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--s3-endpoint string Endpoint for S3 API. --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). --rc-user string User name for authentication.
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true) --retries int Retry operations this many times if they fail (default 3)
--s3-location-constraint string Location constraint - must be set to match the Region. --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--s3-provider string Choose your S3 provider. --s3-access-key-id string AWS Access Key ID.
--s3-region string Region to connect to. --s3-acl string Canned ACL used when creating buckets and storing or copying objects.
--s3-secret-access-key string AWS Secret Access Key (password) --s3-bucket-acl string Canned ACL used when creating buckets.
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
--s3-session-token string An AWS session token --s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. --s3-endpoint string Endpoint for S3 API.
--s3-storage-class string The storage class to use when storing new objects in S3. --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2) --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--s3-v2-auth If true use v2 authentication. --s3-location-constraint string Location constraint - must be set to match the Region.
--sftp-ask-password Allow asking for SFTP password when needed. --s3-provider string Choose your S3 provider.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. --s3-region string Region to connect to.
--sftp-host string SSH host to connect to --s3-secret-access-key string AWS Secret Access Key (password)
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent. --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--sftp-pass string SSH password, leave blank to use ssh-agent. --s3-session-token string An AWS session token
--sftp-path-override string Override path used by SSH connection. --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--sftp-port string SSH port, leave blank to use default (22) --s3-storage-class string The storage class to use when storing new objects in S3.
--sftp-set-modtime Set the modified time on the remote if set. (default true) --s3-upload-concurrency int Concurrency for multipart uploads. (default 4)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker. --s3-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M)
--sftp-user string SSH username, leave blank for current username, ncw --s3-v2-auth If true use v2 authentication.
--size-only Skip based on size only, not mod-time or checksum --sftp-ask-password Allow asking for SFTP password when needed.
--skip-links Don't warn about skipped symlinks. --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) --sftp-host string SSH host to connect to
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40) --sftp-key-file string Path to PEM-encoded private key file, leave blank or set key-use-agent to use ssh-agent.
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") --sftp-key-file-pass string The passphrase to decrypt the PEM-encoded private key file.
--stats-one-line Make the stats fit on one line. --sftp-key-use-agent When set forces the usage of the ssh-agent.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") --sftp-pass string SSH password, leave blank to use ssh-agent.
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) --sftp-path-override string Override path used by SSH connection.
--suffix string Suffix for use with --backup-dir. --sftp-port string SSH port, leave blank to use default (22)
--swift-auth string Authentication URL for server (OS_AUTH_URL). --sftp-set-modtime Set the modified time on the remote if set. (default true)
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) --sftp-user string SSH username, leave blank for current username, ncw
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) --size-only Skip based on size only, not mod-time or checksum
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) --skip-links Don't warn about skipped symlinks.
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form. --stats-file-name-length int Max file name length in stats. 0 for no limit (default 45)
--swift-key string API key or password (OS_PASSWORD). --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--swift-region string Region name - optional (OS_REGION_NAME) --stats-one-line Make the stats fit on one line.
--swift-storage-policy string The storage policy to use when creating a new container --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL) --streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) --suffix string Suffix for use with --backup-dir.
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) --swift-application-credential-id string Application Credential ID (OS_APPLICATION_CREDENTIAL_ID)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) --swift-application-credential-name string Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME)
--swift-user string User name to log in (OS_USERNAME). --swift-application-credential-secret string Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET)
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). --swift-auth string Authentication URL for server (OS_AUTH_URL).
--syslog Use Syslog for logging --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--timeout duration IO idle timeout (default 5m0s) --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--tpslimit float Limit HTTP transactions per second to this. --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--swift-key string API key or password (OS_PASSWORD).
--swift-no-chunk Don't chunk files during streaming upload.
--swift-region string Region name - optional (OS_REGION_NAME)
--swift-storage-policy string The storage policy to use when creating a new container
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--swift-user string User name to log in (OS_USERNAME).
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
--tpslimit float Limit HTTP transactions per second to this.
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
--union-remotes string List of space separated remotes.
-u, --update Skip files that are newer on the destination.
--use-cookies Enable session cookiejar.
--use-mmap Use mmap allocator (see docs).
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
--yandex-unlink Remove existing public link to file/folder with link command rather than creating.
```
### SEE ALSO
* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
###### Auto generated by spf13/cobra on 9-Feb-2019
View File
@ -1,5 +1,5 @@
---
date: 2019-02-09T10:42:18Z
title: "rclone move"
slug: rclone_move
url: /commands/rclone_move/
@ -27,6 +27,11 @@ into `dest:path` then delete the original (if no errors on copy) in
If you want to delete empty source directories after move, use the --delete-empty-src-dirs flag.
See the [--no-traverse](/docs/#no-traverse) option for controlling
whether rclone lists the destination directory or not. Supplying this
option when moving a small number of files into a large destination
can speed transfers up greatly.
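As a hedged illustration of the paragraph above (the local path and the `remote:` name are hypothetical, assuming such a remote is already configured):

```shell
# Move a handful of new files into an already-large destination.
# --no-traverse stops rclone from listing the whole destination
# directory first, which can make small transfers much faster.
rclone move --no-traverse /home/user/incoming remote:archive/incoming
```

Without `--no-traverse`, rclone would list `remote:archive/incoming` in full before transferring, which is wasteful when only a few files are being moved.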
**Important**: Since this can cause data loss, test first with the
--dry-run flag.
@ -47,285 +52,303 @@ rclone move source:path dest:path [flags]
### Options inherited from parent commands

```
--acd-auth-url string Auth server URL.
--acd-client-id string Amazon Application Client ID.
--acd-client-secret string Amazon Application Client Secret.
--acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-token-url string Token server url.
--acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--alias-remote string Remote or path to alias.
--ask-password Allow prompt for password for encrypted configuration. (default true)
--auto-confirm If enabled, do not request console confirmation.
--azureblob-access-tier string Access tier of blob: hot, cool or archive.
--azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
--azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M)
--azureblob-endpoint string Endpoint for the service
--azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
--azureblob-list-chunk int Size of blob list. (default 5000)
--azureblob-sas-url string SAS URL for container level access only
--azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M)
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--b2-disable-checksum Disable checksums for large (> upload cutoff) files
--b2-endpoint string Endpoint for the service.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-key string Application Key
--b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
--b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M)
--b2-versions Include old versions in directory listings.
--backup-dir string Make backups into hierarchy based in DIR.
--bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
--box-client-id string Box App Client Id.
--box-client-secret string Box App Client Secret
--box-commit-retries int Max number of times to try committing a multipart file. (default 100)
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
--buffer-size SizeSuffix In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
--cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend")
--cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M)
--cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G)
--cache-db-path string Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend")
--cache-db-purge Clear all the cached data for this remote on start.
--cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string Directory rclone will use for caching. (default "$HOME/.cache/rclone")
--cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
--cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server
--cache-plex-password string The password of the Plex user
--cache-plex-url string The URL of the Plex server
--cache-plex-username string The username of the Plex user
--cache-read-retries int How many times to retry a read from a cache storage. (default 10)
--cache-remote string Remote to cache.
--cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1)
--cache-tmp-upload-path string Directory to keep temporary files until they are uploaded.
--cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
--cache-workers int How many workers should run in parallel to download chunks. (default 4)
--cache-writes Cache file data on writes through the FS
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum (if available) & size, not mod-time & size
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
-L, --copy-links Follow symlinks and copy the pointed to item.
--cpuprofile string Write cpu profile to file
--crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--crypt-password string Password or pass phrase for encryption.
--crypt-password2 string Password or pass phrase for salt. Optional but recommended.
--crypt-remote string Remote to encrypt/decrypt.
--crypt-show-mapping For all files listed show how the names encrypt.
--delete-after When synchronizing, delete files on destination after transferring (default)
--delete-before When synchronizing, delete files on destination before transferring
--delete-during When synchronizing, delete files during transfer
--delete-excluded Delete files on dest excluded from sync
--disable string Disable a comma separated list of features. Use help to see a list.
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
--drive-alternate-export Use alternate export URLs for google documents export.,
--drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M)
--drive-client-id string Google Application Client Id
--drive-client-secret string Google Application Client Secret
--drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-formats string Deprecated: see export_formats
--drive-impersonate string Impersonate this user when using a service account.
--drive-import-formats string Comma separated list of preferred formats for uploading Google docs.
--drive-keep-revision-forever Keep new head revision of each file forever.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-pacer-burst int Number of API calls to allow without sleeping. (default 100)
--drive-pacer-min-sleep Duration Minimum time to sleep between API calls. (default 100ms)
--drive-root-folder-id string ID of the root folder
--drive-scope string Scope that rclone should use when requesting access from drive.
--drive-service-account-credentials string Service Account Credentials JSON blob
--drive-service-account-file string Service Account Credentials JSON file path
--drive-shared-with-me Only show files that are shared with me.
--drive-skip-gdocs Skip google documents in all listings.
--drive-team-drive string ID of the Team Drive
--drive-trashed-only Only show files that are in the trash.
--drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--drive-use-created-date Use file created date instead of modified date.,
--drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off)
--dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M)
--dropbox-client-id string Dropbox App Client Id
--dropbox-client-secret string Dropbox App Client Secret
--dropbox-impersonate string Impersonate this user when using a business account.
-n, --dry-run Do a trial run with no permanent changes
--dump DumpFlags List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--dump-headers Dump HTTP bodies - may contain sensitive info
--exclude stringArray Exclude files matching pattern
--exclude-from stringArray Read exclude patterns from file
--exclude-if-present string Exclude directories if filename is present
--fast-list Use recursive list if available. Uses more memory but fewer transactions.
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
--ftp-host string FTP host to connect to
--ftp-pass string FTP password
--ftp-port string FTP port, leave blank to use default (21)
--ftp-user string FTP username, leave blank for current username, $USER
--gcs-bucket-acl string Access Control List for new buckets.
--gcs-client-id string Google Application Client Id
--gcs-client-secret string Google Application Client Secret
--gcs-location string Location for the newly created buckets.
--gcs-object-acl string Access Control List for new objects.
--gcs-project-number string Project number.
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-url string URL of http host to connect to
--hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--hubic-client-id string Hubic Client Id
--hubic-client-secret string Hubic Client Secret
--hubic-no-chunk Don't chunk files during streaming upload.
--ignore-case Ignore case in filters (case insensitive)
--ignore-checksum Skip post copy check of checksums.
--ignore-errors delete even if there are I/O errors
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
--immutable Do not modify files. Fail if existing files have been modified.
--include stringArray Include files matching pattern
--include-from stringArray Read include patterns from file
--jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--jottacloud-mountpoint string The mountpoint to use.
--jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
--jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fail's. (default 10M)
--jottacloud-user string User Name:
-l, --links Translate symlinks to/from regular files with a '.rclonelink' extension
--local-no-check-updated Don't check to see if the files change during upload
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
--local-nounc string Disable UNC (long path names) conversion on Windows
--log-file string Log everything to this file
--log-format string Comma separated list of log format options (default "date,time")
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
--max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--max-delete int When synchronizing, limit the number of deletes (default -1)
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size SizeSuffix Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--max-transfer SizeSuffix Maximum size of data to transfer. (default off)
--mega-debug Output more debug from Mega.
--mega-hard-delete Delete files permanently rather than putting them into the trash.
--mega-pass string Password.
--mega-user string User name
--memprofile string Write memory profile to file
--min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size SizeSuffix Only transfer files bigger than this in k or suffix b|k|M|G (default off)
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Don't traverse destination file system on copy.
--no-update-modtime Don't update destination mod-time if files identical.
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--onedrive-client-id string Microsoft App Client Id
--onedrive-client-secret string Microsoft App Client Secret
--onedrive-drive-id string The ID of the drive to use
--onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
--onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
--opendrive-password string Password.
--opendrive-username string Username
--pcloud-client-id string Pcloud App Client Id
--pcloud-client-secret string Pcloud App Client Secret
-P, --progress Show progress during transfer.
--qingstor-access-key-id string QingStor Access Key ID
--qingstor-chunk-size SizeSuffix Chunk size to use for uploading. (default 4M)
--qingstor-connection-retries int Number of connection retries. (default 3)
--qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
--qingstor-secret-access-key string QingStor Secret Access Key (password)
--qingstor-upload-concurrency int Concurrency for multipart uploads. (default 1)
--qingstor-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M)
--qingstor-zone string Zone to connect to.
-q, --quiet Print as little stuff as possible
--rc Enable the remote control server.
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--rc-client-ca string Client certificate authority to verify clients with
--rc-files string Path to local files to serve on the HTTP server.
--rc-htpasswd string htpasswd file - if not provided no authentication is done
--rc-key string SSL PEM Private key
--rc-max-header-bytes int Maximum size of request header (default 4096)
--rc-no-auth Don't require auth for certain methods.
--rc-pass string Password for authentication.
--rc-realm string realm for authentication (default "rclone")
--rc-serve Enable the serving of remote objects.
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--s3-access-key-id string AWS Access Key ID.
--s3-acl string Canned ACL used when creating buckets and storing or copying objects.
--s3-bucket-acl string Canned ACL used when creating buckets.
--s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
--s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. --s3-endpoint string Endpoint for S3 API.
--s3-storage-class string The storage class to use when storing new objects in S3. --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2) --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--s3-v2-auth If true use v2 authentication. --s3-location-constraint string Location constraint - must be set to match the Region.
--sftp-ask-password Allow asking for SFTP password when needed. --s3-provider string Choose your S3 provider.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. --s3-region string Region to connect to.
--sftp-host string SSH host to connect to --s3-secret-access-key string AWS Secret Access Key (password)
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent. --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--sftp-pass string SSH password, leave blank to use ssh-agent. --s3-session-token string An AWS session token
--sftp-path-override string Override path used by SSH connection. --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--sftp-port string SSH port, leave blank to use default (22) --s3-storage-class string The storage class to use when storing new objects in S3.
--sftp-set-modtime Set the modified time on the remote if set. (default true) --s3-upload-concurrency int Concurrency for multipart uploads. (default 4)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker. --s3-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M)
--sftp-user string SSH username, leave blank for current username, ncw --s3-v2-auth If true use v2 authentication.
--size-only Skip based on size only, not mod-time or checksum --sftp-ask-password Allow asking for SFTP password when needed.
--skip-links Don't warn about skipped symlinks. --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) --sftp-host string SSH host to connect to
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40) --sftp-key-file string Path to PEM-encoded private key file, leave blank or set key-use-agent to use ssh-agent.
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") --sftp-key-file-pass string The passphrase to decrypt the PEM-encoded private key file.
--stats-one-line Make the stats fit on one line. --sftp-key-use-agent When set forces the usage of the ssh-agent.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") --sftp-pass string SSH password, leave blank to use ssh-agent.
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) --sftp-path-override string Override path used by SSH connection.
--suffix string Suffix for use with --backup-dir. --sftp-port string SSH port, leave blank to use default (22)
--swift-auth string Authentication URL for server (OS_AUTH_URL). --sftp-set-modtime Set the modified time on the remote if set. (default true)
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) --sftp-user string SSH username, leave blank for current username, ncw
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) --size-only Skip based on size only, not mod-time or checksum
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) --skip-links Don't warn about skipped symlinks.
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form. --stats-file-name-length int Max file name length in stats. 0 for no limit (default 45)
--swift-key string API key or password (OS_PASSWORD). --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--swift-region string Region name - optional (OS_REGION_NAME) --stats-one-line Make the stats fit on one line.
--swift-storage-policy string The storage policy to use when creating a new container --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL) --streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) --suffix string Suffix for use with --backup-dir.
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) --swift-application-credential-id string Application Credential ID (OS_APPLICATION_CREDENTIAL_ID)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) --swift-application-credential-name string Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME)
--swift-user string User name to log in (OS_USERNAME). --swift-application-credential-secret string Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET)
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). --swift-auth string Authentication URL for server (OS_AUTH_URL).
--syslog Use Syslog for logging --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--timeout duration IO idle timeout (default 5m0s) --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--tpslimit float Limit HTTP transactions per second to this. --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--track-renames When synchronizing, track file renames and do a server side move if possible --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--transfers int Number of file transfers to run in parallel. (default 4) --swift-key string API key or password (OS_PASSWORD).
--union-remotes string List of space separated remotes. --swift-no-chunk Don't chunk files during streaming upload.
-u, --update Skip files that are newer on the destination. --swift-region string Region name - optional (OS_REGION_NAME)
--use-server-modtime Use server modified time instead of object metadata --swift-storage-policy string The storage policy to use when creating a new container
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.45") --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
-v, --verbose count Print lots more stuff (repeat for more) --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon) --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--webdav-pass string Password. --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--webdav-url string URL of http host to connect to --swift-user string User name to log in (OS_USERNAME).
--webdav-user string User name --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--webdav-vendor string Name of the Webdav site/service/software you are using --syslog Use Syslog for logging
--yandex-client-id string Yandex Client Id --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--yandex-client-secret string Yandex Client Secret --timeout duration IO idle timeout (default 5m0s)
--yandex-unlink Remove existing public link to file/folder with link command rather than creating. --tpslimit float Limit HTTP transactions per second to this.
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
--union-remotes string List of space separated remotes.
-u, --update Skip files that are newer on the destination.
--use-cookies Enable session cookiejar.
--use-mmap Use mmap allocator (see docs).
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
--yandex-unlink Remove existing public link to file/folder with link command rather than creating.
``` ```
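The `--rc-*` flags above can be attached to any ordinary rclone command to expose the remote control API while it runs. The following is a minimal sketch of how they combine: the remote name `mydrive:`, the paths, and the credentials are invented placeholders (not values from this documentation), and the script only prints the command line instead of executing it, so it needs no configured remote.

```shell
# Sketch only: compose (but do not run) an rclone command that enables the
# remote control server with basic authentication.
set -eu

RC_ADDR="localhost:5572"   # matches the --rc-addr default shown above
RC_USER="admin"            # hypothetical username
RC_PASS="s3cret"           # hypothetical password

cmd="rclone sync /local/data mydrive:data \
  --rc --rc-addr $RC_ADDR --rc-user $RC_USER --rc-pass $RC_PASS \
  --transfers 4 --checkers 8 -v"

# Printing instead of executing keeps the sketch runnable anywhere.
printf '%s\n' "$cmd"
```

While such a command is running, the API it serves can be driven from another terminal with `rclone rc`.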
### SEE ALSO
* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
###### Auto generated by spf13/cobra on 9-Feb-2019


@@ -1,5 +1,5 @@
---
date: 2019-02-09T10:42:18Z
title: "rclone moveto"
slug: rclone_moveto
url: /commands/rclone_moveto/
@@ -56,285 +56,303 @@ rclone moveto source:path dest:path [flags]
### Options inherited from parent commands
```
      --acd-auth-url string   Auth server URL.
      --acd-client-id string   Amazon Application Client ID.
      --acd-client-secret string   Amazon Application Client Secret.
      --acd-templink-threshold SizeSuffix   Files >= this size will be downloaded via their tempLink. (default 9G)
      --acd-token-url string   Token server url.
      --acd-upload-wait-per-gb Duration   Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
      --alias-remote string   Remote or path to alias.
      --ask-password   Allow prompt for password for encrypted configuration. (default true)
      --auto-confirm   If enabled, do not request console confirmation.
      --azureblob-access-tier string   Access tier of blob: hot, cool or archive.
      --azureblob-account string   Storage Account Name (leave blank to use connection string or SAS URL)
      --azureblob-chunk-size SizeSuffix   Upload chunk size (<= 100MB). (default 4M)
      --azureblob-endpoint string   Endpoint for the service
      --azureblob-key string   Storage Account Key (leave blank to use connection string or SAS URL)
      --azureblob-list-chunk int   Size of blob list. (default 5000)
      --azureblob-sas-url string   SAS URL for container level access only
      --azureblob-upload-cutoff SizeSuffix   Cutoff for switching to chunked upload (<= 256MB). (default 256M)
      --b2-account string   Account ID or Application Key ID
      --b2-chunk-size SizeSuffix   Upload chunk size. Must fit in memory. (default 96M)
      --b2-disable-checksum   Disable checksums for large (> upload cutoff) files
      --b2-endpoint string   Endpoint for the service.
      --b2-hard-delete   Permanently delete files on remote removal, otherwise hide files.
      --b2-key string   Application Key
      --b2-test-mode string   A flag string for X-Bz-Test-Mode header for debugging.
      --b2-upload-cutoff SizeSuffix   Cutoff for switching to chunked upload. (default 200M)
      --b2-versions   Include old versions in directory listings.
      --backup-dir string   Make backups into hierarchy based in DIR.
      --bind string   Local address to bind to for outgoing connections, IPv4, IPv6 or name.
      --box-client-id string   Box App Client Id.
      --box-client-secret string   Box App Client Secret
      --box-commit-retries int   Max number of times to try committing a multipart file. (default 100)
      --box-upload-cutoff SizeSuffix   Cutoff for switching to multipart upload (>= 50MB). (default 50M)
      --buffer-size SizeSuffix   In memory buffer size when reading files for each --transfer. (default 16M)
      --bwlimit BwTimetable   Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
      --cache-chunk-clean-interval Duration   How often should the cache perform cleanups of the chunk storage. (default 1m0s)
      --cache-chunk-no-memory   Disable the in-memory cache for storing chunks during streaming.
      --cache-chunk-path string   Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend")
      --cache-chunk-size SizeSuffix   The size of a chunk (partial file data). (default 5M)
      --cache-chunk-total-size SizeSuffix   The total size that the chunks can take up on the local disk. (default 10G)
      --cache-db-path string   Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend")
      --cache-db-purge   Clear all the cached data for this remote on start.
      --cache-db-wait-time Duration   How long to wait for the DB to be available - 0 is unlimited (default 1s)
      --cache-dir string   Directory rclone will use for caching. (default "$HOME/.cache/rclone")
      --cache-info-age Duration   How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
      --cache-plex-insecure string   Skip all certificate verifications when connecting to the Plex server
      --cache-plex-password string   The password of the Plex user
      --cache-plex-url string   The URL of the Plex server
      --cache-plex-username string   The username of the Plex user
      --cache-read-retries int   How many times to retry a read from a cache storage. (default 10)
      --cache-remote string   Remote to cache.
      --cache-rps int   Limits the number of requests per second to the source FS (-1 to disable) (default -1)
      --cache-tmp-upload-path string   Directory to keep temporary files until they are uploaded.
      --cache-tmp-wait-time Duration   How long should files be stored in local cache before being uploaded (default 15s)
      --cache-workers int   How many workers should run in parallel to download chunks. (default 4)
      --cache-writes   Cache file data on writes through the FS
      --checkers int   Number of checkers to run in parallel. (default 8)
  -c, --checksum   Skip based on checksum (if available) & size, not mod-time & size
      --config string   Config file. (default "/home/ncw/.rclone.conf")
      --contimeout duration   Connect timeout (default 1m0s)
  -L, --copy-links   Follow symlinks and copy the pointed to item.
      --cpuprofile string   Write cpu profile to file
      --crypt-directory-name-encryption   Option to either encrypt directory names or leave them intact. (default true)
      --crypt-filename-encryption string   How to encrypt the filenames. (default "standard")
      --crypt-password string   Password or pass phrase for encryption.
      --crypt-password2 string   Password or pass phrase for salt. Optional but recommended.
      --crypt-remote string   Remote to encrypt/decrypt.
      --crypt-show-mapping   For all files listed show how the names encrypt.
      --delete-after   When synchronizing, delete files on destination after transferring (default)
      --delete-before   When synchronizing, delete files on destination before transferring
      --delete-during   When synchronizing, delete files during transfer
      --delete-excluded   Delete files on dest excluded from sync
      --disable string   Disable a comma separated list of features. Use help to see a list.
      --drive-acknowledge-abuse   Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
      --drive-allow-import-name-change   Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
      --drive-alternate-export   Use alternate export URLs for google documents export.,
      --drive-auth-owner-only   Only consider files owned by the authenticated user.
      --drive-chunk-size SizeSuffix   Upload chunk size. Must a power of 2 >= 256k. (default 8M)
      --drive-client-id string   Google Application Client Id
      --drive-client-secret string   Google Application Client Secret
      --drive-export-formats string   Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
      --drive-formats string   Deprecated: see export_formats
      --drive-impersonate string   Impersonate this user when using a service account.
      --drive-import-formats string   Comma separated list of preferred formats for uploading Google docs.
      --drive-keep-revision-forever   Keep new head revision of each file forever.
      --drive-list-chunk int   Size of listing chunk 100-1000. 0 to disable. (default 1000)
      --drive-pacer-burst int   Number of API calls to allow without sleeping. (default 100)
      --drive-pacer-min-sleep Duration   Minimum time to sleep between API calls. (default 100ms)
      --drive-root-folder-id string   ID of the root folder
      --drive-scope string   Scope that rclone should use when requesting access from drive.
      --drive-service-account-credentials string   Service Account Credentials JSON blob
      --drive-service-account-file string   Service Account Credentials JSON file path
      --drive-shared-with-me   Only show files that are shared with me.
      --drive-skip-gdocs   Skip google documents in all listings.
      --drive-team-drive string   ID of the Team Drive
      --drive-trashed-only   Only show files that are in the trash.
      --drive-upload-cutoff SizeSuffix   Cutoff for switching to chunked upload (default 8M)
      --drive-use-created-date   Use file created date instead of modified date.,
      --drive-use-trash   Send files to the trash instead of deleting permanently. (default true)
      --drive-v2-download-min-size SizeSuffix   If Object's are greater, use drive v2 API to download. (default off)
      --dropbox-chunk-size SizeSuffix   Upload chunk size. (< 150M). (default 48M)
      --dropbox-client-id string   Dropbox App Client Id
      --dropbox-client-secret string   Dropbox App Client Secret
      --dropbox-impersonate string   Impersonate this user when using a business account.
  -n, --dry-run   Do a trial run with no permanent changes
      --dump DumpFlags   List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
      --dump-bodies   Dump HTTP headers and bodies - may contain sensitive info
      --dump-headers   Dump HTTP bodies - may contain sensitive info
      --exclude stringArray   Exclude files matching pattern
      --exclude-from stringArray   Read exclude patterns from file
      --exclude-if-present string   Exclude directories if filename is present
      --fast-list   Use recursive list if available. Uses more memory but fewer transactions.
      --files-from stringArray   Read list of source-file names from file
  -f, --filter stringArray   Add a file-filtering rule
      --filter-from stringArray   Read filtering patterns from a file
      --ftp-host string   FTP host to connect to
      --ftp-pass string   FTP password
      --ftp-port string   FTP port, leave blank to use default (21)
      --ftp-user string   FTP username, leave blank for current username, $USER
      --gcs-bucket-acl string   Access Control List for new buckets.
      --gcs-client-id string   Google Application Client Id
      --gcs-client-secret string   Google Application Client Secret
      --gcs-location string   Location for the newly created buckets.
      --gcs-object-acl string   Access Control List for new objects.
      --gcs-project-number string   Project number.
      --gcs-service-account-file string   Service Account Credentials JSON file path
      --gcs-storage-class string   The storage class to use when storing objects in Google Cloud Storage.
      --http-url string   URL of http host to connect to
      --hubic-chunk-size SizeSuffix   Above this size files will be chunked into a _segments container. (default 5G)
      --hubic-client-id string   Hubic Client Id
      --hubic-client-secret string   Hubic Client Secret
      --hubic-no-chunk   Don't chunk files during streaming upload.
      --ignore-case   Ignore case in filters (case insensitive)
      --ignore-checksum   Skip post copy check of checksums.
      --ignore-errors   delete even if there are I/O errors
      --ignore-existing   Skip all files that exist on destination
      --ignore-size   Ignore size when skipping use mod-time or checksum.
  -I, --ignore-times   Don't skip files that match size and time - transfer all files
      --immutable   Do not modify files. Fail if existing files have been modified.
      --include stringArray   Include files matching pattern
      --include-from stringArray   Read include patterns from file
      --jottacloud-hard-delete   Delete files permanently rather than putting them into the trash.
--jottacloud-user string User Name --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--local-no-check-updated Don't check to see if the files change during upload --jottacloud-mountpoint string The mountpoint to use.
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated) --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
--local-nounc string Disable UNC (long path names) conversion on Windows --jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fail's. (default 10M)
--log-file string Log everything to this file --jottacloud-user string User Name:
--log-format string Comma separated list of log format options (default "date,time") -l, --links Translate symlinks to/from regular files with a '.rclonelink' extension
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") --local-no-check-updated Don't check to see if the files change during upload
--low-level-retries int Number of low level retries to do. (default 10) --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) --local-nounc string Disable UNC (long path names) conversion on Windows
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000) --log-file string Log everything to this file
--max-delete int When synchronizing, limit the number of deletes (default -1) --log-format string Comma separated list of log format options (default "date,time")
--max-depth int If set limits the recursion depth to this. (default -1) --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) --low-level-retries int Number of low level retries to do. (default 10)
--max-transfer int Maximum size of data to transfer. (default off) --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--mega-debug Output more debug from Mega. --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--mega-hard-delete Delete files permanently rather than putting them into the trash. --max-delete int When synchronizing, limit the number of deletes (default -1)
--mega-pass string Password. --max-depth int If set limits the recursion depth to this. (default -1)
--mega-user string User name --max-size SizeSuffix Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--memprofile string Write memory profile to file --max-transfer SizeSuffix Maximum size of data to transfer. (default off)
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) --mega-debug Output more debug from Mega.
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) --mega-hard-delete Delete files permanently rather than putting them into the trash.
--modify-window duration Max time diff to be considered the same (default 1ns) --mega-pass string Password.
--no-check-certificate Do not verify the server SSL certificate. Insecure. --mega-user string User name
--no-gzip-encoding Don't set Accept-Encoding: gzip. --memprofile string Write memory profile to file
--no-traverse Obsolete - does nothing. --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--no-update-modtime Don't update destination mod-time if files identical. --min-size SizeSuffix Only transfer files bigger than this in k or suffix b|k|M|G (default off)
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only). --modify-window duration Max time diff to be considered the same (default 1ns)
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M) --no-check-certificate Do not verify the server SSL certificate. Insecure.
--onedrive-client-id string Microsoft App Client Id --no-gzip-encoding Don't set Accept-Encoding: gzip.
--onedrive-client-secret string Microsoft App Client Secret --no-traverse Don't traverse destination file system on copy.
--onedrive-drive-id string The ID of the drive to use --no-update-modtime Don't update destination mod-time if files identical.
--onedrive-drive-type string The type of the drive ( personal | business | documentLibrary ) -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-expose-onenote-files Set to make OneNote files show up in directory listings. --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--opendrive-password string Password. --onedrive-client-id string Microsoft App Client Id
--opendrive-username string Username --onedrive-client-secret string Microsoft App Client Secret
--pcloud-client-id string Pcloud App Client Id --onedrive-drive-id string The ID of the drive to use
--pcloud-client-secret string Pcloud App Client Secret --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
-P, --progress Show progress during transfer. --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
--qingstor-access-key-id string QingStor Access Key ID --opendrive-password string Password.
--qingstor-connection-retries int Number of connection retries. (default 3) --opendrive-username string Username
--qingstor-endpoint string Enter a endpoint URL to connection QingStor API. --pcloud-client-id string Pcloud App Client Id
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank. --pcloud-client-secret string Pcloud App Client Secret
--qingstor-secret-access-key string QingStor Secret Access Key (password) -P, --progress Show progress during transfer.
--qingstor-zone string Zone to connect to. --qingstor-access-key-id string QingStor Access Key ID
-q, --quiet Print as little stuff as possible --qingstor-chunk-size SizeSuffix Chunk size to use for uploading. (default 4M)
--rc Enable the remote control server. --qingstor-connection-retries int Number of connection retries. (default 3)
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") --qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate) --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
--rc-client-ca string Client certificate authority to verify clients with --qingstor-secret-access-key string QingStor Secret Access Key (password)
--rc-files string Path to local files to serve on the HTTP server. --qingstor-upload-concurrency int Concurrency for multipart uploads. (default 1)
--rc-htpasswd string htpasswd file - if not provided no authentication is done --qingstor-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M)
--rc-key string SSL PEM Private key --qingstor-zone string Zone to connect to.
--rc-max-header-bytes int Maximum size of request header (default 4096) -q, --quiet Print as little stuff as possible
--rc-no-auth Don't require auth for certain methods. --rc Enable the remote control server.
--rc-pass string Password for authentication. --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
--rc-realm string realm for authentication (default "rclone") --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--rc-serve Enable the serving of remote objects. --rc-client-ca string Client certificate authority to verify clients with
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) --rc-files string Path to local files to serve on the HTTP server.
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) --rc-htpasswd string htpasswd file - if not provided no authentication is done
--rc-user string User name for authentication. --rc-key string SSL PEM Private key
--retries int Retry operations this many times if they fail (default 3) --rc-max-header-bytes int Maximum size of request header (default 4096)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) --rc-no-auth Don't require auth for certain methods.
--s3-access-key-id string AWS Access Key ID. --rc-pass string Password for authentication.
--s3-acl string Canned ACL used when creating buckets and storing or copying objects. --rc-realm string realm for authentication (default "rclone")
--s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M) --rc-serve Enable the serving of remote objects.
--s3-disable-checksum Don't store MD5 checksum with object metadata --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--s3-endpoint string Endpoint for S3 API. --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). --rc-user string User name for authentication.
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true) --retries int Retry operations this many times if they fail (default 3)
--s3-location-constraint string Location constraint - must be set to match the Region. --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--s3-provider string Choose your S3 provider. --s3-access-key-id string AWS Access Key ID.
--s3-region string Region to connect to. --s3-acl string Canned ACL used when creating buckets and storing or copying objects.
--s3-secret-access-key string AWS Secret Access Key (password) --s3-bucket-acl string Canned ACL used when creating buckets.
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
--s3-session-token string An AWS session token --s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. --s3-endpoint string Endpoint for S3 API.
--s3-storage-class string The storage class to use when storing new objects in S3. --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2) --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--s3-v2-auth If true use v2 authentication. --s3-location-constraint string Location constraint - must be set to match the Region.
--sftp-ask-password Allow asking for SFTP password when needed. --s3-provider string Choose your S3 provider.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. --s3-region string Region to connect to.
--sftp-host string SSH host to connect to --s3-secret-access-key string AWS Secret Access Key (password)
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent. --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--sftp-pass string SSH password, leave blank to use ssh-agent. --s3-session-token string An AWS session token
--sftp-path-override string Override path used by SSH connection. --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--sftp-port string SSH port, leave blank to use default (22) --s3-storage-class string The storage class to use when storing new objects in S3.
--sftp-set-modtime Set the modified time on the remote if set. (default true) --s3-upload-concurrency int Concurrency for multipart uploads. (default 4)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker. --s3-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M)
--sftp-user string SSH username, leave blank for current username, ncw --s3-v2-auth If true use v2 authentication.
--size-only Skip based on size only, not mod-time or checksum --sftp-ask-password Allow asking for SFTP password when needed.
--skip-links Don't warn about skipped symlinks. --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) --sftp-host string SSH host to connect to
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40) --sftp-key-file string Path to PEM-encoded private key file, leave blank or set key-use-agent to use ssh-agent.
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") --sftp-key-file-pass string The passphrase to decrypt the PEM-encoded private key file.
--stats-one-line Make the stats fit on one line. --sftp-key-use-agent When set forces the usage of the ssh-agent.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") --sftp-pass string SSH password, leave blank to use ssh-agent.
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) --sftp-path-override string Override path used by SSH connection.
--suffix string Suffix for use with --backup-dir. --sftp-port string SSH port, leave blank to use default (22)
--swift-auth string Authentication URL for server (OS_AUTH_URL). --sftp-set-modtime Set the modified time on the remote if set. (default true)
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) --sftp-user string SSH username, leave blank for current username, ncw
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) --size-only Skip based on size only, not mod-time or checksum
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) --skip-links Don't warn about skipped symlinks.
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form. --stats-file-name-length int Max file name length in stats. 0 for no limit (default 45)
--swift-key string API key or password (OS_PASSWORD). --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--swift-region string Region name - optional (OS_REGION_NAME) --stats-one-line Make the stats fit on one line.
--swift-storage-policy string The storage policy to use when creating a new container --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL) --streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) --suffix string Suffix for use with --backup-dir.
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) --swift-application-credential-id string Application Credential ID (OS_APPLICATION_CREDENTIAL_ID)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) --swift-application-credential-name string Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME)
--swift-user string User name to log in (OS_USERNAME). --swift-application-credential-secret string Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET)
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). --swift-auth string Authentication URL for server (OS_AUTH_URL).
--syslog Use Syslog for logging --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--timeout duration IO idle timeout (default 5m0s) --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--tpslimit float Limit HTTP transactions per second to this. --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--track-renames When synchronizing, track file renames and do a server side move if possible --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--transfers int Number of file transfers to run in parallel. (default 4) --swift-key string API key or password (OS_PASSWORD).
--union-remotes string List of space separated remotes. --swift-no-chunk Don't chunk files during streaming upload.
-u, --update Skip files that are newer on the destination. --swift-region string Region name - optional (OS_REGION_NAME)
--use-server-modtime Use server modified time instead of object metadata --swift-storage-policy string The storage policy to use when creating a new container
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.45") --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
-v, --verbose count Print lots more stuff (repeat for more) --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon) --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--webdav-pass string Password. --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--webdav-url string URL of http host to connect to --swift-user string User name to log in (OS_USERNAME).
--webdav-user string User name --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--webdav-vendor string Name of the Webdav site/service/software you are using --syslog Use Syslog for logging
--yandex-client-id string Yandex Client Id --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--yandex-client-secret string Yandex Client Secret --timeout duration IO idle timeout (default 5m0s)
--yandex-unlink Remove existing public link to file/folder with link command rather than creating. --tpslimit float Limit HTTP transactions per second to this.
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
--union-remotes string List of space separated remotes.
-u, --update Skip files that are newer on the destination.
--use-cookies Enable session cookiejar.
--use-mmap Use mmap allocator (see docs).
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
--yandex-unlink Remove existing public link to file/folder with link command rather than creating.
``` ```
### SEE ALSO

* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.

###### Auto generated by spf13/cobra on 9-Feb-2019
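Several of the flags listed above (`--filter-from`, `--exclude-from`, `--include-from`) read their patterns from a file, one rule per line. As a rough illustration only (the file name and paths below are hypothetical; see rclone's filtering documentation for the full rule syntax), such a filter file might look like:

```
# rclone filter rules: the first matching rule wins
# include JPEGs and everything under /backup
+ *.jpg
+ /backup/**
# exclude temporary files and the cache tree
- *.tmp
- /cache/**
# exclude everything else
- *
```

It would then be passed to a command with something like `rclone copy source:path dest:path --filter-from filter-file.txt`.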
@@ -1,5 +1,5 @@
---
date: 2019-02-09T10:42:18Z
title: "rclone ncdu"
slug: rclone_ncdu
url: /commands/rclone_ncdu/
@@ -56,285 +56,303 @@ rclone ncdu remote:path [flags]
### Options inherited from parent commands

```
      --acd-auth-url string                        Auth server URL.
      --acd-client-id string                       Amazon Application Client ID.
      --acd-client-secret string                   Amazon Application Client Secret.
      --acd-templink-threshold SizeSuffix          Files >= this size will be downloaded via their tempLink. (default 9G)
      --acd-token-url string                       Token server url.
      --acd-upload-wait-per-gb Duration            Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
      --alias-remote string                        Remote or path to alias.
      --ask-password                               Allow prompt for password for encrypted configuration. (default true)
      --auto-confirm                               If enabled, do not request console confirmation.
      --azureblob-access-tier string               Access tier of blob: hot, cool or archive.
      --azureblob-account string                   Storage Account Name (leave blank to use connection string or SAS URL)
      --azureblob-chunk-size SizeSuffix            Upload chunk size (<= 100MB). (default 4M)
      --azureblob-endpoint string                  Endpoint for the service
      --azureblob-key string                       Storage Account Key (leave blank to use connection string or SAS URL)
      --azureblob-list-chunk int                   Size of blob list. (default 5000)
      --azureblob-sas-url string                   SAS URL for container level access only
      --azureblob-upload-cutoff SizeSuffix         Cutoff for switching to chunked upload (<= 256MB). (default 256M)
      --b2-account string                          Account ID or Application Key ID
      --b2-chunk-size SizeSuffix                   Upload chunk size. Must fit in memory. (default 96M)
      --b2-disable-checksum                        Disable checksums for large (> upload cutoff) files
      --b2-endpoint string                         Endpoint for the service.
      --b2-hard-delete                             Permanently delete files on remote removal, otherwise hide files.
      --b2-key string                              Application Key
      --b2-test-mode string                        A flag string for X-Bz-Test-Mode header for debugging.
      --b2-upload-cutoff SizeSuffix                Cutoff for switching to chunked upload. (default 200M)
      --b2-versions                                Include old versions in directory listings.
      --backup-dir string                          Make backups into hierarchy based in DIR.
      --bind string                                Local address to bind to for outgoing connections, IPv4, IPv6 or name.
      --box-client-id string                       Box App Client Id.
      --box-client-secret string                   Box App Client Secret
      --box-commit-retries int                     Max number of times to try committing a multipart file. (default 100)
      --box-upload-cutoff SizeSuffix               Cutoff for switching to multipart upload (>= 50MB). (default 50M)
      --buffer-size SizeSuffix                     In memory buffer size when reading files for each --transfer. (default 16M)
      --bwlimit BwTimetable                        Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
      --cache-chunk-clean-interval Duration        How often should the cache perform cleanups of the chunk storage. (default 1m0s)
      --cache-chunk-no-memory                      Disable the in-memory cache for storing chunks during streaming.
      --cache-chunk-path string                    Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend")
      --cache-chunk-size SizeSuffix                The size of a chunk (partial file data). (default 5M)
      --cache-chunk-total-size SizeSuffix          The total size that the chunks can take up on the local disk. (default 10G)
      --cache-db-path string                       Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend")
      --cache-db-purge                             Clear all the cached data for this remote on start.
      --cache-db-wait-time Duration                How long to wait for the DB to be available - 0 is unlimited (default 1s)
      --cache-dir string                           Directory rclone will use for caching. (default "$HOME/.cache/rclone")
      --cache-info-age Duration                    How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
      --cache-plex-insecure string                 Skip all certificate verifications when connecting to the Plex server
      --cache-plex-password string                 The password of the Plex user
      --cache-plex-url string                      The URL of the Plex server
      --cache-plex-username string                 The username of the Plex user
      --cache-read-retries int                     How many times to retry a read from a cache storage. (default 10)
      --cache-remote string                        Remote to cache.
      --cache-rps int                              Limits the number of requests per second to the source FS (-1 to disable) (default -1)
      --cache-tmp-upload-path string               Directory to keep temporary files until they are uploaded.
      --cache-tmp-wait-time Duration               How long should files be stored in local cache before being uploaded (default 15s)
      --cache-workers int                          How many workers should run in parallel to download chunks. (default 4)
      --cache-writes                               Cache file data on writes through the FS
      --checkers int                               Number of checkers to run in parallel. (default 8)
  -c, --checksum                                   Skip based on checksum (if available) & size, not mod-time & size
      --config string                              Config file. (default "/home/ncw/.rclone.conf")
      --contimeout duration                        Connect timeout (default 1m0s)
  -L, --copy-links                                 Follow symlinks and copy the pointed to item.
      --cpuprofile string                          Write cpu profile to file
      --crypt-directory-name-encryption            Option to either encrypt directory names or leave them intact. (default true)
      --crypt-filename-encryption string           How to encrypt the filenames. (default "standard")
      --crypt-password string                      Password or pass phrase for encryption.
      --crypt-password2 string                     Password or pass phrase for salt. Optional but recommended.
      --crypt-remote string                        Remote to encrypt/decrypt.
      --crypt-show-mapping                         For all files listed show how the names encrypt.
      --delete-after                               When synchronizing, delete files on destination after transferring (default)
--delete-during When synchronizing, delete files during transfer --delete-before When synchronizing, delete files on destination before transferring
--delete-excluded Delete files on dest excluded from sync --delete-during When synchronizing, delete files during transfer
--disable string Disable a comma separated list of features. Use help to see a list. --delete-excluded Delete files on dest excluded from sync
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded. --disable string Disable a comma separated list of features. Use help to see a list.
--drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time. --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--drive-alternate-export Use alternate export URLs for google documents export., --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
--drive-auth-owner-only Only consider files owned by the authenticated user. --drive-alternate-export Use alternate export URLs for google documents export.,
--drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M) --drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-client-id string Google Application Client Id --drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M)
--drive-client-secret string Google Application Client Secret --drive-client-id string Google Application Client Id
--drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") --drive-client-secret string Google Application Client Secret
--drive-formats string Deprecated: see export_formats --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-impersonate string Impersonate this user when using a service account. --drive-formats string Deprecated: see export_formats
--drive-import-formats string Comma separated list of preferred formats for uploading Google docs. --drive-impersonate string Impersonate this user when using a service account.
--drive-keep-revision-forever Keep new head revision of each file forever. --drive-import-formats string Comma separated list of preferred formats for uploading Google docs.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000) --drive-keep-revision-forever Keep new head revision of each file forever.
--drive-root-folder-id string ID of the root folder --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-scope string Scope that rclone should use when requesting access from drive. --drive-pacer-burst int Number of API calls to allow without sleeping. (default 100)
--drive-service-account-credentials string Service Account Credentials JSON blob --drive-pacer-min-sleep Duration Minimum time to sleep between API calls. (default 100ms)
--drive-service-account-file string Service Account Credentials JSON file path --drive-root-folder-id string ID of the root folder
--drive-shared-with-me Only show files that are shared with me. --drive-scope string Scope that rclone should use when requesting access from drive.
--drive-skip-gdocs Skip google documents in all listings. --drive-service-account-credentials string Service Account Credentials JSON blob
--drive-team-drive string ID of the Team Drive --drive-service-account-file string Service Account Credentials JSON file path
--drive-trashed-only Only show files that are in the trash. --drive-shared-with-me Only show files that are shared with me.
--drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M) --drive-skip-gdocs Skip google documents in all listings.
--drive-use-created-date Use file created date instead of modified date., --drive-team-drive string ID of the Team Drive
--drive-use-trash Send files to the trash instead of deleting permanently. (default true) --drive-trashed-only Only show files that are in the trash.
--drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off) --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M) --drive-use-created-date Use file created date instead of modified date.,
--dropbox-client-id string Dropbox App Client Id --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--dropbox-client-secret string Dropbox App Client Secret --drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off)
--dropbox-impersonate string Impersonate this user when using a business account. --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M)
-n, --dry-run Do a trial run with no permanent changes --dropbox-client-id string Dropbox App Client Id
--dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles --dropbox-client-secret string Dropbox App Client Secret
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info --dropbox-impersonate string Impersonate this user when using a business account.
--dump-headers Dump HTTP bodies - may contain sensitive info -n, --dry-run Do a trial run with no permanent changes
--exclude stringArray Exclude files matching pattern --dump DumpFlags List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--exclude-from stringArray Read exclude patterns from file --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--exclude-if-present string Exclude directories if filename is present --dump-headers Dump HTTP bodies - may contain sensitive info
--fast-list Use recursive list if available. Uses more memory but fewer transactions. --exclude stringArray Exclude files matching pattern
--files-from stringArray Read list of source-file names from file --exclude-from stringArray Read exclude patterns from file
-f, --filter stringArray Add a file-filtering rule --exclude-if-present string Exclude directories if filename is present
--filter-from stringArray Read filtering patterns from a file --fast-list Use recursive list if available. Uses more memory but fewer transactions.
--ftp-host string FTP host to connect to --files-from stringArray Read list of source-file names from file
--ftp-pass string FTP password -f, --filter stringArray Add a file-filtering rule
--ftp-port string FTP port, leave blank to use default (21) --filter-from stringArray Read filtering patterns from a file
--ftp-user string FTP username, leave blank for current username, $USER --ftp-host string FTP host to connect to
--gcs-bucket-acl string Access Control List for new buckets. --ftp-pass string FTP password
--gcs-client-id string Google Application Client Id --ftp-port string FTP port, leave blank to use default (21)
--gcs-client-secret string Google Application Client Secret --ftp-user string FTP username, leave blank for current username, $USER
--gcs-location string Location for the newly created buckets. --gcs-bucket-acl string Access Control List for new buckets.
--gcs-object-acl string Access Control List for new objects. --gcs-client-id string Google Application Client Id
--gcs-project-number string Project number. --gcs-client-secret string Google Application Client Secret
--gcs-service-account-file string Service Account Credentials JSON file path --gcs-location string Location for the newly created buckets.
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage. --gcs-object-acl string Access Control List for new objects.
--http-url string URL of http host to connect to --gcs-project-number string Project number.
--hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) --gcs-service-account-file string Service Account Credentials JSON file path
--hubic-client-id string Hubic Client Id --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--hubic-client-secret string Hubic Client Secret --http-url string URL of http host to connect to
--ignore-case Ignore case in filters (case insensitive) --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--ignore-checksum Skip post copy check of checksums. --hubic-client-id string Hubic Client Id
--ignore-errors delete even if there are I/O errors --hubic-client-secret string Hubic Client Secret
--ignore-existing Skip all files that exist on destination --hubic-no-chunk Don't chunk files during streaming upload.
--ignore-size Ignore size when skipping use mod-time or checksum. --ignore-case Ignore case in filters (case insensitive)
-I, --ignore-times Don't skip files that match size and time - transfer all files --ignore-checksum Skip post copy check of checksums.
--immutable Do not modify files. Fail if existing files have been modified. --ignore-errors delete even if there are I/O errors
--include stringArray Include files matching pattern --ignore-existing Skip all files that exist on destination
--include-from stringArray Read include patterns from file --ignore-size Ignore size when skipping use mod-time or checksum.
--jottacloud-hard-delete Delete files permanently rather than putting them into the trash. -I, --ignore-times Don't skip files that match size and time - transfer all files
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M) --immutable Do not modify files. Fail if existing files have been modified.
--jottacloud-mountpoint string The mountpoint to use. --include stringArray Include files matching pattern
--jottacloud-pass string Password. --include-from stringArray Read include patterns from file
--jottacloud-unlink Remove existing public link to file/folder with link command rather than creating. --jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
--jottacloud-user string User Name --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--local-no-check-updated Don't check to see if the files change during upload --jottacloud-mountpoint string The mountpoint to use.
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated) --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
--local-nounc string Disable UNC (long path names) conversion on Windows --jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fail's. (default 10M)
--log-file string Log everything to this file --jottacloud-user string User Name:
--log-format string Comma separated list of log format options (default "date,time") -l, --links Translate symlinks to/from regular files with a '.rclonelink' extension
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") --local-no-check-updated Don't check to see if the files change during upload
--low-level-retries int Number of low level retries to do. (default 10) --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) --local-nounc string Disable UNC (long path names) conversion on Windows
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000) --log-file string Log everything to this file
--max-delete int When synchronizing, limit the number of deletes (default -1) --log-format string Comma separated list of log format options (default "date,time")
--max-depth int If set limits the recursion depth to this. (default -1) --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) --low-level-retries int Number of low level retries to do. (default 10)
--max-transfer int Maximum size of data to transfer. (default off) --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--mega-debug Output more debug from Mega. --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--mega-hard-delete Delete files permanently rather than putting them into the trash. --max-delete int When synchronizing, limit the number of deletes (default -1)
--mega-pass string Password. --max-depth int If set limits the recursion depth to this. (default -1)
--mega-user string User name --max-size SizeSuffix Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--memprofile string Write memory profile to file --max-transfer SizeSuffix Maximum size of data to transfer. (default off)
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) --mega-debug Output more debug from Mega.
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) --mega-hard-delete Delete files permanently rather than putting them into the trash.
--modify-window duration Max time diff to be considered the same (default 1ns) --mega-pass string Password.
--no-check-certificate Do not verify the server SSL certificate. Insecure. --mega-user string User name
--no-gzip-encoding Don't set Accept-Encoding: gzip. --memprofile string Write memory profile to file
--no-traverse Obsolete - does nothing. --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--no-update-modtime Don't update destination mod-time if files identical. --min-size SizeSuffix Only transfer files bigger than this in k or suffix b|k|M|G (default off)
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only). --modify-window duration Max time diff to be considered the same (default 1ns)
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M) --no-check-certificate Do not verify the server SSL certificate. Insecure.
--onedrive-client-id string Microsoft App Client Id --no-gzip-encoding Don't set Accept-Encoding: gzip.
--onedrive-client-secret string Microsoft App Client Secret --no-traverse Don't traverse destination file system on copy.
--onedrive-drive-id string The ID of the drive to use --no-update-modtime Don't update destination mod-time if files identical.
--onedrive-drive-type string The type of the drive ( personal | business | documentLibrary ) -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-expose-onenote-files Set to make OneNote files show up in directory listings. --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--opendrive-password string Password. --onedrive-client-id string Microsoft App Client Id
--opendrive-username string Username --onedrive-client-secret string Microsoft App Client Secret
--pcloud-client-id string Pcloud App Client Id --onedrive-drive-id string The ID of the drive to use
--pcloud-client-secret string Pcloud App Client Secret --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
-P, --progress Show progress during transfer. --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
--qingstor-access-key-id string QingStor Access Key ID --opendrive-password string Password.
--qingstor-connection-retries int Number of connection retries. (default 3) --opendrive-username string Username
--qingstor-endpoint string Enter a endpoint URL to connection QingStor API. --pcloud-client-id string Pcloud App Client Id
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank. --pcloud-client-secret string Pcloud App Client Secret
--qingstor-secret-access-key string QingStor Secret Access Key (password) -P, --progress Show progress during transfer.
--qingstor-zone string Zone to connect to. --qingstor-access-key-id string QingStor Access Key ID
-q, --quiet Print as little stuff as possible --qingstor-chunk-size SizeSuffix Chunk size to use for uploading. (default 4M)
--rc Enable the remote control server. --qingstor-connection-retries int Number of connection retries. (default 3)
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") --qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate) --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
--rc-client-ca string Client certificate authority to verify clients with --qingstor-secret-access-key string QingStor Secret Access Key (password)
--rc-files string Path to local files to serve on the HTTP server. --qingstor-upload-concurrency int Concurrency for multipart uploads. (default 1)
--rc-htpasswd string htpasswd file - if not provided no authentication is done --qingstor-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M)
--rc-key string SSL PEM Private key --qingstor-zone string Zone to connect to.
--rc-max-header-bytes int Maximum size of request header (default 4096) -q, --quiet Print as little stuff as possible
--rc-no-auth Don't require auth for certain methods. --rc Enable the remote control server.
--rc-pass string Password for authentication. --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
--rc-realm string realm for authentication (default "rclone") --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--rc-serve Enable the serving of remote objects. --rc-client-ca string Client certificate authority to verify clients with
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) --rc-files string Path to local files to serve on the HTTP server.
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) --rc-htpasswd string htpasswd file - if not provided no authentication is done
--rc-user string User name for authentication. --rc-key string SSL PEM Private key
--retries int Retry operations this many times if they fail (default 3) --rc-max-header-bytes int Maximum size of request header (default 4096)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) --rc-no-auth Don't require auth for certain methods.
--s3-access-key-id string AWS Access Key ID. --rc-pass string Password for authentication.
--s3-acl string Canned ACL used when creating buckets and storing or copying objects. --rc-realm string realm for authentication (default "rclone")
--s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M) --rc-serve Enable the serving of remote objects.
--s3-disable-checksum Don't store MD5 checksum with object metadata --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--s3-endpoint string Endpoint for S3 API. --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). --rc-user string User name for authentication.
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true) --retries int Retry operations this many times if they fail (default 3)
--s3-location-constraint string Location constraint - must be set to match the Region. --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--s3-provider string Choose your S3 provider. --s3-access-key-id string AWS Access Key ID.
--s3-region string Region to connect to. --s3-acl string Canned ACL used when creating buckets and storing or copying objects.
--s3-secret-access-key string AWS Secret Access Key (password) --s3-bucket-acl string Canned ACL used when creating buckets.
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
--s3-session-token string An AWS session token --s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. --s3-endpoint string Endpoint for S3 API.
--s3-storage-class string The storage class to use when storing new objects in S3. --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2) --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--s3-v2-auth If true use v2 authentication. --s3-location-constraint string Location constraint - must be set to match the Region.
--sftp-ask-password Allow asking for SFTP password when needed. --s3-provider string Choose your S3 provider.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. --s3-region string Region to connect to.
--sftp-host string SSH host to connect to --s3-secret-access-key string AWS Secret Access Key (password)
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent. --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--sftp-pass string SSH password, leave blank to use ssh-agent. --s3-session-token string An AWS session token
--sftp-path-override string Override path used by SSH connection. --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--sftp-port string SSH port, leave blank to use default (22) --s3-storage-class string The storage class to use when storing new objects in S3.
--sftp-set-modtime Set the modified time on the remote if set. (default true) --s3-upload-concurrency int Concurrency for multipart uploads. (default 4)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker. --s3-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M)
--sftp-user string SSH username, leave blank for current username, ncw --s3-v2-auth If true use v2 authentication.
--size-only Skip based on size only, not mod-time or checksum --sftp-ask-password Allow asking for SFTP password when needed.
--skip-links Don't warn about skipped symlinks. --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) --sftp-host string SSH host to connect to
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40) --sftp-key-file string Path to PEM-encoded private key file, leave blank or set key-use-agent to use ssh-agent.
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") --sftp-key-file-pass string The passphrase to decrypt the PEM-encoded private key file.
--stats-one-line Make the stats fit on one line. --sftp-key-use-agent When set forces the usage of the ssh-agent.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") --sftp-pass string SSH password, leave blank to use ssh-agent.
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) --sftp-path-override string Override path used by SSH connection.
--suffix string Suffix for use with --backup-dir. --sftp-port string SSH port, leave blank to use default (22)
--swift-auth string Authentication URL for server (OS_AUTH_URL). --sftp-set-modtime Set the modified time on the remote if set. (default true)
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) --sftp-user string SSH username, leave blank for current username, ncw
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) --size-only Skip based on size only, not mod-time or checksum
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) --skip-links Don't warn about skipped symlinks.
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form. --stats-file-name-length int Max file name length in stats. 0 for no limit (default 45)
--swift-key string API key or password (OS_PASSWORD). --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--swift-region string Region name - optional (OS_REGION_NAME) --stats-one-line Make the stats fit on one line.
--swift-storage-policy string The storage policy to use when creating a new container --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL) --streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) --suffix string Suffix for use with --backup-dir.
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) --swift-application-credential-id string Application Credential ID (OS_APPLICATION_CREDENTIAL_ID)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) --swift-application-credential-name string Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME)
--swift-user string User name to log in (OS_USERNAME). --swift-application-credential-secret string Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET)
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). --swift-auth string Authentication URL for server (OS_AUTH_URL).
--syslog Use Syslog for logging --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--timeout duration IO idle timeout (default 5m0s) --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--tpslimit float Limit HTTP transactions per second to this. --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
````diff
-  --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
-  --track-renames When synchronizing, track file renames and do a server side move if possible
-  --transfers int Number of file transfers to run in parallel. (default 4)
-  --union-remotes string List of space separated remotes.
-  -u, --update Skip files that are newer on the destination.
-  --use-server-modtime Use server modified time instead of object metadata
-  --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.45")
-  -v, --verbose count Print lots more stuff (repeat for more)
-  --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
-  --webdav-pass string Password.
-  --webdav-url string URL of http host to connect to
-  --webdav-user string User name
-  --webdav-vendor string Name of the Webdav site/service/software you are using
-  --yandex-client-id string Yandex Client Id
-  --yandex-client-secret string Yandex Client Secret
-  --yandex-unlink Remove existing public link to file/folder with link command rather than creating.
+  --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
+  --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
+  --swift-key string API key or password (OS_PASSWORD).
+  --swift-no-chunk Don't chunk files during streaming upload.
+  --swift-region string Region name - optional (OS_REGION_NAME)
+  --swift-storage-policy string The storage policy to use when creating a new container
+  --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
+  --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
+  --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
+  --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
+  --swift-user string User name to log in (OS_USERNAME).
+  --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
+  --syslog Use Syslog for logging
+  --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
+  --timeout duration IO idle timeout (default 5m0s)
+  --tpslimit float Limit HTTP transactions per second to this.
+  --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
+  --track-renames When synchronizing, track file renames and do a server side move if possible
+  --transfers int Number of file transfers to run in parallel. (default 4)
+  --union-remotes string List of space separated remotes.
+  -u, --update Skip files that are newer on the destination.
+  --use-cookies Enable session cookiejar.
+  --use-mmap Use mmap allocator (see docs).
+  --use-server-modtime Use server modified time instead of object metadata
+  --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46")
+  -v, --verbose count Print lots more stuff (repeat for more)
+  --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
+  --webdav-pass string Password.
+  --webdav-url string URL of http host to connect to
+  --webdav-user string User name
+  --webdav-vendor string Name of the Webdav site/service/software you are using
+  --yandex-client-id string Yandex Client Id
+  --yandex-client-secret string Yandex Client Secret
+  --yandex-unlink Remove existing public link to file/folder with link command rather than creating.
   ```
   ### SEE ALSO
   * [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
-  ###### Auto generated by spf13/cobra on 24-Nov-2018
+  ###### Auto generated by spf13/cobra on 9-Feb-2019
````
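As a quick illustration of a few of the global flags this commit introduces (`--use-mmap`, `--use-cookies`, `--b2-disable-checksum`), the sketch below combines them on an ordinary copy. The remote name `b2:` and both paths are placeholders, and rclone v1.46 or later is assumed to be installed; this is a hypothetical usage example, not output taken from the commit.

```shell
# Hypothetical copy to a Backblaze B2 remote using flags added in v1.46:
# - --b2-disable-checksum skips checksums for large (> upload cutoff) files
# - --use-mmap switches the read buffer to the mmap allocator
# - --use-cookies keeps session cookies between HTTP requests
rclone copy /data b2:bucket/backup \
  --b2-disable-checksum \
  --use-mmap \
  --use-cookies
```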

View File

````diff
@@ -1,5 +1,5 @@
 ---
-date: 2018-11-24T13:43:29Z
+date: 2019-02-09T10:42:18Z
 title: "rclone obscure"
 slug: rclone_obscure
 url: /commands/rclone_obscure/
````
````diff
@@ -25,285 +25,303 @@ rclone obscure password [flags]
   ### Options inherited from parent commands
   ```
   --acd-auth-url string Auth server URL.
   --acd-client-id string Amazon Application Client ID.
   --acd-client-secret string Amazon Application Client Secret.
   --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
   --acd-token-url string Token server url.
   --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
   --alias-remote string Remote or path to alias.
   --ask-password Allow prompt for password for encrypted configuration. (default true)
   --auto-confirm If enabled, do not request console confirmation.
   --azureblob-access-tier string Access tier of blob: hot, cool or archive.
   --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
   --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M)
   --azureblob-endpoint string Endpoint for the service
   --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
   --azureblob-list-chunk int Size of blob list. (default 5000)
   --azureblob-sas-url string SAS URL for container level access only
   --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M)
   --b2-account string Account ID or Application Key ID
   --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
+  --b2-disable-checksum Disable checksums for large (> upload cutoff) files
   --b2-endpoint string Endpoint for the service.
   --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
   --b2-key string Application Key
   --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
   --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M)
   --b2-versions Include old versions in directory listings.
   --backup-dir string Make backups into hierarchy based in DIR.
   --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
   --box-client-id string Box App Client Id.
   --box-client-secret string Box App Client Secret
   --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
   --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
-  --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
+  --buffer-size SizeSuffix In memory buffer size when reading files for each --transfer. (default 16M)
   --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
   --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
   --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
   --cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend")
   --cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M)
   --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G)
   --cache-db-path string Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend")
   --cache-db-purge Clear all the cached data for this remote on start.
   --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
   --cache-dir string Directory rclone will use for caching. (default "$HOME/.cache/rclone")
   --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
   --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server
   --cache-plex-password string The password of the Plex user
   --cache-plex-url string The URL of the Plex server
   --cache-plex-username string The username of the Plex user
   --cache-read-retries int How many times to retry a read from a cache storage. (default 10)
   --cache-remote string Remote to cache.
   --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1)
   --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded.
   --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
   --cache-workers int How many workers should run in parallel to download chunks. (default 4)
   --cache-writes Cache file data on writes through the FS
   --checkers int Number of checkers to run in parallel. (default 8)
-  -c, --checksum Skip based on checksum & size, not mod-time & size
+  -c, --checksum Skip based on checksum (if available) & size, not mod-time & size
   --config string Config file. (default "/home/ncw/.rclone.conf")
   --contimeout duration Connect timeout (default 1m0s)
   -L, --copy-links Follow symlinks and copy the pointed to item.
   --cpuprofile string Write cpu profile to file
   --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
   --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
   --crypt-password string Password or pass phrase for encryption.
   --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
   --crypt-remote string Remote to encrypt/decrypt.
   --crypt-show-mapping For all files listed show how the names encrypt.
   --delete-after When synchronizing, delete files on destination after transferring (default)
   --delete-before When synchronizing, delete files on destination before transferring
   --delete-during When synchronizing, delete files during transfer
   --delete-excluded Delete files on dest excluded from sync
   --disable string Disable a comma separated list of features. Use help to see a list.
   --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
   --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
   --drive-alternate-export Use alternate export URLs for google documents export.,
   --drive-auth-owner-only Only consider files owned by the authenticated user.
   --drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M)
   --drive-client-id string Google Application Client Id
   --drive-client-secret string Google Application Client Secret
   --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
   --drive-formats string Deprecated: see export_formats
   --drive-impersonate string Impersonate this user when using a service account.
   --drive-import-formats string Comma separated list of preferred formats for uploading Google docs.
   --drive-keep-revision-forever Keep new head revision of each file forever.
   --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
+  --drive-pacer-burst int Number of API calls to allow without sleeping. (default 100)
+  --drive-pacer-min-sleep Duration Minimum time to sleep between API calls. (default 100ms)
   --drive-root-folder-id string ID of the root folder
   --drive-scope string Scope that rclone should use when requesting access from drive.
   --drive-service-account-credentials string Service Account Credentials JSON blob
   --drive-service-account-file string Service Account Credentials JSON file path
   --drive-shared-with-me Only show files that are shared with me.
   --drive-skip-gdocs Skip google documents in all listings.
   --drive-team-drive string ID of the Team Drive
   --drive-trashed-only Only show files that are in the trash.
   --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
   --drive-use-created-date Use file created date instead of modified date.,
   --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
   --drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off)
   --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M)
   --dropbox-client-id string Dropbox App Client Id
   --dropbox-client-secret string Dropbox App Client Secret
   --dropbox-impersonate string Impersonate this user when using a business account.
   -n, --dry-run Do a trial run with no permanent changes
-  --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
+  --dump DumpFlags List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
   --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
   --dump-headers Dump HTTP bodies - may contain sensitive info
   --exclude stringArray Exclude files matching pattern
   --exclude-from stringArray Read exclude patterns from file
   --exclude-if-present string Exclude directories if filename is present
   --fast-list Use recursive list if available. Uses more memory but fewer transactions.
   --files-from stringArray Read list of source-file names from file
   -f, --filter stringArray Add a file-filtering rule
   --filter-from stringArray Read filtering patterns from a file
   --ftp-host string FTP host to connect to
   --ftp-pass string FTP password
   --ftp-port string FTP port, leave blank to use default (21)
   --ftp-user string FTP username, leave blank for current username, $USER
   --gcs-bucket-acl string Access Control List for new buckets.
   --gcs-client-id string Google Application Client Id
   --gcs-client-secret string Google Application Client Secret
   --gcs-location string Location for the newly created buckets.
   --gcs-object-acl string Access Control List for new objects.
   --gcs-project-number string Project number.
   --gcs-service-account-file string Service Account Credentials JSON file path
   --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
   --http-url string URL of http host to connect to
   --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
   --hubic-client-id string Hubic Client Id
   --hubic-client-secret string Hubic Client Secret
+  --hubic-no-chunk Don't chunk files during streaming upload.
   --ignore-case Ignore case in filters (case insensitive)
   --ignore-checksum Skip post copy check of checksums.
   --ignore-errors delete even if there are I/O errors
   --ignore-existing Skip all files that exist on destination
   --ignore-size Ignore size when skipping use mod-time or checksum.
   -I, --ignore-times Don't skip files that match size and time - transfer all files
   --immutable Do not modify files. Fail if existing files have been modified.
   --include stringArray Include files matching pattern
   --include-from stringArray Read include patterns from file
   --jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
   --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
   --jottacloud-mountpoint string The mountpoint to use.
-  --jottacloud-pass string Password.
   --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
+  --jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fail's. (default 10M)
-  --jottacloud-user string User Name
+  --jottacloud-user string User Name:
+  -l, --links Translate symlinks to/from regular files with a '.rclonelink' extension
   --local-no-check-updated Don't check to see if the files change during upload
   --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
   --local-nounc string Disable UNC (long path names) conversion on Windows
   --log-file string Log everything to this file
   --log-format string Comma separated list of log format options (default "date,time")
   --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
   --low-level-retries int Number of low level retries to do. (default 10)
-  --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+  --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
   --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
   --max-delete int When synchronizing, limit the number of deletes (default -1)
   --max-depth int If set limits the recursion depth to this. (default -1)
-  --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
+  --max-size SizeSuffix Only transfer files smaller than this in k or suffix b|k|M|G (default off)
-  --max-transfer int Maximum size of data to transfer. (default off)
+  --max-transfer SizeSuffix Maximum size of data to transfer. (default off)
   --mega-debug Output more debug from Mega.
   --mega-hard-delete Delete files permanently rather than putting them into the trash.
   --mega-pass string Password.
   --mega-user string User name
   --memprofile string Write memory profile to file
-  --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+  --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
-  --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
+  --min-size SizeSuffix Only transfer files bigger than this in k or suffix b|k|M|G (default off)
   --modify-window duration Max time diff to be considered the same (default 1ns)
   --no-check-certificate Do not verify the server SSL certificate. Insecure.
   --no-gzip-encoding Don't set Accept-Encoding: gzip.
-  --no-traverse Obsolete - does nothing.
+  --no-traverse Don't traverse destination file system on copy.
   --no-update-modtime Don't update destination mod-time if files identical.
   -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
   --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
   --onedrive-client-id string Microsoft App Client Id
   --onedrive-client-secret string Microsoft App Client Secret
   --onedrive-drive-id string The ID of the drive to use
   --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
   --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
   --opendrive-password string Password.
   --opendrive-username string Username
   --pcloud-client-id string Pcloud App Client Id
   --pcloud-client-secret string Pcloud App Client Secret
   -P, --progress Show progress during transfer.
   --qingstor-access-key-id string QingStor Access Key ID
+  --qingstor-chunk-size SizeSuffix Chunk size to use for uploading. (default 4M)
   --qingstor-connection-retries int Number of connection retries. (default 3)
   --qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
   --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
   --qingstor-secret-access-key string QingStor Secret Access Key (password)
+  --qingstor-upload-concurrency int Concurrency for multipart uploads. (default 1)
+  --qingstor-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M)
   --qingstor-zone string Zone to connect to.
   -q, --quiet Print as little stuff as possible
   --rc Enable the remote control server.
   --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
   --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
   --rc-client-ca string Client certificate authority to verify clients with
   --rc-files string Path to local files to serve on the HTTP server.
   --rc-htpasswd string htpasswd file - if not provided no authentication is done
   --rc-key string SSL PEM Private key
   --rc-max-header-bytes int Maximum size of request header (default 4096)
   --rc-no-auth Don't require auth for certain methods.
   --rc-pass string Password for authentication.
   --rc-realm string realm for authentication (default "rclone")
   --rc-serve Enable the serving of remote objects.
   --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
   --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
   --rc-user string User name for authentication.
   --retries int Retry operations this many times if they fail (default 3)
   --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
   --s3-access-key-id string AWS Access Key ID.
   --s3-acl string Canned ACL used when creating buckets and storing or copying objects.
+  --s3-bucket-acl string Canned ACL used when creating buckets.
   --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
   --s3-disable-checksum Don't store MD5 checksum with object metadata
   --s3-endpoint string Endpoint for S3 API.
   --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
   --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
   --s3-location-constraint string Location constraint - must be set to match the Region.
   --s3-provider string Choose your S3 provider.
   --s3-region string Region to connect to.
   --s3-secret-access-key string AWS Secret Access Key (password)
   --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
   --s3-session-token string An AWS session token
   --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
   --s3-storage-class string The storage class to use when storing new objects in S3.
   --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
   --s3-v2-auth If true use v2 authentication.
   --sftp-ask-password Allow asking for SFTP password when needed.
````
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. --s3-region string Region to connect to.
--sftp-host string SSH host to connect to --s3-secret-access-key string AWS Secret Access Key (password)
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent. --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--sftp-pass string SSH password, leave blank to use ssh-agent. --s3-session-token string An AWS session token
--sftp-path-override string Override path used by SSH connection. --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--sftp-port string SSH port, leave blank to use default (22) --s3-storage-class string The storage class to use when storing new objects in S3.
--sftp-set-modtime Set the modified time on the remote if set. (default true) --s3-upload-concurrency int Concurrency for multipart uploads. (default 4)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker. --s3-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M)
--sftp-user string SSH username, leave blank for current username, ncw --s3-v2-auth If true use v2 authentication.
--size-only Skip based on size only, not mod-time or checksum --sftp-ask-password Allow asking for SFTP password when needed.
--skip-links Don't warn about skipped symlinks. --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) --sftp-host string SSH host to connect to
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40) --sftp-key-file string Path to PEM-encoded private key file, leave blank or set key-use-agent to use ssh-agent.
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") --sftp-key-file-pass string The passphrase to decrypt the PEM-encoded private key file.
--stats-one-line Make the stats fit on one line. --sftp-key-use-agent When set forces the usage of the ssh-agent.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") --sftp-pass string SSH password, leave blank to use ssh-agent.
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) --sftp-path-override string Override path used by SSH connection.
--suffix string Suffix for use with --backup-dir. --sftp-port string SSH port, leave blank to use default (22)
--swift-auth string Authentication URL for server (OS_AUTH_URL). --sftp-set-modtime Set the modified time on the remote if set. (default true)
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) --sftp-user string SSH username, leave blank for current username, ncw
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) --size-only Skip based on size only, not mod-time or checksum
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) --skip-links Don't warn about skipped symlinks.
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form. --stats-file-name-length int Max file name length in stats. 0 for no limit (default 45)
--swift-key string API key or password (OS_PASSWORD). --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--swift-region string Region name - optional (OS_REGION_NAME) --stats-one-line Make the stats fit on one line.
--swift-storage-policy string The storage policy to use when creating a new container --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL) --streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) --suffix string Suffix for use with --backup-dir.
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) --swift-application-credential-id string Application Credential ID (OS_APPLICATION_CREDENTIAL_ID)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) --swift-application-credential-name string Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME)
--swift-user string User name to log in (OS_USERNAME). --swift-application-credential-secret string Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET)
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). --swift-auth string Authentication URL for server (OS_AUTH_URL).
--syslog Use Syslog for logging --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--timeout duration IO idle timeout (default 5m0s) --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--tpslimit float Limit HTTP transactions per second to this. --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--track-renames When synchronizing, track file renames and do a server side move if possible --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--transfers int Number of file transfers to run in parallel. (default 4) --swift-key string API key or password (OS_PASSWORD).
--union-remotes string List of space separated remotes. --swift-no-chunk Don't chunk files during streaming upload.
-u, --update Skip files that are newer on the destination. --swift-region string Region name - optional (OS_REGION_NAME)
--use-server-modtime Use server modified time instead of object metadata --swift-storage-policy string The storage policy to use when creating a new container
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.45") --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
-v, --verbose count Print lots more stuff (repeat for more) --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon) --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--webdav-pass string Password. --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--webdav-url string URL of http host to connect to --swift-user string User name to log in (OS_USERNAME).
--webdav-user string User name --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--webdav-vendor string Name of the Webdav site/service/software you are using --syslog Use Syslog for logging
--yandex-client-id string Yandex Client Id --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--yandex-client-secret string Yandex Client Secret --timeout duration IO idle timeout (default 5m0s)
--yandex-unlink Remove existing public link to file/folder with link command rather than creating. --tpslimit float Limit HTTP transactions per second to this.
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
--union-remotes string List of space separated remotes.
-u, --update Skip files that are newer on the destination.
--use-cookies Enable session cookiejar.
--use-mmap Use mmap allocator (see docs).
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
--yandex-unlink Remove existing public link to file/folder with link command rather than creating.
``` ```
### SEE ALSO ### SEE ALSO
* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends. * [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
###### Auto generated by spf13/cobra on 24-Nov-2018 ###### Auto generated by spf13/cobra on 9-Feb-2019
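As a quick orientation for readers scanning this diff, here is a minimal sketch of how a few of the options added or changed in this release could be combined on the command line. The remote name `s3remote:` and the local path are hypothetical placeholders, not taken from the diff; the flags themselves (`--s3-upload-cutoff` and `--s3-bucket-acl`, new s3 backend options, and `--use-mmap`, a new global option) are the ones shown above.

```shell
# Hypothetical example - adjust remote and paths to your own config.
# Files larger than --s3-upload-cutoff are uploaded in chunks; any
# bucket created during the copy gets the given canned ACL.
rclone copy /local/data s3remote:bucket/path \
    --s3-upload-cutoff 200M \
    --s3-bucket-acl private \
    --use-mmap \
    --transfers 4
```

Nothing here is prescriptive; it only illustrates where the new flags sit relative to the existing `--transfers`-style global flags documented in the listing above.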
@@ -1,5 +1,5 @@
---
date: 2019-02-09T10:42:18Z
title: "rclone purge"
slug: rclone_purge
url: /commands/rclone_purge/
@@ -29,285 +29,303 @@ rclone purge remote:path [flags]
### Options inherited from parent commands
```
      --acd-auth-url string                      Auth server URL.
      --acd-client-id string                     Amazon Application Client ID.
      --acd-client-secret string                 Amazon Application Client Secret.
      --acd-templink-threshold SizeSuffix        Files >= this size will be downloaded via their tempLink. (default 9G)
      --acd-token-url string                     Token server url.
      --acd-upload-wait-per-gb Duration          Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
      --alias-remote string                      Remote or path to alias.
      --ask-password                             Allow prompt for password for encrypted configuration. (default true)
      --auto-confirm                             If enabled, do not request console confirmation.
      --azureblob-access-tier string             Access tier of blob: hot, cool or archive.
      --azureblob-account string                 Storage Account Name (leave blank to use connection string or SAS URL)
      --azureblob-chunk-size SizeSuffix          Upload chunk size (<= 100MB). (default 4M)
      --azureblob-endpoint string                Endpoint for the service
      --azureblob-key string                     Storage Account Key (leave blank to use connection string or SAS URL)
      --azureblob-list-chunk int                 Size of blob list. (default 5000)
      --azureblob-sas-url string                 SAS URL for container level access only
      --azureblob-upload-cutoff SizeSuffix       Cutoff for switching to chunked upload (<= 256MB). (default 256M)
      --b2-account string                        Account ID or Application Key ID
      --b2-chunk-size SizeSuffix                 Upload chunk size. Must fit in memory. (default 96M)
      --b2-disable-checksum                      Disable checksums for large (> upload cutoff) files
      --b2-endpoint string                       Endpoint for the service.
      --b2-hard-delete                           Permanently delete files on remote removal, otherwise hide files.
      --b2-key string                            Application Key
      --b2-test-mode string                      A flag string for X-Bz-Test-Mode header for debugging.
      --b2-upload-cutoff SizeSuffix              Cutoff for switching to chunked upload. (default 200M)
      --b2-versions                              Include old versions in directory listings.
      --backup-dir string                        Make backups into hierarchy based in DIR.
      --bind string                              Local address to bind to for outgoing connections, IPv4, IPv6 or name.
      --box-client-id string                     Box App Client Id.
      --box-client-secret string                 Box App Client Secret
      --box-commit-retries int                   Max number of times to try committing a multipart file. (default 100)
      --box-upload-cutoff SizeSuffix             Cutoff for switching to multipart upload (>= 50MB). (default 50M)
      --buffer-size SizeSuffix                   In memory buffer size when reading files for each --transfer. (default 16M)
      --bwlimit BwTimetable                      Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
      --cache-chunk-clean-interval Duration      How often should the cache perform cleanups of the chunk storage. (default 1m0s)
      --cache-chunk-no-memory                    Disable the in-memory cache for storing chunks during streaming.
      --cache-chunk-path string                  Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend")
      --cache-chunk-size SizeSuffix              The size of a chunk (partial file data). (default 5M)
      --cache-chunk-total-size SizeSuffix        The total size that the chunks can take up on the local disk. (default 10G)
      --cache-db-path string                     Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend")
      --cache-db-purge                           Clear all the cached data for this remote on start.
      --cache-db-wait-time Duration              How long to wait for the DB to be available - 0 is unlimited (default 1s)
      --cache-dir string                         Directory rclone will use for caching. (default "$HOME/.cache/rclone")
      --cache-info-age Duration                  How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
      --cache-plex-insecure string               Skip all certificate verifications when connecting to the Plex server
      --cache-plex-password string               The password of the Plex user
      --cache-plex-url string                    The URL of the Plex server
      --cache-plex-username string               The username of the Plex user
      --cache-read-retries int                   How many times to retry a read from a cache storage. (default 10)
      --cache-remote string                      Remote to cache.
      --cache-rps int                            Limits the number of requests per second to the source FS (-1 to disable) (default -1)
      --cache-tmp-upload-path string             Directory to keep temporary files until they are uploaded.
      --cache-tmp-wait-time Duration             How long should files be stored in local cache before being uploaded (default 15s)
      --cache-workers int                        How many workers should run in parallel to download chunks. (default 4)
      --cache-writes                             Cache file data on writes through the FS
      --checkers int                             Number of checkers to run in parallel. (default 8)
  -c, --checksum                                 Skip based on checksum (if available) & size, not mod-time & size
      --config string                            Config file. (default "/home/ncw/.rclone.conf")
      --contimeout duration                      Connect timeout (default 1m0s)
  -L, --copy-links                               Follow symlinks and copy the pointed to item.
      --cpuprofile string                        Write cpu profile to file
      --crypt-directory-name-encryption          Option to either encrypt directory names or leave them intact. (default true)
      --crypt-filename-encryption string         How to encrypt the filenames. (default "standard")
      --crypt-password string                    Password or pass phrase for encryption.
      --crypt-password2 string                   Password or pass phrase for salt. Optional but recommended.
      --crypt-remote string                      Remote to encrypt/decrypt.
      --crypt-show-mapping                       For all files listed show how the names encrypt.
      --delete-after                             When synchronizing, delete files on destination after transferring (default)
      --delete-before                            When synchronizing, delete files on destination before transferring
      --delete-during                            When synchronizing, delete files during transfer
      --delete-excluded                          Delete files on dest excluded from sync
      --disable string                           Disable a comma separated list of features. Use help to see a list.
      --drive-acknowledge-abuse                  Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
      --drive-allow-import-name-change           Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
      --drive-alternate-export                   Use alternate export URLs for google documents export.,
      --drive-auth-owner-only                    Only consider files owned by the authenticated user.
      --drive-chunk-size SizeSuffix              Upload chunk size. Must a power of 2 >= 256k. (default 8M)
      --drive-client-id string                   Google Application Client Id
      --drive-client-secret string               Google Application Client Secret
      --drive-export-formats string              Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
      --drive-formats string                     Deprecated: see export_formats
      --drive-impersonate string                 Impersonate this user when using a service account.
      --drive-import-formats string              Comma separated list of preferred formats for uploading Google docs.
      --drive-keep-revision-forever              Keep new head revision of each file forever.
      --drive-list-chunk int                     Size of listing chunk 100-1000. 0 to disable. (default 1000)
      --drive-pacer-burst int                    Number of API calls to allow without sleeping. (default 100)
      --drive-pacer-min-sleep Duration           Minimum time to sleep between API calls. (default 100ms)
      --drive-root-folder-id string              ID of the root folder
      --drive-scope string                       Scope that rclone should use when requesting access from drive.
      --drive-service-account-credentials string Service Account Credentials JSON blob
      --drive-service-account-file string        Service Account Credentials JSON file path
      --drive-shared-with-me                     Only show files that are shared with me.
      --drive-skip-gdocs                         Skip google documents in all listings.
      --drive-team-drive string                  ID of the Team Drive
      --drive-trashed-only                       Only show files that are in the trash.
      --drive-upload-cutoff SizeSuffix           Cutoff for switching to chunked upload (default 8M)
      --drive-use-created-date                   Use file created date instead of modified date.,
      --drive-use-trash                          Send files to the trash instead of deleting permanently. (default true)
      --drive-v2-download-min-size SizeSuffix    If Object's are greater, use drive v2 API to download. (default off)
      --dropbox-chunk-size SizeSuffix            Upload chunk size. (< 150M). (default 48M)
      --dropbox-client-id string                 Dropbox App Client Id
      --dropbox-client-secret string             Dropbox App Client Secret
      --dropbox-impersonate string               Impersonate this user when using a business account.
  -n, --dry-run                                  Do a trial run with no permanent changes
      --dump DumpFlags                           List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
      --dump-bodies                              Dump HTTP headers and bodies - may contain sensitive info
      --dump-headers                             Dump HTTP bodies - may contain sensitive info
      --exclude stringArray                      Exclude files matching pattern
      --exclude-from stringArray                 Read exclude patterns from file
      --exclude-if-present string                Exclude directories if filename is present
      --fast-list                                Use recursive list if available. Uses more memory but fewer transactions.
      --files-from stringArray                   Read list of source-file names from file
  -f, --filter stringArray                       Add a file-filtering rule
      --filter-from stringArray                  Read filtering patterns from a file
      --ftp-host string                          FTP host to connect to
      --ftp-pass string                          FTP password
      --ftp-port string                          FTP port, leave blank to use default (21)
      --ftp-user string                          FTP username, leave blank for current username, $USER
      --gcs-bucket-acl string                    Access Control List for new buckets.
      --gcs-client-id string                     Google Application Client Id
      --gcs-client-secret string                 Google Application Client Secret
      --gcs-location string                      Location for the newly created buckets.
      --gcs-object-acl string                    Access Control List for new objects.
      --gcs-project-number string                Project number.
      --gcs-service-account-file string          Service Account Credentials JSON file path
      --gcs-storage-class string                 The storage class to use when storing objects in Google Cloud Storage.
      --http-url string                          URL of http host to connect to
      --hubic-chunk-size SizeSuffix              Above this size files will be chunked into a _segments container. (default 5G)
      --hubic-client-id string                   Hubic Client Id
      --hubic-client-secret string               Hubic Client Secret
      --hubic-no-chunk                           Don't chunk files during streaming upload.
      --ignore-case                              Ignore case in filters (case insensitive)
      --ignore-checksum                          Skip post copy check of checksums.
      --ignore-errors                            delete even if there are I/O errors
      --ignore-existing                          Skip all files that exist on destination
      --ignore-size                              Ignore size when skipping use mod-time or checksum.
  -I, --ignore-times                             Don't skip files that match size and time - transfer all files
      --immutable                                Do not modify files. Fail if existing files have been modified.
      --include stringArray                      Include files matching pattern
      --include-from stringArray                 Read include patterns from file
      --jottacloud-hard-delete                   Delete files permanently rather than putting them into the trash.
      --jottacloud-md5-memory-limit SizeSuffix   Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
      --jottacloud-mountpoint string             The mountpoint to use.
      --jottacloud-unlink                        Remove existing public link to file/folder with link command rather than creating.
      --jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fail's. (default 10M)
--log-file string Log everything to this file --jottacloud-user string User Name:
--log-format string Comma separated list of log format options (default "date,time") -l, --links Translate symlinks to/from regular files with a '.rclonelink' extension
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") --local-no-check-updated Don't check to see if the files change during upload
--low-level-retries int Number of low level retries to do. (default 10) --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) --local-nounc string Disable UNC (long path names) conversion on Windows
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000) --log-file string Log everything to this file
--max-delete int When synchronizing, limit the number of deletes (default -1) --log-format string Comma separated list of log format options (default "date,time")
--max-depth int If set limits the recursion depth to this. (default -1) --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) --low-level-retries int Number of low level retries to do. (default 10)
--max-transfer int Maximum size of data to transfer. (default off) --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--mega-debug Output more debug from Mega. --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--mega-hard-delete Delete files permanently rather than putting them into the trash. --max-delete int When synchronizing, limit the number of deletes (default -1)
--mega-pass string Password. --max-depth int If set limits the recursion depth to this. (default -1)
--mega-user string User name --max-size SizeSuffix Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--memprofile string Write memory profile to file --max-transfer SizeSuffix Maximum size of data to transfer. (default off)
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) --mega-debug Output more debug from Mega.
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) --mega-hard-delete Delete files permanently rather than putting them into the trash.
--modify-window duration Max time diff to be considered the same (default 1ns) --mega-pass string Password.
--no-check-certificate Do not verify the server SSL certificate. Insecure. --mega-user string User name
--no-gzip-encoding Don't set Accept-Encoding: gzip. --memprofile string Write memory profile to file
--no-traverse Obsolete - does nothing. --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--no-update-modtime Don't update destination mod-time if files identical. --min-size SizeSuffix Only transfer files bigger than this in k or suffix b|k|M|G (default off)
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only). --modify-window duration Max time diff to be considered the same (default 1ns)
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M) --no-check-certificate Do not verify the server SSL certificate. Insecure.
--onedrive-client-id string Microsoft App Client Id --no-gzip-encoding Don't set Accept-Encoding: gzip.
--onedrive-client-secret string Microsoft App Client Secret --no-traverse Don't traverse destination file system on copy.
--onedrive-drive-id string The ID of the drive to use --no-update-modtime Don't update destination mod-time if files identical.
--onedrive-drive-type string The type of the drive ( personal | business | documentLibrary ) -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-expose-onenote-files Set to make OneNote files show up in directory listings. --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--opendrive-password string Password. --onedrive-client-id string Microsoft App Client Id
--opendrive-username string Username --onedrive-client-secret string Microsoft App Client Secret
--pcloud-client-id string Pcloud App Client Id --onedrive-drive-id string The ID of the drive to use
--pcloud-client-secret string Pcloud App Client Secret --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
-P, --progress Show progress during transfer. --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
--qingstor-access-key-id string QingStor Access Key ID --opendrive-password string Password.
--qingstor-connection-retries int Number of connection retries. (default 3) --opendrive-username string Username
--qingstor-endpoint string Enter a endpoint URL to connection QingStor API. --pcloud-client-id string Pcloud App Client Id
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank. --pcloud-client-secret string Pcloud App Client Secret
--qingstor-secret-access-key string QingStor Secret Access Key (password) -P, --progress Show progress during transfer.
--qingstor-zone string Zone to connect to. --qingstor-access-key-id string QingStor Access Key ID
-q, --quiet Print as little stuff as possible --qingstor-chunk-size SizeSuffix Chunk size to use for uploading. (default 4M)
--rc Enable the remote control server. --qingstor-connection-retries int Number of connection retries. (default 3)
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") --qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate) --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
--rc-client-ca string Client certificate authority to verify clients with --qingstor-secret-access-key string QingStor Secret Access Key (password)
--rc-files string Path to local files to serve on the HTTP server. --qingstor-upload-concurrency int Concurrency for multipart uploads. (default 1)
--rc-htpasswd string htpasswd file - if not provided no authentication is done --qingstor-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M)
--rc-key string SSL PEM Private key --qingstor-zone string Zone to connect to.
--rc-max-header-bytes int Maximum size of request header (default 4096) -q, --quiet Print as little stuff as possible
--rc-no-auth Don't require auth for certain methods. --rc Enable the remote control server.
--rc-pass string Password for authentication. --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
--rc-realm string realm for authentication (default "rclone") --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--rc-serve Enable the serving of remote objects. --rc-client-ca string Client certificate authority to verify clients with
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) --rc-files string Path to local files to serve on the HTTP server.
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) --rc-htpasswd string htpasswd file - if not provided no authentication is done
--rc-user string User name for authentication. --rc-key string SSL PEM Private key
--retries int Retry operations this many times if they fail (default 3) --rc-max-header-bytes int Maximum size of request header (default 4096)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) --rc-no-auth Don't require auth for certain methods.
--s3-access-key-id string AWS Access Key ID. --rc-pass string Password for authentication.
--s3-acl string Canned ACL used when creating buckets and storing or copying objects. --rc-realm string realm for authentication (default "rclone")
--s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M) --rc-serve Enable the serving of remote objects.
--s3-disable-checksum Don't store MD5 checksum with object metadata --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--s3-endpoint string Endpoint for S3 API. --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). --rc-user string User name for authentication.
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true) --retries int Retry operations this many times if they fail (default 3)
--s3-location-constraint string Location constraint - must be set to match the Region. --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--s3-provider string Choose your S3 provider. --s3-access-key-id string AWS Access Key ID.
--s3-region string Region to connect to. --s3-acl string Canned ACL used when creating buckets and storing or copying objects.
--s3-secret-access-key string AWS Secret Access Key (password) --s3-bucket-acl string Canned ACL used when creating buckets.
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
--s3-session-token string An AWS session token --s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. --s3-endpoint string Endpoint for S3 API.
--s3-storage-class string The storage class to use when storing new objects in S3. --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2) --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--s3-v2-auth If true use v2 authentication. --s3-location-constraint string Location constraint - must be set to match the Region.
--sftp-ask-password Allow asking for SFTP password when needed. --s3-provider string Choose your S3 provider.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. --s3-region string Region to connect to.
--sftp-host string SSH host to connect to --s3-secret-access-key string AWS Secret Access Key (password)
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent. --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--sftp-pass string SSH password, leave blank to use ssh-agent. --s3-session-token string An AWS session token
--sftp-path-override string Override path used by SSH connection. --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--sftp-port string SSH port, leave blank to use default (22) --s3-storage-class string The storage class to use when storing new objects in S3.
--sftp-set-modtime Set the modified time on the remote if set. (default true) --s3-upload-concurrency int Concurrency for multipart uploads. (default 4)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker. --s3-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M)
--sftp-user string SSH username, leave blank for current username, ncw --s3-v2-auth If true use v2 authentication.
--size-only Skip based on size only, not mod-time or checksum --sftp-ask-password Allow asking for SFTP password when needed.
--skip-links Don't warn about skipped symlinks. --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) --sftp-host string SSH host to connect to
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40) --sftp-key-file string Path to PEM-encoded private key file, leave blank or set key-use-agent to use ssh-agent.
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") --sftp-key-file-pass string The passphrase to decrypt the PEM-encoded private key file.
--stats-one-line Make the stats fit on one line. --sftp-key-use-agent When set forces the usage of the ssh-agent.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") --sftp-pass string SSH password, leave blank to use ssh-agent.
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) --sftp-path-override string Override path used by SSH connection.
--suffix string Suffix for use with --backup-dir. --sftp-port string SSH port, leave blank to use default (22)
--swift-auth string Authentication URL for server (OS_AUTH_URL). --sftp-set-modtime Set the modified time on the remote if set. (default true)
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) --sftp-user string SSH username, leave blank for current username, ncw
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) --size-only Skip based on size only, not mod-time or checksum
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) --skip-links Don't warn about skipped symlinks.
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form. --stats-file-name-length int Max file name length in stats. 0 for no limit (default 45)
--swift-key string API key or password (OS_PASSWORD). --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--swift-region string Region name - optional (OS_REGION_NAME) --stats-one-line Make the stats fit on one line.
--swift-storage-policy string The storage policy to use when creating a new container --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL) --streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) --suffix string Suffix for use with --backup-dir.
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) --swift-application-credential-id string Application Credential ID (OS_APPLICATION_CREDENTIAL_ID)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) --swift-application-credential-name string Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME)
--swift-user string User name to log in (OS_USERNAME). --swift-application-credential-secret string Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET)
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). --swift-auth string Authentication URL for server (OS_AUTH_URL).
--syslog Use Syslog for logging --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--timeout duration IO idle timeout (default 5m0s) --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--tpslimit float Limit HTTP transactions per second to this. --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--track-renames When synchronizing, track file renames and do a server side move if possible --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--transfers int Number of file transfers to run in parallel. (default 4) --swift-key string API key or password (OS_PASSWORD).
--union-remotes string List of space separated remotes. --swift-no-chunk Don't chunk files during streaming upload.
-u, --update Skip files that are newer on the destination. --swift-region string Region name - optional (OS_REGION_NAME)
--use-server-modtime Use server modified time instead of object metadata --swift-storage-policy string The storage policy to use when creating a new container
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.45") --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
-v, --verbose count Print lots more stuff (repeat for more) --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon) --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--webdav-pass string Password. --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--webdav-url string URL of http host to connect to --swift-user string User name to log in (OS_USERNAME).
--webdav-user string User name --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--webdav-vendor string Name of the Webdav site/service/software you are using --syslog Use Syslog for logging
--yandex-client-id string Yandex Client Id --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--yandex-client-secret string Yandex Client Secret --timeout duration IO idle timeout (default 5m0s)
--yandex-unlink Remove existing public link to file/folder with link command rather than creating. --tpslimit float Limit HTTP transactions per second to this.
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
--union-remotes string List of space separated remotes.
-u, --update Skip files that are newer on the destination.
--use-cookies Enable session cookiejar.
--use-mmap Use mmap allocator (see docs).
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
--yandex-unlink Remove existing public link to file/folder with link command rather than creating.
``` ```
### SEE ALSO
* [rclone](/commands/rclone/)	 - Show help for rclone commands, flags and backends.
###### Auto generated by spf13/cobra on 9-Feb-2019
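Many of the flags above take a `SizeSuffix` value (e.g. `--min-size`, `--max-size`, `--s3-chunk-size`), where the `k|M|G` suffixes are binary multiples (1k = 1024 bytes) and, per the help text "in k or suffix b|k|M|G", a bare number is read as kBytes. As an illustrative sketch only (the function name `parse_size_suffix` is hypothetical; rclone's real parser is written in Go and also accepts forms not covered here, such as `off`):

```python
def parse_size_suffix(value: str) -> int:
    """Parse an rclone-style SizeSuffix string (e.g. "100k", "5M") into bytes.

    Hypothetical sketch: suffixes are binary multiples (1k = 1024 bytes),
    and a bare number is treated as kBytes, matching the flag help text.
    """
    multipliers = {"b": 1, "k": 1024, "M": 1024 ** 2, "G": 1024 ** 3}
    suffix = value[-1]
    if suffix in multipliers:
        return int(float(value[:-1]) * multipliers[suffix])
    # No recognised suffix: the help text says the value is "in k".
    return int(float(value)) * 1024

# "5M" for --s3-chunk-size means 5 * 1024 * 1024 bytes
print(parse_size_suffix("5M"))    # 5242880
print(parse_size_suffix("100k"))  # 102400
```

So `--min-size 100k` skips files smaller than 102,400 bytes, not 100,000.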
@@ -1,5 +1,5 @@
---
date: 2019-02-09T10:42:18Z
title: "rclone rc"
slug: rclone_rc
url: /commands/rclone_rc/
@@ -50,285 +50,303 @@ rclone rc commands parameter [flags]
### Options inherited from parent commands
```
      --acd-auth-url string  Auth server URL.
      --acd-client-id string  Amazon Application Client ID.
      --acd-client-secret string  Amazon Application Client Secret.
      --acd-templink-threshold SizeSuffix  Files >= this size will be downloaded via their tempLink. (default 9G)
      --acd-token-url string  Token server url.
      --acd-upload-wait-per-gb Duration  Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
      --alias-remote string  Remote or path to alias.
      --ask-password  Allow prompt for password for encrypted configuration. (default true)
      --auto-confirm  If enabled, do not request console confirmation.
      --azureblob-access-tier string  Access tier of blob: hot, cool or archive.
      --azureblob-account string  Storage Account Name (leave blank to use connection string or SAS URL)
      --azureblob-chunk-size SizeSuffix  Upload chunk size (<= 100MB). (default 4M)
      --azureblob-endpoint string  Endpoint for the service
      --azureblob-key string  Storage Account Key (leave blank to use connection string or SAS URL)
      --azureblob-list-chunk int  Size of blob list. (default 5000)
      --azureblob-sas-url string  SAS URL for container level access only
      --azureblob-upload-cutoff SizeSuffix  Cutoff for switching to chunked upload (<= 256MB). (default 256M)
      --b2-account string  Account ID or Application Key ID
      --b2-chunk-size SizeSuffix  Upload chunk size. Must fit in memory. (default 96M)
      --b2-disable-checksum  Disable checksums for large (> upload cutoff) files
      --b2-endpoint string  Endpoint for the service.
      --b2-hard-delete  Permanently delete files on remote removal, otherwise hide files.
      --b2-key string  Application Key
      --b2-test-mode string  A flag string for X-Bz-Test-Mode header for debugging.
      --b2-upload-cutoff SizeSuffix  Cutoff for switching to chunked upload. (default 200M)
      --b2-versions  Include old versions in directory listings.
      --backup-dir string  Make backups into hierarchy based in DIR.
      --bind string  Local address to bind to for outgoing connections, IPv4, IPv6 or name.
      --box-client-id string  Box App Client Id.
      --box-client-secret string  Box App Client Secret
      --box-commit-retries int  Max number of times to try committing a multipart file. (default 100)
      --box-upload-cutoff SizeSuffix  Cutoff for switching to multipart upload (>= 50MB). (default 50M)
      --buffer-size SizeSuffix  In memory buffer size when reading files for each --transfer. (default 16M)
      --bwlimit BwTimetable  Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
      --cache-chunk-clean-interval Duration  How often should the cache perform cleanups of the chunk storage. (default 1m0s)
      --cache-chunk-no-memory  Disable the in-memory cache for storing chunks during streaming.
      --cache-chunk-path string  Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend")
      --cache-chunk-size SizeSuffix  The size of a chunk (partial file data). (default 5M)
      --cache-chunk-total-size SizeSuffix  The total size that the chunks can take up on the local disk. (default 10G)
      --cache-db-path string  Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend")
      --cache-db-purge  Clear all the cached data for this remote on start.
      --cache-db-wait-time Duration  How long to wait for the DB to be available - 0 is unlimited (default 1s)
      --cache-dir string  Directory rclone will use for caching. (default "$HOME/.cache/rclone")
      --cache-info-age Duration  How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
      --cache-plex-insecure string  Skip all certificate verifications when connecting to the Plex server
      --cache-plex-password string  The password of the Plex user
      --cache-plex-url string  The URL of the Plex server
      --cache-plex-username string  The username of the Plex user
      --cache-read-retries int  How many times to retry a read from a cache storage. (default 10)
      --cache-remote string  Remote to cache.
      --cache-rps int  Limits the number of requests per second to the source FS (-1 to disable) (default -1)
      --cache-tmp-upload-path string  Directory to keep temporary files until they are uploaded.
      --cache-tmp-wait-time Duration  How long should files be stored in local cache before being uploaded (default 15s)
      --cache-workers int  How many workers should run in parallel to download chunks. (default 4)
      --cache-writes  Cache file data on writes through the FS
      --checkers int  Number of checkers to run in parallel. (default 8)
  -c, --checksum  Skip based on checksum (if available) & size, not mod-time & size
      --config string  Config file. (default "/home/ncw/.rclone.conf")
      --contimeout duration  Connect timeout (default 1m0s)
  -L, --copy-links  Follow symlinks and copy the pointed to item.
      --cpuprofile string  Write cpu profile to file
      --crypt-directory-name-encryption  Option to either encrypt directory names or leave them intact. (default true)
      --crypt-filename-encryption string  How to encrypt the filenames. (default "standard")
      --crypt-password string  Password or pass phrase for encryption.
      --crypt-password2 string  Password or pass phrase for salt. Optional but recommended.
      --crypt-remote string  Remote to encrypt/decrypt.
      --crypt-show-mapping  For all files listed show how the names encrypt.
      --delete-after  When synchronizing, delete files on destination after transferring (default)
      --delete-before  When synchronizing, delete files on destination before transferring
      --delete-during  When synchronizing, delete files during transfer
      --delete-excluded  Delete files on dest excluded from sync
      --disable string  Disable a comma separated list of features. Use help to see a list.
--drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time. --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--drive-alternate-export Use alternate export URLs for google documents export., --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
--drive-auth-owner-only Only consider files owned by the authenticated user. --drive-alternate-export Use alternate export URLs for google documents export.,
--drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M) --drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-client-id string Google Application Client Id --drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M)
--drive-client-secret string Google Application Client Secret --drive-client-id string Google Application Client Id
--drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") --drive-client-secret string Google Application Client Secret
--drive-formats string Deprecated: see export_formats --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-impersonate string Impersonate this user when using a service account. --drive-formats string Deprecated: see export_formats
--drive-import-formats string Comma separated list of preferred formats for uploading Google docs. --drive-impersonate string Impersonate this user when using a service account.
--drive-keep-revision-forever Keep new head revision of each file forever. --drive-import-formats string Comma separated list of preferred formats for uploading Google docs.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000) --drive-keep-revision-forever Keep new head revision of each file forever.
--drive-root-folder-id string ID of the root folder --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-scope string Scope that rclone should use when requesting access from drive. --drive-pacer-burst int Number of API calls to allow without sleeping. (default 100)
--drive-service-account-credentials string Service Account Credentials JSON blob --drive-pacer-min-sleep Duration Minimum time to sleep between API calls. (default 100ms)
--drive-service-account-file string Service Account Credentials JSON file path --drive-root-folder-id string ID of the root folder
--drive-shared-with-me Only show files that are shared with me. --drive-scope string Scope that rclone should use when requesting access from drive.
--drive-skip-gdocs Skip google documents in all listings. --drive-service-account-credentials string Service Account Credentials JSON blob
--drive-team-drive string ID of the Team Drive --drive-service-account-file string Service Account Credentials JSON file path
--drive-trashed-only Only show files that are in the trash. --drive-shared-with-me Only show files that are shared with me.
--drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M) --drive-skip-gdocs Skip google documents in all listings.
--drive-use-created-date Use file created date instead of modified date., --drive-team-drive string ID of the Team Drive
--drive-use-trash Send files to the trash instead of deleting permanently. (default true) --drive-trashed-only Only show files that are in the trash.
--drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off) --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M) --drive-use-created-date Use file created date instead of modified date.,
--dropbox-client-id string Dropbox App Client Id --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--dropbox-client-secret string Dropbox App Client Secret --drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off)
--dropbox-impersonate string Impersonate this user when using a business account. --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M)
-n, --dry-run Do a trial run with no permanent changes --dropbox-client-id string Dropbox App Client Id
--dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles --dropbox-client-secret string Dropbox App Client Secret
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info --dropbox-impersonate string Impersonate this user when using a business account.
--dump-headers Dump HTTP bodies - may contain sensitive info -n, --dry-run Do a trial run with no permanent changes
--exclude stringArray Exclude files matching pattern --dump DumpFlags List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--exclude-from stringArray Read exclude patterns from file --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--exclude-if-present string Exclude directories if filename is present --dump-headers Dump HTTP bodies - may contain sensitive info
--fast-list Use recursive list if available. Uses more memory but fewer transactions. --exclude stringArray Exclude files matching pattern
--files-from stringArray Read list of source-file names from file --exclude-from stringArray Read exclude patterns from file
-f, --filter stringArray Add a file-filtering rule --exclude-if-present string Exclude directories if filename is present
--filter-from stringArray Read filtering patterns from a file --fast-list Use recursive list if available. Uses more memory but fewer transactions.
--ftp-host string FTP host to connect to --files-from stringArray Read list of source-file names from file
--ftp-pass string FTP password -f, --filter stringArray Add a file-filtering rule
--ftp-port string FTP port, leave blank to use default (21) --filter-from stringArray Read filtering patterns from a file
--ftp-user string FTP username, leave blank for current username, $USER --ftp-host string FTP host to connect to
--gcs-bucket-acl string Access Control List for new buckets. --ftp-pass string FTP password
--gcs-client-id string Google Application Client Id --ftp-port string FTP port, leave blank to use default (21)
--gcs-client-secret string Google Application Client Secret --ftp-user string FTP username, leave blank for current username, $USER
--gcs-location string Location for the newly created buckets. --gcs-bucket-acl string Access Control List for new buckets.
--gcs-object-acl string Access Control List for new objects. --gcs-client-id string Google Application Client Id
--gcs-project-number string Project number. --gcs-client-secret string Google Application Client Secret
--gcs-service-account-file string Service Account Credentials JSON file path --gcs-location string Location for the newly created buckets.
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage. --gcs-object-acl string Access Control List for new objects.
--http-url string URL of http host to connect to --gcs-project-number string Project number.
--hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) --gcs-service-account-file string Service Account Credentials JSON file path
--hubic-client-id string Hubic Client Id --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--hubic-client-secret string Hubic Client Secret --http-url string URL of http host to connect to
--ignore-case Ignore case in filters (case insensitive) --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--ignore-checksum Skip post copy check of checksums. --hubic-client-id string Hubic Client Id
--ignore-errors delete even if there are I/O errors --hubic-client-secret string Hubic Client Secret
--ignore-existing Skip all files that exist on destination --hubic-no-chunk Don't chunk files during streaming upload.
--ignore-size Ignore size when skipping use mod-time or checksum. --ignore-case Ignore case in filters (case insensitive)
-I, --ignore-times Don't skip files that match size and time - transfer all files --ignore-checksum Skip post copy check of checksums.
--immutable Do not modify files. Fail if existing files have been modified. --ignore-errors delete even if there are I/O errors
--include stringArray Include files matching pattern --ignore-existing Skip all files that exist on destination
--include-from stringArray Read include patterns from file --ignore-size Ignore size when skipping use mod-time or checksum.
--jottacloud-hard-delete Delete files permanently rather than putting them into the trash. -I, --ignore-times Don't skip files that match size and time - transfer all files
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M) --immutable Do not modify files. Fail if existing files have been modified.
--jottacloud-mountpoint string The mountpoint to use. --include stringArray Include files matching pattern
--jottacloud-pass string Password. --include-from stringArray Read include patterns from file
--jottacloud-unlink Remove existing public link to file/folder with link command rather than creating. --jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
--jottacloud-user string User Name --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--local-no-check-updated Don't check to see if the files change during upload --jottacloud-mountpoint string The mountpoint to use.
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated) --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
--local-nounc string Disable UNC (long path names) conversion on Windows --jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fail's. (default 10M)
--log-file string Log everything to this file --jottacloud-user string User Name:
--log-format string Comma separated list of log format options (default "date,time") -l, --links Translate symlinks to/from regular files with a '.rclonelink' extension
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") --local-no-check-updated Don't check to see if the files change during upload
--low-level-retries int Number of low level retries to do. (default 10) --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) --local-nounc string Disable UNC (long path names) conversion on Windows
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000) --log-file string Log everything to this file
--max-delete int When synchronizing, limit the number of deletes (default -1) --log-format string Comma separated list of log format options (default "date,time")
--max-depth int If set limits the recursion depth to this. (default -1) --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) --low-level-retries int Number of low level retries to do. (default 10)
--max-transfer int Maximum size of data to transfer. (default off) --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--mega-debug Output more debug from Mega. --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--mega-hard-delete Delete files permanently rather than putting them into the trash. --max-delete int When synchronizing, limit the number of deletes (default -1)
--mega-pass string Password. --max-depth int If set limits the recursion depth to this. (default -1)
--mega-user string User name --max-size SizeSuffix Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--memprofile string Write memory profile to file --max-transfer SizeSuffix Maximum size of data to transfer. (default off)
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) --mega-debug Output more debug from Mega.
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) --mega-hard-delete Delete files permanently rather than putting them into the trash.
--modify-window duration Max time diff to be considered the same (default 1ns) --mega-pass string Password.
--no-check-certificate Do not verify the server SSL certificate. Insecure. --mega-user string User name
--no-gzip-encoding Don't set Accept-Encoding: gzip. --memprofile string Write memory profile to file
--no-traverse Obsolete - does nothing. --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--no-update-modtime Don't update destination mod-time if files identical. --min-size SizeSuffix Only transfer files bigger than this in k or suffix b|k|M|G (default off)
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only). --modify-window duration Max time diff to be considered the same (default 1ns)
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M) --no-check-certificate Do not verify the server SSL certificate. Insecure.
--onedrive-client-id string Microsoft App Client Id --no-gzip-encoding Don't set Accept-Encoding: gzip.
--onedrive-client-secret string Microsoft App Client Secret --no-traverse Don't traverse destination file system on copy.
--onedrive-drive-id string The ID of the drive to use --no-update-modtime Don't update destination mod-time if files identical.
--onedrive-drive-type string The type of the drive ( personal | business | documentLibrary ) -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-expose-onenote-files Set to make OneNote files show up in directory listings. --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--opendrive-password string Password. --onedrive-client-id string Microsoft App Client Id
--opendrive-username string Username --onedrive-client-secret string Microsoft App Client Secret
--pcloud-client-id string Pcloud App Client Id --onedrive-drive-id string The ID of the drive to use
--pcloud-client-secret string Pcloud App Client Secret --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
-P, --progress Show progress during transfer. --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
--qingstor-access-key-id string QingStor Access Key ID --opendrive-password string Password.
--qingstor-connection-retries int Number of connection retries. (default 3) --opendrive-username string Username
--qingstor-endpoint string Enter a endpoint URL to connection QingStor API. --pcloud-client-id string Pcloud App Client Id
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank. --pcloud-client-secret string Pcloud App Client Secret
--qingstor-secret-access-key string QingStor Secret Access Key (password) -P, --progress Show progress during transfer.
--qingstor-zone string Zone to connect to. --qingstor-access-key-id string QingStor Access Key ID
-q, --quiet Print as little stuff as possible --qingstor-chunk-size SizeSuffix Chunk size to use for uploading. (default 4M)
--rc Enable the remote control server. --qingstor-connection-retries int Number of connection retries. (default 3)
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") --qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate) --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
--rc-client-ca string Client certificate authority to verify clients with --qingstor-secret-access-key string QingStor Secret Access Key (password)
--rc-files string Path to local files to serve on the HTTP server. --qingstor-upload-concurrency int Concurrency for multipart uploads. (default 1)
--rc-htpasswd string htpasswd file - if not provided no authentication is done --qingstor-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M)
--rc-key string SSL PEM Private key --qingstor-zone string Zone to connect to.
--rc-max-header-bytes int Maximum size of request header (default 4096) -q, --quiet Print as little stuff as possible
--rc-no-auth Don't require auth for certain methods. --rc Enable the remote control server.
--rc-pass string Password for authentication. --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
--rc-realm string realm for authentication (default "rclone") --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--rc-serve Enable the serving of remote objects. --rc-client-ca string Client certificate authority to verify clients with
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) --rc-files string Path to local files to serve on the HTTP server.
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) --rc-htpasswd string htpasswd file - if not provided no authentication is done
--rc-user string User name for authentication. --rc-key string SSL PEM Private key
--retries int Retry operations this many times if they fail (default 3) --rc-max-header-bytes int Maximum size of request header (default 4096)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) --rc-no-auth Don't require auth for certain methods.
--s3-access-key-id string AWS Access Key ID. --rc-pass string Password for authentication.
--s3-acl string Canned ACL used when creating buckets and storing or copying objects. --rc-realm string realm for authentication (default "rclone")
--s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M) --rc-serve Enable the serving of remote objects.
--s3-disable-checksum Don't store MD5 checksum with object metadata --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--s3-endpoint string Endpoint for S3 API. --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). --rc-user string User name for authentication.
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true) --retries int Retry operations this many times if they fail (default 3)
--s3-location-constraint string Location constraint - must be set to match the Region. --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--s3-provider string Choose your S3 provider. --s3-access-key-id string AWS Access Key ID.
--s3-region string Region to connect to. --s3-acl string Canned ACL used when creating buckets and storing or copying objects.
--s3-secret-access-key string AWS Secret Access Key (password) --s3-bucket-acl string Canned ACL used when creating buckets.
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
--s3-session-token string An AWS session token --s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. --s3-endpoint string Endpoint for S3 API.
--s3-storage-class string The storage class to use when storing new objects in S3. --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2) --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--s3-v2-auth If true use v2 authentication. --s3-location-constraint string Location constraint - must be set to match the Region.
--sftp-ask-password Allow asking for SFTP password when needed. --s3-provider string Choose your S3 provider.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. --s3-region string Region to connect to.
--sftp-host string SSH host to connect to --s3-secret-access-key string AWS Secret Access Key (password)
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent. --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--sftp-pass string SSH password, leave blank to use ssh-agent. --s3-session-token string An AWS session token
--sftp-path-override string Override path used by SSH connection. --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--sftp-port string SSH port, leave blank to use default (22) --s3-storage-class string The storage class to use when storing new objects in S3.
--sftp-set-modtime Set the modified time on the remote if set. (default true) --s3-upload-concurrency int Concurrency for multipart uploads. (default 4)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker. --s3-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M)
--sftp-user string SSH username, leave blank for current username, ncw --s3-v2-auth If true use v2 authentication.
--size-only Skip based on size only, not mod-time or checksum --sftp-ask-password Allow asking for SFTP password when needed.
--skip-links Don't warn about skipped symlinks. --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) --sftp-host string SSH host to connect to
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40) --sftp-key-file string Path to PEM-encoded private key file, leave blank or set key-use-agent to use ssh-agent.
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") --sftp-key-file-pass string The passphrase to decrypt the PEM-encoded private key file.
--stats-one-line Make the stats fit on one line. --sftp-key-use-agent When set forces the usage of the ssh-agent.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") --sftp-pass string SSH password, leave blank to use ssh-agent.
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) --sftp-path-override string Override path used by SSH connection.
--suffix string Suffix for use with --backup-dir. --sftp-port string SSH port, leave blank to use default (22)
--swift-auth string Authentication URL for server (OS_AUTH_URL). --sftp-set-modtime Set the modified time on the remote if set. (default true)
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) --sftp-user string SSH username, leave blank for current username, ncw
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) --size-only Skip based on size only, not mod-time or checksum
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) --skip-links Don't warn about skipped symlinks.
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form. --stats-file-name-length int Max file name length in stats. 0 for no limit (default 45)
--swift-key string API key or password (OS_PASSWORD). --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--swift-region string Region name - optional (OS_REGION_NAME) --stats-one-line Make the stats fit on one line.
--swift-storage-policy string The storage policy to use when creating a new container --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL) --streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) --suffix string Suffix for use with --backup-dir.
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) --swift-application-credential-id string Application Credential ID (OS_APPLICATION_CREDENTIAL_ID)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) --swift-application-credential-name string Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME)
--swift-user string User name to log in (OS_USERNAME). --swift-application-credential-secret string Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET)
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). --swift-auth string Authentication URL for server (OS_AUTH_URL).
--syslog Use Syslog for logging --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--timeout duration IO idle timeout (default 5m0s) --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--tpslimit float Limit HTTP transactions per second to this. --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--track-renames When synchronizing, track file renames and do a server side move if possible --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--transfers int Number of file transfers to run in parallel. (default 4) --swift-key string API key or password (OS_PASSWORD).
--union-remotes string List of space separated remotes. --swift-no-chunk Don't chunk files during streaming upload.
      --swift-region string                  Region name - optional (OS_REGION_NAME)
      --swift-storage-policy string          The storage policy to use when creating a new container
      --swift-storage-url string             Storage URL - optional (OS_STORAGE_URL)
      --swift-tenant string                  Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
      --swift-tenant-domain string           Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
      --swift-tenant-id string               Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
      --swift-user string                    User name to log in (OS_USERNAME).
      --swift-user-id string                 User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
      --syslog                               Use Syslog for logging
      --syslog-facility string               Facility for syslog, eg KERN,USER,... (default "DAEMON")
      --timeout duration                     IO idle timeout (default 5m0s)
      --tpslimit float                       Limit HTTP transactions per second to this.
      --tpslimit-burst int                   Max burst of transactions for --tpslimit. (default 1)
      --track-renames                        When synchronizing, track file renames and do a server side move if possible
      --transfers int                        Number of file transfers to run in parallel. (default 4)
      --union-remotes string                 List of space separated remotes.
  -u, --update                               Skip files that are newer on the destination.
      --use-cookies                          Enable session cookiejar.
      --use-mmap                             Use mmap allocator (see docs).
      --use-server-modtime                   Use server modified time instead of object metadata
      --user-agent string                    Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46")
  -v, --verbose count                        Print lots more stuff (repeat for more)
      --webdav-bearer-token string           Bearer token instead of user/pass (eg a Macaroon)
      --webdav-pass string                   Password.
      --webdav-url string                    URL of http host to connect to
      --webdav-user string                   User name
      --webdav-vendor string                 Name of the Webdav site/service/software you are using
      --yandex-client-id string              Yandex Client Id
      --yandex-client-secret string          Yandex Client Secret
      --yandex-unlink                        Remove existing public link to file/folder with link command rather than creating.
```
### SEE ALSO
* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
###### Auto generated by spf13/cobra on 9-Feb-2019
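As an illustration only (this note is not part of the generated manual): the v1.46 flag list above documents, among others, the new `--use-mmap` and `--b2-disable-checksum` options. A sketch of combining them in one invocation follows; the remote name `b2:` and the paths are placeholders and assume a Backblaze B2 remote has already been configured with `rclone config`.

```sh
# Hypothetical usage sketch; "b2" is a placeholder remote name.
# --use-mmap uses the mmap allocator for transfer buffers (see docs);
# --b2-disable-checksum disables checksums for large (> upload cutoff) files.
rclone copy /path/to/local b2:bucket/path \
    --transfers 8 \
    --use-mmap \
    --b2-disable-checksum
```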

View File

@ -1,5 +1,5 @@
---
date: 2019-02-09T10:42:18Z
title: "rclone rcat"
slug: rclone_rcat
url: /commands/rclone_rcat/
@ -47,285 +47,303 @@ rclone rcat remote:path [flags]
### Options inherited from parent commands
```
      --acd-auth-url string                      Auth server URL.
      --acd-client-id string                     Amazon Application Client ID.
      --acd-client-secret string                 Amazon Application Client Secret.
      --acd-templink-threshold SizeSuffix        Files >= this size will be downloaded via their tempLink. (default 9G)
      --acd-token-url string                     Token server url.
      --acd-upload-wait-per-gb Duration          Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
      --alias-remote string                      Remote or path to alias.
      --ask-password                             Allow prompt for password for encrypted configuration. (default true)
      --auto-confirm                             If enabled, do not request console confirmation.
      --azureblob-access-tier string             Access tier of blob: hot, cool or archive.
      --azureblob-account string                 Storage Account Name (leave blank to use connection string or SAS URL)
      --azureblob-chunk-size SizeSuffix          Upload chunk size (<= 100MB). (default 4M)
      --azureblob-endpoint string                Endpoint for the service
      --azureblob-key string                     Storage Account Key (leave blank to use connection string or SAS URL)
      --azureblob-list-chunk int                 Size of blob list. (default 5000)
      --azureblob-sas-url string                 SAS URL for container level access only
      --azureblob-upload-cutoff SizeSuffix       Cutoff for switching to chunked upload (<= 256MB). (default 256M)
      --b2-account string                        Account ID or Application Key ID
      --b2-chunk-size SizeSuffix                 Upload chunk size. Must fit in memory. (default 96M)
      --b2-disable-checksum                      Disable checksums for large (> upload cutoff) files
      --b2-endpoint string                       Endpoint for the service.
      --b2-hard-delete                           Permanently delete files on remote removal, otherwise hide files.
      --b2-key string                            Application Key
      --b2-test-mode string                      A flag string for X-Bz-Test-Mode header for debugging.
      --b2-upload-cutoff SizeSuffix              Cutoff for switching to chunked upload. (default 200M)
      --b2-versions                              Include old versions in directory listings.
      --backup-dir string                        Make backups into hierarchy based in DIR.
      --bind string                              Local address to bind to for outgoing connections, IPv4, IPv6 or name.
      --box-client-id string                     Box App Client Id.
      --box-client-secret string                 Box App Client Secret
      --box-commit-retries int                   Max number of times to try committing a multipart file. (default 100)
      --box-upload-cutoff SizeSuffix             Cutoff for switching to multipart upload (>= 50MB). (default 50M)
      --buffer-size SizeSuffix                   In memory buffer size when reading files for each --transfer. (default 16M)
      --bwlimit BwTimetable                      Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
      --cache-chunk-clean-interval Duration      How often should the cache perform cleanups of the chunk storage. (default 1m0s)
      --cache-chunk-no-memory                    Disable the in-memory cache for storing chunks during streaming.
      --cache-chunk-path string                  Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend")
      --cache-chunk-size SizeSuffix              The size of a chunk (partial file data). (default 5M)
      --cache-chunk-total-size SizeSuffix        The total size that the chunks can take up on the local disk. (default 10G)
      --cache-db-path string                     Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend")
      --cache-db-purge                           Clear all the cached data for this remote on start.
      --cache-db-wait-time Duration              How long to wait for the DB to be available - 0 is unlimited (default 1s)
      --cache-dir string                         Directory rclone will use for caching. (default "$HOME/.cache/rclone")
      --cache-info-age Duration                  How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
      --cache-plex-insecure string               Skip all certificate verifications when connecting to the Plex server
      --cache-plex-password string               The password of the Plex user
      --cache-plex-url string                    The URL of the Plex server
      --cache-plex-username string               The username of the Plex user
      --cache-read-retries int                   How many times to retry a read from a cache storage. (default 10)
      --cache-remote string                      Remote to cache.
      --cache-rps int                            Limits the number of requests per second to the source FS (-1 to disable) (default -1)
      --cache-tmp-upload-path string             Directory to keep temporary files until they are uploaded.
      --cache-tmp-wait-time Duration             How long should files be stored in local cache before being uploaded (default 15s)
      --cache-workers int                        How many workers should run in parallel to download chunks. (default 4)
      --cache-writes                             Cache file data on writes through the FS
      --checkers int                             Number of checkers to run in parallel. (default 8)
  -c, --checksum                                 Skip based on checksum (if available) & size, not mod-time & size
      --config string                            Config file. (default "/home/ncw/.rclone.conf")
      --contimeout duration                      Connect timeout (default 1m0s)
  -L, --copy-links                               Follow symlinks and copy the pointed to item.
      --cpuprofile string                        Write cpu profile to file
      --crypt-directory-name-encryption          Option to either encrypt directory names or leave them intact. (default true)
      --crypt-filename-encryption string         How to encrypt the filenames. (default "standard")
      --crypt-password string                    Password or pass phrase for encryption.
      --crypt-password2 string                   Password or pass phrase for salt. Optional but recommended.
      --crypt-remote string                      Remote to encrypt/decrypt.
      --crypt-show-mapping                       For all files listed show how the names encrypt.
      --delete-after                             When synchronizing, delete files on destination after transferring (default)
      --delete-before                            When synchronizing, delete files on destination before transferring
      --delete-during                            When synchronizing, delete files during transfer
      --delete-excluded                          Delete files on dest excluded from sync
      --disable string                           Disable a comma separated list of features. Use help to see a list.
      --drive-acknowledge-abuse                  Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
      --drive-allow-import-name-change           Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
      --drive-alternate-export                   Use alternate export URLs for google documents export.,
      --drive-auth-owner-only                    Only consider files owned by the authenticated user.
      --drive-chunk-size SizeSuffix              Upload chunk size. Must a power of 2 >= 256k. (default 8M)
      --drive-client-id string                   Google Application Client Id
      --drive-client-secret string               Google Application Client Secret
      --drive-export-formats string              Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
      --drive-formats string                     Deprecated: see export_formats
      --drive-impersonate string                 Impersonate this user when using a service account.
      --drive-import-formats string              Comma separated list of preferred formats for uploading Google docs.
      --drive-keep-revision-forever              Keep new head revision of each file forever.
      --drive-list-chunk int                     Size of listing chunk 100-1000. 0 to disable. (default 1000)
      --drive-pacer-burst int                    Number of API calls to allow without sleeping. (default 100)
      --drive-pacer-min-sleep Duration           Minimum time to sleep between API calls. (default 100ms)
      --drive-root-folder-id string              ID of the root folder
      --drive-scope string                       Scope that rclone should use when requesting access from drive.
      --drive-service-account-credentials string Service Account Credentials JSON blob
      --drive-service-account-file string        Service Account Credentials JSON file path
      --drive-shared-with-me                     Only show files that are shared with me.
      --drive-skip-gdocs                         Skip google documents in all listings.
      --drive-team-drive string                  ID of the Team Drive
      --drive-trashed-only                       Only show files that are in the trash.
      --drive-upload-cutoff SizeSuffix           Cutoff for switching to chunked upload (default 8M)
      --drive-use-created-date                   Use file created date instead of modified date.,
      --drive-use-trash                          Send files to the trash instead of deleting permanently. (default true)
      --drive-v2-download-min-size SizeSuffix    If Object's are greater, use drive v2 API to download. (default off)
      --dropbox-chunk-size SizeSuffix            Upload chunk size. (< 150M). (default 48M)
      --dropbox-client-id string                 Dropbox App Client Id
      --dropbox-client-secret string             Dropbox App Client Secret
      --dropbox-impersonate string               Impersonate this user when using a business account.
  -n, --dry-run                                  Do a trial run with no permanent changes
      --dump DumpFlags                           List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
      --dump-bodies                              Dump HTTP headers and bodies - may contain sensitive info
      --dump-headers                             Dump HTTP bodies - may contain sensitive info
      --exclude stringArray                      Exclude files matching pattern
      --exclude-from stringArray                 Read exclude patterns from file
      --exclude-if-present string                Exclude directories if filename is present
      --fast-list                                Use recursive list if available. Uses more memory but fewer transactions.
      --files-from stringArray                   Read list of source-file names from file
  -f, --filter stringArray                       Add a file-filtering rule
      --filter-from stringArray                  Read filtering patterns from a file
      --ftp-host string                          FTP host to connect to
      --ftp-pass string                          FTP password
      --ftp-port string                          FTP port, leave blank to use default (21)
      --ftp-user string                          FTP username, leave blank for current username, $USER
      --gcs-bucket-acl string                    Access Control List for new buckets.
      --gcs-client-id string                     Google Application Client Id
      --gcs-client-secret string                 Google Application Client Secret
      --gcs-location string                      Location for the newly created buckets.
      --gcs-object-acl string                    Access Control List for new objects.
      --gcs-project-number string                Project number.
      --gcs-service-account-file string          Service Account Credentials JSON file path
      --gcs-storage-class string                 The storage class to use when storing objects in Google Cloud Storage.
      --http-url string                          URL of http host to connect to
      --hubic-chunk-size SizeSuffix              Above this size files will be chunked into a _segments container. (default 5G)
      --hubic-client-id string                   Hubic Client Id
      --hubic-client-secret string               Hubic Client Secret
      --hubic-no-chunk                           Don't chunk files during streaming upload.
      --ignore-case                              Ignore case in filters (case insensitive)
      --ignore-checksum                          Skip post copy check of checksums.
      --ignore-errors                            delete even if there are I/O errors
      --ignore-existing                          Skip all files that exist on destination
      --ignore-size                              Ignore size when skipping use mod-time or checksum.
  -I, --ignore-times                             Don't skip files that match size and time - transfer all files
      --immutable                                Do not modify files. Fail if existing files have been modified.
      --include stringArray                      Include files matching pattern
      --include-from stringArray                 Read include patterns from file
      --jottacloud-hard-delete                   Delete files permanently rather than putting them into the trash.
      --jottacloud-md5-memory-limit SizeSuffix   Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
      --jottacloud-mountpoint string             The mountpoint to use.
      --jottacloud-unlink                        Remove existing public link to file/folder with link command rather than creating.
      --jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fail's. (default 10M)
      --jottacloud-user string                   User Name:
  -l, --links                                    Translate symlinks to/from regular files with a '.rclonelink' extension
      --local-no-check-updated                   Don't check to see if the files change during upload
      --local-no-unicode-normalization           Don't apply unicode normalization to paths and filenames (Deprecated)
      --local-nounc string                       Disable UNC (long path names) conversion on Windows
      --log-file string                          Log everything to this file
      --log-format string                        Comma separated list of log format options (default "date,time")
      --log-level string                         Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
      --low-level-retries int                    Number of low level retries to do. (default 10)
      --max-age Duration                         Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
      --max-backlog int                          Maximum number of objects in sync or check backlog. (default 10000)
      --max-delete int                           When synchronizing, limit the number of deletes (default -1)
      --max-depth int                            If set limits the recursion depth to this. (default -1)
      --max-size SizeSuffix                      Only transfer files smaller than this in k or suffix b|k|M|G (default off)
      --max-transfer SizeSuffix                  Maximum size of data to transfer. (default off)
      --mega-debug                               Output more debug from Mega.
      --mega-hard-delete                         Delete files permanently rather than putting them into the trash.
      --mega-pass string                         Password.
      --mega-user string                         User name
      --memprofile string                        Write memory profile to file
      --min-age Duration                         Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
      --min-size SizeSuffix                      Only transfer files bigger than this in k or suffix b|k|M|G (default off)
      --modify-window duration                   Max time diff to be considered the same (default 1ns)
      --no-check-certificate                     Do not verify the server SSL certificate. Insecure.
      --no-gzip-encoding                         Don't set Accept-Encoding: gzip.
      --no-traverse                              Don't traverse destination file system on copy.
      --no-update-modtime                        Don't update destination mod-time if files identical.
  -x, --one-file-system                          Don't cross filesystem boundaries (unix/macOS only).
      --onedrive-chunk-size SizeSuffix           Chunk size to upload files with - must be multiple of 320k. (default 10M)
      --onedrive-client-id string                Microsoft App Client Id
      --onedrive-client-secret string            Microsoft App Client Secret
      --onedrive-drive-id string                 The ID of the drive to use
      --onedrive-drive-type string               The type of the drive ( personal | business | documentLibrary )
      --onedrive-expose-onenote-files            Set to make OneNote files show up in directory listings.
      --opendrive-password string                Password.
      --opendrive-username string                Username
      --pcloud-client-id string                  Pcloud App Client Id
      --pcloud-client-secret string              Pcloud App Client Secret
  -P, --progress                                 Show progress during transfer.
      --qingstor-access-key-id string            QingStor Access Key ID
      --qingstor-chunk-size SizeSuffix           Chunk size to use for uploading. (default 4M)
      --qingstor-connection-retries int          Number of connection retries. (default 3)
      --qingstor-endpoint string                 Enter a endpoint URL to connection QingStor API.
      --qingstor-env-auth                        Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
      --qingstor-secret-access-key string        QingStor Secret Access Key (password)
      --qingstor-upload-concurrency int          Concurrency for multipart uploads. (default 1)
      --qingstor-upload-cutoff SizeSuffix        Cutoff for switching to chunked upload (default 200M)
      --qingstor-zone string                     Zone to connect to.
  -q, --quiet                                    Print as little stuff as possible
      --rc                                       Enable the remote control server.
      --rc-addr string                           IPaddress:Port or :Port to bind server to. (default "localhost:5572")
      --rc-cert string                           SSL PEM key (concatenation of certificate and CA certificate)
      --rc-client-ca string                      Client certificate authority to verify clients with
      --rc-files string                          Path to local files to serve on the HTTP server.
      --rc-htpasswd string                       htpasswd file - if not provided no authentication is done
      --rc-key string                            SSL PEM Private key
      --rc-max-header-bytes int                  Maximum size of request header (default 4096)
      --rc-no-auth                               Don't require auth for certain methods.
      --rc-pass string                           Password for authentication.
      --rc-realm string                          realm for authentication (default "rclone")
      --rc-serve                                 Enable the serving of remote objects.
      --rc-server-read-timeout duration          Timeout for server reading data (default 1h0m0s)
      --rc-server-write-timeout duration         Timeout for server writing data (default 1h0m0s)
      --rc-user string                           User name for authentication.
      --retries int                              Retry operations this many times if they fail (default 3)
      --retries-sleep duration                   Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
      --s3-access-key-id string                  AWS Access Key ID.
      --s3-acl string                            Canned ACL used when creating buckets and storing or copying objects.
      --s3-bucket-acl string                     Canned ACL used when creating buckets.
      --s3-chunk-size SizeSuffix                 Chunk size to use for uploading. (default 5M)
      --s3-disable-checksum                      Don't store MD5 checksum with object metadata
      --s3-endpoint string                       Endpoint for S3 API.
      --s3-env-auth                              Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
      --s3-force-path-style                      If true use path style access if false use virtual hosted style. (default true)
      --s3-location-constraint string            Location constraint - must be set to match the Region.
      --s3-provider string                       Choose your S3 provider.
      --s3-region string                         Region to connect to.
      --s3-secret-access-key string              AWS Secret Access Key (password)
      --s3-server-side-encryption string         The server-side encryption algorithm used when storing this object in S3.
      --s3-session-token string                  An AWS session token
      --s3-sse-kms-key-id string                 If using KMS ID you must provide the ARN of Key.
--sftp-port string SSH port, leave blank to use default (22) --s3-storage-class string The storage class to use when storing new objects in S3.
--sftp-set-modtime Set the modified time on the remote if set. (default true) --s3-upload-concurrency int Concurrency for multipart uploads. (default 4)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker. --s3-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M)
--sftp-user string SSH username, leave blank for current username, ncw --s3-v2-auth If true use v2 authentication.
--size-only Skip based on size only, not mod-time or checksum --sftp-ask-password Allow asking for SFTP password when needed.
--skip-links Don't warn about skipped symlinks. --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) --sftp-host string SSH host to connect to
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40) --sftp-key-file string Path to PEM-encoded private key file, leave blank or set key-use-agent to use ssh-agent.
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") --sftp-key-file-pass string The passphrase to decrypt the PEM-encoded private key file.
--stats-one-line Make the stats fit on one line. --sftp-key-use-agent When set forces the usage of the ssh-agent.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") --sftp-pass string SSH password, leave blank to use ssh-agent.
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) --sftp-path-override string Override path used by SSH connection.
--suffix string Suffix for use with --backup-dir. --sftp-port string SSH port, leave blank to use default (22)
--swift-auth string Authentication URL for server (OS_AUTH_URL). --sftp-set-modtime Set the modified time on the remote if set. (default true)
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) --sftp-user string SSH username, leave blank for current username, ncw
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) --size-only Skip based on size only, not mod-time or checksum
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) --skip-links Don't warn about skipped symlinks.
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form. --stats-file-name-length int Max file name length in stats. 0 for no limit (default 45)
--swift-key string API key or password (OS_PASSWORD). --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--swift-region string Region name - optional (OS_REGION_NAME) --stats-one-line Make the stats fit on one line.
--swift-storage-policy string The storage policy to use when creating a new container --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL) --streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) --suffix string Suffix for use with --backup-dir.
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) --swift-application-credential-id string Application Credential ID (OS_APPLICATION_CREDENTIAL_ID)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) --swift-application-credential-name string Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME)
--swift-user string User name to log in (OS_USERNAME). --swift-application-credential-secret string Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET)
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). --swift-auth string Authentication URL for server (OS_AUTH_URL).
--syslog Use Syslog for logging --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--timeout duration IO idle timeout (default 5m0s) --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--tpslimit float Limit HTTP transactions per second to this. --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--track-renames When synchronizing, track file renames and do a server side move if possible --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--transfers int Number of file transfers to run in parallel. (default 4) --swift-key string API key or password (OS_PASSWORD).
--union-remotes string List of space separated remotes. --swift-no-chunk Don't chunk files during streaming upload.
-u, --update Skip files that are newer on the destination. --swift-region string Region name - optional (OS_REGION_NAME)
--use-server-modtime Use server modified time instead of object metadata --swift-storage-policy string The storage policy to use when creating a new container
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.45") --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
-v, --verbose count Print lots more stuff (repeat for more) --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon) --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--webdav-pass string Password. --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--webdav-url string URL of http host to connect to --swift-user string User name to log in (OS_USERNAME).
--webdav-user string User name --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--webdav-vendor string Name of the Webdav site/service/software you are using --syslog Use Syslog for logging
--yandex-client-id string Yandex Client Id --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--yandex-client-secret string Yandex Client Secret --timeout duration IO idle timeout (default 5m0s)
--yandex-unlink Remove existing public link to file/folder with link command rather than creating. --tpslimit float Limit HTTP transactions per second to this.
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
--union-remotes string List of space separated remotes.
-u, --update Skip files that are newer on the destination.
--use-cookies Enable session cookiejar.
--use-mmap Use mmap allocator (see docs).
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
--yandex-unlink Remove existing public link to file/folder with link command rather than creating.
```

### SEE ALSO

* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.

###### Auto generated by spf13/cobra on 9-Feb-2019
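The remote-control flags new in this release (`--rc-cert`, `--rc-key`, `--rc-client-ca`, `--rc-htpasswd`, `--rc-files`, `--rc-pass`, `--rc-no-auth`, `--rc-max-header-bytes`) combine to serve the rc API over TLS with per-user authentication. A minimal invocation sketch follows; the certificate, key, and htpasswd file names are assumptions for illustration, not files from this commit:

```shell
# Sketch: run the rclone remote-control daemon over HTTPS with htpasswd auth.
# cert.pem (certificate + CA concatenated), key.pem and ./htpasswd are
# assumed to exist already; create htpasswd entries with e.g.
# `htpasswd -B -c ./htpasswd alice` (htpasswd is from apache2-utils).
rclone rcd \
  --rc-cert cert.pem \
  --rc-key key.pem \
  --rc-htpasswd ./htpasswd \
  --rc-serve \
  --rc-files /srv/rclone-web
```

Per the flag descriptions above, `--rc-user`/`--rc-pass` give a single fixed login instead of an htpasswd file, and if neither is provided no authentication is done.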
View File

@ -1,5 +1,5 @@
---
date: 2019-02-09T10:42:18Z
title: "rclone rcd"
slug: rclone_rcd
url: /commands/rclone_rcd/
@ -11,7 +11,7 @@ Run rclone listening to remote control commands only.
### Synopsis

This runs rclone so that it only listens to remote control commands.

This is useful if you are controlling rclone via the rc API.

@ -35,285 +35,303 @@ rclone rcd <path to files to serve>* [flags]
### Options inherited from parent commands

```
--acd-auth-url string Auth server URL.
--acd-client-id string Amazon Application Client ID.
--acd-client-secret string Amazon Application Client Secret.
--acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-token-url string Token server url.
--acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--alias-remote string Remote or path to alias.
--ask-password Allow prompt for password for encrypted configuration. (default true)
--auto-confirm If enabled, do not request console confirmation.
--azureblob-access-tier string Access tier of blob: hot, cool or archive.
--azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
--azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M)
--azureblob-endpoint string Endpoint for the service
--azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
--azureblob-list-chunk int Size of blob list. (default 5000)
--azureblob-sas-url string SAS URL for container level access only
--azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M)
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--b2-disable-checksum Disable checksums for large (> upload cutoff) files
--b2-endpoint string Endpoint for the service.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-key string Application Key
--b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
--b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M)
--b2-versions Include old versions in directory listings.
--backup-dir string Make backups into hierarchy based in DIR.
--bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
--box-client-id string Box App Client Id.
--box-client-secret string Box App Client Secret
--box-commit-retries int Max number of times to try committing a multipart file. (default 100)
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
--buffer-size SizeSuffix In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
--cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend")
--cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M)
--cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G)
--cache-db-path string Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend")
--cache-db-purge Clear all the cached data for this remote on start.
--cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string Directory rclone will use for caching. (default "$HOME/.cache/rclone")
--cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
--cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server
--cache-plex-password string The password of the Plex user
--cache-plex-url string The URL of the Plex server
--cache-plex-username string The username of the Plex user
--cache-read-retries int How many times to retry a read from a cache storage. (default 10)
--cache-remote string Remote to cache.
--cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1)
--cache-tmp-upload-path string Directory to keep temporary files until they are uploaded.
--cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
--cache-workers int How many workers should run in parallel to download chunks. (default 4)
--cache-writes Cache file data on writes through the FS
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum (if available) & size, not mod-time & size
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
-L, --copy-links Follow symlinks and copy the pointed to item.
--cpuprofile string Write cpu profile to file
--crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--crypt-password string Password or pass phrase for encryption.
--crypt-password2 string Password or pass phrase for salt. Optional but recommended.
--crypt-remote string Remote to encrypt/decrypt.
--crypt-show-mapping For all files listed show how the names encrypt.
--delete-after When synchronizing, delete files on destination after transferring (default)
--delete-before When synchronizing, delete files on destination before transferring
--delete-during When synchronizing, delete files during transfer
--delete-excluded Delete files on dest excluded from sync
--disable string Disable a comma separated list of features. Use help to see a list.
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
--drive-alternate-export Use alternate export URLs for google documents export.,
--drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M)
--drive-client-id string Google Application Client Id
--drive-client-secret string Google Application Client Secret
--drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-formats string Deprecated: see export_formats
--drive-impersonate string Impersonate this user when using a service account.
--drive-import-formats string Comma separated list of preferred formats for uploading Google docs.
--drive-keep-revision-forever Keep new head revision of each file forever.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-pacer-burst int Number of API calls to allow without sleeping. (default 100)
--drive-pacer-min-sleep Duration Minimum time to sleep between API calls. (default 100ms)
--drive-root-folder-id string ID of the root folder
--drive-scope string Scope that rclone should use when requesting access from drive.
--drive-service-account-credentials string Service Account Credentials JSON blob
--drive-service-account-file string Service Account Credentials JSON file path
--drive-shared-with-me Only show files that are shared with me.
--drive-skip-gdocs Skip google documents in all listings.
--drive-team-drive string ID of the Team Drive
--drive-trashed-only Only show files that are in the trash.
--drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--drive-use-created-date Use file created date instead of modified date.,
--drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off)
--dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M)
--dropbox-client-id string Dropbox App Client Id
--dropbox-client-secret string Dropbox App Client Secret
--dropbox-impersonate string Impersonate this user when using a business account.
-n, --dry-run Do a trial run with no permanent changes
--dump DumpFlags List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--dump-headers Dump HTTP bodies - may contain sensitive info
--exclude stringArray Exclude files matching pattern
--exclude-from stringArray Read exclude patterns from file
--exclude-if-present string Exclude directories if filename is present
--fast-list Use recursive list if available. Uses more memory but fewer transactions.
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
--ftp-host string FTP host to connect to
--ftp-pass string FTP password
--ftp-port string FTP port, leave blank to use default (21)
--ftp-user string FTP username, leave blank for current username, $USER
--gcs-bucket-acl string Access Control List for new buckets.
--gcs-client-id string Google Application Client Id
--gcs-client-secret string Google Application Client Secret
--gcs-location string Location for the newly created buckets.
--gcs-object-acl string Access Control List for new objects.
--gcs-project-number string Project number.
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-url string URL of http host to connect to
--hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--hubic-client-id string Hubic Client Id
--hubic-client-secret string Hubic Client Secret
--hubic-no-chunk Don't chunk files during streaming upload.
--ignore-case Ignore case in filters (case insensitive)
--ignore-checksum Skip post copy check of checksums.
--ignore-errors delete even if there are I/O errors
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
--immutable Do not modify files. Fail if existing files have been modified.
--include stringArray Include files matching pattern
--include-from stringArray Read include patterns from file
--jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--jottacloud-mountpoint string The mountpoint to use.
--jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
--jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fail's. (default 10M)
--jottacloud-user string User Name:
-l, --links Translate symlinks to/from regular files with a '.rclonelink' extension
--local-no-check-updated Don't check to see if the files change during upload
--low-level-retries int Number of low level retries to do. (default 10) --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) --local-nounc string Disable UNC (long path names) conversion on Windows
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000) --log-file string Log everything to this file
--max-delete int When synchronizing, limit the number of deletes (default -1) --log-format string Comma separated list of log format options (default "date,time")
--max-depth int If set limits the recursion depth to this. (default -1) --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) --low-level-retries int Number of low level retries to do. (default 10)
--max-transfer int Maximum size of data to transfer. (default off) --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--mega-debug Output more debug from Mega. --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--mega-hard-delete Delete files permanently rather than putting them into the trash. --max-delete int When synchronizing, limit the number of deletes (default -1)
--mega-pass string Password. --max-depth int If set limits the recursion depth to this. (default -1)
--mega-user string User name --max-size SizeSuffix Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--memprofile string Write memory profile to file --max-transfer SizeSuffix Maximum size of data to transfer. (default off)
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) --mega-debug Output more debug from Mega.
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) --mega-hard-delete Delete files permanently rather than putting them into the trash.
--modify-window duration Max time diff to be considered the same (default 1ns) --mega-pass string Password.
--no-check-certificate Do not verify the server SSL certificate. Insecure. --mega-user string User name
--no-gzip-encoding Don't set Accept-Encoding: gzip. --memprofile string Write memory profile to file
--no-traverse Obsolete - does nothing. --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--no-update-modtime Don't update destination mod-time if files identical. --min-size SizeSuffix Only transfer files bigger than this in k or suffix b|k|M|G (default off)
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only). --modify-window duration Max time diff to be considered the same (default 1ns)
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M) --no-check-certificate Do not verify the server SSL certificate. Insecure.
--onedrive-client-id string Microsoft App Client Id --no-gzip-encoding Don't set Accept-Encoding: gzip.
--onedrive-client-secret string Microsoft App Client Secret --no-traverse Don't traverse destination file system on copy.
--onedrive-drive-id string The ID of the drive to use --no-update-modtime Don't update destination mod-time if files identical.
--onedrive-drive-type string The type of the drive ( personal | business | documentLibrary ) -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-expose-onenote-files Set to make OneNote files show up in directory listings. --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--opendrive-password string Password. --onedrive-client-id string Microsoft App Client Id
--opendrive-username string Username --onedrive-client-secret string Microsoft App Client Secret
--pcloud-client-id string Pcloud App Client Id --onedrive-drive-id string The ID of the drive to use
--pcloud-client-secret string Pcloud App Client Secret --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
-P, --progress Show progress during transfer. --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
--qingstor-access-key-id string QingStor Access Key ID --opendrive-password string Password.
--qingstor-connection-retries int Number of connection retries. (default 3) --opendrive-username string Username
--qingstor-endpoint string Enter a endpoint URL to connection QingStor API. --pcloud-client-id string Pcloud App Client Id
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank. --pcloud-client-secret string Pcloud App Client Secret
--qingstor-secret-access-key string QingStor Secret Access Key (password) -P, --progress Show progress during transfer.
--qingstor-zone string Zone to connect to. --qingstor-access-key-id string QingStor Access Key ID
-q, --quiet Print as little stuff as possible --qingstor-chunk-size SizeSuffix Chunk size to use for uploading. (default 4M)
--rc Enable the remote control server. --qingstor-connection-retries int Number of connection retries. (default 3)
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") --qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate) --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
--rc-client-ca string Client certificate authority to verify clients with --qingstor-secret-access-key string QingStor Secret Access Key (password)
--rc-files string Path to local files to serve on the HTTP server. --qingstor-upload-concurrency int Concurrency for multipart uploads. (default 1)
--rc-htpasswd string htpasswd file - if not provided no authentication is done --qingstor-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M)
--rc-key string SSL PEM Private key --qingstor-zone string Zone to connect to.
--rc-max-header-bytes int Maximum size of request header (default 4096) -q, --quiet Print as little stuff as possible
--rc-no-auth Don't require auth for certain methods. --rc Enable the remote control server.
--rc-pass string Password for authentication. --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
--rc-realm string realm for authentication (default "rclone") --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--rc-serve Enable the serving of remote objects. --rc-client-ca string Client certificate authority to verify clients with
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) --rc-files string Path to local files to serve on the HTTP server.
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) --rc-htpasswd string htpasswd file - if not provided no authentication is done
--rc-user string User name for authentication. --rc-key string SSL PEM Private key
--retries int Retry operations this many times if they fail (default 3) --rc-max-header-bytes int Maximum size of request header (default 4096)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) --rc-no-auth Don't require auth for certain methods.
--s3-access-key-id string AWS Access Key ID. --rc-pass string Password for authentication.
--s3-acl string Canned ACL used when creating buckets and storing or copying objects. --rc-realm string realm for authentication (default "rclone")
--s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M) --rc-serve Enable the serving of remote objects.
--s3-disable-checksum Don't store MD5 checksum with object metadata --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--s3-endpoint string Endpoint for S3 API. --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). --rc-user string User name for authentication.
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true) --retries int Retry operations this many times if they fail (default 3)
--s3-location-constraint string Location constraint - must be set to match the Region. --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--s3-provider string Choose your S3 provider. --s3-access-key-id string AWS Access Key ID.
--s3-region string Region to connect to. --s3-acl string Canned ACL used when creating buckets and storing or copying objects.
--s3-secret-access-key string AWS Secret Access Key (password) --s3-bucket-acl string Canned ACL used when creating buckets.
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
--s3-session-token string An AWS session token --s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. --s3-endpoint string Endpoint for S3 API.
--s3-storage-class string The storage class to use when storing new objects in S3. --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2) --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--s3-v2-auth If true use v2 authentication. --s3-location-constraint string Location constraint - must be set to match the Region.
--sftp-ask-password Allow asking for SFTP password when needed. --s3-provider string Choose your S3 provider.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. --s3-region string Region to connect to.
--sftp-host string SSH host to connect to --s3-secret-access-key string AWS Secret Access Key (password)
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent. --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--sftp-pass string SSH password, leave blank to use ssh-agent. --s3-session-token string An AWS session token
--sftp-path-override string Override path used by SSH connection. --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--sftp-port string SSH port, leave blank to use default (22) --s3-storage-class string The storage class to use when storing new objects in S3.
--sftp-set-modtime Set the modified time on the remote if set. (default true) --s3-upload-concurrency int Concurrency for multipart uploads. (default 4)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker. --s3-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M)
--sftp-user string SSH username, leave blank for current username, ncw --s3-v2-auth If true use v2 authentication.
--size-only Skip based on size only, not mod-time or checksum --sftp-ask-password Allow asking for SFTP password when needed.
--skip-links Don't warn about skipped symlinks. --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) --sftp-host string SSH host to connect to
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40) --sftp-key-file string Path to PEM-encoded private key file, leave blank or set key-use-agent to use ssh-agent.
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") --sftp-key-file-pass string The passphrase to decrypt the PEM-encoded private key file.
--stats-one-line Make the stats fit on one line. --sftp-key-use-agent When set forces the usage of the ssh-agent.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") --sftp-pass string SSH password, leave blank to use ssh-agent.
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) --sftp-path-override string Override path used by SSH connection.
--suffix string Suffix for use with --backup-dir. --sftp-port string SSH port, leave blank to use default (22)
--swift-auth string Authentication URL for server (OS_AUTH_URL). --sftp-set-modtime Set the modified time on the remote if set. (default true)
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) --sftp-user string SSH username, leave blank for current username, ncw
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) --size-only Skip based on size only, not mod-time or checksum
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) --skip-links Don't warn about skipped symlinks.
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form. --stats-file-name-length int Max file name length in stats. 0 for no limit (default 45)
--swift-key string API key or password (OS_PASSWORD). --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--swift-region string Region name - optional (OS_REGION_NAME) --stats-one-line Make the stats fit on one line.
--swift-storage-policy string The storage policy to use when creating a new container --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL) --streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) --suffix string Suffix for use with --backup-dir.
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) --swift-application-credential-id string Application Credential ID (OS_APPLICATION_CREDENTIAL_ID)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) --swift-application-credential-name string Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME)
--swift-user string User name to log in (OS_USERNAME). --swift-application-credential-secret string Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET)
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). --swift-auth string Authentication URL for server (OS_AUTH_URL).
--syslog Use Syslog for logging --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--timeout duration IO idle timeout (default 5m0s) --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--tpslimit float Limit HTTP transactions per second to this. --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--track-renames When synchronizing, track file renames and do a server side move if possible --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--transfers int Number of file transfers to run in parallel. (default 4) --swift-key string API key or password (OS_PASSWORD).
--union-remotes string List of space separated remotes. --swift-no-chunk Don't chunk files during streaming upload.
-u, --update Skip files that are newer on the destination. --swift-region string Region name - optional (OS_REGION_NAME)
--use-server-modtime Use server modified time instead of object metadata --swift-storage-policy string The storage policy to use when creating a new container
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.45") --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
-v, --verbose count Print lots more stuff (repeat for more) --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon) --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--webdav-pass string Password. --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--webdav-url string URL of http host to connect to --swift-user string User name to log in (OS_USERNAME).
--webdav-user string User name --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--webdav-vendor string Name of the Webdav site/service/software you are using --syslog Use Syslog for logging
--yandex-client-id string Yandex Client Id --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--yandex-client-secret string Yandex Client Secret --timeout duration IO idle timeout (default 5m0s)
--yandex-unlink Remove existing public link to file/folder with link command rather than creating. --tpslimit float Limit HTTP transactions per second to this.
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
--union-remotes string List of space separated remotes.
-u, --update Skip files that are newer on the destination.
--use-cookies Enable session cookiejar.
--use-mmap Use mmap allocator (see docs).
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
--yandex-unlink Remove existing public link to file/folder with link command rather than creating.
```
### SEE ALSO

* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.

###### Auto generated by spf13/cobra on 9-Feb-2019
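Many of the flags above take a `SizeSuffix` value (for example `--b2-chunk-size 96M`, `--streaming-upload-cutoff 100k`, `--swift-chunk-size 5G`): suffixes are binary multiples, and a bare number is read as kBytes. The following is a minimal Python sketch of that parsing rule (illustrative only; rclone itself implements `SizeSuffix` in Go):

```python
# Illustrative sketch of rclone-style SizeSuffix parsing.
# Suffixes are binary multiples: b = 1, k = 1024, M = 1024**2, G = 1024**3.
SUFFIXES = {"b": 1, "k": 1024, "M": 1024 ** 2, "G": 1024 ** 3}

def parse_size_suffix(s: str, default_multiplier: int = 1024) -> int:
    """Parse values like '96M', '100k' or '5G' into a byte count.

    A bare number defaults to kBytes, matching rclone's documented
    behaviour for size options.
    """
    s = s.strip()
    if not s:
        raise ValueError("empty size")
    if s[-1] in SUFFIXES:
        return int(float(s[:-1]) * SUFFIXES[s[-1]])
    return int(float(s) * default_multiplier)

print(parse_size_suffix("96M"))   # 100663296 (the --b2-chunk-size default)
print(parse_size_suffix("100k"))  # 102400
print(parse_size_suffix("5G"))    # 5368709120
```

This is why a default shown as `96M` corresponds to 96 MiB of buffered upload data per chunk, not 96,000,000 bytes.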
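Several global flags listed here (`--timeout 5m0s`, `--contimeout 1m0s`, `--rc-server-read-timeout 1h0m0s`) take Go-style duration strings, since rclone relies on Go's `time.ParseDuration`. A rough Python equivalent, handling only the ms/s/m/h units that appear in these defaults (an illustrative sketch, not rclone's actual parser):

```python
import re

# Parse Go-style durations such as '5m0s', '1h0m0s' or '500ms'
# into a number of seconds. Only the ms/s/m/h units are handled here.
UNITS = {"ms": 0.001, "s": 1, "m": 60, "h": 3600}
_TOKEN = re.compile(r"(\d+(?:\.\d+)?)(ms|s|m|h)")

def parse_duration(s: str) -> float:
    """Return total seconds for strings like '5m0s' or '1h0m0s'."""
    pos, total = 0, 0.0
    for m in _TOKEN.finditer(s):
        if m.start() != pos:  # reject junk between tokens
            raise ValueError(f"bad duration: {s!r}")
        total += float(m.group(1)) * UNITS[m.group(2)]
        pos = m.end()
    if pos != len(s) or pos == 0:  # whole string must be consumed
        raise ValueError(f"bad duration: {s!r}")
    return total

print(parse_duration("5m0s"))    # 300.0 (the --timeout default)
print(parse_duration("1h0m0s"))  # 3600.0
print(parse_duration("500ms"))   # 0.5
```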

@@ -1,5 +1,5 @@
---
date: 2019-02-09T10:42:18Z
title: "rclone rmdir"
slug: rclone_rmdir
url: /commands/rclone_rmdir/
@@ -27,285 +27,303 @@ rclone rmdir remote:path [flags]
### Options inherited from parent commands
```
--acd-auth-url string  Auth server URL.
--acd-client-id string  Amazon Application Client ID.
--acd-client-secret string  Amazon Application Client Secret.
--acd-templink-threshold SizeSuffix  Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-token-url string  Token server url.
--acd-upload-wait-per-gb Duration  Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--alias-remote string  Remote or path to alias.
--ask-password  Allow prompt for password for encrypted configuration. (default true)
--auto-confirm  If enabled, do not request console confirmation.
--azureblob-access-tier string  Access tier of blob: hot, cool or archive.
--azureblob-account string  Storage Account Name (leave blank to use connection string or SAS URL)
--azureblob-chunk-size SizeSuffix  Upload chunk size (<= 100MB). (default 4M)
--azureblob-endpoint string  Endpoint for the service
--azureblob-key string  Storage Account Key (leave blank to use connection string or SAS URL)
--azureblob-list-chunk int  Size of blob list. (default 5000)
--azureblob-sas-url string  SAS URL for container level access only
--azureblob-upload-cutoff SizeSuffix  Cutoff for switching to chunked upload (<= 256MB). (default 256M)
--b2-account string  Account ID or Application Key ID
--b2-chunk-size SizeSuffix  Upload chunk size. Must fit in memory. (default 96M)
--b2-disable-checksum  Disable checksums for large (> upload cutoff) files
--b2-endpoint string  Endpoint for the service.
--b2-hard-delete  Permanently delete files on remote removal, otherwise hide files.
--b2-key string  Application Key
--b2-test-mode string  A flag string for X-Bz-Test-Mode header for debugging.
--b2-upload-cutoff SizeSuffix  Cutoff for switching to chunked upload. (default 200M)
--b2-versions  Include old versions in directory listings.
--backup-dir string  Make backups into hierarchy based in DIR.
--bind string  Local address to bind to for outgoing connections, IPv4, IPv6 or name.
--box-client-id string  Box App Client Id.
--box-client-secret string  Box App Client Secret
--box-commit-retries int  Max number of times to try committing a multipart file. (default 100)
--box-upload-cutoff SizeSuffix  Cutoff for switching to multipart upload (>= 50MB). (default 50M)
--buffer-size SizeSuffix  In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable  Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--cache-chunk-clean-interval Duration  How often should the cache perform cleanups of the chunk storage. (default 1m0s)
--cache-chunk-no-memory  Disable the in-memory cache for storing chunks during streaming.
--cache-chunk-path string  Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend")
--cache-chunk-size SizeSuffix  The size of a chunk (partial file data). (default 5M)
--cache-chunk-total-size SizeSuffix  The total size that the chunks can take up on the local disk. (default 10G)
--cache-db-path string  Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend")
--cache-db-purge  Clear all the cached data for this remote on start.
--cache-db-wait-time Duration  How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string  Directory rclone will use for caching. (default "$HOME/.cache/rclone")
--cache-info-age Duration  How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
--cache-plex-insecure string  Skip all certificate verifications when connecting to the Plex server
--cache-plex-password string  The password of the Plex user
--cache-plex-url string  The URL of the Plex server
--cache-plex-username string  The username of the Plex user
--cache-read-retries int  How many times to retry a read from a cache storage. (default 10)
--cache-remote string  Remote to cache.
--cache-rps int  Limits the number of requests per second to the source FS (-1 to disable) (default -1)
--cache-tmp-upload-path string  Directory to keep temporary files until they are uploaded.
--cache-tmp-wait-time Duration  How long should files be stored in local cache before being uploaded (default 15s)
--cache-workers int  How many workers should run in parallel to download chunks. (default 4)
--cache-writes  Cache file data on writes through the FS
--checkers int  Number of checkers to run in parallel. (default 8)
-c, --checksum  Skip based on checksum (if available) & size, not mod-time & size
--config string  Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration  Connect timeout (default 1m0s)
-L, --copy-links  Follow symlinks and copy the pointed to item.
--cpuprofile string  Write cpu profile to file
--crypt-directory-name-encryption  Option to either encrypt directory names or leave them intact. (default true)
--crypt-filename-encryption string  How to encrypt the filenames. (default "standard")
--crypt-password string  Password or pass phrase for encryption.
--crypt-password2 string  Password or pass phrase for salt. Optional but recommended.
--crypt-remote string  Remote to encrypt/decrypt.
--crypt-show-mapping  For all files listed show how the names encrypt.
--delete-after  When synchronizing, delete files on destination after transferring (default)
--delete-before  When synchronizing, delete files on destination before transferring
--delete-during  When synchronizing, delete files during transfer
--delete-excluded  Delete files on dest excluded from sync
--disable string  Disable a comma separated list of features. Use help to see a list.
--drive-acknowledge-abuse  Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--drive-allow-import-name-change  Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
--drive-auth-owner-only Only consider files owned by the authenticated user. --drive-alternate-export Use alternate export URLs for google documents export.,
--drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M) --drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-client-id string Google Application Client Id --drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M)
--drive-client-secret string Google Application Client Secret --drive-client-id string Google Application Client Id
--drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") --drive-client-secret string Google Application Client Secret
--drive-formats string Deprecated: see export_formats --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-impersonate string Impersonate this user when using a service account. --drive-formats string Deprecated: see export_formats
--drive-import-formats string Comma separated list of preferred formats for uploading Google docs. --drive-impersonate string Impersonate this user when using a service account.
--drive-keep-revision-forever Keep new head revision of each file forever. --drive-import-formats string Comma separated list of preferred formats for uploading Google docs.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000) --drive-keep-revision-forever Keep new head revision of each file forever.
--drive-root-folder-id string ID of the root folder --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-scope string Scope that rclone should use when requesting access from drive. --drive-pacer-burst int Number of API calls to allow without sleeping. (default 100)
--drive-service-account-credentials string Service Account Credentials JSON blob --drive-pacer-min-sleep Duration Minimum time to sleep between API calls. (default 100ms)
--drive-service-account-file string Service Account Credentials JSON file path --drive-root-folder-id string ID of the root folder
--drive-shared-with-me Only show files that are shared with me. --drive-scope string Scope that rclone should use when requesting access from drive.
--drive-skip-gdocs Skip google documents in all listings. --drive-service-account-credentials string Service Account Credentials JSON blob
--drive-team-drive string ID of the Team Drive --drive-service-account-file string Service Account Credentials JSON file path
--drive-trashed-only Only show files that are in the trash. --drive-shared-with-me Only show files that are shared with me.
--drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M) --drive-skip-gdocs Skip google documents in all listings.
--drive-use-created-date Use file created date instead of modified date., --drive-team-drive string ID of the Team Drive
--drive-use-trash Send files to the trash instead of deleting permanently. (default true) --drive-trashed-only Only show files that are in the trash.
--drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off) --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M) --drive-use-created-date Use file created date instead of modified date.,
--dropbox-client-id string Dropbox App Client Id --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--dropbox-client-secret string Dropbox App Client Secret --drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off)
--dropbox-impersonate string Impersonate this user when using a business account. --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M)
-n, --dry-run Do a trial run with no permanent changes --dropbox-client-id string Dropbox App Client Id
--dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles --dropbox-client-secret string Dropbox App Client Secret
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info --dropbox-impersonate string Impersonate this user when using a business account.
--dump-headers Dump HTTP bodies - may contain sensitive info -n, --dry-run Do a trial run with no permanent changes
--exclude stringArray Exclude files matching pattern --dump DumpFlags List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--exclude-from stringArray Read exclude patterns from file --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--exclude-if-present string Exclude directories if filename is present --dump-headers Dump HTTP bodies - may contain sensitive info
--fast-list Use recursive list if available. Uses more memory but fewer transactions. --exclude stringArray Exclude files matching pattern
--files-from stringArray Read list of source-file names from file --exclude-from stringArray Read exclude patterns from file
-f, --filter stringArray Add a file-filtering rule --exclude-if-present string Exclude directories if filename is present
--filter-from stringArray Read filtering patterns from a file --fast-list Use recursive list if available. Uses more memory but fewer transactions.
--ftp-host string FTP host to connect to --files-from stringArray Read list of source-file names from file
--ftp-pass string FTP password -f, --filter stringArray Add a file-filtering rule
--ftp-port string FTP port, leave blank to use default (21) --filter-from stringArray Read filtering patterns from a file
--ftp-user string FTP username, leave blank for current username, $USER --ftp-host string FTP host to connect to
--gcs-bucket-acl string Access Control List for new buckets. --ftp-pass string FTP password
--gcs-client-id string Google Application Client Id --ftp-port string FTP port, leave blank to use default (21)
--gcs-client-secret string Google Application Client Secret --ftp-user string FTP username, leave blank for current username, $USER
--gcs-location string Location for the newly created buckets. --gcs-bucket-acl string Access Control List for new buckets.
--gcs-object-acl string Access Control List for new objects. --gcs-client-id string Google Application Client Id
--gcs-project-number string Project number. --gcs-client-secret string Google Application Client Secret
--gcs-service-account-file string Service Account Credentials JSON file path --gcs-location string Location for the newly created buckets.
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage. --gcs-object-acl string Access Control List for new objects.
--http-url string URL of http host to connect to --gcs-project-number string Project number.
--hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) --gcs-service-account-file string Service Account Credentials JSON file path
--hubic-client-id string Hubic Client Id --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--hubic-client-secret string Hubic Client Secret --http-url string URL of http host to connect to
--ignore-case Ignore case in filters (case insensitive) --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--ignore-checksum Skip post copy check of checksums. --hubic-client-id string Hubic Client Id
--ignore-errors delete even if there are I/O errors --hubic-client-secret string Hubic Client Secret
--ignore-existing Skip all files that exist on destination --hubic-no-chunk Don't chunk files during streaming upload.
--ignore-size Ignore size when skipping use mod-time or checksum. --ignore-case Ignore case in filters (case insensitive)
-I, --ignore-times Don't skip files that match size and time - transfer all files --ignore-checksum Skip post copy check of checksums.
--immutable Do not modify files. Fail if existing files have been modified. --ignore-errors delete even if there are I/O errors
--include stringArray Include files matching pattern --ignore-existing Skip all files that exist on destination
--include-from stringArray Read include patterns from file --ignore-size Ignore size when skipping use mod-time or checksum.
--jottacloud-hard-delete Delete files permanently rather than putting them into the trash. -I, --ignore-times Don't skip files that match size and time - transfer all files
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M) --immutable Do not modify files. Fail if existing files have been modified.
--jottacloud-mountpoint string The mountpoint to use. --include stringArray Include files matching pattern
--jottacloud-pass string Password. --include-from stringArray Read include patterns from file
--jottacloud-unlink Remove existing public link to file/folder with link command rather than creating. --jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
--jottacloud-user string User Name --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--local-no-check-updated Don't check to see if the files change during upload --jottacloud-mountpoint string The mountpoint to use.
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated) --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
--local-nounc string Disable UNC (long path names) conversion on Windows --jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fail's. (default 10M)
--log-file string Log everything to this file --jottacloud-user string User Name:
--log-format string Comma separated list of log format options (default "date,time") -l, --links Translate symlinks to/from regular files with a '.rclonelink' extension
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") --local-no-check-updated Don't check to see if the files change during upload
--low-level-retries int Number of low level retries to do. (default 10) --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) --local-nounc string Disable UNC (long path names) conversion on Windows
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000) --log-file string Log everything to this file
--max-delete int When synchronizing, limit the number of deletes (default -1) --log-format string Comma separated list of log format options (default "date,time")
--max-depth int If set limits the recursion depth to this. (default -1) --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) --low-level-retries int Number of low level retries to do. (default 10)
--max-transfer int Maximum size of data to transfer. (default off) --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--mega-debug Output more debug from Mega. --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--mega-hard-delete Delete files permanently rather than putting them into the trash. --max-delete int When synchronizing, limit the number of deletes (default -1)
--mega-pass string Password. --max-depth int If set limits the recursion depth to this. (default -1)
--mega-user string User name --max-size SizeSuffix Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--memprofile string Write memory profile to file --max-transfer SizeSuffix Maximum size of data to transfer. (default off)
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) --mega-debug Output more debug from Mega.
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) --mega-hard-delete Delete files permanently rather than putting them into the trash.
--modify-window duration Max time diff to be considered the same (default 1ns) --mega-pass string Password.
--no-check-certificate Do not verify the server SSL certificate. Insecure. --mega-user string User name
--no-gzip-encoding Don't set Accept-Encoding: gzip. --memprofile string Write memory profile to file
--no-traverse Obsolete - does nothing. --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--no-update-modtime Don't update destination mod-time if files identical. --min-size SizeSuffix Only transfer files bigger than this in k or suffix b|k|M|G (default off)
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only). --modify-window duration Max time diff to be considered the same (default 1ns)
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M) --no-check-certificate Do not verify the server SSL certificate. Insecure.
--onedrive-client-id string Microsoft App Client Id --no-gzip-encoding Don't set Accept-Encoding: gzip.
--onedrive-client-secret string Microsoft App Client Secret --no-traverse Don't traverse destination file system on copy.
--onedrive-drive-id string The ID of the drive to use --no-update-modtime Don't update destination mod-time if files identical.
--onedrive-drive-type string The type of the drive ( personal | business | documentLibrary ) -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-expose-onenote-files Set to make OneNote files show up in directory listings. --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--opendrive-password string Password. --onedrive-client-id string Microsoft App Client Id
--opendrive-username string Username --onedrive-client-secret string Microsoft App Client Secret
--pcloud-client-id string Pcloud App Client Id --onedrive-drive-id string The ID of the drive to use
--pcloud-client-secret string Pcloud App Client Secret --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
-P, --progress Show progress during transfer. --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
--qingstor-access-key-id string QingStor Access Key ID --opendrive-password string Password.
--qingstor-connection-retries int Number of connection retries. (default 3) --opendrive-username string Username
--qingstor-endpoint string Enter a endpoint URL to connection QingStor API. --pcloud-client-id string Pcloud App Client Id
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank. --pcloud-client-secret string Pcloud App Client Secret
--qingstor-secret-access-key string QingStor Secret Access Key (password) -P, --progress Show progress during transfer.
--qingstor-zone string Zone to connect to. --qingstor-access-key-id string QingStor Access Key ID
-q, --quiet Print as little stuff as possible --qingstor-chunk-size SizeSuffix Chunk size to use for uploading. (default 4M)
--rc Enable the remote control server. --qingstor-connection-retries int Number of connection retries. (default 3)
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") --qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate) --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
--rc-client-ca string Client certificate authority to verify clients with --qingstor-secret-access-key string QingStor Secret Access Key (password)
--rc-files string Path to local files to serve on the HTTP server. --qingstor-upload-concurrency int Concurrency for multipart uploads. (default 1)
--rc-htpasswd string htpasswd file - if not provided no authentication is done --qingstor-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M)
--rc-key string SSL PEM Private key --qingstor-zone string Zone to connect to.
--rc-max-header-bytes int Maximum size of request header (default 4096) -q, --quiet Print as little stuff as possible
--rc-no-auth Don't require auth for certain methods. --rc Enable the remote control server.
--rc-pass string Password for authentication. --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
--rc-realm string realm for authentication (default "rclone") --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--rc-serve Enable the serving of remote objects. --rc-client-ca string Client certificate authority to verify clients with
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) --rc-files string Path to local files to serve on the HTTP server.
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) --rc-htpasswd string htpasswd file - if not provided no authentication is done
--rc-user string User name for authentication. --rc-key string SSL PEM Private key
--retries int Retry operations this many times if they fail (default 3) --rc-max-header-bytes int Maximum size of request header (default 4096)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) --rc-no-auth Don't require auth for certain methods.
--s3-access-key-id string AWS Access Key ID. --rc-pass string Password for authentication.
--s3-acl string Canned ACL used when creating buckets and storing or copying objects. --rc-realm string realm for authentication (default "rclone")
--s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M) --rc-serve Enable the serving of remote objects.
--s3-disable-checksum Don't store MD5 checksum with object metadata --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--s3-endpoint string Endpoint for S3 API. --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). --rc-user string User name for authentication.
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true) --retries int Retry operations this many times if they fail (default 3)
--s3-location-constraint string Location constraint - must be set to match the Region. --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--s3-provider string Choose your S3 provider. --s3-access-key-id string AWS Access Key ID.
--s3-region string Region to connect to. --s3-acl string Canned ACL used when creating buckets and storing or copying objects.
--s3-secret-access-key string AWS Secret Access Key (password) --s3-bucket-acl string Canned ACL used when creating buckets.
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
--s3-session-token string An AWS session token --s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. --s3-endpoint string Endpoint for S3 API.
--s3-storage-class string The storage class to use when storing new objects in S3. --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2) --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--s3-v2-auth If true use v2 authentication. --s3-location-constraint string Location constraint - must be set to match the Region.
--sftp-ask-password Allow asking for SFTP password when needed. --s3-provider string Choose your S3 provider.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. --s3-region string Region to connect to.
--sftp-host string SSH host to connect to --s3-secret-access-key string AWS Secret Access Key (password)
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent. --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--sftp-pass string SSH password, leave blank to use ssh-agent. --s3-session-token string An AWS session token
--sftp-path-override string Override path used by SSH connection. --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--sftp-port string SSH port, leave blank to use default (22) --s3-storage-class string The storage class to use when storing new objects in S3.
--sftp-set-modtime Set the modified time on the remote if set. (default true) --s3-upload-concurrency int Concurrency for multipart uploads. (default 4)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker. --s3-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M)
--sftp-user string SSH username, leave blank for current username, ncw --s3-v2-auth If true use v2 authentication.
--size-only Skip based on size only, not mod-time or checksum --sftp-ask-password Allow asking for SFTP password when needed.
--skip-links Don't warn about skipped symlinks. --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) --sftp-host string SSH host to connect to
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40) --sftp-key-file string Path to PEM-encoded private key file, leave blank or set key-use-agent to use ssh-agent.
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") --sftp-key-file-pass string The passphrase to decrypt the PEM-encoded private key file.
--stats-one-line Make the stats fit on one line. --sftp-key-use-agent When set forces the usage of the ssh-agent.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") --sftp-pass string SSH password, leave blank to use ssh-agent.
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) --sftp-path-override string Override path used by SSH connection.
--suffix string Suffix for use with --backup-dir. --sftp-port string SSH port, leave blank to use default (22)
--swift-auth string Authentication URL for server (OS_AUTH_URL). --sftp-set-modtime Set the modified time on the remote if set. (default true)
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) --sftp-user string SSH username, leave blank for current username, ncw
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) --size-only Skip based on size only, not mod-time or checksum
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) --skip-links Don't warn about skipped symlinks.
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form. --stats-file-name-length int Max file name length in stats. 0 for no limit (default 45)
--swift-key string API key or password (OS_PASSWORD). --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--swift-region string Region name - optional (OS_REGION_NAME) --stats-one-line Make the stats fit on one line.
--swift-storage-policy string The storage policy to use when creating a new container --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL) --streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) --suffix string Suffix for use with --backup-dir.
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) --swift-application-credential-id string Application Credential ID (OS_APPLICATION_CREDENTIAL_ID)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) --swift-application-credential-name string Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME)
--swift-user string User name to log in (OS_USERNAME). --swift-application-credential-secret string Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET)
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). --swift-auth string Authentication URL for server (OS_AUTH_URL).
--syslog Use Syslog for logging --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--timeout duration IO idle timeout (default 5m0s) --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--tpslimit float Limit HTTP transactions per second to this. --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--track-renames When synchronizing, track file renames and do a server side move if possible --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--transfers int Number of file transfers to run in parallel. (default 4) --swift-key string API key or password (OS_PASSWORD).
--union-remotes string List of space separated remotes. --swift-no-chunk Don't chunk files during streaming upload.
-u, --update Skip files that are newer on the destination. --swift-region string Region name - optional (OS_REGION_NAME)
--use-server-modtime Use server modified time instead of object metadata --swift-storage-policy string The storage policy to use when creating a new container
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.45") --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
-v, --verbose count Print lots more stuff (repeat for more) --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon) --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--webdav-pass string Password. --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--webdav-url string URL of http host to connect to --swift-user string User name to log in (OS_USERNAME).
--webdav-user string User name --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--webdav-vendor string Name of the Webdav site/service/software you are using --syslog Use Syslog for logging
--yandex-client-id string Yandex Client Id --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--yandex-client-secret string Yandex Client Secret --timeout duration IO idle timeout (default 5m0s)
--yandex-unlink Remove existing public link to file/folder with link command rather than creating. --tpslimit float Limit HTTP transactions per second to this.
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
--union-remotes string List of space separated remotes.
-u, --update Skip files that are newer on the destination.
--use-cookies Enable session cookiejar.
--use-mmap Use mmap allocator (see docs).
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
--yandex-unlink Remove existing public link to file/folder with link command rather than creating.
``` ```
### SEE ALSO ### SEE ALSO
* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends. * [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
###### Auto generated by spf13/cobra on 24-Nov-2018 ###### Auto generated by spf13/cobra on 9-Feb-2019


@@ -1,5 +1,5 @@
--- ---
date: 2018-11-24T13:43:29Z date: 2019-02-09T10:42:18Z
title: "rclone rmdirs" title: "rclone rmdirs"
slug: rclone_rmdirs slug: rclone_rmdirs
url: /commands/rclone_rmdirs/ url: /commands/rclone_rmdirs/
@@ -35,285 +35,303 @@ rclone rmdirs remote:path [flags]
### Options inherited from parent commands ### Options inherited from parent commands
``` ```
--acd-auth-url string Auth server URL. --acd-auth-url string Auth server URL.
--acd-client-id string Amazon Application Client ID. --acd-client-id string Amazon Application Client ID.
--acd-client-secret string Amazon Application Client Secret. --acd-client-secret string Amazon Application Client Secret.
--acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G) --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-token-url string Token server url. --acd-token-url string Token server url.
--acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s) --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--alias-remote string Remote or path to alias. --alias-remote string Remote or path to alias.
--ask-password Allow prompt for password for encrypted configuration. (default true) --ask-password Allow prompt for password for encrypted configuration. (default true)
--auto-confirm If enabled, do not request console confirmation. --auto-confirm If enabled, do not request console confirmation.
--azureblob-access-tier string Access tier of blob: hot, cool or archive. --azureblob-access-tier string Access tier of blob: hot, cool or archive.
--azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL) --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
--azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M) --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M)
--azureblob-endpoint string Endpoint for the service --azureblob-endpoint string Endpoint for the service
--azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL) --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
--azureblob-list-chunk int Size of blob list. (default 5000) --azureblob-list-chunk int Size of blob list. (default 5000)
--azureblob-sas-url string SAS URL for container level access only --azureblob-sas-url string SAS URL for container level access only
--azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M) --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M)
--b2-account string Account ID or Application Key ID --b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M) --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--b2-endpoint string Endpoint for the service. --b2-disable-checksum Disable checksums for large (> upload cutoff) files
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files. --b2-endpoint string Endpoint for the service.
--b2-key string Application Key --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging. --b2-key string Application Key
--b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M) --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
--b2-versions Include old versions in directory listings. --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M)
--backup-dir string Make backups into hierarchy based in DIR. --b2-versions Include old versions in directory listings.
--bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name. --backup-dir string Make backups into hierarchy based in DIR.
--box-client-id string Box App Client Id. --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
--box-client-secret string Box App Client Secret --box-client-id string Box App Client Id.
--box-commit-retries int Max number of times to try committing a multipart file. (default 100) --box-client-secret string Box App Client Secret
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M) --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
--buffer-size int In memory buffer size when reading files for each --transfer. (default 16M) --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable. --buffer-size SizeSuffix In memory buffer size when reading files for each --transfer. (default 16M)
--cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s) --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming. --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
--cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend") --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
--cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M) --cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend")
--cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G) --cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M)
--cache-db-path string Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend") --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G)
--cache-db-purge Clear all the cached data for this remote on start. --cache-db-path string Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend")
--cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s) --cache-db-purge Clear all the cached data for this remote on start.
--cache-dir string Directory rclone will use for caching. (default "$HOME/.cache/rclone") --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s) --cache-dir string Directory rclone will use for caching. (default "$HOME/.cache/rclone")
--cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
--cache-plex-password string The password of the Plex user --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server
--cache-plex-url string The URL of the Plex server --cache-plex-password string The password of the Plex user
--cache-plex-username string The username of the Plex user --cache-plex-url string The URL of the Plex server
--cache-read-retries int How many times to retry a read from a cache storage. (default 10) --cache-plex-username string The username of the Plex user
--cache-remote string Remote to cache. --cache-read-retries int How many times to retry a read from a cache storage. (default 10)
--cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1) --cache-remote string Remote to cache.
--cache-tmp-upload-path string Directory to keep temporary files until they are uploaded. --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1)
--cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s) --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded.
--cache-workers int How many workers should run in parallel to download chunks. (default 4) --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
--cache-writes Cache file data on writes through the FS --cache-workers int How many workers should run in parallel to download chunks. (default 4)
--checkers int Number of checkers to run in parallel. (default 8) --cache-writes Cache file data on writes through the FS
-c, --checksum Skip based on checksum & size, not mod-time & size --checkers int Number of checkers to run in parallel. (default 8)
--config string Config file. (default "/home/ncw/.rclone.conf") -c, --checksum Skip based on checksum (if available) & size, not mod-time & size
--contimeout duration Connect timeout (default 1m0s) --config string Config file. (default "/home/ncw/.rclone.conf")
-L, --copy-links Follow symlinks and copy the pointed to item. --contimeout duration Connect timeout (default 1m0s)
--cpuprofile string Write cpu profile to file -L, --copy-links Follow symlinks and copy the pointed to item.
--crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true) --cpuprofile string Write cpu profile to file
--crypt-filename-encryption string How to encrypt the filenames. (default "standard") --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--crypt-password string Password or pass phrase for encryption. --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--crypt-password2 string Password or pass phrase for salt. Optional but recommended. --crypt-password string Password or pass phrase for encryption.
--crypt-remote string Remote to encrypt/decrypt. --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
--crypt-show-mapping For all files listed show how the names encrypt. --crypt-remote string Remote to encrypt/decrypt.
--delete-after When synchronizing, delete files on destination after transferring (default) --crypt-show-mapping For all files listed show how the names encrypt.
--delete-before When synchronizing, delete files on destination before transferring --delete-after When synchronizing, delete files on destination after transferring (default)
--delete-during When synchronizing, delete files during transfer --delete-before When synchronizing, delete files on destination before transferring
--delete-excluded Delete files on dest excluded from sync --delete-during When synchronizing, delete files during transfer
--disable string Disable a comma separated list of features. Use help to see a list. --delete-excluded Delete files on dest excluded from sync
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded. --disable string Disable a comma separated list of features. Use help to see a list.
--drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time. --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--drive-alternate-export Use alternate export URLs for google documents export. --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
--drive-auth-owner-only Only consider files owned by the authenticated user. --drive-alternate-export Use alternate export URLs for google documents export.
--drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M) --drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-client-id string Google Application Client Id --drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-client-secret string Google Application Client Secret --drive-client-id string Google Application Client Id
--drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") --drive-client-secret string Google Application Client Secret
--drive-formats string Deprecated: see export_formats --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-impersonate string Impersonate this user when using a service account. --drive-formats string Deprecated: see export_formats
--drive-import-formats string Comma separated list of preferred formats for uploading Google docs. --drive-impersonate string Impersonate this user when using a service account.
--drive-keep-revision-forever Keep new head revision of each file forever. --drive-import-formats string Comma separated list of preferred formats for uploading Google docs.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000) --drive-keep-revision-forever Keep new head revision of each file forever.
--drive-root-folder-id string ID of the root folder --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-scope string Scope that rclone should use when requesting access from drive. --drive-pacer-burst int Number of API calls to allow without sleeping. (default 100)
--drive-service-account-credentials string Service Account Credentials JSON blob --drive-pacer-min-sleep Duration Minimum time to sleep between API calls. (default 100ms)
--drive-service-account-file string Service Account Credentials JSON file path --drive-root-folder-id string ID of the root folder
--drive-shared-with-me Only show files that are shared with me. --drive-scope string Scope that rclone should use when requesting access from drive.
--drive-skip-gdocs Skip google documents in all listings. --drive-service-account-credentials string Service Account Credentials JSON blob
--drive-team-drive string ID of the Team Drive --drive-service-account-file string Service Account Credentials JSON file path
--drive-trashed-only Only show files that are in the trash. --drive-shared-with-me Only show files that are shared with me.
--drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M) --drive-skip-gdocs Skip google documents in all listings.
--drive-use-created-date Use file created date instead of modified date. --drive-team-drive string ID of the Team Drive
--drive-use-trash Send files to the trash instead of deleting permanently. (default true) --drive-trashed-only Only show files that are in the trash.
--drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off) --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M) --drive-use-created-date Use file created date instead of modified date.
--dropbox-client-id string Dropbox App Client Id --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--dropbox-client-secret string Dropbox App Client Secret --drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off)
--dropbox-impersonate string Impersonate this user when using a business account. --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M)
-n, --dry-run Do a trial run with no permanent changes --dropbox-client-id string Dropbox App Client Id
--dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles --dropbox-client-secret string Dropbox App Client Secret
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info --dropbox-impersonate string Impersonate this user when using a business account.
--dump-headers Dump HTTP bodies - may contain sensitive info -n, --dry-run Do a trial run with no permanent changes
--exclude stringArray Exclude files matching pattern --dump DumpFlags List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--exclude-from stringArray Read exclude patterns from file --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--exclude-if-present string Exclude directories if filename is present --dump-headers Dump HTTP bodies - may contain sensitive info
--fast-list Use recursive list if available. Uses more memory but fewer transactions. --exclude stringArray Exclude files matching pattern
--files-from stringArray Read list of source-file names from file --exclude-from stringArray Read exclude patterns from file
-f, --filter stringArray Add a file-filtering rule --exclude-if-present string Exclude directories if filename is present
--filter-from stringArray Read filtering patterns from a file --fast-list Use recursive list if available. Uses more memory but fewer transactions.
--ftp-host string FTP host to connect to --files-from stringArray Read list of source-file names from file
--ftp-pass string FTP password -f, --filter stringArray Add a file-filtering rule
--ftp-port string FTP port, leave blank to use default (21) --filter-from stringArray Read filtering patterns from a file
--ftp-user string FTP username, leave blank for current username, $USER --ftp-host string FTP host to connect to
--gcs-bucket-acl string Access Control List for new buckets. --ftp-pass string FTP password
--gcs-client-id string Google Application Client Id --ftp-port string FTP port, leave blank to use default (21)
--gcs-client-secret string Google Application Client Secret --ftp-user string FTP username, leave blank for current username, $USER
--gcs-location string Location for the newly created buckets. --gcs-bucket-acl string Access Control List for new buckets.
--gcs-object-acl string Access Control List for new objects. --gcs-client-id string Google Application Client Id
--gcs-project-number string Project number. --gcs-client-secret string Google Application Client Secret
--gcs-service-account-file string Service Account Credentials JSON file path --gcs-location string Location for the newly created buckets.
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage. --gcs-object-acl string Access Control List for new objects.
--http-url string URL of http host to connect to --gcs-project-number string Project number.
--hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) --gcs-service-account-file string Service Account Credentials JSON file path
--hubic-client-id string Hubic Client Id --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--hubic-client-secret string Hubic Client Secret --http-url string URL of http host to connect to
--ignore-case Ignore case in filters (case insensitive) --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--ignore-checksum Skip post copy check of checksums. --hubic-client-id string Hubic Client Id
--ignore-errors delete even if there are I/O errors --hubic-client-secret string Hubic Client Secret
--ignore-existing Skip all files that exist on destination --hubic-no-chunk Don't chunk files during streaming upload.
--ignore-size Ignore size when skipping use mod-time or checksum. --ignore-case Ignore case in filters (case insensitive)
-I, --ignore-times Don't skip files that match size and time - transfer all files --ignore-checksum Skip post copy check of checksums.
--immutable Do not modify files. Fail if existing files have been modified. --ignore-errors delete even if there are I/O errors
--include stringArray Include files matching pattern --ignore-existing Skip all files that exist on destination
--include-from stringArray Read include patterns from file --ignore-size Ignore size when skipping use mod-time or checksum.
--jottacloud-hard-delete Delete files permanently rather than putting them into the trash. -I, --ignore-times Don't skip files that match size and time - transfer all files
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M) --immutable Do not modify files. Fail if existing files have been modified.
--jottacloud-mountpoint string The mountpoint to use. --include stringArray Include files matching pattern
--jottacloud-pass string Password. --include-from stringArray Read include patterns from file
--jottacloud-unlink Remove existing public link to file/folder with link command rather than creating. --jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
--jottacloud-user string User Name --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--local-no-check-updated Don't check to see if the files change during upload --jottacloud-mountpoint string The mountpoint to use.
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated) --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
--local-nounc string Disable UNC (long path names) conversion on Windows --jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fails. (default 10M)
--log-file string Log everything to this file --jottacloud-user string User Name
--log-format string Comma separated list of log format options (default "date,time") -l, --links Translate symlinks to/from regular files with a '.rclonelink' extension
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") --local-no-check-updated Don't check to see if the files change during upload
--low-level-retries int Number of low level retries to do. (default 10) --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) --local-nounc string Disable UNC (long path names) conversion on Windows
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000) --log-file string Log everything to this file
--max-delete int When synchronizing, limit the number of deletes (default -1) --log-format string Comma separated list of log format options (default "date,time")
--max-depth int If set limits the recursion depth to this. (default -1) --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) --low-level-retries int Number of low level retries to do. (default 10)
--max-transfer int Maximum size of data to transfer. (default off) --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--mega-debug Output more debug from Mega. --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--mega-hard-delete Delete files permanently rather than putting them into the trash. --max-delete int When synchronizing, limit the number of deletes (default -1)
--mega-pass string Password. --max-depth int If set limits the recursion depth to this. (default -1)
--mega-user string User name --max-size SizeSuffix Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--memprofile string Write memory profile to file --max-transfer SizeSuffix Maximum size of data to transfer. (default off)
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) --mega-debug Output more debug from Mega.
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) --mega-hard-delete Delete files permanently rather than putting them into the trash.
--modify-window duration Max time diff to be considered the same (default 1ns) --mega-pass string Password.
--no-check-certificate Do not verify the server SSL certificate. Insecure. --mega-user string User name
--no-gzip-encoding Don't set Accept-Encoding: gzip. --memprofile string Write memory profile to file
--no-traverse Obsolete - does nothing. --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--no-update-modtime Don't update destination mod-time if files identical. --min-size SizeSuffix Only transfer files bigger than this in k or suffix b|k|M|G (default off)
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only). --modify-window duration Max time diff to be considered the same (default 1ns)
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M) --no-check-certificate Do not verify the server SSL certificate. Insecure.
--onedrive-client-id string Microsoft App Client Id --no-gzip-encoding Don't set Accept-Encoding: gzip.
--onedrive-client-secret string Microsoft App Client Secret --no-traverse Don't traverse destination file system on copy.
--onedrive-drive-id string The ID of the drive to use --no-update-modtime Don't update destination mod-time if files identical.
--onedrive-drive-type string The type of the drive ( personal | business | documentLibrary ) -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-expose-onenote-files Set to make OneNote files show up in directory listings. --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--opendrive-password string Password. --onedrive-client-id string Microsoft App Client Id
--opendrive-username string Username --onedrive-client-secret string Microsoft App Client Secret
--pcloud-client-id string Pcloud App Client Id --onedrive-drive-id string The ID of the drive to use
--pcloud-client-secret string Pcloud App Client Secret --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
-P, --progress Show progress during transfer. --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
--qingstor-access-key-id string QingStor Access Key ID --opendrive-password string Password.
--qingstor-connection-retries int Number of connection retries. (default 3) --opendrive-username string Username
--qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API. --pcloud-client-id string Pcloud App Client Id
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank. --pcloud-client-secret string Pcloud App Client Secret
--qingstor-secret-access-key string QingStor Secret Access Key (password) -P, --progress Show progress during transfer.
--qingstor-zone string Zone to connect to. --qingstor-access-key-id string QingStor Access Key ID
-q, --quiet Print as little stuff as possible --qingstor-chunk-size SizeSuffix Chunk size to use for uploading. (default 4M)
--rc Enable the remote control server. --qingstor-connection-retries int Number of connection retries. (default 3)
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") --qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API.
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate) --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank.
--rc-client-ca string Client certificate authority to verify clients with --qingstor-secret-access-key string QingStor Secret Access Key (password)
--rc-files string Path to local files to serve on the HTTP server. --qingstor-upload-concurrency int Concurrency for multipart uploads. (default 1)
--rc-htpasswd string htpasswd file - if not provided no authentication is done --qingstor-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M)
--rc-key string SSL PEM Private key --qingstor-zone string Zone to connect to.
--rc-max-header-bytes int Maximum size of request header (default 4096) -q, --quiet Print as little stuff as possible
--rc-no-auth Don't require auth for certain methods. --rc Enable the remote control server.
--rc-pass string Password for authentication. --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
--rc-realm string realm for authentication (default "rclone") --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--rc-serve Enable the serving of remote objects. --rc-client-ca string Client certificate authority to verify clients with
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) --rc-files string Path to local files to serve on the HTTP server.
      --rc-htpasswd string  htpasswd file - if not provided no authentication is done
      --rc-key string  SSL PEM Private key
      --rc-max-header-bytes int  Maximum size of request header (default 4096)
      --rc-no-auth  Don't require auth for certain methods.
      --rc-pass string  Password for authentication.
      --rc-realm string  realm for authentication (default "rclone")
      --rc-serve  Enable the serving of remote objects.
      --rc-server-read-timeout duration  Timeout for server reading data (default 1h0m0s)
      --rc-server-write-timeout duration  Timeout for server writing data (default 1h0m0s)
      --rc-user string  User name for authentication.
      --retries int  Retry operations this many times if they fail (default 3)
      --retries-sleep duration  Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
      --s3-access-key-id string  AWS Access Key ID.
      --s3-acl string  Canned ACL used when creating buckets and storing or copying objects.
      --s3-bucket-acl string  Canned ACL used when creating buckets.
      --s3-chunk-size SizeSuffix  Chunk size to use for uploading. (default 5M)
      --s3-disable-checksum  Don't store MD5 checksum with object metadata
      --s3-endpoint string  Endpoint for S3 API.
      --s3-env-auth  Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
      --s3-force-path-style  If true use path style access if false use virtual hosted style. (default true)
      --s3-location-constraint string  Location constraint - must be set to match the Region.
      --s3-provider string  Choose your S3 provider.
      --s3-region string  Region to connect to.
      --s3-secret-access-key string  AWS Secret Access Key (password)
      --s3-server-side-encryption string  The server-side encryption algorithm used when storing this object in S3.
      --s3-session-token string  An AWS session token
      --s3-sse-kms-key-id string  If using KMS ID you must provide the ARN of Key.
      --s3-storage-class string  The storage class to use when storing new objects in S3.
      --s3-upload-concurrency int  Concurrency for multipart uploads. (default 4)
      --s3-upload-cutoff SizeSuffix  Cutoff for switching to chunked upload (default 200M)
      --s3-v2-auth  If true use v2 authentication.
      --sftp-ask-password  Allow asking for SFTP password when needed.
      --sftp-disable-hashcheck  Disable the execution of SSH commands to determine if remote file hashing is available.
      --sftp-host string  SSH host to connect to
      --sftp-key-file string  Path to PEM-encoded private key file, leave blank or set key-use-agent to use ssh-agent.
      --sftp-key-file-pass string  The passphrase to decrypt the PEM-encoded private key file.
      --sftp-key-use-agent  When set forces the usage of the ssh-agent.
      --sftp-pass string  SSH password, leave blank to use ssh-agent.
      --sftp-path-override string  Override path used by SSH connection.
      --sftp-port string  SSH port, leave blank to use default (22)
      --sftp-set-modtime  Set the modified time on the remote if set. (default true)
      --sftp-use-insecure-cipher  Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
      --sftp-user string  SSH username, leave blank for current username, ncw
      --size-only  Skip based on size only, not mod-time or checksum
      --skip-links  Don't warn about skipped symlinks.
      --stats duration  Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
      --stats-file-name-length int  Max file name length in stats. 0 for no limit (default 45)
      --stats-log-level string  Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
      --stats-one-line  Make the stats fit on one line.
      --stats-unit string  Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
      --streaming-upload-cutoff SizeSuffix  Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
      --suffix string  Suffix for use with --backup-dir.
      --swift-application-credential-id string  Application Credential ID (OS_APPLICATION_CREDENTIAL_ID)
      --swift-application-credential-name string  Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME)
      --swift-application-credential-secret string  Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET)
      --swift-auth string  Authentication URL for server (OS_AUTH_URL).
      --swift-auth-token string  Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
      --swift-auth-version int  AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
      --swift-chunk-size SizeSuffix  Above this size files will be chunked into a _segments container. (default 5G)
      --swift-domain string  User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
      --swift-endpoint-type string  Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
      --swift-env-auth  Get swift credentials from environment variables in standard OpenStack form.
      --swift-key string  API key or password (OS_PASSWORD).
      --swift-no-chunk  Don't chunk files during streaming upload.
      --swift-region string  Region name - optional (OS_REGION_NAME)
      --swift-storage-policy string  The storage policy to use when creating a new container
      --swift-storage-url string  Storage URL - optional (OS_STORAGE_URL)
      --swift-tenant string  Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
      --swift-tenant-domain string  Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
      --swift-tenant-id string  Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
      --swift-user string  User name to log in (OS_USERNAME).
      --swift-user-id string  User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
      --syslog  Use Syslog for logging
      --syslog-facility string  Facility for syslog, eg KERN,USER,... (default "DAEMON")
      --timeout duration  IO idle timeout (default 5m0s)
      --tpslimit float  Limit HTTP transactions per second to this.
      --tpslimit-burst int  Max burst of transactions for --tpslimit. (default 1)
      --track-renames  When synchronizing, track file renames and do a server side move if possible
      --transfers int  Number of file transfers to run in parallel. (default 4)
      --union-remotes string  List of space separated remotes.
  -u, --update  Skip files that are newer on the destination.
      --use-cookies  Enable session cookiejar.
      --use-mmap  Use mmap allocator (see docs).
      --use-server-modtime  Use server modified time instead of object metadata
      --user-agent string  Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46")
  -v, --verbose count  Print lots more stuff (repeat for more)
      --webdav-bearer-token string  Bearer token instead of user/pass (eg a Macaroon)
      --webdav-pass string  Password.
      --webdav-url string  URL of http host to connect to
      --webdav-user string  User name
      --webdav-vendor string  Name of the Webdav site/service/software you are using
      --yandex-client-id string  Yandex Client Id
      --yandex-client-secret string  Yandex Client Secret
      --yandex-unlink  Remove existing public link to file/folder with link command rather than creating.
```
### SEE ALSO
* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
###### Auto generated by spf13/cobra on 9-Feb-2019
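Each backend flag in the list above maps to a config-file key with the backend prefix dropped, and to an `RCLONE_<BACKEND>_<OPTION>` environment variable, as the backend docs note for each option. As an illustrative sketch only (remote name, host, and paths are hypothetical, not from this commit), an SFTP remote using the key options from the list above might look like this in the config file:

```ini
; Hypothetical rclone.conf entry - illustrative values only.
[myserver]
type = sftp
host = example.com
user = ncw
; --sftp-key-file -> key_file: PEM-encoded private key
key_file = /home/ncw/.ssh/id_rsa
; --sftp-key-file-pass -> key_file_pass: passphrase for an encrypted key file
; (rclone config normally stores this value obscured, not in plain text)
key_file_pass = XXXXXXXX
```

Normally `rclone config` writes (and obscures) these values; the hand-written fragment is shown only to connect the flag names to their config keys.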

View File

@@ -1,5 +1,5 @@
---
date: 2019-02-09T10:42:18Z
title: "rclone serve"
slug: rclone_serve
url: /commands/rclone_serve/
@@ -31,289 +31,308 @@ rclone serve <protocol> [opts] <remote> [flags]
### Options inherited from parent commands
```
      --acd-auth-url string  Auth server URL.
      --acd-client-id string  Amazon Application Client ID.
      --acd-client-secret string  Amazon Application Client Secret.
      --acd-templink-threshold SizeSuffix  Files >= this size will be downloaded via their tempLink. (default 9G)
      --acd-token-url string  Token server url.
      --acd-upload-wait-per-gb Duration  Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
      --alias-remote string  Remote or path to alias.
      --ask-password  Allow prompt for password for encrypted configuration. (default true)
      --auto-confirm  If enabled, do not request console confirmation.
      --azureblob-access-tier string  Access tier of blob: hot, cool or archive.
      --azureblob-account string  Storage Account Name (leave blank to use connection string or SAS URL)
      --azureblob-chunk-size SizeSuffix  Upload chunk size (<= 100MB). (default 4M)
      --azureblob-endpoint string  Endpoint for the service
      --azureblob-key string  Storage Account Key (leave blank to use connection string or SAS URL)
      --azureblob-list-chunk int  Size of blob list. (default 5000)
      --azureblob-sas-url string  SAS URL for container level access only
      --azureblob-upload-cutoff SizeSuffix  Cutoff for switching to chunked upload (<= 256MB). (default 256M)
      --b2-account string  Account ID or Application Key ID
      --b2-chunk-size SizeSuffix  Upload chunk size. Must fit in memory. (default 96M)
      --b2-disable-checksum  Disable checksums for large (> upload cutoff) files
      --b2-endpoint string  Endpoint for the service.
      --b2-hard-delete  Permanently delete files on remote removal, otherwise hide files.
      --b2-key string  Application Key
      --b2-test-mode string  A flag string for X-Bz-Test-Mode header for debugging.
      --b2-upload-cutoff SizeSuffix  Cutoff for switching to chunked upload. (default 200M)
      --b2-versions  Include old versions in directory listings.
      --backup-dir string  Make backups into hierarchy based in DIR.
      --bind string  Local address to bind to for outgoing connections, IPv4, IPv6 or name.
      --box-client-id string  Box App Client Id.
      --box-client-secret string  Box App Client Secret
      --box-commit-retries int  Max number of times to try committing a multipart file. (default 100)
      --box-upload-cutoff SizeSuffix  Cutoff for switching to multipart upload (>= 50MB). (default 50M)
      --buffer-size SizeSuffix  In memory buffer size when reading files for each --transfer. (default 16M)
      --bwlimit BwTimetable  Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
      --cache-chunk-clean-interval Duration  How often should the cache perform cleanups of the chunk storage. (default 1m0s)
      --cache-chunk-no-memory  Disable the in-memory cache for storing chunks during streaming.
      --cache-chunk-path string  Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend")
      --cache-chunk-size SizeSuffix  The size of a chunk (partial file data). (default 5M)
      --cache-chunk-total-size SizeSuffix  The total size that the chunks can take up on the local disk. (default 10G)
      --cache-db-path string  Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend")
      --cache-db-purge  Clear all the cached data for this remote on start.
      --cache-db-wait-time Duration  How long to wait for the DB to be available - 0 is unlimited (default 1s)
      --cache-dir string  Directory rclone will use for caching. (default "$HOME/.cache/rclone")
      --cache-info-age Duration  How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
      --cache-plex-insecure string  Skip all certificate verifications when connecting to the Plex server
      --cache-plex-password string  The password of the Plex user
      --cache-plex-url string  The URL of the Plex server
      --cache-plex-username string  The username of the Plex user
      --cache-read-retries int  How many times to retry a read from a cache storage. (default 10)
      --cache-remote string  Remote to cache.
      --cache-rps int  Limits the number of requests per second to the source FS (-1 to disable) (default -1)
      --cache-tmp-upload-path string  Directory to keep temporary files until they are uploaded.
      --cache-tmp-wait-time Duration  How long should files be stored in local cache before being uploaded (default 15s)
      --cache-workers int  How many workers should run in parallel to download chunks. (default 4)
      --cache-writes  Cache file data on writes through the FS
      --checkers int  Number of checkers to run in parallel. (default 8)
  -c, --checksum  Skip based on checksum (if available) & size, not mod-time & size
      --config string  Config file. (default "/home/ncw/.rclone.conf")
      --contimeout duration  Connect timeout (default 1m0s)
  -L, --copy-links  Follow symlinks and copy the pointed to item.
      --cpuprofile string  Write cpu profile to file
      --crypt-directory-name-encryption  Option to either encrypt directory names or leave them intact. (default true)
      --crypt-filename-encryption string  How to encrypt the filenames. (default "standard")
      --crypt-password string  Password or pass phrase for encryption.
      --crypt-password2 string  Password or pass phrase for salt. Optional but recommended.
      --crypt-remote string  Remote to encrypt/decrypt.
      --crypt-show-mapping  For all files listed show how the names encrypt.
      --delete-after  When synchronizing, delete files on destination after transferring (default)
      --delete-before  When synchronizing, delete files on destination before transferring
      --delete-during  When synchronizing, delete files during transfer
      --delete-excluded  Delete files on dest excluded from sync
      --disable string  Disable a comma separated list of features. Use help to see a list.
      --drive-acknowledge-abuse  Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
      --drive-allow-import-name-change  Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
      --drive-alternate-export  Use alternate export URLs for google documents export.,
      --drive-auth-owner-only  Only consider files owned by the authenticated user.
      --drive-chunk-size SizeSuffix  Upload chunk size. Must a power of 2 >= 256k. (default 8M)
      --drive-client-id string  Google Application Client Id
      --drive-client-secret string  Google Application Client Secret
      --drive-export-formats string  Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
      --drive-formats string  Deprecated: see export_formats
      --drive-impersonate string  Impersonate this user when using a service account.
      --drive-import-formats string  Comma separated list of preferred formats for uploading Google docs.
      --drive-keep-revision-forever  Keep new head revision of each file forever.
      --drive-list-chunk int  Size of listing chunk 100-1000. 0 to disable. (default 1000)
      --drive-pacer-burst int  Number of API calls to allow without sleeping. (default 100)
      --drive-pacer-min-sleep Duration  Minimum time to sleep between API calls. (default 100ms)
      --drive-root-folder-id string  ID of the root folder
      --drive-scope string  Scope that rclone should use when requesting access from drive.
      --drive-service-account-credentials string  Service Account Credentials JSON blob
      --drive-service-account-file string  Service Account Credentials JSON file path
      --drive-shared-with-me  Only show files that are shared with me.
      --drive-skip-gdocs  Skip google documents in all listings.
      --drive-team-drive string  ID of the Team Drive
      --drive-trashed-only  Only show files that are in the trash.
      --drive-upload-cutoff SizeSuffix  Cutoff for switching to chunked upload (default 8M)
      --drive-use-created-date  Use file created date instead of modified date.,
      --drive-use-trash  Send files to the trash instead of deleting permanently. (default true)
      --drive-v2-download-min-size SizeSuffix  If Object's are greater, use drive v2 API to download. (default off)
      --dropbox-chunk-size SizeSuffix  Upload chunk size. (< 150M). (default 48M)
      --dropbox-client-id string  Dropbox App Client Id
      --dropbox-client-secret string  Dropbox App Client Secret
      --dropbox-impersonate string  Impersonate this user when using a business account.
  -n, --dry-run  Do a trial run with no permanent changes
      --dump DumpFlags  List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
      --dump-bodies  Dump HTTP headers and bodies - may contain sensitive info
      --dump-headers  Dump HTTP bodies - may contain sensitive info
      --exclude stringArray  Exclude files matching pattern
      --exclude-from stringArray  Read exclude patterns from file
      --exclude-if-present string  Exclude directories if filename is present
      --fast-list  Use recursive list if available. Uses more memory but fewer transactions.
      --files-from stringArray  Read list of source-file names from file
  -f, --filter stringArray  Add a file-filtering rule
      --filter-from stringArray  Read filtering patterns from a file
      --ftp-host string  FTP host to connect to
      --ftp-pass string  FTP password
      --ftp-port string  FTP port, leave blank to use default (21)
      --ftp-user string  FTP username, leave blank for current username, $USER
      --gcs-bucket-acl string  Access Control List for new buckets.
      --gcs-client-id string  Google Application Client Id
      --gcs-client-secret string  Google Application Client Secret
      --gcs-location string  Location for the newly created buckets.
      --gcs-object-acl string  Access Control List for new objects.
      --gcs-project-number string  Project number.
      --gcs-service-account-file string  Service Account Credentials JSON file path
      --gcs-storage-class string  The storage class to use when storing objects in Google Cloud Storage.
      --http-url string  URL of http host to connect to
      --hubic-chunk-size SizeSuffix  Above this size files will be chunked into a _segments container. (default 5G)
      --hubic-client-id string  Hubic Client Id
      --hubic-client-secret string  Hubic Client Secret
      --hubic-no-chunk  Don't chunk files during streaming upload.
      --ignore-case  Ignore case in filters (case insensitive)
      --ignore-checksum  Skip post copy check of checksums.
      --ignore-errors  delete even if there are I/O errors
      --ignore-existing  Skip all files that exist on destination
      --ignore-size  Ignore size when skipping use mod-time or checksum.
  -I, --ignore-times  Don't skip files that match size and time - transfer all files
      --immutable  Do not modify files. Fail if existing files have been modified.
      --include stringArray  Include files matching pattern
      --include-from stringArray  Read include patterns from file
      --jottacloud-hard-delete  Delete files permanently rather than putting them into the trash.
      --jottacloud-md5-memory-limit SizeSuffix  Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
      --jottacloud-mountpoint string  The mountpoint to use.
      --jottacloud-unlink  Remove existing public link to file/folder with link command rather than creating.
      --jottacloud-upload-resume-limit SizeSuffix  Files bigger than this can be resumed if the upload fail's. (default 10M)
      --jottacloud-user string  User Name:
  -l, --links  Translate symlinks to/from regular files with a '.rclonelink' extension
      --local-no-check-updated  Don't check to see if the files change during upload
      --local-no-unicode-normalization  Don't apply unicode normalization to paths and filenames (Deprecated)
      --local-nounc string  Disable UNC (long path names) conversion on Windows
      --log-file string  Log everything to this file
      --log-format string  Comma separated list of log format options (default "date,time")
--max-depth int If set limits the recursion depth to this. (default -1) --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) --low-level-retries int Number of low level retries to do. (default 10)
--max-transfer int Maximum size of data to transfer. (default off) --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--mega-debug Output more debug from Mega. --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--mega-hard-delete Delete files permanently rather than putting them into the trash. --max-delete int When synchronizing, limit the number of deletes (default -1)
--mega-pass string Password. --max-depth int If set limits the recursion depth to this. (default -1)
--mega-user string User name --max-size SizeSuffix Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--memprofile string Write memory profile to file --max-transfer SizeSuffix Maximum size of data to transfer. (default off)
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) --mega-debug Output more debug from Mega.
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) --mega-hard-delete Delete files permanently rather than putting them into the trash.
--modify-window duration Max time diff to be considered the same (default 1ns) --mega-pass string Password.
--no-check-certificate Do not verify the server SSL certificate. Insecure. --mega-user string User name
--no-gzip-encoding Don't set Accept-Encoding: gzip. --memprofile string Write memory profile to file
--no-traverse Obsolete - does nothing. --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--no-update-modtime Don't update destination mod-time if files identical. --min-size SizeSuffix Only transfer files bigger than this in k or suffix b|k|M|G (default off)
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only). --modify-window duration Max time diff to be considered the same (default 1ns)
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M) --no-check-certificate Do not verify the server SSL certificate. Insecure.
--onedrive-client-id string Microsoft App Client Id --no-gzip-encoding Don't set Accept-Encoding: gzip.
--onedrive-client-secret string Microsoft App Client Secret --no-traverse Don't traverse destination file system on copy.
--onedrive-drive-id string The ID of the drive to use --no-update-modtime Don't update destination mod-time if files identical.
--onedrive-drive-type string The type of the drive ( personal | business | documentLibrary ) -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-expose-onenote-files Set to make OneNote files show up in directory listings. --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--opendrive-password string Password. --onedrive-client-id string Microsoft App Client Id
--opendrive-username string Username --onedrive-client-secret string Microsoft App Client Secret
--pcloud-client-id string Pcloud App Client Id --onedrive-drive-id string The ID of the drive to use
--pcloud-client-secret string Pcloud App Client Secret --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
-P, --progress Show progress during transfer. --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
--qingstor-access-key-id string QingStor Access Key ID --opendrive-password string Password.
--qingstor-connection-retries int Number of connection retries. (default 3) --opendrive-username string Username
--qingstor-endpoint string Enter a endpoint URL to connection QingStor API. --pcloud-client-id string Pcloud App Client Id
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank. --pcloud-client-secret string Pcloud App Client Secret
--qingstor-secret-access-key string QingStor Secret Access Key (password) -P, --progress Show progress during transfer.
--qingstor-zone string Zone to connect to. --qingstor-access-key-id string QingStor Access Key ID
-q, --quiet Print as little stuff as possible --qingstor-chunk-size SizeSuffix Chunk size to use for uploading. (default 4M)
--rc Enable the remote control server. --qingstor-connection-retries int Number of connection retries. (default 3)
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") --qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate) --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
--rc-client-ca string Client certificate authority to verify clients with --qingstor-secret-access-key string QingStor Secret Access Key (password)
--rc-files string Path to local files to serve on the HTTP server. --qingstor-upload-concurrency int Concurrency for multipart uploads. (default 1)
--rc-htpasswd string htpasswd file - if not provided no authentication is done --qingstor-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M)
--rc-key string SSL PEM Private key --qingstor-zone string Zone to connect to.
--rc-max-header-bytes int Maximum size of request header (default 4096) -q, --quiet Print as little stuff as possible
--rc-no-auth Don't require auth for certain methods. --rc Enable the remote control server.
--rc-pass string Password for authentication. --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
--rc-realm string realm for authentication (default "rclone") --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--rc-serve Enable the serving of remote objects. --rc-client-ca string Client certificate authority to verify clients with
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) --rc-files string Path to local files to serve on the HTTP server.
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) --rc-htpasswd string htpasswd file - if not provided no authentication is done
--rc-user string User name for authentication. --rc-key string SSL PEM Private key
--retries int Retry operations this many times if they fail (default 3) --rc-max-header-bytes int Maximum size of request header (default 4096)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) --rc-no-auth Don't require auth for certain methods.
--s3-access-key-id string AWS Access Key ID. --rc-pass string Password for authentication.
--s3-acl string Canned ACL used when creating buckets and storing or copying objects. --rc-realm string realm for authentication (default "rclone")
--s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M) --rc-serve Enable the serving of remote objects.
--s3-disable-checksum Don't store MD5 checksum with object metadata --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--s3-endpoint string Endpoint for S3 API. --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). --rc-user string User name for authentication.
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true) --retries int Retry operations this many times if they fail (default 3)
--s3-location-constraint string Location constraint - must be set to match the Region. --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--s3-provider string Choose your S3 provider. --s3-access-key-id string AWS Access Key ID.
--s3-region string Region to connect to. --s3-acl string Canned ACL used when creating buckets and storing or copying objects.
--s3-secret-access-key string AWS Secret Access Key (password) --s3-bucket-acl string Canned ACL used when creating buckets.
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
--s3-session-token string An AWS session token --s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. --s3-endpoint string Endpoint for S3 API.
--s3-storage-class string The storage class to use when storing new objects in S3. --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2) --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--s3-v2-auth If true use v2 authentication. --s3-location-constraint string Location constraint - must be set to match the Region.
--sftp-ask-password Allow asking for SFTP password when needed. --s3-provider string Choose your S3 provider.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. --s3-region string Region to connect to.
--sftp-host string SSH host to connect to --s3-secret-access-key string AWS Secret Access Key (password)
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent. --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--sftp-pass string SSH password, leave blank to use ssh-agent. --s3-session-token string An AWS session token
--sftp-path-override string Override path used by SSH connection. --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--sftp-port string SSH port, leave blank to use default (22) --s3-storage-class string The storage class to use when storing new objects in S3.
--sftp-set-modtime Set the modified time on the remote if set. (default true) --s3-upload-concurrency int Concurrency for multipart uploads. (default 4)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker. --s3-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M)
--sftp-user string SSH username, leave blank for current username, ncw --s3-v2-auth If true use v2 authentication.
--size-only Skip based on size only, not mod-time or checksum --sftp-ask-password Allow asking for SFTP password when needed.
--skip-links Don't warn about skipped symlinks. --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) --sftp-host string SSH host to connect to
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40) --sftp-key-file string Path to PEM-encoded private key file, leave blank or set key-use-agent to use ssh-agent.
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") --sftp-key-file-pass string The passphrase to decrypt the PEM-encoded private key file.
--stats-one-line Make the stats fit on one line. --sftp-key-use-agent When set forces the usage of the ssh-agent.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") --sftp-pass string SSH password, leave blank to use ssh-agent.
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) --sftp-path-override string Override path used by SSH connection.
--suffix string Suffix for use with --backup-dir. --sftp-port string SSH port, leave blank to use default (22)
--swift-auth string Authentication URL for server (OS_AUTH_URL). --sftp-set-modtime Set the modified time on the remote if set. (default true)
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) --sftp-user string SSH username, leave blank for current username, ncw
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) --size-only Skip based on size only, not mod-time or checksum
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) --skip-links Don't warn about skipped symlinks.
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form. --stats-file-name-length int Max file name length in stats. 0 for no limit (default 45)
--swift-key string API key or password (OS_PASSWORD). --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--swift-region string Region name - optional (OS_REGION_NAME) --stats-one-line Make the stats fit on one line.
--swift-storage-policy string The storage policy to use when creating a new container --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL) --streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) --suffix string Suffix for use with --backup-dir.
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) --swift-application-credential-id string Application Credential ID (OS_APPLICATION_CREDENTIAL_ID)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) --swift-application-credential-name string Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME)
--swift-user string User name to log in (OS_USERNAME). --swift-application-credential-secret string Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET)
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). --swift-auth string Authentication URL for server (OS_AUTH_URL).
--syslog Use Syslog for logging --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--timeout duration IO idle timeout (default 5m0s) --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--tpslimit float Limit HTTP transactions per second to this. --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--track-renames When synchronizing, track file renames and do a server side move if possible --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--transfers int Number of file transfers to run in parallel. (default 4) --swift-key string API key or password (OS_PASSWORD).
--union-remotes string List of space separated remotes. --swift-no-chunk Don't chunk files during streaming upload.
-u, --update Skip files that are newer on the destination. --swift-region string Region name - optional (OS_REGION_NAME)
--use-server-modtime Use server modified time instead of object metadata --swift-storage-policy string The storage policy to use when creating a new container
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.45") --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
-v, --verbose count Print lots more stuff (repeat for more) --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon) --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--webdav-pass string Password. --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--webdav-url string URL of http host to connect to --swift-user string User name to log in (OS_USERNAME).
--webdav-user string User name --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--webdav-vendor string Name of the Webdav site/service/software you are using --syslog Use Syslog for logging
--yandex-client-id string Yandex Client Id --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--yandex-client-secret string Yandex Client Secret --timeout duration IO idle timeout (default 5m0s)
--yandex-unlink Remove existing public link to file/folder with link command rather than creating. --tpslimit float Limit HTTP transactions per second to this.
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
--union-remotes string List of space separated remotes.
-u, --update Skip files that are newer on the destination.
--use-cookies Enable session cookiejar.
--use-mmap Use mmap allocator (see docs).
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
--yandex-unlink Remove existing public link to file/folder with link command rather than creating.
``` ```
### SEE ALSO ### SEE ALSO
* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends. * [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
* [rclone serve dlna](/commands/rclone_serve_dlna/) - Serve remote:path over DLNA
* [rclone serve ftp](/commands/rclone_serve_ftp/) - Serve remote:path over FTP. * [rclone serve ftp](/commands/rclone_serve_ftp/) - Serve remote:path over FTP.
* [rclone serve http](/commands/rclone_serve_http/) - Serve the remote over HTTP. * [rclone serve http](/commands/rclone_serve_http/) - Serve the remote over HTTP.
* [rclone serve restic](/commands/rclone_serve_restic/) - Serve the remote for restic's REST API. * [rclone serve restic](/commands/rclone_serve_restic/) - Serve the remote for restic's REST API.
* [rclone serve webdav](/commands/rclone_serve_webdav/) - Serve remote:path over webdav. * [rclone serve webdav](/commands/rclone_serve_webdav/) - Serve remote:path over webdav.
###### Auto generated by spf13/cobra on 24-Nov-2018 ###### Auto generated by spf13/cobra on 9-Feb-2019
View File
@ -0,0 +1,495 @@
---
date: 2019-02-09T10:42:18Z
title: "rclone serve dlna"
slug: rclone_serve_dlna
url: /commands/rclone_serve_dlna/
---
## rclone serve dlna
Serve remote:path over DLNA
### Synopsis
rclone serve dlna is a DLNA media server for media stored in a rclone remote. Many
devices, such as the Xbox and PlayStation, can automatically discover this server in the LAN
and play audio/video from it. VLC is also supported. Service discovery uses UDP multicast
packets (SSDP) and will thus only work on LANs.
Rclone will list all files present in the remote, without filtering based on media formats or
file extensions. Additionally, there is no media transcoding support. This means that some
players might show files that they are not able to play back correctly.
### Server options
Use --addr to specify which IP address and port the server should
listen on, eg --addr 1.2.3.4:8000 or --addr :8080 to listen to all
IPs.
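For example, to serve a remote over DLNA on a non-default port (the remote name `media:` is a placeholder; substitute any configured remote or remote:path):

```shell
# Sketch: serve a configured remote named "media:" over DLNA,
# listening on all IPs on port 8000 instead of the default :7879.
rclone serve dlna media: --addr :8000
```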
### Directory Cache
Using the `--dir-cache-time` flag, you can set how long a
directory should be considered up to date and not refreshed from the
backend. Changes made locally in the mount may appear immediately or
invalidate the cache. However, changes done on the remote will only
be picked up once the cache expires.
Alternatively, you can send a `SIGHUP` signal to rclone for
it to flush all directory caches, regardless of how old they are.
Assuming only one rclone instance is running, you can reset the cache
like this:
kill -SIGHUP $(pidof rclone)
If you configure rclone with a [remote control](/rc) then you can use
rclone rc to flush the whole directory cache:
rclone rc vfs/forget
Or individual files or directories:
rclone rc vfs/forget file=path/to/file dir=path/to/dir
### File Buffering
The `--buffer-size` flag determines the amount of memory
that will be used to buffer data in advance.
Each open file descriptor will try to keep the specified amount of
data in memory at all times. The buffered data is bound to one file
descriptor and won't be shared between multiple open file descriptors
of the same file.
This flag is an upper limit on the memory used per file descriptor.
The buffer will only use memory for data that is downloaded but
not yet read. If the buffer is empty, only a small amount of memory
will be used.
The maximum memory used by rclone for buffering can be up to
`--buffer-size * open files`.
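That ceiling is plain multiplication, which a bit of shell arithmetic makes concrete (16M is the default `--buffer-size`; 10 open files is an arbitrary example, not a limit):

```shell
# Upper bound on read-ahead memory: --buffer-size * open files.
# 16M is the default --buffer-size; 10 open files is illustrative.
buffer_size=$((16 * 1024 * 1024))
open_files=10
echo $((buffer_size * open_files))   # 167772160 bytes, i.e. 160 MiB
```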
### File Caching
These flags control the VFS file caching options. The VFS layer is
used by rclone mount to make a cloud storage system work more like a
normal file system.
You'll need to enable VFS caching if you want, for example, to read
and write simultaneously to a file. See below for more details.
Note that the VFS cache works in addition to the cache backend and you
may find that you need one or the other or both.
--cache-dir string Directory rclone will use for caching.
--vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s)
--vfs-cache-mode string Cache mode off|minimal|writes|full (default "off")
--vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s)
--vfs-cache-max-size int Max total size of objects in the cache. (default off)
If run with `-vv` rclone will print the location of the file cache. The
files are stored in the user cache file area which is OS dependent but
can be controlled with `--cache-dir` or setting the appropriate
environment variable.
The cache has 4 different modes selected by `--vfs-cache-mode`.
The higher the cache mode the more compatible rclone becomes at the
cost of using disk space.
Note that files are written back to the remote only when they are
closed so if rclone is quit or dies with open files then these won't
get written back to the remote. However they will still be in the on
disk cache.
If using --vfs-cache-max-size note that the cache may exceed this size
for two reasons. Firstly because it is only checked every
--vfs-cache-poll-interval. Secondly because open files cannot be
evicted from the cache.
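As a sketch, the cache flags combine like this (the remote name and the limits are illustrative choices, not recommendations):

```shell
# Illustrative: buffer writes through an on-disk cache, cap the
# cache at 1G, and re-check the cap every 30s. The cap can still
# be exceeded between polls or while files are held open.
rclone serve dlna media: \
    --vfs-cache-mode writes \
    --vfs-cache-max-size 1G \
    --vfs-cache-poll-interval 30s
```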
#### --vfs-cache-mode off
In this mode the cache will read directly from the remote and write
directly to the remote without caching anything on disk.
This will mean some operations are not possible
* Files can't be opened for both read AND write
* Files opened for write can't be seeked
* Existing files opened for write must have O_TRUNC set
* Files open for read with O_TRUNC will be opened write only
* Files open for write only will behave as if O_TRUNC was supplied
* Open modes O_APPEND, O_TRUNC are ignored
* If an upload fails it can't be retried
#### --vfs-cache-mode minimal
This is very similar to "off" except that files opened for read AND
write will be buffered to disk. This means that files opened for
write will be a lot more compatible, but uses minimal disk space.
These operations are not possible
* Files opened for write only can't be seeked
* Existing files opened for write must have O_TRUNC set
* Files opened for write only will ignore O_APPEND, O_TRUNC
* If an upload fails it can't be retried
#### --vfs-cache-mode writes
In this mode files opened for read only are still read directly from
the remote, write only and read/write files are buffered to disk
first.
This mode should support all normal file system operations.
If an upload fails it will be retried up to --low-level-retries times.
#### --vfs-cache-mode full
In this mode all reads and writes are buffered to and from disk. When
a file is opened for read it will be downloaded in its entirety first.
This may be appropriate for your needs, or you may prefer to look at
the cache backend which does a much more sophisticated job of caching,
including caching directory hierarchies and chunks of files.
In this mode, unlike the others, when a file is written to the disk,
it will be kept on the disk after it is written to the remote. It
will be purged on a schedule according to `--vfs-cache-max-age`.
This mode should support all normal file system operations.
If an upload or download fails it will be retried up to
--low-level-retries times.
```
rclone serve dlna remote:path [flags]
```
### Options
```
--addr string ip:port or :port to bind the DLNA http server to. (default ":7879")
--dir-cache-time duration Time to cache directory entries for. (default 5m0s)
--dir-perms FileMode Directory permissions (default 0777)
--file-perms FileMode File permissions (default 0666)
--gid uint32 Override the gid field set by the filesystem. (default 502)
-h, --help help for dlna
--no-checksum Don't compare checksums on up/download.
--no-modtime Don't read/write the modification time (can speed things up).
--no-seek Don't allow seeking in files.
--poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable. (default 1m0s)
--read-only Mount read-only.
--uid uint32 Override the uid field set by the filesystem. (default 502)
--umask int Override the permission bits set by the filesystem. (default 2)
--vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s)
--vfs-cache-max-size SizeSuffix Max total size of objects in the cache. (default off)
--vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
--vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s)
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks. (default 128M)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached. 'off' is unlimited. (default off)
```
### Options inherited from parent commands
```
--acd-auth-url string Auth server URL.
--acd-client-id string Amazon Application Client ID.
--acd-client-secret string Amazon Application Client Secret.
--acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-token-url string Token server url.
--acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--alias-remote string Remote or path to alias.
--ask-password Allow prompt for password for encrypted configuration. (default true)
--auto-confirm If enabled, do not request console confirmation.
--azureblob-access-tier string Access tier of blob: hot, cool or archive.
--azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
--azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M)
--azureblob-endpoint string Endpoint for the service
--azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
--azureblob-list-chunk int Size of blob list. (default 5000)
--azureblob-sas-url string SAS URL for container level access only
--azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M)
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--b2-disable-checksum Disable checksums for large (> upload cutoff) files
--b2-endpoint string Endpoint for the service.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-key string Application Key
--b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
--b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M)
--b2-versions Include old versions in directory listings.
--backup-dir string Make backups into hierarchy based in DIR.
--bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
--box-client-id string Box App Client Id.
--box-client-secret string Box App Client Secret
--box-commit-retries int Max number of times to try committing a multipart file. (default 100)
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
--buffer-size SizeSuffix In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
--cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend")
--cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M)
--cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G)
--cache-db-path string Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend")
--cache-db-purge Clear all the cached data for this remote on start.
--cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string Directory rclone will use for caching. (default "$HOME/.cache/rclone")
--cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
--cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server
--cache-plex-password string The password of the Plex user
--cache-plex-url string The URL of the Plex server
--cache-plex-username string The username of the Plex user
--cache-read-retries int How many times to retry a read from a cache storage. (default 10)
--cache-remote string Remote to cache.
--cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1)
--cache-tmp-upload-path string Directory to keep temporary files until they are uploaded.
--cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
--cache-workers int How many workers should run in parallel to download chunks. (default 4)
--cache-writes Cache file data on writes through the FS
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum (if available) & size, not mod-time & size
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
-L, --copy-links Follow symlinks and copy the pointed to item.
--cpuprofile string Write cpu profile to file
--crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--crypt-password string Password or pass phrase for encryption.
--crypt-password2 string Password or pass phrase for salt. Optional but recommended.
--crypt-remote string Remote to encrypt/decrypt.
--crypt-show-mapping For all files listed show how the names encrypt.
--delete-after When synchronizing, delete files on destination after transferring (default)
--delete-before When synchronizing, delete files on destination before transferring
--delete-during When synchronizing, delete files during transfer
--delete-excluded Delete files on dest excluded from sync
--disable string Disable a comma separated list of features. Use help to see a list.
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
    --drive-alternate-export                  Use alternate export URLs for google documents export.
--drive-auth-owner-only Only consider files owned by the authenticated user.
    --drive-chunk-size SizeSuffix             Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-client-id string Google Application Client Id
--drive-client-secret string Google Application Client Secret
--drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-formats string Deprecated: see export_formats
--drive-impersonate string Impersonate this user when using a service account.
--drive-import-formats string Comma separated list of preferred formats for uploading Google docs.
--drive-keep-revision-forever Keep new head revision of each file forever.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-pacer-burst int Number of API calls to allow without sleeping. (default 100)
--drive-pacer-min-sleep Duration Minimum time to sleep between API calls. (default 100ms)
--drive-root-folder-id string ID of the root folder
--drive-scope string Scope that rclone should use when requesting access from drive.
--drive-service-account-credentials string Service Account Credentials JSON blob
--drive-service-account-file string Service Account Credentials JSON file path
--drive-shared-with-me Only show files that are shared with me.
--drive-skip-gdocs Skip google documents in all listings.
--drive-team-drive string ID of the Team Drive
--drive-trashed-only Only show files that are in the trash.
--drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
    --drive-use-created-date                  Use file created date instead of modified date.
--drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off)
--dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M)
--dropbox-client-id string Dropbox App Client Id
--dropbox-client-secret string Dropbox App Client Secret
--dropbox-impersonate string Impersonate this user when using a business account.
-n, --dry-run Do a trial run with no permanent changes
--dump DumpFlags List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
    --dump-headers                            Dump HTTP headers - may contain sensitive info
--exclude stringArray Exclude files matching pattern
--exclude-from stringArray Read exclude patterns from file
--exclude-if-present string Exclude directories if filename is present
--fast-list Use recursive list if available. Uses more memory but fewer transactions.
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
--ftp-host string FTP host to connect to
--ftp-pass string FTP password
--ftp-port string FTP port, leave blank to use default (21)
--ftp-user string FTP username, leave blank for current username, $USER
--gcs-bucket-acl string Access Control List for new buckets.
--gcs-client-id string Google Application Client Id
--gcs-client-secret string Google Application Client Secret
--gcs-location string Location for the newly created buckets.
--gcs-object-acl string Access Control List for new objects.
--gcs-project-number string Project number.
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-url string URL of http host to connect to
--hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--hubic-client-id string Hubic Client Id
--hubic-client-secret string Hubic Client Secret
--hubic-no-chunk Don't chunk files during streaming upload.
--ignore-case Ignore case in filters (case insensitive)
--ignore-checksum Skip post copy check of checksums.
    --ignore-errors                           Delete even if there are I/O errors
--ignore-existing Skip all files that exist on destination
    --ignore-size                             Ignore size when skipping; use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
--immutable Do not modify files. Fail if existing files have been modified.
--include stringArray Include files matching pattern
--include-from stringArray Read include patterns from file
--jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--jottacloud-mountpoint string The mountpoint to use.
--jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
    --jottacloud-upload-resume-limit SizeSuffix   Files bigger than this can be resumed if the upload fails. (default 10M)
--jottacloud-user string User Name:
-l, --links Translate symlinks to/from regular files with a '.rclonelink' extension
--local-no-check-updated Don't check to see if the files change during upload
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
--local-nounc string Disable UNC (long path names) conversion on Windows
--log-file string Log everything to this file
--log-format string Comma separated list of log format options (default "date,time")
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
--max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--max-delete int When synchronizing, limit the number of deletes (default -1)
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size SizeSuffix Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--max-transfer SizeSuffix Maximum size of data to transfer. (default off)
--mega-debug Output more debug from Mega.
--mega-hard-delete Delete files permanently rather than putting them into the trash.
--mega-pass string Password.
--mega-user string User name
--memprofile string Write memory profile to file
--min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size SizeSuffix Only transfer files bigger than this in k or suffix b|k|M|G (default off)
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Don't traverse destination file system on copy.
--no-update-modtime Don't update destination mod-time if files identical.
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--onedrive-client-id string Microsoft App Client Id
--onedrive-client-secret string Microsoft App Client Secret
--onedrive-drive-id string The ID of the drive to use
--onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
--onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
--opendrive-password string Password.
--opendrive-username string Username
--pcloud-client-id string Pcloud App Client Id
--pcloud-client-secret string Pcloud App Client Secret
-P, --progress Show progress during transfer.
--qingstor-access-key-id string QingStor Access Key ID
--qingstor-chunk-size SizeSuffix Chunk size to use for uploading. (default 4M)
--qingstor-connection-retries int Number of connection retries. (default 3)
    --qingstor-endpoint string                Enter an endpoint URL to connect to the QingStor API.
    --qingstor-env-auth                       Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank.
--qingstor-secret-access-key string QingStor Secret Access Key (password)
--qingstor-upload-concurrency int Concurrency for multipart uploads. (default 1)
--qingstor-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M)
--qingstor-zone string Zone to connect to.
-q, --quiet Print as little stuff as possible
--rc Enable the remote control server.
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--rc-client-ca string Client certificate authority to verify clients with
--rc-files string Path to local files to serve on the HTTP server.
--rc-htpasswd string htpasswd file - if not provided no authentication is done
--rc-key string SSL PEM Private key
--rc-max-header-bytes int Maximum size of request header (default 4096)
--rc-no-auth Don't require auth for certain methods.
--rc-pass string Password for authentication.
--rc-realm string realm for authentication (default "rclone")
--rc-serve Enable the serving of remote objects.
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
    --retries-sleep duration                  Interval between retrying operations if they fail, e.g. 500ms, 60s, 5m. (0 to disable)
--s3-access-key-id string AWS Access Key ID.
--s3-acl string Canned ACL used when creating buckets and storing or copying objects.
--s3-bucket-acl string Canned ACL used when creating buckets.
--s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
--s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-endpoint string Endpoint for S3 API.
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--s3-location-constraint string Location constraint - must be set to match the Region.
--s3-provider string Choose your S3 provider.
--s3-region string Region to connect to.
--s3-secret-access-key string AWS Secret Access Key (password)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--s3-session-token string An AWS session token
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--s3-storage-class string The storage class to use when storing new objects in S3.
--s3-upload-concurrency int Concurrency for multipart uploads. (default 4)
--s3-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M)
--s3-v2-auth If true use v2 authentication.
--sftp-ask-password Allow asking for SFTP password when needed.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--sftp-host string SSH host to connect to
--sftp-key-file string Path to PEM-encoded private key file, leave blank or set key-use-agent to use ssh-agent.
--sftp-key-file-pass string The passphrase to decrypt the PEM-encoded private key file.
--sftp-key-use-agent When set forces the usage of the ssh-agent.
--sftp-pass string SSH password, leave blank to use ssh-agent.
--sftp-path-override string Override path used by SSH connection.
--sftp-port string SSH port, leave blank to use default (22)
--sftp-set-modtime Set the modified time on the remote if set. (default true)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--sftp-user string SSH username, leave blank for current username, ncw
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
    --stats duration                          Interval between printing stats, e.g. 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 45)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-one-line Make the stats fit on one line.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string Suffix for use with --backup-dir.
--swift-application-credential-id string Application Credential ID (OS_APPLICATION_CREDENTIAL_ID)
--swift-application-credential-name string Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME)
--swift-application-credential-secret string Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET)
--swift-auth string Authentication URL for server (OS_AUTH_URL).
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--swift-key string API key or password (OS_PASSWORD).
--swift-no-chunk Don't chunk files during streaming upload.
--swift-region string Region name - optional (OS_REGION_NAME)
--swift-storage-policy string The storage policy to use when creating a new container
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--swift-user string User name to log in (OS_USERNAME).
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
--tpslimit float Limit HTTP transactions per second to this.
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
--union-remotes string List of space separated remotes.
-u, --update Skip files that are newer on the destination.
--use-cookies Enable session cookiejar.
--use-mmap Use mmap allocator (see docs).
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
--yandex-unlink Remove existing public link to file/folder with link command rather than creating.
```
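Every option in the list above can also be supplied through an environment variable. Rclone's documented convention is `RCLONE_` plus the flag name with hyphens replaced by underscores, uppercased - the b2 section earlier in this manual shows the pairing explicitly (`--b2-chunk-size` / `RCLONE_B2_CHUNK_SIZE`). A minimal POSIX shell sketch of that mapping; the `flag_to_env` helper is illustrative, not part of rclone:

```shell
# Map an rclone command-line flag to its environment-variable form,
# per rclone's documented convention: strip the leading "--",
# uppercase, replace "-" with "_", and prefix with "RCLONE_".
flag_to_env() {
    flag="${1#--}"   # strip leading "--"
    printf 'RCLONE_%s\n' "$(printf '%s' "$flag" | tr 'a-z-' 'A-Z_')"
}

flag_to_env --b2-chunk-size        # RCLONE_B2_CHUNK_SIZE
flag_to_env --drive-upload-cutoff  # RCLONE_DRIVE_UPLOAD_CUTOFF
```

Setting the environment variable before invoking rclone has the same effect as passing the flag, which is convenient in cron jobs and systemd units where editing the command line is awkward.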
### SEE ALSO
* [rclone serve](/commands/rclone_serve/) - Serve a remote over a protocol.
###### Auto generated by spf13/cobra on 9-Feb-2019
@@ -1,5 +1,5 @@
  ---
- date: 2018-11-24T13:43:29Z
+ date: 2019-02-09T10:42:18Z
  title: "rclone serve ftp"
  slug: rclone_serve_ftp
  url: /commands/rclone_serve_ftp/
@@ -88,6 +88,7 @@ may find that you need one or the other or both.
  --vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s)
  --vfs-cache-mode string Cache mode off|minimal|writes|full (default "off")
  --vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s)
+ --vfs-cache-max-size int Max total size of objects in the cache. (default off)

  If run with `-vv` rclone will print the location of the file cache. The
  files are stored in the user cache file area which is OS dependent but
@@ -103,6 +104,11 @@ closed so if rclone is quit or dies with open files then these won't
  get written back to the remote. However they will still be in the on
  disk cache.

+ If using --vfs-cache-max-size note that the cache may exceed this size
+ for two reasons. Firstly because it is only checked every
+ --vfs-cache-poll-interval. Secondly because open files cannot be
+ evicted from the cache.
+
  #### --vfs-cache-mode off

  In this mode the cache will read directly from the remote and write
@@ -167,309 +173,330 @@ rclone serve ftp remote:path [flags]

  ### Options

  ```
  --addr string IPaddress:Port or :Port to bind server to. (default "localhost:2121")
  --dir-cache-time duration Time to cache directory entries for. (default 5m0s)
+ --dir-perms FileMode Directory permissions (default 0777)
+ --file-perms FileMode File permissions (default 0666)
  --gid uint32 Override the gid field set by the filesystem. (default 502)
  -h, --help help for ftp
  --no-checksum Don't compare checksums on up/download.
  --no-modtime Don't read/write the modification time (can speed things up).
  --no-seek Don't allow seeking in files.
  --pass string Password for authentication. (empty value allow every password)
  --passive-port string Passive port range to use. (default "30000-32000")
  --poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable. (default 1m0s)
  --read-only Mount read-only.
  --uid uint32 Override the uid field set by the filesystem. (default 502)
  --umask int Override the permission bits set by the filesystem. (default 2)
  --user string User name for authentication. (default "anonymous")
  --vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s)
+ --vfs-cache-max-size SizeSuffix Max total size of objects in the cache. (default off)
- --vfs-cache-mode string Cache mode off|minimal|writes|full (default "off")
+ --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
  --vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s)
- --vfs-read-chunk-size int Read the source objects in chunks. (default 128M)
+ --vfs-read-chunk-size SizeSuffix Read the source objects in chunks. (default 128M)
- --vfs-read-chunk-size-limit int If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached. 'off' is unlimited. (default off)
+ --vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached. 'off' is unlimited. (default off)
  ```
  ### Options inherited from parent commands

  ```
  --acd-auth-url string Auth server URL.
  --acd-client-id string Amazon Application Client ID.
  --acd-client-secret string Amazon Application Client Secret.
  --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
  --acd-token-url string Token server url.
  --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
  --alias-remote string Remote or path to alias.
  --ask-password Allow prompt for password for encrypted configuration. (default true)
  --auto-confirm If enabled, do not request console confirmation.
  --azureblob-access-tier string Access tier of blob: hot, cool or archive.
  --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
  --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M)
  --azureblob-endpoint string Endpoint for the service
  --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
  --azureblob-list-chunk int Size of blob list. (default 5000)
  --azureblob-sas-url string SAS URL for container level access only
  --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M)
  --b2-account string Account ID or Application Key ID
  --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
+ --b2-disable-checksum Disable checksums for large (> upload cutoff) files
  --b2-endpoint string Endpoint for the service.
  --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
  --b2-key string Application Key
  --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
  --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M)
  --b2-versions Include old versions in directory listings.
  --backup-dir string Make backups into hierarchy based in DIR.
  --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
  --box-client-id string Box App Client Id.
  --box-client-secret string Box App Client Secret
  --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
  --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
- --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
+ --buffer-size SizeSuffix In memory buffer size when reading files for each --transfer. (default 16M)
  --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
  --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
  --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
  --cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend")
  --cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M)
  --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G)
  --cache-db-path string Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend")
  --cache-db-purge Clear all the cached data for this remote on start.
  --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
  --cache-dir string Directory rclone will use for caching. (default "$HOME/.cache/rclone")
  --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
  --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server
  --cache-plex-password string The password of the Plex user
  --cache-plex-url string The URL of the Plex server
  --cache-plex-username string The username of the Plex user
  --cache-read-retries int How many times to retry a read from a cache storage. (default 10)
  --cache-remote string Remote to cache.
  --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1)
  --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded.
  --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
  --cache-workers int How many workers should run in parallel to download chunks. (default 4)
  --cache-writes Cache file data on writes through the FS
  --checkers int Number of checkers to run in parallel. (default 8)
- -c, --checksum Skip based on checksum & size, not mod-time & size
+ -c, --checksum Skip based on checksum (if available) & size, not mod-time & size
  --config string Config file. (default "/home/ncw/.rclone.conf")
  --contimeout duration Connect timeout (default 1m0s)
-L, --copy-links Follow symlinks and copy the pointed to item. --contimeout duration Connect timeout (default 1m0s)
--cpuprofile string Write cpu profile to file -L, --copy-links Follow symlinks and copy the pointed to item.
--crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true) --cpuprofile string Write cpu profile to file
--crypt-filename-encryption string How to encrypt the filenames. (default "standard") --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--crypt-password string Password or pass phrase for encryption. --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--crypt-password2 string Password or pass phrase for salt. Optional but recommended. --crypt-password string Password or pass phrase for encryption.
--crypt-remote string Remote to encrypt/decrypt. --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
--crypt-show-mapping For all files listed show how the names encrypt. --crypt-remote string Remote to encrypt/decrypt.
--delete-after When synchronizing, delete files on destination after transferring (default) --crypt-show-mapping For all files listed show how the names encrypt.
--delete-before When synchronizing, delete files on destination before transferring --delete-after When synchronizing, delete files on destination after transferring (default)
--delete-during When synchronizing, delete files during transfer --delete-before When synchronizing, delete files on destination before transferring
--delete-excluded Delete files on dest excluded from sync --delete-during When synchronizing, delete files during transfer
--disable string Disable a comma separated list of features. Use help to see a list. --delete-excluded Delete files on dest excluded from sync
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded. --disable string Disable a comma separated list of features. Use help to see a list.
--drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time. --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--drive-alternate-export Use alternate export URLs for google documents export., --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
--drive-auth-owner-only Only consider files owned by the authenticated user. --drive-alternate-export Use alternate export URLs for google documents export.,
--drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M) --drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-client-id string Google Application Client Id --drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M)
--drive-client-secret string Google Application Client Secret --drive-client-id string Google Application Client Id
--drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") --drive-client-secret string Google Application Client Secret
--drive-formats string Deprecated: see export_formats --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-impersonate string Impersonate this user when using a service account. --drive-formats string Deprecated: see export_formats
--drive-import-formats string Comma separated list of preferred formats for uploading Google docs. --drive-impersonate string Impersonate this user when using a service account.
--drive-keep-revision-forever Keep new head revision of each file forever. --drive-import-formats string Comma separated list of preferred formats for uploading Google docs.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000) --drive-keep-revision-forever Keep new head revision of each file forever.
--drive-root-folder-id string ID of the root folder --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-scope string Scope that rclone should use when requesting access from drive. --drive-pacer-burst int Number of API calls to allow without sleeping. (default 100)
--drive-service-account-credentials string Service Account Credentials JSON blob --drive-pacer-min-sleep Duration Minimum time to sleep between API calls. (default 100ms)
--drive-service-account-file string Service Account Credentials JSON file path --drive-root-folder-id string ID of the root folder
--drive-shared-with-me Only show files that are shared with me. --drive-scope string Scope that rclone should use when requesting access from drive.
--drive-skip-gdocs Skip google documents in all listings. --drive-service-account-credentials string Service Account Credentials JSON blob
--drive-team-drive string ID of the Team Drive --drive-service-account-file string Service Account Credentials JSON file path
--drive-trashed-only Only show files that are in the trash. --drive-shared-with-me Only show files that are shared with me.
--drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M) --drive-skip-gdocs Skip google documents in all listings.
--drive-use-created-date Use file created date instead of modified date., --drive-team-drive string ID of the Team Drive
--drive-use-trash Send files to the trash instead of deleting permanently. (default true) --drive-trashed-only Only show files that are in the trash.
--drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off) --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M) --drive-use-created-date Use file created date instead of modified date.,
--dropbox-client-id string Dropbox App Client Id --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--dropbox-client-secret string Dropbox App Client Secret --drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off)
--dropbox-impersonate string Impersonate this user when using a business account. --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M)
-n, --dry-run Do a trial run with no permanent changes --dropbox-client-id string Dropbox App Client Id
--dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles --dropbox-client-secret string Dropbox App Client Secret
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info --dropbox-impersonate string Impersonate this user when using a business account.
--dump-headers Dump HTTP bodies - may contain sensitive info -n, --dry-run Do a trial run with no permanent changes
--exclude stringArray Exclude files matching pattern --dump DumpFlags List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--exclude-from stringArray Read exclude patterns from file --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--exclude-if-present string Exclude directories if filename is present --dump-headers Dump HTTP bodies - may contain sensitive info
--fast-list Use recursive list if available. Uses more memory but fewer transactions. --exclude stringArray Exclude files matching pattern
--files-from stringArray Read list of source-file names from file --exclude-from stringArray Read exclude patterns from file
-f, --filter stringArray Add a file-filtering rule --exclude-if-present string Exclude directories if filename is present
--filter-from stringArray Read filtering patterns from a file --fast-list Use recursive list if available. Uses more memory but fewer transactions.
--ftp-host string FTP host to connect to --files-from stringArray Read list of source-file names from file
--ftp-pass string FTP password -f, --filter stringArray Add a file-filtering rule
--ftp-port string FTP port, leave blank to use default (21) --filter-from stringArray Read filtering patterns from a file
--ftp-user string FTP username, leave blank for current username, $USER --ftp-host string FTP host to connect to
--gcs-bucket-acl string Access Control List for new buckets. --ftp-pass string FTP password
--gcs-client-id string Google Application Client Id --ftp-port string FTP port, leave blank to use default (21)
--gcs-client-secret string Google Application Client Secret --ftp-user string FTP username, leave blank for current username, $USER
--gcs-location string Location for the newly created buckets. --gcs-bucket-acl string Access Control List for new buckets.
--gcs-object-acl string Access Control List for new objects. --gcs-client-id string Google Application Client Id
--gcs-project-number string Project number. --gcs-client-secret string Google Application Client Secret
--gcs-service-account-file string Service Account Credentials JSON file path --gcs-location string Location for the newly created buckets.
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage. --gcs-object-acl string Access Control List for new objects.
--http-url string URL of http host to connect to --gcs-project-number string Project number.
--hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) --gcs-service-account-file string Service Account Credentials JSON file path
--hubic-client-id string Hubic Client Id --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--hubic-client-secret string Hubic Client Secret --http-url string URL of http host to connect to
--ignore-case Ignore case in filters (case insensitive) --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--ignore-checksum Skip post copy check of checksums. --hubic-client-id string Hubic Client Id
--ignore-errors delete even if there are I/O errors --hubic-client-secret string Hubic Client Secret
--ignore-existing Skip all files that exist on destination --hubic-no-chunk Don't chunk files during streaming upload.
--ignore-size Ignore size when skipping use mod-time or checksum. --ignore-case Ignore case in filters (case insensitive)
-I, --ignore-times Don't skip files that match size and time - transfer all files --ignore-checksum Skip post copy check of checksums.
--immutable Do not modify files. Fail if existing files have been modified. --ignore-errors delete even if there are I/O errors
--include stringArray Include files matching pattern --ignore-existing Skip all files that exist on destination
--include-from stringArray Read include patterns from file --ignore-size Ignore size when skipping use mod-time or checksum.
--jottacloud-hard-delete Delete files permanently rather than putting them into the trash. -I, --ignore-times Don't skip files that match size and time - transfer all files
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M) --immutable Do not modify files. Fail if existing files have been modified.
--jottacloud-mountpoint string The mountpoint to use. --include stringArray Include files matching pattern
--jottacloud-pass string Password. --include-from stringArray Read include patterns from file
--jottacloud-unlink Remove existing public link to file/folder with link command rather than creating. --jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
--jottacloud-user string User Name --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--local-no-check-updated Don't check to see if the files change during upload --jottacloud-mountpoint string The mountpoint to use.
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated) --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
--local-nounc string Disable UNC (long path names) conversion on Windows --jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fail's. (default 10M)
--log-file string Log everything to this file --jottacloud-user string User Name:
--log-format string Comma separated list of log format options (default "date,time") -l, --links Translate symlinks to/from regular files with a '.rclonelink' extension
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") --local-no-check-updated Don't check to see if the files change during upload
--low-level-retries int Number of low level retries to do. (default 10) --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) --local-nounc string Disable UNC (long path names) conversion on Windows
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000) --log-file string Log everything to this file
--max-delete int When synchronizing, limit the number of deletes (default -1) --log-format string Comma separated list of log format options (default "date,time")
--max-depth int If set limits the recursion depth to this. (default -1) --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) --low-level-retries int Number of low level retries to do. (default 10)
--max-transfer int Maximum size of data to transfer. (default off) --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--mega-debug Output more debug from Mega. --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--mega-hard-delete Delete files permanently rather than putting them into the trash. --max-delete int When synchronizing, limit the number of deletes (default -1)
--mega-pass string Password. --max-depth int If set limits the recursion depth to this. (default -1)
--mega-user string User name --max-size SizeSuffix Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--memprofile string Write memory profile to file --max-transfer SizeSuffix Maximum size of data to transfer. (default off)
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) --mega-debug Output more debug from Mega.
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) --mega-hard-delete Delete files permanently rather than putting them into the trash.
--modify-window duration Max time diff to be considered the same (default 1ns) --mega-pass string Password.
--no-check-certificate Do not verify the server SSL certificate. Insecure. --mega-user string User name
--no-gzip-encoding Don't set Accept-Encoding: gzip. --memprofile string Write memory profile to file
--no-traverse Obsolete - does nothing. --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--no-update-modtime Don't update destination mod-time if files identical. --min-size SizeSuffix Only transfer files bigger than this in k or suffix b|k|M|G (default off)
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only). --modify-window duration Max time diff to be considered the same (default 1ns)
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M) --no-check-certificate Do not verify the server SSL certificate. Insecure.
--onedrive-client-id string Microsoft App Client Id --no-gzip-encoding Don't set Accept-Encoding: gzip.
--onedrive-client-secret string Microsoft App Client Secret --no-traverse Don't traverse destination file system on copy.
--onedrive-drive-id string The ID of the drive to use --no-update-modtime Don't update destination mod-time if files identical.
--onedrive-drive-type string The type of the drive ( personal | business | documentLibrary ) -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-expose-onenote-files Set to make OneNote files show up in directory listings. --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--opendrive-password string Password. --onedrive-client-id string Microsoft App Client Id
--opendrive-username string Username --onedrive-client-secret string Microsoft App Client Secret
--pcloud-client-id string Pcloud App Client Id --onedrive-drive-id string The ID of the drive to use
--pcloud-client-secret string Pcloud App Client Secret --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
-P, --progress Show progress during transfer. --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
--qingstor-access-key-id string QingStor Access Key ID --opendrive-password string Password.
--qingstor-connection-retries int Number of connection retries. (default 3) --opendrive-username string Username
--qingstor-endpoint string Enter a endpoint URL to connection QingStor API. --pcloud-client-id string Pcloud App Client Id
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank. --pcloud-client-secret string Pcloud App Client Secret
--qingstor-secret-access-key string QingStor Secret Access Key (password) -P, --progress Show progress during transfer.
--qingstor-zone string Zone to connect to. --qingstor-access-key-id string QingStor Access Key ID
-q, --quiet Print as little stuff as possible --qingstor-chunk-size SizeSuffix Chunk size to use for uploading. (default 4M)
--rc Enable the remote control server. --qingstor-connection-retries int Number of connection retries. (default 3)
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") --qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate) --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
--rc-client-ca string Client certificate authority to verify clients with --qingstor-secret-access-key string QingStor Secret Access Key (password)
--rc-files string Path to local files to serve on the HTTP server. --qingstor-upload-concurrency int Concurrency for multipart uploads. (default 1)
--rc-htpasswd string htpasswd file - if not provided no authentication is done --qingstor-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M)
--rc-key string SSL PEM Private key --qingstor-zone string Zone to connect to.
--rc-max-header-bytes int Maximum size of request header (default 4096) -q, --quiet Print as little stuff as possible
--rc-no-auth Don't require auth for certain methods. --rc Enable the remote control server.
--rc-pass string Password for authentication. --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
--rc-realm string realm for authentication (default "rclone") --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--rc-serve Enable the serving of remote objects. --rc-client-ca string Client certificate authority to verify clients with
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) --rc-files string Path to local files to serve on the HTTP server.
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) --rc-htpasswd string htpasswd file - if not provided no authentication is done
--rc-user string User name for authentication. --rc-key string SSL PEM Private key
--retries int Retry operations this many times if they fail (default 3) --rc-max-header-bytes int Maximum size of request header (default 4096)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) --rc-no-auth Don't require auth for certain methods.
--s3-access-key-id string AWS Access Key ID. --rc-pass string Password for authentication.
--s3-acl string Canned ACL used when creating buckets and storing or copying objects. --rc-realm string realm for authentication (default "rclone")
--s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M) --rc-serve Enable the serving of remote objects.
--s3-disable-checksum Don't store MD5 checksum with object metadata --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--s3-endpoint string Endpoint for S3 API. --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). --rc-user string User name for authentication.
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true) --retries int Retry operations this many times if they fail (default 3)
--s3-location-constraint string Location constraint - must be set to match the Region. --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--s3-provider string Choose your S3 provider. --s3-access-key-id string AWS Access Key ID.
--s3-region string Region to connect to. --s3-acl string Canned ACL used when creating buckets and storing or copying objects.
--s3-secret-access-key string AWS Secret Access Key (password) --s3-bucket-acl string Canned ACL used when creating buckets.
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
--s3-session-token string An AWS session token --s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. --s3-endpoint string Endpoint for S3 API.
--s3-storage-class string The storage class to use when storing new objects in S3. --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2) --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--s3-v2-auth If true use v2 authentication. --s3-location-constraint string Location constraint - must be set to match the Region.
--sftp-ask-password Allow asking for SFTP password when needed. --s3-provider string Choose your S3 provider.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. --s3-region string Region to connect to.
--sftp-host string SSH host to connect to --s3-secret-access-key string AWS Secret Access Key (password)
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent. --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--sftp-pass string SSH password, leave blank to use ssh-agent. --s3-session-token string An AWS session token
--sftp-path-override string Override path used by SSH connection. --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--sftp-port string SSH port, leave blank to use default (22) --s3-storage-class string The storage class to use when storing new objects in S3.
--sftp-set-modtime Set the modified time on the remote if set. (default true) --s3-upload-concurrency int Concurrency for multipart uploads. (default 4)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker. --s3-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M)
--sftp-user string SSH username, leave blank for current username, ncw --s3-v2-auth If true use v2 authentication.
--size-only Skip based on size only, not mod-time or checksum --sftp-ask-password Allow asking for SFTP password when needed.
--skip-links Don't warn about skipped symlinks. --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) --sftp-host string SSH host to connect to
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40) --sftp-key-file string Path to PEM-encoded private key file, leave blank or set key-use-agent to use ssh-agent.
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") --sftp-key-file-pass string The passphrase to decrypt the PEM-encoded private key file.
--stats-one-line Make the stats fit on one line. --sftp-key-use-agent When set forces the usage of the ssh-agent.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") --sftp-pass string SSH password, leave blank to use ssh-agent.
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) --sftp-path-override string Override path used by SSH connection.
--suffix string Suffix for use with --backup-dir. --sftp-port string SSH port, leave blank to use default (22)
--swift-auth string Authentication URL for server (OS_AUTH_URL). --sftp-set-modtime Set the modified time on the remote if set. (default true)
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) --sftp-user string SSH username, leave blank for current username, ncw
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) --size-only Skip based on size only, not mod-time or checksum
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) --skip-links Don't warn about skipped symlinks.
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form. --stats-file-name-length int Max file name length in stats. 0 for no limit (default 45)
--swift-key string API key or password (OS_PASSWORD). --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--swift-region string Region name - optional (OS_REGION_NAME) --stats-one-line Make the stats fit on one line.
--swift-storage-policy string The storage policy to use when creating a new container --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL) --streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) --suffix string Suffix for use with --backup-dir.
      --swift-application-credential-id string       Application Credential ID (OS_APPLICATION_CREDENTIAL_ID)
      --swift-application-credential-name string     Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME)
      --swift-application-credential-secret string   Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET)
      --swift-auth string                            Authentication URL for server (OS_AUTH_URL).
      --swift-auth-token string                      Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
      --swift-auth-version int                       AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
      --swift-chunk-size SizeSuffix                  Above this size files will be chunked into a _segments container. (default 5G)
      --swift-domain string                          User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
      --swift-endpoint-type string                   Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
      --swift-env-auth                               Get swift credentials from environment variables in standard OpenStack form.
      --swift-key string                             API key or password (OS_PASSWORD).
      --swift-no-chunk                               Don't chunk files during streaming upload.
      --swift-region string                          Region name - optional (OS_REGION_NAME)
      --swift-storage-policy string                  The storage policy to use when creating a new container
      --swift-storage-url string                     Storage URL - optional (OS_STORAGE_URL)
      --swift-tenant string                          Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
      --swift-tenant-domain string                   Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
      --swift-tenant-id string                       Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
      --swift-user string                            User name to log in (OS_USERNAME).
      --swift-user-id string                         User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
      --syslog                                       Use Syslog for logging
      --syslog-facility string                       Facility for syslog, eg KERN,USER,... (default "DAEMON")
      --timeout duration                             IO idle timeout (default 5m0s)
      --tpslimit float                               Limit HTTP transactions per second to this.
      --tpslimit-burst int                           Max burst of transactions for --tpslimit. (default 1)
      --track-renames                                When synchronizing, track file renames and do a server side move if possible
      --transfers int                                Number of file transfers to run in parallel. (default 4)
      --union-remotes string                         List of space separated remotes.
  -u, --update                                       Skip files that are newer on the destination.
      --use-cookies                                  Enable session cookiejar.
      --use-mmap                                     Use mmap allocator (see docs).
      --use-server-modtime                           Use server modified time instead of object metadata
      --user-agent string                            Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46")
  -v, --verbose count                                Print lots more stuff (repeat for more)
      --webdav-bearer-token string                   Bearer token instead of user/pass (eg a Macaroon)
      --webdav-pass string                           Password.
      --webdav-url string                            URL of http host to connect to
      --webdav-user string                           User name
      --webdav-vendor string                         Name of the Webdav site/service/software you are using
      --yandex-client-id string                      Yandex Client Id
      --yandex-client-secret string                  Yandex Client Secret
      --yandex-unlink                                Remove existing public link to file/folder with link command rather than creating.
```

### SEE ALSO

* [rclone serve](/commands/rclone_serve/) - Serve a remote over a protocol.

###### Auto generated by spf13/cobra on 9-Feb-2019
@@ -1,5 +1,5 @@
---
date: 2019-02-09T10:42:18Z
title: "rclone serve http"
slug: rclone_serve_http
url: /commands/rclone_serve_http/
@@ -129,6 +129,7 @@ may find that you need one or the other or both.
    --vfs-cache-max-age duration         Max age of objects in the cache. (default 1h0m0s)
    --vfs-cache-mode string              Cache mode off|minimal|writes|full (default "off")
    --vfs-cache-poll-interval duration   Interval to poll the cache for stale objects. (default 1m0s)
    --vfs-cache-max-size int             Max total size of objects in the cache. (default off)
If run with `-vv` rclone will print the location of the file cache. The
files are stored in the user cache file area which is OS dependent but

@@ -144,6 +145,11 @@ closed so if rclone is quit or dies with open files then these won't
get written back to the remote. However they will still be in the on
disk cache.

If using --vfs-cache-max-size note that the cache may exceed this size
for two reasons. Firstly because it is only checked every
--vfs-cache-poll-interval. Secondly because open files cannot be
evicted from the cache.
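As an illustration of how these limits combine, the sketch below serves a remote with a bounded write-back cache. The remote name `remote:path` and the chosen size and interval values are placeholders for this example, not defaults:

```shell
# Sketch: serve a remote over HTTP with a bounded write-back cache.
# The cache is allowed up to 10G, but may temporarily exceed that,
# since eviction only runs every --vfs-cache-poll-interval and open
# files are never evicted.
rclone serve http remote:path \
    --vfs-cache-mode writes \
    --vfs-cache-max-size 10G \
    --vfs-cache-poll-interval 30s
```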
#### --vfs-cache-mode off

In this mode the cache will read directly from the remote and write

@@ -208,316 +214,337 @@ rclone serve http remote:path [flags]

### Options

```
      --addr string                            IPaddress:Port or :Port to bind server to. (default "localhost:8080")
      --cert string                            SSL PEM key (concatenation of certificate and CA certificate)
      --client-ca string                       Client certificate authority to verify clients with
      --dir-cache-time duration                Time to cache directory entries for. (default 5m0s)
      --dir-perms FileMode                     Directory permissions (default 0777)
      --file-perms FileMode                    File permissions (default 0666)
      --gid uint32                             Override the gid field set by the filesystem. (default 502)
  -h, --help                                   help for http
      --htpasswd string                        htpasswd file - if not provided no authentication is done
      --key string                             SSL PEM Private key
      --max-header-bytes int                   Maximum size of request header (default 4096)
      --no-checksum                            Don't compare checksums on up/download.
      --no-modtime                             Don't read/write the modification time (can speed things up).
      --no-seek                                Don't allow seeking in files.
      --pass string                            Password for authentication.
      --poll-interval duration                 Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable. (default 1m0s)
      --read-only                              Mount read-only.
      --realm string                           realm for authentication (default "rclone")
      --server-read-timeout duration           Timeout for server reading data (default 1h0m0s)
      --server-write-timeout duration          Timeout for server writing data (default 1h0m0s)
      --uid uint32                             Override the uid field set by the filesystem. (default 502)
      --umask int                              Override the permission bits set by the filesystem. (default 2)
      --user string                            User name for authentication.
      --vfs-cache-max-age duration             Max age of objects in the cache. (default 1h0m0s)
      --vfs-cache-max-size SizeSuffix          Max total size of objects in the cache. (default off)
      --vfs-cache-mode CacheMode               Cache mode off|minimal|writes|full (default off)
      --vfs-cache-poll-interval duration       Interval to poll the cache for stale objects. (default 1m0s)
      --vfs-read-chunk-size SizeSuffix         Read the source objects in chunks. (default 128M)
      --vfs-read-chunk-size-limit SizeSuffix   If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached. 'off' is unlimited. (default off)
```
### Options inherited from parent commands

```
      --acd-auth-url string                          Auth server URL.
      --acd-client-id string                         Amazon Application Client ID.
      --acd-client-secret string                     Amazon Application Client Secret.
      --acd-templink-threshold SizeSuffix            Files >= this size will be downloaded via their tempLink. (default 9G)
      --acd-token-url string                         Token server url.
      --acd-upload-wait-per-gb Duration              Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
      --alias-remote string                          Remote or path to alias.
      --ask-password                                 Allow prompt for password for encrypted configuration. (default true)
      --auto-confirm                                 If enabled, do not request console confirmation.
      --azureblob-access-tier string                 Access tier of blob: hot, cool or archive.
      --azureblob-account string                     Storage Account Name (leave blank to use connection string or SAS URL)
      --azureblob-chunk-size SizeSuffix              Upload chunk size (<= 100MB). (default 4M)
      --azureblob-endpoint string                    Endpoint for the service
      --azureblob-key string                         Storage Account Key (leave blank to use connection string or SAS URL)
      --azureblob-list-chunk int                     Size of blob list. (default 5000)
      --azureblob-sas-url string                     SAS URL for container level access only
      --azureblob-upload-cutoff SizeSuffix           Cutoff for switching to chunked upload (<= 256MB). (default 256M)
      --b2-account string                            Account ID or Application Key ID
      --b2-chunk-size SizeSuffix                     Upload chunk size. Must fit in memory. (default 96M)
      --b2-disable-checksum                          Disable checksums for large (> upload cutoff) files
      --b2-endpoint string                           Endpoint for the service.
      --b2-hard-delete                               Permanently delete files on remote removal, otherwise hide files.
      --b2-key string                                Application Key
      --b2-test-mode string                          A flag string for X-Bz-Test-Mode header for debugging.
      --b2-upload-cutoff SizeSuffix                  Cutoff for switching to chunked upload. (default 200M)
      --b2-versions                                  Include old versions in directory listings.
      --backup-dir string                            Make backups into hierarchy based in DIR.
      --bind string                                  Local address to bind to for outgoing connections, IPv4, IPv6 or name.
      --box-client-id string                         Box App Client Id.
      --box-client-secret string                     Box App Client Secret
      --box-commit-retries int                       Max number of times to try committing a multipart file. (default 100)
      --box-upload-cutoff SizeSuffix                 Cutoff for switching to multipart upload (>= 50MB). (default 50M)
      --buffer-size SizeSuffix                       In memory buffer size when reading files for each --transfer. (default 16M)
      --bwlimit BwTimetable                          Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
      --cache-chunk-clean-interval Duration          How often should the cache perform cleanups of the chunk storage. (default 1m0s)
      --cache-chunk-no-memory                        Disable the in-memory cache for storing chunks during streaming.
      --cache-chunk-path string                      Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend")
      --cache-chunk-size SizeSuffix                  The size of a chunk (partial file data). (default 5M)
      --cache-chunk-total-size SizeSuffix            The total size that the chunks can take up on the local disk. (default 10G)
      --cache-db-path string                         Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend")
      --cache-db-purge                               Clear all the cached data for this remote on start.
      --cache-db-wait-time Duration                  How long to wait for the DB to be available - 0 is unlimited (default 1s)
      --cache-dir string                             Directory rclone will use for caching. (default "$HOME/.cache/rclone")
      --cache-info-age Duration                      How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
      --cache-plex-insecure string                   Skip all certificate verifications when connecting to the Plex server
      --cache-plex-password string                   The password of the Plex user
      --cache-plex-url string                        The URL of the Plex server
      --cache-plex-username string                   The username of the Plex user
      --cache-read-retries int                       How many times to retry a read from a cache storage. (default 10)
      --cache-remote string                          Remote to cache.
      --cache-rps int                                Limits the number of requests per second to the source FS (-1 to disable) (default -1)
      --cache-tmp-upload-path string                 Directory to keep temporary files until they are uploaded.
      --cache-tmp-wait-time Duration                 How long should files be stored in local cache before being uploaded (default 15s)
      --cache-workers int                            How many workers should run in parallel to download chunks. (default 4)
      --cache-writes                                 Cache file data on writes through the FS
      --checkers int                                 Number of checkers to run in parallel. (default 8)
  -c, --checksum                                     Skip based on checksum (if available) & size, not mod-time & size
      --config string                                Config file. (default "/home/ncw/.rclone.conf")
      --contimeout duration                          Connect timeout (default 1m0s)
  -L, --copy-links                                   Follow symlinks and copy the pointed to item.
      --cpuprofile string                            Write cpu profile to file
      --crypt-directory-name-encryption              Option to either encrypt directory names or leave them intact. (default true)
      --crypt-filename-encryption string             How to encrypt the filenames. (default "standard")
      --crypt-password string                        Password or pass phrase for encryption.
      --crypt-password2 string                       Password or pass phrase for salt. Optional but recommended.
      --crypt-remote string                          Remote to encrypt/decrypt.
      --crypt-show-mapping                           For all files listed show how the names encrypt.
      --delete-after                                 When synchronizing, delete files on destination after transferring (default)
      --delete-before                                When synchronizing, delete files on destination before transferring
      --delete-during                                When synchronizing, delete files during transfer
      --delete-excluded                              Delete files on dest excluded from sync
      --disable string                               Disable a comma separated list of features. Use help to see a list.
      --drive-acknowledge-abuse                      Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
      --drive-allow-import-name-change               Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
      --drive-alternate-export                       Use alternate export URLs for google documents export.,
      --drive-auth-owner-only                        Only consider files owned by the authenticated user.
      --drive-chunk-size SizeSuffix                  Upload chunk size. Must a power of 2 >= 256k. (default 8M)
      --drive-client-id string                       Google Application Client Id
      --drive-client-secret string                   Google Application Client Secret
      --drive-export-formats string                  Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
      --drive-formats string                         Deprecated: see export_formats
      --drive-impersonate string                     Impersonate this user when using a service account.
      --drive-import-formats string                  Comma separated list of preferred formats for uploading Google docs.
      --drive-keep-revision-forever                  Keep new head revision of each file forever.
      --drive-list-chunk int                         Size of listing chunk 100-1000. 0 to disable. (default 1000)
      --drive-pacer-burst int                        Number of API calls to allow without sleeping. (default 100)
      --drive-pacer-min-sleep Duration               Minimum time to sleep between API calls. (default 100ms)
      --drive-root-folder-id string                  ID of the root folder
      --drive-scope string                           Scope that rclone should use when requesting access from drive.
      --drive-service-account-credentials string     Service Account Credentials JSON blob
      --drive-service-account-file string            Service Account Credentials JSON file path
      --drive-shared-with-me                         Only show files that are shared with me.
      --drive-skip-gdocs                             Skip google documents in all listings.
      --drive-team-drive string                      ID of the Team Drive
      --drive-trashed-only                           Only show files that are in the trash.
      --drive-upload-cutoff SizeSuffix               Cutoff for switching to chunked upload (default 8M)
      --drive-use-created-date                       Use file created date instead of modified date.,
      --drive-use-trash                              Send files to the trash instead of deleting permanently. (default true)
      --drive-v2-download-min-size SizeSuffix        If Object's are greater, use drive v2 API to download. (default off)
      --dropbox-chunk-size SizeSuffix                Upload chunk size. (< 150M). (default 48M)
      --dropbox-client-id string                     Dropbox App Client Id
      --dropbox-client-secret string                 Dropbox App Client Secret
      --dropbox-impersonate string                   Impersonate this user when using a business account.
  -n, --dry-run                                      Do a trial run with no permanent changes
      --dump DumpFlags                               List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
      --dump-bodies                                  Dump HTTP headers and bodies - may contain sensitive info
      --dump-headers                                 Dump HTTP bodies - may contain sensitive info
      --exclude stringArray                          Exclude files matching pattern
      --exclude-from stringArray                     Read exclude patterns from file
      --exclude-if-present string                    Exclude directories if filename is present
      --fast-list                                    Use recursive list if available. Uses more memory but fewer transactions.
      --files-from stringArray                       Read list of source-file names from file
  -f, --filter stringArray                           Add a file-filtering rule
      --filter-from stringArray                      Read filtering patterns from a file
      --ftp-host string                              FTP host to connect to
      --ftp-pass string                              FTP password
      --ftp-port string                              FTP port, leave blank to use default (21)
      --ftp-user string                              FTP username, leave blank for current username, $USER
      --gcs-bucket-acl string                        Access Control List for new buckets.
      --gcs-client-id string                         Google Application Client Id
      --gcs-client-secret string                     Google Application Client Secret
      --gcs-location string                          Location for the newly created buckets.
      --gcs-object-acl string                        Access Control List for new objects.
      --gcs-project-number string                    Project number.
      --gcs-service-account-file string              Service Account Credentials JSON file path
      --gcs-storage-class string                     The storage class to use when storing objects in Google Cloud Storage.
      --http-url string                              URL of http host to connect to
      --hubic-chunk-size SizeSuffix                  Above this size files will be chunked into a _segments container. (default 5G)
      --hubic-client-id string                       Hubic Client Id
      --hubic-client-secret string                   Hubic Client Secret
      --hubic-no-chunk                               Don't chunk files during streaming upload.
      --ignore-case                                  Ignore case in filters (case insensitive)
      --ignore-checksum                              Skip post copy check of checksums.
      --ignore-errors                                delete even if there are I/O errors
      --ignore-existing                              Skip all files that exist on destination
      --ignore-size                                  Ignore size when skipping use mod-time or checksum.
  -I, --ignore-times                                 Don't skip files that match size and time - transfer all files
      --immutable                                    Do not modify files. Fail if existing files have been modified.
      --include stringArray                          Include files matching pattern
      --include-from stringArray                     Read include patterns from file
      --jottacloud-hard-delete                       Delete files permanently rather than putting them into the trash.
      --jottacloud-md5-memory-limit SizeSuffix       Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
      --jottacloud-mountpoint string                 The mountpoint to use.
      --jottacloud-unlink                            Remove existing public link to file/folder with link command rather than creating.
      --jottacloud-upload-resume-limit SizeSuffix    Files bigger than this can be resumed if the upload fail's. (default 10M)
      --jottacloud-user string                       User Name:
  -l, --links                                        Translate symlinks to/from regular files with a '.rclonelink' extension
      --local-no-check-updated                       Don't check to see if the files change during upload
      --local-no-unicode-normalization               Don't apply unicode normalization to paths and filenames (Deprecated)
      --local-nounc string                           Disable UNC (long path names) conversion on Windows
      --log-file string                              Log everything to this file
      --log-format string                            Comma separated list of log format options (default "date,time")
      --log-level string                             Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
      --low-level-retries int                        Number of low level retries to do. (default 10)
      --max-age Duration                             Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
      --max-backlog int                              Maximum number of objects in sync or check backlog. (default 10000)
      --max-delete int                               When synchronizing, limit the number of deletes (default -1)
      --max-depth int                                If set limits the recursion depth to this. (default -1)
      --max-size SizeSuffix                          Only transfer files smaller than this in k or suffix b|k|M|G (default off)
      --max-transfer SizeSuffix                      Maximum size of data to transfer. (default off)
      --mega-debug                                   Output more debug from Mega.
      --mega-hard-delete                             Delete files permanently rather than putting them into the trash.
      --mega-pass string                             Password.
      --mega-user string                             User name
      --memprofile string                            Write memory profile to file
      --min-age Duration                             Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
      --min-size SizeSuffix                          Only transfer files bigger than this in k or suffix b|k|M|G (default off)
      --modify-window duration                       Max time diff to be considered the same (default 1ns)
      --no-check-certificate                         Do not verify the server SSL certificate. Insecure.
--onedrive-client-id string Microsoft App Client Id --no-gzip-encoding Don't set Accept-Encoding: gzip.
--onedrive-client-secret string Microsoft App Client Secret --no-traverse Don't traverse destination file system on copy.
--onedrive-drive-id string The ID of the drive to use --no-update-modtime Don't update destination mod-time if files identical.
--onedrive-drive-type string The type of the drive ( personal | business | documentLibrary ) -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-expose-onenote-files Set to make OneNote files show up in directory listings. --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--opendrive-password string Password. --onedrive-client-id string Microsoft App Client Id
--opendrive-username string Username --onedrive-client-secret string Microsoft App Client Secret
--pcloud-client-id string Pcloud App Client Id --onedrive-drive-id string The ID of the drive to use
--pcloud-client-secret string Pcloud App Client Secret --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
-P, --progress Show progress during transfer. --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
--qingstor-access-key-id string QingStor Access Key ID --opendrive-password string Password.
--qingstor-connection-retries int Number of connection retries. (default 3) --opendrive-username string Username
--qingstor-endpoint string Enter a endpoint URL to connection QingStor API. --pcloud-client-id string Pcloud App Client Id
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank. --pcloud-client-secret string Pcloud App Client Secret
--qingstor-secret-access-key string QingStor Secret Access Key (password) -P, --progress Show progress during transfer.
--qingstor-zone string Zone to connect to. --qingstor-access-key-id string QingStor Access Key ID
-q, --quiet Print as little stuff as possible --qingstor-chunk-size SizeSuffix Chunk size to use for uploading. (default 4M)
--rc Enable the remote control server. --qingstor-connection-retries int Number of connection retries. (default 3)
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") --qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate) --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
--rc-client-ca string Client certificate authority to verify clients with --qingstor-secret-access-key string QingStor Secret Access Key (password)
--rc-files string Path to local files to serve on the HTTP server. --qingstor-upload-concurrency int Concurrency for multipart uploads. (default 1)
--rc-htpasswd string htpasswd file - if not provided no authentication is done --qingstor-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M)
--rc-key string SSL PEM Private key --qingstor-zone string Zone to connect to.
--rc-max-header-bytes int Maximum size of request header (default 4096) -q, --quiet Print as little stuff as possible
--rc-no-auth Don't require auth for certain methods. --rc Enable the remote control server.
--rc-pass string Password for authentication. --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
--rc-realm string realm for authentication (default "rclone") --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--rc-serve Enable the serving of remote objects. --rc-client-ca string Client certificate authority to verify clients with
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) --rc-files string Path to local files to serve on the HTTP server.
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) --rc-htpasswd string htpasswd file - if not provided no authentication is done
--rc-user string User name for authentication. --rc-key string SSL PEM Private key
--retries int Retry operations this many times if they fail (default 3) --rc-max-header-bytes int Maximum size of request header (default 4096)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) --rc-no-auth Don't require auth for certain methods.
--s3-access-key-id string AWS Access Key ID. --rc-pass string Password for authentication.
--s3-acl string Canned ACL used when creating buckets and storing or copying objects. --rc-realm string realm for authentication (default "rclone")
--s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M) --rc-serve Enable the serving of remote objects.
--s3-disable-checksum Don't store MD5 checksum with object metadata --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--s3-endpoint string Endpoint for S3 API. --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). --rc-user string User name for authentication.
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true) --retries int Retry operations this many times if they fail (default 3)
--s3-location-constraint string Location constraint - must be set to match the Region. --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--s3-provider string Choose your S3 provider. --s3-access-key-id string AWS Access Key ID.
--s3-region string Region to connect to. --s3-acl string Canned ACL used when creating buckets and storing or copying objects.
--s3-secret-access-key string AWS Secret Access Key (password) --s3-bucket-acl string Canned ACL used when creating buckets.
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
--s3-session-token string An AWS session token --s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. --s3-endpoint string Endpoint for S3 API.
--s3-storage-class string The storage class to use when storing new objects in S3. --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2) --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--s3-v2-auth If true use v2 authentication. --s3-location-constraint string Location constraint - must be set to match the Region.
--sftp-ask-password Allow asking for SFTP password when needed. --s3-provider string Choose your S3 provider.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. --s3-region string Region to connect to.
--sftp-host string SSH host to connect to --s3-secret-access-key string AWS Secret Access Key (password)
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent. --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--sftp-pass string SSH password, leave blank to use ssh-agent. --s3-session-token string An AWS session token
--sftp-path-override string Override path used by SSH connection. --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--sftp-port string SSH port, leave blank to use default (22) --s3-storage-class string The storage class to use when storing new objects in S3.
--sftp-set-modtime Set the modified time on the remote if set. (default true) --s3-upload-concurrency int Concurrency for multipart uploads. (default 4)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker. --s3-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M)
--sftp-user string SSH username, leave blank for current username, ncw --s3-v2-auth If true use v2 authentication.
--size-only Skip based on size only, not mod-time or checksum --sftp-ask-password Allow asking for SFTP password when needed.
--skip-links Don't warn about skipped symlinks. --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) --sftp-host string SSH host to connect to
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40) --sftp-key-file string Path to PEM-encoded private key file, leave blank or set key-use-agent to use ssh-agent.
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") --sftp-key-file-pass string The passphrase to decrypt the PEM-encoded private key file.
--stats-one-line Make the stats fit on one line. --sftp-key-use-agent When set forces the usage of the ssh-agent.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") --sftp-pass string SSH password, leave blank to use ssh-agent.
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) --sftp-path-override string Override path used by SSH connection.
--suffix string Suffix for use with --backup-dir. --sftp-port string SSH port, leave blank to use default (22)
--swift-auth string Authentication URL for server (OS_AUTH_URL). --sftp-set-modtime Set the modified time on the remote if set. (default true)
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) --sftp-user string SSH username, leave blank for current username, ncw
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) --size-only Skip based on size only, not mod-time or checksum
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) --skip-links Don't warn about skipped symlinks.
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form. --stats-file-name-length int Max file name length in stats. 0 for no limit (default 45)
--swift-key string API key or password (OS_PASSWORD). --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--swift-region string Region name - optional (OS_REGION_NAME) --stats-one-line Make the stats fit on one line.
--swift-storage-policy string The storage policy to use when creating a new container --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL) --streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) --suffix string Suffix for use with --backup-dir.
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) --swift-application-credential-id string Application Credential ID (OS_APPLICATION_CREDENTIAL_ID)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) --swift-application-credential-name string Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME)
--swift-user string User name to log in (OS_USERNAME). --swift-application-credential-secret string Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET)
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). --swift-auth string Authentication URL for server (OS_AUTH_URL).
--syslog Use Syslog for logging --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--timeout duration IO idle timeout (default 5m0s) --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--tpslimit float Limit HTTP transactions per second to this. --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--track-renames When synchronizing, track file renames and do a server side move if possible --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--transfers int Number of file transfers to run in parallel. (default 4) --swift-key string API key or password (OS_PASSWORD).
--union-remotes string List of space separated remotes. --swift-no-chunk Don't chunk files during streaming upload.
-u, --update Skip files that are newer on the destination. --swift-region string Region name - optional (OS_REGION_NAME)
--use-server-modtime Use server modified time instead of object metadata --swift-storage-policy string The storage policy to use when creating a new container
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.45") --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
-v, --verbose count Print lots more stuff (repeat for more) --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon) --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--webdav-pass string Password. --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--webdav-url string URL of http host to connect to --swift-user string User name to log in (OS_USERNAME).
--webdav-user string User name --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--webdav-vendor string Name of the Webdav site/service/software you are using --syslog Use Syslog for logging
--yandex-client-id string Yandex Client Id --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--yandex-client-secret string Yandex Client Secret --timeout duration IO idle timeout (default 5m0s)
--yandex-unlink Remove existing public link to file/folder with link command rather than creating. --tpslimit float Limit HTTP transactions per second to this.
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
--union-remotes string List of space separated remotes.
-u, --update Skip files that are newer on the destination.
--use-cookies Enable session cookiejar.
--use-mmap Use mmap allocator (see docs).
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
--yandex-unlink Remove existing public link to file/folder with link command rather than creating.
``` ```
### SEE ALSO ### SEE ALSO
* [rclone serve](/commands/rclone_serve/) - Serve a remote over a protocol. * [rclone serve](/commands/rclone_serve/) - Serve a remote over a protocol.
###### Auto generated by spf13/cobra on 24-Nov-2018 ###### Auto generated by spf13/cobra on 9-Feb-2019
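Many of the flags above take SizeSuffix values such as `100k`, `96M` or `5G`. As a rough illustration (the helper below is a hypothetical sketch, not rclone's actual parser), the suffixes are 1024-based multiples, and a bare number is read as kBytes:

```python
# Hypothetical sketch of rclone-style SizeSuffix parsing ("100k", "96M", "5G").
# rclone treats k/M/G as 1024-based multiples; a number with no suffix is
# taken as kBytes. Illustrative only - not rclone's real implementation.
def parse_size_suffix(value: str) -> int:
    units = {"b": 1, "k": 1024, "M": 1024 ** 2, "G": 1024 ** 3}
    value = value.strip()
    if value and value[-1] in units:
        return int(float(value[:-1]) * units[value[-1]])
    return int(float(value) * 1024)  # bare numbers default to kBytes

# e.g. parse_size_suffix("96M") -> 100663296 bytes
```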
View File
@ -1,5 +1,5 @@
---
date: 2019-02-09T10:42:18Z
title: "rclone serve restic"
slug: rclone_serve_restic
url: /commands/rclone_serve_restic/
@ -161,285 +161,303 @@ rclone serve restic remote:path [flags]
### Options inherited from parent commands
```
      --acd-auth-url string  Auth server URL.
      --acd-client-id string  Amazon Application Client ID.
      --acd-client-secret string  Amazon Application Client Secret.
      --acd-templink-threshold SizeSuffix  Files >= this size will be downloaded via their tempLink. (default 9G)
      --acd-token-url string  Token server url.
      --acd-upload-wait-per-gb Duration  Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
      --alias-remote string  Remote or path to alias.
      --ask-password  Allow prompt for password for encrypted configuration. (default true)
      --auto-confirm  If enabled, do not request console confirmation.
      --azureblob-access-tier string  Access tier of blob: hot, cool or archive.
      --azureblob-account string  Storage Account Name (leave blank to use connection string or SAS URL)
      --azureblob-chunk-size SizeSuffix  Upload chunk size (<= 100MB). (default 4M)
      --azureblob-endpoint string  Endpoint for the service
      --azureblob-key string  Storage Account Key (leave blank to use connection string or SAS URL)
      --azureblob-list-chunk int  Size of blob list. (default 5000)
      --azureblob-sas-url string  SAS URL for container level access only
      --azureblob-upload-cutoff SizeSuffix  Cutoff for switching to chunked upload (<= 256MB). (default 256M)
      --b2-account string  Account ID or Application Key ID
      --b2-chunk-size SizeSuffix  Upload chunk size. Must fit in memory. (default 96M)
      --b2-disable-checksum  Disable checksums for large (> upload cutoff) files
      --b2-endpoint string  Endpoint for the service.
      --b2-hard-delete  Permanently delete files on remote removal, otherwise hide files.
      --b2-key string  Application Key
      --b2-test-mode string  A flag string for X-Bz-Test-Mode header for debugging.
      --b2-upload-cutoff SizeSuffix  Cutoff for switching to chunked upload. (default 200M)
      --b2-versions  Include old versions in directory listings.
      --backup-dir string  Make backups into hierarchy based in DIR.
      --bind string  Local address to bind to for outgoing connections, IPv4, IPv6 or name.
      --box-client-id string  Box App Client Id.
      --box-client-secret string  Box App Client Secret
      --box-commit-retries int  Max number of times to try committing a multipart file. (default 100)
      --box-upload-cutoff SizeSuffix  Cutoff for switching to multipart upload (>= 50MB). (default 50M)
      --buffer-size SizeSuffix  In memory buffer size when reading files for each --transfer. (default 16M)
      --bwlimit BwTimetable  Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
      --cache-chunk-clean-interval Duration  How often should the cache perform cleanups of the chunk storage. (default 1m0s)
      --cache-chunk-no-memory  Disable the in-memory cache for storing chunks during streaming.
      --cache-chunk-path string  Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend")
      --cache-chunk-size SizeSuffix  The size of a chunk (partial file data). (default 5M)
      --cache-chunk-total-size SizeSuffix  The total size that the chunks can take up on the local disk. (default 10G)
      --cache-db-path string  Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend")
      --cache-db-purge  Clear all the cached data for this remote on start.
      --cache-db-wait-time Duration  How long to wait for the DB to be available - 0 is unlimited (default 1s)
      --cache-dir string  Directory rclone will use for caching. (default "$HOME/.cache/rclone")
      --cache-info-age Duration  How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
      --cache-plex-insecure string  Skip all certificate verifications when connecting to the Plex server
      --cache-plex-password string  The password of the Plex user
      --cache-plex-url string  The URL of the Plex server
      --cache-plex-username string  The username of the Plex user
      --cache-read-retries int  How many times to retry a read from a cache storage. (default 10)
      --cache-remote string  Remote to cache.
      --cache-rps int  Limits the number of requests per second to the source FS (-1 to disable) (default -1)
      --cache-tmp-upload-path string  Directory to keep temporary files until they are uploaded.
      --cache-tmp-wait-time Duration  How long should files be stored in local cache before being uploaded (default 15s)
      --cache-workers int  How many workers should run in parallel to download chunks. (default 4)
      --cache-writes  Cache file data on writes through the FS
      --checkers int  Number of checkers to run in parallel. (default 8)
  -c, --checksum  Skip based on checksum (if available) & size, not mod-time & size
      --config string  Config file. (default "/home/ncw/.rclone.conf")
      --contimeout duration  Connect timeout (default 1m0s)
  -L, --copy-links  Follow symlinks and copy the pointed to item.
      --cpuprofile string  Write cpu profile to file
      --crypt-directory-name-encryption  Option to either encrypt directory names or leave them intact. (default true)
      --crypt-filename-encryption string  How to encrypt the filenames. (default "standard")
      --crypt-password string  Password or pass phrase for encryption.
      --crypt-password2 string  Password or pass phrase for salt. Optional but recommended.
      --crypt-remote string  Remote to encrypt/decrypt.
      --crypt-show-mapping  For all files listed show how the names encrypt.
      --delete-after  When synchronizing, delete files on destination after transferring (default)
      --delete-before  When synchronizing, delete files on destination before transferring
      --delete-during  When synchronizing, delete files during transfer
      --delete-excluded  Delete files on dest excluded from sync
      --disable string  Disable a comma separated list of features. Use help to see a list.
      --drive-acknowledge-abuse  Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
      --drive-allow-import-name-change  Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
      --drive-alternate-export  Use alternate export URLs for google documents export.,
      --drive-auth-owner-only  Only consider files owned by the authenticated user.
      --drive-chunk-size SizeSuffix  Upload chunk size. Must a power of 2 >= 256k. (default 8M)
      --drive-client-id string  Google Application Client Id
      --drive-client-secret string  Google Application Client Secret
      --drive-export-formats string  Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
      --drive-formats string  Deprecated: see export_formats
      --drive-impersonate string  Impersonate this user when using a service account.
      --drive-import-formats string  Comma separated list of preferred formats for uploading Google docs.
      --drive-keep-revision-forever  Keep new head revision of each file forever.
      --drive-list-chunk int  Size of listing chunk 100-1000. 0 to disable. (default 1000)
      --drive-pacer-burst int  Number of API calls to allow without sleeping. (default 100)
      --drive-pacer-min-sleep Duration  Minimum time to sleep between API calls. (default 100ms)
      --drive-root-folder-id string  ID of the root folder
      --drive-scope string  Scope that rclone should use when requesting access from drive.
      --drive-service-account-credentials string  Service Account Credentials JSON blob
      --drive-service-account-file string  Service Account Credentials JSON file path
      --drive-shared-with-me  Only show files that are shared with me.
      --drive-skip-gdocs  Skip google documents in all listings.
--drive-use-created-date Use file created date instead of modified date., --drive-team-drive string ID of the Team Drive
--drive-use-trash Send files to the trash instead of deleting permanently. (default true) --drive-trashed-only Only show files that are in the trash.
--drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off) --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M) --drive-use-created-date Use file created date instead of modified date.,
--dropbox-client-id string Dropbox App Client Id --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--dropbox-client-secret string Dropbox App Client Secret --drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off)
--dropbox-impersonate string Impersonate this user when using a business account. --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M)
-n, --dry-run Do a trial run with no permanent changes --dropbox-client-id string Dropbox App Client Id
--dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles --dropbox-client-secret string Dropbox App Client Secret
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info --dropbox-impersonate string Impersonate this user when using a business account.
--dump-headers Dump HTTP bodies - may contain sensitive info -n, --dry-run Do a trial run with no permanent changes
--exclude stringArray Exclude files matching pattern --dump DumpFlags List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--exclude-from stringArray Read exclude patterns from file --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--exclude-if-present string Exclude directories if filename is present --dump-headers Dump HTTP bodies - may contain sensitive info
--fast-list Use recursive list if available. Uses more memory but fewer transactions. --exclude stringArray Exclude files matching pattern
--files-from stringArray Read list of source-file names from file --exclude-from stringArray Read exclude patterns from file
-f, --filter stringArray Add a file-filtering rule --exclude-if-present string Exclude directories if filename is present
--filter-from stringArray Read filtering patterns from a file --fast-list Use recursive list if available. Uses more memory but fewer transactions.
--ftp-host string FTP host to connect to --files-from stringArray Read list of source-file names from file
--ftp-pass string FTP password -f, --filter stringArray Add a file-filtering rule
--ftp-port string FTP port, leave blank to use default (21) --filter-from stringArray Read filtering patterns from a file
--ftp-user string FTP username, leave blank for current username, $USER --ftp-host string FTP host to connect to
--gcs-bucket-acl string Access Control List for new buckets. --ftp-pass string FTP password
--gcs-client-id string Google Application Client Id --ftp-port string FTP port, leave blank to use default (21)
--gcs-client-secret string Google Application Client Secret --ftp-user string FTP username, leave blank for current username, $USER
--gcs-location string Location for the newly created buckets. --gcs-bucket-acl string Access Control List for new buckets.
--gcs-object-acl string Access Control List for new objects. --gcs-client-id string Google Application Client Id
--gcs-project-number string Project number. --gcs-client-secret string Google Application Client Secret
--gcs-service-account-file string Service Account Credentials JSON file path --gcs-location string Location for the newly created buckets.
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage. --gcs-object-acl string Access Control List for new objects.
--http-url string URL of http host to connect to --gcs-project-number string Project number.
--hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) --gcs-service-account-file string Service Account Credentials JSON file path
--hubic-client-id string Hubic Client Id --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--hubic-client-secret string Hubic Client Secret --http-url string URL of http host to connect to
--ignore-case Ignore case in filters (case insensitive) --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--ignore-checksum Skip post copy check of checksums. --hubic-client-id string Hubic Client Id
--ignore-errors delete even if there are I/O errors --hubic-client-secret string Hubic Client Secret
--ignore-existing Skip all files that exist on destination --hubic-no-chunk Don't chunk files during streaming upload.
--ignore-size Ignore size when skipping use mod-time or checksum. --ignore-case Ignore case in filters (case insensitive)
-I, --ignore-times Don't skip files that match size and time - transfer all files --ignore-checksum Skip post copy check of checksums.
--immutable Do not modify files. Fail if existing files have been modified. --ignore-errors delete even if there are I/O errors
--include stringArray Include files matching pattern --ignore-existing Skip all files that exist on destination
--include-from stringArray Read include patterns from file --ignore-size Ignore size when skipping use mod-time or checksum.
--jottacloud-hard-delete Delete files permanently rather than putting them into the trash. -I, --ignore-times Don't skip files that match size and time - transfer all files
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M) --immutable Do not modify files. Fail if existing files have been modified.
--jottacloud-mountpoint string The mountpoint to use. --include stringArray Include files matching pattern
--jottacloud-pass string Password. --include-from stringArray Read include patterns from file
--jottacloud-unlink Remove existing public link to file/folder with link command rather than creating. --jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
--jottacloud-user string User Name --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--local-no-check-updated Don't check to see if the files change during upload --jottacloud-mountpoint string The mountpoint to use.
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated) --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
--local-nounc string Disable UNC (long path names) conversion on Windows --jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fail's. (default 10M)
--log-file string Log everything to this file --jottacloud-user string User Name:
--log-format string Comma separated list of log format options (default "date,time") -l, --links Translate symlinks to/from regular files with a '.rclonelink' extension
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") --local-no-check-updated Don't check to see if the files change during upload
--low-level-retries int Number of low level retries to do. (default 10) --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) --local-nounc string Disable UNC (long path names) conversion on Windows
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000) --log-file string Log everything to this file
--max-delete int When synchronizing, limit the number of deletes (default -1) --log-format string Comma separated list of log format options (default "date,time")
--max-depth int If set limits the recursion depth to this. (default -1) --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) --low-level-retries int Number of low level retries to do. (default 10)
--max-transfer int Maximum size of data to transfer. (default off) --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--mega-debug Output more debug from Mega. --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--mega-hard-delete Delete files permanently rather than putting them into the trash. --max-delete int When synchronizing, limit the number of deletes (default -1)
--mega-pass string Password. --max-depth int If set limits the recursion depth to this. (default -1)
--mega-user string User name --max-size SizeSuffix Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--memprofile string Write memory profile to file --max-transfer SizeSuffix Maximum size of data to transfer. (default off)
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) --mega-debug Output more debug from Mega.
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) --mega-hard-delete Delete files permanently rather than putting them into the trash.
--modify-window duration Max time diff to be considered the same (default 1ns) --mega-pass string Password.
--no-check-certificate Do not verify the server SSL certificate. Insecure. --mega-user string User name
--no-gzip-encoding Don't set Accept-Encoding: gzip. --memprofile string Write memory profile to file
--no-traverse Obsolete - does nothing. --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--no-update-modtime Don't update destination mod-time if files identical. --min-size SizeSuffix Only transfer files bigger than this in k or suffix b|k|M|G (default off)
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only). --modify-window duration Max time diff to be considered the same (default 1ns)
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M) --no-check-certificate Do not verify the server SSL certificate. Insecure.
--onedrive-client-id string Microsoft App Client Id --no-gzip-encoding Don't set Accept-Encoding: gzip.
--onedrive-client-secret string Microsoft App Client Secret --no-traverse Don't traverse destination file system on copy.
--onedrive-drive-id string The ID of the drive to use --no-update-modtime Don't update destination mod-time if files identical.
--onedrive-drive-type string The type of the drive ( personal | business | documentLibrary ) -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-expose-onenote-files Set to make OneNote files show up in directory listings. --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--opendrive-password string Password. --onedrive-client-id string Microsoft App Client Id
--opendrive-username string Username --onedrive-client-secret string Microsoft App Client Secret
--pcloud-client-id string Pcloud App Client Id --onedrive-drive-id string The ID of the drive to use
--pcloud-client-secret string Pcloud App Client Secret --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
-P, --progress Show progress during transfer. --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
--qingstor-access-key-id string QingStor Access Key ID --opendrive-password string Password.
--qingstor-connection-retries int Number of connection retries. (default 3) --opendrive-username string Username
--qingstor-endpoint string Enter a endpoint URL to connection QingStor API. --pcloud-client-id string Pcloud App Client Id
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank. --pcloud-client-secret string Pcloud App Client Secret
--qingstor-secret-access-key string QingStor Secret Access Key (password) -P, --progress Show progress during transfer.
--qingstor-zone string Zone to connect to. --qingstor-access-key-id string QingStor Access Key ID
-q, --quiet Print as little stuff as possible --qingstor-chunk-size SizeSuffix Chunk size to use for uploading. (default 4M)
--rc Enable the remote control server. --qingstor-connection-retries int Number of connection retries. (default 3)
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") --qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate) --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
--rc-client-ca string Client certificate authority to verify clients with --qingstor-secret-access-key string QingStor Secret Access Key (password)
--rc-files string Path to local files to serve on the HTTP server. --qingstor-upload-concurrency int Concurrency for multipart uploads. (default 1)
--rc-htpasswd string htpasswd file - if not provided no authentication is done --qingstor-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M)
--rc-key string SSL PEM Private key --qingstor-zone string Zone to connect to.
--rc-max-header-bytes int Maximum size of request header (default 4096) -q, --quiet Print as little stuff as possible
--rc-no-auth Don't require auth for certain methods. --rc Enable the remote control server.
--rc-pass string Password for authentication. --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
--rc-realm string realm for authentication (default "rclone") --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--rc-serve Enable the serving of remote objects. --rc-client-ca string Client certificate authority to verify clients with
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) --rc-files string Path to local files to serve on the HTTP server.
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) --rc-htpasswd string htpasswd file - if not provided no authentication is done
--rc-user string User name for authentication. --rc-key string SSL PEM Private key
--retries int Retry operations this many times if they fail (default 3) --rc-max-header-bytes int Maximum size of request header (default 4096)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) --rc-no-auth Don't require auth for certain methods.
--s3-access-key-id string AWS Access Key ID. --rc-pass string Password for authentication.
--s3-acl string Canned ACL used when creating buckets and storing or copying objects. --rc-realm string realm for authentication (default "rclone")
--s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M) --rc-serve Enable the serving of remote objects.
--s3-disable-checksum Don't store MD5 checksum with object metadata --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--s3-endpoint string Endpoint for S3 API. --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). --rc-user string User name for authentication.
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true) --retries int Retry operations this many times if they fail (default 3)
--s3-location-constraint string Location constraint - must be set to match the Region. --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--s3-provider string Choose your S3 provider. --s3-access-key-id string AWS Access Key ID.
--s3-region string Region to connect to. --s3-acl string Canned ACL used when creating buckets and storing or copying objects.
--s3-secret-access-key string AWS Secret Access Key (password) --s3-bucket-acl string Canned ACL used when creating buckets.
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
--s3-session-token string An AWS session token --s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. --s3-endpoint string Endpoint for S3 API.
--s3-storage-class string The storage class to use when storing new objects in S3. --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2) --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--s3-v2-auth If true use v2 authentication. --s3-location-constraint string Location constraint - must be set to match the Region.
--sftp-ask-password Allow asking for SFTP password when needed. --s3-provider string Choose your S3 provider.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. --s3-region string Region to connect to.
--sftp-host string SSH host to connect to --s3-secret-access-key string AWS Secret Access Key (password)
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent. --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--sftp-pass string SSH password, leave blank to use ssh-agent. --s3-session-token string An AWS session token
--sftp-path-override string Override path used by SSH connection. --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--sftp-port string SSH port, leave blank to use default (22) --s3-storage-class string The storage class to use when storing new objects in S3.
--sftp-set-modtime Set the modified time on the remote if set. (default true) --s3-upload-concurrency int Concurrency for multipart uploads. (default 4)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker. --s3-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M)
--sftp-user string SSH username, leave blank for current username, ncw --s3-v2-auth If true use v2 authentication.
--size-only Skip based on size only, not mod-time or checksum --sftp-ask-password Allow asking for SFTP password when needed.
--skip-links Don't warn about skipped symlinks. --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) --sftp-host string SSH host to connect to
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40) --sftp-key-file string Path to PEM-encoded private key file, leave blank or set key-use-agent to use ssh-agent.
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") --sftp-key-file-pass string The passphrase to decrypt the PEM-encoded private key file.
--stats-one-line Make the stats fit on one line. --sftp-key-use-agent When set forces the usage of the ssh-agent.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") --sftp-pass string SSH password, leave blank to use ssh-agent.
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) --sftp-path-override string Override path used by SSH connection.
--suffix string Suffix for use with --backup-dir. --sftp-port string SSH port, leave blank to use default (22)
--swift-auth string Authentication URL for server (OS_AUTH_URL). --sftp-set-modtime Set the modified time on the remote if set. (default true)
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) --sftp-user string SSH username, leave blank for current username, ncw
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) --size-only Skip based on size only, not mod-time or checksum
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) --skip-links Don't warn about skipped symlinks.
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form. --stats-file-name-length int Max file name length in stats. 0 for no limit (default 45)
--swift-key string API key or password (OS_PASSWORD). --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--swift-region string Region name - optional (OS_REGION_NAME) --stats-one-line Make the stats fit on one line.
--swift-storage-policy string The storage policy to use when creating a new container --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL) --streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) --suffix string Suffix for use with --backup-dir.
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) --swift-application-credential-id string Application Credential ID (OS_APPLICATION_CREDENTIAL_ID)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) --swift-application-credential-name string Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME)
--swift-user string User name to log in (OS_USERNAME). --swift-application-credential-secret string Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET)
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). --swift-auth string Authentication URL for server (OS_AUTH_URL).
--syslog Use Syslog for logging --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--timeout duration IO idle timeout (default 5m0s) --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--tpslimit float Limit HTTP transactions per second to this. --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--track-renames When synchronizing, track file renames and do a server side move if possible --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--transfers int Number of file transfers to run in parallel. (default 4) --swift-key string API key or password (OS_PASSWORD).
--union-remotes string List of space separated remotes. --swift-no-chunk Don't chunk files during streaming upload.
-u, --update Skip files that are newer on the destination. --swift-region string Region name - optional (OS_REGION_NAME)
--use-server-modtime Use server modified time instead of object metadata --swift-storage-policy string The storage policy to use when creating a new container
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.45") --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
-v, --verbose count Print lots more stuff (repeat for more) --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon) --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--webdav-pass string Password. --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--webdav-url string URL of http host to connect to --swift-user string User name to log in (OS_USERNAME).
--webdav-user string User name --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--webdav-vendor string Name of the Webdav site/service/software you are using --syslog Use Syslog for logging
--yandex-client-id string Yandex Client Id --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--yandex-client-secret string Yandex Client Secret --timeout duration IO idle timeout (default 5m0s)
--yandex-unlink Remove existing public link to file/folder with link command rather than creating. --tpslimit float Limit HTTP transactions per second to this.
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
--union-remotes string List of space separated remotes.
-u, --update Skip files that are newer on the destination.
--use-cookies Enable session cookiejar.
--use-mmap Use mmap allocator (see docs).
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
--yandex-unlink Remove existing public link to file/folder with link command rather than creating.
``` ```
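Purely as an illustration of how a few of the global flags above combine on one command line (the remote names `src:` and `dst:` are placeholders, not part of the documentation), the command is built as a string here so it can be inspected without contacting any remote:

```shell
# Hypothetical sketch only: "src:" and "dst:" are placeholder remote names.
# The command is stored in a variable and echoed rather than executed, so
# no configured remote or network access is required.
cmd="rclone sync src:path dst:path \
  --transfers 8 \
  --max-age 7d \
  --stats 1m \
  --dry-run"

# Print the assembled command for inspection.
echo "$cmd"
```

With `--dry-run` present, running the assembled command would report what would be transferred without making permanent changes.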
### SEE ALSO

* [rclone serve](/commands/rclone_serve/) - Serve a remote over a protocol.

###### Auto generated by spf13/cobra on 9-Feb-2019

View File

@ -1,5 +1,5 @@
---
date: 2019-02-09T10:42:18Z
title: "rclone serve webdav"
slug: rclone_serve_webdav
url: /commands/rclone_serve_webdav/
@ -137,6 +137,7 @@ may find that you need one or the other or both.
    --vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s)
    --vfs-cache-mode string Cache mode off|minimal|writes|full (default "off")
    --vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s)
--vfs-cache-max-size int Max total size of objects in the cache. (default off)
If run with `-vv` rclone will print the location of the file cache. The
files are stored in the user cache file area which is OS dependent but
@ -152,6 +153,11 @@ closed so if rclone is quit or dies with open files then these won't
get written back to the remote. However they will still be in the on
disk cache.
If using --vfs-cache-max-size note that the cache may exceed this size
for two reasons. Firstly because it is only checked every
--vfs-cache-poll-interval. Secondly because open files cannot be
evicted from the cache.
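As a sketch of how the cache limits above combine (assuming a remote named `remote:` is already configured; the 10G cap is an arbitrary illustrative value), the command is built as a string so the flag combination can be inspected without starting a server:

```shell
# Hypothetical sketch: "remote:" is a placeholder for a configured remote.
# Stored in a variable and echoed rather than executed, so no server is
# actually started. Note the size cap is advisory: the cache is only
# checked every --vfs-cache-poll-interval, and open files cannot be evicted.
serve_cmd="rclone serve webdav remote:path \
  --vfs-cache-mode writes \
  --vfs-cache-max-size 10G \
  --vfs-cache-poll-interval 1m"

# Print the assembled command for inspection.
echo "$serve_cmd"
```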
#### --vfs-cache-mode off

In this mode the cache will read directly from the remote and write
@ -216,317 +222,338 @@ rclone serve webdav remote:path [flags]
### Options

```
      --addr string IPaddress:Port or :Port to bind server to. (default "localhost:8080")
      --cert string SSL PEM key (concatenation of certificate and CA certificate)
      --client-ca string Client certificate authority to verify clients with
      --dir-cache-time duration Time to cache directory entries for. (default 5m0s)
      --dir-perms FileMode Directory permissions (default 0777)
      --etag-hash string Which hash to use for the ETag, or auto or blank for off
      --file-perms FileMode File permissions (default 0666)
      --gid uint32 Override the gid field set by the filesystem. (default 502)
  -h, --help help for webdav
      --htpasswd string htpasswd file - if not provided no authentication is done
      --key string SSL PEM Private key
      --max-header-bytes int Maximum size of request header (default 4096)
      --no-checksum Don't compare checksums on up/download.
      --no-modtime Don't read/write the modification time (can speed things up).
      --no-seek Don't allow seeking in files.
      --pass string Password for authentication.
      --poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable. (default 1m0s)
      --read-only Mount read-only.
--server-write-timeout duration Timeout for server writing data (default 1h0m0s) --realm string realm for authentication (default "rclone")
--uid uint32 Override the uid field set by the filesystem. (default 502) --server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--umask int Override the permission bits set by the filesystem. (default 2) --server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--user string User name for authentication. --uid uint32 Override the uid field set by the filesystem. (default 502)
--vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s) --umask int Override the permission bits set by the filesystem. (default 2)
--vfs-cache-mode string Cache mode off|minimal|writes|full (default "off") --user string User name for authentication.
--vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s) --vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s)
--vfs-read-chunk-size int Read the source objects in chunks. (default 128M) --vfs-cache-max-size SizeSuffix Max total size of objects in the cache. (default off)
--vfs-read-chunk-size-limit int If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached. 'off' is unlimited. (default off) --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
--vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s)
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks. (default 128M)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached. 'off' is unlimited. (default off)
``` ```
### Options inherited from parent commands ### Options inherited from parent commands
``` ```
--acd-auth-url string Auth server URL. --acd-auth-url string Auth server URL.
--acd-client-id string Amazon Application Client ID. --acd-client-id string Amazon Application Client ID.
--acd-client-secret string Amazon Application Client Secret. --acd-client-secret string Amazon Application Client Secret.
--acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G) --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-token-url string Token server url. --acd-token-url string Token server url.
--acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s) --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--alias-remote string Remote or path to alias. --alias-remote string Remote or path to alias.
--ask-password Allow prompt for password for encrypted configuration. (default true) --ask-password Allow prompt for password for encrypted configuration. (default true)
--auto-confirm If enabled, do not request console confirmation. --auto-confirm If enabled, do not request console confirmation.
--azureblob-access-tier string Access tier of blob: hot, cool or archive. --azureblob-access-tier string Access tier of blob: hot, cool or archive.
--azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL) --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
--azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M) --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M)
--azureblob-endpoint string Endpoint for the service --azureblob-endpoint string Endpoint for the service
--azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL) --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
--azureblob-list-chunk int Size of blob list. (default 5000) --azureblob-list-chunk int Size of blob list. (default 5000)
--azureblob-sas-url string SAS URL for container level access only --azureblob-sas-url string SAS URL for container level access only
--azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M) --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M)
--b2-account string Account ID or Application Key ID --b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M) --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--b2-endpoint string Endpoint for the service. --b2-disable-checksum Disable checksums for large (> upload cutoff) files
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files. --b2-endpoint string Endpoint for the service.
--b2-key string Application Key --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging. --b2-key string Application Key
--b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M) --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
--b2-versions Include old versions in directory listings. --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M)
--backup-dir string Make backups into hierarchy based in DIR. --b2-versions Include old versions in directory listings.
--bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name. --backup-dir string Make backups into hierarchy based in DIR.
--box-client-id string Box App Client Id. --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
--box-client-secret string Box App Client Secret --box-client-id string Box App Client Id.
--box-commit-retries int Max number of times to try committing a multipart file. (default 100) --box-client-secret string Box App Client Secret
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M) --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
--buffer-size int In memory buffer size when reading files for each --transfer. (default 16M) --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable. --buffer-size SizeSuffix In memory buffer size when reading files for each --transfer. (default 16M)
--cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s) --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming. --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
--cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend") --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
--cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M) --cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend")
--cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G) --cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M)
--cache-db-path string Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend") --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G)
--cache-db-purge Clear all the cached data for this remote on start. --cache-db-path string Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend")
--cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s) --cache-db-purge Clear all the cached data for this remote on start.
--cache-dir string Directory rclone will use for caching. (default "$HOME/.cache/rclone") --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s) --cache-dir string Directory rclone will use for caching. (default "$HOME/.cache/rclone")
--cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
--cache-plex-password string The password of the Plex user --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server
--cache-plex-url string The URL of the Plex server --cache-plex-password string The password of the Plex user
--cache-plex-username string The username of the Plex user --cache-plex-url string The URL of the Plex server
--cache-read-retries int How many times to retry a read from a cache storage. (default 10) --cache-plex-username string The username of the Plex user
--cache-remote string Remote to cache. --cache-read-retries int How many times to retry a read from a cache storage. (default 10)
--cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1) --cache-remote string Remote to cache.
--cache-tmp-upload-path string Directory to keep temporary files until they are uploaded. --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1)
--cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s) --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded.
--cache-workers int How many workers should run in parallel to download chunks. (default 4) --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
--cache-writes Cache file data on writes through the FS --cache-workers int How many workers should run in parallel to download chunks. (default 4)
--checkers int Number of checkers to run in parallel. (default 8) --cache-writes Cache file data on writes through the FS
-c, --checksum Skip based on checksum & size, not mod-time & size --checkers int Number of checkers to run in parallel. (default 8)
--config string Config file. (default "/home/ncw/.rclone.conf") -c, --checksum Skip based on checksum (if available) & size, not mod-time & size
--contimeout duration Connect timeout (default 1m0s) --config string Config file. (default "/home/ncw/.rclone.conf")
-L, --copy-links Follow symlinks and copy the pointed to item. --contimeout duration Connect timeout (default 1m0s)
--cpuprofile string Write cpu profile to file -L, --copy-links Follow symlinks and copy the pointed to item.
--crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true) --cpuprofile string Write cpu profile to file
--crypt-filename-encryption string How to encrypt the filenames. (default "standard") --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--crypt-password string Password or pass phrase for encryption. --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--crypt-password2 string Password or pass phrase for salt. Optional but recommended. --crypt-password string Password or pass phrase for encryption.
--crypt-remote string Remote to encrypt/decrypt. --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
--crypt-show-mapping For all files listed show how the names encrypt. --crypt-remote string Remote to encrypt/decrypt.
--delete-after When synchronizing, delete files on destination after transferring (default) --crypt-show-mapping For all files listed show how the names encrypt.
--delete-before When synchronizing, delete files on destination before transferring --delete-after When synchronizing, delete files on destination after transferring (default)
--delete-during When synchronizing, delete files during transfer --delete-before When synchronizing, delete files on destination before transferring
--delete-excluded Delete files on dest excluded from sync --delete-during When synchronizing, delete files during transfer
--disable string Disable a comma separated list of features. Use help to see a list. --delete-excluded Delete files on dest excluded from sync
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded. --disable string Disable a comma separated list of features. Use help to see a list.
--drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time. --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--drive-alternate-export Use alternate export URLs for google documents export. --drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-auth-owner-only Only consider files owned by the authenticated user. --drive-alternate-export Use alternate export URLs for google documents export.
--drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M) --drive-client-id string Google Application Client Id
--drive-client-id string Google Application Client Id --drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-client-secret string Google Application Client Secret --drive-client-id string Google Application Client Id
--drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") --drive-client-secret string Google Application Client Secret
--drive-formats string Deprecated: see export_formats --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-impersonate string Impersonate this user when using a service account. --drive-formats string Deprecated: see export_formats
--drive-import-formats string Comma separated list of preferred formats for uploading Google docs. --drive-impersonate string Impersonate this user when using a service account.
--drive-keep-revision-forever Keep new head revision of each file forever. --drive-import-formats string Comma separated list of preferred formats for uploading Google docs.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000) --drive-keep-revision-forever Keep new head revision of each file forever.
--drive-root-folder-id string ID of the root folder --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-scope string Scope that rclone should use when requesting access from drive. --drive-pacer-burst int Number of API calls to allow without sleeping. (default 100)
--drive-service-account-credentials string Service Account Credentials JSON blob --drive-pacer-min-sleep Duration Minimum time to sleep between API calls. (default 100ms)
--drive-service-account-file string Service Account Credentials JSON file path --drive-root-folder-id string ID of the root folder
--drive-shared-with-me Only show files that are shared with me. --drive-scope string Scope that rclone should use when requesting access from drive.
--drive-skip-gdocs Skip google documents in all listings. --drive-service-account-credentials string Service Account Credentials JSON blob
--drive-team-drive string ID of the Team Drive --drive-service-account-file string Service Account Credentials JSON file path
--drive-trashed-only Only show files that are in the trash. --drive-shared-with-me Only show files that are shared with me.
--drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M) --drive-skip-gdocs Skip google documents in all listings.
--drive-use-created-date Use file created date instead of modified date. --drive-team-drive string ID of the Team Drive
--drive-use-trash Send files to the trash instead of deleting permanently. (default true) --drive-trashed-only Only show files that are in the trash.
--drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off) --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M) --drive-use-created-date Use file created date instead of modified date.
--dropbox-client-id string Dropbox App Client Id --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--dropbox-client-secret string Dropbox App Client Secret --drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off)
--dropbox-impersonate string Impersonate this user when using a business account. --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M)
-n, --dry-run Do a trial run with no permanent changes --dropbox-client-id string Dropbox App Client Id
--dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles --dropbox-client-secret string Dropbox App Client Secret
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info --dropbox-impersonate string Impersonate this user when using a business account.
--dump-headers Dump HTTP bodies - may contain sensitive info -n, --dry-run Do a trial run with no permanent changes
--exclude stringArray Exclude files matching pattern --dump DumpFlags List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--exclude-from stringArray Read exclude patterns from file --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--exclude-if-present string Exclude directories if filename is present --dump-headers Dump HTTP bodies - may contain sensitive info
--fast-list Use recursive list if available. Uses more memory but fewer transactions. --exclude stringArray Exclude files matching pattern
--files-from stringArray Read list of source-file names from file --exclude-from stringArray Read exclude patterns from file
-f, --filter stringArray Add a file-filtering rule --exclude-if-present string Exclude directories if filename is present
--filter-from stringArray Read filtering patterns from a file --fast-list Use recursive list if available. Uses more memory but fewer transactions.
--ftp-host string FTP host to connect to --files-from stringArray Read list of source-file names from file
--ftp-pass string FTP password -f, --filter stringArray Add a file-filtering rule
--ftp-port string FTP port, leave blank to use default (21) --filter-from stringArray Read filtering patterns from a file
--ftp-user string FTP username, leave blank for current username, $USER --ftp-host string FTP host to connect to
--gcs-bucket-acl string Access Control List for new buckets. --ftp-pass string FTP password
--gcs-client-id string Google Application Client Id --ftp-port string FTP port, leave blank to use default (21)
--gcs-client-secret string Google Application Client Secret --ftp-user string FTP username, leave blank for current username, $USER
--gcs-location string Location for the newly created buckets. --gcs-bucket-acl string Access Control List for new buckets.
--gcs-object-acl string Access Control List for new objects. --gcs-client-id string Google Application Client Id
--gcs-project-number string Project number. --gcs-client-secret string Google Application Client Secret
--gcs-service-account-file string Service Account Credentials JSON file path --gcs-location string Location for the newly created buckets.
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage. --gcs-object-acl string Access Control List for new objects.
--http-url string URL of http host to connect to --gcs-project-number string Project number.
--hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) --gcs-service-account-file string Service Account Credentials JSON file path
--hubic-client-id string Hubic Client Id --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--hubic-client-secret string Hubic Client Secret --http-url string URL of http host to connect to
--ignore-case Ignore case in filters (case insensitive) --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--ignore-checksum Skip post copy check of checksums. --hubic-client-id string Hubic Client Id
--ignore-errors delete even if there are I/O errors --hubic-client-secret string Hubic Client Secret
--ignore-existing Skip all files that exist on destination --hubic-no-chunk Don't chunk files during streaming upload.
--ignore-size Ignore size when skipping use mod-time or checksum. --ignore-case Ignore case in filters (case insensitive)
-I, --ignore-times Don't skip files that match size and time - transfer all files --ignore-checksum Skip post copy check of checksums.
--immutable Do not modify files. Fail if existing files have been modified. --ignore-errors delete even if there are I/O errors
--include stringArray Include files matching pattern --ignore-existing Skip all files that exist on destination
--include-from stringArray Read include patterns from file --ignore-size Ignore size when skipping use mod-time or checksum.
--jottacloud-hard-delete Delete files permanently rather than putting them into the trash. -I, --ignore-times Don't skip files that match size and time - transfer all files
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M) --immutable Do not modify files. Fail if existing files have been modified.
--jottacloud-mountpoint string The mountpoint to use. --include stringArray Include files matching pattern
--jottacloud-pass string Password. --include-from stringArray Read include patterns from file
--jottacloud-unlink Remove existing public link to file/folder with link command rather than creating. --jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
--jottacloud-user string User Name --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--local-no-check-updated Don't check to see if the files change during upload --jottacloud-mountpoint string The mountpoint to use.
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated) --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
--local-nounc string Disable UNC (long path names) conversion on Windows --jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fails. (default 10M)
--log-file string Log everything to this file --jottacloud-user string User Name
--log-format string Comma separated list of log format options (default "date,time") -l, --links Translate symlinks to/from regular files with a '.rclonelink' extension
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") --local-no-check-updated Don't check to see if the files change during upload
--low-level-retries int Number of low level retries to do. (default 10) --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) --local-nounc string Disable UNC (long path names) conversion on Windows
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000) --log-file string Log everything to this file
--max-delete int When synchronizing, limit the number of deletes (default -1) --log-format string Comma separated list of log format options (default "date,time")
--max-depth int If set limits the recursion depth to this. (default -1) --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) --low-level-retries int Number of low level retries to do. (default 10)
--max-transfer int Maximum size of data to transfer. (default off) --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--mega-debug Output more debug from Mega. --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--mega-hard-delete Delete files permanently rather than putting them into the trash. --max-delete int When synchronizing, limit the number of deletes (default -1)
--mega-pass string Password. --max-depth int If set limits the recursion depth to this. (default -1)
--mega-user string User name --max-size SizeSuffix Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--memprofile string Write memory profile to file --max-transfer SizeSuffix Maximum size of data to transfer. (default off)
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) --mega-debug Output more debug from Mega.
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) --mega-hard-delete Delete files permanently rather than putting them into the trash.
--modify-window duration Max time diff to be considered the same (default 1ns) --mega-pass string Password.
--no-check-certificate Do not verify the server SSL certificate. Insecure. --mega-user string User name
--no-gzip-encoding Don't set Accept-Encoding: gzip. --memprofile string Write memory profile to file
--no-traverse Obsolete - does nothing. --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--no-update-modtime Don't update destination mod-time if files identical. --min-size SizeSuffix Only transfer files bigger than this in k or suffix b|k|M|G (default off)
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only). --modify-window duration Max time diff to be considered the same (default 1ns)
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M) --no-check-certificate Do not verify the server SSL certificate. Insecure.
--onedrive-client-id string Microsoft App Client Id --no-gzip-encoding Don't set Accept-Encoding: gzip.
--onedrive-client-secret string Microsoft App Client Secret --no-traverse Don't traverse destination file system on copy.
--onedrive-drive-id string The ID of the drive to use --no-update-modtime Don't update destination mod-time if files identical.
--onedrive-drive-type string The type of the drive ( personal | business | documentLibrary ) -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-expose-onenote-files Set to make OneNote files show up in directory listings. --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--opendrive-password string Password. --onedrive-client-id string Microsoft App Client Id
--opendrive-username string Username --onedrive-client-secret string Microsoft App Client Secret
--pcloud-client-id string Pcloud App Client Id --onedrive-drive-id string The ID of the drive to use
--pcloud-client-secret string Pcloud App Client Secret --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
-P, --progress Show progress during transfer. --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
--qingstor-access-key-id string QingStor Access Key ID --opendrive-password string Password.
--qingstor-connection-retries int Number of connection retries. (default 3) --opendrive-username string Username
--qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API. --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank. --pcloud-client-secret string Pcloud App Client Secret
--qingstor-secret-access-key string QingStor Secret Access Key (password) -P, --progress Show progress during transfer.
--qingstor-zone string Zone to connect to. --qingstor-access-key-id string QingStor Access Key ID
-q, --quiet Print as little stuff as possible --qingstor-chunk-size SizeSuffix Chunk size to use for uploading. (default 4M)
--rc Enable the remote control server. --qingstor-connection-retries int Number of connection retries. (default 3)
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") --qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate) --qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API.
--rc-client-ca string Client certificate authority to verify clients with --qingstor-secret-access-key string QingStor Secret Access Key (password)
--rc-files string Path to local files to serve on the HTTP server. --qingstor-upload-concurrency int Concurrency for multipart uploads. (default 1)
--rc-htpasswd string htpasswd file - if not provided no authentication is done --qingstor-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M)
--rc-key string SSL PEM Private key --qingstor-zone string Zone to connect to.
--rc-max-header-bytes int Maximum size of request header (default 4096) -q, --quiet Print as little stuff as possible
--rc-no-auth Don't require auth for certain methods. --rc Enable the remote control server.
--rc-pass string Password for authentication. --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
--rc-realm string realm for authentication (default "rclone") --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--rc-serve Enable the serving of remote objects. --rc-client-ca string Client certificate authority to verify clients with
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) --rc-files string Path to local files to serve on the HTTP server.
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) --rc-htpasswd string htpasswd file - if not provided no authentication is done
--rc-user string User name for authentication. --rc-key string SSL PEM Private key
--retries int Retry operations this many times if they fail (default 3) --rc-max-header-bytes int Maximum size of request header (default 4096)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) --rc-no-auth Don't require auth for certain methods.
--s3-access-key-id string AWS Access Key ID. --rc-pass string Password for authentication.
--s3-acl string Canned ACL used when creating buckets and storing or copying objects. --rc-realm string realm for authentication (default "rclone")
--s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M) --rc-serve Enable the serving of remote objects.
--s3-disable-checksum Don't store MD5 checksum with object metadata --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--s3-endpoint string Endpoint for S3 API. --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). --rc-user string User name for authentication.
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true) --retries int Retry operations this many times if they fail (default 3)
--s3-location-constraint string Location constraint - must be set to match the Region. --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--s3-provider string Choose your S3 provider. --s3-access-key-id string AWS Access Key ID.
--s3-region string Region to connect to. --s3-acl string Canned ACL used when creating buckets and storing or copying objects.
--s3-secret-access-key string AWS Secret Access Key (password) --s3-bucket-acl string Canned ACL used when creating buckets.
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
--s3-session-token string An AWS session token --s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. --s3-endpoint string Endpoint for S3 API.
--s3-storage-class string The storage class to use when storing new objects in S3. --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2) --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--s3-v2-auth If true use v2 authentication. --s3-location-constraint string Location constraint - must be set to match the Region.
--sftp-ask-password Allow asking for SFTP password when needed. --s3-provider string Choose your S3 provider.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. --s3-region string Region to connect to.
--sftp-host string SSH host to connect to --s3-secret-access-key string AWS Secret Access Key (password)
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent. --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--sftp-pass string SSH password, leave blank to use ssh-agent. --s3-session-token string An AWS session token
--sftp-path-override string Override path used by SSH connection. --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--sftp-port string SSH port, leave blank to use default (22) --s3-storage-class string The storage class to use when storing new objects in S3.
--sftp-set-modtime Set the modified time on the remote if set. (default true) --s3-upload-concurrency int Concurrency for multipart uploads. (default 4)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker. --s3-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M)
--sftp-user string SSH username, leave blank for current username, ncw --s3-v2-auth If true use v2 authentication.
--size-only Skip based on size only, not mod-time or checksum --sftp-ask-password Allow asking for SFTP password when needed.
--skip-links Don't warn about skipped symlinks. --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) --sftp-host string SSH host to connect to
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40) --sftp-key-file string Path to PEM-encoded private key file, leave blank or set key-use-agent to use ssh-agent.
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") --sftp-key-file-pass string The passphrase to decrypt the PEM-encoded private key file.
--stats-one-line Make the stats fit on one line. --sftp-key-use-agent When set forces the usage of the ssh-agent.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") --sftp-pass string SSH password, leave blank to use ssh-agent.
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) --sftp-path-override string Override path used by SSH connection.
--suffix string Suffix for use with --backup-dir. --sftp-port string SSH port, leave blank to use default (22)
--swift-auth string Authentication URL for server (OS_AUTH_URL). --sftp-set-modtime Set the modified time on the remote if set. (default true)
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) --sftp-user string SSH username, leave blank for current username, ncw
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) --size-only Skip based on size only, not mod-time or checksum
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) --skip-links Don't warn about skipped symlinks.
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form. --stats-file-name-length int Max file name length in stats. 0 for no limit (default 45)
--swift-key string API key or password (OS_PASSWORD). --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--swift-region string Region name - optional (OS_REGION_NAME) --stats-one-line Make the stats fit on one line.
--swift-storage-policy string The storage policy to use when creating a new container --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL) --streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) --suffix string Suffix for use with --backup-dir.
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) --swift-application-credential-id string Application Credential ID (OS_APPLICATION_CREDENTIAL_ID)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) --swift-application-credential-name string Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME)
--swift-user string User name to log in (OS_USERNAME). --swift-application-credential-secret string Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET)
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). --swift-auth string Authentication URL for server (OS_AUTH_URL).
--syslog Use Syslog for logging --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--timeout duration IO idle timeout (default 5m0s) --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--tpslimit float Limit HTTP transactions per second to this. --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--track-renames When synchronizing, track file renames and do a server side move if possible --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--transfers int Number of file transfers to run in parallel. (default 4) --swift-key string API key or password (OS_PASSWORD).
--union-remotes string List of space separated remotes. --swift-no-chunk Don't chunk files during streaming upload.
-u, --update Skip files that are newer on the destination. --swift-region string Region name - optional (OS_REGION_NAME)
--use-server-modtime Use server modified time instead of object metadata --swift-storage-policy string The storage policy to use when creating a new container
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.45") --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
-v, --verbose count Print lots more stuff (repeat for more) --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon) --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--webdav-pass string Password. --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--webdav-url string URL of http host to connect to --swift-user string User name to log in (OS_USERNAME).
--webdav-user string User name --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--webdav-vendor string Name of the Webdav site/service/software you are using --syslog Use Syslog for logging
--yandex-client-id string Yandex Client Id --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--yandex-client-secret string Yandex Client Secret --timeout duration IO idle timeout (default 5m0s)
--yandex-unlink Remove existing public link to file/folder with link command rather than creating. --tpslimit float Limit HTTP transactions per second to this.
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
--union-remotes string List of space separated remotes.
-u, --update Skip files that are newer on the destination.
--use-cookies Enable session cookiejar.
--use-mmap Use mmap allocator (see docs).
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
--yandex-unlink Remove existing public link to file/folder with link command rather than creating.
``` ```
### SEE ALSO ### SEE ALSO
* [rclone serve](/commands/rclone_serve/) - Serve a remote over a protocol. * [rclone serve](/commands/rclone_serve/) - Serve a remote over a protocol.
###### Auto generated by spf13/cobra on 24-Nov-2018 ###### Auto generated by spf13/cobra on 9-Feb-2019
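As a hedged illustration only (not part of the generated docs above): a few of the global flags in the list combine on the command line like this, where `remote:` is a placeholder for a remote previously set up with `rclone config`, and the paths are invented for the example.

```shell
# Sketch of combining global flags documented in the list above.
# "remote:" must already exist in your rclone config for this to run.
rclone copy /local/dir remote:backup \
  --transfers 8 \
  --bwlimit 10M \
  --stats 30s \
  --progress
```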
@ -1,5 +1,5 @@
---
date: 2019-02-09T10:42:18Z
title: "rclone settier"
slug: rclone_settier
url: /commands/rclone_settier/
@ -47,285 +47,303 @@ rclone settier tier remote:path [flags]
### Options inherited from parent commands
```
--acd-auth-url string Auth server URL.
--acd-client-id string Amazon Application Client ID.
--acd-client-secret string Amazon Application Client Secret.
--acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-token-url string Token server url.
--acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--alias-remote string Remote or path to alias.
--ask-password Allow prompt for password for encrypted configuration. (default true)
--auto-confirm If enabled, do not request console confirmation.
--azureblob-access-tier string Access tier of blob: hot, cool or archive.
--azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
--azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M)
--azureblob-endpoint string Endpoint for the service
--azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
--azureblob-list-chunk int Size of blob list. (default 5000)
--azureblob-sas-url string SAS URL for container level access only
--azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M)
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--b2-disable-checksum Disable checksums for large (> upload cutoff) files
--b2-endpoint string Endpoint for the service.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-key string Application Key
--b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
--b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M)
--b2-versions Include old versions in directory listings.
--backup-dir string Make backups into hierarchy based in DIR.
--bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
--box-client-id string Box App Client Id.
--box-client-secret string Box App Client Secret
--box-commit-retries int Max number of times to try committing a multipart file. (default 100)
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
--buffer-size SizeSuffix In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
--cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend")
--cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M)
--cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G)
--cache-db-path string Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend")
--cache-db-purge Clear all the cached data for this remote on start.
--cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string Directory rclone will use for caching. (default "$HOME/.cache/rclone")
--cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
--cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server
--cache-plex-password string The password of the Plex user
--cache-plex-url string The URL of the Plex server
--cache-plex-username string The username of the Plex user
--cache-read-retries int How many times to retry a read from a cache storage. (default 10)
--cache-remote string Remote to cache.
--cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1)
--cache-tmp-upload-path string Directory to keep temporary files until they are uploaded.
--cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
--cache-workers int How many workers should run in parallel to download chunks. (default 4)
--cache-writes Cache file data on writes through the FS
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum (if available) & size, not mod-time & size
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
-L, --copy-links Follow symlinks and copy the pointed to item.
--cpuprofile string Write cpu profile to file
--crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--crypt-password string Password or pass phrase for encryption.
--crypt-password2 string Password or pass phrase for salt. Optional but recommended.
--crypt-remote string Remote to encrypt/decrypt.
--crypt-show-mapping For all files listed show how the names encrypt.
--delete-after When synchronizing, delete files on destination after transferring (default)
--delete-before When synchronizing, delete files on destination before transferring
--delete-during When synchronizing, delete files during transfer
--delete-excluded Delete files on dest excluded from sync
--disable string Disable a comma separated list of features. Use help to see a list.
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
--drive-alternate-export Use alternate export URLs for google documents export.,
--drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M)
--drive-client-id string Google Application Client Id
--drive-client-secret string Google Application Client Secret
--drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-formats string Deprecated: see export_formats
--drive-impersonate string Impersonate this user when using a service account.
--drive-import-formats string Comma separated list of preferred formats for uploading Google docs.
--drive-keep-revision-forever Keep new head revision of each file forever.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-pacer-burst int Number of API calls to allow without sleeping. (default 100)
--drive-pacer-min-sleep Duration Minimum time to sleep between API calls. (default 100ms)
--drive-root-folder-id string ID of the root folder
--drive-scope string Scope that rclone should use when requesting access from drive.
--drive-service-account-credentials string Service Account Credentials JSON blob
--drive-service-account-file string Service Account Credentials JSON file path
--drive-shared-with-me Only show files that are shared with me.
--drive-skip-gdocs Skip google documents in all listings.
--drive-team-drive string ID of the Team Drive
--drive-trashed-only Only show files that are in the trash.
--drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--drive-use-created-date Use file created date instead of modified date.,
--drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off)
--dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M)
--dropbox-client-id string Dropbox App Client Id
--dropbox-client-secret string Dropbox App Client Secret
--dropbox-impersonate string Impersonate this user when using a business account.
-n, --dry-run Do a trial run with no permanent changes
--dump DumpFlags List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--dump-headers Dump HTTP bodies - may contain sensitive info
--exclude stringArray Exclude files matching pattern
--exclude-from stringArray Read exclude patterns from file
--exclude-if-present string Exclude directories if filename is present
--fast-list Use recursive list if available. Uses more memory but fewer transactions.
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
--ftp-host string FTP host to connect to
--ftp-pass string FTP password
--ftp-port string FTP port, leave blank to use default (21)
--ftp-user string FTP username, leave blank for current username, $USER
--gcs-bucket-acl string Access Control List for new buckets.
--gcs-client-id string Google Application Client Id
--gcs-client-secret string Google Application Client Secret
--gcs-location string Location for the newly created buckets.
--gcs-object-acl string Access Control List for new objects.
--gcs-project-number string Project number.
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-url string URL of http host to connect to
--hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--hubic-client-id string Hubic Client Id
--hubic-client-secret string Hubic Client Secret
--hubic-no-chunk Don't chunk files during streaming upload.
--ignore-case Ignore case in filters (case insensitive)
--ignore-checksum Skip post copy check of checksums.
--immutable Do not modify files. Fail if existing files have been modified. --ignore-errors delete even if there are I/O errors
--include stringArray Include files matching pattern --ignore-existing Skip all files that exist on destination
--include-from stringArray Read include patterns from file --ignore-size Ignore size when skipping use mod-time or checksum.
--jottacloud-hard-delete Delete files permanently rather than putting them into the trash. -I, --ignore-times Don't skip files that match size and time - transfer all files
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M) --immutable Do not modify files. Fail if existing files have been modified.
--jottacloud-mountpoint string The mountpoint to use. --include stringArray Include files matching pattern
--jottacloud-pass string Password. --include-from stringArray Read include patterns from file
--jottacloud-unlink Remove existing public link to file/folder with link command rather than creating. --jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
--jottacloud-user string User Name --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--local-no-check-updated Don't check to see if the files change during upload --jottacloud-mountpoint string The mountpoint to use.
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated) --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
--local-nounc string Disable UNC (long path names) conversion on Windows --jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fail's. (default 10M)
--log-file string Log everything to this file --jottacloud-user string User Name:
--log-format string Comma separated list of log format options (default "date,time") -l, --links Translate symlinks to/from regular files with a '.rclonelink' extension
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") --local-no-check-updated Don't check to see if the files change during upload
--low-level-retries int Number of low level retries to do. (default 10) --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) --local-nounc string Disable UNC (long path names) conversion on Windows
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000) --log-file string Log everything to this file
--max-delete int When synchronizing, limit the number of deletes (default -1) --log-format string Comma separated list of log format options (default "date,time")
--max-depth int If set limits the recursion depth to this. (default -1) --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) --low-level-retries int Number of low level retries to do. (default 10)
--max-transfer int Maximum size of data to transfer. (default off) --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--mega-debug Output more debug from Mega. --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--mega-hard-delete Delete files permanently rather than putting them into the trash. --max-delete int When synchronizing, limit the number of deletes (default -1)
--mega-pass string Password. --max-depth int If set limits the recursion depth to this. (default -1)
--mega-user string User name --max-size SizeSuffix Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--memprofile string Write memory profile to file --max-transfer SizeSuffix Maximum size of data to transfer. (default off)
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) --mega-debug Output more debug from Mega.
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) --mega-hard-delete Delete files permanently rather than putting them into the trash.
--modify-window duration Max time diff to be considered the same (default 1ns) --mega-pass string Password.
--no-check-certificate Do not verify the server SSL certificate. Insecure. --mega-user string User name
--no-gzip-encoding Don't set Accept-Encoding: gzip. --memprofile string Write memory profile to file
--no-traverse Obsolete - does nothing. --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--no-update-modtime Don't update destination mod-time if files identical. --min-size SizeSuffix Only transfer files bigger than this in k or suffix b|k|M|G (default off)
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only). --modify-window duration Max time diff to be considered the same (default 1ns)
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M) --no-check-certificate Do not verify the server SSL certificate. Insecure.
--onedrive-client-id string Microsoft App Client Id --no-gzip-encoding Don't set Accept-Encoding: gzip.
--onedrive-client-secret string Microsoft App Client Secret --no-traverse Don't traverse destination file system on copy.
--onedrive-drive-id string The ID of the drive to use --no-update-modtime Don't update destination mod-time if files identical.
--onedrive-drive-type string The type of the drive ( personal | business | documentLibrary ) -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-expose-onenote-files Set to make OneNote files show up in directory listings. --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--opendrive-password string Password. --onedrive-client-id string Microsoft App Client Id
--opendrive-username string Username --onedrive-client-secret string Microsoft App Client Secret
--pcloud-client-id string Pcloud App Client Id --onedrive-drive-id string The ID of the drive to use
--pcloud-client-secret string Pcloud App Client Secret --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
-P, --progress Show progress during transfer. --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
--qingstor-access-key-id string QingStor Access Key ID --opendrive-password string Password.
--qingstor-connection-retries int Number of connection retries. (default 3) --opendrive-username string Username
--qingstor-endpoint string Enter a endpoint URL to connection QingStor API. --pcloud-client-id string Pcloud App Client Id
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank. --pcloud-client-secret string Pcloud App Client Secret
--qingstor-secret-access-key string QingStor Secret Access Key (password) -P, --progress Show progress during transfer.
--qingstor-zone string Zone to connect to. --qingstor-access-key-id string QingStor Access Key ID
-q, --quiet Print as little stuff as possible --qingstor-chunk-size SizeSuffix Chunk size to use for uploading. (default 4M)
--rc Enable the remote control server. --qingstor-connection-retries int Number of connection retries. (default 3)
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") --qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate) --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
--rc-client-ca string Client certificate authority to verify clients with --qingstor-secret-access-key string QingStor Secret Access Key (password)
--rc-files string Path to local files to serve on the HTTP server. --qingstor-upload-concurrency int Concurrency for multipart uploads. (default 1)
--rc-htpasswd string htpasswd file - if not provided no authentication is done --qingstor-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M)
--rc-key string SSL PEM Private key --qingstor-zone string Zone to connect to.
--rc-max-header-bytes int Maximum size of request header (default 4096) -q, --quiet Print as little stuff as possible
--rc-no-auth Don't require auth for certain methods. --rc Enable the remote control server.
--rc-pass string Password for authentication. --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
--rc-realm string realm for authentication (default "rclone") --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--rc-serve Enable the serving of remote objects. --rc-client-ca string Client certificate authority to verify clients with
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) --rc-files string Path to local files to serve on the HTTP server.
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) --rc-htpasswd string htpasswd file - if not provided no authentication is done
--rc-user string User name for authentication. --rc-key string SSL PEM Private key
--retries int Retry operations this many times if they fail (default 3) --rc-max-header-bytes int Maximum size of request header (default 4096)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) --rc-no-auth Don't require auth for certain methods.
--s3-access-key-id string AWS Access Key ID. --rc-pass string Password for authentication.
--s3-acl string Canned ACL used when creating buckets and storing or copying objects. --rc-realm string realm for authentication (default "rclone")
--s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M) --rc-serve Enable the serving of remote objects.
--s3-disable-checksum Don't store MD5 checksum with object metadata --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--s3-endpoint string Endpoint for S3 API. --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). --rc-user string User name for authentication.
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true) --retries int Retry operations this many times if they fail (default 3)
--s3-location-constraint string Location constraint - must be set to match the Region. --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--s3-provider string Choose your S3 provider. --s3-access-key-id string AWS Access Key ID.
--s3-region string Region to connect to. --s3-acl string Canned ACL used when creating buckets and storing or copying objects.
--s3-secret-access-key string AWS Secret Access Key (password) --s3-bucket-acl string Canned ACL used when creating buckets.
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
--s3-session-token string An AWS session token --s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. --s3-endpoint string Endpoint for S3 API.
--s3-storage-class string The storage class to use when storing new objects in S3. --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2) --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--s3-v2-auth If true use v2 authentication. --s3-location-constraint string Location constraint - must be set to match the Region.
--sftp-ask-password Allow asking for SFTP password when needed. --s3-provider string Choose your S3 provider.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. --s3-region string Region to connect to.
--sftp-host string SSH host to connect to --s3-secret-access-key string AWS Secret Access Key (password)
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent. --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--sftp-pass string SSH password, leave blank to use ssh-agent. --s3-session-token string An AWS session token
--sftp-path-override string Override path used by SSH connection. --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--sftp-port string SSH port, leave blank to use default (22) --s3-storage-class string The storage class to use when storing new objects in S3.
--sftp-set-modtime Set the modified time on the remote if set. (default true) --s3-upload-concurrency int Concurrency for multipart uploads. (default 4)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker. --s3-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M)
--sftp-user string SSH username, leave blank for current username, ncw --s3-v2-auth If true use v2 authentication.
--size-only Skip based on size only, not mod-time or checksum --sftp-ask-password Allow asking for SFTP password when needed.
--skip-links Don't warn about skipped symlinks. --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) --sftp-host string SSH host to connect to
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40) --sftp-key-file string Path to PEM-encoded private key file, leave blank or set key-use-agent to use ssh-agent.
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") --sftp-key-file-pass string The passphrase to decrypt the PEM-encoded private key file.
--stats-one-line Make the stats fit on one line. --sftp-key-use-agent When set forces the usage of the ssh-agent.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") --sftp-pass string SSH password, leave blank to use ssh-agent.
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) --sftp-path-override string Override path used by SSH connection.
--suffix string Suffix for use with --backup-dir. --sftp-port string SSH port, leave blank to use default (22)
--swift-auth string Authentication URL for server (OS_AUTH_URL). --sftp-set-modtime Set the modified time on the remote if set. (default true)
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) --sftp-user string SSH username, leave blank for current username, ncw
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) --size-only Skip based on size only, not mod-time or checksum
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) --skip-links Don't warn about skipped symlinks.
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form. --stats-file-name-length int Max file name length in stats. 0 for no limit (default 45)
--swift-key string API key or password (OS_PASSWORD). --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--swift-region string Region name - optional (OS_REGION_NAME) --stats-one-line Make the stats fit on one line.
--swift-storage-policy string The storage policy to use when creating a new container --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL) --streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) --suffix string Suffix for use with --backup-dir.
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) --swift-application-credential-id string Application Credential ID (OS_APPLICATION_CREDENTIAL_ID)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) --swift-application-credential-name string Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME)
--swift-user string User name to log in (OS_USERNAME). --swift-application-credential-secret string Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET)
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). --swift-auth string Authentication URL for server (OS_AUTH_URL).
--syslog Use Syslog for logging --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--timeout duration IO idle timeout (default 5m0s) --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--tpslimit float Limit HTTP transactions per second to this. --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--track-renames When synchronizing, track file renames and do a server side move if possible --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--transfers int Number of file transfers to run in parallel. (default 4) --swift-key string API key or password (OS_PASSWORD).
--union-remotes string List of space separated remotes. --swift-no-chunk Don't chunk files during streaming upload.
-u, --update Skip files that are newer on the destination. --swift-region string Region name - optional (OS_REGION_NAME)
--use-server-modtime Use server modified time instead of object metadata --swift-storage-policy string The storage policy to use when creating a new container
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.45") --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
-v, --verbose count Print lots more stuff (repeat for more) --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon) --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--webdav-pass string Password. --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--webdav-url string URL of http host to connect to --swift-user string User name to log in (OS_USERNAME).
--webdav-user string User name --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--webdav-vendor string Name of the Webdav site/service/software you are using --syslog Use Syslog for logging
--yandex-client-id string Yandex Client Id --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--yandex-client-secret string Yandex Client Secret --timeout duration IO idle timeout (default 5m0s)
--yandex-unlink Remove existing public link to file/folder with link command rather than creating. --tpslimit float Limit HTTP transactions per second to this.
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
--union-remotes string List of space separated remotes.
-u, --update Skip files that are newer on the destination.
--use-cookies Enable session cookiejar.
--use-mmap Use mmap allocator (see docs).
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
--yandex-unlink Remove existing public link to file/folder with link command rather than creating.
```
### SEE ALSO
* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
###### Auto generated by spf13/cobra on 9-Feb-2019
@@ -1,5 +1,5 @@
---
date: 2019-02-09T10:42:18Z
title: "rclone sha1sum"
slug: rclone_sha1sum
url: /commands/rclone_sha1sum/
@@ -28,285 +28,303 @@
### Options inherited from parent commands
```
      --acd-auth-url string    Auth server URL.
      --acd-client-id string    Amazon Application Client ID.
      --acd-client-secret string    Amazon Application Client Secret.
      --acd-templink-threshold SizeSuffix    Files >= this size will be downloaded via their tempLink. (default 9G)
      --acd-token-url string    Token server url.
      --acd-upload-wait-per-gb Duration    Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
      --alias-remote string    Remote or path to alias.
      --ask-password    Allow prompt for password for encrypted configuration. (default true)
      --auto-confirm    If enabled, do not request console confirmation.
      --azureblob-access-tier string    Access tier of blob: hot, cool or archive.
      --azureblob-account string    Storage Account Name (leave blank to use connection string or SAS URL)
      --azureblob-chunk-size SizeSuffix    Upload chunk size (<= 100MB). (default 4M)
      --azureblob-endpoint string    Endpoint for the service
      --azureblob-key string    Storage Account Key (leave blank to use connection string or SAS URL)
      --azureblob-list-chunk int    Size of blob list. (default 5000)
      --azureblob-sas-url string    SAS URL for container level access only
      --azureblob-upload-cutoff SizeSuffix    Cutoff for switching to chunked upload (<= 256MB). (default 256M)
      --b2-account string    Account ID or Application Key ID
      --b2-chunk-size SizeSuffix    Upload chunk size. Must fit in memory. (default 96M)
      --b2-disable-checksum    Disable checksums for large (> upload cutoff) files
      --b2-endpoint string    Endpoint for the service.
      --b2-hard-delete    Permanently delete files on remote removal, otherwise hide files.
      --b2-key string    Application Key
      --b2-test-mode string    A flag string for X-Bz-Test-Mode header for debugging.
      --b2-upload-cutoff SizeSuffix    Cutoff for switching to chunked upload. (default 200M)
      --b2-versions    Include old versions in directory listings.
      --backup-dir string    Make backups into hierarchy based in DIR.
      --bind string    Local address to bind to for outgoing connections, IPv4, IPv6 or name.
      --box-client-id string    Box App Client Id.
      --box-client-secret string    Box App Client Secret
      --box-commit-retries int    Max number of times to try committing a multipart file. (default 100)
      --box-upload-cutoff SizeSuffix    Cutoff for switching to multipart upload (>= 50MB). (default 50M)
      --buffer-size SizeSuffix    In memory buffer size when reading files for each --transfer. (default 16M)
      --bwlimit BwTimetable    Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
      --cache-chunk-clean-interval Duration    How often should the cache perform cleanups of the chunk storage. (default 1m0s)
      --cache-chunk-no-memory    Disable the in-memory cache for storing chunks during streaming.
      --cache-chunk-path string    Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend")
      --cache-chunk-size SizeSuffix    The size of a chunk (partial file data). (default 5M)
      --cache-chunk-total-size SizeSuffix    The total size that the chunks can take up on the local disk. (default 10G)
      --cache-db-path string    Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend")
      --cache-db-purge    Clear all the cached data for this remote on start.
      --cache-db-wait-time Duration    How long to wait for the DB to be available - 0 is unlimited (default 1s)
      --cache-dir string    Directory rclone will use for caching. (default "$HOME/.cache/rclone")
      --cache-info-age Duration    How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
      --cache-plex-insecure string    Skip all certificate verifications when connecting to the Plex server
      --cache-plex-password string    The password of the Plex user
      --cache-plex-url string    The URL of the Plex server
      --cache-plex-username string    The username of the Plex user
      --cache-read-retries int    How many times to retry a read from a cache storage. (default 10)
      --cache-remote string    Remote to cache.
      --cache-rps int    Limits the number of requests per second to the source FS (-1 to disable) (default -1)
      --cache-tmp-upload-path string    Directory to keep temporary files until they are uploaded.
      --cache-tmp-wait-time Duration    How long should files be stored in local cache before being uploaded (default 15s)
      --cache-workers int    How many workers should run in parallel to download chunks. (default 4)
      --cache-writes    Cache file data on writes through the FS
      --checkers int    Number of checkers to run in parallel. (default 8)
  -c, --checksum    Skip based on checksum (if available) & size, not mod-time & size
      --config string    Config file. (default "/home/ncw/.rclone.conf")
      --contimeout duration    Connect timeout (default 1m0s)
--cpuprofile string Write cpu profile to file -L, --copy-links Follow symlinks and copy the pointed to item.
--crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true) --cpuprofile string Write cpu profile to file
--crypt-filename-encryption string How to encrypt the filenames. (default "standard") --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--crypt-password string Password or pass phrase for encryption. --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--crypt-password2 string Password or pass phrase for salt. Optional but recommended. --crypt-password string Password or pass phrase for encryption.
--crypt-remote string Remote to encrypt/decrypt. --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
--crypt-show-mapping For all files listed show how the names encrypt. --crypt-remote string Remote to encrypt/decrypt.
--delete-after When synchronizing, delete files on destination after transferring (default) --crypt-show-mapping For all files listed show how the names encrypt.
--delete-before When synchronizing, delete files on destination before transferring --delete-after When synchronizing, delete files on destination after transferring (default)
--delete-during When synchronizing, delete files during transfer --delete-before When synchronizing, delete files on destination before transferring
--delete-excluded Delete files on dest excluded from sync --delete-during When synchronizing, delete files during transfer
--disable string Disable a comma separated list of features. Use help to see a list. --delete-excluded Delete files on dest excluded from sync
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded. --disable string Disable a comma separated list of features. Use help to see a list.
--drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time. --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--drive-alternate-export Use alternate export URLs for google documents export., --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
--drive-auth-owner-only Only consider files owned by the authenticated user. --drive-alternate-export Use alternate export URLs for google documents export.,
--drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M) --drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-client-id string Google Application Client Id --drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M)
--drive-client-secret string Google Application Client Secret --drive-client-id string Google Application Client Id
--drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") --drive-client-secret string Google Application Client Secret
--drive-formats string Deprecated: see export_formats --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-impersonate string Impersonate this user when using a service account. --drive-formats string Deprecated: see export_formats
--drive-import-formats string Comma separated list of preferred formats for uploading Google docs. --drive-impersonate string Impersonate this user when using a service account.
--drive-keep-revision-forever Keep new head revision of each file forever. --drive-import-formats string Comma separated list of preferred formats for uploading Google docs.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000) --drive-keep-revision-forever Keep new head revision of each file forever.
--drive-root-folder-id string ID of the root folder --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-scope string Scope that rclone should use when requesting access from drive. --drive-pacer-burst int Number of API calls to allow without sleeping. (default 100)
--drive-service-account-credentials string Service Account Credentials JSON blob --drive-pacer-min-sleep Duration Minimum time to sleep between API calls. (default 100ms)
--drive-service-account-file string Service Account Credentials JSON file path --drive-root-folder-id string ID of the root folder
--drive-shared-with-me Only show files that are shared with me. --drive-scope string Scope that rclone should use when requesting access from drive.
--drive-skip-gdocs Skip google documents in all listings. --drive-service-account-credentials string Service Account Credentials JSON blob
--drive-team-drive string ID of the Team Drive --drive-service-account-file string Service Account Credentials JSON file path
--drive-trashed-only Only show files that are in the trash. --drive-shared-with-me Only show files that are shared with me.
--drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M) --drive-skip-gdocs Skip google documents in all listings.
--drive-use-created-date Use file created date instead of modified date., --drive-team-drive string ID of the Team Drive
--drive-use-trash Send files to the trash instead of deleting permanently. (default true) --drive-trashed-only Only show files that are in the trash.
--drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off) --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M) --drive-use-created-date Use file created date instead of modified date.,
--dropbox-client-id string Dropbox App Client Id --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--dropbox-client-secret string Dropbox App Client Secret --drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off)
--dropbox-impersonate string Impersonate this user when using a business account. --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M)
-n, --dry-run Do a trial run with no permanent changes --dropbox-client-id string Dropbox App Client Id
--dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles --dropbox-client-secret string Dropbox App Client Secret
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info --dropbox-impersonate string Impersonate this user when using a business account.
--dump-headers Dump HTTP bodies - may contain sensitive info -n, --dry-run Do a trial run with no permanent changes
--exclude stringArray Exclude files matching pattern --dump DumpFlags List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--exclude-from stringArray Read exclude patterns from file --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--exclude-if-present string Exclude directories if filename is present --dump-headers Dump HTTP bodies - may contain sensitive info
--fast-list Use recursive list if available. Uses more memory but fewer transactions. --exclude stringArray Exclude files matching pattern
--files-from stringArray Read list of source-file names from file --exclude-from stringArray Read exclude patterns from file
-f, --filter stringArray Add a file-filtering rule --exclude-if-present string Exclude directories if filename is present
--filter-from stringArray Read filtering patterns from a file --fast-list Use recursive list if available. Uses more memory but fewer transactions.
--ftp-host string FTP host to connect to --files-from stringArray Read list of source-file names from file
--ftp-pass string FTP password -f, --filter stringArray Add a file-filtering rule
--ftp-port string FTP port, leave blank to use default (21) --filter-from stringArray Read filtering patterns from a file
--ftp-user string FTP username, leave blank for current username, $USER --ftp-host string FTP host to connect to
--gcs-bucket-acl string Access Control List for new buckets. --ftp-pass string FTP password
--gcs-client-id string Google Application Client Id --ftp-port string FTP port, leave blank to use default (21)
--gcs-client-secret string Google Application Client Secret --ftp-user string FTP username, leave blank for current username, $USER
--gcs-location string Location for the newly created buckets. --gcs-bucket-acl string Access Control List for new buckets.
--gcs-object-acl string Access Control List for new objects. --gcs-client-id string Google Application Client Id
--gcs-project-number string Project number. --gcs-client-secret string Google Application Client Secret
--gcs-service-account-file string Service Account Credentials JSON file path --gcs-location string Location for the newly created buckets.
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage. --gcs-object-acl string Access Control List for new objects.
--http-url string URL of http host to connect to --gcs-project-number string Project number.
--hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) --gcs-service-account-file string Service Account Credentials JSON file path
--hubic-client-id string Hubic Client Id --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--hubic-client-secret string Hubic Client Secret --http-url string URL of http host to connect to
--ignore-case Ignore case in filters (case insensitive) --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--ignore-checksum Skip post copy check of checksums. --hubic-client-id string Hubic Client Id
--ignore-errors delete even if there are I/O errors --hubic-client-secret string Hubic Client Secret
--ignore-existing Skip all files that exist on destination --hubic-no-chunk Don't chunk files during streaming upload.
--ignore-size Ignore size when skipping use mod-time or checksum. --ignore-case Ignore case in filters (case insensitive)
-I, --ignore-times Don't skip files that match size and time - transfer all files --ignore-checksum Skip post copy check of checksums.
--immutable Do not modify files. Fail if existing files have been modified. --ignore-errors delete even if there are I/O errors
--include stringArray Include files matching pattern --ignore-existing Skip all files that exist on destination
--include-from stringArray Read include patterns from file --ignore-size Ignore size when skipping use mod-time or checksum.
--jottacloud-hard-delete Delete files permanently rather than putting them into the trash. -I, --ignore-times Don't skip files that match size and time - transfer all files
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M) --immutable Do not modify files. Fail if existing files have been modified.
--jottacloud-mountpoint string The mountpoint to use. --include stringArray Include files matching pattern
--jottacloud-pass string Password. --include-from stringArray Read include patterns from file
--jottacloud-unlink Remove existing public link to file/folder with link command rather than creating. --jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
--jottacloud-user string User Name --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--local-no-check-updated Don't check to see if the files change during upload --jottacloud-mountpoint string The mountpoint to use.
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated) --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
--local-nounc string Disable UNC (long path names) conversion on Windows --jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fail's. (default 10M)
--log-file string Log everything to this file --jottacloud-user string User Name:
--log-format string Comma separated list of log format options (default "date,time") -l, --links Translate symlinks to/from regular files with a '.rclonelink' extension
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") --local-no-check-updated Don't check to see if the files change during upload
--low-level-retries int Number of low level retries to do. (default 10) --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) --local-nounc string Disable UNC (long path names) conversion on Windows
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000) --log-file string Log everything to this file
--max-delete int When synchronizing, limit the number of deletes (default -1) --log-format string Comma separated list of log format options (default "date,time")
--max-depth int If set limits the recursion depth to this. (default -1) --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) --low-level-retries int Number of low level retries to do. (default 10)
--max-transfer int Maximum size of data to transfer. (default off) --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--mega-debug Output more debug from Mega. --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--mega-hard-delete Delete files permanently rather than putting them into the trash. --max-delete int When synchronizing, limit the number of deletes (default -1)
--mega-pass string Password. --max-depth int If set limits the recursion depth to this. (default -1)
--mega-user string User name --max-size SizeSuffix Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--memprofile string Write memory profile to file --max-transfer SizeSuffix Maximum size of data to transfer. (default off)
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) --mega-debug Output more debug from Mega.
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) --mega-hard-delete Delete files permanently rather than putting them into the trash.
--modify-window duration Max time diff to be considered the same (default 1ns) --mega-pass string Password.
--no-check-certificate Do not verify the server SSL certificate. Insecure. --mega-user string User name
--no-gzip-encoding Don't set Accept-Encoding: gzip. --memprofile string Write memory profile to file
--no-traverse Obsolete - does nothing. --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--no-update-modtime Don't update destination mod-time if files identical. --min-size SizeSuffix Only transfer files bigger than this in k or suffix b|k|M|G (default off)
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only). --modify-window duration Max time diff to be considered the same (default 1ns)
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M) --no-check-certificate Do not verify the server SSL certificate. Insecure.
--onedrive-client-id string Microsoft App Client Id --no-gzip-encoding Don't set Accept-Encoding: gzip.
--onedrive-client-secret string Microsoft App Client Secret --no-traverse Don't traverse destination file system on copy.
--onedrive-drive-id string The ID of the drive to use --no-update-modtime Don't update destination mod-time if files identical.
--onedrive-drive-type string The type of the drive ( personal | business | documentLibrary ) -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-expose-onenote-files Set to make OneNote files show up in directory listings. --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--opendrive-password string Password. --onedrive-client-id string Microsoft App Client Id
--opendrive-username string Username --onedrive-client-secret string Microsoft App Client Secret
--pcloud-client-id string Pcloud App Client Id --onedrive-drive-id string The ID of the drive to use
--pcloud-client-secret string Pcloud App Client Secret --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
-P, --progress Show progress during transfer. --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
--qingstor-access-key-id string QingStor Access Key ID --opendrive-password string Password.
--qingstor-connection-retries int Number of connection retries. (default 3) --opendrive-username string Username
--qingstor-endpoint string Enter a endpoint URL to connection QingStor API. --pcloud-client-id string Pcloud App Client Id
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank. --pcloud-client-secret string Pcloud App Client Secret
--qingstor-secret-access-key string QingStor Secret Access Key (password) -P, --progress Show progress during transfer.
--qingstor-zone string Zone to connect to. --qingstor-access-key-id string QingStor Access Key ID
-q, --quiet Print as little stuff as possible --qingstor-chunk-size SizeSuffix Chunk size to use for uploading. (default 4M)
--rc Enable the remote control server. --qingstor-connection-retries int Number of connection retries. (default 3)
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") --qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate) --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
--rc-client-ca string Client certificate authority to verify clients with --qingstor-secret-access-key string QingStor Secret Access Key (password)
--rc-files string Path to local files to serve on the HTTP server. --qingstor-upload-concurrency int Concurrency for multipart uploads. (default 1)
--rc-htpasswd string htpasswd file - if not provided no authentication is done --qingstor-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M)
--rc-key string SSL PEM Private key --qingstor-zone string Zone to connect to.
--rc-max-header-bytes int Maximum size of request header (default 4096) -q, --quiet Print as little stuff as possible
--rc-no-auth Don't require auth for certain methods. --rc Enable the remote control server.
--rc-pass string Password for authentication. --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
--rc-realm string realm for authentication (default "rclone") --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--rc-serve Enable the serving of remote objects. --rc-client-ca string Client certificate authority to verify clients with
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) --rc-files string Path to local files to serve on the HTTP server.
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) --rc-htpasswd string htpasswd file - if not provided no authentication is done
--rc-user string User name for authentication. --rc-key string SSL PEM Private key
--retries int Retry operations this many times if they fail (default 3) --rc-max-header-bytes int Maximum size of request header (default 4096)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) --rc-no-auth Don't require auth for certain methods.
--s3-access-key-id string AWS Access Key ID. --rc-pass string Password for authentication.
--s3-acl string Canned ACL used when creating buckets and storing or copying objects. --rc-realm string realm for authentication (default "rclone")
--s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M) --rc-serve Enable the serving of remote objects.
--s3-disable-checksum Don't store MD5 checksum with object metadata --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--s3-endpoint string Endpoint for S3 API. --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). --rc-user string User name for authentication.
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true) --retries int Retry operations this many times if they fail (default 3)
--s3-location-constraint string Location constraint - must be set to match the Region. --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--s3-provider string Choose your S3 provider. --s3-access-key-id string AWS Access Key ID.
--s3-region string Region to connect to. --s3-acl string Canned ACL used when creating buckets and storing or copying objects.
--s3-secret-access-key string AWS Secret Access Key (password) --s3-bucket-acl string Canned ACL used when creating buckets.
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
--s3-session-token string An AWS session token --s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. --s3-endpoint string Endpoint for S3 API.
--s3-storage-class string The storage class to use when storing new objects in S3. --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2) --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--s3-v2-auth If true use v2 authentication. --s3-location-constraint string Location constraint - must be set to match the Region.
--sftp-ask-password Allow asking for SFTP password when needed. --s3-provider string Choose your S3 provider.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. --s3-region string Region to connect to.
--sftp-host string SSH host to connect to --s3-secret-access-key string AWS Secret Access Key (password)
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent. --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--sftp-pass string SSH password, leave blank to use ssh-agent. --s3-session-token string An AWS session token
--sftp-path-override string Override path used by SSH connection. --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--sftp-port string SSH port, leave blank to use default (22) --s3-storage-class string The storage class to use when storing new objects in S3.
--sftp-set-modtime Set the modified time on the remote if set. (default true) --s3-upload-concurrency int Concurrency for multipart uploads. (default 4)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker. --s3-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M)
--sftp-user string SSH username, leave blank for current username, ncw --s3-v2-auth If true use v2 authentication.
--size-only Skip based on size only, not mod-time or checksum --sftp-ask-password Allow asking for SFTP password when needed.
--skip-links Don't warn about skipped symlinks. --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) --sftp-host string SSH host to connect to
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40) --sftp-key-file string Path to PEM-encoded private key file, leave blank or set key-use-agent to use ssh-agent.
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") --sftp-key-file-pass string The passphrase to decrypt the PEM-encoded private key file.
--stats-one-line Make the stats fit on one line. --sftp-key-use-agent When set forces the usage of the ssh-agent.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") --sftp-pass string SSH password, leave blank to use ssh-agent.
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) --sftp-path-override string Override path used by SSH connection.
--suffix string Suffix for use with --backup-dir. --sftp-port string SSH port, leave blank to use default (22)
--swift-auth string Authentication URL for server (OS_AUTH_URL). --sftp-set-modtime Set the modified time on the remote if set. (default true)
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) --sftp-user string SSH username, leave blank for current username, ncw
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) --size-only Skip based on size only, not mod-time or checksum
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) --skip-links Don't warn about skipped symlinks.
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form. --stats-file-name-length int Max file name length in stats. 0 for no limit (default 45)
--swift-key string API key or password (OS_PASSWORD). --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--swift-region string Region name - optional (OS_REGION_NAME) --stats-one-line Make the stats fit on one line.
--swift-storage-policy string The storage policy to use when creating a new container --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL) --streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) --suffix string Suffix for use with --backup-dir.
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) --swift-application-credential-id string Application Credential ID (OS_APPLICATION_CREDENTIAL_ID)
--swift-application-credential-name string Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME)
--swift-application-credential-secret string Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET)
--swift-auth string Authentication URL for server (OS_AUTH_URL).
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--swift-key string API key or password (OS_PASSWORD).
--swift-no-chunk Don't chunk files during streaming upload.
--swift-region string Region name - optional (OS_REGION_NAME)
--swift-storage-policy string The storage policy to use when creating a new container
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--swift-user string User name to log in (OS_USERNAME).
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
--tpslimit float Limit HTTP transactions per second to this.
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
--union-remotes string List of space separated remotes.
-u, --update Skip files that are newer on the destination.
--use-cookies Enable session cookiejar.
--use-mmap Use mmap allocator (see docs).
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
--yandex-unlink Remove existing public link to file/folder with link command rather than creating.
```

### SEE ALSO

* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.

###### Auto generated by spf13/cobra on 9-Feb-2019

---
date: 2019-02-09T10:42:18Z
title: "rclone size"
slug: rclone_size
url: /commands/rclone_size/
```
rclone size remote:path [flags]
```

### Options inherited from parent commands

```
--acd-auth-url string Auth server URL.
--acd-client-id string Amazon Application Client ID.
--acd-client-secret string Amazon Application Client Secret.
--acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-token-url string Token server url.
--acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--alias-remote string Remote or path to alias.
--ask-password Allow prompt for password for encrypted configuration. (default true)
--auto-confirm If enabled, do not request console confirmation.
--azureblob-access-tier string Access tier of blob: hot, cool or archive.
--azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
--azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M)
--azureblob-endpoint string Endpoint for the service
--azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
--azureblob-list-chunk int Size of blob list. (default 5000)
--azureblob-sas-url string SAS URL for container level access only
--azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M)
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--b2-disable-checksum Disable checksums for large (> upload cutoff) files
--b2-endpoint string Endpoint for the service.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-key string Application Key
--b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
--b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M)
--b2-versions Include old versions in directory listings.
--backup-dir string Make backups into hierarchy based in DIR.
--bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
--box-client-id string Box App Client Id.
--box-client-secret string Box App Client Secret
--box-commit-retries int Max number of times to try committing a multipart file. (default 100)
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
--buffer-size SizeSuffix In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
--cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend")
--cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M)
--cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G)
--cache-db-path string Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend")
--cache-db-purge Clear all the cached data for this remote on start.
--cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string Directory rclone will use for caching. (default "$HOME/.cache/rclone")
--cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
--cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server
--cache-plex-password string The password of the Plex user
--cache-plex-url string The URL of the Plex server
--cache-plex-username string The username of the Plex user
--cache-read-retries int How many times to retry a read from a cache storage. (default 10)
--cache-remote string Remote to cache.
--cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1)
--cache-tmp-upload-path string Directory to keep temporary files until they are uploaded.
--cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
--cache-workers int How many workers should run in parallel to download chunks. (default 4)
--cache-writes Cache file data on writes through the FS
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum (if available) & size, not mod-time & size
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
-L, --copy-links Follow symlinks and copy the pointed to item.
--cpuprofile string Write cpu profile to file
--crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--crypt-password string Password or pass phrase for encryption.
--crypt-password2 string Password or pass phrase for salt. Optional but recommended.
--crypt-remote string Remote to encrypt/decrypt.
--crypt-show-mapping For all files listed show how the names encrypt.
--delete-after When synchronizing, delete files on destination after transferring (default)
--delete-before When synchronizing, delete files on destination before transferring
--delete-during When synchronizing, delete files during transfer
--delete-excluded Delete files on dest excluded from sync
--disable string Disable a comma separated list of features. Use help to see a list.
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
--drive-alternate-export Use alternate export URLs for google documents export.
--drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-client-id string Google Application Client Id
--drive-client-secret string Google Application Client Secret
--drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-formats string Deprecated: see export_formats
--drive-impersonate string Impersonate this user when using a service account.
--drive-import-formats string Comma separated list of preferred formats for uploading Google docs.
--drive-keep-revision-forever Keep new head revision of each file forever.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-pacer-burst int Number of API calls to allow without sleeping. (default 100)
--drive-pacer-min-sleep Duration Minimum time to sleep between API calls. (default 100ms)
--drive-root-folder-id string ID of the root folder
--drive-scope string Scope that rclone should use when requesting access from drive.
--drive-service-account-credentials string Service Account Credentials JSON blob
--drive-service-account-file string Service Account Credentials JSON file path
--drive-shared-with-me Only show files that are shared with me.
--drive-skip-gdocs Skip google documents in all listings.
--drive-team-drive string ID of the Team Drive
--drive-trashed-only Only show files that are in the trash.
--drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--drive-use-created-date Use file created date instead of modified date.
--drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--drive-v2-download-min-size SizeSuffix If Objects are greater, use drive v2 API to download. (default off)
--dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M)
--dropbox-client-id string Dropbox App Client Id
--dropbox-client-secret string Dropbox App Client Secret
--dropbox-impersonate string Impersonate this user when using a business account.
-n, --dry-run Do a trial run with no permanent changes
--dump DumpFlags List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--dump-headers Dump HTTP headers - may contain sensitive info
--exclude stringArray Exclude files matching pattern
--exclude-from stringArray Read exclude patterns from file
--exclude-if-present string Exclude directories if filename is present
--fast-list Use recursive list if available. Uses more memory but fewer transactions.
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
--ftp-host string FTP host to connect to
--ftp-pass string FTP password
--ftp-port string FTP port, leave blank to use default (21)
--ftp-user string FTP username, leave blank for current username, $USER
--gcs-bucket-acl string Access Control List for new buckets.
--gcs-client-id string Google Application Client Id
--gcs-client-secret string Google Application Client Secret
--gcs-location string Location for the newly created buckets.
--gcs-object-acl string Access Control List for new objects.
--gcs-project-number string Project number.
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-url string URL of http host to connect to
--hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--hubic-client-id string Hubic Client Id
--hubic-client-secret string Hubic Client Secret
--hubic-no-chunk Don't chunk files during streaming upload.
--ignore-case Ignore case in filters (case insensitive)
--ignore-checksum Skip post copy check of checksums.
--ignore-errors delete even if there are I/O errors
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
--immutable Do not modify files. Fail if existing files have been modified.
--include stringArray Include files matching pattern
--include-from stringArray Read include patterns from file
--jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--jottacloud-mountpoint string The mountpoint to use.
--jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
--jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fails. (default 10M)
--jottacloud-user string User Name
-l, --links Translate symlinks to/from regular files with a '.rclonelink' extension
--local-no-check-updated Don't check to see if the files change during upload
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
--local-nounc string Disable UNC (long path names) conversion on Windows
--log-file string Log everything to this file
--log-format string Comma separated list of log format options (default "date,time")
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
--max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--max-delete int When synchronizing, limit the number of deletes (default -1)
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size SizeSuffix Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--max-transfer SizeSuffix Maximum size of data to transfer. (default off)
--mega-debug Output more debug from Mega.
--mega-hard-delete Delete files permanently rather than putting them into the trash.
--mega-pass string Password.
--mega-user string User name
--memprofile string Write memory profile to file
--min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size SizeSuffix Only transfer files bigger than this in k or suffix b|k|M|G (default off)
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Don't traverse destination file system on copy.
--no-update-modtime Don't update destination mod-time if files identical.
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--onedrive-client-id string Microsoft App Client Id
--onedrive-client-secret string Microsoft App Client Secret
--onedrive-drive-id string The ID of the drive to use
--onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
--onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
--opendrive-password string Password.
--opendrive-username string Username
--pcloud-client-id string Pcloud App Client Id
--pcloud-client-secret string Pcloud App Client Secret
-P, --progress Show progress during transfer.
--qingstor-access-key-id string QingStor Access Key ID
--qingstor-chunk-size SizeSuffix Chunk size to use for uploading. (default 4M)
--qingstor-connection-retries int Number of connection retries. (default 3)
--qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API.
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
--qingstor-secret-access-key string QingStor Secret Access Key (password)
--qingstor-upload-concurrency int Concurrency for multipart uploads. (default 1)
--qingstor-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M)
--qingstor-zone string Zone to connect to.
-q, --quiet Print as little stuff as possible
--rc Enable the remote control server.
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--rc-client-ca string Client certificate authority to verify clients with
--rc-files string Path to local files to serve on the HTTP server.
--rc-htpasswd string htpasswd file - if not provided no authentication is done
--rc-key string SSL PEM Private key
--rc-max-header-bytes int Maximum size of request header (default 4096)
--rc-no-auth Don't require auth for certain methods.
--rc-pass string Password for authentication.
--rc-realm string realm for authentication (default "rclone")
--rc-serve Enable the serving of remote objects.
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--retries-sleep duration Interval between retrying operations if they fail, e.g. 500ms, 60s, 5m. (0 to disable)
--s3-access-key-id string AWS Access Key ID.
--s3-acl string Canned ACL used when creating buckets and storing or copying objects.
--s3-secret-access-key string AWS Secret Access Key (password) --s3-bucket-acl string Canned ACL used when creating buckets.
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
--s3-session-token string An AWS session token --s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. --s3-endpoint string Endpoint for S3 API.
--s3-storage-class string The storage class to use when storing new objects in S3. --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2) --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--s3-v2-auth If true use v2 authentication. --s3-location-constraint string Location constraint - must be set to match the Region.
--sftp-ask-password Allow asking for SFTP password when needed. --s3-provider string Choose your S3 provider.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. --s3-region string Region to connect to.
--sftp-host string SSH host to connect to --s3-secret-access-key string AWS Secret Access Key (password)
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent. --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--sftp-pass string SSH password, leave blank to use ssh-agent. --s3-session-token string An AWS session token
--sftp-path-override string Override path used by SSH connection. --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--sftp-port string SSH port, leave blank to use default (22) --s3-storage-class string The storage class to use when storing new objects in S3.
--sftp-set-modtime Set the modified time on the remote if set. (default true) --s3-upload-concurrency int Concurrency for multipart uploads. (default 4)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker. --s3-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M)
--sftp-user string SSH username, leave blank for current username, ncw --s3-v2-auth If true use v2 authentication.
--size-only Skip based on size only, not mod-time or checksum --sftp-ask-password Allow asking for SFTP password when needed.
--skip-links Don't warn about skipped symlinks. --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) --sftp-host string SSH host to connect to
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40) --sftp-key-file string Path to PEM-encoded private key file, leave blank or set key-use-agent to use ssh-agent.
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") --sftp-key-file-pass string The passphrase to decrypt the PEM-encoded private key file.
--stats-one-line Make the stats fit on one line. --sftp-key-use-agent When set forces the usage of the ssh-agent.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") --sftp-pass string SSH password, leave blank to use ssh-agent.
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) --sftp-path-override string Override path used by SSH connection.
--suffix string Suffix for use with --backup-dir. --sftp-port string SSH port, leave blank to use default (22)
--swift-auth string Authentication URL for server (OS_AUTH_URL). --sftp-set-modtime Set the modified time on the remote if set. (default true)
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) --sftp-user string SSH username, leave blank for current username, ncw
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) --size-only Skip based on size only, not mod-time or checksum
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) --skip-links Don't warn about skipped symlinks.
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form. --stats-file-name-length int Max file name length in stats. 0 for no limit (default 45)
--swift-key string API key or password (OS_PASSWORD). --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--swift-region string Region name - optional (OS_REGION_NAME) --stats-one-line Make the stats fit on one line.
--swift-storage-policy string The storage policy to use when creating a new container --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL) --streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) --suffix string Suffix for use with --backup-dir.
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) --swift-application-credential-id string Application Credential ID (OS_APPLICATION_CREDENTIAL_ID)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) --swift-application-credential-name string Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME)
--swift-user string User name to log in (OS_USERNAME). --swift-application-credential-secret string Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET)
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). --swift-auth string Authentication URL for server (OS_AUTH_URL).
--syslog Use Syslog for logging --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--timeout duration IO idle timeout (default 5m0s) --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--tpslimit float Limit HTTP transactions per second to this. --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--track-renames When synchronizing, track file renames and do a server side move if possible --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--transfers int Number of file transfers to run in parallel. (default 4) --swift-key string API key or password (OS_PASSWORD).
--union-remotes string List of space separated remotes. --swift-no-chunk Don't chunk files during streaming upload.
-u, --update Skip files that are newer on the destination. --swift-region string Region name - optional (OS_REGION_NAME)
--use-server-modtime Use server modified time instead of object metadata --swift-storage-policy string The storage policy to use when creating a new container
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.45") --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
-v, --verbose count Print lots more stuff (repeat for more) --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon) --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--webdav-pass string Password. --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--webdav-url string URL of http host to connect to --swift-user string User name to log in (OS_USERNAME).
--webdav-user string User name --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--webdav-vendor string Name of the Webdav site/service/software you are using --syslog Use Syslog for logging
--yandex-client-id string Yandex Client Id --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--yandex-client-secret string Yandex Client Secret --timeout duration IO idle timeout (default 5m0s)
--yandex-unlink Remove existing public link to file/folder with link command rather than creating. --tpslimit float Limit HTTP transactions per second to this.
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
--union-remotes string List of space separated remotes.
-u, --update Skip files that are newer on the destination.
--use-cookies Enable session cookiejar.
--use-mmap Use mmap allocator (see docs).
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
--yandex-unlink Remove existing public link to file/folder with link command rather than creating.
```
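As an illustrative sketch (not taken from the manual), here is how a few of the global flags listed above might combine in one sync invocation; the source path and `remote:` name are placeholders. The command is assembled into a variable and echoed so the flag spelling can be inspected without rclone installed.

```shell
# Hypothetical example: placeholder local path and remote name.
# Builds an rclone sync command from flags in the reference above.
cmd="rclone sync /data/photos remote:photos \
  --bwlimit 1M \
  --transfers 8 \
  --retries 5 \
  --stats 30s --stats-one-line"

# Print the assembled command for inspection.
echo "$cmd"
```

Per the reference, `--transfers` and `--retries` default to 4 and 3 respectively; the values above are arbitrary choices for the example.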
### SEE ALSO
* [rclone](/commands/rclone/)	 - Show help for rclone commands, flags and backends.
###### Auto generated by spf13/cobra on 9-Feb-2019
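The `--bwlimit` flag above accepts "a full timetable" as well as a single rate. As a hedged sketch of that syntax (based on rclone's documented `HH:MM,rate` schedule format, with `off` meaning unlimited), a schedule could be stored and passed like this:

```shell
# Hypothetical timetable: 512 kBytes/s from 08:00, 10M from 12:00,
# unlimited from 23:00. Quoting keeps the schedule a single argument.
bwlimit="08:00,512 12:00,10M 23:00,off"
echo "$bwlimit"
# Would be passed as: rclone sync src dst --bwlimit "$bwlimit"
```

Each space-separated entry is a switch-over time and the rate that applies from then on.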


@@ -1,5 +1,5 @@
---
date: 2019-02-09T10:42:18Z
title: "rclone sync"
slug: rclone_sync
url: /commands/rclone_sync/
@@ -46,285 +46,303 @@ rclone sync source:path dest:path [flags]
### Options inherited from parent commands
```
--acd-auth-url string Auth server URL.
--acd-client-id string Amazon Application Client ID.
--acd-client-secret string Amazon Application Client Secret.
--acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-token-url string Token server url.
--acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--alias-remote string Remote or path to alias.
--ask-password Allow prompt for password for encrypted configuration. (default true)
--auto-confirm If enabled, do not request console confirmation.
--azureblob-access-tier string Access tier of blob: hot, cool or archive.
--azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
--azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M)
--azureblob-endpoint string Endpoint for the service
--azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
--azureblob-list-chunk int Size of blob list. (default 5000)
--azureblob-sas-url string SAS URL for container level access only
--azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M)
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--b2-disable-checksum Disable checksums for large (> upload cutoff) files
--b2-endpoint string Endpoint for the service.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-key string Application Key
--b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
--b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M)
--b2-versions Include old versions in directory listings.
--backup-dir string Make backups into hierarchy based in DIR.
--bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
--box-client-id string Box App Client Id.
--box-client-secret string Box App Client Secret
--box-commit-retries int Max number of times to try committing a multipart file. (default 100)
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
--buffer-size SizeSuffix In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
--cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend")
--cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M)
--cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G)
--cache-db-path string Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend")
--cache-db-purge Clear all the cached data for this remote on start.
--cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string Directory rclone will use for caching. (default "$HOME/.cache/rclone")
--cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
--cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server
--cache-plex-password string The password of the Plex user
--cache-plex-url string The URL of the Plex server
--cache-plex-username string The username of the Plex user
--cache-read-retries int How many times to retry a read from a cache storage. (default 10)
--cache-remote string Remote to cache.
--cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1)
--cache-tmp-upload-path string Directory to keep temporary files until they are uploaded.
--cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
--cache-workers int How many workers should run in parallel to download chunks. (default 4)
--cache-writes Cache file data on writes through the FS
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum (if available) & size, not mod-time & size
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
-L, --copy-links Follow symlinks and copy the pointed to item.
--cpuprofile string Write cpu profile to file
--crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--crypt-password string Password or pass phrase for encryption.
--crypt-password2 string Password or pass phrase for salt. Optional but recommended.
--crypt-remote string Remote to encrypt/decrypt.
--crypt-show-mapping For all files listed show how the names encrypt.
--delete-after When synchronizing, delete files on destination after transferring (default)
--delete-before When synchronizing, delete files on destination before transferring
--delete-during When synchronizing, delete files during transfer
--delete-excluded Delete files on dest excluded from sync
--disable string Disable a comma separated list of features. Use help to see a list.
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
--drive-alternate-export Use alternate export URLs for google documents export.
--drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-client-id string Google Application Client Id
--drive-client-secret string Google Application Client Secret
--drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-formats string Deprecated: see export_formats
--drive-impersonate string Impersonate this user when using a service account.
--drive-import-formats string Comma separated list of preferred formats for uploading Google docs.
--drive-keep-revision-forever Keep new head revision of each file forever.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-pacer-burst int Number of API calls to allow without sleeping. (default 100)
--drive-pacer-min-sleep Duration Minimum time to sleep between API calls. (default 100ms)
--drive-root-folder-id string ID of the root folder
--drive-scope string Scope that rclone should use when requesting access from drive.
--drive-service-account-credentials string Service Account Credentials JSON blob
--drive-service-account-file string Service Account Credentials JSON file path
--drive-shared-with-me Only show files that are shared with me.
--drive-skip-gdocs Skip google documents in all listings.
--drive-team-drive string ID of the Team Drive
--drive-trashed-only Only show files that are in the trash.
--drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--drive-use-created-date Use file created date instead of modified date.
--drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--drive-v2-download-min-size SizeSuffix If Objects are greater, use drive v2 API to download. (default off)
--dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M)
--dropbox-client-id string Dropbox App Client Id
--dropbox-client-secret string Dropbox App Client Secret
--dropbox-impersonate string Impersonate this user when using a business account.
-n, --dry-run Do a trial run with no permanent changes
--dump DumpFlags List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--dump-headers Dump HTTP headers - may contain sensitive info
--exclude stringArray Exclude files matching pattern
--exclude-from stringArray Read exclude patterns from file
--exclude-if-present string Exclude directories if filename is present
--fast-list Use recursive list if available. Uses more memory but fewer transactions.
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
--ftp-host string FTP host to connect to
--ftp-pass string FTP password
--ftp-port string FTP port, leave blank to use default (21)
--ftp-user string FTP username, leave blank for current username, $USER
--gcs-bucket-acl string Access Control List for new buckets.
--gcs-client-id string Google Application Client Id
--gcs-client-secret string Google Application Client Secret
--gcs-location string Location for the newly created buckets.
--gcs-object-acl string Access Control List for new objects.
--gcs-project-number string Project number.
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-url string URL of http host to connect to
--hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--hubic-client-id string Hubic Client Id
--hubic-client-secret string Hubic Client Secret
--hubic-no-chunk Don't chunk files during streaming upload.
--ignore-case Ignore case in filters (case insensitive)
--ignore-checksum Skip post copy check of checksums.
--ignore-errors delete even if there are I/O errors
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
--immutable Do not modify files. Fail if existing files have been modified.
--jottacloud-mountpoint string The mountpoint to use. --include stringArray Include files matching pattern
--jottacloud-pass string Password. --include-from stringArray Read include patterns from file
--jottacloud-unlink Remove existing public link to file/folder with link command rather than creating. --jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
--jottacloud-user string User Name --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--local-no-check-updated Don't check to see if the files change during upload --jottacloud-mountpoint string The mountpoint to use.
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated) --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
--local-nounc string Disable UNC (long path names) conversion on Windows --jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fail's. (default 10M)
--log-file string Log everything to this file --jottacloud-user string User Name:
--log-format string Comma separated list of log format options (default "date,time") -l, --links Translate symlinks to/from regular files with a '.rclonelink' extension
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") --local-no-check-updated Don't check to see if the files change during upload
--low-level-retries int Number of low level retries to do. (default 10) --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) --local-nounc string Disable UNC (long path names) conversion on Windows
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000) --log-file string Log everything to this file
--max-delete int When synchronizing, limit the number of deletes (default -1) --log-format string Comma separated list of log format options (default "date,time")
--max-depth int If set limits the recursion depth to this. (default -1) --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) --low-level-retries int Number of low level retries to do. (default 10)
--max-transfer int Maximum size of data to transfer. (default off) --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--mega-debug Output more debug from Mega. --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--mega-hard-delete Delete files permanently rather than putting them into the trash. --max-delete int When synchronizing, limit the number of deletes (default -1)
--mega-pass string Password. --max-depth int If set limits the recursion depth to this. (default -1)
--mega-user string User name --max-size SizeSuffix Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--memprofile string Write memory profile to file --max-transfer SizeSuffix Maximum size of data to transfer. (default off)
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) --mega-debug Output more debug from Mega.
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) --mega-hard-delete Delete files permanently rather than putting them into the trash.
--modify-window duration Max time diff to be considered the same (default 1ns) --mega-pass string Password.
--no-check-certificate Do not verify the server SSL certificate. Insecure. --mega-user string User name
--no-gzip-encoding Don't set Accept-Encoding: gzip. --memprofile string Write memory profile to file
--no-traverse Obsolete - does nothing. --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--no-update-modtime Don't update destination mod-time if files identical. --min-size SizeSuffix Only transfer files bigger than this in k or suffix b|k|M|G (default off)
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only). --modify-window duration Max time diff to be considered the same (default 1ns)
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M) --no-check-certificate Do not verify the server SSL certificate. Insecure.
--onedrive-client-id string Microsoft App Client Id --no-gzip-encoding Don't set Accept-Encoding: gzip.
--onedrive-client-secret string Microsoft App Client Secret --no-traverse Don't traverse destination file system on copy.
--onedrive-drive-id string The ID of the drive to use --no-update-modtime Don't update destination mod-time if files identical.
--onedrive-drive-type string The type of the drive ( personal | business | documentLibrary ) -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-expose-onenote-files Set to make OneNote files show up in directory listings. --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--opendrive-password string Password. --onedrive-client-id string Microsoft App Client Id
--opendrive-username string Username --onedrive-client-secret string Microsoft App Client Secret
--pcloud-client-id string Pcloud App Client Id --onedrive-drive-id string The ID of the drive to use
--pcloud-client-secret string Pcloud App Client Secret --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
-P, --progress Show progress during transfer. --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
--qingstor-access-key-id string QingStor Access Key ID --opendrive-password string Password.
--qingstor-connection-retries int Number of connection retries. (default 3) --opendrive-username string Username
--qingstor-endpoint string Enter a endpoint URL to connection QingStor API. --pcloud-client-id string Pcloud App Client Id
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank. --pcloud-client-secret string Pcloud App Client Secret
--qingstor-secret-access-key string QingStor Secret Access Key (password) -P, --progress Show progress during transfer.
--qingstor-zone string Zone to connect to. --qingstor-access-key-id string QingStor Access Key ID
-q, --quiet Print as little stuff as possible --qingstor-chunk-size SizeSuffix Chunk size to use for uploading. (default 4M)
--rc Enable the remote control server. --qingstor-connection-retries int Number of connection retries. (default 3)
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") --qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate) --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
--rc-client-ca string Client certificate authority to verify clients with --qingstor-secret-access-key string QingStor Secret Access Key (password)
--rc-files string Path to local files to serve on the HTTP server. --qingstor-upload-concurrency int Concurrency for multipart uploads. (default 1)
--rc-htpasswd string htpasswd file - if not provided no authentication is done --qingstor-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M)
--rc-key string SSL PEM Private key --qingstor-zone string Zone to connect to.
--rc-max-header-bytes int Maximum size of request header (default 4096) -q, --quiet Print as little stuff as possible
--rc-no-auth Don't require auth for certain methods. --rc Enable the remote control server.
--rc-pass string Password for authentication. --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
--rc-realm string realm for authentication (default "rclone") --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--rc-serve Enable the serving of remote objects. --rc-client-ca string Client certificate authority to verify clients with
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) --rc-files string Path to local files to serve on the HTTP server.
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) --rc-htpasswd string htpasswd file - if not provided no authentication is done
--rc-user string User name for authentication. --rc-key string SSL PEM Private key
--retries int Retry operations this many times if they fail (default 3) --rc-max-header-bytes int Maximum size of request header (default 4096)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) --rc-no-auth Don't require auth for certain methods.
--s3-access-key-id string AWS Access Key ID. --rc-pass string Password for authentication.
--s3-acl string Canned ACL used when creating buckets and storing or copying objects. --rc-realm string realm for authentication (default "rclone")
--s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M) --rc-serve Enable the serving of remote objects.
--s3-disable-checksum Don't store MD5 checksum with object metadata --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--s3-endpoint string Endpoint for S3 API. --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). --rc-user string User name for authentication.
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true) --retries int Retry operations this many times if they fail (default 3)
--s3-location-constraint string Location constraint - must be set to match the Region. --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--s3-provider string Choose your S3 provider. --s3-access-key-id string AWS Access Key ID.
--s3-region string Region to connect to. --s3-acl string Canned ACL used when creating buckets and storing or copying objects.
--s3-secret-access-key string AWS Secret Access Key (password) --s3-bucket-acl string Canned ACL used when creating buckets.
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
--s3-session-token string An AWS session token --s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. --s3-endpoint string Endpoint for S3 API.
--s3-storage-class string The storage class to use when storing new objects in S3. --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2) --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--s3-v2-auth If true use v2 authentication. --s3-location-constraint string Location constraint - must be set to match the Region.
--sftp-ask-password Allow asking for SFTP password when needed. --s3-provider string Choose your S3 provider.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. --s3-region string Region to connect to.
--sftp-host string SSH host to connect to --s3-secret-access-key string AWS Secret Access Key (password)
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent. --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--sftp-pass string SSH password, leave blank to use ssh-agent. --s3-session-token string An AWS session token
--sftp-path-override string Override path used by SSH connection. --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--sftp-port string SSH port, leave blank to use default (22) --s3-storage-class string The storage class to use when storing new objects in S3.
--sftp-set-modtime Set the modified time on the remote if set. (default true) --s3-upload-concurrency int Concurrency for multipart uploads. (default 4)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker. --s3-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M)
--sftp-user string SSH username, leave blank for current username, ncw --s3-v2-auth If true use v2 authentication.
--size-only Skip based on size only, not mod-time or checksum --sftp-ask-password Allow asking for SFTP password when needed.
--skip-links Don't warn about skipped symlinks. --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) --sftp-host string SSH host to connect to
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40) --sftp-key-file string Path to PEM-encoded private key file, leave blank or set key-use-agent to use ssh-agent.
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") --sftp-key-file-pass string The passphrase to decrypt the PEM-encoded private key file.
--stats-one-line Make the stats fit on one line. --sftp-key-use-agent When set forces the usage of the ssh-agent.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") --sftp-pass string SSH password, leave blank to use ssh-agent.
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) --sftp-path-override string Override path used by SSH connection.
--suffix string Suffix for use with --backup-dir. --sftp-port string SSH port, leave blank to use default (22)
--swift-auth string Authentication URL for server (OS_AUTH_URL). --sftp-set-modtime Set the modified time on the remote if set. (default true)
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) --sftp-user string SSH username, leave blank for current username, ncw
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) --size-only Skip based on size only, not mod-time or checksum
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) --skip-links Don't warn about skipped symlinks.
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form. --stats-file-name-length int Max file name length in stats. 0 for no limit (default 45)
--swift-key string API key or password (OS_PASSWORD). --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--swift-region string Region name - optional (OS_REGION_NAME) --stats-one-line Make the stats fit on one line.
--swift-storage-policy string The storage policy to use when creating a new container --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL) --streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) --suffix string Suffix for use with --backup-dir.
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) --swift-application-credential-id string Application Credential ID (OS_APPLICATION_CREDENTIAL_ID)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) --swift-application-credential-name string Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME)
--swift-user string User name to log in (OS_USERNAME). --swift-application-credential-secret string Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET)
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). --swift-auth string Authentication URL for server (OS_AUTH_URL).
--syslog Use Syslog for logging --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--timeout duration IO idle timeout (default 5m0s) --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--tpslimit float Limit HTTP transactions per second to this. --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--track-renames When synchronizing, track file renames and do a server side move if possible --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--transfers int Number of file transfers to run in parallel. (default 4) --swift-key string API key or password (OS_PASSWORD).
--union-remotes string List of space separated remotes. --swift-no-chunk Don't chunk files during streaming upload.
-u, --update Skip files that are newer on the destination. --swift-region string Region name - optional (OS_REGION_NAME)
--use-server-modtime Use server modified time instead of object metadata --swift-storage-policy string The storage policy to use when creating a new container
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.45") --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
-v, --verbose count Print lots more stuff (repeat for more) --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon) --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--webdav-pass string Password. --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--webdav-url string URL of http host to connect to --swift-user string User name to log in (OS_USERNAME).
--webdav-user string User name --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--webdav-vendor string Name of the Webdav site/service/software you are using --syslog Use Syslog for logging
--yandex-client-id string Yandex Client Id --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--yandex-client-secret string Yandex Client Secret --timeout duration IO idle timeout (default 5m0s)
--yandex-unlink Remove existing public link to file/folder with link command rather than creating. --tpslimit float Limit HTTP transactions per second to this.
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
--union-remotes string List of space separated remotes.
-u, --update Skip files that are newer on the destination.
--use-cookies Enable session cookiejar.
--use-mmap Use mmap allocator (see docs).
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
--yandex-unlink Remove existing public link to file/folder with link command rather than creating.
```
### SEE ALSO
* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
###### Auto generated by spf13/cobra on 9-Feb-2019
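
A note on the backend flags listed above: each one is generated from a backend option, so the same setting can also be supplied as a config file key or an environment variable. The backend docs changed by this commit show the pattern (for example `--b2-chunk-size` is `chunk_size` in the config and `RCLONE_B2_CHUNK_SIZE` in the environment). A minimal sketch, assuming a hypothetical remote named `myb2` which is not part of this commit:

```ini
; Hypothetical remote in rclone.conf -- the section name "myb2" is an
; example only.
[myb2]
type = b2
; Equivalent to passing --b2-chunk-size 96M on the command line,
; or setting RCLONE_B2_CHUNK_SIZE=96M in the environment.
chunk_size = 96M
; New in v1.46: equivalent to --b2-disable-checksum.
disable_checksum = true
```

The environment variable form follows the pattern `RCLONE_<BACKEND>_<OPTION>`, as shown in the per-option docs above.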

View File

@@ -1,5 +1,5 @@
---
date: 2019-02-09T10:42:18Z
title: "rclone touch"
slug: rclone_touch
url: /commands/rclone_touch/
@@ -27,285 +27,303 @@ rclone touch remote:path [flags]
### Options inherited from parent commands
```
--acd-auth-url string Auth server URL.
--acd-client-id string Amazon Application Client ID.
--acd-client-secret string Amazon Application Client Secret.
--acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-token-url string Token server url.
--acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--alias-remote string Remote or path to alias.
--ask-password Allow prompt for password for encrypted configuration. (default true)
--auto-confirm If enabled, do not request console confirmation.
--azureblob-access-tier string Access tier of blob: hot, cool or archive.
--azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
--azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M)
--azureblob-endpoint string Endpoint for the service
--azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
--azureblob-list-chunk int Size of blob list. (default 5000)
--azureblob-sas-url string SAS URL for container level access only
--azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M)
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--b2-disable-checksum Disable checksums for large (> upload cutoff) files
--b2-endpoint string Endpoint for the service.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-key string Application Key
--b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
--b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M)
--b2-versions Include old versions in directory listings.
--backup-dir string Make backups into hierarchy based in DIR.
--bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
--box-client-id string Box App Client Id.
--box-client-secret string Box App Client Secret
--box-commit-retries int Max number of times to try committing a multipart file. (default 100)
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
--buffer-size SizeSuffix In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
--cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend")
--cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M)
--cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G)
--cache-db-path string Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend")
--cache-db-purge Clear all the cached data for this remote on start.
--cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string Directory rclone will use for caching. (default "$HOME/.cache/rclone")
--cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
--cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server
--cache-plex-password string The password of the Plex user
--cache-plex-url string The URL of the Plex server
--cache-plex-username string The username of the Plex user
--cache-read-retries int How many times to retry a read from a cache storage. (default 10)
--cache-remote string Remote to cache.
--cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1)
--cache-tmp-upload-path string Directory to keep temporary files until they are uploaded.
--cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
--cache-workers int How many workers should run in parallel to download chunks. (default 4)
--cache-writes Cache file data on writes through the FS
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum (if available) & size, not mod-time & size
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
-L, --copy-links Follow symlinks and copy the pointed to item.
--cpuprofile string Write cpu profile to file
--crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--crypt-password string Password or pass phrase for encryption.
--crypt-password2 string Password or pass phrase for salt. Optional but recommended.
--crypt-show-mapping For all files listed show how the names encrypt. --crypt-remote string Remote to encrypt/decrypt.
--delete-after When synchronizing, delete files on destination after transferring (default) --crypt-show-mapping For all files listed show how the names encrypt.
--delete-before When synchronizing, delete files on destination before transferring --delete-after When synchronizing, delete files on destination after transferring (default)
--delete-during When synchronizing, delete files during transfer --delete-before When synchronizing, delete files on destination before transferring
--delete-excluded Delete files on dest excluded from sync --delete-during When synchronizing, delete files during transfer
--disable string Disable a comma separated list of features. Use help to see a list. --delete-excluded Delete files on dest excluded from sync
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded. --disable string Disable a comma separated list of features. Use help to see a list.
--drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time. --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--drive-alternate-export Use alternate export URLs for google documents export., --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
--drive-auth-owner-only Only consider files owned by the authenticated user. --drive-alternate-export Use alternate export URLs for google documents export.,
--drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M) --drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-client-id string Google Application Client Id --drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M)
--drive-client-secret string Google Application Client Secret --drive-client-id string Google Application Client Id
--drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") --drive-client-secret string Google Application Client Secret
--drive-formats string Deprecated: see export_formats --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-impersonate string Impersonate this user when using a service account. --drive-formats string Deprecated: see export_formats
--drive-import-formats string Comma separated list of preferred formats for uploading Google docs. --drive-impersonate string Impersonate this user when using a service account.
--drive-keep-revision-forever Keep new head revision of each file forever. --drive-import-formats string Comma separated list of preferred formats for uploading Google docs.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000) --drive-keep-revision-forever Keep new head revision of each file forever.
--drive-root-folder-id string ID of the root folder --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-scope string Scope that rclone should use when requesting access from drive. --drive-pacer-burst int Number of API calls to allow without sleeping. (default 100)
--drive-service-account-credentials string Service Account Credentials JSON blob --drive-pacer-min-sleep Duration Minimum time to sleep between API calls. (default 100ms)
--drive-service-account-file string Service Account Credentials JSON file path --drive-root-folder-id string ID of the root folder
--drive-shared-with-me Only show files that are shared with me. --drive-scope string Scope that rclone should use when requesting access from drive.
--drive-skip-gdocs Skip google documents in all listings. --drive-service-account-credentials string Service Account Credentials JSON blob
--drive-team-drive string ID of the Team Drive --drive-service-account-file string Service Account Credentials JSON file path
--drive-trashed-only Only show files that are in the trash. --drive-shared-with-me Only show files that are shared with me.
--drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M) --drive-skip-gdocs Skip google documents in all listings.
--drive-use-created-date Use file created date instead of modified date., --drive-team-drive string ID of the Team Drive
--drive-use-trash Send files to the trash instead of deleting permanently. (default true) --drive-trashed-only Only show files that are in the trash.
--drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off) --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M) --drive-use-created-date Use file created date instead of modified date.,
--dropbox-client-id string Dropbox App Client Id --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--dropbox-client-secret string Dropbox App Client Secret --drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off)
--dropbox-impersonate string Impersonate this user when using a business account. --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M)
-n, --dry-run Do a trial run with no permanent changes --dropbox-client-id string Dropbox App Client Id
--dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles --dropbox-client-secret string Dropbox App Client Secret
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info --dropbox-impersonate string Impersonate this user when using a business account.
--dump-headers Dump HTTP bodies - may contain sensitive info -n, --dry-run Do a trial run with no permanent changes
--exclude stringArray Exclude files matching pattern --dump DumpFlags List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--exclude-from stringArray Read exclude patterns from file --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--exclude-if-present string Exclude directories if filename is present --dump-headers Dump HTTP bodies - may contain sensitive info
--fast-list Use recursive list if available. Uses more memory but fewer transactions. --exclude stringArray Exclude files matching pattern
--files-from stringArray Read list of source-file names from file --exclude-from stringArray Read exclude patterns from file
-f, --filter stringArray Add a file-filtering rule --exclude-if-present string Exclude directories if filename is present
--filter-from stringArray Read filtering patterns from a file --fast-list Use recursive list if available. Uses more memory but fewer transactions.
--ftp-host string FTP host to connect to --files-from stringArray Read list of source-file names from file
--ftp-pass string FTP password -f, --filter stringArray Add a file-filtering rule
--ftp-port string FTP port, leave blank to use default (21) --filter-from stringArray Read filtering patterns from a file
--ftp-user string FTP username, leave blank for current username, $USER --ftp-host string FTP host to connect to
--gcs-bucket-acl string Access Control List for new buckets. --ftp-pass string FTP password
--gcs-client-id string Google Application Client Id --ftp-port string FTP port, leave blank to use default (21)
--gcs-client-secret string Google Application Client Secret --ftp-user string FTP username, leave blank for current username, $USER
--gcs-location string Location for the newly created buckets. --gcs-bucket-acl string Access Control List for new buckets.
--gcs-object-acl string Access Control List for new objects. --gcs-client-id string Google Application Client Id
--gcs-project-number string Project number. --gcs-client-secret string Google Application Client Secret
--gcs-service-account-file string Service Account Credentials JSON file path --gcs-location string Location for the newly created buckets.
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage. --gcs-object-acl string Access Control List for new objects.
--http-url string URL of http host to connect to --gcs-project-number string Project number.
--hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) --gcs-service-account-file string Service Account Credentials JSON file path
--hubic-client-id string Hubic Client Id --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--hubic-client-secret string Hubic Client Secret --http-url string URL of http host to connect to
--ignore-case Ignore case in filters (case insensitive) --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--ignore-checksum Skip post copy check of checksums. --hubic-client-id string Hubic Client Id
--ignore-errors delete even if there are I/O errors --hubic-client-secret string Hubic Client Secret
--ignore-existing Skip all files that exist on destination --hubic-no-chunk Don't chunk files during streaming upload.
--ignore-size Ignore size when skipping use mod-time or checksum. --ignore-case Ignore case in filters (case insensitive)
-I, --ignore-times Don't skip files that match size and time - transfer all files --ignore-checksum Skip post copy check of checksums.
--immutable Do not modify files. Fail if existing files have been modified. --ignore-errors delete even if there are I/O errors
--include stringArray Include files matching pattern --ignore-existing Skip all files that exist on destination
--include-from stringArray Read include patterns from file --ignore-size Ignore size when skipping use mod-time or checksum.
--jottacloud-hard-delete Delete files permanently rather than putting them into the trash. -I, --ignore-times Don't skip files that match size and time - transfer all files
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M) --immutable Do not modify files. Fail if existing files have been modified.
--jottacloud-mountpoint string The mountpoint to use. --include stringArray Include files matching pattern
--jottacloud-pass string Password. --include-from stringArray Read include patterns from file
--jottacloud-unlink Remove existing public link to file/folder with link command rather than creating. --jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
--jottacloud-user string User Name --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--local-no-check-updated Don't check to see if the files change during upload --jottacloud-mountpoint string The mountpoint to use.
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated) --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
--local-nounc string Disable UNC (long path names) conversion on Windows --jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fail's. (default 10M)
--log-file string Log everything to this file --jottacloud-user string User Name:
--log-format string Comma separated list of log format options (default "date,time") -l, --links Translate symlinks to/from regular files with a '.rclonelink' extension
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") --local-no-check-updated Don't check to see if the files change during upload
--low-level-retries int Number of low level retries to do. (default 10) --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) --local-nounc string Disable UNC (long path names) conversion on Windows
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000) --log-file string Log everything to this file
--max-delete int When synchronizing, limit the number of deletes (default -1) --log-format string Comma separated list of log format options (default "date,time")
--max-depth int If set limits the recursion depth to this. (default -1) --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) --low-level-retries int Number of low level retries to do. (default 10)
--max-transfer int Maximum size of data to transfer. (default off) --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--mega-debug Output more debug from Mega. --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--mega-hard-delete Delete files permanently rather than putting them into the trash. --max-delete int When synchronizing, limit the number of deletes (default -1)
--mega-pass string Password. --max-depth int If set limits the recursion depth to this. (default -1)
--mega-user string User name --max-size SizeSuffix Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--memprofile string Write memory profile to file --max-transfer SizeSuffix Maximum size of data to transfer. (default off)
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) --mega-debug Output more debug from Mega.
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) --mega-hard-delete Delete files permanently rather than putting them into the trash.
--modify-window duration Max time diff to be considered the same (default 1ns) --mega-pass string Password.
--no-check-certificate Do not verify the server SSL certificate. Insecure. --mega-user string User name
--no-gzip-encoding Don't set Accept-Encoding: gzip. --memprofile string Write memory profile to file
--no-traverse Obsolete - does nothing. --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--no-update-modtime Don't update destination mod-time if files identical. --min-size SizeSuffix Only transfer files bigger than this in k or suffix b|k|M|G (default off)
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only). --modify-window duration Max time diff to be considered the same (default 1ns)
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M) --no-check-certificate Do not verify the server SSL certificate. Insecure.
--onedrive-client-id string Microsoft App Client Id --no-gzip-encoding Don't set Accept-Encoding: gzip.
--onedrive-client-secret string Microsoft App Client Secret --no-traverse Don't traverse destination file system on copy.
--onedrive-drive-id string The ID of the drive to use --no-update-modtime Don't update destination mod-time if files identical.
--onedrive-drive-type string The type of the drive ( personal | business | documentLibrary ) -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-expose-onenote-files Set to make OneNote files show up in directory listings. --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--opendrive-password string Password. --onedrive-client-id string Microsoft App Client Id
--opendrive-username string Username --onedrive-client-secret string Microsoft App Client Secret
--pcloud-client-id string Pcloud App Client Id --onedrive-drive-id string The ID of the drive to use
--pcloud-client-secret string Pcloud App Client Secret --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
-P, --progress Show progress during transfer. --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
--qingstor-access-key-id string QingStor Access Key ID --opendrive-password string Password.
--qingstor-connection-retries int Number of connection retries. (default 3) --opendrive-username string Username
--qingstor-endpoint string Enter a endpoint URL to connection QingStor API. --pcloud-client-id string Pcloud App Client Id
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank. --pcloud-client-secret string Pcloud App Client Secret
--qingstor-secret-access-key string QingStor Secret Access Key (password) -P, --progress Show progress during transfer.
--qingstor-zone string Zone to connect to. --qingstor-access-key-id string QingStor Access Key ID
-q, --quiet Print as little stuff as possible --qingstor-chunk-size SizeSuffix Chunk size to use for uploading. (default 4M)
--rc Enable the remote control server. --qingstor-connection-retries int Number of connection retries. (default 3)
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") --qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate) --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
--rc-client-ca string Client certificate authority to verify clients with --qingstor-secret-access-key string QingStor Secret Access Key (password)
--rc-files string Path to local files to serve on the HTTP server. --qingstor-upload-concurrency int Concurrency for multipart uploads. (default 1)
--rc-htpasswd string htpasswd file - if not provided no authentication is done --qingstor-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M)
--rc-key string SSL PEM Private key --qingstor-zone string Zone to connect to.
--rc-max-header-bytes int Maximum size of request header (default 4096) -q, --quiet Print as little stuff as possible
--rc-no-auth Don't require auth for certain methods. --rc Enable the remote control server.
--rc-pass string Password for authentication. --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
--rc-realm string realm for authentication (default "rclone") --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--rc-serve Enable the serving of remote objects. --rc-client-ca string Client certificate authority to verify clients with
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) --rc-files string Path to local files to serve on the HTTP server.
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) --rc-htpasswd string htpasswd file - if not provided no authentication is done
--rc-user string User name for authentication. --rc-key string SSL PEM Private key
--retries int Retry operations this many times if they fail (default 3) --rc-max-header-bytes int Maximum size of request header (default 4096)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) --rc-no-auth Don't require auth for certain methods.
--s3-access-key-id string AWS Access Key ID. --rc-pass string Password for authentication.
--s3-acl string Canned ACL used when creating buckets and storing or copying objects. --rc-realm string realm for authentication (default "rclone")
--s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M) --rc-serve Enable the serving of remote objects.
--s3-disable-checksum Don't store MD5 checksum with object metadata --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--s3-endpoint string Endpoint for S3 API. --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). --rc-user string User name for authentication.
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true) --retries int Retry operations this many times if they fail (default 3)
--s3-location-constraint string Location constraint - must be set to match the Region. --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--s3-provider string Choose your S3 provider. --s3-access-key-id string AWS Access Key ID.
--s3-region string Region to connect to. --s3-acl string Canned ACL used when creating buckets and storing or copying objects.
--s3-secret-access-key string AWS Secret Access Key (password) --s3-bucket-acl string Canned ACL used when creating buckets.
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
--s3-session-token string An AWS session token --s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. --s3-endpoint string Endpoint for S3 API.
--s3-storage-class string The storage class to use when storing new objects in S3. --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2) --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--s3-v2-auth If true use v2 authentication. --s3-location-constraint string Location constraint - must be set to match the Region.
--sftp-ask-password Allow asking for SFTP password when needed. --s3-provider string Choose your S3 provider.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. --s3-region string Region to connect to.
--sftp-host string SSH host to connect to --s3-secret-access-key string AWS Secret Access Key (password)
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent. --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--sftp-pass string SSH password, leave blank to use ssh-agent. --s3-session-token string An AWS session token
--sftp-path-override string Override path used by SSH connection. --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--sftp-port string SSH port, leave blank to use default (22) --s3-storage-class string The storage class to use when storing new objects in S3.
--sftp-set-modtime Set the modified time on the remote if set. (default true) --s3-upload-concurrency int Concurrency for multipart uploads. (default 4)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker. --s3-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M)
--sftp-user string SSH username, leave blank for current username, ncw --s3-v2-auth If true use v2 authentication.
--size-only Skip based on size only, not mod-time or checksum --sftp-ask-password Allow asking for SFTP password when needed.
--skip-links Don't warn about skipped symlinks. --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) --sftp-host string SSH host to connect to
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40) --sftp-key-file string Path to PEM-encoded private key file, leave blank or set key-use-agent to use ssh-agent.
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") --sftp-key-file-pass string The passphrase to decrypt the PEM-encoded private key file.
--stats-one-line Make the stats fit on one line. --sftp-key-use-agent When set forces the usage of the ssh-agent.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") --sftp-pass string SSH password, leave blank to use ssh-agent.
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) --sftp-path-override string Override path used by SSH connection.
--suffix string Suffix for use with --backup-dir. --sftp-port string SSH port, leave blank to use default (22)
--swift-auth string Authentication URL for server (OS_AUTH_URL). --sftp-set-modtime Set the modified time on the remote if set. (default true)
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) --sftp-user string SSH username, leave blank for current username, ncw
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) --size-only Skip based on size only, not mod-time or checksum
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) --skip-links Don't warn about skipped symlinks.
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form. --stats-file-name-length int Max file name length in stats. 0 for no limit (default 45)
--swift-key string API key or password (OS_PASSWORD). --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--swift-region string Region name - optional (OS_REGION_NAME) --stats-one-line Make the stats fit on one line.
--swift-storage-policy string The storage policy to use when creating a new container --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL) --streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) --suffix string Suffix for use with --backup-dir.
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) --swift-application-credential-id string Application Credential ID (OS_APPLICATION_CREDENTIAL_ID)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) --swift-application-credential-name string Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME)
--swift-user string User name to log in (OS_USERNAME). --swift-application-credential-secret string Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET)
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). --swift-auth string Authentication URL for server (OS_AUTH_URL).
--syslog Use Syslog for logging --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
      --swift-auth-version int                   AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
      --swift-chunk-size SizeSuffix              Above this size files will be chunked into a _segments container. (default 5G)
      --swift-domain string                      User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
      --swift-endpoint-type string               Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
      --swift-env-auth                           Get swift credentials from environment variables in standard OpenStack form.
      --swift-key string                         API key or password (OS_PASSWORD).
      --swift-no-chunk                           Don't chunk files during streaming upload.
      --swift-region string                      Region name - optional (OS_REGION_NAME)
      --swift-storage-policy string              The storage policy to use when creating a new container
      --swift-storage-url string                 Storage URL - optional (OS_STORAGE_URL)
      --swift-tenant string                      Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
      --swift-tenant-domain string               Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
      --swift-tenant-id string                   Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
      --swift-user string                        User name to log in (OS_USERNAME).
      --swift-user-id string                     User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
      --syslog                                   Use Syslog for logging
      --syslog-facility string                   Facility for syslog, eg KERN,USER,... (default "DAEMON")
      --timeout duration                         IO idle timeout (default 5m0s)
      --tpslimit float                           Limit HTTP transactions per second to this.
      --tpslimit-burst int                       Max burst of transactions for --tpslimit. (default 1)
      --track-renames                            When synchronizing, track file renames and do a server side move if possible
      --transfers int                            Number of file transfers to run in parallel. (default 4)
      --union-remotes string                     List of space separated remotes.
  -u, --update                                   Skip files that are newer on the destination.
      --use-cookies                              Enable session cookiejar.
      --use-mmap                                 Use mmap allocator (see docs).
      --use-server-modtime                       Use server modified time instead of object metadata
      --user-agent string                        Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46")
  -v, --verbose count                            Print lots more stuff (repeat for more)
      --webdav-bearer-token string               Bearer token instead of user/pass (eg a Macaroon)
      --webdav-pass string                       Password.
      --webdav-url string                        URL of http host to connect to
      --webdav-user string                       User name
      --webdav-vendor string                     Name of the Webdav site/service/software you are using
      --yandex-client-id string                  Yandex Client Id
      --yandex-client-secret string              Yandex Client Secret
      --yandex-unlink                            Remove existing public link to file/folder with link command rather than creating.
```
### SEE ALSO
* [rclone](/commands/rclone/)	 - Show help for rclone commands, flags and backends.
###### Auto generated by spf13/cobra on 9-Feb-2019
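As an illustrative aside (not part of the generated reference): a sketch of how a few of the flags listed above combine on a command line. The paths and the remote names `remote:backup` and `remote:dst` are placeholders for whatever remotes are actually configured.

```
# Copy a handful of files into a large destination without listing it first
# (--no-traverse was restored for copy/move in v1.46 for exactly this case)
rclone copy --no-traverse /path/to/files remote:backup

# Throttle to 10 HTTP transactions/s; cap bandwidth at 512 kBytes/s during
# the day and lift the limit overnight using a --bwlimit timetable
rclone sync --tpslimit 10 --bwlimit "08:00,512 00:00,off" /path/to/src remote:dst
```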

View File

@@ -1,5 +1,5 @@
---
date: 2019-02-09T10:42:18Z
title: "rclone tree"
slug: rclone_tree
url: /commands/rclone_tree/
@@ -68,285 +68,303 @@ rclone tree remote:path [flags]
### Options inherited from parent commands
```
      --acd-auth-url string                      Auth server URL.
      --acd-client-id string                     Amazon Application Client ID.
      --acd-client-secret string                 Amazon Application Client Secret.
      --acd-templink-threshold SizeSuffix        Files >= this size will be downloaded via their tempLink. (default 9G)
      --acd-token-url string                     Token server url.
      --acd-upload-wait-per-gb Duration          Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
      --alias-remote string                      Remote or path to alias.
      --ask-password                             Allow prompt for password for encrypted configuration. (default true)
      --auto-confirm                             If enabled, do not request console confirmation.
      --azureblob-access-tier string             Access tier of blob: hot, cool or archive.
      --azureblob-account string                 Storage Account Name (leave blank to use connection string or SAS URL)
      --azureblob-chunk-size SizeSuffix          Upload chunk size (<= 100MB). (default 4M)
      --azureblob-endpoint string                Endpoint for the service
      --azureblob-key string                     Storage Account Key (leave blank to use connection string or SAS URL)
      --azureblob-list-chunk int                 Size of blob list. (default 5000)
      --azureblob-sas-url string                 SAS URL for container level access only
      --azureblob-upload-cutoff SizeSuffix       Cutoff for switching to chunked upload (<= 256MB). (default 256M)
      --b2-account string                        Account ID or Application Key ID
      --b2-chunk-size SizeSuffix                 Upload chunk size. Must fit in memory. (default 96M)
      --b2-disable-checksum                      Disable checksums for large (> upload cutoff) files
      --b2-endpoint string                       Endpoint for the service.
      --b2-hard-delete                           Permanently delete files on remote removal, otherwise hide files.
      --b2-key string                            Application Key
      --b2-test-mode string                      A flag string for X-Bz-Test-Mode header for debugging.
      --b2-upload-cutoff SizeSuffix              Cutoff for switching to chunked upload. (default 200M)
      --b2-versions                              Include old versions in directory listings.
      --backup-dir string                        Make backups into hierarchy based in DIR.
      --bind string                              Local address to bind to for outgoing connections, IPv4, IPv6 or name.
      --box-client-id string                     Box App Client Id.
      --box-client-secret string                 Box App Client Secret
      --box-commit-retries int                   Max number of times to try committing a multipart file. (default 100)
      --box-upload-cutoff SizeSuffix             Cutoff for switching to multipart upload (>= 50MB). (default 50M)
      --buffer-size SizeSuffix                   In memory buffer size when reading files for each --transfer. (default 16M)
      --bwlimit BwTimetable                      Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
      --cache-chunk-clean-interval Duration      How often should the cache perform cleanups of the chunk storage. (default 1m0s)
      --cache-chunk-no-memory                    Disable the in-memory cache for storing chunks during streaming.
      --cache-chunk-path string                  Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend")
      --cache-chunk-size SizeSuffix              The size of a chunk (partial file data). (default 5M)
      --cache-chunk-total-size SizeSuffix        The total size that the chunks can take up on the local disk. (default 10G)
      --cache-db-path string                     Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend")
      --cache-db-purge                           Clear all the cached data for this remote on start.
      --cache-db-wait-time Duration              How long to wait for the DB to be available - 0 is unlimited (default 1s)
      --cache-dir string                         Directory rclone will use for caching. (default "$HOME/.cache/rclone")
      --cache-info-age Duration                  How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
      --cache-plex-insecure string               Skip all certificate verifications when connecting to the Plex server
      --cache-plex-password string               The password of the Plex user
      --cache-plex-url string                    The URL of the Plex server
      --cache-plex-username string               The username of the Plex user
      --cache-read-retries int                   How many times to retry a read from a cache storage. (default 10)
      --cache-remote string                      Remote to cache.
      --cache-rps int                            Limits the number of requests per second to the source FS (-1 to disable) (default -1)
      --cache-tmp-upload-path string             Directory to keep temporary files until they are uploaded.
      --cache-tmp-wait-time Duration             How long should files be stored in local cache before being uploaded (default 15s)
      --cache-workers int                        How many workers should run in parallel to download chunks. (default 4)
      --cache-writes                             Cache file data on writes through the FS
      --checkers int                             Number of checkers to run in parallel. (default 8)
  -c, --checksum                                 Skip based on checksum (if available) & size, not mod-time & size
      --config string                            Config file. (default "/home/ncw/.rclone.conf")
      --contimeout duration                      Connect timeout (default 1m0s)
  -L, --copy-links                               Follow symlinks and copy the pointed to item.
      --cpuprofile string                        Write cpu profile to file
      --crypt-directory-name-encryption          Option to either encrypt directory names or leave them intact. (default true)
      --crypt-filename-encryption string         How to encrypt the filenames. (default "standard")
      --crypt-password string                    Password or pass phrase for encryption.
      --crypt-password2 string                   Password or pass phrase for salt. Optional but recommended.
      --crypt-remote string                      Remote to encrypt/decrypt.
      --crypt-show-mapping                       For all files listed show how the names encrypt.
      --delete-after                             When synchronizing, delete files on destination after transferring (default)
      --delete-before                            When synchronizing, delete files on destination before transferring
      --delete-during                            When synchronizing, delete files during transfer
      --delete-excluded                          Delete files on dest excluded from sync
      --disable string                           Disable a comma separated list of features. Use help to see a list.
      --drive-acknowledge-abuse                  Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
      --drive-allow-import-name-change           Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
      --drive-alternate-export                   Use alternate export URLs for google documents export.
      --drive-auth-owner-only                    Only consider files owned by the authenticated user.
      --drive-chunk-size SizeSuffix              Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
      --drive-client-id string                   Google Application Client Id
      --drive-client-secret string               Google Application Client Secret
      --drive-export-formats string              Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
      --drive-formats string                     Deprecated: see export_formats
      --drive-impersonate string                 Impersonate this user when using a service account.
      --drive-import-formats string              Comma separated list of preferred formats for uploading Google docs.
      --drive-keep-revision-forever              Keep new head revision of each file forever.
      --drive-list-chunk int                     Size of listing chunk 100-1000. 0 to disable. (default 1000)
      --drive-pacer-burst int                    Number of API calls to allow without sleeping. (default 100)
      --drive-pacer-min-sleep Duration           Minimum time to sleep between API calls. (default 100ms)
      --drive-root-folder-id string              ID of the root folder
      --drive-scope string                       Scope that rclone should use when requesting access from drive.
      --drive-service-account-credentials string  Service Account Credentials JSON blob
      --drive-service-account-file string        Service Account Credentials JSON file path
      --drive-shared-with-me                     Only show files that are shared with me.
      --drive-skip-gdocs                         Skip google documents in all listings.
      --drive-team-drive string                  ID of the Team Drive
      --drive-trashed-only                       Only show files that are in the trash.
      --drive-upload-cutoff SizeSuffix           Cutoff for switching to chunked upload (default 8M)
      --drive-use-created-date                   Use file created date instead of modified date.
      --drive-use-trash                          Send files to the trash instead of deleting permanently. (default true)
      --drive-v2-download-min-size SizeSuffix    If Objects are greater, use drive v2 API to download. (default off)
      --dropbox-chunk-size SizeSuffix            Upload chunk size. (< 150M). (default 48M)
      --dropbox-client-id string                 Dropbox App Client Id
      --dropbox-client-secret string             Dropbox App Client Secret
      --dropbox-impersonate string               Impersonate this user when using a business account.
  -n, --dry-run                                  Do a trial run with no permanent changes
      --dump DumpFlags                           List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
      --dump-bodies                              Dump HTTP headers and bodies - may contain sensitive info
      --dump-headers                             Dump HTTP headers - may contain sensitive info
      --exclude stringArray                      Exclude files matching pattern
      --exclude-from stringArray                 Read exclude patterns from file
      --exclude-if-present string                Exclude directories if filename is present
      --fast-list                                Use recursive list if available. Uses more memory but fewer transactions.
      --files-from stringArray                   Read list of source-file names from file
  -f, --filter stringArray                       Add a file-filtering rule
      --filter-from stringArray                  Read filtering patterns from a file
      --ftp-host string                          FTP host to connect to
      --ftp-pass string                          FTP password
      --ftp-port string                          FTP port, leave blank to use default (21)
      --ftp-user string                          FTP username, leave blank for current username, $USER
      --gcs-bucket-acl string                    Access Control List for new buckets.
      --gcs-client-id string                     Google Application Client Id
      --gcs-client-secret string                 Google Application Client Secret
      --gcs-location string                      Location for the newly created buckets.
      --gcs-object-acl string                    Access Control List for new objects.
      --gcs-project-number string                Project number.
      --gcs-service-account-file string          Service Account Credentials JSON file path
      --gcs-storage-class string                 The storage class to use when storing objects in Google Cloud Storage.
      --http-url string                          URL of http host to connect to
      --hubic-chunk-size SizeSuffix              Above this size files will be chunked into a _segments container. (default 5G)
      --hubic-client-id string                   Hubic Client Id
      --hubic-client-secret string               Hubic Client Secret
      --hubic-no-chunk                           Don't chunk files during streaming upload.
      --ignore-case                              Ignore case in filters (case insensitive)
      --ignore-checksum                          Skip post copy check of checksums.
      --ignore-errors                            Delete even if there are I/O errors
      --ignore-existing                          Skip all files that exist on destination
      --ignore-size                              Ignore size when skipping use mod-time or checksum.
  -I, --ignore-times                             Don't skip files that match size and time - transfer all files
      --immutable                                Do not modify files. Fail if existing files have been modified.
      --include stringArray                      Include files matching pattern
      --include-from stringArray                 Read include patterns from file
      --jottacloud-hard-delete                   Delete files permanently rather than putting them into the trash.
      --jottacloud-md5-memory-limit SizeSuffix   Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
      --jottacloud-mountpoint string             The mountpoint to use.
      --jottacloud-unlink                        Remove existing public link to file/folder with link command rather than creating.
      --jottacloud-upload-resume-limit SizeSuffix  Files bigger than this can be resumed if the upload fails. (default 10M)
      --jottacloud-user string                   User Name
  -l, --links                                    Translate symlinks to/from regular files with a '.rclonelink' extension
      --local-no-check-updated                   Don't check to see if the files change during upload
      --local-no-unicode-normalization           Don't apply unicode normalization to paths and filenames (Deprecated)
      --local-nounc string                       Disable UNC (long path names) conversion on Windows
      --log-file string                          Log everything to this file
      --log-format string                        Comma separated list of log format options (default "date,time")
      --log-level string                         Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
      --low-level-retries int                    Number of low level retries to do. (default 10)
      --max-age Duration                         Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
      --max-backlog int                          Maximum number of objects in sync or check backlog. (default 10000)
      --max-delete int                           When synchronizing, limit the number of deletes (default -1)
      --max-depth int                            If set limits the recursion depth to this. (default -1)
      --max-size SizeSuffix                      Only transfer files smaller than this in k or suffix b|k|M|G (default off)
      --max-transfer SizeSuffix                  Maximum size of data to transfer. (default off)
      --mega-debug                               Output more debug from Mega.
      --mega-hard-delete                         Delete files permanently rather than putting them into the trash.
      --mega-pass string                         Password.
      --mega-user string                         User name
      --memprofile string                        Write memory profile to file
      --min-age Duration                         Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
      --min-size SizeSuffix                      Only transfer files bigger than this in k or suffix b|k|M|G (default off)
      --modify-window duration                   Max time diff to be considered the same (default 1ns)
      --no-check-certificate                     Do not verify the server SSL certificate. Insecure.
      --no-gzip-encoding                         Don't set Accept-Encoding: gzip.
      --no-traverse                              Don't traverse destination file system on copy.
      --no-update-modtime                        Don't update destination mod-time if files identical.
  -x, --one-file-system                          Don't cross filesystem boundaries (unix/macOS only).
      --onedrive-chunk-size SizeSuffix           Chunk size to upload files with - must be multiple of 320k. (default 10M)
      --onedrive-client-id string                Microsoft App Client Id
      --onedrive-client-secret string            Microsoft App Client Secret
      --onedrive-drive-id string                 The ID of the drive to use
      --onedrive-drive-type string               The type of the drive ( personal | business | documentLibrary )
      --onedrive-expose-onenote-files            Set to make OneNote files show up in directory listings.
      --opendrive-password string                Password.
      --opendrive-username string                Username
      --pcloud-client-id string                  Pcloud App Client Id
      --pcloud-client-secret string              Pcloud App Client Secret
  -P, --progress                                 Show progress during transfer.
      --qingstor-access-key-id string            QingStor Access Key ID
      --qingstor-chunk-size SizeSuffix           Chunk size to use for uploading. (default 4M)
      --qingstor-connection-retries int          Number of connection retries. (default 3)
      --qingstor-endpoint string                 Enter an endpoint URL to connect to the QingStor API.
      --qingstor-env-auth                        Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
      --qingstor-secret-access-key string        QingStor Secret Access Key (password)
      --qingstor-upload-concurrency int          Concurrency for multipart uploads. (default 1)
      --qingstor-upload-cutoff SizeSuffix        Cutoff for switching to chunked upload (default 200M)
      --qingstor-zone string                     Zone to connect to.
  -q, --quiet                                    Print as little stuff as possible
      --rc                                       Enable the remote control server.
      --rc-addr string                           IPaddress:Port or :Port to bind server to. (default "localhost:5572")
      --rc-cert string                           SSL PEM key (concatenation of certificate and CA certificate)
      --rc-client-ca string                      Client certificate authority to verify clients with
      --rc-files string                          Path to local files to serve on the HTTP server.
      --rc-htpasswd string                       htpasswd file - if not provided no authentication is done
      --rc-key string                            SSL PEM Private key
      --rc-max-header-bytes int                  Maximum size of request header (default 4096)
      --rc-no-auth                               Don't require auth for certain methods.
      --rc-pass string                           Password for authentication.
      --rc-realm string                          realm for authentication (default "rclone")
      --rc-serve                                 Enable the serving of remote objects.
      --rc-server-read-timeout duration          Timeout for server reading data (default 1h0m0s)
      --rc-server-write-timeout duration         Timeout for server writing data (default 1h0m0s)
      --rc-user string                           User name for authentication.
      --retries int                              Retry operations this many times if they fail (default 3)
      --retries-sleep duration                   Interval between retrying operations if they fail, e.g. 500ms, 60s, 5m. (0 to disable)
      --s3-access-key-id string                  AWS Access Key ID.
      --s3-acl string                            Canned ACL used when creating buckets and storing or copying objects.
      --s3-bucket-acl string                     Canned ACL used when creating buckets.
      --s3-chunk-size SizeSuffix                 Chunk size to use for uploading. (default 5M)
      --s3-disable-checksum                      Don't store MD5 checksum with object metadata
      --s3-endpoint string                       Endpoint for S3 API.
      --s3-env-auth                              Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2) --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--s3-v2-auth If true use v2 authentication. --s3-location-constraint string Location constraint - must be set to match the Region.
--sftp-ask-password Allow asking for SFTP password when needed. --s3-provider string Choose your S3 provider.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. --s3-region string Region to connect to.
--sftp-host string SSH host to connect to --s3-secret-access-key string AWS Secret Access Key (password)
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent. --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--sftp-pass string SSH password, leave blank to use ssh-agent. --s3-session-token string An AWS session token
--sftp-path-override string Override path used by SSH connection. --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--sftp-port string SSH port, leave blank to use default (22) --s3-storage-class string The storage class to use when storing new objects in S3.
--sftp-set-modtime Set the modified time on the remote if set. (default true) --s3-upload-concurrency int Concurrency for multipart uploads. (default 4)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker. --s3-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M)
--sftp-user string SSH username, leave blank for current username, ncw --s3-v2-auth If true use v2 authentication.
--size-only Skip based on size only, not mod-time or checksum --sftp-ask-password Allow asking for SFTP password when needed.
--skip-links Don't warn about skipped symlinks. --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) --sftp-host string SSH host to connect to
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40) --sftp-key-file string Path to PEM-encoded private key file, leave blank or set key-use-agent to use ssh-agent.
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") --sftp-key-file-pass string The passphrase to decrypt the PEM-encoded private key file.
--stats-one-line Make the stats fit on one line. --sftp-key-use-agent When set forces the usage of the ssh-agent.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") --sftp-pass string SSH password, leave blank to use ssh-agent.
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) --sftp-path-override string Override path used by SSH connection.
--suffix string Suffix for use with --backup-dir. --sftp-port string SSH port, leave blank to use default (22)
--swift-auth string Authentication URL for server (OS_AUTH_URL). --sftp-set-modtime Set the modified time on the remote if set. (default true)
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) --sftp-user string SSH username, leave blank for current username, ncw
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) --size-only Skip based on size only, not mod-time or checksum
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) --skip-links Don't warn about skipped symlinks.
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form. --stats-file-name-length int Max file name length in stats. 0 for no limit (default 45)
--swift-key string API key or password (OS_PASSWORD). --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--swift-region string Region name - optional (OS_REGION_NAME) --stats-one-line Make the stats fit on one line.
--swift-storage-policy string The storage policy to use when creating a new container --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL) --streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) --suffix string Suffix for use with --backup-dir.
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) --swift-application-credential-id string Application Credential ID (OS_APPLICATION_CREDENTIAL_ID)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) --swift-application-credential-name string Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME)
--swift-user string User name to log in (OS_USERNAME). --swift-application-credential-secret string Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET)
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). --swift-auth string Authentication URL for server (OS_AUTH_URL).
--syslog Use Syslog for logging --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--timeout duration IO idle timeout (default 5m0s) --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--tpslimit float Limit HTTP transactions per second to this. --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--track-renames When synchronizing, track file renames and do a server side move if possible --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--transfers int Number of file transfers to run in parallel. (default 4) --swift-key string API key or password (OS_PASSWORD).
--union-remotes string List of space separated remotes. --swift-no-chunk Don't chunk files during streaming upload.
-u, --update Skip files that are newer on the destination. --swift-region string Region name - optional (OS_REGION_NAME)
--use-server-modtime Use server modified time instead of object metadata --swift-storage-policy string The storage policy to use when creating a new container
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.45") --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
-v, --verbose count Print lots more stuff (repeat for more) --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon) --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--webdav-pass string Password. --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--webdav-url string URL of http host to connect to --swift-user string User name to log in (OS_USERNAME).
--webdav-user string User name --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--webdav-vendor string Name of the Webdav site/service/software you are using --syslog Use Syslog for logging
--yandex-client-id string Yandex Client Id --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--yandex-client-secret string Yandex Client Secret --timeout duration IO idle timeout (default 5m0s)
--yandex-unlink Remove existing public link to file/folder with link command rather than creating. --tpslimit float Limit HTTP transactions per second to this.
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
--union-remotes string List of space separated remotes.
-u, --update Skip files that are newer on the destination.
--use-cookies Enable session cookiejar.
--use-mmap Use mmap allocator (see docs).
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
--yandex-unlink Remove existing public link to file/folder with link command rather than creating.
``` ```
### SEE ALSO ### SEE ALSO
* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends. * [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
###### Auto generated by spf13/cobra on 24-Nov-2018 ###### Auto generated by spf13/cobra on 9-Feb-2019
View File

@@ -1,5 +1,5 @@
---
-date: 2018-11-24T13:43:29Z
+date: 2019-02-09T10:42:18Z
title: "rclone version"
slug: rclone_version
url: /commands/rclone_version/
@@ -53,285 +53,303 @@ rclone version [flags]
### Options inherited from parent commands
```
      --acd-auth-url string                         Auth server URL.
      --acd-client-id string                        Amazon Application Client ID.
      --acd-client-secret string                    Amazon Application Client Secret.
      --acd-templink-threshold SizeSuffix           Files >= this size will be downloaded via their tempLink. (default 9G)
      --acd-token-url string                        Token server url.
      --acd-upload-wait-per-gb Duration             Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
      --alias-remote string                         Remote or path to alias.
      --ask-password                                Allow prompt for password for encrypted configuration. (default true)
      --auto-confirm                                If enabled, do not request console confirmation.
      --azureblob-access-tier string                Access tier of blob: hot, cool or archive.
      --azureblob-account string                    Storage Account Name (leave blank to use connection string or SAS URL)
      --azureblob-chunk-size SizeSuffix             Upload chunk size (<= 100MB). (default 4M)
      --azureblob-endpoint string                   Endpoint for the service
      --azureblob-key string                        Storage Account Key (leave blank to use connection string or SAS URL)
      --azureblob-list-chunk int                    Size of blob list. (default 5000)
      --azureblob-sas-url string                    SAS URL for container level access only
      --azureblob-upload-cutoff SizeSuffix          Cutoff for switching to chunked upload (<= 256MB). (default 256M)
      --b2-account string                           Account ID or Application Key ID
      --b2-chunk-size SizeSuffix                    Upload chunk size. Must fit in memory. (default 96M)
+     --b2-disable-checksum                         Disable checksums for large (> upload cutoff) files
      --b2-endpoint string                          Endpoint for the service.
      --b2-hard-delete                              Permanently delete files on remote removal, otherwise hide files.
      --b2-key string                               Application Key
      --b2-test-mode string                         A flag string for X-Bz-Test-Mode header for debugging.
      --b2-upload-cutoff SizeSuffix                 Cutoff for switching to chunked upload. (default 200M)
      --b2-versions                                 Include old versions in directory listings.
      --backup-dir string                           Make backups into hierarchy based in DIR.
      --bind string                                 Local address to bind to for outgoing connections, IPv4, IPv6 or name.
      --box-client-id string                        Box App Client Id.
      --box-client-secret string                    Box App Client Secret
      --box-commit-retries int                      Max number of times to try committing a multipart file. (default 100)
      --box-upload-cutoff SizeSuffix                Cutoff for switching to multipart upload (>= 50MB). (default 50M)
-     --buffer-size int                             In memory buffer size when reading files for each --transfer. (default 16M)
+     --buffer-size SizeSuffix                      In memory buffer size when reading files for each --transfer. (default 16M)
      --bwlimit BwTimetable                         Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
      --cache-chunk-clean-interval Duration         How often should the cache perform cleanups of the chunk storage. (default 1m0s)
      --cache-chunk-no-memory                       Disable the in-memory cache for storing chunks during streaming.
      --cache-chunk-path string                     Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend")
      --cache-chunk-size SizeSuffix                 The size of a chunk (partial file data). (default 5M)
      --cache-chunk-total-size SizeSuffix           The total size that the chunks can take up on the local disk. (default 10G)
      --cache-db-path string                        Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend")
      --cache-db-purge                              Clear all the cached data for this remote on start.
      --cache-db-wait-time Duration                 How long to wait for the DB to be available - 0 is unlimited (default 1s)
      --cache-dir string                            Directory rclone will use for caching. (default "$HOME/.cache/rclone")
      --cache-info-age Duration                     How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
      --cache-plex-insecure string                  Skip all certificate verifications when connecting to the Plex server
      --cache-plex-password string                  The password of the Plex user
      --cache-plex-url string                       The URL of the Plex server
      --cache-plex-username string                  The username of the Plex user
      --cache-read-retries int                      How many times to retry a read from a cache storage. (default 10)
      --cache-remote string                         Remote to cache.
      --cache-rps int                               Limits the number of requests per second to the source FS (-1 to disable) (default -1)
      --cache-tmp-upload-path string                Directory to keep temporary files until they are uploaded.
      --cache-tmp-wait-time Duration                How long should files be stored in local cache before being uploaded (default 15s)
      --cache-workers int                           How many workers should run in parallel to download chunks. (default 4)
      --cache-writes                                Cache file data on writes through the FS
      --checkers int                                Number of checkers to run in parallel. (default 8)
- -c, --checksum                                    Skip based on checksum & size, not mod-time & size
+ -c, --checksum                                    Skip based on checksum (if available) & size, not mod-time & size
      --config string                               Config file. (default "/home/ncw/.rclone.conf")
      --contimeout duration                         Connect timeout (default 1m0s)
  -L, --copy-links                                  Follow symlinks and copy the pointed to item.
      --cpuprofile string                           Write cpu profile to file
      --crypt-directory-name-encryption             Option to either encrypt directory names or leave them intact. (default true)
      --crypt-filename-encryption string            How to encrypt the filenames. (default "standard")
      --crypt-password string                       Password or pass phrase for encryption.
      --crypt-password2 string                      Password or pass phrase for salt. Optional but recommended.
      --crypt-remote string                         Remote to encrypt/decrypt.
      --crypt-show-mapping                          For all files listed show how the names encrypt.
      --delete-after                                When synchronizing, delete files on destination after transferring (default)
      --delete-before                               When synchronizing, delete files on destination before transferring
      --delete-during                               When synchronizing, delete files during transfer
      --delete-excluded                             Delete files on dest excluded from sync
      --disable string                              Disable a comma separated list of features. Use help to see a list.
      --drive-acknowledge-abuse                     Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
      --drive-allow-import-name-change              Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
      --drive-alternate-export                      Use alternate export URLs for google documents export.,
      --drive-auth-owner-only                       Only consider files owned by the authenticated user.
      --drive-chunk-size SizeSuffix                 Upload chunk size. Must a power of 2 >= 256k. (default 8M)
      --drive-client-id string                      Google Application Client Id
      --drive-client-secret string                  Google Application Client Secret
      --drive-export-formats string                 Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
      --drive-formats string                        Deprecated: see export_formats
      --drive-impersonate string                    Impersonate this user when using a service account.
      --drive-import-formats string                 Comma separated list of preferred formats for uploading Google docs.
      --drive-keep-revision-forever                 Keep new head revision of each file forever.
      --drive-list-chunk int                        Size of listing chunk 100-1000. 0 to disable. (default 1000)
+     --drive-pacer-burst int                       Number of API calls to allow without sleeping. (default 100)
+     --drive-pacer-min-sleep Duration              Minimum time to sleep between API calls. (default 100ms)
      --drive-root-folder-id string                 ID of the root folder
      --drive-scope string                          Scope that rclone should use when requesting access from drive.
      --drive-service-account-credentials string    Service Account Credentials JSON blob
      --drive-service-account-file string           Service Account Credentials JSON file path
      --drive-shared-with-me                        Only show files that are shared with me.
      --drive-skip-gdocs                            Skip google documents in all listings.
      --drive-team-drive string                     ID of the Team Drive
      --drive-trashed-only                          Only show files that are in the trash.
      --drive-upload-cutoff SizeSuffix              Cutoff for switching to chunked upload (default 8M)
      --drive-use-created-date                      Use file created date instead of modified date.,
      --drive-use-trash                             Send files to the trash instead of deleting permanently. (default true)
      --drive-v2-download-min-size SizeSuffix       If Object's are greater, use drive v2 API to download. (default off)
      --dropbox-chunk-size SizeSuffix               Upload chunk size. (< 150M). (default 48M)
      --dropbox-client-id string                    Dropbox App Client Id
      --dropbox-client-secret string                Dropbox App Client Secret
      --dropbox-impersonate string                  Impersonate this user when using a business account.
  -n, --dry-run                                     Do a trial run with no permanent changes
-     --dump string                                 List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
+     --dump DumpFlags                              List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
      --dump-bodies                                 Dump HTTP headers and bodies - may contain sensitive info
      --dump-headers                                Dump HTTP bodies - may contain sensitive info
      --exclude stringArray                         Exclude files matching pattern
      --exclude-from stringArray                    Read exclude patterns from file
      --exclude-if-present string                   Exclude directories if filename is present
      --fast-list                                   Use recursive list if available. Uses more memory but fewer transactions.
      --files-from stringArray                      Read list of source-file names from file
  -f, --filter stringArray                          Add a file-filtering rule
      --filter-from stringArray                     Read filtering patterns from a file
      --ftp-host string                             FTP host to connect to
      --ftp-pass string                             FTP password
      --ftp-port string                             FTP port, leave blank to use default (21)
      --ftp-user string                             FTP username, leave blank for current username, $USER
      --gcs-bucket-acl string                       Access Control List for new buckets.
      --gcs-client-id string                        Google Application Client Id
      --gcs-client-secret string                    Google Application Client Secret
      --gcs-location string                         Location for the newly created buckets.
      --gcs-object-acl string                       Access Control List for new objects.
      --gcs-project-number string                   Project number.
      --gcs-service-account-file string             Service Account Credentials JSON file path
      --gcs-storage-class string                    The storage class to use when storing objects in Google Cloud Storage.
      --http-url string                             URL of http host to connect to
      --hubic-chunk-size SizeSuffix                 Above this size files will be chunked into a _segments container. (default 5G)
      --hubic-client-id string                      Hubic Client Id
      --hubic-client-secret string                  Hubic Client Secret
+     --hubic-no-chunk                              Don't chunk files during streaming upload.
      --ignore-case                                 Ignore case in filters (case insensitive)
      --ignore-checksum                             Skip post copy check of checksums.
      --ignore-errors                               delete even if there are I/O errors
      --ignore-existing                             Skip all files that exist on destination
      --ignore-size                                 Ignore size when skipping use mod-time or checksum.
  -I, --ignore-times                                Don't skip files that match size and time - transfer all files
      --immutable                                   Do not modify files. Fail if existing files have been modified.
      --include stringArray                         Include files matching pattern
      --include-from stringArray                    Read include patterns from file
      --jottacloud-hard-delete                      Delete files permanently rather than putting them into the trash.
      --jottacloud-md5-memory-limit SizeSuffix      Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
      --jottacloud-mountpoint string                The mountpoint to use.
      --jottacloud-pass string                      Password.
      --jottacloud-unlink                           Remove existing public link to file/folder with link command rather than creating.
      --jottacloud-user string                      User Name
--local-no-check-updated Don't check to see if the files change during upload --jottacloud-mountpoint string The mountpoint to use.
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated) --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
--local-nounc string Disable UNC (long path names) conversion on Windows --jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fail's. (default 10M)
--log-file string Log everything to this file --jottacloud-user string User Name:
--log-format string Comma separated list of log format options (default "date,time") -l, --links Translate symlinks to/from regular files with a '.rclonelink' extension
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") --local-no-check-updated Don't check to see if the files change during upload
--low-level-retries int Number of low level retries to do. (default 10) --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) --local-nounc string Disable UNC (long path names) conversion on Windows
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000) --log-file string Log everything to this file
--max-delete int When synchronizing, limit the number of deletes (default -1) --log-format string Comma separated list of log format options (default "date,time")
--max-depth int If set limits the recursion depth to this. (default -1) --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) --low-level-retries int Number of low level retries to do. (default 10)
--max-transfer int Maximum size of data to transfer. (default off) --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--mega-debug Output more debug from Mega. --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--mega-hard-delete Delete files permanently rather than putting them into the trash. --max-delete int When synchronizing, limit the number of deletes (default -1)
--mega-pass string Password. --max-depth int If set limits the recursion depth to this. (default -1)
--mega-user string User name --max-size SizeSuffix Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--memprofile string Write memory profile to file --max-transfer SizeSuffix Maximum size of data to transfer. (default off)
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) --mega-debug Output more debug from Mega.
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) --mega-hard-delete Delete files permanently rather than putting them into the trash.
--modify-window duration Max time diff to be considered the same (default 1ns) --mega-pass string Password.
--no-check-certificate Do not verify the server SSL certificate. Insecure. --mega-user string User name
--no-gzip-encoding Don't set Accept-Encoding: gzip. --memprofile string Write memory profile to file
--no-traverse Obsolete - does nothing. --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--no-update-modtime Don't update destination mod-time if files identical. --min-size SizeSuffix Only transfer files bigger than this in k or suffix b|k|M|G (default off)
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only). --modify-window duration Max time diff to be considered the same (default 1ns)
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M) --no-check-certificate Do not verify the server SSL certificate. Insecure.
--onedrive-client-id string Microsoft App Client Id --no-gzip-encoding Don't set Accept-Encoding: gzip.
--onedrive-client-secret string Microsoft App Client Secret --no-traverse Don't traverse destination file system on copy.
--onedrive-drive-id string The ID of the drive to use --no-update-modtime Don't update destination mod-time if files identical.
--onedrive-drive-type string The type of the drive ( personal | business | documentLibrary ) -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-expose-onenote-files Set to make OneNote files show up in directory listings. --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--opendrive-password string Password. --onedrive-client-id string Microsoft App Client Id
--opendrive-username string Username --onedrive-client-secret string Microsoft App Client Secret
--pcloud-client-id string Pcloud App Client Id --onedrive-drive-id string The ID of the drive to use
--pcloud-client-secret string Pcloud App Client Secret --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
-P, --progress Show progress during transfer. --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
--qingstor-access-key-id string QingStor Access Key ID --opendrive-password string Password.
--qingstor-connection-retries int Number of connection retries. (default 3) --opendrive-username string Username
--qingstor-endpoint string Enter a endpoint URL to connection QingStor API. --pcloud-client-id string Pcloud App Client Id
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank. --pcloud-client-secret string Pcloud App Client Secret
--qingstor-secret-access-key string QingStor Secret Access Key (password) -P, --progress Show progress during transfer.
--qingstor-zone string Zone to connect to. --qingstor-access-key-id string QingStor Access Key ID
-q, --quiet Print as little stuff as possible --qingstor-chunk-size SizeSuffix Chunk size to use for uploading. (default 4M)
--rc Enable the remote control server. --qingstor-connection-retries int Number of connection retries. (default 3)
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") --qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate) --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
--rc-client-ca string Client certificate authority to verify clients with --qingstor-secret-access-key string QingStor Secret Access Key (password)
--rc-files string Path to local files to serve on the HTTP server. --qingstor-upload-concurrency int Concurrency for multipart uploads. (default 1)
--rc-htpasswd string htpasswd file - if not provided no authentication is done --qingstor-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M)
--rc-key string SSL PEM Private key --qingstor-zone string Zone to connect to.
--rc-max-header-bytes int Maximum size of request header (default 4096) -q, --quiet Print as little stuff as possible
--rc-no-auth Don't require auth for certain methods. --rc Enable the remote control server.
--rc-pass string Password for authentication. --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
--rc-realm string realm for authentication (default "rclone") --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--rc-serve Enable the serving of remote objects. --rc-client-ca string Client certificate authority to verify clients with
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) --rc-files string Path to local files to serve on the HTTP server.
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) --rc-htpasswd string htpasswd file - if not provided no authentication is done
--rc-user string User name for authentication. --rc-key string SSL PEM Private key
--retries int Retry operations this many times if they fail (default 3) --rc-max-header-bytes int Maximum size of request header (default 4096)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) --rc-no-auth Don't require auth for certain methods.
--s3-access-key-id string AWS Access Key ID. --rc-pass string Password for authentication.
--s3-acl string Canned ACL used when creating buckets and storing or copying objects. --rc-realm string realm for authentication (default "rclone")
--s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M) --rc-serve Enable the serving of remote objects.
--s3-disable-checksum Don't store MD5 checksum with object metadata --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--s3-endpoint string Endpoint for S3 API. --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). --rc-user string User name for authentication.
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true) --retries int Retry operations this many times if they fail (default 3)
--s3-location-constraint string Location constraint - must be set to match the Region. --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--s3-provider string Choose your S3 provider. --s3-access-key-id string AWS Access Key ID.
--s3-region string Region to connect to. --s3-acl string Canned ACL used when creating buckets and storing or copying objects.
--s3-secret-access-key string AWS Secret Access Key (password) --s3-bucket-acl string Canned ACL used when creating buckets.
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
--s3-session-token string An AWS session token --s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. --s3-endpoint string Endpoint for S3 API.
--s3-storage-class string The storage class to use when storing new objects in S3. --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2) --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--s3-v2-auth If true use v2 authentication. --s3-location-constraint string Location constraint - must be set to match the Region.
--sftp-ask-password Allow asking for SFTP password when needed. --s3-provider string Choose your S3 provider.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. --s3-region string Region to connect to.
--sftp-host string SSH host to connect to --s3-secret-access-key string AWS Secret Access Key (password)
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent. --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--sftp-pass string SSH password, leave blank to use ssh-agent. --s3-session-token string An AWS session token
--sftp-path-override string Override path used by SSH connection. --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--sftp-port string SSH port, leave blank to use default (22) --s3-storage-class string The storage class to use when storing new objects in S3.
--sftp-set-modtime Set the modified time on the remote if set. (default true) --s3-upload-concurrency int Concurrency for multipart uploads. (default 4)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker. --s3-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M)
--sftp-user string SSH username, leave blank for current username, ncw --s3-v2-auth If true use v2 authentication.
--size-only Skip based on size only, not mod-time or checksum --sftp-ask-password Allow asking for SFTP password when needed.
--skip-links Don't warn about skipped symlinks. --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) --sftp-host string SSH host to connect to
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40) --sftp-key-file string Path to PEM-encoded private key file, leave blank or set key-use-agent to use ssh-agent.
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") --sftp-key-file-pass string The passphrase to decrypt the PEM-encoded private key file.
--stats-one-line Make the stats fit on one line. --sftp-key-use-agent When set forces the usage of the ssh-agent.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") --sftp-pass string SSH password, leave blank to use ssh-agent.
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) --sftp-path-override string Override path used by SSH connection.
--suffix string Suffix for use with --backup-dir. --sftp-port string SSH port, leave blank to use default (22)
--swift-auth string Authentication URL for server (OS_AUTH_URL). --sftp-set-modtime Set the modified time on the remote if set. (default true)
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) --sftp-user string SSH username, leave blank for current username, ncw
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) --size-only Skip based on size only, not mod-time or checksum
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) --skip-links Don't warn about skipped symlinks.
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form. --stats-file-name-length int Max file name length in stats. 0 for no limit (default 45)
--swift-key string API key or password (OS_PASSWORD). --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--swift-region string Region name - optional (OS_REGION_NAME) --stats-one-line Make the stats fit on one line.
--swift-storage-policy string The storage policy to use when creating a new container --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL) --streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) --suffix string Suffix for use with --backup-dir.
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) --swift-application-credential-id string Application Credential ID (OS_APPLICATION_CREDENTIAL_ID)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) --swift-application-credential-name string Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME)
--swift-user string User name to log in (OS_USERNAME). --swift-application-credential-secret string Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET)
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). --swift-auth string Authentication URL for server (OS_AUTH_URL).
--syslog Use Syslog for logging --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--timeout duration IO idle timeout (default 5m0s) --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--tpslimit float Limit HTTP transactions per second to this. --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--track-renames When synchronizing, track file renames and do a server side move if possible --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--transfers int Number of file transfers to run in parallel. (default 4) --swift-key string API key or password (OS_PASSWORD).
--union-remotes string List of space separated remotes. --swift-no-chunk Don't chunk files during streaming upload.
-u, --update Skip files that are newer on the destination. --swift-region string Region name - optional (OS_REGION_NAME)
--use-server-modtime Use server modified time instead of object metadata --swift-storage-policy string The storage policy to use when creating a new container
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.45") --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
-v, --verbose count Print lots more stuff (repeat for more) --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon) --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--webdav-pass string Password. --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--webdav-url string URL of http host to connect to --swift-user string User name to log in (OS_USERNAME).
--webdav-user string User name --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--webdav-vendor string Name of the Webdav site/service/software you are using --syslog Use Syslog for logging
--yandex-client-id string Yandex Client Id --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--yandex-client-secret string Yandex Client Secret --timeout duration IO idle timeout (default 5m0s)
--yandex-unlink Remove existing public link to file/folder with link command rather than creating. --tpslimit float Limit HTTP transactions per second to this.
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
--union-remotes string List of space separated remotes.
-u, --update Skip files that are newer on the destination.
--use-cookies Enable session cookiejar.
--use-mmap Use mmap allocator (see docs).
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
--yandex-unlink Remove existing public link to file/folder with link command rather than creating.
```
### SEE ALSO
* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
###### Auto generated by spf13/cobra on 9-Feb-2019
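Every option in the listing above can also be supplied through an environment variable: as the `Config`/`Env Var` pairs in the backend docs below show, the flag name is upper-cased, dashes become underscores, and an `RCLONE_` prefix is added (so `--jottacloud-user` becomes `RCLONE_JOTTACLOUD_USER`). A minimal sketch of that naming rule, for illustration only (not code from rclone itself):

```python
def flag_to_env_var(flag: str) -> str:
    """Map an rclone flag name to its environment variable name.

    Illustration of the naming convention visible in the listing
    above; not taken from the rclone source.
    """
    name = flag.lstrip("-")  # "--jottacloud-user" -> "jottacloud-user"
    return "RCLONE_" + name.replace("-", "_").upper()

print(flag_to_env_var("--jottacloud-user"))   # RCLONE_JOTTACLOUD_USER
print(flag_to_env_var("--s3-upload-cutoff"))  # RCLONE_S3_UPLOAD_CUTOFF
```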


@@ -787,6 +787,24 @@ If Object's are greater, use drive v2 API to download.
- Type: SizeSuffix
- Default: off
#### --drive-pacer-min-sleep
Minimum time to sleep between API calls.
- Config: pacer_min_sleep
- Env Var: RCLONE_DRIVE_PACER_MIN_SLEEP
- Type: Duration
- Default: 100ms
#### --drive-pacer-burst
Number of API calls to allow without sleeping.
- Config: pacer_burst
- Env Var: RCLONE_DRIVE_PACER_BURST
- Type: int
- Default: 100
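Read together, `pacer_min_sleep` and `pacer_burst` describe rate limiting that lets a burst of API calls through and then spaces further calls out by the minimum sleep. A rough sketch of that behaviour under those assumptions (an illustration, not rclone's actual pacer):

```python
import time

class Pacer:
    """Minimal burst-then-sleep pacing sketch: up to `burst` calls pass
    freely, after which each call waits at least `min_sleep` seconds
    since the previous one. Not rclone's implementation."""

    def __init__(self, min_sleep: float = 0.1, burst: int = 100):
        self.min_sleep = min_sleep
        self.burst = burst
        self.calls = 0
        self.last = 0.0

    def wait(self) -> None:
        self.calls += 1
        if self.calls <= self.burst:
            return  # still inside the burst allowance
        delay = self.min_sleep - (time.monotonic() - self.last)
        if delay > 0:
            time.sleep(delay)
        self.last = time.monotonic()

pacer = Pacer(min_sleep=0.01, burst=2)
for _ in range(4):  # first two calls pass immediately, later ones are paced
    pacer.wait()
```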
<!--- autogenerated options stop -->
### Limitations ###


@@ -347,16 +347,26 @@ Location for the newly created buckets.
    - Multi-regional location for United States.
- "asia-east1"
    - Taiwan.
- "asia-east2"
    - Hong Kong.
- "asia-northeast1"
    - Tokyo.
- "asia-south1"
    - Mumbai.
- "asia-southeast1"
    - Singapore.
- "australia-southeast1"
    - Sydney.
- "europe-north1"
    - Finland.
- "europe-west1"
    - Belgium.
- "europe-west2"
    - London.
- "europe-west3"
    - Frankfurt.
- "europe-west4"
    - Netherlands.
- "us-central1"
    - Iowa.
- "us-east1"
@@ -365,6 +375,8 @@ Location for the newly created buckets.
    - Northern Virginia.
- "us-west1"
    - Oregon.
- "us-west2"
    - California.
#### --gcs-storage-class


@@ -142,5 +142,7 @@ URL of http host to connect to
- Examples:
    - "https://example.com"
        - Connect to example.com
    - "https://user:pass@example.com"
        - Connect to example.com using a username and password
<!--- autogenerated options stop -->
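The new example above embeds credentials directly in the URL. As an illustration of how such a URL decomposes (rclone does its own parsing internally), the standard library splits it like this:

```python
from urllib.parse import urlsplit

# Example URL from the docs above, with inline user:pass credentials.
url = "https://user:pass@example.com"
parts = urlsplit(url)
print(parts.username, parts.password, parts.hostname)  # user pass example.com
```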


@@ -169,6 +169,24 @@ default for this is 5GB which is its maximum value.
- Type: SizeSuffix
- Default: 5G
#### --hubic-no-chunk
Don't chunk files during streaming upload.
When doing streaming uploads (eg using rcat or mount) setting this
flag will cause the swift backend to not upload chunked files.
This will limit the maximum upload size to 5GB. However non chunked
files are easier to deal with and have an MD5SUM.
Rclone will still chunk files bigger than chunk_size when doing normal
copy operations.
- Config: no_chunk
- Env Var: RCLONE_HUBIC_NO_CHUNK
- Type: bool
- Default: false
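The description above boils down to a simple rule: streaming uploads are chunked unless `no_chunk` is set (capping unchunked objects at 5GB), while normal copies still chunk anything over `chunk_size`. A sketch of that decision, as an illustration rather than rclone source:

```python
FIVE_GIB = 5 * 1024 ** 3  # unchunked objects are capped at 5GB

def use_chunks(size: int, chunk_size: int, streaming: bool, no_chunk: bool) -> bool:
    """Sketch of the chunking rule described above; not rclone source."""
    if streaming:
        return not no_chunk        # no_chunk only affects streaming uploads
    return size > chunk_size       # normal copies still chunk big files

# With no_chunk set, a streamed file is sent in one piece (up to FIVE_GIB).
print(use_chunks(123, FIVE_GIB, streaming=True, no_chunk=True))               # False
print(use_chunks(6 * 1024 ** 3, FIVE_GIB, streaming=False, no_chunk=True))    # True
```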
<!--- autogenerated options stop -->
### Limitations ###


@@ -131,22 +131,13 @@ Here are the standard options specific to jottacloud (JottaCloud).
#### --jottacloud-user
User Name:
- Config: user
- Env Var: RCLONE_JOTTACLOUD_USER
- Type: string
- Default: ""
#### --jottacloud-mountpoint
The mountpoint to use.
@@ -193,6 +184,15 @@ Default is false, meaning link command will create or retrieve public link.
- Type: bool
- Default: false
#### --jottacloud-upload-resume-limit
Files bigger than this can be resumed if the upload fails.
- Config: upload_resume_limit
- Env Var: RCLONE_JOTTACLOUD_UPLOAD_RESUME_LIMIT
- Type: SizeSuffix
- Default: 10M
<!--- autogenerated options stop -->
### Limitations ###


@@ -259,6 +259,15 @@ Follow symlinks and copy the pointed to item.
- Type: bool
- Default: false
#### --links
Translate symlinks to/from regular files with a '.rclonelink' extension
- Config: links
- Env Var: RCLONE_LOCAL_LINKS
- Type: bool
- Default: false
#### --skip-links
Don't warn about skipped symlinks.


@@ -271,6 +271,9 @@ Concurrency for multipart uploads.
This is the number of chunks of the same file that are uploaded
concurrently.
NB if you set this to > 1 then the checksums of multipart uploads
become corrupted (the uploads themselves are not corrupted though).
If you are uploading small numbers of large files over a high speed link
and these uploads do not fully utilize your bandwidth, then increasing
this may help to speed up the transfers.
@@ -278,6 +281,6 @@ this may help to speed up the transfers.
- Config: upload_concurrency
- Env Var: RCLONE_QINGSTOR_UPLOAD_CONCURRENCY
- Type: int
- Default: 1
<!--- autogenerated options stop -->
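`upload_concurrency` bounds how many chunks of a single file are in flight at once. A generic sketch of that pattern with a thread pool (the `upload_chunk` helper is hypothetical, not the qingstor backend's code):

```python
from concurrent.futures import ThreadPoolExecutor

def upload_chunk(index: int, data: bytes) -> int:
    # Placeholder for a real per-chunk upload call (hypothetical).
    return len(data)

def upload_multipart(chunks, concurrency=1):
    """Upload chunks with at most `concurrency` in flight; returns bytes sent."""
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        return sum(pool.map(upload_chunk, range(len(chunks)), chunks))

print(upload_multipart([b"aa", b"bbb"], concurrency=2))  # 5
```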


@@ -226,7 +226,7 @@ The slice indices are similar to Python slices: start[:end]
start is the 0 based chunk number from the beginning of the file
to fetch inclusive. end is 0 based chunk number from the beginning
of the file to fetch exclusive.
Both values can be negative, in which case they count from the back
of the file. The value "-5:" represents the last 5 chunks of a file.
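Since the indices follow Python's own slice semantics, plain Python shows the behaviour, including the `"-5:"` form for the last five chunks:

```python
chunks = list(range(10))  # ten chunk numbers, 0..9

print(chunks[0:3])   # [0, 1, 2] - start inclusive, end exclusive
print(chunks[-5:])   # [5, 6, 7, 8, 9] - "-5:" selects the last 5 chunks
```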
@@ -477,9 +477,6 @@ This takes the following parameters
- dstFs - a remote name string eg "drive2:" for the destination
- dstRemote - a path within that remote eg "file2.txt" for the destination
Authentication is required for this call.
### operations/copyurl: Copy the URL to the object
@@ -557,9 +554,6 @@ This takes the following parameters
- dstFs - a remote name string eg "drive2:" for the destination
- dstRemote - a path within that remote eg "file2.txt" for the destination
Authentication is required for this call.
### operations/purge: Remove a directory or container and all of its contents
@@ -637,6 +631,20 @@ Only supply the options you wish to change. If an option is unknown
it will be silently ignored. Not all options will have an effect when
changed like this.
For example:
This sets DEBUG level logs (-vv)
    rclone rc options/set --json '{"main": {"LogLevel": 8}}'
And this sets INFO level logs (-v)
    rclone rc options/set --json '{"main": {"LogLevel": 7}}'
And this sets NOTICE level logs (normal without -v)
    rclone rc options/set --json '{"main": {"LogLevel": 6}}'
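The examples above imply a numeric log-level scale (8 for DEBUG, 7 for INFO, 6 for NOTICE, matching `-vv`, `-v`, and the default). A small helper that builds the same `options/set` JSON body, purely as an illustration of the payload shape:

```python
import json

LOG_LEVELS = {"DEBUG": 8, "INFO": 7, "NOTICE": 6}  # values from the examples above

def options_set_payload(level_name: str) -> str:
    """Build the JSON body shown in the options/set examples."""
    return json.dumps({"main": {"LogLevel": LOG_LEVELS[level_name]}})

print(options_set_payload("DEBUG"))  # {"main": {"LogLevel": 8}}
```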
### rc/error: This returns an error
This returns an error with the input as part of its error string.
@@ -668,8 +676,6 @@ This takes the following parameters
- srcFs - a remote name string eg "drive:src" for the source
- dstFs - a remote name string eg "drive:dst" for the destination
See the [copy command](/commands/rclone_copy/) for more information on the above.
@@ -683,8 +689,6 @@ This takes the following parameters
- dstFs - a remote name string eg "drive:dst" for the destination
- deleteEmptySrcDirs - delete empty src directories if set
See the [move command](/commands/rclone_move/) for more information on the above.
@@ -697,8 +701,6 @@ This takes the following parameters

- srcFs - a remote name string eg "drive:src" for the source
- dstFs - a remote name string eg "drive:dst" for the destination

This returns

- jobid - ID of async job to query with job/status

See the [sync command](/commands/rclone_sync/) for more information on the above.
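Since sync/copy, sync/move and sync/sync all hand back a jobid when invoked asynchronously, polling job/status works the same way for each. A hedged sketch over HTTP against the default rc address (assumes an rclone daemon started with `--rc`; the remote names are placeholders):

```python
import json
import urllib.request

RC_URL = "http://localhost:5572"  # default --rc listen address (assumption)

def rc_call(path: str, **params):
    """POST JSON parameters to an rc endpoint and decode the JSON reply."""
    req = urllib.request.Request(
        RC_URL + "/" + path,
        data=json.dumps(params).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# With a daemon running, an async sync would look like:
#   job = rc_call("sync/sync", srcFs="drive:src", dstFs="drive:dst", _async=True)
#   status = rc_call("job/status", jobid=job["jobid"])
```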
@@ -499,6 +499,9 @@ Region to connect to.

- "eu-west-2"
    - EU (London) Region
    - Needs location constraint eu-west-2.
- "eu-north-1"
    - EU (Stockholm) Region
    - Needs location constraint eu-north-1.
- "eu-central-1"
    - EU (Frankfurt) Region
    - Needs location constraint eu-central-1.
@@ -597,9 +600,9 @@ Specify if using an IBM COS On Premise.

- "s3.ams-eu-geo.objectstorage.service.networklayer.com"
    - EU Cross Region Amsterdam Private Endpoint
- "s3.eu-gb.objectstorage.softlayer.net"
    - Great Britain Endpoint
- "s3.eu-gb.objectstorage.service.networklayer.com"
    - Great Britain Private Endpoint
- "s3.ap-geo.objectstorage.softlayer.net"
    - APAC Cross Regional Endpoint
- "s3.tok-ap-geo.objectstorage.softlayer.net"
@@ -720,6 +723,8 @@ Used when creating buckets only.

    - EU (Ireland) Region.
- "eu-west-2"
    - EU (London) Region.
- "eu-north-1"
    - EU (Stockholm) Region.
- "EU"
    - EU Region.
- "ap-southeast-1"
@@ -762,7 +767,7 @@ For on-prem COS, do not make a selection from this list, hit enter

- "us-east-flex"
    - US East Region Flex
- "us-south-standard"
    - US South Region Standard
- "us-south-vault"
    - US South Region Vault
- "us-south-cold"

@@ -778,13 +783,13 @@ For on-prem COS, do not make a selection from this list, hit enter

- "eu-flex"
    - EU Cross Region Flex
- "eu-gb-standard"
    - Great Britain Standard
- "eu-gb-vault"
    - Great Britain Vault
- "eu-gb-cold"
    - Great Britain Cold
- "eu-gb-flex"
    - Great Britain Flex
- "ap-standard"
    - APAC Standard
- "ap-vault"
@@ -824,6 +829,8 @@ Leave blank if not sure. Used when creating buckets only.

Canned ACL used when creating buckets and storing or copying objects.

This ACL is used for creating objects and if bucket_acl isn't set, for creating buckets too.

For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl

Note that this ACL is applied when server side copying objects as S3
@@ -919,17 +926,43 @@ The storage class to use when storing new objects in OSS.

- Type: string
- Default: ""
- Examples:
    - ""
        - Default
    - "STANDARD"
        - Standard storage class
    - "GLACIER"
        - Archive storage mode.
    - "STANDARD_IA"
        - Infrequent access storage mode.
### Advanced Options

Here are the advanced options specific to s3 (Amazon S3 Compliant Storage Provider (AWS, Alibaba, Ceph, Digital Ocean, Dreamhost, IBM COS, Minio, etc)).
#### --s3-bucket-acl

Canned ACL used when creating buckets.

For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl

Note that this ACL is applied only when creating buckets. If it
isn't set then "acl" is used instead.

- Config: bucket_acl
- Env Var: RCLONE_S3_BUCKET_ACL
- Type: string
- Default: ""
- Examples:
    - "private"
        - Owner gets FULL_CONTROL. No one else has access rights (default).
    - "public-read"
        - Owner gets FULL_CONTROL. The AllUsers group gets READ access.
    - "public-read-write"
        - Owner gets FULL_CONTROL. The AllUsers group gets READ and WRITE access.
        - Granting this on a bucket is generally not recommended.
    - "authenticated-read"
        - Owner gets FULL_CONTROL. The AuthenticatedUsers group gets READ access.
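Put together, `acl` and `bucket_acl` can be set independently in the config file. A sketch with a hypothetical remote name, using values from the lists above:

```ini
# Hypothetical remote: objects get the "private" ACL, while any
# buckets rclone creates get "public-read" instead.
[s3-example]
type = s3
provider = AWS
acl = private
bucket_acl = public-read
```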
#### --s3-upload-cutoff

Cutoff for switching to chunked upload
@@ -329,33 +329,6 @@ User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).

- Type: string
- Default: ""
#### --swift-application-credential-id
Application Credential ID to log in - optional (v3 auth) (OS_APPLICATION_CREDENTIAL_ID).
- Config: application_credential_id
- Env Var: RCLONE_SWIFT_APPLICATION_CREDENTIAL_ID
- Type: string
- Default: ""
#### --swift-application-credential-name
Application Credential name to log in - optional (v3 auth) (OS_APPLICATION_CREDENTIAL_NAME).
- Config: application_credential_name
- Env Var: RCLONE_SWIFT_APPLICATION_CREDENTIAL_NAME
- Type: string
- Default: ""
#### --swift-application-credential-secret
Application Credential secret to log in - optional (v3 auth) (OS_APPLICATION_CREDENTIAL_SECRET).
- Config: application_credential_secret
- Env Var: RCLONE_SWIFT_APPLICATION_CREDENTIAL_SECRET
- Type: string
- Default: ""
#### --swift-domain

User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)

@@ -419,6 +392,33 @@ Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)

- Type: string
- Default: ""
#### --swift-application-credential-id
Application Credential ID (OS_APPLICATION_CREDENTIAL_ID)
- Config: application_credential_id
- Env Var: RCLONE_SWIFT_APPLICATION_CREDENTIAL_ID
- Type: string
- Default: ""
#### --swift-application-credential-name
Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME)
- Config: application_credential_name
- Env Var: RCLONE_SWIFT_APPLICATION_CREDENTIAL_NAME
- Type: string
- Default: ""
#### --swift-application-credential-secret
Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET)
- Config: application_credential_secret
- Env Var: RCLONE_SWIFT_APPLICATION_CREDENTIAL_SECRET
- Type: string
- Default: ""
#### --swift-auth-version

AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)

@@ -481,6 +481,24 @@ default for this is 5GB which is its maximum value.

- Type: SizeSuffix
- Default: 5G
#### --swift-no-chunk
Don't chunk files during streaming upload.
When doing streaming uploads (eg using rcat or mount) setting this
flag will cause the swift backend to not upload chunked files.
This will limit the maximum upload size to 5GB. However non chunked
files are easier to deal with and have an MD5SUM.
Rclone will still chunk files bigger than chunk_size when doing normal
copy operations.
- Config: no_chunk
- Env Var: RCLONE_SWIFT_NO_CHUNK
- Type: bool
- Default: false
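As a config-file sketch (hypothetical remote name; the `no_chunk` key corresponds to the `--swift-no-chunk` flag and RCLONE_SWIFT_NO_CHUNK above):

```ini
# Hypothetical swift remote with chunked streaming uploads disabled,
# trading the 5GB single-object limit for a usable MD5SUM.
[myswift]
type = swift
env_auth = true
no_chunk = true
```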
<!--- autogenerated options stop -->

### Modified time ###
@@ -1 +1 @@
-v1.45
+v1.46
@@ -1,4 +1,4 @@
 package fs

 // Version of rclone
-var Version = "v1.45-DEV"
+var Version = "v1.46"
rclone.1: File diff suppressed because it is too large