diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md index 781f7e8ed..4978d776d 100644 --- a/CONTRIBUTING.md +++ b/CONTRIBUTING.md @@ -633,6 +633,7 @@ You'll need to modify the following files - `docs/content/s3.md` - Add the provider at the top of the page. - Add a section about the provider linked from there. + - Make sure this is in alphabetical order in the `Providers` section. - Add a transcript of a trial `rclone config` session - Edit the transcript to remove things which might change in subsequent versions - **Do not** alter or add to the autogenerated parts of `s3.md` diff --git a/docs/content/s3.md b/docs/content/s3.md index 8b439b4d9..c48462c43 100644 --- a/docs/content/s3.md +++ b/docs/content/s3.md @@ -2530,6 +2530,224 @@ endpoint = http://[IP of Snowball]:8080 upload_cutoff = 0 ``` +### Alibaba OSS {#alibaba-oss} + +Here is an example of making an [Alibaba Cloud (Aliyun) OSS](https://www.alibabacloud.com/product/oss/) +configuration. First run: + + rclone config + +This will guide you through an interactive setup process. + +``` +No remotes found, make a new one? +n) New remote +s) Set configuration password +q) Quit config +n/s/q> n +name> oss +Type of storage to configure. +Enter a string value. Press Enter for the default (""). +Choose a number from below, or type in your own value +[snip] +XX / Amazon S3 Compliant Storage Providers including AWS, ... + \ "s3" +[snip] +Storage> s3 +Choose your S3 provider. +Enter a string value. Press Enter for the default (""). +Choose a number from below, or type in your own value + 1 / Amazon Web Services (AWS) S3 + \ "AWS" + 2 / Alibaba Cloud Object Storage System (OSS) formerly Aliyun + \ "Alibaba" + 3 / Ceph Object Storage + \ "Ceph" +[snip] +provider> Alibaba +Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). +Only applies if access_key_id and secret_access_key is blank. +Enter a boolean value (true or false). Press Enter for the default ("false"). +Choose a number from below, or type in your own value + 1 / Enter AWS credentials in the next step + \ "false" + 2 / Get AWS credentials from the environment (env vars or IAM) + \ "true" +env_auth> 1 +AWS Access Key ID. +Leave blank for anonymous access or runtime credentials. +Enter a string value. Press Enter for the default (""). +access_key_id> accesskeyid +AWS Secret Access Key (password) +Leave blank for anonymous access or runtime credentials. +Enter a string value. Press Enter for the default (""). +secret_access_key> secretaccesskey +Endpoint for OSS API. +Enter a string value. Press Enter for the default (""). +Choose a number from below, or type in your own value + 1 / East China 1 (Hangzhou) + \ "oss-cn-hangzhou.aliyuncs.com" + 2 / East China 2 (Shanghai) + \ "oss-cn-shanghai.aliyuncs.com" + 3 / North China 1 (Qingdao) + \ "oss-cn-qingdao.aliyuncs.com" +[snip] +endpoint> 1 +Canned ACL used when creating buckets and storing or copying objects. + +Note that this ACL is applied when server-side copying objects as S3 +doesn't copy the ACL from the source but rather writes a fresh one. +Enter a string value. Press Enter for the default (""). +Choose a number from below, or type in your own value + 1 / Owner gets FULL_CONTROL. No one else has access rights (default). + \ "private" + 2 / Owner gets FULL_CONTROL. The AllUsers group gets READ access. + \ "public-read" + / Owner gets FULL_CONTROL. The AllUsers group gets READ and WRITE access. +[snip] +acl> 1 +The storage class to use when storing new objects in OSS. +Enter a string value. 
Press Enter for the default ("").
+Choose a number from below, or type in your own value
+ 1 / Default
+   \ ""
+ 2 / Standard storage class
+   \ "STANDARD"
+ 3 / Archive storage mode.
+   \ "GLACIER"
+ 4 / Infrequent access storage mode.
+   \ "STANDARD_IA"
+storage_class> 1
+Edit advanced config? (y/n)
+y) Yes
+n) No
+y/n> n
+Remote config
+--------------------
+[oss]
+type = s3
+provider = Alibaba
+env_auth = false
+access_key_id = accesskeyid
+secret_access_key = secretaccesskey
+endpoint = oss-cn-hangzhou.aliyuncs.com
+acl = private
+storage_class = Standard
+--------------------
+y) Yes this is OK
+e) Edit this remote
+d) Delete this remote
+y/e/d> y
+```
+
+### ArvanCloud {#arvan-cloud}
+
+[ArvanCloud Object Storage](https://www.arvancloud.com/en/products/cloud-storage) goes beyond limited traditional file storage:
+it gives you access to backup and archived files and allows sharing.
+Files such as in-app profile images, images sent by users or scanned documents can be stored securely and easily in the Object Storage service.
+
+ArvanCloud provides an S3 interface which can be configured for use with
+rclone like this.
+
+```
+No remotes found, make a new one?
+n) New remote
+s) Set configuration password
+n/s> n
+name> ArvanCloud
+Type of storage to configure.
+Choose a number from below, or type in your own value
+[snip]
+XX / Amazon S3 (also Dreamhost, Ceph, ChinaMobile, ArvanCloud, Liara, Minio)
+   \ "s3"
+[snip]
+Storage> s3
+Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). Only applies if access_key_id and secret_access_key is blank.
+Choose a number from below, or type in your own value
+ 1 / Enter AWS credentials in the next step
+   \ "false"
+ 2 / Get AWS credentials from the environment (env vars or IAM)
+   \ "true"
+env_auth> 1
+AWS Access Key ID - leave blank for anonymous access or runtime credentials.
+access_key_id> YOURACCESSKEY
+AWS Secret Access Key (password) - leave blank for anonymous access or runtime credentials.
+secret_access_key> YOURSECRETACCESSKEY
+Region to connect to.
+Choose a number from below, or type in your own value
+   / The default endpoint - a good choice if you are unsure.
+ 1 | US Region, Northern Virginia, or Pacific Northwest.
+   | Leave location constraint empty.
+   \ "us-east-1"
+[snip]
+region>
+Endpoint for S3 API.
+Leave blank if using ArvanCloud to use the default endpoint for the region.
+Specify if using an S3 clone such as Ceph.
+endpoint> s3.arvanstorage.com
+Location constraint - must be set to match the Region. Used when creating buckets only.
+Choose a number from below, or type in your own value
+ 1 / Empty for Iran-Tehran Region.
+   \ ""
+[snip]
+location_constraint>
+Canned ACL used when creating buckets and/or storing objects in S3.
+For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
+Choose a number from below, or type in your own value
+ 1 / Owner gets FULL_CONTROL. No one else has access rights (default).
+   \ "private"
+[snip]
+acl>
+The server-side encryption algorithm used when storing this object in S3.
+Choose a number from below, or type in your own value
+ 1 / None
+   \ ""
+ 2 / AES256
+   \ "AES256"
+server_side_encryption>
+The storage class to use when storing objects in S3.
+Choose a number from below, or type in your own value + 1 / Default + \ "" + 2 / Standard storage class + \ "STANDARD" +storage_class> +Remote config +-------------------- +[ArvanCloud] +env_auth = false +access_key_id = YOURACCESSKEY +secret_access_key = YOURSECRETACCESSKEY +region = ir-thr-at1 +endpoint = s3.arvanstorage.com +location_constraint = +acl = +server_side_encryption = +storage_class = +-------------------- +y) Yes this is OK +e) Edit this remote +d) Delete this remote +y/e/d> y +``` + +This will leave the config file looking like this. + +``` +[ArvanCloud] +type = s3 +provider = ArvanCloud +env_auth = false +access_key_id = YOURACCESSKEY +secret_access_key = YOURSECRETACCESSKEY +region = +endpoint = s3.arvanstorage.com +location_constraint = +acl = +server_side_encryption = +storage_class = +``` + ### Ceph [Ceph](https://ceph.com/) is an open-source, unified, distributed @@ -2587,6 +2805,256 @@ removed). Because this is a json dump, it is encoding the `/` as `\/`, so if you use the secret key as `xxxxxx/xxxx` it will work fine. +### China Mobile Ecloud Elastic Object Storage (EOS) {#china-mobile-ecloud-eos} + +Here is an example of making an [China Mobile Ecloud Elastic Object Storage (EOS)](https:///ecloud.10086.cn/home/product-introduction/eos/) +configuration. First run: + + rclone config + +This will guide you through an interactive setup process. + +``` +No remotes found, make a new one? +n) New remote +s) Set configuration password +q) Quit config +n/s/q> n +name> ChinaMobile +Option Storage. +Type of storage to configure. +Choose a number from below, or type in your own value. + ... +XX / Amazon S3 Compliant Storage Providers including AWS, ... + \ (s3) + ... +Storage> s3 +Option provider. +Choose your S3 provider. +Choose a number from below, or type in your own value. +Press Enter to leave empty. + ... + 4 / China Mobile Ecloud Elastic Object Storage (EOS) + \ (ChinaMobile) + ... +provider> ChinaMobile +Option env_auth. +Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). +Only applies if access_key_id and secret_access_key is blank. +Choose a number from below, or type in your own boolean value (true or false). +Press Enter for the default (false). + 1 / Enter AWS credentials in the next step. + \ (false) + 2 / Get AWS credentials from the environment (env vars or IAM). + \ (true) +env_auth> +Option access_key_id. +AWS Access Key ID. +Leave blank for anonymous access or runtime credentials. +Enter a value. Press Enter to leave empty. +access_key_id> accesskeyid +Option secret_access_key. +AWS Secret Access Key (password). +Leave blank for anonymous access or runtime credentials. +Enter a value. Press Enter to leave empty. +secret_access_key> secretaccesskey +Option endpoint. +Endpoint for China Mobile Ecloud Elastic Object Storage (EOS) API. +Choose a number from below, or type in your own value. +Press Enter to leave empty. + / The default endpoint - a good choice if you are unsure. 
+ 1 | East China (Suzhou) + \ (eos-wuxi-1.cmecloud.cn) + 2 / East China (Jinan) + \ (eos-jinan-1.cmecloud.cn) + 3 / East China (Hangzhou) + \ (eos-ningbo-1.cmecloud.cn) + 4 / East China (Shanghai-1) + \ (eos-shanghai-1.cmecloud.cn) + 5 / Central China (Zhengzhou) + \ (eos-zhengzhou-1.cmecloud.cn) + 6 / Central China (Changsha-1) + \ (eos-hunan-1.cmecloud.cn) + 7 / Central China (Changsha-2) + \ (eos-zhuzhou-1.cmecloud.cn) + 8 / South China (Guangzhou-2) + \ (eos-guangzhou-1.cmecloud.cn) + 9 / South China (Guangzhou-3) + \ (eos-dongguan-1.cmecloud.cn) +10 / North China (Beijing-1) + \ (eos-beijing-1.cmecloud.cn) +11 / North China (Beijing-2) + \ (eos-beijing-2.cmecloud.cn) +12 / North China (Beijing-3) + \ (eos-beijing-4.cmecloud.cn) +13 / North China (Huhehaote) + \ (eos-huhehaote-1.cmecloud.cn) +14 / Southwest China (Chengdu) + \ (eos-chengdu-1.cmecloud.cn) +15 / Southwest China (Chongqing) + \ (eos-chongqing-1.cmecloud.cn) +16 / Southwest China (Guiyang) + \ (eos-guiyang-1.cmecloud.cn) +17 / Nouthwest China (Xian) + \ (eos-xian-1.cmecloud.cn) +18 / Yunnan China (Kunming) + \ (eos-yunnan.cmecloud.cn) +19 / Yunnan China (Kunming-2) + \ (eos-yunnan-2.cmecloud.cn) +20 / Tianjin China (Tianjin) + \ (eos-tianjin-1.cmecloud.cn) +21 / Jilin China (Changchun) + \ (eos-jilin-1.cmecloud.cn) +22 / Hubei China (Xiangyan) + \ (eos-hubei-1.cmecloud.cn) +23 / Jiangxi China (Nanchang) + \ (eos-jiangxi-1.cmecloud.cn) +24 / Gansu China (Lanzhou) + \ (eos-gansu-1.cmecloud.cn) +25 / Shanxi China (Taiyuan) + \ (eos-shanxi-1.cmecloud.cn) +26 / Liaoning China (Shenyang) + \ (eos-liaoning-1.cmecloud.cn) +27 / Hebei China (Shijiazhuang) + \ (eos-hebei-1.cmecloud.cn) +28 / Fujian China (Xiamen) + \ (eos-fujian-1.cmecloud.cn) +29 / Guangxi China (Nanning) + \ (eos-guangxi-1.cmecloud.cn) +30 / Anhui China (Huainan) + \ (eos-anhui-1.cmecloud.cn) +endpoint> 1 +Option location_constraint. +Location constraint - must match endpoint. +Used when creating buckets only. +Choose a number from below, or type in your own value. +Press Enter to leave empty. + 1 / East China (Suzhou) + \ (wuxi1) + 2 / East China (Jinan) + \ (jinan1) + 3 / East China (Hangzhou) + \ (ningbo1) + 4 / East China (Shanghai-1) + \ (shanghai1) + 5 / Central China (Zhengzhou) + \ (zhengzhou1) + 6 / Central China (Changsha-1) + \ (hunan1) + 7 / Central China (Changsha-2) + \ (zhuzhou1) + 8 / South China (Guangzhou-2) + \ (guangzhou1) + 9 / South China (Guangzhou-3) + \ (dongguan1) +10 / North China (Beijing-1) + \ (beijing1) +11 / North China (Beijing-2) + \ (beijing2) +12 / North China (Beijing-3) + \ (beijing4) +13 / North China (Huhehaote) + \ (huhehaote1) +14 / Southwest China (Chengdu) + \ (chengdu1) +15 / Southwest China (Chongqing) + \ (chongqing1) +16 / Southwest China (Guiyang) + \ (guiyang1) +17 / Nouthwest China (Xian) + \ (xian1) +18 / Yunnan China (Kunming) + \ (yunnan) +19 / Yunnan China (Kunming-2) + \ (yunnan2) +20 / Tianjin China (Tianjin) + \ (tianjin1) +21 / Jilin China (Changchun) + \ (jilin1) +22 / Hubei China (Xiangyan) + \ (hubei1) +23 / Jiangxi China (Nanchang) + \ (jiangxi1) +24 / Gansu China (Lanzhou) + \ (gansu1) +25 / Shanxi China (Taiyuan) + \ (shanxi1) +26 / Liaoning China (Shenyang) + \ (liaoning1) +27 / Hebei China (Shijiazhuang) + \ (hebei1) +28 / Fujian China (Xiamen) + \ (fujian1) +29 / Guangxi China (Nanning) + \ (guangxi1) +30 / Anhui China (Huainan) + \ (anhui1) +location_constraint> 1 +Option acl. +Canned ACL used when creating buckets and storing or copying objects. 
+This ACL is used for creating objects and if bucket_acl isn't set, for creating buckets too. +For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl +Note that this ACL is applied when server-side copying objects as S3 +doesn't copy the ACL from the source but rather writes a fresh one. +Choose a number from below, or type in your own value. +Press Enter to leave empty. + / Owner gets FULL_CONTROL. + 1 | No one else has access rights (default). + \ (private) + / Owner gets FULL_CONTROL. + 2 | The AllUsers group gets READ access. + \ (public-read) + / Owner gets FULL_CONTROL. + 3 | The AllUsers group gets READ and WRITE access. + | Granting this on a bucket is generally not recommended. + \ (public-read-write) + / Owner gets FULL_CONTROL. + 4 | The AuthenticatedUsers group gets READ access. + \ (authenticated-read) + / Object owner gets FULL_CONTROL. +acl> private +Option server_side_encryption. +The server-side encryption algorithm used when storing this object in S3. +Choose a number from below, or type in your own value. +Press Enter to leave empty. + 1 / None + \ () + 2 / AES256 + \ (AES256) +server_side_encryption> +Option storage_class. +The storage class to use when storing new objects in ChinaMobile. +Choose a number from below, or type in your own value. +Press Enter to leave empty. + 1 / Default + \ () + 2 / Standard storage class + \ (STANDARD) + 3 / Archive storage mode + \ (GLACIER) + 4 / Infrequent access storage mode + \ (STANDARD_IA) +storage_class> +Edit advanced config? +y) Yes +n) No (default) +y/n> n +-------------------- +[ChinaMobile] +type = s3 +provider = ChinaMobile +access_key_id = accesskeyid +secret_access_key = secretaccesskey +endpoint = eos-wuxi-1.cmecloud.cn +location_constraint = wuxi1 +acl = private +-------------------- +y) Yes this is OK (default) +e) Edit this remote +d) Delete this remote +y/e/d> y +``` + ### Cloudflare R2 {#cloudflare-r2} [Cloudflare R2](https://blog.cloudflare.com/r2-open-beta/) Storage @@ -2699,6 +3167,52 @@ does. If this is causing a problem then upload the files with A consequence of this is that `Content-Encoding: gzip` will never appear in the metadata on Cloudflare. +### DigitalOcean Spaces + +[Spaces](https://www.digitalocean.com/products/object-storage/) is an [S3-interoperable](https://developers.digitalocean.com/documentation/spaces/) object storage service from cloud provider DigitalOcean. + +To connect to DigitalOcean Spaces you will need an access key and secret key. These can be retrieved on the "[Applications & API](https://cloud.digitalocean.com/settings/api/tokens)" page of the DigitalOcean control panel. They will be needed when prompted by `rclone config` for your `access_key_id` and `secret_access_key`. + +When prompted for a `region` or `location_constraint`, press enter to use the default value. The region must be included in the `endpoint` setting (e.g. `nyc3.digitaloceanspaces.com`). The default values can be used for other settings. 
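+
+As an alternative to the interactive walkthrough below, the remote can also be
+created in one step with `rclone config create`. This is only a sketch: the
+remote name `spaces`, the placeholder keys and the `nyc3` endpoint are
+examples, so substitute your own values.
+
+```
+# Non-interactive setup (all values below are placeholders)
+rclone config create spaces s3 \
+    provider=DigitalOcean \
+    env_auth=false \
+    access_key_id=YOUR_ACCESS_KEY \
+    secret_access_key=YOUR_SECRET_KEY \
+    endpoint=nyc3.digitaloceanspaces.com \
+    acl=private
+```
+
+After that, `rclone lsd spaces:` should list your existing Spaces (buckets).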
+ +Going through the whole process of creating a new remote by running `rclone config`, each prompt should be answered as shown below: + +``` +Storage> s3 +env_auth> 1 +access_key_id> YOUR_ACCESS_KEY +secret_access_key> YOUR_SECRET_KEY +region> +endpoint> nyc3.digitaloceanspaces.com +location_constraint> +acl> +storage_class> +``` + +The resulting configuration file should look like: + +``` +[spaces] +type = s3 +provider = DigitalOcean +env_auth = false +access_key_id = YOUR_ACCESS_KEY +secret_access_key = YOUR_SECRET_KEY +region = +endpoint = nyc3.digitaloceanspaces.com +location_constraint = +acl = +server_side_encryption = +storage_class = +``` + +Once configured, you can create a new Space and begin copying files. For example: + +``` +rclone mkdir spaces:my-new-space +rclone copy /path/to/files spaces:my-new-space +``` + ### Dreamhost Dreamhost [DreamObjects](https://www.dreamhost.com/cloud/storage/) is @@ -2822,52 +3336,6 @@ endpoint = https://storage.googleapis.com This is Google bug [#312292516](https://issuetracker.google.com/u/0/issues/312292516). -### DigitalOcean Spaces - -[Spaces](https://www.digitalocean.com/products/object-storage/) is an [S3-interoperable](https://developers.digitalocean.com/documentation/spaces/) object storage service from cloud provider DigitalOcean. - -To connect to DigitalOcean Spaces you will need an access key and secret key. These can be retrieved on the "[Applications & API](https://cloud.digitalocean.com/settings/api/tokens)" page of the DigitalOcean control panel. They will be needed when prompted by `rclone config` for your `access_key_id` and `secret_access_key`. - -When prompted for a `region` or `location_constraint`, press enter to use the default value. The region must be included in the `endpoint` setting (e.g. `nyc3.digitaloceanspaces.com`). The default values can be used for other settings. - -Going through the whole process of creating a new remote by running `rclone config`, each prompt should be answered as shown below: - -``` -Storage> s3 -env_auth> 1 -access_key_id> YOUR_ACCESS_KEY -secret_access_key> YOUR_SECRET_KEY -region> -endpoint> nyc3.digitaloceanspaces.com -location_constraint> -acl> -storage_class> -``` - -The resulting configuration file should look like: - -``` -[spaces] -type = s3 -provider = DigitalOcean -env_auth = false -access_key_id = YOUR_ACCESS_KEY -secret_access_key = YOUR_SECRET_KEY -region = -endpoint = nyc3.digitaloceanspaces.com -location_constraint = -acl = -server_side_encryption = -storage_class = -``` - -Once configured, you can create a new Space and begin copying files. For example: - -``` -rclone mkdir spaces:my-new-space -rclone copy /path/to/files spaces:my-new-space -``` - ### Huawei OBS {#huawei-obs} Object Storage Service (OBS) provides stable, secure, efficient, and easy-to-use cloud storage that lets you store virtually any volume of unstructured data in any format and access it from anywhere. @@ -3462,1486 +3930,6 @@ rclone ls ionos-fra:my-bucket rclone copy ionos-fra:my-bucket/file.txt ``` -### Minio - -[Minio](https://minio.io/) is an object storage server built for cloud application developers and devops. - -It is very easy to install and provides an S3 compatible server which can be used by rclone. - -To use it, install Minio following the instructions [here](https://docs.minio.io/docs/minio-quickstart-guide). 
- -When it configures itself Minio will print something like this - -``` -Endpoint: http://192.168.1.106:9000 http://172.23.0.1:9000 -AccessKey: USWUXHGYZQYFYFFIT3RE -SecretKey: MOJRH0mkL1IPauahWITSVvyDrQbEEIwljvmxdq03 -Region: us-east-1 -SQS ARNs: arn:minio:sqs:us-east-1:1:redis arn:minio:sqs:us-east-1:2:redis - -Browser Access: - http://192.168.1.106:9000 http://172.23.0.1:9000 - -Command-line Access: https://docs.minio.io/docs/minio-client-quickstart-guide - $ mc config host add myminio http://192.168.1.106:9000 USWUXHGYZQYFYFFIT3RE MOJRH0mkL1IPauahWITSVvyDrQbEEIwljvmxdq03 - -Object API (Amazon S3 compatible): - Go: https://docs.minio.io/docs/golang-client-quickstart-guide - Java: https://docs.minio.io/docs/java-client-quickstart-guide - Python: https://docs.minio.io/docs/python-client-quickstart-guide - JavaScript: https://docs.minio.io/docs/javascript-client-quickstart-guide - .NET: https://docs.minio.io/docs/dotnet-client-quickstart-guide - -Drive Capacity: 26 GiB Free, 165 GiB Total -``` - -These details need to go into `rclone config` like this. Note that it -is important to put the region in as stated above. - -``` -env_auth> 1 -access_key_id> USWUXHGYZQYFYFFIT3RE -secret_access_key> MOJRH0mkL1IPauahWITSVvyDrQbEEIwljvmxdq03 -region> us-east-1 -endpoint> http://192.168.1.106:9000 -location_constraint> -server_side_encryption> -``` - -Which makes the config file look like this - -``` -[minio] -type = s3 -provider = Minio -env_auth = false -access_key_id = USWUXHGYZQYFYFFIT3RE -secret_access_key = MOJRH0mkL1IPauahWITSVvyDrQbEEIwljvmxdq03 -region = us-east-1 -endpoint = http://192.168.1.106:9000 -location_constraint = -server_side_encryption = -``` - -So once set up, for example, to copy files into a bucket - -``` -rclone copy /path/to/files minio:bucket -``` - -### Outscale - -[OUTSCALE Object Storage (OOS)](https://en.outscale.com/storage/outscale-object-storage/) is an enterprise-grade, S3-compatible storage service provided by OUTSCALE, a brand of Dassault Systèmes. For more information about OOS, see the [official documentation](https://docs.outscale.com/en/userguide/OUTSCALE-Object-Storage-OOS.html). - -Here is an example of an OOS configuration that you can paste into your rclone configuration file: - -``` -[outscale] -type = s3 -provider = Outscale -env_auth = false -access_key_id = ABCDEFGHIJ0123456789 -secret_access_key = XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX -region = eu-west-2 -endpoint = oos.eu-west-2.outscale.com -acl = private -``` - -You can also run `rclone config` to go through the interactive setup process: - -``` -No remotes found, make a new one? -n) New remote -s) Set configuration password -q) Quit config -n/s/q> n -``` - -``` -Enter name for new remote. -name> outscale -``` - -``` -Option Storage. -Type of storage to configure. -Choose a number from below, or type in your own value. -[snip] - X / Amazon S3 Compliant Storage Providers including AWS, ...Outscale, ...and others - \ (s3) -[snip] -Storage> outscale -``` - -``` -Option provider. -Choose your S3 provider. -Choose a number from below, or type in your own value. -Press Enter to leave empty. -[snip] -XX / OUTSCALE Object Storage (OOS) - \ (Outscale) -[snip] -provider> Outscale -``` - -``` -Option env_auth. -Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). -Only applies if access_key_id and secret_access_key is blank. -Choose a number from below, or type in your own boolean value (true or false). -Press Enter for the default (false). 
- 1 / Enter AWS credentials in the next step. - \ (false) - 2 / Get AWS credentials from the environment (env vars or IAM). - \ (true) -env_auth> -``` - -``` -Option access_key_id. -AWS Access Key ID. -Leave blank for anonymous access or runtime credentials. -Enter a value. Press Enter to leave empty. -access_key_id> ABCDEFGHIJ0123456789 -``` - -``` -Option secret_access_key. -AWS Secret Access Key (password). -Leave blank for anonymous access or runtime credentials. -Enter a value. Press Enter to leave empty. -secret_access_key> XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX -``` - -``` -Option region. -Region where your bucket will be created and your data stored. -Choose a number from below, or type in your own value. -Press Enter to leave empty. - 1 / Paris, France - \ (eu-west-2) - 2 / New Jersey, USA - \ (us-east-2) - 3 / California, USA - \ (us-west-1) - 4 / SecNumCloud, Paris, France - \ (cloudgouv-eu-west-1) - 5 / Tokyo, Japan - \ (ap-northeast-1) -region> 1 -``` - -``` -Option endpoint. -Endpoint for S3 API. -Required when using an S3 clone. -Choose a number from below, or type in your own value. -Press Enter to leave empty. - 1 / Outscale EU West 2 (Paris) - \ (oos.eu-west-2.outscale.com) - 2 / Outscale US east 2 (New Jersey) - \ (oos.us-east-2.outscale.com) - 3 / Outscale EU West 1 (California) - \ (oos.us-west-1.outscale.com) - 4 / Outscale SecNumCloud (Paris) - \ (oos.cloudgouv-eu-west-1.outscale.com) - 5 / Outscale AP Northeast 1 (Japan) - \ (oos.ap-northeast-1.outscale.com) -endpoint> 1 -``` - -``` -Option acl. -Canned ACL used when creating buckets and storing or copying objects. -This ACL is used for creating objects and if bucket_acl isn't set, for creating buckets too. -For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl -Note that this ACL is applied when server-side copying objects as S3 -doesn't copy the ACL from the source but rather writes a fresh one. -If the acl is an empty string then no X-Amz-Acl: header is added and -the default (private) will be used. -Choose a number from below, or type in your own value. -Press Enter to leave empty. - / Owner gets FULL_CONTROL. - 1 | No one else has access rights (default). - \ (private) -[snip] -acl> 1 -``` - -``` -Edit advanced config? -y) Yes -n) No (default) -y/n> n -``` - -``` -Configuration complete. -Options: -- type: s3 -- provider: Outscale -- access_key_id: ABCDEFGHIJ0123456789 -- secret_access_key: XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX -- endpoint: oos.eu-west-2.outscale.com -Keep this "outscale" remote? -y) Yes this is OK (default) -e) Edit this remote -d) Delete this remote -y/e/d> y -``` - -### OVHcloud {#ovhcloud} - -[OVHcloud Object Storage](https://www.ovhcloud.com/en-ie/public-cloud/object-storage/) -is an S3-compatible general-purpose object storage platform available in all OVHcloud regions. -To use the platform, you will need an access key and secret key. To know more about it and how -to interact with the platform, take a look at the [documentation](https://ovh.to/8stqhuo). - -Here is an example of making an OVHcloud Object Storage configuration with `rclone config`: - -``` -No remotes found, make a new one? -n) New remote -s) Set configuration password -q) Quit config -n/s/q> n - -Enter name for new remote. -name> ovhcloud-rbx - -Option Storage. -Type of storage to configure. -Choose a number from below, or type in your own value. -[...] 
- XX / Amazon S3 Compliant Storage Providers including AWS, Alibaba, ArvanCloud, Ceph, ChinaMobile, Cloudflare, DigitalOcean, Dreamhost, GCS, HuaweiOBS, IBMCOS, IDrive, IONOS, LyveCloud, Leviia, Liara, Linode, Magalu, Minio, Netease, Outscale, OVHcloud, Petabox, RackCorp, Rclone, Scaleway, SeaweedFS, Selectel, StackPath, Storj, Synology, TencentCOS, Wasabi, Qiniu and others - \ (s3) -[...] -Storage> s3 - -Option provider. -Choose your S3 provider. -Choose a number from below, or type in your own value. -Press Enter to leave empty. -[...] -XX / OVHcloud Object Storage - \ (OVHcloud) -[...] -provider> OVHcloud - -Option env_auth. -Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). -Only applies if access_key_id and secret_access_key is blank. -Choose a number from below, or type in your own boolean value (true or false). -Press Enter for the default (false). - 1 / Enter AWS credentials in the next step. - \ (false) - 2 / Get AWS credentials from the environment (env vars or IAM). - \ (true) -env_auth> 1 - -Option access_key_id. -AWS Access Key ID. -Leave blank for anonymous access or runtime credentials. -Enter a value. Press Enter to leave empty. -access_key_id> my_access - -Option secret_access_key. -AWS Secret Access Key (password). -Leave blank for anonymous access or runtime credentials. -Enter a value. Press Enter to leave empty. -secret_access_key> my_secret - -Option region. -Region where your bucket will be created and your data stored. -Choose a number from below, or type in your own value. -Press Enter to leave empty. - 1 / Gravelines, France - \ (gra) - 2 / Roubaix, France - \ (rbx) - 3 / Strasbourg, France - \ (sbg) - 4 / Paris, France (3AZ) - \ (eu-west-par) - 5 / Frankfurt, Germany - \ (de) - 6 / London, United Kingdom - \ (uk) - 7 / Warsaw, Poland - \ (waw) - 8 / Beauharnois, Canada - \ (bhs) - 9 / Toronto, Canada - \ (ca-east-tor) -10 / Singapore - \ (sgp) -11 / Sydney, Australia - \ (ap-southeast-syd) -12 / Mumbai, India - \ (ap-south-mum) -13 / Vint Hill, Virginia, USA - \ (us-east-va) -14 / Hillsboro, Oregon, USA - \ (us-west-or) -15 / Roubaix, France (Cold Archive) - \ (rbx-archive) -region> 2 - -Option endpoint. -Endpoint for OVHcloud Object Storage. -Choose a number from below, or type in your own value. -Press Enter to leave empty. - 1 / OVHcloud Gravelines, France - \ (s3.gra.io.cloud.ovh.net) - 2 / OVHcloud Roubaix, France - \ (s3.rbx.io.cloud.ovh.net) - 3 / OVHcloud Strasbourg, France - \ (s3.sbg.io.cloud.ovh.net) - 4 / OVHcloud Paris, France (3AZ) - \ (s3.eu-west-par.io.cloud.ovh.net) - 5 / OVHcloud Frankfurt, Germany - \ (s3.de.io.cloud.ovh.net) - 6 / OVHcloud London, United Kingdom - \ (s3.uk.io.cloud.ovh.net) - 7 / OVHcloud Warsaw, Poland - \ (s3.waw.io.cloud.ovh.net) - 8 / OVHcloud Beauharnois, Canada - \ (s3.bhs.io.cloud.ovh.net) - 9 / OVHcloud Toronto, Canada - \ (s3.ca-east-tor.io.cloud.ovh.net) -10 / OVHcloud Singapore - \ (s3.sgp.io.cloud.ovh.net) -11 / OVHcloud Sydney, Australia - \ (s3.ap-southeast-syd.io.cloud.ovh.net) -12 / OVHcloud Mumbai, India - \ (s3.ap-south-mum.io.cloud.ovh.net) -13 / OVHcloud Vint Hill, Virginia, USA - \ (s3.us-east-va.io.cloud.ovh.us) -14 / OVHcloud Hillsboro, Oregon, USA - \ (s3.us-west-or.io.cloud.ovh.us) -15 / OVHcloud Roubaix, France (Cold Archive) - \ (s3.rbx-archive.io.cloud.ovh.net) -endpoint> 2 - -Option acl. -Canned ACL used when creating buckets and storing or copying objects. -This ACL is used for creating objects and if bucket_acl isn't set, for creating buckets too. 
-For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl -Note that this ACL is applied when server-side copying objects as S3 -doesn't copy the ACL from the source but rather writes a fresh one. -If the acl is an empty string then no X-Amz-Acl: header is added and -the default (private) will be used. -Choose a number from below, or type in your own value. -Press Enter to leave empty. - / Owner gets FULL_CONTROL. - 1 | No one else has access rights (default). - \ (private) - / Owner gets FULL_CONTROL. - 2 | The AllUsers group gets READ access. - \ (public-read) - / Owner gets FULL_CONTROL. - 3 | The AllUsers group gets READ and WRITE access. - | Granting this on a bucket is generally not recommended. - \ (public-read-write) - / Owner gets FULL_CONTROL. - 4 | The AuthenticatedUsers group gets READ access. - \ (authenticated-read) - / Object owner gets FULL_CONTROL. - 5 | Bucket owner gets READ access. - | If you specify this canned ACL when creating a bucket, Amazon S3 ignores it. - \ (bucket-owner-read) - / Both the object owner and the bucket owner get FULL_CONTROL over the object. - 6 | If you specify this canned ACL when creating a bucket, Amazon S3 ignores it. - \ (bucket-owner-full-control) -acl> 1 - -Edit advanced config? -y) Yes -n) No (default) -y/n> n - -Configuration complete. -Options: -- type: s3 -- provider: OVHcloud -- access_key_id: my_access -- secret_access_key: my_secret -- region: rbx -- endpoint: s3.rbx.io.cloud.ovh.net -- acl: private -Keep this "ovhcloud-rbx" remote? -y) Yes this is OK (default) -e) Edit this remote -d) Delete this remote -y/e/d> y -``` - -Your configuration file should now look like this: - -``` -[ovhcloud-rbx] -type = s3 -provider = OVHcloud -access_key_id = my_access -secret_access_key = my_secret -region = rbx -endpoint = s3.rbx.io.cloud.ovh.net -acl = private -``` - - -### Qiniu Cloud Object Storage (Kodo) {#qiniu} - -[Qiniu Cloud Object Storage (Kodo)](https://www.qiniu.com/en/products/kodo), a completely independent-researched core technology which is proven by repeated customer experience has occupied absolute leading market leader position. Kodo can be widely applied to mass data management. - -To configure access to Qiniu Kodo, follow the steps below: - -1. Run `rclone config` and select `n` for a new remote. - -``` -rclone config -No remotes found, make a new one? -n) New remote -s) Set configuration password -q) Quit config -n/s/q> n -``` - -2. Give the name of the configuration. For example, name it 'qiniu'. - -``` -name> qiniu -``` - -3. Select `s3` storage. - -``` -Choose a number from below, or type in your own value -[snip] -XX / Amazon S3 Compliant Storage Providers including AWS, ... - \ (s3) -[snip] -Storage> s3 -``` - -4. Select `Qiniu` provider. -``` -Choose a number from below, or type in your own value -1 / Amazon Web Services (AWS) S3 - \ "AWS" -[snip] -22 / Qiniu Object Storage (Kodo) - \ (Qiniu) -[snip] -provider> Qiniu -``` - -5. Enter your SecretId and SecretKey of Qiniu Kodo. - -``` -Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). -Only applies if access_key_id and secret_access_key is blank. -Enter a boolean value (true or false). Press Enter for the default ("false"). -Choose a number from below, or type in your own value - 1 / Enter AWS credentials in the next step - \ "false" - 2 / Get AWS credentials from the environment (env vars or IAM) - \ "true" -env_auth> 1 -AWS Access Key ID. 
-Leave blank for anonymous access or runtime credentials. -Enter a string value. Press Enter for the default (""). -access_key_id> AKIDxxxxxxxxxx -AWS Secret Access Key (password) -Leave blank for anonymous access or runtime credentials. -Enter a string value. Press Enter for the default (""). -secret_access_key> xxxxxxxxxxx -``` - -6. Select endpoint for Qiniu Kodo. This is the standard endpoint for different region. - -``` - / The default endpoint - a good choice if you are unsure. - 1 | East China Region 1. - | Needs location constraint cn-east-1. - \ (cn-east-1) - / East China Region 2. - 2 | Needs location constraint cn-east-2. - \ (cn-east-2) - / North China Region 1. - 3 | Needs location constraint cn-north-1. - \ (cn-north-1) - / South China Region 1. - 4 | Needs location constraint cn-south-1. - \ (cn-south-1) - / North America Region. - 5 | Needs location constraint us-north-1. - \ (us-north-1) - / Southeast Asia Region 1. - 6 | Needs location constraint ap-southeast-1. - \ (ap-southeast-1) - / Northeast Asia Region 1. - 7 | Needs location constraint ap-northeast-1. - \ (ap-northeast-1) -[snip] -endpoint> 1 - -Option endpoint. -Endpoint for Qiniu Object Storage. -Choose a number from below, or type in your own value. -Press Enter to leave empty. - 1 / East China Endpoint 1 - \ (s3-cn-east-1.qiniucs.com) - 2 / East China Endpoint 2 - \ (s3-cn-east-2.qiniucs.com) - 3 / North China Endpoint 1 - \ (s3-cn-north-1.qiniucs.com) - 4 / South China Endpoint 1 - \ (s3-cn-south-1.qiniucs.com) - 5 / North America Endpoint 1 - \ (s3-us-north-1.qiniucs.com) - 6 / Southeast Asia Endpoint 1 - \ (s3-ap-southeast-1.qiniucs.com) - 7 / Northeast Asia Endpoint 1 - \ (s3-ap-northeast-1.qiniucs.com) -endpoint> 1 - -Option location_constraint. -Location constraint - must be set to match the Region. -Used when creating buckets only. -Choose a number from below, or type in your own value. -Press Enter to leave empty. - 1 / East China Region 1 - \ (cn-east-1) - 2 / East China Region 2 - \ (cn-east-2) - 3 / North China Region 1 - \ (cn-north-1) - 4 / South China Region 1 - \ (cn-south-1) - 5 / North America Region 1 - \ (us-north-1) - 6 / Southeast Asia Region 1 - \ (ap-southeast-1) - 7 / Northeast Asia Region 1 - \ (ap-northeast-1) -location_constraint> 1 -``` - -7. Choose acl and storage class. - -``` -Note that this ACL is applied when server-side copying objects as S3 -doesn't copy the ACL from the source but rather writes a fresh one. -Enter a string value. Press Enter for the default (""). -Choose a number from below, or type in your own value - / Owner gets FULL_CONTROL. - 1 | No one else has access rights (default). - \ (private) - / Owner gets FULL_CONTROL. - 2 | The AllUsers group gets READ access. - \ (public-read) -[snip] -acl> 2 -The storage class to use when storing new objects in Tencent COS. -Enter a string value. Press Enter for the default (""). -Choose a number from below, or type in your own value - 1 / Standard storage class - \ (STANDARD) - 2 / Infrequent access storage mode - \ (LINE) - 3 / Archive storage mode - \ (GLACIER) - 4 / Deep archive storage mode - \ (DEEP_ARCHIVE) -[snip] -storage_class> 1 -Edit advanced config? 
(y/n) -y) Yes -n) No (default) -y/n> n -Remote config --------------------- -[qiniu] -- type: s3 -- provider: Qiniu -- access_key_id: xxx -- secret_access_key: xxx -- region: cn-east-1 -- endpoint: s3-cn-east-1.qiniucs.com -- location_constraint: cn-east-1 -- acl: public-read -- storage_class: STANDARD --------------------- -y) Yes this is OK (default) -e) Edit this remote -d) Delete this remote -y/e/d> y -Current remotes: - -Name Type -==== ==== -qiniu s3 -``` - -### RackCorp {#RackCorp} - -[RackCorp Object Storage](https://www.rackcorp.com/storage/s3storage) is an S3 compatible object storage platform from your friendly cloud provider RackCorp. -The service is fast, reliable, well priced and located in many strategic locations unserviced by others, to ensure you can maintain data sovereignty. - -Before you can use RackCorp Object Storage, you'll need to "[sign up](https://www.rackcorp.com/signup)" for an account on our "[portal](https://portal.rackcorp.com)". -Next you can create an `access key`, a `secret key` and `buckets`, in your location of choice with ease. -These details are required for the next steps of configuration, when `rclone config` asks for your `access_key_id` and `secret_access_key`. - -Your config should end up looking a bit like this: - -``` -[RCS3-demo-config] -type = s3 -provider = RackCorp -env_auth = true -access_key_id = YOURACCESSKEY -secret_access_key = YOURSECRETACCESSKEY -region = au-nsw -endpoint = s3.rackcorp.com -location_constraint = au-nsw -``` - -### Rclone Serve S3 {#rclone} - -Rclone can serve any remote over the S3 protocol. For details see the -[rclone serve s3](/commands/rclone_serve_s3/) documentation. - -For example, to serve `remote:path` over s3, run the server like this: - -``` -rclone serve s3 --auth-key ACCESS_KEY_ID,SECRET_ACCESS_KEY remote:path -``` - -This will be compatible with an rclone remote which is defined like this: - -``` -[serves3] -type = s3 -provider = Rclone -endpoint = http://127.0.0.1:8080/ -access_key_id = ACCESS_KEY_ID -secret_access_key = SECRET_ACCESS_KEY -use_multipart_uploads = false -``` - -Note that setting `use_multipart_uploads = false` is to work around -[a bug](/commands/rclone_serve_s3/#bugs) which will be fixed in due course. - -### Scaleway - -[Scaleway](https://www.scaleway.com/object-storage/) The Object Storage platform allows you to store anything from backups, logs and web assets to documents and photos. -Files can be dropped from the Scaleway console or transferred through our API and CLI or using any S3-compatible tool. - -Scaleway provides an S3 interface which can be configured for use with rclone like this: - -``` -[scaleway] -type = s3 -provider = Scaleway -env_auth = false -endpoint = s3.nl-ams.scw.cloud -access_key_id = SCWXXXXXXXXXXXXXX -secret_access_key = 1111111-2222-3333-44444-55555555555555 -region = nl-ams -location_constraint = nl-ams -acl = private -upload_cutoff = 5M -chunk_size = 5M -copy_cutoff = 5M -``` - -[Scaleway Glacier](https://www.scaleway.com/en/glacier-cold-storage/) is the low-cost S3 Glacier alternative from Scaleway and it works the same way as on S3 by accepting the "GLACIER" `storage_class`. -So you can configure your remote with the `storage_class = GLACIER` option to upload directly to Scaleway Glacier. 
Don't forget that in this state you can't read files back after, you will need to restore them to "STANDARD" storage_class first before being able to read them (see "restore" section above) - -### Seagate Lyve Cloud {#lyve} - -[Seagate Lyve Cloud](https://www.seagate.com/gb/en/services/cloud/storage/) is an S3 -compatible object storage platform from [Seagate](https://seagate.com/) intended for enterprise use. - -Here is a config run through for a remote called `remote` - you may -choose a different name of course. Note that to create an access key -and secret key you will need to create a service account first. - -``` -$ rclone config -No remotes found, make a new one? -n) New remote -s) Set configuration password -q) Quit config -n/s/q> n -name> remote -``` - -Choose `s3` backend - -``` -Type of storage to configure. -Choose a number from below, or type in your own value. -[snip] -XX / Amazon S3 Compliant Storage Providers including AWS, ... - \ (s3) -[snip] -Storage> s3 -``` - -Choose `LyveCloud` as S3 provider - -``` -Choose your S3 provider. -Choose a number from below, or type in your own value. -Press Enter to leave empty. -[snip] -XX / Seagate Lyve Cloud - \ (LyveCloud) -[snip] -provider> LyveCloud -``` - -Take the default (just press enter) to enter access key and secret in the config file. - -``` -Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). -Only applies if access_key_id and secret_access_key is blank. -Choose a number from below, or type in your own boolean value (true or false). -Press Enter for the default (false). - 1 / Enter AWS credentials in the next step. - \ (false) - 2 / Get AWS credentials from the environment (env vars or IAM). - \ (true) -env_auth> -``` - -``` -AWS Access Key ID. -Leave blank for anonymous access or runtime credentials. -Enter a value. Press Enter to leave empty. -access_key_id> XXX -``` - -``` -AWS Secret Access Key (password). -Leave blank for anonymous access or runtime credentials. -Enter a value. Press Enter to leave empty. -secret_access_key> YYY -``` - -Leave region blank - -``` -Region to connect to. -Leave blank if you are using an S3 clone and you don't have a region. -Choose a number from below, or type in your own value. -Press Enter to leave empty. - / Use this if unsure. - 1 | Will use v4 signatures and an empty region. - \ () - / Use this only if v4 signatures don't work. - 2 | E.g. pre Jewel/v10 CEPH. - \ (other-v2-signature) -region> -``` - -Enter your Lyve Cloud endpoint. This field cannot be kept empty. - -``` -Endpoint for Lyve Cloud S3 API. -Required when using an S3 clone. -Please type in your LyveCloud endpoint. -Examples: -- s3.us-west-1.{account_name}.lyve.seagate.com (US West 1 - California) -- s3.eu-west-1.{account_name}.lyve.seagate.com (US West 1 - Ireland) -Enter a value. -endpoint> s3.us-west-1.global.lyve.seagate.com -``` - -Leave location constraint blank - -``` -Location constraint - must be set to match the Region. -Leave blank if not sure. Used when creating buckets only. -Enter a value. Press Enter to leave empty. -location_constraint> -``` - -Choose default ACL (`private`). - -``` -Canned ACL used when creating buckets and storing or copying objects. -This ACL is used for creating objects and if bucket_acl isn't set, for creating buckets too. 
-For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl -Note that this ACL is applied when server-side copying objects as S3 -doesn't copy the ACL from the source but rather writes a fresh one. -Choose a number from below, or type in your own value. -Press Enter to leave empty. - / Owner gets FULL_CONTROL. - 1 | No one else has access rights (default). - \ (private) -[snip] -acl> -``` - -And the config file should end up looking like this: - -``` -[remote] -type = s3 -provider = LyveCloud -access_key_id = XXX -secret_access_key = YYY -endpoint = s3.us-east-1.lyvecloud.seagate.com -``` - -### SeaweedFS - -[SeaweedFS](https://github.com/chrislusf/seaweedfs/) is a distributed storage system for -blobs, objects, files, and data lake, with O(1) disk seek and a scalable file metadata store. -It has an S3 compatible object storage interface. SeaweedFS can also act as a -[gateway to remote S3 compatible object store](https://github.com/chrislusf/seaweedfs/wiki/Gateway-to-Remote-Object-Storage) -to cache data and metadata with asynchronous write back, for fast local speed and minimize access cost. - -Assuming the SeaweedFS are configured with `weed shell` as such: -``` -> s3.bucket.create -name foo -> s3.configure -access_key=any -secret_key=any -buckets=foo -user=me -actions=Read,Write,List,Tagging,Admin -apply -{ - "identities": [ - { - "name": "me", - "credentials": [ - { - "accessKey": "any", - "secretKey": "any" - } - ], - "actions": [ - "Read:foo", - "Write:foo", - "List:foo", - "Tagging:foo", - "Admin:foo" - ] - } - ] -} -``` - -To use rclone with SeaweedFS, above configuration should end up with something like this in -your config: - -``` -[seaweedfs_s3] -type = s3 -provider = SeaweedFS -access_key_id = any -secret_access_key = any -endpoint = localhost:8333 -``` - -So once set up, for example to copy files into a bucket - -``` -rclone copy /path/to/files seaweedfs_s3:foo -``` - -### Selectel - -[Selectel Cloud Storage](https://selectel.ru/services/cloud/storage/) -is an S3 compatible storage system which features triple redundancy -storage, automatic scaling, high availability and a comprehensive IAM -system. - -Selectel have a section on their website for [configuring -rclone](https://docs.selectel.ru/en/cloud/object-storage/tools/rclone/) -which shows how to make the right API keys. - -From rclone v1.69 Selectel is a supported operator - please choose the -`Selectel` provider type. - -Note that you should use "vHosted" access for the buckets (which is -the recommended default), not "path style". - -You can use `rclone config` to make a new provider like this - -``` -No remotes found, make a new one? -n) New remote -s) Set configuration password -q) Quit config -n/s/q> n - -Enter name for new remote. -name> selectel - -Option Storage. -Type of storage to configure. -Choose a number from below, or type in your own value. -[snip] -XX / Amazon S3 Compliant Storage Providers including ..., Selectel, ... - \ (s3) -[snip] -Storage> s3 - -Option provider. -Choose your S3 provider. -Choose a number from below, or type in your own value. -Press Enter to leave empty. -[snip] -XX / Selectel Object Storage - \ (Selectel) -[snip] -provider> Selectel - -Option env_auth. -Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). -Only applies if access_key_id and secret_access_key is blank. -Choose a number from below, or type in your own boolean value (true or false). -Press Enter for the default (false). 
- 1 / Enter AWS credentials in the next step. - \ (false) - 2 / Get AWS credentials from the environment (env vars or IAM). - \ (true) -env_auth> 1 - -Option access_key_id. -AWS Access Key ID. -Leave blank for anonymous access or runtime credentials. -Enter a value. Press Enter to leave empty. -access_key_id> ACCESS_KEY - -Option secret_access_key. -AWS Secret Access Key (password). -Leave blank for anonymous access or runtime credentials. -Enter a value. Press Enter to leave empty. -secret_access_key> SECRET_ACCESS_KEY - -Option region. -Region where your data stored. -Choose a number from below, or type in your own value. -Press Enter to leave empty. - 1 / St. Petersburg - \ (ru-1) -region> 1 - -Option endpoint. -Endpoint for Selectel Object Storage. -Choose a number from below, or type in your own value. -Press Enter to leave empty. - 1 / Saint Petersburg - \ (s3.ru-1.storage.selcloud.ru) -endpoint> 1 - -Edit advanced config? -y) Yes -n) No (default) -y/n> n - -Configuration complete. -Options: -- type: s3 -- provider: Selectel -- access_key_id: ACCESS_KEY -- secret_access_key: SECRET_ACCESS_KEY -- region: ru-1 -- endpoint: s3.ru-1.storage.selcloud.ru -Keep this "selectel" remote? -y) Yes this is OK (default) -e) Edit this remote -d) Delete this remote -y/e/d> y -``` - -And your config should end up looking like this: - -``` -[selectel] -type = s3 -provider = Selectel -access_key_id = ACCESS_KEY -secret_access_key = SECRET_ACCESS_KEY -region = ru-1 -endpoint = s3.ru-1.storage.selcloud.ru -``` - -### Wasabi - -[Wasabi](https://wasabi.com) is a cloud-based object storage service for a -broad range of applications and use cases. Wasabi is designed for -individuals and organizations that require a high-performance, -reliable, and secure data storage infrastructure at minimal cost. - -Wasabi provides an S3 interface which can be configured for use with -rclone like this. - -``` -No remotes found, make a new one? -n) New remote -s) Set configuration password -n/s> n -name> wasabi -Type of storage to configure. -Choose a number from below, or type in your own value -[snip] -XX / Amazon S3 (also Dreamhost, Ceph, ChinaMobile, ArvanCloud, Minio, Liara) - \ "s3" -[snip] -Storage> s3 -Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). Only applies if access_key_id and secret_access_key is blank. -Choose a number from below, or type in your own value - 1 / Enter AWS credentials in the next step - \ "false" - 2 / Get AWS credentials from the environment (env vars or IAM) - \ "true" -env_auth> 1 -AWS Access Key ID - leave blank for anonymous access or runtime credentials. -access_key_id> YOURACCESSKEY -AWS Secret Access Key (password) - leave blank for anonymous access or runtime credentials. -secret_access_key> YOURSECRETACCESSKEY -Region to connect to. -Choose a number from below, or type in your own value - / The default endpoint - a good choice if you are unsure. - 1 | US Region, Northern Virginia, or Pacific Northwest. - | Leave location constraint empty. - \ "us-east-1" -[snip] -region> us-east-1 -Endpoint for S3 API. -Leave blank if using AWS to use the default endpoint for the region. -Specify if using an S3 clone such as Ceph. -endpoint> s3.wasabisys.com -Location constraint - must be set to match the Region. Used when creating buckets only. -Choose a number from below, or type in your own value - 1 / Empty for US Region, Northern Virginia, or Pacific Northwest. 
- \ "" -[snip] -location_constraint> -Canned ACL used when creating buckets and/or storing objects in S3. -For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl -Choose a number from below, or type in your own value - 1 / Owner gets FULL_CONTROL. No one else has access rights (default). - \ "private" -[snip] -acl> -The server-side encryption algorithm used when storing this object in S3. -Choose a number from below, or type in your own value - 1 / None - \ "" - 2 / AES256 - \ "AES256" -server_side_encryption> -The storage class to use when storing objects in S3. -Choose a number from below, or type in your own value - 1 / Default - \ "" - 2 / Standard storage class - \ "STANDARD" - 3 / Reduced redundancy storage class - \ "REDUCED_REDUNDANCY" - 4 / Standard Infrequent Access storage class - \ "STANDARD_IA" -storage_class> -Remote config --------------------- -[wasabi] -env_auth = false -access_key_id = YOURACCESSKEY -secret_access_key = YOURSECRETACCESSKEY -region = us-east-1 -endpoint = s3.wasabisys.com -location_constraint = -acl = -server_side_encryption = -storage_class = --------------------- -y) Yes this is OK -e) Edit this remote -d) Delete this remote -y/e/d> y -``` - -This will leave the config file looking like this. - -``` -[wasabi] -type = s3 -provider = Wasabi -env_auth = false -access_key_id = YOURACCESSKEY -secret_access_key = YOURSECRETACCESSKEY -region = -endpoint = s3.wasabisys.com -location_constraint = -acl = -server_side_encryption = -storage_class = -``` - -### Alibaba OSS {#alibaba-oss} - -Here is an example of making an [Alibaba Cloud (Aliyun) OSS](https://www.alibabacloud.com/product/oss/) -configuration. First run: - - rclone config - -This will guide you through an interactive setup process. - -``` -No remotes found, make a new one? -n) New remote -s) Set configuration password -q) Quit config -n/s/q> n -name> oss -Type of storage to configure. -Enter a string value. Press Enter for the default (""). -Choose a number from below, or type in your own value -[snip] -XX / Amazon S3 Compliant Storage Providers including AWS, ... - \ "s3" -[snip] -Storage> s3 -Choose your S3 provider. -Enter a string value. Press Enter for the default (""). -Choose a number from below, or type in your own value - 1 / Amazon Web Services (AWS) S3 - \ "AWS" - 2 / Alibaba Cloud Object Storage System (OSS) formerly Aliyun - \ "Alibaba" - 3 / Ceph Object Storage - \ "Ceph" -[snip] -provider> Alibaba -Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). -Only applies if access_key_id and secret_access_key is blank. -Enter a boolean value (true or false). Press Enter for the default ("false"). -Choose a number from below, or type in your own value - 1 / Enter AWS credentials in the next step - \ "false" - 2 / Get AWS credentials from the environment (env vars or IAM) - \ "true" -env_auth> 1 -AWS Access Key ID. -Leave blank for anonymous access or runtime credentials. -Enter a string value. Press Enter for the default (""). -access_key_id> accesskeyid -AWS Secret Access Key (password) -Leave blank for anonymous access or runtime credentials. -Enter a string value. Press Enter for the default (""). -secret_access_key> secretaccesskey -Endpoint for OSS API. -Enter a string value. Press Enter for the default (""). 
-Choose a number from below, or type in your own value - 1 / East China 1 (Hangzhou) - \ "oss-cn-hangzhou.aliyuncs.com" - 2 / East China 2 (Shanghai) - \ "oss-cn-shanghai.aliyuncs.com" - 3 / North China 1 (Qingdao) - \ "oss-cn-qingdao.aliyuncs.com" -[snip] -endpoint> 1 -Canned ACL used when creating buckets and storing or copying objects. - -Note that this ACL is applied when server-side copying objects as S3 -doesn't copy the ACL from the source but rather writes a fresh one. -Enter a string value. Press Enter for the default (""). -Choose a number from below, or type in your own value - 1 / Owner gets FULL_CONTROL. No one else has access rights (default). - \ "private" - 2 / Owner gets FULL_CONTROL. The AllUsers group gets READ access. - \ "public-read" - / Owner gets FULL_CONTROL. The AllUsers group gets READ and WRITE access. -[snip] -acl> 1 -The storage class to use when storing new objects in OSS. -Enter a string value. Press Enter for the default (""). -Choose a number from below, or type in your own value - 1 / Default - \ "" - 2 / Standard storage class - \ "STANDARD" - 3 / Archive storage mode. - \ "GLACIER" - 4 / Infrequent access storage mode. - \ "STANDARD_IA" -storage_class> 1 -Edit advanced config? (y/n) -y) Yes -n) No -y/n> n -Remote config --------------------- -[oss] -type = s3 -provider = Alibaba -env_auth = false -access_key_id = accesskeyid -secret_access_key = secretaccesskey -endpoint = oss-cn-hangzhou.aliyuncs.com -acl = private -storage_class = Standard --------------------- -y) Yes this is OK -e) Edit this remote -d) Delete this remote -y/e/d> y -``` - -### China Mobile Ecloud Elastic Object Storage (EOS) {#china-mobile-ecloud-eos} - -Here is an example of making an [China Mobile Ecloud Elastic Object Storage (EOS)](https:///ecloud.10086.cn/home/product-introduction/eos/) -configuration. First run: - - rclone config - -This will guide you through an interactive setup process. - -``` -No remotes found, make a new one? -n) New remote -s) Set configuration password -q) Quit config -n/s/q> n -name> ChinaMobile -Option Storage. -Type of storage to configure. -Choose a number from below, or type in your own value. - ... -XX / Amazon S3 Compliant Storage Providers including AWS, ... - \ (s3) - ... -Storage> s3 -Option provider. -Choose your S3 provider. -Choose a number from below, or type in your own value. -Press Enter to leave empty. - ... - 4 / China Mobile Ecloud Elastic Object Storage (EOS) - \ (ChinaMobile) - ... -provider> ChinaMobile -Option env_auth. -Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). -Only applies if access_key_id and secret_access_key is blank. -Choose a number from below, or type in your own boolean value (true or false). -Press Enter for the default (false). - 1 / Enter AWS credentials in the next step. - \ (false) - 2 / Get AWS credentials from the environment (env vars or IAM). - \ (true) -env_auth> -Option access_key_id. -AWS Access Key ID. -Leave blank for anonymous access or runtime credentials. -Enter a value. Press Enter to leave empty. -access_key_id> accesskeyid -Option secret_access_key. -AWS Secret Access Key (password). -Leave blank for anonymous access or runtime credentials. -Enter a value. Press Enter to leave empty. -secret_access_key> secretaccesskey -Option endpoint. -Endpoint for China Mobile Ecloud Elastic Object Storage (EOS) API. -Choose a number from below, or type in your own value. -Press Enter to leave empty. - / The default endpoint - a good choice if you are unsure. 
- 1 | East China (Suzhou) - \ (eos-wuxi-1.cmecloud.cn) - 2 / East China (Jinan) - \ (eos-jinan-1.cmecloud.cn) - 3 / East China (Hangzhou) - \ (eos-ningbo-1.cmecloud.cn) - 4 / East China (Shanghai-1) - \ (eos-shanghai-1.cmecloud.cn) - 5 / Central China (Zhengzhou) - \ (eos-zhengzhou-1.cmecloud.cn) - 6 / Central China (Changsha-1) - \ (eos-hunan-1.cmecloud.cn) - 7 / Central China (Changsha-2) - \ (eos-zhuzhou-1.cmecloud.cn) - 8 / South China (Guangzhou-2) - \ (eos-guangzhou-1.cmecloud.cn) - 9 / South China (Guangzhou-3) - \ (eos-dongguan-1.cmecloud.cn) -10 / North China (Beijing-1) - \ (eos-beijing-1.cmecloud.cn) -11 / North China (Beijing-2) - \ (eos-beijing-2.cmecloud.cn) -12 / North China (Beijing-3) - \ (eos-beijing-4.cmecloud.cn) -13 / North China (Huhehaote) - \ (eos-huhehaote-1.cmecloud.cn) -14 / Southwest China (Chengdu) - \ (eos-chengdu-1.cmecloud.cn) -15 / Southwest China (Chongqing) - \ (eos-chongqing-1.cmecloud.cn) -16 / Southwest China (Guiyang) - \ (eos-guiyang-1.cmecloud.cn) -17 / Nouthwest China (Xian) - \ (eos-xian-1.cmecloud.cn) -18 / Yunnan China (Kunming) - \ (eos-yunnan.cmecloud.cn) -19 / Yunnan China (Kunming-2) - \ (eos-yunnan-2.cmecloud.cn) -20 / Tianjin China (Tianjin) - \ (eos-tianjin-1.cmecloud.cn) -21 / Jilin China (Changchun) - \ (eos-jilin-1.cmecloud.cn) -22 / Hubei China (Xiangyan) - \ (eos-hubei-1.cmecloud.cn) -23 / Jiangxi China (Nanchang) - \ (eos-jiangxi-1.cmecloud.cn) -24 / Gansu China (Lanzhou) - \ (eos-gansu-1.cmecloud.cn) -25 / Shanxi China (Taiyuan) - \ (eos-shanxi-1.cmecloud.cn) -26 / Liaoning China (Shenyang) - \ (eos-liaoning-1.cmecloud.cn) -27 / Hebei China (Shijiazhuang) - \ (eos-hebei-1.cmecloud.cn) -28 / Fujian China (Xiamen) - \ (eos-fujian-1.cmecloud.cn) -29 / Guangxi China (Nanning) - \ (eos-guangxi-1.cmecloud.cn) -30 / Anhui China (Huainan) - \ (eos-anhui-1.cmecloud.cn) -endpoint> 1 -Option location_constraint. -Location constraint - must match endpoint. -Used when creating buckets only. -Choose a number from below, or type in your own value. -Press Enter to leave empty. - 1 / East China (Suzhou) - \ (wuxi1) - 2 / East China (Jinan) - \ (jinan1) - 3 / East China (Hangzhou) - \ (ningbo1) - 4 / East China (Shanghai-1) - \ (shanghai1) - 5 / Central China (Zhengzhou) - \ (zhengzhou1) - 6 / Central China (Changsha-1) - \ (hunan1) - 7 / Central China (Changsha-2) - \ (zhuzhou1) - 8 / South China (Guangzhou-2) - \ (guangzhou1) - 9 / South China (Guangzhou-3) - \ (dongguan1) -10 / North China (Beijing-1) - \ (beijing1) -11 / North China (Beijing-2) - \ (beijing2) -12 / North China (Beijing-3) - \ (beijing4) -13 / North China (Huhehaote) - \ (huhehaote1) -14 / Southwest China (Chengdu) - \ (chengdu1) -15 / Southwest China (Chongqing) - \ (chongqing1) -16 / Southwest China (Guiyang) - \ (guiyang1) -17 / Nouthwest China (Xian) - \ (xian1) -18 / Yunnan China (Kunming) - \ (yunnan) -19 / Yunnan China (Kunming-2) - \ (yunnan2) -20 / Tianjin China (Tianjin) - \ (tianjin1) -21 / Jilin China (Changchun) - \ (jilin1) -22 / Hubei China (Xiangyan) - \ (hubei1) -23 / Jiangxi China (Nanchang) - \ (jiangxi1) -24 / Gansu China (Lanzhou) - \ (gansu1) -25 / Shanxi China (Taiyuan) - \ (shanxi1) -26 / Liaoning China (Shenyang) - \ (liaoning1) -27 / Hebei China (Shijiazhuang) - \ (hebei1) -28 / Fujian China (Xiamen) - \ (fujian1) -29 / Guangxi China (Nanning) - \ (guangxi1) -30 / Anhui China (Huainan) - \ (anhui1) -location_constraint> 1 -Option acl. -Canned ACL used when creating buckets and storing or copying objects. 
-This ACL is used for creating objects and if bucket_acl isn't set, for creating buckets too. -For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl -Note that this ACL is applied when server-side copying objects as S3 -doesn't copy the ACL from the source but rather writes a fresh one. -Choose a number from below, or type in your own value. -Press Enter to leave empty. - / Owner gets FULL_CONTROL. - 1 | No one else has access rights (default). - \ (private) - / Owner gets FULL_CONTROL. - 2 | The AllUsers group gets READ access. - \ (public-read) - / Owner gets FULL_CONTROL. - 3 | The AllUsers group gets READ and WRITE access. - | Granting this on a bucket is generally not recommended. - \ (public-read-write) - / Owner gets FULL_CONTROL. - 4 | The AuthenticatedUsers group gets READ access. - \ (authenticated-read) - / Object owner gets FULL_CONTROL. -acl> private -Option server_side_encryption. -The server-side encryption algorithm used when storing this object in S3. -Choose a number from below, or type in your own value. -Press Enter to leave empty. - 1 / None - \ () - 2 / AES256 - \ (AES256) -server_side_encryption> -Option storage_class. -The storage class to use when storing new objects in ChinaMobile. -Choose a number from below, or type in your own value. -Press Enter to leave empty. - 1 / Default - \ () - 2 / Standard storage class - \ (STANDARD) - 3 / Archive storage mode - \ (GLACIER) - 4 / Infrequent access storage mode - \ (STANDARD_IA) -storage_class> -Edit advanced config? -y) Yes -n) No (default) -y/n> n --------------------- -[ChinaMobile] -type = s3 -provider = ChinaMobile -access_key_id = accesskeyid -secret_access_key = secretaccesskey -endpoint = eos-wuxi-1.cmecloud.cn -location_constraint = wuxi1 -acl = private --------------------- -y) Yes this is OK (default) -e) Edit this remote -d) Delete this remote -y/e/d> y -``` - ### Leviia Cloud Object Storage {#leviia} [Leviia Object Storage](https://www.leviia.com/object-storage/), backup and secure your data in a 100% French cloud, independent of GAFAM.. @@ -5542,239 +4530,71 @@ secret_access_key = XXX endpoint = s3.eu-central-1.s4.mega.io ``` -### ArvanCloud {#arvan-cloud} +### Minio -[ArvanCloud](https://www.arvancloud.com/en/products/cloud-storage) ArvanCloud Object Storage goes beyond the limited traditional file storage. -It gives you access to backup and archived files and allows sharing. -Files like profile image in the app, images sent by users or scanned documents can be stored securely and easily in our Object Storage service. +[Minio](https://minio.io/) is an object storage server built for cloud application developers and devops. -ArvanCloud provides an S3 interface which can be configured for use with -rclone like this. +It is very easy to install and provides an S3 compatible server which can be used by rclone. + +To use it, install Minio following the instructions [here](https://docs.minio.io/docs/minio-quickstart-guide). 
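+
+For a quick local test you could then start the server against an empty
+directory, something like the following (a sketch - the data path is just a
+placeholder, not taken from this document):
+
+```
+minio server /tmp/minio-data
+```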
+ +When it configures itself Minio will print something like this + +``` +Endpoint: http://192.168.1.106:9000 http://172.23.0.1:9000 +AccessKey: USWUXHGYZQYFYFFIT3RE +SecretKey: MOJRH0mkL1IPauahWITSVvyDrQbEEIwljvmxdq03 +Region: us-east-1 +SQS ARNs: arn:minio:sqs:us-east-1:1:redis arn:minio:sqs:us-east-1:2:redis + +Browser Access: + http://192.168.1.106:9000 http://172.23.0.1:9000 + +Command-line Access: https://docs.minio.io/docs/minio-client-quickstart-guide + $ mc config host add myminio http://192.168.1.106:9000 USWUXHGYZQYFYFFIT3RE MOJRH0mkL1IPauahWITSVvyDrQbEEIwljvmxdq03 + +Object API (Amazon S3 compatible): + Go: https://docs.minio.io/docs/golang-client-quickstart-guide + Java: https://docs.minio.io/docs/java-client-quickstart-guide + Python: https://docs.minio.io/docs/python-client-quickstart-guide + JavaScript: https://docs.minio.io/docs/javascript-client-quickstart-guide + .NET: https://docs.minio.io/docs/dotnet-client-quickstart-guide + +Drive Capacity: 26 GiB Free, 165 GiB Total +``` + +These details need to go into `rclone config` like this. Note that it +is important to put the region in as stated above. ``` -No remotes found, make a new one? -n) New remote -s) Set configuration password -n/s> n -name> ArvanCloud -Type of storage to configure. -Choose a number from below, or type in your own value -[snip] -XX / Amazon S3 (also Dreamhost, Ceph, ChinaMobile, ArvanCloud, Liara, Minio) - \ "s3" -[snip] -Storage> s3 -Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). Only applies if access_key_id and secret_access_key is blank. -Choose a number from below, or type in your own value - 1 / Enter AWS credentials in the next step - \ "false" - 2 / Get AWS credentials from the environment (env vars or IAM) - \ "true" env_auth> 1 -AWS Access Key ID - leave blank for anonymous access or runtime credentials. -access_key_id> YOURACCESSKEY -AWS Secret Access Key (password) - leave blank for anonymous access or runtime credentials. -secret_access_key> YOURSECRETACCESSKEY -Region to connect to. -Choose a number from below, or type in your own value - / The default endpoint - a good choice if you are unsure. - 1 | US Region, Northern Virginia, or Pacific Northwest. - | Leave location constraint empty. - \ "us-east-1" -[snip] -region> -Endpoint for S3 API. -Leave blank if using ArvanCloud to use the default endpoint for the region. -Specify if using an S3 clone such as Ceph. -endpoint> s3.arvanstorage.com -Location constraint - must be set to match the Region. Used when creating buckets only. -Choose a number from below, or type in your own value - 1 / Empty for Iran-Tehran Region. - \ "" -[snip] +access_key_id> USWUXHGYZQYFYFFIT3RE +secret_access_key> MOJRH0mkL1IPauahWITSVvyDrQbEEIwljvmxdq03 +region> us-east-1 +endpoint> http://192.168.1.106:9000 location_constraint> -Canned ACL used when creating buckets and/or storing objects in S3. -For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl -Choose a number from below, or type in your own value - 1 / Owner gets FULL_CONTROL. No one else has access rights (default). - \ "private" -[snip] -acl> -The server-side encryption algorithm used when storing this object in S3. -Choose a number from below, or type in your own value - 1 / None - \ "" - 2 / AES256 - \ "AES256" server_side_encryption> -The storage class to use when storing objects in S3. 
-Choose a number from below, or type in your own value - 1 / Default - \ "" - 2 / Standard storage class - \ "STANDARD" -storage_class> -Remote config --------------------- -[ArvanCloud] -env_auth = false -access_key_id = YOURACCESSKEY -secret_access_key = YOURSECRETACCESSKEY -region = ir-thr-at1 -endpoint = s3.arvanstorage.com -location_constraint = -acl = -server_side_encryption = -storage_class = --------------------- -y) Yes this is OK -e) Edit this remote -d) Delete this remote -y/e/d> y ``` -This will leave the config file looking like this. +Which makes the config file look like this ``` -[ArvanCloud] +[minio] type = s3 -provider = ArvanCloud +provider = Minio env_auth = false -access_key_id = YOURACCESSKEY -secret_access_key = YOURSECRETACCESSKEY -region = -endpoint = s3.arvanstorage.com +access_key_id = USWUXHGYZQYFYFFIT3RE +secret_access_key = MOJRH0mkL1IPauahWITSVvyDrQbEEIwljvmxdq03 +region = us-east-1 +endpoint = http://192.168.1.106:9000 location_constraint = -acl = server_side_encryption = -storage_class = ``` -### Tencent COS {#tencent-cos} - -[Tencent Cloud Object Storage (COS)](https://intl.cloud.tencent.com/product/cos) is a distributed storage service offered by Tencent Cloud for unstructured data. It is secure, stable, massive, convenient, low-delay and low-cost. - -To configure access to Tencent COS, follow the steps below: - -1. Run `rclone config` and select `n` for a new remote. +So once set up, for example, to copy files into a bucket ``` -rclone config -No remotes found, make a new one? -n) New remote -s) Set configuration password -q) Quit config -n/s/q> n -``` - -2. Give the name of the configuration. For example, name it 'cos'. - -``` -name> cos -``` - -3. Select `s3` storage. - -``` -Choose a number from below, or type in your own value -[snip] -XX / Amazon S3 Compliant Storage Providers including AWS, ... - \ "s3" -[snip] -Storage> s3 -``` - -4. Select `TencentCOS` provider. -``` -Choose a number from below, or type in your own value -1 / Amazon Web Services (AWS) S3 - \ "AWS" -[snip] -11 / Tencent Cloud Object Storage (COS) - \ "TencentCOS" -[snip] -provider> TencentCOS -``` - -5. Enter your SecretId and SecretKey of Tencent Cloud. - -``` -Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). -Only applies if access_key_id and secret_access_key is blank. -Enter a boolean value (true or false). Press Enter for the default ("false"). -Choose a number from below, or type in your own value - 1 / Enter AWS credentials in the next step - \ "false" - 2 / Get AWS credentials from the environment (env vars or IAM) - \ "true" -env_auth> 1 -AWS Access Key ID. -Leave blank for anonymous access or runtime credentials. -Enter a string value. Press Enter for the default (""). -access_key_id> AKIDxxxxxxxxxx -AWS Secret Access Key (password) -Leave blank for anonymous access or runtime credentials. -Enter a string value. Press Enter for the default (""). -secret_access_key> xxxxxxxxxxx -``` - -6. Select endpoint for Tencent COS. This is the standard endpoint for different region. - -``` - 1 / Beijing Region. - \ "cos.ap-beijing.myqcloud.com" - 2 / Nanjing Region. - \ "cos.ap-nanjing.myqcloud.com" - 3 / Shanghai Region. - \ "cos.ap-shanghai.myqcloud.com" - 4 / Guangzhou Region. - \ "cos.ap-guangzhou.myqcloud.com" -[snip] -endpoint> 4 -``` - -7. Choose acl and storage class. - -``` -Note that this ACL is applied when server-side copying objects as S3 -doesn't copy the ACL from the source but rather writes a fresh one. 
-Enter a string value. Press Enter for the default (""). -Choose a number from below, or type in your own value - 1 / Owner gets Full_CONTROL. No one else has access rights (default). - \ "default" -[snip] -acl> 1 -The storage class to use when storing new objects in Tencent COS. -Enter a string value. Press Enter for the default (""). -Choose a number from below, or type in your own value - 1 / Default - \ "" -[snip] -storage_class> 1 -Edit advanced config? (y/n) -y) Yes -n) No (default) -y/n> n -Remote config --------------------- -[cos] -type = s3 -provider = TencentCOS -env_auth = false -access_key_id = xxx -secret_access_key = xxx -endpoint = cos.ap-guangzhou.myqcloud.com -acl = default --------------------- -y) Yes this is OK (default) -e) Edit this remote -d) Delete this remote -y/e/d> y -Current remotes: - -Name Type -==== ==== -cos s3 +rclone copy /path/to/files minio:bucket ``` ### Netease NOS @@ -5783,6 +4603,368 @@ For Netease NOS configure as per the configurator `rclone config` setting the provider `Netease`. This will automatically set `force_path_style = false` which is necessary for it to run properly. +### Outscale + +[OUTSCALE Object Storage (OOS)](https://en.outscale.com/storage/outscale-object-storage/) is an enterprise-grade, S3-compatible storage service provided by OUTSCALE, a brand of Dassault Systèmes. For more information about OOS, see the [official documentation](https://docs.outscale.com/en/userguide/OUTSCALE-Object-Storage-OOS.html). + +Here is an example of an OOS configuration that you can paste into your rclone configuration file: + +``` +[outscale] +type = s3 +provider = Outscale +env_auth = false +access_key_id = ABCDEFGHIJ0123456789 +secret_access_key = XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX +region = eu-west-2 +endpoint = oos.eu-west-2.outscale.com +acl = private +``` + +You can also run `rclone config` to go through the interactive setup process: + +``` +No remotes found, make a new one? +n) New remote +s) Set configuration password +q) Quit config +n/s/q> n +``` + +``` +Enter name for new remote. +name> outscale +``` + +``` +Option Storage. +Type of storage to configure. +Choose a number from below, or type in your own value. +[snip] + X / Amazon S3 Compliant Storage Providers including AWS, ...Outscale, ...and others + \ (s3) +[snip] +Storage> outscale +``` + +``` +Option provider. +Choose your S3 provider. +Choose a number from below, or type in your own value. +Press Enter to leave empty. +[snip] +XX / OUTSCALE Object Storage (OOS) + \ (Outscale) +[snip] +provider> Outscale +``` + +``` +Option env_auth. +Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). +Only applies if access_key_id and secret_access_key is blank. +Choose a number from below, or type in your own boolean value (true or false). +Press Enter for the default (false). + 1 / Enter AWS credentials in the next step. + \ (false) + 2 / Get AWS credentials from the environment (env vars or IAM). + \ (true) +env_auth> +``` + +``` +Option access_key_id. +AWS Access Key ID. +Leave blank for anonymous access or runtime credentials. +Enter a value. Press Enter to leave empty. +access_key_id> ABCDEFGHIJ0123456789 +``` + +``` +Option secret_access_key. +AWS Secret Access Key (password). +Leave blank for anonymous access or runtime credentials. +Enter a value. Press Enter to leave empty. +secret_access_key> XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX +``` + +``` +Option region. +Region where your bucket will be created and your data stored. 
+Choose a number from below, or type in your own value. +Press Enter to leave empty. + 1 / Paris, France + \ (eu-west-2) + 2 / New Jersey, USA + \ (us-east-2) + 3 / California, USA + \ (us-west-1) + 4 / SecNumCloud, Paris, France + \ (cloudgouv-eu-west-1) + 5 / Tokyo, Japan + \ (ap-northeast-1) +region> 1 +``` + +``` +Option endpoint. +Endpoint for S3 API. +Required when using an S3 clone. +Choose a number from below, or type in your own value. +Press Enter to leave empty. + 1 / Outscale EU West 2 (Paris) + \ (oos.eu-west-2.outscale.com) + 2 / Outscale US east 2 (New Jersey) + \ (oos.us-east-2.outscale.com) + 3 / Outscale EU West 1 (California) + \ (oos.us-west-1.outscale.com) + 4 / Outscale SecNumCloud (Paris) + \ (oos.cloudgouv-eu-west-1.outscale.com) + 5 / Outscale AP Northeast 1 (Japan) + \ (oos.ap-northeast-1.outscale.com) +endpoint> 1 +``` + +``` +Option acl. +Canned ACL used when creating buckets and storing or copying objects. +This ACL is used for creating objects and if bucket_acl isn't set, for creating buckets too. +For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl +Note that this ACL is applied when server-side copying objects as S3 +doesn't copy the ACL from the source but rather writes a fresh one. +If the acl is an empty string then no X-Amz-Acl: header is added and +the default (private) will be used. +Choose a number from below, or type in your own value. +Press Enter to leave empty. + / Owner gets FULL_CONTROL. + 1 | No one else has access rights (default). + \ (private) +[snip] +acl> 1 +``` + +``` +Edit advanced config? +y) Yes +n) No (default) +y/n> n +``` + +``` +Configuration complete. +Options: +- type: s3 +- provider: Outscale +- access_key_id: ABCDEFGHIJ0123456789 +- secret_access_key: XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX +- endpoint: oos.eu-west-2.outscale.com +Keep this "outscale" remote? +y) Yes this is OK (default) +e) Edit this remote +d) Delete this remote +y/e/d> y +``` + +### OVHcloud {#ovhcloud} + +[OVHcloud Object Storage](https://www.ovhcloud.com/en-ie/public-cloud/object-storage/) +is an S3-compatible general-purpose object storage platform available in all OVHcloud regions. +To use the platform, you will need an access key and secret key. To know more about it and how +to interact with the platform, take a look at the [documentation](https://ovh.to/8stqhuo). + +Here is an example of making an OVHcloud Object Storage configuration with `rclone config`: + +``` +No remotes found, make a new one? +n) New remote +s) Set configuration password +q) Quit config +n/s/q> n + +Enter name for new remote. +name> ovhcloud-rbx + +Option Storage. +Type of storage to configure. +Choose a number from below, or type in your own value. +[...] + XX / Amazon S3 Compliant Storage Providers including AWS, Alibaba, ArvanCloud, Ceph, ChinaMobile, Cloudflare, DigitalOcean, Dreamhost, GCS, HuaweiOBS, IBMCOS, IDrive, IONOS, LyveCloud, Leviia, Liara, Linode, Magalu, Minio, Netease, Outscale, OVHcloud, Petabox, RackCorp, Rclone, Scaleway, SeaweedFS, Selectel, StackPath, Storj, Synology, TencentCOS, Wasabi, Qiniu and others + \ (s3) +[...] +Storage> s3 + +Option provider. +Choose your S3 provider. +Choose a number from below, or type in your own value. +Press Enter to leave empty. +[...] +XX / OVHcloud Object Storage + \ (OVHcloud) +[...] +provider> OVHcloud + +Option env_auth. +Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). +Only applies if access_key_id and secret_access_key is blank. 
+Choose a number from below, or type in your own boolean value (true or false). +Press Enter for the default (false). + 1 / Enter AWS credentials in the next step. + \ (false) + 2 / Get AWS credentials from the environment (env vars or IAM). + \ (true) +env_auth> 1 + +Option access_key_id. +AWS Access Key ID. +Leave blank for anonymous access or runtime credentials. +Enter a value. Press Enter to leave empty. +access_key_id> my_access + +Option secret_access_key. +AWS Secret Access Key (password). +Leave blank for anonymous access or runtime credentials. +Enter a value. Press Enter to leave empty. +secret_access_key> my_secret + +Option region. +Region where your bucket will be created and your data stored. +Choose a number from below, or type in your own value. +Press Enter to leave empty. + 1 / Gravelines, France + \ (gra) + 2 / Roubaix, France + \ (rbx) + 3 / Strasbourg, France + \ (sbg) + 4 / Paris, France (3AZ) + \ (eu-west-par) + 5 / Frankfurt, Germany + \ (de) + 6 / London, United Kingdom + \ (uk) + 7 / Warsaw, Poland + \ (waw) + 8 / Beauharnois, Canada + \ (bhs) + 9 / Toronto, Canada + \ (ca-east-tor) +10 / Singapore + \ (sgp) +11 / Sydney, Australia + \ (ap-southeast-syd) +12 / Mumbai, India + \ (ap-south-mum) +13 / Vint Hill, Virginia, USA + \ (us-east-va) +14 / Hillsboro, Oregon, USA + \ (us-west-or) +15 / Roubaix, France (Cold Archive) + \ (rbx-archive) +region> 2 + +Option endpoint. +Endpoint for OVHcloud Object Storage. +Choose a number from below, or type in your own value. +Press Enter to leave empty. + 1 / OVHcloud Gravelines, France + \ (s3.gra.io.cloud.ovh.net) + 2 / OVHcloud Roubaix, France + \ (s3.rbx.io.cloud.ovh.net) + 3 / OVHcloud Strasbourg, France + \ (s3.sbg.io.cloud.ovh.net) + 4 / OVHcloud Paris, France (3AZ) + \ (s3.eu-west-par.io.cloud.ovh.net) + 5 / OVHcloud Frankfurt, Germany + \ (s3.de.io.cloud.ovh.net) + 6 / OVHcloud London, United Kingdom + \ (s3.uk.io.cloud.ovh.net) + 7 / OVHcloud Warsaw, Poland + \ (s3.waw.io.cloud.ovh.net) + 8 / OVHcloud Beauharnois, Canada + \ (s3.bhs.io.cloud.ovh.net) + 9 / OVHcloud Toronto, Canada + \ (s3.ca-east-tor.io.cloud.ovh.net) +10 / OVHcloud Singapore + \ (s3.sgp.io.cloud.ovh.net) +11 / OVHcloud Sydney, Australia + \ (s3.ap-southeast-syd.io.cloud.ovh.net) +12 / OVHcloud Mumbai, India + \ (s3.ap-south-mum.io.cloud.ovh.net) +13 / OVHcloud Vint Hill, Virginia, USA + \ (s3.us-east-va.io.cloud.ovh.us) +14 / OVHcloud Hillsboro, Oregon, USA + \ (s3.us-west-or.io.cloud.ovh.us) +15 / OVHcloud Roubaix, France (Cold Archive) + \ (s3.rbx-archive.io.cloud.ovh.net) +endpoint> 2 + +Option acl. +Canned ACL used when creating buckets and storing or copying objects. +This ACL is used for creating objects and if bucket_acl isn't set, for creating buckets too. +For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl +Note that this ACL is applied when server-side copying objects as S3 +doesn't copy the ACL from the source but rather writes a fresh one. +If the acl is an empty string then no X-Amz-Acl: header is added and +the default (private) will be used. +Choose a number from below, or type in your own value. +Press Enter to leave empty. + / Owner gets FULL_CONTROL. + 1 | No one else has access rights (default). + \ (private) + / Owner gets FULL_CONTROL. + 2 | The AllUsers group gets READ access. + \ (public-read) + / Owner gets FULL_CONTROL. + 3 | The AllUsers group gets READ and WRITE access. + | Granting this on a bucket is generally not recommended. 
+ \ (public-read-write) + / Owner gets FULL_CONTROL. + 4 | The AuthenticatedUsers group gets READ access. + \ (authenticated-read) + / Object owner gets FULL_CONTROL. + 5 | Bucket owner gets READ access. + | If you specify this canned ACL when creating a bucket, Amazon S3 ignores it. + \ (bucket-owner-read) + / Both the object owner and the bucket owner get FULL_CONTROL over the object. + 6 | If you specify this canned ACL when creating a bucket, Amazon S3 ignores it. + \ (bucket-owner-full-control) +acl> 1 + +Edit advanced config? +y) Yes +n) No (default) +y/n> n + +Configuration complete. +Options: +- type: s3 +- provider: OVHcloud +- access_key_id: my_access +- secret_access_key: my_secret +- region: rbx +- endpoint: s3.rbx.io.cloud.ovh.net +- acl: private +Keep this "ovhcloud-rbx" remote? +y) Yes this is OK (default) +e) Edit this remote +d) Delete this remote +y/e/d> y +``` + +Your configuration file should now look like this: + +``` +[ovhcloud-rbx] +type = s3 +provider = OVHcloud +access_key_id = my_access +secret_access_key = my_secret +region = rbx +endpoint = s3.rbx.io.cloud.ovh.net +acl = private +``` + + ### Petabox Here is an example of making a [Petabox](https://petabox.io/) @@ -6056,6 +5238,584 @@ ensure proper DNS configuration: subdomains of the endpoint hostname should reso FlashBlade data VIP. For example, if your endpoint is `https://s3.flashblade.example.com`, then `bucket-name.s3.flashblade.example.com` should also resolve to the data VIP. +### Qiniu Cloud Object Storage (Kodo) {#qiniu} + +[Qiniu Cloud Object Storage (Kodo)](https://www.qiniu.com/en/products/kodo), a completely independent-researched core technology which is proven by repeated customer experience has occupied absolute leading market leader position. Kodo can be widely applied to mass data management. + +To configure access to Qiniu Kodo, follow the steps below: + +1. Run `rclone config` and select `n` for a new remote. + +``` +rclone config +No remotes found, make a new one? +n) New remote +s) Set configuration password +q) Quit config +n/s/q> n +``` + +2. Give the name of the configuration. For example, name it 'qiniu'. + +``` +name> qiniu +``` + +3. Select `s3` storage. + +``` +Choose a number from below, or type in your own value +[snip] +XX / Amazon S3 Compliant Storage Providers including AWS, ... + \ (s3) +[snip] +Storage> s3 +``` + +4. Select `Qiniu` provider. +``` +Choose a number from below, or type in your own value +1 / Amazon Web Services (AWS) S3 + \ "AWS" +[snip] +22 / Qiniu Object Storage (Kodo) + \ (Qiniu) +[snip] +provider> Qiniu +``` + +5. Enter your SecretId and SecretKey of Qiniu Kodo. + +``` +Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). +Only applies if access_key_id and secret_access_key is blank. +Enter a boolean value (true or false). Press Enter for the default ("false"). +Choose a number from below, or type in your own value + 1 / Enter AWS credentials in the next step + \ "false" + 2 / Get AWS credentials from the environment (env vars or IAM) + \ "true" +env_auth> 1 +AWS Access Key ID. +Leave blank for anonymous access or runtime credentials. +Enter a string value. Press Enter for the default (""). +access_key_id> AKIDxxxxxxxxxx +AWS Secret Access Key (password) +Leave blank for anonymous access or runtime credentials. +Enter a string value. Press Enter for the default (""). +secret_access_key> xxxxxxxxxxx +``` + +6. Select endpoint for Qiniu Kodo. This is the standard endpoint for different region. 
+ +``` + / The default endpoint - a good choice if you are unsure. + 1 | East China Region 1. + | Needs location constraint cn-east-1. + \ (cn-east-1) + / East China Region 2. + 2 | Needs location constraint cn-east-2. + \ (cn-east-2) + / North China Region 1. + 3 | Needs location constraint cn-north-1. + \ (cn-north-1) + / South China Region 1. + 4 | Needs location constraint cn-south-1. + \ (cn-south-1) + / North America Region. + 5 | Needs location constraint us-north-1. + \ (us-north-1) + / Southeast Asia Region 1. + 6 | Needs location constraint ap-southeast-1. + \ (ap-southeast-1) + / Northeast Asia Region 1. + 7 | Needs location constraint ap-northeast-1. + \ (ap-northeast-1) +[snip] +endpoint> 1 + +Option endpoint. +Endpoint for Qiniu Object Storage. +Choose a number from below, or type in your own value. +Press Enter to leave empty. + 1 / East China Endpoint 1 + \ (s3-cn-east-1.qiniucs.com) + 2 / East China Endpoint 2 + \ (s3-cn-east-2.qiniucs.com) + 3 / North China Endpoint 1 + \ (s3-cn-north-1.qiniucs.com) + 4 / South China Endpoint 1 + \ (s3-cn-south-1.qiniucs.com) + 5 / North America Endpoint 1 + \ (s3-us-north-1.qiniucs.com) + 6 / Southeast Asia Endpoint 1 + \ (s3-ap-southeast-1.qiniucs.com) + 7 / Northeast Asia Endpoint 1 + \ (s3-ap-northeast-1.qiniucs.com) +endpoint> 1 + +Option location_constraint. +Location constraint - must be set to match the Region. +Used when creating buckets only. +Choose a number from below, or type in your own value. +Press Enter to leave empty. + 1 / East China Region 1 + \ (cn-east-1) + 2 / East China Region 2 + \ (cn-east-2) + 3 / North China Region 1 + \ (cn-north-1) + 4 / South China Region 1 + \ (cn-south-1) + 5 / North America Region 1 + \ (us-north-1) + 6 / Southeast Asia Region 1 + \ (ap-southeast-1) + 7 / Northeast Asia Region 1 + \ (ap-northeast-1) +location_constraint> 1 +``` + +7. Choose acl and storage class. + +``` +Note that this ACL is applied when server-side copying objects as S3 +doesn't copy the ACL from the source but rather writes a fresh one. +Enter a string value. Press Enter for the default (""). +Choose a number from below, or type in your own value + / Owner gets FULL_CONTROL. + 1 | No one else has access rights (default). + \ (private) + / Owner gets FULL_CONTROL. + 2 | The AllUsers group gets READ access. + \ (public-read) +[snip] +acl> 2 +The storage class to use when storing new objects in Tencent COS. +Enter a string value. Press Enter for the default (""). +Choose a number from below, or type in your own value + 1 / Standard storage class + \ (STANDARD) + 2 / Infrequent access storage mode + \ (LINE) + 3 / Archive storage mode + \ (GLACIER) + 4 / Deep archive storage mode + \ (DEEP_ARCHIVE) +[snip] +storage_class> 1 +Edit advanced config? (y/n) +y) Yes +n) No (default) +y/n> n +Remote config +-------------------- +[qiniu] +- type: s3 +- provider: Qiniu +- access_key_id: xxx +- secret_access_key: xxx +- region: cn-east-1 +- endpoint: s3-cn-east-1.qiniucs.com +- location_constraint: cn-east-1 +- acl: public-read +- storage_class: STANDARD +-------------------- +y) Yes this is OK (default) +e) Edit this remote +d) Delete this remote +y/e/d> y +Current remotes: + +Name Type +==== ==== +qiniu s3 +``` + +### RackCorp {#RackCorp} + +[RackCorp Object Storage](https://www.rackcorp.com/storage/s3storage) is an S3 compatible object storage platform from your friendly cloud provider RackCorp. 
+The service is fast, reliable, well priced and located in many strategic locations unserviced by others, so you can maintain data sovereignty.
+
+Before you can use RackCorp Object Storage, you'll need to [sign up](https://www.rackcorp.com/signup) for an account on our [portal](https://portal.rackcorp.com).
+Next you can create an `access key`, a `secret key` and `buckets` in your location of choice.
+These details are required for the next steps of configuration, when `rclone config` asks for your `access_key_id` and `secret_access_key`.
+
+Your config should end up looking a bit like this:
+
+```
+[RCS3-demo-config]
+type = s3
+provider = RackCorp
+env_auth = true
+access_key_id = YOURACCESSKEY
+secret_access_key = YOURSECRETACCESSKEY
+region = au-nsw
+endpoint = s3.rackcorp.com
+location_constraint = au-nsw
+```
+
+### Rclone Serve S3 {#rclone}
+
+Rclone can serve any remote over the S3 protocol. For details see the
+[rclone serve s3](/commands/rclone_serve_s3/) documentation.
+
+For example, to serve `remote:path` over s3, run the server like this:
+
+```
+rclone serve s3 --auth-key ACCESS_KEY_ID,SECRET_ACCESS_KEY remote:path
+```
+
+This will be compatible with an rclone remote which is defined like this:
+
+```
+[serves3]
+type = s3
+provider = Rclone
+endpoint = http://127.0.0.1:8080/
+access_key_id = ACCESS_KEY_ID
+secret_access_key = SECRET_ACCESS_KEY
+use_multipart_uploads = false
+```
+
+Note that setting `use_multipart_uploads = false` is to work around
+[a bug](/commands/rclone_serve_s3/#bugs) which will be fixed in due course.
+
+### Scaleway
+
+[Scaleway](https://www.scaleway.com/object-storage/) Object Storage allows you to store anything from backups, logs and web assets to documents and photos.
+Files can be dropped from the Scaleway console or transferred through the API and CLI, or using any S3-compatible tool.
+
+Scaleway provides an S3 interface which can be configured for use with rclone like this:
+
+```
+[scaleway]
+type = s3
+provider = Scaleway
+env_auth = false
+endpoint = s3.nl-ams.scw.cloud
+access_key_id = SCWXXXXXXXXXXXXXX
+secret_access_key = 1111111-2222-3333-44444-55555555555555
+region = nl-ams
+location_constraint = nl-ams
+acl = private
+upload_cutoff = 5M
+chunk_size = 5M
+copy_cutoff = 5M
+```
+
+[Scaleway Glacier](https://www.scaleway.com/en/glacier-cold-storage/) is the low-cost S3 Glacier alternative from Scaleway and it works the same way as on S3 by accepting the "GLACIER" `storage_class`.
+So you can configure your remote with the `storage_class = GLACIER` option to upload directly to Scaleway Glacier. Don't forget that in this state you can't read files back; you will need to restore them to the "STANDARD" storage class first before being able to read them (see the "restore" section above).
+
+### Seagate Lyve Cloud {#lyve}
+
+[Seagate Lyve Cloud](https://www.seagate.com/gb/en/services/cloud/storage/) is an S3
+compatible object storage platform from [Seagate](https://seagate.com/) intended for enterprise use.
+
+Here is a config run through for a remote called `remote` - you may
+choose a different name of course. Note that to create an access key
+and secret key you will need to create a service account first.
+
+```
+$ rclone config
+No remotes found, make a new one?
+n) New remote
+s) Set configuration password
+q) Quit config
+n/s/q> n
+name> remote
+```
+
+Choose `s3` backend
+
+```
+Type of storage to configure.
+Choose a number from below, or type in your own value.
+[snip] +XX / Amazon S3 Compliant Storage Providers including AWS, ... + \ (s3) +[snip] +Storage> s3 +``` + +Choose `LyveCloud` as S3 provider + +``` +Choose your S3 provider. +Choose a number from below, or type in your own value. +Press Enter to leave empty. +[snip] +XX / Seagate Lyve Cloud + \ (LyveCloud) +[snip] +provider> LyveCloud +``` + +Take the default (just press enter) to enter access key and secret in the config file. + +``` +Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). +Only applies if access_key_id and secret_access_key is blank. +Choose a number from below, or type in your own boolean value (true or false). +Press Enter for the default (false). + 1 / Enter AWS credentials in the next step. + \ (false) + 2 / Get AWS credentials from the environment (env vars or IAM). + \ (true) +env_auth> +``` + +``` +AWS Access Key ID. +Leave blank for anonymous access or runtime credentials. +Enter a value. Press Enter to leave empty. +access_key_id> XXX +``` + +``` +AWS Secret Access Key (password). +Leave blank for anonymous access or runtime credentials. +Enter a value. Press Enter to leave empty. +secret_access_key> YYY +``` + +Leave region blank + +``` +Region to connect to. +Leave blank if you are using an S3 clone and you don't have a region. +Choose a number from below, or type in your own value. +Press Enter to leave empty. + / Use this if unsure. + 1 | Will use v4 signatures and an empty region. + \ () + / Use this only if v4 signatures don't work. + 2 | E.g. pre Jewel/v10 CEPH. + \ (other-v2-signature) +region> +``` + +Enter your Lyve Cloud endpoint. This field cannot be kept empty. + +``` +Endpoint for Lyve Cloud S3 API. +Required when using an S3 clone. +Please type in your LyveCloud endpoint. +Examples: +- s3.us-west-1.{account_name}.lyve.seagate.com (US West 1 - California) +- s3.eu-west-1.{account_name}.lyve.seagate.com (US West 1 - Ireland) +Enter a value. +endpoint> s3.us-west-1.global.lyve.seagate.com +``` + +Leave location constraint blank + +``` +Location constraint - must be set to match the Region. +Leave blank if not sure. Used when creating buckets only. +Enter a value. Press Enter to leave empty. +location_constraint> +``` + +Choose default ACL (`private`). + +``` +Canned ACL used when creating buckets and storing or copying objects. +This ACL is used for creating objects and if bucket_acl isn't set, for creating buckets too. +For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl +Note that this ACL is applied when server-side copying objects as S3 +doesn't copy the ACL from the source but rather writes a fresh one. +Choose a number from below, or type in your own value. +Press Enter to leave empty. + / Owner gets FULL_CONTROL. + 1 | No one else has access rights (default). + \ (private) +[snip] +acl> +``` + +And the config file should end up looking like this: + +``` +[remote] +type = s3 +provider = LyveCloud +access_key_id = XXX +secret_access_key = YYY +endpoint = s3.us-east-1.lyvecloud.seagate.com +``` + +### SeaweedFS + +[SeaweedFS](https://github.com/chrislusf/seaweedfs/) is a distributed storage system for +blobs, objects, files, and data lake, with O(1) disk seek and a scalable file metadata store. +It has an S3 compatible object storage interface. 
SeaweedFS can also act as a
+[gateway to remote S3 compatible object store](https://github.com/chrislusf/seaweedfs/wiki/Gateway-to-Remote-Object-Storage)
+to cache data and metadata with asynchronous write back, for fast local access and minimal access cost.
+
+Assuming SeaweedFS is configured with `weed shell` as follows:
+```
+> s3.bucket.create -name foo
+> s3.configure -access_key=any -secret_key=any -buckets=foo -user=me -actions=Read,Write,List,Tagging,Admin -apply
+{
+  "identities": [
+    {
+      "name": "me",
+      "credentials": [
+        {
+          "accessKey": "any",
+          "secretKey": "any"
+        }
+      ],
+      "actions": [
+        "Read:foo",
+        "Write:foo",
+        "List:foo",
+        "Tagging:foo",
+        "Admin:foo"
+      ]
+    }
+  ]
+}
+```
+
+To use rclone with SeaweedFS, the above configuration should end up with something like this in
+your config:
+
+```
+[seaweedfs_s3]
+type = s3
+provider = SeaweedFS
+access_key_id = any
+secret_access_key = any
+endpoint = localhost:8333
+```
+
+So once set up, for example, to copy files into a bucket
+
+```
+rclone copy /path/to/files seaweedfs_s3:foo
+```
+
+### Selectel
+
+[Selectel Cloud Storage](https://selectel.ru/services/cloud/storage/)
+is an S3 compatible storage system which features triple redundancy
+storage, automatic scaling, high availability and a comprehensive IAM
+system.
+
+Selectel have a section on their website for [configuring
+rclone](https://docs.selectel.ru/en/cloud/object-storage/tools/rclone/)
+which shows how to make the right API keys.
+
+From rclone v1.69 Selectel is a supported provider - please choose the
+`Selectel` provider type.
+
+Note that you should use "vHosted" access for the buckets (which is
+the recommended default), not "path style".
+
+You can use `rclone config` to make a new remote like this
+
+```
+No remotes found, make a new one?
+n) New remote
+s) Set configuration password
+q) Quit config
+n/s/q> n
+
+Enter name for new remote.
+name> selectel
+
+Option Storage.
+Type of storage to configure.
+Choose a number from below, or type in your own value.
+[snip]
+XX / Amazon S3 Compliant Storage Providers including ..., Selectel, ...
+   \ (s3)
+[snip]
+Storage> s3
+
+Option provider.
+Choose your S3 provider.
+Choose a number from below, or type in your own value.
+Press Enter to leave empty.
+[snip]
+XX / Selectel Object Storage
+   \ (Selectel)
+[snip]
+provider> Selectel
+
+Option env_auth.
+Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
+Only applies if access_key_id and secret_access_key is blank.
+Choose a number from below, or type in your own boolean value (true or false).
+Press Enter for the default (false).
+ 1 / Enter AWS credentials in the next step.
+   \ (false)
+ 2 / Get AWS credentials from the environment (env vars or IAM).
+   \ (true)
+env_auth> 1
+
+Option access_key_id.
+AWS Access Key ID.
+Leave blank for anonymous access or runtime credentials.
+Enter a value. Press Enter to leave empty.
+access_key_id> ACCESS_KEY
+
+Option secret_access_key.
+AWS Secret Access Key (password).
+Leave blank for anonymous access or runtime credentials.
+Enter a value. Press Enter to leave empty.
+secret_access_key> SECRET_ACCESS_KEY
+
+Option region.
+Region where your data stored.
+Choose a number from below, or type in your own value.
+Press Enter to leave empty.
+ 1 / St. Petersburg
+   \ (ru-1)
+region> 1
+
+Option endpoint.
+Endpoint for Selectel Object Storage.
+Choose a number from below, or type in your own value.
+Press Enter to leave empty.
+ 1 / Saint Petersburg + \ (s3.ru-1.storage.selcloud.ru) +endpoint> 1 + +Edit advanced config? +y) Yes +n) No (default) +y/n> n + +Configuration complete. +Options: +- type: s3 +- provider: Selectel +- access_key_id: ACCESS_KEY +- secret_access_key: SECRET_ACCESS_KEY +- region: ru-1 +- endpoint: s3.ru-1.storage.selcloud.ru +Keep this "selectel" remote? +y) Yes this is OK (default) +e) Edit this remote +d) Delete this remote +y/e/d> y +``` + +And your config should end up looking like this: + +``` +[selectel] +type = s3 +provider = Selectel +access_key_id = ACCESS_KEY +secret_access_key = SECRET_ACCESS_KEY +region = ru-1 +endpoint = s3.ru-1.storage.selcloud.ru +``` + ### Storj Storj is a decentralized cloud storage which can be used through its @@ -6157,38 +5917,6 @@ nodes across the network. For more detailed comparison please check the documentation of the [storj](/storj) backend. -## Memory usage {memory} - -The most common cause of rclone using lots of memory is a single -directory with millions of files in. Despite s3 not really having the -concepts of directories, rclone does the sync on a directory by -directory basis to be compatible with normal filing systems. - -Rclone loads each directory into memory as rclone objects. Each rclone -object takes 0.5k-1k of memory, so approximately 1GB per 1,000,000 -files, and the sync for that directory does not begin until it is -entirely loaded in memory. So the sync can take a long time to start -for large directories. - -To sync a directory with 100,000,000 files in you would need approximately -100 GB of memory. At some point the amount of memory becomes difficult -to provide so there is -[a workaround for this](https://github.com/rclone/rclone/wiki/Big-syncs-with-millions-of-files) -which involves a bit of scripting. - -At some point rclone will gain a sync mode which is effectively this -workaround but built in to rclone. - -## Limitations - -`rclone about` is not supported by the S3 backend. Backends without -this capability cannot determine free space for an rclone mount or -use policy `mfs` (most free space) as a member of an rclone union -remote. - -See [List of backends that do not support rclone about](https://rclone.org/overview/#optional-features) and [rclone about](https://rclone.org/commands/rclone_about/) - - ### Synology C2 Object Storage {#synology-c2} [Synology C2 Object Storage](https://c2.synology.com/en-global/object-storage/overview) provides a secure, S3-compatible, and cost-effective cloud storage solution without API request, download fees, and deletion penalty. @@ -6321,6 +6049,247 @@ d) Delete this remote y/e/d> y ``` + +### Tencent COS {#tencent-cos} + +[Tencent Cloud Object Storage (COS)](https://intl.cloud.tencent.com/product/cos) is a distributed storage service offered by Tencent Cloud for unstructured data. It is secure, stable, massive, convenient, low-delay and low-cost. + +To configure access to Tencent COS, follow the steps below: + +1. Run `rclone config` and select `n` for a new remote. + +``` +rclone config +No remotes found, make a new one? +n) New remote +s) Set configuration password +q) Quit config +n/s/q> n +``` + +2. Give the name of the configuration. For example, name it 'cos'. + +``` +name> cos +``` + +3. Select `s3` storage. + +``` +Choose a number from below, or type in your own value +[snip] +XX / Amazon S3 Compliant Storage Providers including AWS, ... + \ "s3" +[snip] +Storage> s3 +``` + +4. Select `TencentCOS` provider. 
+``` +Choose a number from below, or type in your own value +1 / Amazon Web Services (AWS) S3 + \ "AWS" +[snip] +11 / Tencent Cloud Object Storage (COS) + \ "TencentCOS" +[snip] +provider> TencentCOS +``` + +5. Enter your SecretId and SecretKey of Tencent Cloud. + +``` +Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). +Only applies if access_key_id and secret_access_key is blank. +Enter a boolean value (true or false). Press Enter for the default ("false"). +Choose a number from below, or type in your own value + 1 / Enter AWS credentials in the next step + \ "false" + 2 / Get AWS credentials from the environment (env vars or IAM) + \ "true" +env_auth> 1 +AWS Access Key ID. +Leave blank for anonymous access or runtime credentials. +Enter a string value. Press Enter for the default (""). +access_key_id> AKIDxxxxxxxxxx +AWS Secret Access Key (password) +Leave blank for anonymous access or runtime credentials. +Enter a string value. Press Enter for the default (""). +secret_access_key> xxxxxxxxxxx +``` + +6. Select endpoint for Tencent COS. This is the standard endpoint for different region. + +``` + 1 / Beijing Region. + \ "cos.ap-beijing.myqcloud.com" + 2 / Nanjing Region. + \ "cos.ap-nanjing.myqcloud.com" + 3 / Shanghai Region. + \ "cos.ap-shanghai.myqcloud.com" + 4 / Guangzhou Region. + \ "cos.ap-guangzhou.myqcloud.com" +[snip] +endpoint> 4 +``` + +7. Choose acl and storage class. + +``` +Note that this ACL is applied when server-side copying objects as S3 +doesn't copy the ACL from the source but rather writes a fresh one. +Enter a string value. Press Enter for the default (""). +Choose a number from below, or type in your own value + 1 / Owner gets Full_CONTROL. No one else has access rights (default). + \ "default" +[snip] +acl> 1 +The storage class to use when storing new objects in Tencent COS. +Enter a string value. Press Enter for the default (""). +Choose a number from below, or type in your own value + 1 / Default + \ "" +[snip] +storage_class> 1 +Edit advanced config? (y/n) +y) Yes +n) No (default) +y/n> n +Remote config +-------------------- +[cos] +type = s3 +provider = TencentCOS +env_auth = false +access_key_id = xxx +secret_access_key = xxx +endpoint = cos.ap-guangzhou.myqcloud.com +acl = default +-------------------- +y) Yes this is OK (default) +e) Edit this remote +d) Delete this remote +y/e/d> y +Current remotes: + +Name Type +==== ==== +cos s3 +``` + +### Wasabi + +[Wasabi](https://wasabi.com) is a cloud-based object storage service for a +broad range of applications and use cases. Wasabi is designed for +individuals and organizations that require a high-performance, +reliable, and secure data storage infrastructure at minimal cost. + +Wasabi provides an S3 interface which can be configured for use with +rclone like this. + +``` +No remotes found, make a new one? +n) New remote +s) Set configuration password +n/s> n +name> wasabi +Type of storage to configure. +Choose a number from below, or type in your own value +[snip] +XX / Amazon S3 (also Dreamhost, Ceph, ChinaMobile, ArvanCloud, Minio, Liara) + \ "s3" +[snip] +Storage> s3 +Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). Only applies if access_key_id and secret_access_key is blank. 
+Choose a number from below, or type in your own value + 1 / Enter AWS credentials in the next step + \ "false" + 2 / Get AWS credentials from the environment (env vars or IAM) + \ "true" +env_auth> 1 +AWS Access Key ID - leave blank for anonymous access or runtime credentials. +access_key_id> YOURACCESSKEY +AWS Secret Access Key (password) - leave blank for anonymous access or runtime credentials. +secret_access_key> YOURSECRETACCESSKEY +Region to connect to. +Choose a number from below, or type in your own value + / The default endpoint - a good choice if you are unsure. + 1 | US Region, Northern Virginia, or Pacific Northwest. + | Leave location constraint empty. + \ "us-east-1" +[snip] +region> us-east-1 +Endpoint for S3 API. +Leave blank if using AWS to use the default endpoint for the region. +Specify if using an S3 clone such as Ceph. +endpoint> s3.wasabisys.com +Location constraint - must be set to match the Region. Used when creating buckets only. +Choose a number from below, or type in your own value + 1 / Empty for US Region, Northern Virginia, or Pacific Northwest. + \ "" +[snip] +location_constraint> +Canned ACL used when creating buckets and/or storing objects in S3. +For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl +Choose a number from below, or type in your own value + 1 / Owner gets FULL_CONTROL. No one else has access rights (default). + \ "private" +[snip] +acl> +The server-side encryption algorithm used when storing this object in S3. +Choose a number from below, or type in your own value + 1 / None + \ "" + 2 / AES256 + \ "AES256" +server_side_encryption> +The storage class to use when storing objects in S3. +Choose a number from below, or type in your own value + 1 / Default + \ "" + 2 / Standard storage class + \ "STANDARD" + 3 / Reduced redundancy storage class + \ "REDUCED_REDUNDANCY" + 4 / Standard Infrequent Access storage class + \ "STANDARD_IA" +storage_class> +Remote config +-------------------- +[wasabi] +env_auth = false +access_key_id = YOURACCESSKEY +secret_access_key = YOURSECRETACCESSKEY +region = us-east-1 +endpoint = s3.wasabisys.com +location_constraint = +acl = +server_side_encryption = +storage_class = +-------------------- +y) Yes this is OK +e) Edit this remote +d) Delete this remote +y/e/d> y +``` + +This will leave the config file looking like this. + +``` +[wasabi] +type = s3 +provider = Wasabi +env_auth = false +access_key_id = YOURACCESSKEY +secret_access_key = YOURSECRETACCESSKEY +region = +endpoint = s3.wasabisys.com +location_constraint = +acl = +server_side_encryption = +storage_class = +``` + ### Zata Object Storage {#Zata} [Zata Object Storage](https://zata.ai/) provides a secure, S3-compatible cloud storage solution designed for scalability and performance, ideal for a variety of data storage needs. @@ -6472,4 +6441,37 @@ secret_access_key = xxx region = us-east-1 endpoint = idr01.zata.ai -``` \ No newline at end of file +``` + +## Memory usage {#memory} + +The most common cause of rclone using lots of memory is a single +directory with millions of files in. Despite s3 not really having the +concepts of directories, rclone does the sync on a directory by +directory basis to be compatible with normal filing systems. + +Rclone loads each directory into memory as rclone objects. Each rclone +object takes 0.5k-1k of memory, so approximately 1GB per 1,000,000 +files, and the sync for that directory does not begin until it is +entirely loaded in memory. 
So the sync can take a long time to start
+for large directories.
+
+To sync a directory with 100,000,000 files in, you would need approximately
+100 GB of memory. At some point the amount of memory becomes difficult
+to provide, so there is
+[a workaround for this](https://github.com/rclone/rclone/wiki/Big-syncs-with-millions-of-files)
+which involves a bit of scripting.
+
+At some point rclone will gain a sync mode which is effectively this
+workaround but built into rclone.
+
+## Limitations
+
+`rclone about` is not supported by the S3 backend. Backends without
+this capability cannot determine free space for an rclone mount or
+use policy `mfs` (most free space) as a member of an rclone union
+remote.
+
+See [List of backends that do not support rclone about](https://rclone.org/overview/#optional-features) and [rclone about](https://rclone.org/commands/rclone_about/).
+
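+If you want to check which optional features a configured remote reports, one
+way (a sketch - `remote:` below is a placeholder for your own S3 remote name)
+is to list the backend features, which should include an `About` entry:
+
+```
+rclone backend features remote:
+```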