From a7faf053939d7cef8570c67e9e6b729b4280b37c Mon Sep 17 00:00:00 2001
From: albertony <12441419+albertony@users.noreply.github.com>
Date: Sat, 18 Nov 2023 13:36:46 +0100
Subject: [PATCH] docs: clean up backend hashes sections

---
 docs/content/amazonclouddrive.md    |  6 ++--
 docs/content/azureblob.md           | 16 ++++-----
 docs/content/b2.md                  |  4 +--
 docs/content/bisync.md              |  4 +--
 docs/content/box.md                 |  2 +-
 docs/content/chunker.md             |  2 +-
 docs/content/crypt.md               |  2 +-
 docs/content/drive.md               | 11 ++++--
 docs/content/dropbox.md             |  2 +-
 docs/content/fichier.md             |  4 +--
 docs/content/filefabric.md          |  2 +-
 docs/content/ftp.md                 |  2 +-
 docs/content/googlecloudstorage.md  |  2 +-
 docs/content/googlephotos.md        |  2 +-
 docs/content/hdfs.md                |  2 +-
 docs/content/hidrive.md             |  2 +-
 docs/content/http.md                |  2 +-
 docs/content/jottacloud.md          |  2 +-
 docs/content/local.md               |  6 ++--
 docs/content/mailru.md              |  8 ++---
 docs/content/mega.md                |  2 +-
 docs/content/memory.md              |  2 +-
 docs/content/onedrive.md            |  2 +-
 docs/content/opendrive.md           |  4 ++-
 docs/content/oracleobjectstorage.md | 11 ++++--
 docs/content/overview.md            |  2 +-
 docs/content/pcloud.md              |  2 +-
 docs/content/pikpak.md              | 16 ++++++---
 docs/content/premiumizeme.md        |  2 +-
 docs/content/protondrive.md         |  4 ++-
 docs/content/quatrix.md             |  2 +-
 docs/content/s3.md                  | 53 +++++++++++++++--------------
 docs/content/sftp.md                |  2 +-
 docs/content/sharefile.md           |  2 +-
 docs/content/sugarsync.md           |  2 +-
 docs/content/swift.md               |  4 ++-
 docs/content/uptobox.md             |  2 +-
 docs/content/webdav.md              |  2 +-
 docs/content/yandex.md              |  6 ++--
 docs/content/zoho.md                |  6 ++--
 40 files changed, 115 insertions(+), 96 deletions(-)

diff --git a/docs/content/amazonclouddrive.md b/docs/content/amazonclouddrive.md
index b0068f94f..d7ddec191 100644
--- a/docs/content/amazonclouddrive.md
+++ b/docs/content/amazonclouddrive.md
@@ -127,13 +127,13 @@ To copy a local directory to an Amazon Drive directory called backup
 
     rclone copy /home/source remote:backup
 
-### Modified time and MD5SUMs
+### Modification times and hashes
 
 Amazon Drive doesn't allow modification times to be changed via
 the API so these won't be accurate or used for syncing.
 
-It does store MD5SUMs so for a more accurate sync, you can use the
-`--checksum` flag.
+It does support the MD5 hash algorithm, so for a more accurate sync,
+you can use the `--checksum` flag.
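+
+For example, to copy using checksums for comparison:
+
+    rclone copy --checksum /home/source remote:backup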
 
 ### Restricted filename characters
 
diff --git a/docs/content/azureblob.md b/docs/content/azureblob.md
index 937824f11..6436c5d23 100644
--- a/docs/content/azureblob.md
+++ b/docs/content/azureblob.md
@@ -75,10 +75,10 @@ This remote supports `--fast-list` which allows you to use fewer
 transactions in exchange for more memory. See the [rclone
 docs](/docs/#fast-list) for more details.
 
-### Modified time
+### Modification times and hashes
 
-The modified time is stored as metadata on the object with the `mtime`
-key.  It is stored using RFC3339 Format time with nanosecond
+The modification time is stored as metadata on the object with the
+`mtime` key.  It is stored using RFC3339 format time with nanosecond
 precision.  The metadata is supplied during directory listings so
 there is no performance overhead to using it.
 
@@ -88,6 +88,10 @@ flag. Note that rclone can't set `LastModified`, so using the
 `--update` flag when syncing is recommended if using
 `--use-server-modtime`.
 
+MD5 hashes are stored with blobs. However, blobs that were uploaded in
+chunks only have an MD5 if the source remote was capable of MD5
+hashes, e.g. the local disk.
+
 ### Performance
 
 When uploading large files, increasing the value of
@@ -116,12 +120,6 @@ These only get replaced if they are the last character in the name:
 Invalid UTF-8 bytes will also be [replaced](/overview/#invalid-utf8),
 as they can't be used in JSON strings.
 
-### Hashes
-
-MD5 hashes are stored with blobs.  However blobs that were uploaded in
-chunks only have an MD5 if the source remote was capable of MD5
-hashes, e.g. the local disk.
-
 ### Authentication {#authentication}
 
 There are a number of ways of supplying credentials for Azure Blob
diff --git a/docs/content/b2.md b/docs/content/b2.md
index c2a8d2e4d..4c49ac45c 100644
--- a/docs/content/b2.md
+++ b/docs/content/b2.md
@@ -96,9 +96,9 @@ This remote supports `--fast-list` which allows you to use fewer
 transactions in exchange for more memory. See the [rclone
 docs](/docs/#fast-list) for more details.
 
-### Modified time
+### Modification times
 
-The modified time is stored as metadata on the object as
+The modification time is stored as metadata on the object as
 `X-Bz-Info-src_last_modified_millis` as milliseconds since 1970-01-01
 in the Backblaze standard.  Other tools should be able to use this as
 a modified time.
diff --git a/docs/content/bisync.md b/docs/content/bisync.md
index e50d8cda2..be03f8cb8 100644
--- a/docs/content/bisync.md
+++ b/docs/content/bisync.md
@@ -298,7 +298,7 @@ while `--ignore-checksum` controls whether checksums are considered during the c
 if there ARE diffs.
 * Unless `--ignore-listing-checksum` is passed, bisync currently computes hashes for one path 
 *even when there's no common hash with the other path* 
-(for example, a [crypt](/crypt/#modified-time-and-hashes) remote.)
+(for example, a [crypt](/crypt/#modification-times-and-hashes) remote.)
 * If both paths support checksums and have a common hash, 
 AND `--ignore-listing-checksum` was not specified when creating the listings, 
 `--check-sync=only` can be used to compare Path1 vs. Path2 checksums (as of the time the previous listings were created.) 
@@ -402,7 +402,7 @@ Alternately, a `--resync` may be used (Path1 versions will be pushed
 to Path2). Consider the situation carefully and perhaps use `--dry-run`
 before you commit to the changes.
 
-### Modification time
+### Modification times
 
 Bisync relies on file timestamps to identify changed files and will
 _refuse_ to operate if backend lacks the modification time support.
diff --git a/docs/content/box.md b/docs/content/box.md
index 025e12902..576db2b03 100644
--- a/docs/content/box.md
+++ b/docs/content/box.md
@@ -199,7 +199,7 @@ d) Delete this remote
 y/e/d> y
 ```
 
-### Modified time and hashes
+### Modification times and hashes
 
 Box allows modification times to be set on objects accurate to 1
 second.  These will be used to detect whether objects need syncing or
diff --git a/docs/content/chunker.md b/docs/content/chunker.md
index 1ee168071..e7011dc08 100644
--- a/docs/content/chunker.md
+++ b/docs/content/chunker.md
@@ -244,7 +244,7 @@ revert (sometimes silently) to time/size comparison if compatible hashsums
 between source and target are not found.
 
 
-### Modified time
+### Modification times
 
 Chunker stores modification times using the wrapped remote so support
 depends on that. For a small non-chunked file the chunker overlay simply
diff --git a/docs/content/crypt.md b/docs/content/crypt.md
index 9599dccc1..269e26fed 100644
--- a/docs/content/crypt.md
+++ b/docs/content/crypt.md
@@ -405,7 +405,7 @@ Example:
 `1/12/qgm4avr35m5loi1th53ato71v0`
 
 
-### Modified time and hashes
+### Modification times and hashes
 
 Crypt stores modification times using the underlying remote so support
 depends on that.
diff --git a/docs/content/drive.md b/docs/content/drive.md
index a4c6a2a4d..246430b98 100644
--- a/docs/content/drive.md
+++ b/docs/content/drive.md
@@ -361,10 +361,14 @@ large folder (10600 directories, 39000 files):
 - without `--fast-list`: 22:05 min
 - with `--fast-list`: 58s
 
-### Modified time
+### Modification times and hashes
 
 Google drive stores modification times accurate to 1 ms.
 
+Hash algorithms MD5, SHA1 and SHA256 are supported. Note, however,
+that a small fraction of files uploaded may not have SHA1 or SHA256
+hashes, especially if they were uploaded before 2018.
+
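+You can list hashes of files on the remote with the `rclone hashsum`
+command, e.g. to list SHA256 hashes (`remote:path` is a placeholder):
+
+    rclone hashsum SHA256 remote:path
+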
 ### Restricted filename characters
 
 Only Invalid UTF-8 bytes will be [replaced](/overview/#invalid-utf8),
@@ -1528,9 +1532,10 @@ Waiting a moderate period of time between attempts (estimated to be
 approximately 1 hour) and/or not using --fast-list both seem to be
 effective in preventing the problem.
 
-### Hashes
+### SHA1 or SHA256 hashes may be missing
 
-We need to say that all files have MD5 hashes, but a small fraction of files uploaded may not have SHA1 or SHA256 hashes especially if they were uploaded before 2018.
+All files have MD5 hashes, but a small fraction of files uploaded may
+not have SHA1 or SHA256 hashes, especially if they were uploaded before 2018.
 
 ## Making your own client_id
 
diff --git a/docs/content/dropbox.md b/docs/content/dropbox.md
index 609393425..fa20cfe19 100644
--- a/docs/content/dropbox.md
+++ b/docs/content/dropbox.md
@@ -97,7 +97,7 @@ You can then use team folders like this `remote:/TeamFolder` and
 A leading `/` for a Dropbox personal account will do nothing, but it
 will take an extra HTTP transaction so it should be avoided.
 
-### Modified time and Hashes
+### Modification times and hashes
 
 Dropbox supports modified times, but the only way to set a
 modification time is to re-upload the file.
diff --git a/docs/content/fichier.md b/docs/content/fichier.md
index a86d72bbb..8576cb9aa 100644
--- a/docs/content/fichier.md
+++ b/docs/content/fichier.md
@@ -76,11 +76,11 @@ To copy a local directory to a 1Fichier directory called backup
 
     rclone copy /home/source remote:backup
 
-### Modified time and hashes ###
+### Modification times and hashes
 
 1Fichier does not support modification times. It supports the Whirlpool hash algorithm.
 
-### Duplicated files ###
+### Duplicated files
 
 1Fichier can have two files with exactly the same name and path (unlike a
 normal file system).
diff --git a/docs/content/filefabric.md b/docs/content/filefabric.md
index 69ee67fcd..61f2ae815 100644
--- a/docs/content/filefabric.md
+++ b/docs/content/filefabric.md
@@ -101,7 +101,7 @@ To copy a local directory to an Enterprise File Fabric directory called backup
 
     rclone copy /home/source remote:backup
 
-### Modified time and hashes
+### Modification times and hashes
 
 The Enterprise File Fabric allows modification times to be set on
 files accurate to 1 second.  These will be used to detect whether
diff --git a/docs/content/ftp.md b/docs/content/ftp.md
index dea99f0c3..7d01d9f8e 100644
--- a/docs/content/ftp.md
+++ b/docs/content/ftp.md
@@ -486,7 +486,7 @@ at present.
 
 The `ftp_proxy` environment variable is not currently supported.
 
-#### Modified time
+### Modification times
 
 File modification time (timestamps) is supported to 1 second resolution
 for major FTP servers: ProFTPd, PureFTPd, VsFTPd, and FileZilla FTP server.
diff --git a/docs/content/googlecloudstorage.md b/docs/content/googlecloudstorage.md
index 80e5b75b8..0c7512964 100644
--- a/docs/content/googlecloudstorage.md
+++ b/docs/content/googlecloudstorage.md
@@ -247,7 +247,7 @@ Eg `--header-upload "Content-Type text/potato"`
 Note that the last of these is for setting custom metadata in the form
 `--header-upload "x-goog-meta-key: value"`
 
-### Modification time
+### Modification times
 
 Google Cloud Storage stores md5sum natively.
 Google's [gsutil](https://cloud.google.com/storage/docs/gsutil) tool stores modification time
diff --git a/docs/content/googlephotos.md b/docs/content/googlephotos.md
index e10b2a9fe..bce5a26f7 100644
--- a/docs/content/googlephotos.md
+++ b/docs/content/googlephotos.md
@@ -428,7 +428,7 @@ if you uploaded an image to `upload` then uploaded the same image to
 what it was uploaded with initially, not what you uploaded it with to
 `album`.  In practise this shouldn't cause too many problems.
 
-### Modified time
+### Modification times
 
 The date shown of media in Google Photos is the creation date as
 determined by the EXIF information, or the upload date if that is not
diff --git a/docs/content/hdfs.md b/docs/content/hdfs.md
index 6d22e446b..58889c65f 100644
--- a/docs/content/hdfs.md
+++ b/docs/content/hdfs.md
@@ -126,7 +126,7 @@ username = root
 You can stop this image with `docker kill rclone-hdfs` (**NB** it does not use volumes, so all data
 uploaded will be lost.)
 
-### Modified time
+### Modification times
 
 Time accurate to 1 second is stored.
 
diff --git a/docs/content/hidrive.md b/docs/content/hidrive.md
index 91e8ab29b..830778119 100644
--- a/docs/content/hidrive.md
+++ b/docs/content/hidrive.md
@@ -123,7 +123,7 @@ Using
 
 the process is very similar to the process of initial setup exemplified before.
 
-### Modified time and hashes
+### Modification times and hashes
 
 HiDrive allows modification times to be set on objects accurate to 1 second.
 
diff --git a/docs/content/http.md b/docs/content/http.md
index 06ae9cada..19667dda4 100644
--- a/docs/content/http.md
+++ b/docs/content/http.md
@@ -105,7 +105,7 @@ Sync the remote `directory` to `/home/local/directory`, deleting any excess file
 
 This remote is read only - you can't upload files to an HTTP server.
 
-### Modified time
+### Modification times
 
 Most HTTP servers store time accurate to 1 second.
 
diff --git a/docs/content/jottacloud.md b/docs/content/jottacloud.md
index d2b19e53e..ef570c394 100644
--- a/docs/content/jottacloud.md
+++ b/docs/content/jottacloud.md
@@ -245,7 +245,7 @@ Note also that with rclone version 1.58 and newer, information about
 [MIME types](/overview/#mime-type) and metadata item [utime](#metadata)
 are not available when using `--fast-list`.
 
-### Modified time and hashes
+### Modification times and hashes
 
 Jottacloud allows modification times to be set on objects accurate to 1
 second. These will be used to detect whether objects need syncing or
diff --git a/docs/content/local.md b/docs/content/local.md
index 42327a940..d7881eef6 100644
--- a/docs/content/local.md
+++ b/docs/content/local.md
@@ -19,10 +19,10 @@ For consistencies sake one can also configure a remote of type
 rclone remote paths, e.g. `remote:path/to/wherever`, but it is probably
 easier not to.
 
-### Modified time ###
+### Modification times
 
-Rclone reads and writes the modified time using an accuracy determined by
-the OS. Typically this is 1ns on Linux, 10 ns on Windows and 1 Second
+Rclone reads and writes the modification times using an accuracy determined
+by the OS. Typically this is 1 ns on Linux, 10 ns on Windows and 1 second
 on OS X.
 
 ### Filenames ###
diff --git a/docs/content/mailru.md b/docs/content/mailru.md
index 57d3dbbde..9aae4013d 100644
--- a/docs/content/mailru.md
+++ b/docs/content/mailru.md
@@ -123,17 +123,15 @@ excess files in the path.
 
     rclone sync --interactive /home/local/directory remote:directory
 
-### Modified time
+### Modification times and hashes
 
 Files support a modification time attribute with up to 1 second precision.
 Directories do not have a modification time, which is shown as "Jan 1 1970".
 
-### Hash checksums
-
-Hash sums use a custom Mail.ru algorithm based on SHA1.
+File hashes are supported, with a custom Mail.ru algorithm based on SHA1.
 If file size is less than or equal to the SHA1 block size (20 bytes),
 its hash is simply its data right-padded with zero bytes.
-Hash sum of a larger file is computed as a SHA1 sum of the file data
+The hash of a larger file is computed as the SHA1 of the file data
 bytes concatenated with a decimal representation of the data length.
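+
+For illustration, this scheme can be reproduced for a local file with
+standard GNU/Linux tools (`file.bin` is a placeholder; this is a sketch
+of the algorithm described above, not rclone's implementation):
+
+    # a file larger than 20 bytes: SHA1 of the data followed by its decimal length
+    { cat file.bin; printf '%s' "$(stat -c%s file.bin)"; } | sha1sum
+
+    # a file of 20 bytes or less: its data right-padded with zero bytes
+    cat file.bin /dev/zero | head -c20 | xxd -p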
 
 ### Emptying Trash
diff --git a/docs/content/mega.md b/docs/content/mega.md
index 5532a4fbb..53a275868 100644
--- a/docs/content/mega.md
+++ b/docs/content/mega.md
@@ -82,7 +82,7 @@ To copy a local directory to an Mega directory called backup
 
     rclone copy /home/source remote:backup
 
-### Modified time and hashes
+### Modification times and hashes
 
 Mega does not support modification times or hashes yet.
 
diff --git a/docs/content/memory.md b/docs/content/memory.md
index 843fc3cbd..783fc2815 100644
--- a/docs/content/memory.md
+++ b/docs/content/memory.md
@@ -54,7 +54,7 @@ testing or with an rclone server or rclone mount, e.g.
     rclone serve webdav :memory:
     rclone serve sftp :memory:
 
-### Modified time and hashes
+### Modification times and hashes
 
 The memory backend supports MD5 hashes and modification times accurate to 1 nS.
 
diff --git a/docs/content/onedrive.md b/docs/content/onedrive.md
index cf7ca5175..da83f448a 100644
--- a/docs/content/onedrive.md
+++ b/docs/content/onedrive.md
@@ -162,7 +162,7 @@ You may try to [verify you account](https://docs.microsoft.com/en-us/azure/activ
 Note: If you have a special region, you may need a different host in step 4 and 5. Here are [some hints](https://github.com/rclone/rclone/blob/bc23bf11db1c78c6ebbf8ea538fbebf7058b4176/backend/onedrive/onedrive.go#L86).
 
 
-### Modification time and hashes
+### Modification times and hashes
 
 OneDrive allows modification times to be set on objects accurate to 1
 second.  These will be used to detect whether objects need syncing or
diff --git a/docs/content/opendrive.md b/docs/content/opendrive.md
index 0758470a6..90d5f4cf4 100644
--- a/docs/content/opendrive.md
+++ b/docs/content/opendrive.md
@@ -64,12 +64,14 @@ To copy a local directory to an OpenDrive directory called backup
 
     rclone copy /home/source remote:backup
 
-### Modified time and MD5SUMs
+### Modification times and hashes
 
 OpenDrive allows modification times to be set on objects accurate to 1
 second. These will be used to detect whether objects need syncing or
 not.
 
+The MD5 hash algorithm is supported.
+
 ### Restricted filename characters
 
 | Character | Value | Replacement |
diff --git a/docs/content/oracleobjectstorage.md b/docs/content/oracleobjectstorage.md
index 48b0b4f17..7ed7a0870 100644
--- a/docs/content/oracleobjectstorage.md
+++ b/docs/content/oracleobjectstorage.md
@@ -154,6 +154,7 @@ Rclone supports the following OCI authentication provider.
     No authentication
 
 ### User Principal
+
 Sample rclone config file for Authentication Provider User Principal:
 
     [oos]
@@ -174,6 +175,7 @@ Considerations:
 - If the user is deleted, the config file will no longer work and may cause automation regressions that use the user's credentials.
 
 ###  Instance Principal
+
 An OCI compute instance can be authorized to use rclone by using it's identity and certificates as an instance principal. 
 With this approach no credentials have to be stored and managed.
 
@@ -203,6 +205,7 @@ Considerations:
 - It is applicable for oci compute instances only. It cannot be used on external instance or resources.
 
 ### Resource Principal
+
 Resource principal auth is very similar to instance principal auth but used for resources that are not 
 compute instances such as [serverless functions](https://docs.oracle.com/en-us/iaas/Content/Functions/Concepts/functionsoverview.htm). 
 To use resource principal ensure Rclone process is started with these environment variables set in its process.
@@ -222,6 +225,7 @@ Sample rclone configuration file for Authentication Provider Resource Principal:
     provider = resource_principal_auth
 
 ### No authentication
+
 Public buckets do not require any authentication mechanism to read objects.
 Sample rclone configuration file for No authentication:
     
@@ -232,10 +236,9 @@ Sample rclone configuration file for No authentication:
     region = us-ashburn-1
     provider = no_auth
 
-## Options
-### Modified time
+### Modification times and hashes
 
-The modified time is stored as metadata on the object as
+The modification time is stored as metadata on the object as
 `opc-meta-mtime` as floating point since the epoch, accurate to 1 ns.
 
 If the modification time needs to be updated rclone will attempt to perform a server
@@ -245,6 +248,8 @@ In the case the object is larger than 5Gb, the object will be uploaded rather th
 Note that reading this from the object takes an additional `HEAD` request as the metadata
 isn't returned in object listings.
 
+The MD5 hash algorithm is supported.
+
 ### Multipart uploads
 
 rclone supports multipart uploads with OOS which means that it can
diff --git a/docs/content/overview.md b/docs/content/overview.md
index c1ad7da2c..e95411e18 100644
--- a/docs/content/overview.md
+++ b/docs/content/overview.md
@@ -90,7 +90,7 @@ mistake or an unsupported feature.
 ⁹ QingStor does not support SetModTime for objects bigger than 5 GiB.
 
 ¹⁰ FTP supports modtimes for the major FTP servers, and also others
-if they advertised required protocol extensions. See [this](/ftp/#modified-time)
+if they advertised required protocol extensions. See [this](/ftp/#modification-times)
 for more details.
 
 ¹¹ Internet Archive requires option `wait_archive` to be set to a non-zero value
diff --git a/docs/content/pcloud.md b/docs/content/pcloud.md
index bbf06a141..3a28668f5 100644
--- a/docs/content/pcloud.md
+++ b/docs/content/pcloud.md
@@ -86,7 +86,7 @@ To copy a local directory to a pCloud directory called backup
 
     rclone copy /home/source remote:backup
 
-### Modified time and hashes ###
+### Modification times and hashes
 
 pCloud allows modification times to be set on objects accurate to 1
 second.  These will be used to detect whether objects need syncing or
diff --git a/docs/content/pikpak.md b/docs/content/pikpak.md
index f20ba3c9e..502219238 100644
--- a/docs/content/pikpak.md
+++ b/docs/content/pikpak.md
@@ -71,6 +71,13 @@ d) Delete this remote
 y/e/d> y
 ```
 
+### Modification times and hashes
+
+PikPak keeps modification times on objects, and updates them when uploading
+objects, but it does not support changing only the modification time.
+
+The MD5 hash algorithm is supported.
+
 {{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/pikpak/pikpak.go then run make backenddocs" >}}
 ### Standard options
 
@@ -294,12 +301,13 @@ Result:
 
 {{< rem autogenerated options stop >}}
 
-## Limitations ##
+## Limitations
 
-### Hashes ###
+### Hashes may be empty
 
 PikPak supports MD5 hash, but sometimes given empty especially for user-uploaded files.
 
-### Deleted files ###
+### Deleted files still visible with trashed-only
 
-Deleted files will still be visible with `--pikpak-trashed-only` even after the trash emptied. This goes away after few days.
+Deleted files will still be visible with `--pikpak-trashed-only` even after the
+trash has been emptied. This goes away after a few days.
diff --git a/docs/content/premiumizeme.md b/docs/content/premiumizeme.md
index 9324b3e8d..6b8ccace4 100644
--- a/docs/content/premiumizeme.md
+++ b/docs/content/premiumizeme.md
@@ -84,7 +84,7 @@ To copy a local directory to an premiumize.me directory called backup
 
     rclone copy /home/source remote:backup
 
-### Modified time and hashes
+### Modification times and hashes
 
 premiumize.me does not support modification times or hashes, therefore
 syncing will default to `--size-only` checking.  Note that using
diff --git a/docs/content/protondrive.md b/docs/content/protondrive.md
index b3bb9f7aa..5e46803c4 100644
--- a/docs/content/protondrive.md
+++ b/docs/content/protondrive.md
@@ -95,10 +95,12 @@ To copy a local directory to an Proton Drive directory called backup
 
     rclone copy /home/source remote:backup
 
-### Modified time
+### Modification times and hashes
 
 Proton Drive Bridge does not support updating modification times yet.
 
+The SHA1 hash algorithm is supported.
+
 ### Restricted filename characters
 
 Invalid UTF-8 bytes will be [replaced](/overview/#invalid-utf8), also left and 
diff --git a/docs/content/quatrix.md b/docs/content/quatrix.md
index 66819c057..02ea14cea 100644
--- a/docs/content/quatrix.md
+++ b/docs/content/quatrix.md
@@ -121,7 +121,7 @@ d) Delete this remote
 y/e/d> y
 ```
 
-### Modified time and hashes
+### Modification times and hashes
 
 Quatrix allows modification times to be set on objects accurate to 1 microsecond.
 These will be used to detect whether objects need syncing or not.
diff --git a/docs/content/s3.md b/docs/content/s3.md
index c4cc881fa..b1853b3e6 100644
--- a/docs/content/s3.md
+++ b/docs/content/s3.md
@@ -271,7 +271,9 @@ d) Delete this remote
 y/e/d>
 ```
 
-### Modified time
+### Modification times and hashes
+
+#### Modification times
 
 The modified time is stored as metadata on the object as
 `X-Amz-Meta-Mtime` as floating point since the epoch, accurate to 1 ns.
@@ -284,6 +286,29 @@ storage the object will be uploaded rather than copied.
 Note that reading this from the object takes an additional `HEAD`
 request as the metadata isn't returned in object listings.
 
+#### Hashes
+
+For small objects which weren't uploaded as multipart uploads (objects
+sized below `--s3-upload-cutoff` if uploaded with rclone) rclone uses
+the `ETag:` header as an MD5 checksum.
+
+However, for objects which were uploaded as multipart uploads or with
+server side encryption (SSE-AWS or SSE-C) the `ETag` header is no
+longer the MD5 sum of the data, so rclone adds an additional piece of
+metadata `X-Amz-Meta-Md5chksum` which is a base64-encoded MD5 hash (in
+the same format as is required for `Content-MD5`). You can use `base64 -d`
+and `hexdump` to check this value manually:
+
+    echo 'VWTGdNx3LyXQDfA0e2Edxw==' | base64 -d | hexdump
+
+or you can use `rclone check` to verify the hashes are OK.
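+
+To compute the same base64-encoded MD5 for a local file, something like
+this should work (`file.bin` is a placeholder):
+
+    openssl md5 -binary file.bin | base64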
+
+For large objects, calculating this hash can take some time so the
+addition of this hash can be disabled with `--s3-disable-checksum`.
+This will mean that these objects do not have an MD5 checksum.
+
+Note that reading this from the object takes an additional `HEAD`
+request as the metadata isn't returned in object listings.
+
 ### Reducing costs
 
 #### Avoiding HEAD requests to read the modification time
@@ -375,29 +400,6 @@ there for more details.
 
 Setting this flag increases the chance for undetected upload failures.
 
-### Hashes
-
-For small objects which weren't uploaded as multipart uploads (objects
-sized below `--s3-upload-cutoff` if uploaded with rclone) rclone uses
-the `ETag:` header as an MD5 checksum.
-
-However for objects which were uploaded as multipart uploads or with
-server side encryption (SSE-AWS or SSE-C) the `ETag` header is no
-longer the MD5 sum of the data, so rclone adds an additional piece of
-metadata `X-Amz-Meta-Md5chksum` which is a base64 encoded MD5 hash (in
-the same format as is required for `Content-MD5`).  You can use base64 -d and hexdump to check this value manually:
-
-    echo 'VWTGdNx3LyXQDfA0e2Edxw==' | base64 -d | hexdump
-
-or you can use `rclone check` to verify the hashes are OK.
-
-For large objects, calculating this hash can take some time so the
-addition of this hash can be disabled with `--s3-disable-checksum`.
-This will mean that these objects do not have an MD5 checksum.
-
-Note that reading this from the object takes an additional `HEAD`
-request as the metadata isn't returned in object listings.
-
 ### Versions
 
 When bucket versioning is enabled (this can be done with rclone with
@@ -660,7 +662,8 @@ According to AWS's [documentation on S3 Object Lock](https://docs.aws.amazon.com
 
 > If you configure a default retention period on a bucket, requests to upload objects in such a bucket must include the Content-MD5 header.
 
-As mentioned in the [Hashes](#hashes) section, small files that are not uploaded as multipart, use a different tag, causing the upload to fail.
+As mentioned in the [Modification times and hashes](#modification-times-and-hashes) section,
+small files that are not uploaded as multipart use a different tag,
+causing the upload to fail.
 A simple solution is to set the `--s3-upload-cutoff 0` and force all the files to be uploaded as multipart.
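+
+For example (with placeholder source and remote names):
+
+    rclone copy --s3-upload-cutoff 0 /home/source remote:bucket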
 
 {{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/s3/s3.go then run make backenddocs" >}}
diff --git a/docs/content/sftp.md b/docs/content/sftp.md
index f0d4a5651..e2cd91b83 100644
--- a/docs/content/sftp.md
+++ b/docs/content/sftp.md
@@ -359,7 +359,7 @@ commands is prohibited.  Set the configuration option `disable_hashcheck`
 to `true` to disable checksumming entirely, or set `shell_type` to `none`
 to disable all functionality based on remote shell command execution.
 
-### Modified time
+### Modification times and hashes
 
 Modified times are stored on the server to 1 second precision.
 
diff --git a/docs/content/sharefile.md b/docs/content/sharefile.md
index a362f1dcc..60d518687 100644
--- a/docs/content/sharefile.md
+++ b/docs/content/sharefile.md
@@ -105,7 +105,7 @@ To copy a local directory to an ShareFile directory called backup
 
 Paths may be as deep as required, e.g. `remote:directory/subdirectory`.
 
-### Modified time and hashes
+### Modification times and hashes
 
 ShareFile allows modification times to be set on objects accurate to 1
 second.  These will be used to detect whether objects need syncing or
diff --git a/docs/content/sugarsync.md b/docs/content/sugarsync.md
index 0d9a29d73..30633051c 100644
--- a/docs/content/sugarsync.md
+++ b/docs/content/sugarsync.md
@@ -98,7 +98,7 @@ Paths may be as deep as required, e.g. `remote:directory/subdirectory`.
 create a folder, which rclone will create as a "Sync Folder" with
 SugarSync.
 
-### Modified time and hashes
+### Modification times and hashes
 
 SugarSync does not support modification times or hashes, therefore
 syncing will default to `--size-only` checking.  Note that using
diff --git a/docs/content/swift.md b/docs/content/swift.md
index 4b4817f4a..ddcfa45f7 100644
--- a/docs/content/swift.md
+++ b/docs/content/swift.md
@@ -227,7 +227,7 @@ sufficient to determine if it is "dirty". By using `--update` along with
 `--use-server-modtime`, you can avoid the extra API call and simply upload
 files whose local modtime is newer than the time it was last uploaded.
 
-### Modified time
+### Modification times and hashes
 
 The modified time is stored as metadata on the object as
 `X-Object-Meta-Mtime` as floating point since the epoch accurate to 1
@@ -236,6 +236,8 @@ ns.
 This is a de facto standard (used in the official python-swiftclient
 amongst others) for storing the modification time for an object.
 
+The MD5 hash algorithm is supported.
+
 ### Restricted filename characters
 
 | Character | Value | Replacement |
diff --git a/docs/content/uptobox.md b/docs/content/uptobox.md
index ed717d6c2..9a08f3f53 100644
--- a/docs/content/uptobox.md
+++ b/docs/content/uptobox.md
@@ -82,7 +82,7 @@ To copy a local directory to an Uptobox directory called backup
 
     rclone copy /home/source remote:backup
 
-### Modified time and hashes
+### Modification times and hashes
 
 Uptobox supports neither modified times nor checksums. All timestamps
 will read as that set by `--default-time`.
diff --git a/docs/content/webdav.md b/docs/content/webdav.md
index bbbb20527..6f246a017 100644
--- a/docs/content/webdav.md
+++ b/docs/content/webdav.md
@@ -101,7 +101,7 @@ To copy a local directory to an WebDAV directory called backup
 
     rclone copy /home/source remote:backup
 
-### Modified time and hashes ###
+### Modification times and hashes
 
 Plain WebDAV does not support modified times.  However when used with
 Fastmail Files, Owncloud or Nextcloud rclone will support modified times.
diff --git a/docs/content/yandex.md b/docs/content/yandex.md
index 0606fdfbd..d62b33e2f 100644
--- a/docs/content/yandex.md
+++ b/docs/content/yandex.md
@@ -87,14 +87,12 @@ excess files in the path.
 
 Yandex paths may be as deep as required, e.g. `remote:directory/subdirectory`.
 
-### Modified time
+### Modification times and hashes
 
 Modified times are supported and are stored accurate to 1 ns in custom
 metadata called `rclone_modified` in RFC3339 with nanoseconds format.
 
-### MD5 checksums
-
-MD5 checksums are natively supported by Yandex Disk.
+The MD5 hash algorithm is natively supported by Yandex Disk.
 
 ### Emptying Trash
 
diff --git a/docs/content/zoho.md b/docs/content/zoho.md
index a103c4613..c185a2271 100644
--- a/docs/content/zoho.md
+++ b/docs/content/zoho.md
@@ -107,13 +107,11 @@ excess files in the path.
 
 Zoho paths may be as deep as required, eg `remote:directory/subdirectory`.
 
-### Modified time
+### Modification times and hashes
 
 Modified times are currently not supported for Zoho Workdrive
 
-### Checksums
-
-No checksums are supported.
+No hash algorithms are supported.
 
 ### Usage information