mirror of https://github.com/laurent22/joplin.git
synced 2024-12-21 09:38:01 +02:00

All: Improved S3 sync error handling and reliability, and upgraded S3 SDK (#5312)

This commit is contained in:
parent 8e54a65ca5
commit 5981227c06

README.md (35 changes)
@@ -275,16 +275,20 @@ In the **desktop application** or **mobile application**, select "OneDrive" as t
 In the **terminal application**, to initiate the synchronisation process, type `:sync`. You will be asked to follow a link to authorise the application (simply input your Microsoft credentials - you do not need to register with OneDrive).
 
-## AWS S3 synchronisation
+## S3 synchronisation
 
-In the **desktop application** or **mobile application**, select "AWS S3 (Beta)" as the synchronisation target in the [Configuration screen](https://github.com/laurent22/joplin/blob/dev/readme/config_screen.md).
+As of Joplin 2.x.x, Joplin supports multiple S3 providers. We expose some options that will need to be configured depending on your provider of choice. We have tested with UpCloud, AWS, and Linode; others should work as well.
 
-- **AWS S3 Bucket:** The name of your Bucket, such as `joplin-bucket`
-- **AWS S3 URL:** Fully qualified URL; by default this should be `https://s3.amazonaws.com/`
-- **AWS key & AWS secret:** IAM user's programmatic access key. To create a new key & secret, visit [IAM Security Credentials](https://console.aws.amazon.com/iam/home#/security_credentials).
+In the **desktop application** or **mobile application**, select "S3 (Beta)" as the synchronisation target in the [Configuration screen](https://github.com/laurent22/joplin/blob/dev/readme/config_screen.md).
+
+- **S3 Bucket:** The name of your Bucket, such as `joplin-bucket`
+- **S3 URL:** Fully qualified URL; for AWS this should be `https://s3.amazonaws.com/`
+- **S3 Access Key & S3 Secret Key:** The user's programmatic access key. To create a new key & secret on AWS, visit [IAM Security Credentials](https://console.aws.amazon.com/iam/home#/security_credentials). For other providers, follow their documentation.
+- **S3 Region:** Some providers require the region of your bucket, usually in a form such as "eu-west1". For providers that do not require a region, you can leave it blank.
+- **Force Path Style:** This setting enables Joplin to talk to S3 providers using the older path-style S3 URL. Depending on your provider you may need to try with this on and off.
 
-While creating a new Bucket for Joplin, disable **Bucket Versioning**, enable **Block all public access** and enable **Default encryption** with `Amazon S3 key (SSE-S3)`.
+While creating a new Bucket for Joplin, disable **Bucket Versioning**, enable **Block all public access** and enable **Default encryption** with `Amazon S3 key (SSE-S3)`. Some providers do not expose these options, which could create a syncing problem; please attempt it anyway and report back so we can update this documentation.
 
 To add a **Bucket Policy** from the AWS S3 Web Console, navigate to the **Permissions** tab. Temporarily disable **Block all public access** to edit the Bucket policy, something along the lines of:
 
 ```
@@ -311,7 +315,26 @@ To add a **Bucket Policy** from the AWS S3 Web Console, navigate to the **Permis
 }
 ```
 
+### Configuration settings for tested providers
+
+All providers will require a Bucket, an Access Key, and a Secret Key.
+
+If you provide a configuration and receive "Success!" from "Check config", then S3 sync should work for your provider. If you do not, you may need to adjust your settings, or save them, restart the app, and attempt a sync; this may reveal clearer error messages that will help you deduce the problem.
+
+### AWS
+- URL: https://s3.amazonaws.com
+- Region: required
+- Force Path Style: unchecked
+
+### Linode
+- URL: https://<region>.linodeobjects.com
+- Region: empty
+- Force Path Style: unchecked
+
+### UpCloud
+- URL: https://<account>.<region>.upcloudobjects.com (they will provide you with multiple URLs; the one that follows this pattern should work)
+- Region: required
+- Force Path Style: unchecked
+
 # Encryption
 
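The provider settings above map more or less directly onto the options object that the aws-sdk-js-v3 `S3Client` constructor accepts. As a rough sketch — `buildS3Options` and its field names are illustrative, not Joplin's actual code — the translation looks like this:

```javascript
// Hypothetical helper: turn the README's settings into an S3Client options
// object. The result would be passed to `new S3Client(options)`.
function buildS3Options(settings) {
  const options = {
    endpoint: settings.url,                    // "S3 URL"
    forcePathStyle: !!settings.forcePathStyle, // "Force Path Style"
    credentials: {
      accessKeyId: settings.accessKey,         // "S3 Access Key"
      secretAccessKey: settings.secretKey,     // "S3 Secret Key"
    },
  };
  // Some providers (e.g. Linode, per the table above) take no region,
  // so only set it when one was supplied.
  if (settings.region) options.region = settings.region;
  return options;
}

// AWS example from the table above (credentials are placeholders):
const awsOptions = buildS3Options({
  url: 'https://s3.amazonaws.com',
  region: 'us-east-1',
  forcePathStyle: false,
  accessKey: 'AKIA-EXAMPLE',
  secretKey: 'example-secret',
});
```

This is only meant to show how the five settings relate to one another; the real client construction happens in `SyncTargetAmazonS3.js` below.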
@@ -6,6 +6,10 @@
 // So there's basically still a one way flux: React => SQLite => Redux => React
 
+// For aws-sdk-js-v3
+import 'react-native-get-random-values';
+import 'react-native-url-polyfill/auto';
+
 import { LogBox, AppRegistry } from 'react-native';
 const Root = require('./root').default;
 
@@ -227,6 +227,8 @@ PODS:
     - React-Core
   - react-native-geolocation (2.0.2):
     - React
+  - react-native-get-random-values (1.7.0):
+    - React-Core
   - react-native-image-picker (2.3.4):
     - React-Core
   - react-native-image-resizer (1.3.0):
@@ -356,6 +358,7 @@ DEPENDENCIES:
   - react-native-camera (from `../node_modules/react-native-camera`)
   - react-native-document-picker (from `../node_modules/react-native-document-picker`)
   - "react-native-geolocation (from `../node_modules/@react-native-community/geolocation`)"
+  - react-native-get-random-values (from `../node_modules/react-native-get-random-values`)
   - react-native-image-picker (from `../node_modules/react-native-image-picker`)
   - react-native-image-resizer (from `../node_modules/react-native-image-resizer`)
   - "react-native-netinfo (from `../node_modules/@react-native-community/netinfo`)"
@@ -439,6 +442,8 @@ EXTERNAL SOURCES:
     :path: "../node_modules/react-native-document-picker"
   react-native-geolocation:
     :path: "../node_modules/@react-native-community/geolocation"
+  react-native-get-random-values:
+    :path: "../node_modules/react-native-get-random-values"
   react-native-image-picker:
     :path: "../node_modules/react-native-image-picker"
   react-native-image-resizer:
@@ -524,17 +529,18 @@ SPEC CHECKSUMS:
   React-jsinspector: 8c0517dee5e8c70cd6c3066f20213ff7ce54f176
   React-logger: bfddd3418dc1d45b77b822958f3e31422e2c179b
   react-native-alarm-notification: 466e4ad56fbd948ecac26e657f292dca8bf483d5
-  react-native-camera: 35854c4f764a4a6cf61c1c3525888b92f0fe4b31
-  react-native-document-picker: 0bba80cc56caab1f67dbaa81ff557e3a9b7f2b9f
-  react-native-geolocation: c956aeb136625c23e0dce0467664af2c437888c9
-  react-native-image-picker: c6d75c4ab2cf46f9289f341242b219cb3c1180d3
-  react-native-image-resizer: a79bcffdef1b52160ff91db0d6fa24816a4ff332
-  react-native-netinfo: e849fc21ca2f4128a5726c801a82fc6f4a6db50d
-  react-native-rsa-native: 1f6bba06dd02f0e652a66a384c75c270f7a0062f
-  react-native-slider: e99fc201cefe81270fc9d81714a7a0f5e566b168
-  react-native-sqlite-storage: 418ef4afc5e6df6ce3574c4617e5f0b65cffde55
-  react-native-version-info: 36490da17d2c6b5cc21321c70e433784dee7ed0b
-  react-native-webview: 4e96d493f9f90ba4f03b28933f30b2964df07e39
+  react-native-camera: 5c1fbfecf63b802b8ca4a71c60d30a71550fb348
+  react-native-document-picker: b3e78a8f7fef98b5cb069f20fc35797d55e68e28
+  react-native-geolocation: cbd9d6bd06bac411eed2671810f454d4908484a8
+  react-native-get-random-values: 237bffb1c7e05fb142092681531810a29ba53015
+  react-native-image-picker: 32d1ad2c0024ca36161ae0d5c2117e2d6c441f11
+  react-native-image-resizer: b53bf95ad880100e20262687e41f76fdbc9df255
+  react-native-netinfo: 34f4d7a42f49157f3b45c14217d256bce7dc9682
+  react-native-rsa-native: a8037a48782aa2c29b8fe8d4bc5110a85d100e2d
+  react-native-slider: b733e17fdd31186707146debf1f04b5d94aa1a93
+  react-native-sqlite-storage: ce71689c5a73b79390a1ab213555ae80979a5dc7
+  react-native-version-info: 64f0f0bf3da6316298f9cd6085d50ba3a992d0c7
+  react-native-webview: c51f73be304c61d359ec3e7c5e4e8f2c977fd360
   React-perflogger: fcac6090a80e3d967791b4c7f1b1a017f9d4a398
   React-RCTActionSheet: caf5913d9f9e605f5467206cf9d1caa6d47d7ad6
   React-RCTAnimation: 6539e3bf594f6a529cd861985ba6548286ae1ead
@@ -548,9 +554,9 @@ SPEC CHECKSUMS:
   React-runtimeexecutor: 33a949a51bec5f8a3c9e8d8092deb259600d761e
   ReactCommon: 620442811dc6f707b4bf5e3b27d4f19c12d5a821
   rn-fetch-blob: f065bb7ab7fb48dd002629f8bdcb0336602d3cba
-  RNCClipboard: c7abea1baea58adca5c1f29e56dd5261837b4892
-  RNCPushNotificationIOS: ec7ffe65c7b5097f8d287fd627e1c1674ea69cef
-  RNDateTimePicker: 6f62fd42ac8b58bcc30c43ac3620e5097e8a227f
+  RNCClipboard: 8f9f12fabf3c06e976f19f87a62c89e28dfedfca
+  RNCPushNotificationIOS: 20c4403b2ef8732297ea81e22f66c41bed7aaedf
+  RNDateTimePicker: e9fcd5ecdc0c5b018871e0d178d6040dca11973c
   RNFileViewer: 83cc066ad795b1f986791d03b56fe0ee14b6a69f
   RNFS: 2bd9eb49dc82fa9676382f0585b992c424cd59df
   RNQuickAction: 6d404a869dc872cde841ad3147416a670d13fa93
@@ -561,4 +567,4 @@ SPEC CHECKSUMS:
 
 PODFILE CHECKSUM: 3ccf11f600ddb42a825b2bb9a341a19f5c891f2b
 
-COCOAPODS: 1.10.2
+COCOAPODS: 1.11.2

packages/app-mobile/package-lock.json (generated, 8388 changes)
File diff suppressed because it is too large.
@@ -42,6 +42,7 @@
     "react-native-dropdownalert": "^3.1.2",
     "react-native-file-viewer": "^2.1.4",
     "react-native-fs": "^2.16.6",
+    "react-native-get-random-values": "^1.7.0",
     "react-native-image-picker": "^2.3.4",
     "react-native-image-resizer": "^1.3.0",
     "react-native-modal-datetime-picker": "^9.0.0",
@@ -53,6 +54,7 @@
     "react-native-share": "^7.2.1",
     "react-native-side-menu": "^1.1.3",
     "react-native-sqlite-storage": "^5.0.0",
+    "react-native-url-polyfill": "^1.3.0",
     "react-native-vector-icons": "^7.1.0",
     "react-native-version-info": "^1.1.0",
     "react-native-webview": "^10.9.2",
@@ -63,6 +65,7 @@
     "stream-browserify": "^3.0.0",
     "string-natural-compare": "^2.0.2",
     "timers": "^0.1.1",
+    "url": "^0.11.0",
     "valid-url": "^1.0.9"
   },
   "devDependencies": {
@@ -1,8 +1,6 @@
 import FsDriverBase from '@joplin/lib/fs-driver-base';
 const RNFetchBlob = require('rn-fetch-blob').default;
 const RNFS = require('react-native-fs');
-const { Writable } = require('stream-browserify');
-const { Buffer } = require('buffer');
 
 export default class FsDriverRN extends FsDriverBase {
     public appendFileSync() {
@@ -26,27 +24,6 @@ export default class FsDriverRN extends FsDriverBase {
         return await this.unlink(path);
     }
 
-    public writeBinaryFile(path: string, content: any) {
-        const buffer = Buffer.from(content);
-        return RNFetchBlob.fs.writeStream(path, 'base64').then((stream: any) => {
-            const fileStream = new Writable({
-                write(chunk: any, _encoding: any, callback: Function) {
-                    this.stream.write(chunk.toString('base64'));
-                    callback();
-                },
-                final(callback: Function) {
-                    this.stream.close();
-                    callback();
-                },
-            });
-            // Using options.construct is not implemented in readable-stream, so let's
-            // pass the stream from RNFetchBlob to the Writable instance here.
-            fileStream.stream = stream;
-            fileStream.write(buffer);
-            fileStream.end();
-        });
-    }
-
     // Returns a format compatible with Node.js format
     private rnfsStatToStd_(stat: any, path: string) {
         return {
@@ -4,7 +4,7 @@ const Setting = require('./models/Setting').default;
 const { FileApi } = require('./file-api.js');
 const Synchronizer = require('./Synchronizer').default;
 const { FileApiDriverAmazonS3 } = require('./file-api-driver-amazon-s3.js');
-const S3 = require('aws-sdk/clients/s3');
+const { S3Client, HeadBucketCommand } = require('@aws-sdk/client-s3');
 
 class SyncTargetAmazonS3 extends BaseSyncTarget {
     static id() {
@@ -25,7 +25,7 @@ class SyncTargetAmazonS3 extends BaseSyncTarget {
     }
 
     static label() {
-        return `${_('AWS S3')} (Beta)`;
+        return `${_('S3')} (Beta)`;
     }
 
     static description() {
@@ -40,12 +40,17 @@ class SyncTargetAmazonS3 extends BaseSyncTarget {
         return Setting.value('sync.8.path');
     }
 
+    // These are the settings that get read from disk to instantiate the API.
     s3AuthParameters() {
         return {
+            // We need to set a region. See https://github.com/aws/aws-sdk-js-v3/issues/1845#issuecomment-754832210
+            region: Setting.value('sync.8.region'),
+            credentials: {
                 accessKeyId: Setting.value('sync.8.username'),
                 secretAccessKey: Setting.value('sync.8.password'),
-            s3UseArnRegion: true, // override the request region with the region inferred from requested resource's ARN
-            s3ForcePathStyle: true,
+            },
+            UseArnRegion: true, // override the request region with the region inferred from the requested resource's ARN.
+            forcePathStyle: Setting.value('sync.8.forcePathStyle'), // Older implementations may not support path-style access, so we expose this as a toggle.
             endpoint: Setting.value('sync.8.url'),
         };
     }
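The shape change above — the flat v2 options with `s3ForcePathStyle` and top-level keys, versus the v3 options with a nested `credentials` object and `forcePathStyle` — can be sketched as a standalone conversion. `v2OptionsToV3` is a hypothetical helper written only to illustrate the renames this diff applies; it is not part of the commit:

```javascript
// Illustrative only: map aws-sdk v2-style client options onto the
// aws-sdk-js-v3 S3Client option names used in s3AuthParameters() above.
function v2OptionsToV3(v2) {
  return {
    region: v2.region,
    // v3 nests the key pair under `credentials`:
    credentials: {
      accessKeyId: v2.accessKeyId,
      secretAccessKey: v2.secretAccessKey,
    },
    // v2's `s3ForcePathStyle` became `forcePathStyle` in v3:
    forcePathStyle: !!v2.s3ForcePathStyle,
    endpoint: v2.endpoint,
  };
}
```

The design point is that nothing about the credentials themselves changes between SDK versions; only where they live in the options object does.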
@@ -53,26 +58,45 @@ class SyncTargetAmazonS3 extends BaseSyncTarget {
     api() {
         if (this.api_) return this.api_;
 
-        this.api_ = new S3(this.s3AuthParameters());
+        this.api_ = new S3Client(this.s3AuthParameters());
+
+        // There is a bug with auto skew correction in aws-sdk-js-v3,
+        // and this attempts to remove the skew correction for all calls.
+        // There are some additional spots in the app where we reset this
+        // to zero as well, as it appears the skew logic gets triggered,
+        // which causes "RequestTimeTooSkewed" errors.
+        // See https://github.com/aws/aws-sdk-js-v3/issues/2208
+        this.api_.config.systemClockOffset = 0;
+
         return this.api_;
     }
 
     static async newFileApi_(syncTargetId, options) {
+        // These options are read from the form on the page,
+        // so we can test new config choices without overriding the current settings.
         const apiOptions = {
+            region: options.region(),
+            credentials: {
                 accessKeyId: options.username(),
                 secretAccessKey: options.password(),
-            s3UseArnRegion: true,
-            s3ForcePathStyle: true,
+            },
+            UseArnRegion: true, // override the request region with the region inferred from the requested resource's ARN.
+            forcePathStyle: options.forcePathStyle(),
             endpoint: options.url(),
         };
 
-        const api = new S3(apiOptions);
+        const api = new S3Client(apiOptions);
         const driver = new FileApiDriverAmazonS3(api, SyncTargetAmazonS3.s3BucketName());
         const fileApi = new FileApi('', driver);
         fileApi.setSyncTargetId(syncTargetId);
         return fileApi;
     }
 
+    // With aws-sdk-js-v3, some errors (301/403) won't get their XML parsed properly.
+    // I think it's this issue: https://github.com/aws/aws-sdk-js-v3/issues/1596
+    // If you save the config on desktop, restart the app and attempt a sync, we should get a
+    // clearer error message, because the sync logic has more robust XML error parsing.
+    // We could implement that here, but the above workaround saves some code.
+
     static async checkConfig(options) {
         const fileApi = await SyncTargetAmazonS3.newFileApi_(SyncTargetAmazonS3.id(), options);
         fileApi.requestRepeatCount_ = 0;
@@ -81,22 +105,28 @@ class SyncTargetAmazonS3 extends BaseSyncTarget {
             ok: false,
             errorMessage: '',
         };
 
         try {
             const headBucketReq = new Promise((resolve, reject) => {
-                fileApi.driver().api().headBucket({
+                fileApi.driver().api().send(
+                    new HeadBucketCommand({
                         Bucket: options.path(),
-                }, (err, response) => {
+                    }), (err, response) => {
                         if (err) reject(err);
                         else resolve(response);
                     });
             });
             const result = await headBucketReq;
 
             if (!result) throw new Error(`AWS S3 bucket not found: ${SyncTargetAmazonS3.s3BucketName()}`);
             output.ok = true;
         } catch (error) {
+            if (error.message) {
                 output.errorMessage = error.message;
-            if (error.code) output.errorMessage += ` (Code ${error.code})`;
+            }
+            if (error.code) {
+                output.errorMessage += ` (Code ${error.code})`;
+            }
         }
 
         return output;
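`checkConfig` wraps the SDK's callback-style `send(command, callback)` call in a hand-built Promise, the same pattern this diff uses throughout the file API driver. In isolation the pattern looks like this — `sendFn` is a stand-in for the SDK client's `send`, not a real Joplin or AWS function:

```javascript
// Wrap a Node-style callback API in a Promise: reject on error,
// resolve on success. This mirrors the headBucketReq construction above.
function sendAsPromise(sendFn, command) {
  return new Promise((resolve, reject) => {
    sendFn(command, (err, response) => {
      if (err) reject(err);
      else resolve(response);
    });
  });
}

// Usage with a fake `send` that always succeeds:
const fakeSend = (cmd, cb) => cb(null, { ok: true, cmd });
sendAsPromise(fakeSend, 'HeadBucket').then(r => console.log(r.ok)); // logs true
```

Rejecting instead of throwing keeps the error inside the Promise chain, so the caller's `try { await ... } catch` sees it as a normal exception.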
@@ -82,7 +82,6 @@ shared.saveSettings = function(comp) {
     for (const key in comp.state.settings) {
         if (!comp.state.settings.hasOwnProperty(key)) continue;
         if (comp.state.changedSettingKeys.indexOf(key) < 0) continue;
-        console.info('Saving', key, comp.state.settings[key]);
         Setting.setValue(key, comp.state.settings[key]);
     }
 
@@ -3,6 +3,9 @@ const { basename } = require('./path-utils');
 const shim = require('./shim').default;
 const JoplinError = require('./JoplinError').default;
 const { Buffer } = require('buffer');
+const { GetObjectCommand, ListObjectsV2Command, HeadObjectCommand, PutObjectCommand, DeleteObjectCommand, DeleteObjectsCommand, CopyObjectCommand } = require('@aws-sdk/client-s3');
+const { getSignedUrl } = require('@aws-sdk/s3-request-presigner');
+const parser = require('fast-xml-parser');
 
 const S3_MAX_DELETES = 1000;
 
@@ -26,43 +29,56 @@ class FileApiDriverAmazonS3 {
     }
 
     hasErrorCode_(error, errorCode) {
-        if (!error || typeof error.code !== 'string') return false;
+        if (!error) return false;
+
+        if (error.name) {
+            return error.name.indexOf(errorCode) >= 0;
+        } else if (error.code) {
             return error.code.indexOf(errorCode) >= 0;
+        } else if (error.Code) {
+            return error.Code.indexOf(errorCode) >= 0;
+        } else {
+            return false;
+        }
     }
 
-    // Need to make a custom promise, built-in promise is broken: https://github.com/aws/aws-sdk-js/issues/1436
-    async s3GetObject(key) {
-        return new Promise((resolve, reject) => {
-            this.api().getObject({
-                Bucket: this.s3_bucket_,
-                Key: key,
-            }, (err, response) => {
-                if (err) reject(err);
-                else resolve(response);
-            });
-        });
+    // Because of the way aws-sdk-js-v3 works for getting data from a bucket, we
+    // use a pre-signed URL to avoid https://github.com/aws/aws-sdk-js-v3/issues/1877
+    async s3GenerateGetURL(key) {
+        const signedUrl = await getSignedUrl(this.api(), new GetObjectCommand({
+            Bucket: this.s3_bucket_,
+            Key: key,
+        }), {
+            expiresIn: 3600,
+        });
+        return signedUrl;
     }
 
+    // We've now moved to aws-sdk-js-v3 and this note is outdated, but it explains the promise
+    // structure: need to make a custom promise, built-in promise is broken: https://github.com/aws/aws-sdk-js/issues/1436
+    // TODO: refactor to https://github.com/aws/aws-sdk-js-v3/tree/main/clients/client-s3#asyncawait
     async s3ListObjects(key, cursor) {
         return new Promise((resolve, reject) => {
-            this.api().listObjectsV2({
+            this.api().send(new ListObjectsV2Command({
                 Bucket: this.s3_bucket_,
                 Prefix: key,
                 Delimiter: '/',
                 ContinuationToken: cursor,
-            }, (err, response) => {
+            }), (err, response) => {
                 if (err) reject(err);
                 else resolve(response);
             });
         });
     }
 
     async s3HeadObject(key) {
         return new Promise((resolve, reject) => {
-            this.api().headObject({
+            this.api().send(new HeadObjectCommand({
                 Bucket: this.s3_bucket_,
                 Key: key,
-            }, (err, response) => {
+            }), (err, response) => {
                 if (err) reject(err);
                 else resolve(response);
             });
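The new `hasErrorCode_` has to cope with three error shapes: aws-sdk-js-v3 SDK errors expose `name`, v2-style errors expose `code`, and the XML error bodies parsed later with fast-xml-parser expose `Code`. A standalone restatement of that logic, for illustration:

```javascript
// Check whether an error matches a given S3 error code, whichever of the
// three known fields it arrives in. Mirrors hasErrorCode_ above: `name`
// wins if present, then `code`, then `Code`.
function hasErrorCode(error, errorCode) {
  if (!error) return false;
  if (error.name) return error.name.indexOf(errorCode) >= 0;
  if (error.code) return error.code.indexOf(errorCode) >= 0;
  if (error.Code) return error.Code.indexOf(errorCode) >= 0;
  return false;
}
```

Note the precedence: an error carrying both `name` and `code` is matched on `name` only, which is why the method checks the fields in a fixed order rather than merging them.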
@@ -71,11 +87,11 @@ class FileApiDriverAmazonS3 {
 
     async s3PutObject(key, body) {
         return new Promise((resolve, reject) => {
-            this.api().putObject({
+            this.api().send(new PutObjectCommand({
                 Bucket: this.s3_bucket_,
                 Key: key,
                 Body: body,
-            }, (err, response) => {
+            }), (err, response) => {
                 if (err) reject(err);
                 else resolve(response);
             });
@@ -87,12 +103,12 @@ class FileApiDriverAmazonS3 {
         const body = await shim.fsDriver().readFile(path, 'base64');
         const fileStat = await shim.fsDriver().stat(path);
         return new Promise((resolve, reject) => {
-            this.api().putObject({
+            this.api().send(new PutObjectCommand({
                 Bucket: this.s3_bucket_,
                 Key: key,
                 Body: Buffer.from(body, 'base64'),
                 ContentLength: `${fileStat.size}`,
-            }, (err, response) => {
+            }), (err, response) => {
                 if (err) reject(err);
                 else resolve(response);
             });
|
|||||||
|
|
||||||
async s3DeleteObject(key) {
|
async s3DeleteObject(key) {
|
||||||
return new Promise((resolve, reject) => {
|
return new Promise((resolve, reject) => {
|
||||||
this.api().deleteObject({
|
this.api().send(new DeleteObjectCommand({
|
||||||
Bucket: this.s3_bucket_,
|
Bucket: this.s3_bucket_,
|
||||||
Key: key,
|
Key: key,
|
||||||
},
|
}),
|
||||||
(err, response) => {
|
(err, response) => {
|
||||||
if (err) {
|
if (err) {
|
||||||
console.log(err.code);
|
console.log(err.code);
|
||||||
@@ -118,10 +134,10 @@ class FileApiDriverAmazonS3 {
     // Assumes key is formatted, like `{Key: 's3 path'}`
     async s3DeleteObjects(keys) {
         return new Promise((resolve, reject) => {
-            this.api().deleteObjects({
+            this.api().send(new DeleteObjectsCommand({
                 Bucket: this.s3_bucket_,
                 Delete: { Objects: keys },
-            },
+            }),
             (err, response) => {
                 if (err) {
                     console.log(err.code);
@@ -188,8 +204,20 @@ class FileApiDriverAmazonS3 {
             prefixPath = `${prefixPath}/`;
         }
 
+        // There is a bug/quirk of aws-sdk-js-v3 which causes the
+        // S3Client systemClockOffset to be wildly inaccurate. This
+        // effectively removes the offset and sets it to system time.
+        // See https://github.com/aws/aws-sdk-js-v3/issues/2208 for more.
+        // If the user's time is actually off, then this should correctly
+        // result in a RequestTimeTooSkewed error from s3ListObjects.
+        this.api().config.systemClockOffset = 0;
+
         let response = await this.s3ListObjects(prefixPath);
 
+        // In aws-sdk-js-v3, if there are no contents it no longer returns
+        // an empty array. This creates an empty array to pass onward.
+        if (response.Contents === undefined) response.Contents = [];
+
         let output = this.metadataToStats_(response.Contents, prefixPath);
 
         while (response.IsTruncated) {
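The surrounding `list()` logic — normalise a missing `Contents` to an empty array, then keep following the continuation token while `IsTruncated` is set — can be sketched independently of the SDK. `listPage` below is a stand-in for `s3ListObjects`, not the real API:

```javascript
// Accumulate all objects across paginated ListObjectsV2-style responses.
// `listPage(prefix, cursor)` must resolve to an object shaped like a
// ListObjectsV2 response: { Contents?, IsTruncated, NextContinuationToken? }.
async function listAll(listPage, prefix) {
  let items = [];
  let response = await listPage(prefix, undefined);
  for (;;) {
    // aws-sdk-js-v3 omits Contents entirely when a page is empty.
    const contents = response.Contents === undefined ? [] : response.Contents;
    items = items.concat(contents);
    if (!response.IsTruncated) break;
    response = await listPage(prefix, response.NextContinuationToken);
  }
  return items;
}
```

Without the `Contents === undefined` guard, an empty page (or an empty bucket) would make the concatenation step fail, which is exactly the v3 behaviour change the added code in the diff works around.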
@@ -212,44 +240,54 @@ class FileApiDriverAmazonS3 {

 		try {
 			let output = null;
-			const response = await this.s3GetObject(remotePath);
-			output = response.Body;
+			let response = null;
+
+			const s3Url = await this.s3GenerateGetURL(remotePath);

 			if (options.target === 'file') {
-				const filePath = options.path;
-				if (!filePath) throw new Error('get: target options.path is missing');
-				// TODO: check if this ever hits on RN
-				await shim.fsDriver().writeBinaryFile(filePath, output);
-				return {
-					ok: true,
-					path: filePath,
-					text: () => {
-						return response.statusMessage;
-					},
-					json: () => {
-						return { message: `${response.statusCode}: ${response.statusMessage}` };
-					},
-					status: response.statusCode,
-					headers: response.headers,
-				};
-			}
-
-			if (responseFormat === 'text') {
-				output = output.toString();
+				output = await shim.fetchBlob(s3Url, options);
+			} else if (responseFormat === 'text') {
+				response = await shim.fetch(s3Url, options);
+				output = await response.text();
+				// we need to make sure that errors get thrown as we are manually fetching above.
+				if (!response.ok) {
+					throw { name: response.statusText, output: output };
+				}
 			}

 			return output;
 		} catch (error) {
-			if (this.hasErrorCode_(error, 'NoSuchKey')) {
-				return null;
-			} else if (this.hasErrorCode_(error, 'AccessDenied')) {
-				throw new JoplinError('Do not have proper permissions to Bucket', 'rejectedByTarget');
-			} else {
-				throw error;
-			}
+			// This means that the error was on the Desktop client side and we need to handle that.
+			// On Mobile it won't match because FetchError is a node-fetch feature.
+			// https://github.com/node-fetch/node-fetch/blob/main/docs/ERROR-HANDLING.md
+			if (error.name === 'FetchError') { throw error.message; }
+
+			let parsedOutput = '';
+
+			// If error.output is not xml the last else case should
+			// actually let us see the output of error.
+			if (error.output) {
+				parsedOutput = parser.parse(error.output);
+				if (this.hasErrorCode_(parsedOutput.Error, 'AuthorizationHeaderMalformed')) {
+					throw error.output;
+				}
+
+				if (this.hasErrorCode_(parsedOutput.Error, 'NoSuchKey')) {
+					return null;
+				} else if (this.hasErrorCode_(parsedOutput.Error, 'AccessDenied')) {
+					throw new JoplinError('Do not have proper permissions to Bucket', 'rejectedByTarget');
+				}
+			} else {
+				if (error.output) {
+					throw error.output;
+				} else {
+					throw error;
+				}
+			}
 		}
 	}
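The new catch block routes errors three ways: node-fetch client-side failures rethrow their message, S3 XML error bodies are inspected for an error code, and everything else propagates unchanged. A minimal sketch of that routing, using a hypothetical `routeS3Error` helper and a regex stand-in for the fast-xml-parser call used in the real code:

```javascript
// Regex stand-in for parser.parse(error.output).Error.Code; good enough to
// illustrate the dispatch, not a general XML parser.
function parseXmlErrorCode(xml) {
	const m = /<Code>([^<]+)<\/Code>/.exec(xml || '');
	return m ? m[1] : null;
}

// Returns which action the catch block above would take for a given error.
function routeS3Error(error) {
	if (error.name === 'FetchError') return 'throwMessage'; // client-side fetch failure
	const code = parseXmlErrorCode(error.output);
	if (code === 'NoSuchKey') return 'returnNull'; // missing object is not fatal
	if (code === 'AccessDenied') return 'rejectedByTarget'; // bucket permission problem
	return 'rethrow'; // anything else propagates unchanged
}
```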

 	// Don't need to make directories, S3 is key based storage.
 	async mkdir() {
@@ -308,13 +346,14 @@ class FileApiDriverAmazonS3 {
 		}
 	}

 	async move(oldPath, newPath) {
 		const req = new Promise((resolve, reject) => {
-			this.api().copyObject({
+			this.api().send(new CopyObjectCommand({
 				Bucket: this.s3_bucket_,
 				CopySource: this.makePath_(oldPath),
 				Key: newPath,
-			},(err, response) => {
+			}),(err, response) => {
 				if (err) reject(err);
 				else resolve(response);
 			});
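The `move()` hunk keeps the callback form of `send` wrapped in a Promise. A self-contained sketch of that wrapper, where `api` is any object exposing a callback-style `send`; note that in aws-sdk-js-v3 `send(command)` without a callback already returns a Promise, so `await this.api().send(command)` would behave the same for the success path:

```javascript
// Promise wrapper equivalent to the pattern used in move() and clearRoot():
// resolve on success, reject on error, so callers can simply await it.
function sendAsPromise(api, command) {
	return new Promise((resolve, reject) => {
		api.send(command, (err, response) => {
			if (err) reject(err);
			else resolve(response);
		});
	});
}
```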
@@ -333,6 +372,7 @@ class FileApiDriverAmazonS3 {
 		}
 	}

 	format() {
 		throw new Error('Not supported');
 	}

@@ -340,10 +380,10 @@ class FileApiDriverAmazonS3 {

 	async clearRoot() {
 		const listRecursive = async (cursor) => {
 			return new Promise((resolve, reject) => {
-				return this.api().listObjectsV2({
+				return this.api().send(new ListObjectsV2Command({
 					Bucket: this.s3_bucket_,
 					ContinuationToken: cursor,
-				}, (err, response) => {
+				}), (err, response) => {
 					if (err) reject(err);
 					else resolve(response);
 				});

@@ -351,6 +391,9 @@ class FileApiDriverAmazonS3 {
 		};

 		let response = await listRecursive();
+		// In aws-sdk-js-v3 if there are no contents it no longer returns
+		// an empty array. This creates an empty array to pass onward.
+		if (response.Contents === undefined) response.Contents = [];
 		let keys = response.Contents.map((content) => content.Key);

 		while (response.IsTruncated) {
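`clearRoot()` pages through listings by following the continuation token while `IsTruncated` is set, applying the same `Contents` default on every page. A self-contained sketch of that loop, where `listPage` is a stand-in for the `ListObjectsV2Command` call:

```javascript
// Collect every object key across all pages of a truncated listing.
// `listPage(token)` must return a Promise for one ListObjectsV2-shaped page.
async function listAllKeys(listPage) {
	let keys = [];
	let response = await listPage(undefined);
	if (response.Contents === undefined) response.Contents = []; // v3 empty-page quirk
	keys = keys.concat(response.Contents.map(c => c.Key));
	while (response.IsTruncated) {
		response = await listPage(response.NextContinuationToken);
		if (response.Contents === undefined) response.Contents = [];
		keys = keys.concat(response.Contents.map(c => c.Key));
	}
	return keys;
}
```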
@@ -1,7 +1,6 @@
 class FsDriverDummy {
 	constructor() {}
 	appendFileSync() {}
-	writeBinaryFile() {}
 	readFile() {}
 }
@@ -26,16 +26,6 @@ export default class FsDriverNode extends FsDriverBase {
 		}
 	}

-	public async writeBinaryFile(path: string, content: any) {
-		try {
-			// let buffer = new Buffer(content);
-			const buffer = Buffer.from(content);
-			return await fs.writeFile(path, buffer);
-		} catch (error) {
-			throw this.fsErrorToJsError_(error, path);
-		}
-	}
-
 	public async writeFile(path: string, string: string, encoding: string = 'base64') {
 		try {
 			if (encoding === 'buffer') {
@@ -246,10 +246,6 @@ export default class Resource extends BaseItem {
 		return this.fsDriver().readFile(this.fullPath(resource), 'Buffer');
 	}

-	static setContent(resource: ResourceEntity, content: any) {
-		return this.fsDriver().writeBinaryFile(this.fullPath(resource), content);
-	}
-
 	static isResourceUrl(url: string) {
 		return url && url.length === 34 && url[0] === ':' && url[1] === '/';
 	}
@@ -503,7 +503,7 @@ class Setting extends BaseModel {
 				return value ? rtrimSlashes(value) : '';
 			},
 			public: true,
-			label: () => _('AWS S3 bucket'),
+			label: () => _('S3 bucket'),
 			description: () => emptyDirWarning,
 			storage: SettingStorage.File,
 		},
@@ -514,8 +514,25 @@ class Setting extends BaseModel {
 			show: (settings: any) => {
 				return settings['sync.target'] == SyncTargetRegistry.nameToId('amazon_s3');
 			},
+			filter: value => {
+				return value ? value.trim() : '';
+			},
 			public: true,
-			label: () => _('AWS S3 URL'),
+			label: () => _('S3 URL'),
+			storage: SettingStorage.File,
+		},
+		'sync.8.region': {
+			value: '',
+			type: SettingItemType.String,
+			section: 'sync',
+			show: (settings: any) => {
+				return settings['sync.target'] == SyncTargetRegistry.nameToId('amazon_s3');
+			},
+			filter: value => {
+				return value ? value.trim() : '';
+			},
+			public: true,
+			label: () => _('Region'),
 			storage: SettingStorage.File,
 		},
 		'sync.8.username': {
@@ -526,7 +543,7 @@ class Setting extends BaseModel {
 			show: (settings: any) => {
 				return settings['sync.target'] == SyncTargetRegistry.nameToId('amazon_s3');
 			},
 			public: true,
-			label: () => _('AWS key'),
+			label: () => _('Access Key'),
 			storage: SettingStorage.File,
 		},
 		'sync.8.password': {
@@ -537,10 +554,20 @@ class Setting extends BaseModel {
 			show: (settings: any) => {
 				return settings['sync.target'] == SyncTargetRegistry.nameToId('amazon_s3');
 			},
 			public: true,
-			label: () => _('AWS secret'),
+			label: () => _('Secret Key'),
 			secure: true,
 		},
+		'sync.8.forcePathStyle': {
+			value: false,
+			type: SettingItemType.Bool,
+			section: 'sync',
+			show: (settings: any) => {
+				return settings['sync.target'] == SyncTargetRegistry.nameToId('amazon_s3');
+			},
+			public: true,
+			label: () => _('Force Path Style'),
+			storage: SettingStorage.File,
+		},
 		'sync.9.path': {
 			value: '',
 			type: SettingItemType.String,
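Taken together, the `sync.8.*` settings in this diff map onto the options object an aws-sdk-js-v3 `S3Client` takes. A hypothetical sketch of that mapping; the setting keys used here (for example `'sync.8.url'`) are assumptions for illustration and may not match Joplin's exact key names:

```javascript
// Build an S3Client-shaped options object from Joplin-style sync settings.
// Field names on the right follow the aws-sdk-js-v3 S3Client constructor.
function s3ClientOptionsFromSettings(settings) {
	return {
		region: settings['sync.8.region'],
		endpoint: settings['sync.8.url'],
		forcePathStyle: !!settings['sync.8.forcePathStyle'], // needed by some non-AWS providers
		credentials: {
			accessKeyId: settings['sync.8.username'],
			secretAccessKey: settings['sync.8.password'],
		},
	};
}
```

Exposing `region`, `endpoint`, and `forcePathStyle` separately is what lets the same driver talk to AWS, Linode, UpCloud, and other S3-compatible providers.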
packages/lib/package-lock.json (generated, 3806 lines changed): file diff suppressed because it is too large.
@@ -35,7 +35,8 @@
 		"@joplin/turndown": "^4.0.60",
 		"@joplin/turndown-plugin-gfm": "^1.0.42",
 		"async-mutex": "^0.1.3",
-		"aws-sdk": "^2.588.0",
+		"@aws-sdk/client-s3": "^3.34.0",
+		"@aws-sdk/s3-request-presigner": "^3.34.0",
 		"base-64": "^0.1.0",
 		"base64-stream": "^1.0.0",
 		"builtin-modules": "^3.1.0",
@@ -59,7 +59,7 @@ import Synchronizer from '../Synchronizer';
 import SyncTargetNone from '../SyncTargetNone';
 import { setRSA } from '../services/e2ee/ppk';
 const md5 = require('md5');
-const S3 = require('aws-sdk/clients/s3');
+const { S3Client } = require('@aws-sdk/client-s3');
 const { Dirnames } = require('../services/synchronizer/utils/types');
 import RSA from '../services/e2ee/RSA.node';

@@ -602,10 +602,16 @@ async function initFileApi() {
 		const appDir = await api.appDirectory();
 		fileApi = new FileApi(appDir, new FileApiDriverOneDrive(api));
 	} else if (syncTargetId_ == SyncTargetRegistry.nameToId('amazon_s3')) {
+
+		// We make sure that S3 tests run in band because tests
+		// share the same directory which will cause locking errors.
+		mustRunInBand();
+
 		const amazonS3CredsPath = `${oldTestDir}/support/amazon-s3-auth.json`;
 		const amazonS3Creds = require(amazonS3CredsPath);
-		if (!amazonS3Creds || !amazonS3Creds.accessKeyId) throw new Error(`AWS auth JSON missing in ${amazonS3CredsPath} format should be: { "accessKeyId": "", "secretAccessKey": "", "bucket": "mybucket"}`);
-		const api = new S3({ accessKeyId: amazonS3Creds.accessKeyId, secretAccessKey: amazonS3Creds.secretAccessKey, s3UseArnRegion: true });
+		if (!amazonS3Creds || !amazonS3Creds.credentials) throw new Error(`AWS auth JSON missing in ${amazonS3CredsPath} format should be: { "credentials": { "accessKeyId": "", "secretAccessKey": "", } "bucket": "mybucket", region: "", forcePathStyle: ""}`);
+		const api = new S3Client({ region: amazonS3Creds.region, credentials: amazonS3Creds.credentials, s3UseArnRegion: true, forcePathStyle: amazonS3Creds.forcePathStyle, endpoint: amazonS3Creds.endpoint });
 		fileApi = new FileApi('', new FileApiDriverAmazonS3(api, amazonS3Creds.bucket));
 	} else if (syncTargetId_ == SyncTargetRegistry.nameToId('joplinServer')) {
 		mustRunInBand();
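The hunk above changes the expected shape of `amazon-s3-auth.json`: the key pair now nests under `credentials`, matching what the v3 `S3Client` constructor accepts directly. A small sketch of that shape check (the helper name is illustrative, not from the Joplin codebase):

```javascript
// Returns true when a creds object follows the upgraded v3-style format:
// { "credentials": { "accessKeyId": "...", "secretAccessKey": "..." }, "bucket": "..." }
function validateS3Creds(creds) {
	return !!(creds
		&& creds.credentials
		&& creds.credentials.accessKeyId
		&& creds.credentials.secretAccessKey
		&& creds.bucket);
}
```

The old flat format (`accessKeyId` at the top level) fails this check, which is why the test helper's error message now spells out the nested layout.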