mirror of https://github.com/laurent22/joplin.git synced 2025-09-02 20:46:21 +02:00

Compare commits


64 Commits

Author SHA1 Message Date
Laurent Cozic
683dea01ba init 2025-03-03 20:20:05 +00:00
Laurent Cozic
06359834d6 Desktop: Add a button to collapse or expand all folders (#11905) 2025-03-02 22:20:47 +00:00
Laurent Cozic
0cc0fec8c3 lock file 2025-03-01 13:04:37 +00:00
Joplin Bot
68ab5dcda5 Doc: Auto-update documentation
Auto-updated using release-website.sh
2025-03-01 02:08:23 +00:00
Joplin Bot
65544123e6 Doc: Auto-update documentation
Auto-updated using release-website.sh
2025-02-28 18:43:59 +00:00
Laurent Cozic
cfbded00e2 Merge branch 'release-3.2' into dev 2025-02-28 14:14:21 +00:00
Laurent Cozic
a898e17b4c Desktop release v3.2.13 2025-02-28 14:13:20 +00:00
pedr
d12e2d9a81 Desktop: Fixes #11759: Preserve attachment file extensions regardless of the mime type (#11852)
Co-authored-by: Laurent Cozic <laurent22@users.noreply.github.com>
2025-02-28 14:13:08 +00:00
Henry Heino
7025321d76 Mobile: Accessibility: Fix "new note" and "new to-do" buttons are focusable even while invisible (#11899) 2025-02-28 10:30:50 +00:00
Laurent Cozic
6c890121b9 Doc: Update release cycle 2025-02-27 18:47:02 +00:00
Josh Scheitler
9c4be00745 Desktop: Resolves #11663: Improve Rich Text Editor toolbar structure (#11869)
Co-authored-by: Laurent Cozic <laurent22@users.noreply.github.com>
2025-02-27 18:32:47 +00:00
Henry Heino
7f51712311 Android: Switch default library used for Whisper voice typing (#11881) 2025-02-27 18:31:13 +00:00
Anmol Garg
502c929c88 Chore: Update Docker Compose POSTGRES_HOST for proper service-to-service communication (#11886) 2025-02-27 18:26:21 +00:00
Kev Bittner
1abf9e9602 Doc: Update S3 synchronization documentation (#11890) 2025-02-27 18:24:45 +00:00
Laurent Cozic
8bdb6c5d72 Desktop: Add dialog to select a note and link to it (#11891) 2025-02-27 18:24:02 +00:00
Henry Heino
9cbd1b855c Desktop: Accessibility: Add more standard keyboard shortcuts for the notebook sidebar (#11892) 2025-02-27 18:23:28 +00:00
Laurent Cozic
ae8658554f Desktop: Fix issue with GotoAnything that would prevent it from highlighting search results in note titles (#11888) 2025-02-27 16:29:33 +00:00
renovate[bot]
bc385d59e9 Update dependency @types/node to v18.19.67 (#11880)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-02-27 16:13:24 +00:00
renovate[bot]
00ccd994e3 Update dependency @types/adm-zip to v0.5.7 (#11895)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-02-27 16:11:10 +00:00
Laurent Cozic
9251299289 Desktop: Attempt to capture more debug info when the app crashes 2025-02-26 10:51:18 +00:00
Laurent Cozic
fe67a44285 Plugins: Add support for joplin.shouldUseDarkColors API 2025-02-25 15:33:44 +00:00
Laurent Cozic
50a1b184fd Chore: Desktop: Ensure dev tools are open on startup in dev mode 2025-02-24 16:55:52 +00:00
Laurent Cozic
3caa718132 Chore: Improve error message when renderMarkup command cannot render some text 2025-02-24 16:54:57 +00:00
Laurent Cozic
d0e16c0878 Chore: Improve error message when data API cannot parse a note 2025-02-24 16:54:56 +00:00
renovate[bot]
4fcb250c27 Update dependency @bam.tech/react-native-image-resizer to v3.0.11 (#11879)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-02-24 00:01:48 +00:00
Laurent Cozic
86e59ad621 Server v3.3.3 2025-02-23 19:07:47 +00:00
Laurent Cozic
12baa9827d Server: Fixed patching user properties 2025-02-23 18:40:12 +00:00
pedr
95c50ada7c Mobile: Fixes #11858: Fix disabled encryption keys list showing enabled keys (#11861) 2025-02-23 14:08:55 +00:00
Henry Heino
55a57f7baf Mobile: Resolves #11846: Improve encryption config screen accessibility (#11874) 2025-02-23 14:08:09 +00:00
summoner001
69b24b4437 Update hu-HU.po (#11877) 2025-02-23 13:56:11 +00:00
Henry Heino
5143fae0f6 Mobile: Fixes #11864: Fix voice recorder crash (#11876) 2025-02-23 13:53:28 +00:00
Henry Heino
01a62acfdf Chore: Fix yarn tsc fails when run from packages/utils (#11873) 2025-02-23 13:52:24 +00:00
renovate[bot]
c663742689 Update dependency @types/node to v18.19.65 (#11875)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-02-23 11:42:29 +00:00
Laurent Cozic
0c405951ed Server v3.3.2 2025-02-19 23:16:30 +00:00
Laurent Cozic
4b411e600c Chore: Add iOS entitlements for push notifications 2025-02-19 21:58:43 +00:00
Laurent Cozic
bf58a52394 Chore: Server: Exclude db migration from test 2025-02-19 21:57:13 +00:00
Laurent Cozic
36d3736bff Server v3.3.1 2025-02-19 19:19:02 +00:00
Laurent Cozic
4df0b9f851 Server: Optimise delta sync queries by optimising the underlying SQL query 2025-02-19 19:01:09 +00:00
Joplin Bot
914b5e230d Doc: Auto-update documentation
Auto-updated using release-website.sh
2025-02-19 18:42:49 +00:00
pedr
9278fd7910 Desktop: Accessibility: Add error indication on Note properties (#11784) 2025-02-19 18:32:29 +00:00
Laurent Cozic
2180ad1d9b iOS 13.3.1 2025-02-19 16:04:53 +00:00
Laurent Cozic
d301cdf992 Android 3.3.1 2025-02-19 16:03:59 +00:00
Laurent Cozic
200d3c84e0 Desktop release v3.3.2 2025-02-19 15:37:37 +00:00
Laurent Cozic
6cadaa2137 lock file 2025-02-19 15:37:20 +00:00
Henry Heino
8221081514 Mobile: Support attaching audio recordings (#11836) 2025-02-19 15:23:20 +00:00
Laurent Cozic
dd06b1e680 Desktop: Improve usability of note list when ticking to-dos using the Space key (#11855) 2025-02-19 15:19:20 +00:00
pedr
70e0ae0c2c Desktop: Fixes #11844: Fix OneNote importer not being able to handle corrupted attachments (#11859) 2025-02-19 15:18:53 +00:00
Laurent Cozic
7aeec923e3 Doc: Update OCR documentation 2025-02-19 14:33:01 +00:00
pedr
70d64225c8 Desktop: Accessibility: Make click outside of dialog content be cancellable (#11765) 2025-02-18 18:25:49 +00:00
Henry Heino
ad0ecc2320 Desktop: Fixes #11847: Hide extra clear button in search field (#11851) 2025-02-18 18:17:40 +00:00
pedr
8a28edcda8 Desktop: Fixes #11759: Preserve attachment file extensions regardless of the mime type (#11852)
Co-authored-by: Laurent Cozic <laurent22@users.noreply.github.com>
2025-02-18 18:17:23 +00:00
Henry Heino
c8640aa7f8 Desktop: Fix Rich Text right-click and paste regressions (#11850) 2025-02-18 18:15:46 +00:00
Jozef Gaal
ddf75d6c52 New strings translated to Slovak (#11856) 2025-02-18 18:14:43 +00:00
Kamila Łopuszańska
0a42317e07 Doc: Resolves #11842: Updated faq.md (#11853) 2025-02-18 18:14:10 +00:00
Henry Heino
51ce1b06fe Chore: Docs: Document creating new editor commands (#11829) 2025-02-18 18:13:32 +00:00
Laurent Cozic
44c735afac Desktop: Improve behaviour of note list to-dos when ticking a checkbox using the keyboard 2025-02-17 22:38:43 +00:00
Laurent Cozic
c6154cfb4e Mobile: Add support for plugin editor views (#11831)
Co-authored-by: Henry Heino <46334387+personalizedrefrigerator@users.noreply.github.com>
2025-02-17 13:47:56 +00:00
Joplin Bot
d2aad1d6c7 Doc: Auto-update documentation
Auto-updated using release-website.sh
2025-02-17 12:59:15 +00:00
Dan Dascalescu
3e81cc8585 Docs: update note tagging instructions in 1_welcome_to_joplin.md (#11834) 2025-02-17 12:11:45 +00:00
Henry Heino
abc5c062c3 iOS: Fix "attach file" doesn't work the first time after startup (#11839) 2025-02-17 12:09:10 +00:00
Henry Heino
316ef9d960 Desktop,Mobile: Plugins: Simplify getting the ID of the note open in an editor (#11841) 2025-02-17 12:08:48 +00:00
Henry Heino
b870f8344c iOS: Fixes #11835: Allow attaching videos to notes (#11840) 2025-02-17 12:07:15 +00:00
Joplin Bot
6f6683d15d Doc: Auto-update documentation
Auto-updated using release-website.sh
2025-02-16 18:40:23 +00:00
Henry Heino
17e463b6bc Desktop: Resolves #11710: Plugins: Mark the LanguageTool Integration plugin as incompatible (#11715) 2025-02-06 18:12:30 +00:00
670 changed files with 178287 additions and 3186 deletions

View File

@@ -436,6 +436,7 @@ packages/app-desktop/gui/WindowCommandsAndDialogs/commands/gotoAnything.js
packages/app-desktop/gui/WindowCommandsAndDialogs/commands/hideModalMessage.js
packages/app-desktop/gui/WindowCommandsAndDialogs/commands/index.js
packages/app-desktop/gui/WindowCommandsAndDialogs/commands/leaveSharedFolder.js
packages/app-desktop/gui/WindowCommandsAndDialogs/commands/linkToNote.js
packages/app-desktop/gui/WindowCommandsAndDialogs/commands/moveToFolder.js
packages/app-desktop/gui/WindowCommandsAndDialogs/commands/newFolder.js
packages/app-desktop/gui/WindowCommandsAndDialogs/commands/newNote.js
@@ -456,7 +457,6 @@ packages/app-desktop/gui/WindowCommandsAndDialogs/commands/restoreNote.js
packages/app-desktop/gui/WindowCommandsAndDialogs/commands/revealResourceFile.js
packages/app-desktop/gui/WindowCommandsAndDialogs/commands/search.js
packages/app-desktop/gui/WindowCommandsAndDialogs/commands/setTags.js
packages/app-desktop/gui/WindowCommandsAndDialogs/commands/showEditorPlugin.js
packages/app-desktop/gui/WindowCommandsAndDialogs/commands/showModalMessage.js
packages/app-desktop/gui/WindowCommandsAndDialogs/commands/showNoteContentProperties.js
packages/app-desktop/gui/WindowCommandsAndDialogs/commands/showNoteProperties.js
@@ -465,7 +465,6 @@ packages/app-desktop/gui/WindowCommandsAndDialogs/commands/showShareFolderDialog
packages/app-desktop/gui/WindowCommandsAndDialogs/commands/showShareNoteDialog.js
packages/app-desktop/gui/WindowCommandsAndDialogs/commands/showSpellCheckerMenu.test.js
packages/app-desktop/gui/WindowCommandsAndDialogs/commands/showSpellCheckerMenu.js
packages/app-desktop/gui/WindowCommandsAndDialogs/commands/toggleEditorPlugin.js
packages/app-desktop/gui/WindowCommandsAndDialogs/commands/toggleEditors.js
packages/app-desktop/gui/WindowCommandsAndDialogs/commands/toggleLayoutMoveMode.js
packages/app-desktop/gui/WindowCommandsAndDialogs/commands/toggleMenuBar.js
@@ -789,12 +788,16 @@ packages/app-mobile/components/screens/ShareManager/index.test.js
packages/app-mobile/components/screens/ShareManager/index.js
packages/app-mobile/components/screens/UpgradeSyncTargetScreen.js
packages/app-mobile/components/screens/dropbox-login.js
packages/app-mobile/components/screens/encryption-config.test.js
packages/app-mobile/components/screens/encryption-config.js
packages/app-mobile/components/screens/status.js
packages/app-mobile/components/screens/tags.js
packages/app-mobile/components/side-menu-content.js
packages/app-mobile/components/testing/TestProviderStack.js
packages/app-mobile/components/voiceTyping/VoiceTypingDialog.js
packages/app-mobile/components/voiceTyping/AudioRecordingBanner.js
packages/app-mobile/components/voiceTyping/RecordingControls.js
packages/app-mobile/components/voiceTyping/SpeechToTextBanner.js
packages/app-mobile/components/voiceTyping/types.js
packages/app-mobile/gulpfile.js
packages/app-mobile/index.web.js
packages/app-mobile/root.js
@@ -808,12 +811,11 @@ packages/app-mobile/services/e2ee/crypto.js
packages/app-mobile/services/plugins/PlatformImplementation.js
packages/app-mobile/services/profiles/index.js
packages/app-mobile/services/voiceTyping/VoiceTyping.js
packages/app-mobile/services/voiceTyping/utils/splitWhisperText.test.js
packages/app-mobile/services/voiceTyping/utils/splitWhisperText.js
packages/app-mobile/services/voiceTyping/utils/unzip.android.js
packages/app-mobile/services/voiceTyping/utils/unzip.js
packages/app-mobile/services/voiceTyping/vosk.android.js
packages/app-mobile/services/voiceTyping/vosk.js
packages/app-mobile/services/voiceTyping/whisper.test.js
packages/app-mobile/services/voiceTyping/whisper.js
packages/app-mobile/setupQuickActions.js
packages/app-mobile/tools/buildInjectedJs/BundledFile.js
@@ -954,6 +956,7 @@ packages/editor/CodeMirror/utils/keyUpHandlerExtension.js
packages/editor/CodeMirror/utils/overwriteModeExtension.test.js
packages/editor/CodeMirror/utils/overwriteModeExtension.js
packages/editor/CodeMirror/utils/searchExtension.js
packages/editor/CodeMirror/utils/selectedNoteIdExtension.js
packages/editor/CodeMirror/utils/setupVim.js
packages/editor/SelectionFormatting.js
packages/editor/events.js
@@ -1021,7 +1024,10 @@ packages/lib/commands/openMasterPasswordDialog.js
packages/lib/commands/permanentlyDeleteNote.js
packages/lib/commands/renderMarkup.test.js
packages/lib/commands/renderMarkup.js
packages/lib/commands/showEditorPlugin.js
packages/lib/commands/synchronize.js
packages/lib/commands/toggleAllFolders.js
packages/lib/commands/toggleEditorPlugin.js
packages/lib/components/EncryptionConfigScreen/utils.test.js
packages/lib/components/EncryptionConfigScreen/utils.js
packages/lib/components/shared/NoteList/getEmptyFolderMessage.js
@@ -1115,6 +1121,9 @@ packages/lib/models/settings/builtInMetadata.js
packages/lib/models/settings/settingValidations.test.js
packages/lib/models/settings/settingValidations.js
packages/lib/models/settings/types.js
packages/lib/models/utils/areAllFoldersCollapsed.test.js
packages/lib/models/utils/areAllFoldersCollapsed.js
packages/lib/models/utils/getCanBeCollapsedFolderIds.js
packages/lib/models/utils/getCollator.js
packages/lib/models/utils/getConflictFolderId.js
packages/lib/models/utils/isItemId.js
@@ -1257,6 +1266,7 @@ packages/lib/services/ocr/utils/filterOcrText.js
packages/lib/services/ocr/utils/types.js
packages/lib/services/plugins/BasePlatformImplementation.js
packages/lib/services/plugins/BasePluginRunner.js
packages/lib/services/plugins/EditorPluginHandler.js
packages/lib/services/plugins/MenuController.js
packages/lib/services/plugins/MenuItemController.js
packages/lib/services/plugins/Plugin.js
@@ -1303,6 +1313,7 @@ packages/lib/services/plugins/utils/getPluginIssueReportUrl.js
packages/lib/services/plugins/utils/getPluginNamespacedSettingKey.js
packages/lib/services/plugins/utils/getPluginSettingKeyPrefix.js
packages/lib/services/plugins/utils/getPluginSettingValue.js
packages/lib/services/plugins/utils/getShownPluginEditorView.js
packages/lib/services/plugins/utils/isCompatible/getDefaultPlatforms.js
packages/lib/services/plugins/utils/isCompatible/index.test.js
packages/lib/services/plugins/utils/isCompatible/index.js

.gitignore
View File

@@ -411,6 +411,7 @@ packages/app-desktop/gui/WindowCommandsAndDialogs/commands/gotoAnything.js
packages/app-desktop/gui/WindowCommandsAndDialogs/commands/hideModalMessage.js
packages/app-desktop/gui/WindowCommandsAndDialogs/commands/index.js
packages/app-desktop/gui/WindowCommandsAndDialogs/commands/leaveSharedFolder.js
packages/app-desktop/gui/WindowCommandsAndDialogs/commands/linkToNote.js
packages/app-desktop/gui/WindowCommandsAndDialogs/commands/moveToFolder.js
packages/app-desktop/gui/WindowCommandsAndDialogs/commands/newFolder.js
packages/app-desktop/gui/WindowCommandsAndDialogs/commands/newNote.js
@@ -431,7 +432,6 @@ packages/app-desktop/gui/WindowCommandsAndDialogs/commands/restoreNote.js
packages/app-desktop/gui/WindowCommandsAndDialogs/commands/revealResourceFile.js
packages/app-desktop/gui/WindowCommandsAndDialogs/commands/search.js
packages/app-desktop/gui/WindowCommandsAndDialogs/commands/setTags.js
packages/app-desktop/gui/WindowCommandsAndDialogs/commands/showEditorPlugin.js
packages/app-desktop/gui/WindowCommandsAndDialogs/commands/showModalMessage.js
packages/app-desktop/gui/WindowCommandsAndDialogs/commands/showNoteContentProperties.js
packages/app-desktop/gui/WindowCommandsAndDialogs/commands/showNoteProperties.js
@@ -440,7 +440,6 @@ packages/app-desktop/gui/WindowCommandsAndDialogs/commands/showShareFolderDialog
packages/app-desktop/gui/WindowCommandsAndDialogs/commands/showShareNoteDialog.js
packages/app-desktop/gui/WindowCommandsAndDialogs/commands/showSpellCheckerMenu.test.js
packages/app-desktop/gui/WindowCommandsAndDialogs/commands/showSpellCheckerMenu.js
packages/app-desktop/gui/WindowCommandsAndDialogs/commands/toggleEditorPlugin.js
packages/app-desktop/gui/WindowCommandsAndDialogs/commands/toggleEditors.js
packages/app-desktop/gui/WindowCommandsAndDialogs/commands/toggleLayoutMoveMode.js
packages/app-desktop/gui/WindowCommandsAndDialogs/commands/toggleMenuBar.js
@@ -764,12 +763,16 @@ packages/app-mobile/components/screens/ShareManager/index.test.js
packages/app-mobile/components/screens/ShareManager/index.js
packages/app-mobile/components/screens/UpgradeSyncTargetScreen.js
packages/app-mobile/components/screens/dropbox-login.js
packages/app-mobile/components/screens/encryption-config.test.js
packages/app-mobile/components/screens/encryption-config.js
packages/app-mobile/components/screens/status.js
packages/app-mobile/components/screens/tags.js
packages/app-mobile/components/side-menu-content.js
packages/app-mobile/components/testing/TestProviderStack.js
packages/app-mobile/components/voiceTyping/VoiceTypingDialog.js
packages/app-mobile/components/voiceTyping/AudioRecordingBanner.js
packages/app-mobile/components/voiceTyping/RecordingControls.js
packages/app-mobile/components/voiceTyping/SpeechToTextBanner.js
packages/app-mobile/components/voiceTyping/types.js
packages/app-mobile/gulpfile.js
packages/app-mobile/index.web.js
packages/app-mobile/root.js
@@ -783,12 +786,11 @@ packages/app-mobile/services/e2ee/crypto.js
packages/app-mobile/services/plugins/PlatformImplementation.js
packages/app-mobile/services/profiles/index.js
packages/app-mobile/services/voiceTyping/VoiceTyping.js
packages/app-mobile/services/voiceTyping/utils/splitWhisperText.test.js
packages/app-mobile/services/voiceTyping/utils/splitWhisperText.js
packages/app-mobile/services/voiceTyping/utils/unzip.android.js
packages/app-mobile/services/voiceTyping/utils/unzip.js
packages/app-mobile/services/voiceTyping/vosk.android.js
packages/app-mobile/services/voiceTyping/vosk.js
packages/app-mobile/services/voiceTyping/whisper.test.js
packages/app-mobile/services/voiceTyping/whisper.js
packages/app-mobile/setupQuickActions.js
packages/app-mobile/tools/buildInjectedJs/BundledFile.js
@@ -929,6 +931,7 @@ packages/editor/CodeMirror/utils/keyUpHandlerExtension.js
packages/editor/CodeMirror/utils/overwriteModeExtension.test.js
packages/editor/CodeMirror/utils/overwriteModeExtension.js
packages/editor/CodeMirror/utils/searchExtension.js
packages/editor/CodeMirror/utils/selectedNoteIdExtension.js
packages/editor/CodeMirror/utils/setupVim.js
packages/editor/SelectionFormatting.js
packages/editor/events.js
@@ -996,7 +999,10 @@ packages/lib/commands/openMasterPasswordDialog.js
packages/lib/commands/permanentlyDeleteNote.js
packages/lib/commands/renderMarkup.test.js
packages/lib/commands/renderMarkup.js
packages/lib/commands/showEditorPlugin.js
packages/lib/commands/synchronize.js
packages/lib/commands/toggleAllFolders.js
packages/lib/commands/toggleEditorPlugin.js
packages/lib/components/EncryptionConfigScreen/utils.test.js
packages/lib/components/EncryptionConfigScreen/utils.js
packages/lib/components/shared/NoteList/getEmptyFolderMessage.js
@@ -1090,6 +1096,9 @@ packages/lib/models/settings/builtInMetadata.js
packages/lib/models/settings/settingValidations.test.js
packages/lib/models/settings/settingValidations.js
packages/lib/models/settings/types.js
packages/lib/models/utils/areAllFoldersCollapsed.test.js
packages/lib/models/utils/areAllFoldersCollapsed.js
packages/lib/models/utils/getCanBeCollapsedFolderIds.js
packages/lib/models/utils/getCollator.js
packages/lib/models/utils/getConflictFolderId.js
packages/lib/models/utils/isItemId.js
@@ -1232,6 +1241,7 @@ packages/lib/services/ocr/utils/filterOcrText.js
packages/lib/services/ocr/utils/types.js
packages/lib/services/plugins/BasePlatformImplementation.js
packages/lib/services/plugins/BasePluginRunner.js
packages/lib/services/plugins/EditorPluginHandler.js
packages/lib/services/plugins/MenuController.js
packages/lib/services/plugins/MenuItemController.js
packages/lib/services/plugins/Plugin.js
@@ -1278,6 +1288,7 @@ packages/lib/services/plugins/utils/getPluginIssueReportUrl.js
packages/lib/services/plugins/utils/getPluginNamespacedSettingKey.js
packages/lib/services/plugins/utils/getPluginSettingKeyPrefix.js
packages/lib/services/plugins/utils/getPluginSettingValue.js
packages/lib/services/plugins/utils/getShownPluginEditorView.js
packages/lib/services/plugins/utils/isCompatible/getDefaultPlatforms.js
packages/lib/services/plugins/utils/isCompatible/index.test.js
packages/lib/services/plugins/utils/isCompatible/index.js

View File

@@ -0,0 +1,50 @@
# This is a (hopefully temporary) fix for an accessibility issue in the FAB.Group
# component. See https://github.com/callstack/react-native-paper/pull/4498 for details.
diff --git a/lib/commonjs/components/FAB/FABGroup.js b/lib/commonjs/components/FAB/FABGroup.js
index 26933dd7ac6862c0dd95e52b8cd91c8bbd0b6efc..417c91a0257849eb597afb5e339e13b6d1d54486 100644
--- a/lib/commonjs/components/FAB/FABGroup.js
+++ b/lib/commonjs/components/FAB/FABGroup.js
@@ -209,8 +209,9 @@ const FABGroup = _ref => {
}],
pointerEvents: open ? 'box-none' : 'none',
accessibilityRole: "button",
- importantForAccessibility: "yes",
- accessible: true,
+ importantForAccessibility: open ? 'yes' : 'no-hide-descendants',
+ accessibilityElementsHidden: !open,
+ accessible: open,
accessibilityLabel: accessibilityLabel
}, it.label && /*#__PURE__*/React.createElement(_reactNative.View, null, /*#__PURE__*/React.createElement(_Card.default, {
mode: isV3 ? 'contained' : 'elevated',
diff --git a/lib/module/components/FAB/FABGroup.js b/lib/module/components/FAB/FABGroup.js
index ca5c02679539b17b048d4c82f570791dd8b57545..a06902b744b3bfb06b0644930eda0ba2ce2967ca 100644
--- a/lib/module/components/FAB/FABGroup.js
+++ b/lib/module/components/FAB/FABGroup.js
@@ -200,8 +200,9 @@ const FABGroup = _ref => {
}],
pointerEvents: open ? 'box-none' : 'none',
accessibilityRole: "button",
- importantForAccessibility: "yes",
- accessible: true,
+ importantForAccessibility: open ? 'yes' : 'no-hide-descendants',
+ accessibilityElementsHidden: !open,
+ accessible: open,
accessibilityLabel: accessibilityLabel
}, it.label && /*#__PURE__*/React.createElement(View, null, /*#__PURE__*/React.createElement(Card, {
mode: isV3 ? 'contained' : 'elevated',
diff --git a/src/components/FAB/FABGroup.tsx b/src/components/FAB/FABGroup.tsx
index af1e85c4cbabfdd05499f9befb9f851be5911835..d010393975b0b31852efba1b7ce9cb09da4feaec 100644
--- a/src/components/FAB/FABGroup.tsx
+++ b/src/components/FAB/FABGroup.tsx
@@ -383,8 +383,9 @@ const FABGroup = ({
]}
pointerEvents={open ? 'box-none' : 'none'}
accessibilityRole="button"
- importantForAccessibility="yes"
- accessible={true}
+ importantForAccessibility={open ? 'yes' : 'no-hide-descendants'}
+ accessibilityElementsHidden={!open}
+ accessible={open}
accessibilityLabel={accessibilityLabel}
>
{it.label && (
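
The fix above illustrates a general accessibility rule: a control that is visually hidden must also be hidden from assistive technology, otherwise screen-reader users can still focus it. A minimal React Native sketch of the same pattern, using the standard accessibility props seen in this patch (the CollapsibleAction component itself is hypothetical, not part of the patch):

import * as React from 'react';
import { Pressable, Text, View } from 'react-native';

interface Props {
	open: boolean;
	label: string;
	onPress: () => void;
}

// Hypothetical action button: when `open` is false it is invisible, so it must
// also be removed from the accessibility tree, which is the bug fixed above
// for FAB.Group actions.
const CollapsibleAction: React.FC<Props> = ({ open, label, onPress }) => (
	<View
		pointerEvents={open ? 'box-none' : 'none'}
		// Android: hide this subtree from accessibility services while closed.
		importantForAccessibility={open ? 'yes' : 'no-hide-descendants'}
		// iOS equivalent of the Android flag above.
		accessibilityElementsHidden={!open}
	>
		<Pressable accessible={open} accessibilityRole='button' accessibilityLabel={label} onPress={onPress}>
			<Text>{label}</Text>
		</Pressable>
	</View>
);

export default CollapsibleAction;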

Binary file not shown (image, 60 KiB).

View File

@@ -33,6 +33,7 @@
"/packages/app-desktop/build/",
"/packages/app-desktop/utils/checkForUpdatesUtilsTestData.ts",
"/packages/app-desktop/vendor/",
"/packages/app-mobile/android/vendor/",
"/packages/app-mobile/ios/Pods/",
"/packages/app-mobile/lib/rnInjectedJs",
"/packages/app-mobile/pluginAssets",

View File

@@ -16,7 +16,7 @@ services:
- POSTGRES_DATABASE=joplin
- POSTGRES_USER=joplin
- POSTGRES_PORT=5432
-   - POSTGRES_HOST=localhost
+   - POSTGRES_HOST=db
db:
image: postgres:16
ports:
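
For context on this one-line change: containers on a Docker Compose network reach each other by service name, so the application container must connect to the host `db`; `localhost` would resolve to the application container itself and the connection would be refused. A sketch of a client consuming these variables, assuming the common `pg` package for illustration (Joplin Server's actual database layer is different):

import { Pool } from 'pg';

// POSTGRES_HOST=db resolves to the `db` service on the Compose network.
// With POSTGRES_HOST=localhost this would loop back to the app container
// and fail, which is what the fix above addresses.
const pool = new Pool({
	host: process.env.POSTGRES_HOST ?? 'db',
	port: Number(process.env.POSTGRES_PORT ?? 5432),
	database: process.env.POSTGRES_DATABASE ?? 'joplin',
	user: process.env.POSTGRES_USER ?? 'joplin',
	password: process.env.POSTGRES_PASSWORD,
});

export const checkDbConnection = async (): Promise<boolean> => {
	const result = await pool.query('SELECT 1');
	return result.rowCount === 1;
};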

View File

@@ -115,6 +115,7 @@
"rn-fetch-blob@0.12.0": "patch:rn-fetch-blob@npm%3A0.12.0#./.yarn/patches/rn-fetch-blob-npm-0.12.0-cf02e3c544.patch",
"app-builder-lib@26.0.0-alpha.7": "patch:app-builder-lib@npm%3A26.0.0-alpha.7#./.yarn/patches/app-builder-lib-npm-26.0.0-alpha.7-e1b3dca119.patch",
"app-builder-lib@24.13.3": "patch:app-builder-lib@npm%3A24.13.3#./.yarn/patches/app-builder-lib-npm-24.13.3-86a66c0bf3.patch",
"react-native-sqlite-storage@6.0.1": "patch:react-native-sqlite-storage@npm%3A6.0.1#./.yarn/patches/react-native-sqlite-storage-npm-6.0.1-8369d747bd.patch"
"react-native-sqlite-storage@6.0.1": "patch:react-native-sqlite-storage@npm%3A6.0.1#./.yarn/patches/react-native-sqlite-storage-npm-6.0.1-8369d747bd.patch",
"react-native-paper@5.13.1": "patch:react-native-paper@npm%3A5.13.1#./.yarn/patches/react-native-paper-npm-5.13.1-f153e542e2.patch"
}
}

View File

@@ -72,7 +72,7 @@
"@joplin/tools": "~3.3",
"@types/fs-extra": "11.0.4",
"@types/jest": "29.5.12",
"@types/node": "18.19.64",
"@types/node": "18.19.67",
"@types/proper-lockfile": "^4.1.2",
"gulp": "4.0.2",
"jest": "29.7.0",

View File

@@ -262,15 +262,25 @@ export default class ElectronAppWrapper {
// the easiest is to use a timeout. Keep in mind that if you get a white window on Windows it might be due
// to this line though.
if (debugEarlyBugs) {
- setTimeout(() => {
+ // Since a recent release of Electron (v34?), calling openDevTools() here does nothing
+ // if a plugin devtool window is already opened. Maybe because they do a check on
+ // `isDevToolsOpened` which indeed returns `true` (but shouldn't since it's for a
+ // different window). However, if you open the dev tools twice from the Help menu it
+ // works. So instead we do that here and call openDevTool() three times.
+ let openDevToolCount = 0;
+ const openDevToolInterval = setInterval(() => {
try {
this.win_.webContents.openDevTools();
+ openDevToolCount++;
+ if (openDevToolCount >= 3) {
+ clearInterval(openDevToolInterval);
+ }
} catch (error) {
// This will throw an exception "Object has been destroyed" if the app is closed
// in less that the timeout interval. It can be ignored.
console.warn('Error opening dev tools', error);
}
- }, 3000);
+ }, 1000);
}
const addWindowEventHandlers = (webContents: WebContents) => {

View File

@@ -118,6 +118,8 @@ export class Bridge {
return event;
}
},
integrations: [Sentry.electronMinidumpIntegration()],
};
if (this.autoUploadCrashDumps_) options.dsn = 'https://cceec550871b1e8a10fee4c7a28d5cf2@o4506576757522432.ingest.sentry.io/4506594281783296';

View File

@@ -63,23 +63,70 @@ const Dialog: React.FC<Props> = props => {
</div>;
};
// We keep track of the mouse events to allow the action to be cancellable on the mouseup
// If dialogElement is the source of the mouse event it means
// that the user clicked in the dimmed background and not in the content of the dialog
const useClickedOutsideContent = (dialogElement: HTMLDialogElement|null) => {
const mouseDownOutsideContent = useRef(false);
mouseDownOutsideContent.current = false;
const [clickedOutsideContent, setClickedOutsideContent] = useState(false);
useEffect(() => {
if (!dialogElement) return () => {};
const mouseDownListener = (event: MouseEvent) => {
if (event.target === dialogElement) {
mouseDownOutsideContent.current = true;
} else {
mouseDownOutsideContent.current = false;
}
};
const mouseUpListener = (event: MouseEvent) => {
if (!mouseDownOutsideContent.current) return;
if (mouseDownOutsideContent.current && event.target === dialogElement) {
setClickedOutsideContent(true);
mouseDownOutsideContent.current = false;
} else {
setClickedOutsideContent(false);
mouseDownOutsideContent.current = false;
}
};
dialogElement.addEventListener('mousedown', mouseDownListener);
dialogElement.addEventListener('mouseup', mouseUpListener);
return () => {
dialogElement.removeEventListener('mousedown', mouseDownListener);
dialogElement.removeEventListener('mouseup', mouseUpListener);
};
}, [dialogElement]);
return [clickedOutsideContent, setClickedOutsideContent] as const;
};
const useDialogElement = (containerDocument: Document, onCancel: undefined|OnCancelListener) => {
const [dialogElement, setDialogElement] = useState<HTMLDialogElement|null>(null);
const onCancelRef = useRef(onCancel);
onCancelRef.current = onCancel;
const [clickedOutsideContent, setClickedOutsideContent] = useClickedOutsideContent(dialogElement);
useEffect(() => {
if (clickedOutsideContent) {
const onCancel = onCancelRef.current;
if (onCancel) {
onCancel();
} else {
setClickedOutsideContent(false);
}
}
}, [clickedOutsideContent, setClickedOutsideContent]);
useEffect(() => {
if (!containerDocument) return () => {};
const dialog = containerDocument.createElement('dialog');
- dialog.addEventListener('click', event => {
- const onCancel = onCancelRef.current;
- const isBackgroundClick = event.target === dialog;
- if (isBackgroundClick && onCancel) {
- onCancel();
- }
- });
dialog.classList.add('dialog-modal-layer');
dialog.addEventListener('cancel', event => {
const canCancel = !!onCancelRef.current;
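
The reason the hook above tracks both mousedown and mouseup: a press that starts inside the dialog content but is released over the dimmed backdrop (for example while selecting text) should not dismiss the dialog. A stripped-down sketch of the same guard in plain DOM terms (hypothetical helper, not Joplin code):

// Dismiss only when the press both started and ended on the backdrop itself.
const attachCancellableBackdropClose = (backdrop: HTMLElement, close: () => void) => {
	let downOnBackdrop = false;
	backdrop.addEventListener('mousedown', event => {
		downOnBackdrop = event.target === backdrop;
	});
	backdrop.addEventListener('mouseup', event => {
		if (downOnBackdrop && event.target === backdrop) close();
		downOnBackdrop = false;
	});
};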

View File

@@ -999,6 +999,7 @@ function useMenu(props: Props) {
rootMenus.go.submenu.push(menuItemDic.gotoAnything);
rootMenus.tools.submenu.push(menuItemDic.commandPalette);
rootMenus.tools.submenu.push(menuItemDic.linkToNote);
rootMenus.tools.submenu.push(menuItemDic.openMasterPasswordDialog);
for (const view of props.pluginMenuItems) {

View File

@@ -383,10 +383,13 @@ const CodeMirror = (props: NoteBodyEditorProps, ref: ForwardedRef<NoteBodyEditor
// Update the editor's value
useEffect(() => {
- if (editorRef.current?.updateBody(props.content)) {
+ // Include the noteId in the update props to give plugins access
+ // to the current note ID.
+ const updateProps = { noteId: props.noteId };
+ if (editorRef.current?.updateBody(props.content, updateProps)) {
editorRef.current?.clearHistory();
}
- }, [props.content]);
+ }, [props.content, props.noteId]);
const renderEditor = () => {
return (
@@ -394,6 +397,7 @@ const CodeMirror = (props: NoteBodyEditorProps, ref: ForwardedRef<NoteBodyEditor
<Editor
style={styles.editor}
initialText={props.content}
initialNoteId={props.noteId}
ref={editorRef}
settings={editorSettings}
pluginStates={props.plugins}

View File

@@ -617,6 +617,13 @@ const TinyMCE = (props: NoteBodyEditorProps, ref: any) => {
background: none;
background-color: ${theme.backgroundColor3} !important;
}
.tox .tox-tbtn,
.tox .tox-tbtn button,
.tox .tox-split-button,
.tox .tox-split-button button {
margin: 0 !important;
}
`));
return () => {
@@ -673,7 +680,8 @@ const TinyMCE = (props: NoteBodyEditorProps, ref: any) => {
// we create small groups of just one button towards the end.
const toolbar = [
- 'bold', 'italic', 'joplinHighlight', 'joplinStrikethrough', 'formattingExtras', '|',
+ 'bold', 'italic', 'joplinHighlight', 'joplinStrikethrough', '|',
+ 'joplinInsert', 'joplinSup', 'joplinSub', 'forecolor', '|',
'link', 'joplinInlineCode', 'joplinCodeBlock', 'joplinAttach', '|',
'bullist', 'numlist', 'joplinChecklist', '|',
'h1', 'h2', 'h3', '|',
@@ -1344,7 +1352,9 @@ const TinyMCE = (props: NoteBodyEditorProps, ref: any) => {
editor.on(TinyMceEditorEvents.KeyUp, onKeyUp);
editor.on(TinyMceEditorEvents.KeyDown, onKeyDown);
editor.on(TinyMceEditorEvents.KeyPress, onKeypress);
- editor.on(TinyMceEditorEvents.Paste, onPaste);
+ // Passing `true` adds the listener to the front of the listener list.
+ // This allows overriding TinyMCE's built-in paste handler with .preventDefault.
+ editor.on(TinyMceEditorEvents.Paste, onPaste, true);
editor.on(TinyMceEditorEvents.PasteAsText, onPasteAsText);
editor.on(TinyMceEditorEvents.Copy, onCopy);
// `compositionend` means that a user has finished entering a Chinese

View File

@@ -60,14 +60,12 @@ export default function(editor: any) {
});
}
- const items: string[] = definitions.filter(d => !!d.grouped).map(d => d.name);
- // Additional built-in buttons to show in the formatting sub-menu:
- items.push('forecolor');
- editor.ui.registry.addGroupToolbarButton('formattingExtras', {
- icon: 'image-options',
- tooltip: _('Formatting'),
- items: items.join(' '),
- });
+ // Old code to format a group of buttons into a dropdown
+ // const items: string[] = definitions.filter(d => !!d.grouped).map(d => d.name);
+ // items.push('forecolor');
+ // editor.ui.registry.addGroupToolbarButton('formattingExtras', {
+ // icon: 'image-options',
+ // tooltip: _('Formatting'),
+ // items: items.join(' '),
+ // });
}

View File

@@ -8,7 +8,7 @@ import { menuItems } from '../../../utils/contextMenu';
import MenuUtils from '@joplin/lib/services/commands/MenuUtils';
import CommandService from '@joplin/lib/services/CommandService';
import Setting from '@joplin/lib/models/Setting';
- import type { Event as ElectronEvent } from 'electron';
+ import type { Event as ElectronEvent, MenuItemConstructorOptions } from 'electron';
import Resource from '@joplin/lib/models/Resource';
import { TinyMceEditorEvents } from './types';
@@ -17,6 +17,7 @@ import { Editor } from 'tinymce';
import { EditDialogControl } from './useEditDialog';
import { Dispatch } from 'redux';
import { _ } from '@joplin/lib/locale';
import type { MenuItem as MenuItemType } from 'electron';
const Menu = bridge().Menu;
const MenuItem = bridge().MenuItem;
@@ -137,13 +138,20 @@ export default function(editor: Editor, plugins: PluginStates, dispatch: Dispatc
event.preventDefault();
const menu = new Menu();
- const menuItems = [];
+ const menuItems: MenuItemType[] = [];
+ const toMenuItems = (specs: MenuItemConstructorOptions[]) => {
+ return specs.map(spec => new MenuItem(spec));
+ };
menuItems.push(...makeEditableMenuItems(element));
menuItems.push(...makeMainMenuItems(element));
const spellCheckerMenuItems = SpellCheckerService.instance().contextMenuItems(params.misspelledWord, params.dictionarySuggestions);
- menuItems.push(...spellCheckerMenuItems);
- menuItems.push(...menuUtils.pluginContextMenuItems(plugins, MenuItemLocation.EditorContextMenu));
+ menuItems.push(
+ ...toMenuItems(spellCheckerMenuItems),
+ );
+ menuItems.push(
+ ...toMenuItems(menuUtils.pluginContextMenuItems(plugins, MenuItemLocation.EditorContextMenu)),
+ );
for (const item of menuItems) {
menu.append(item);

View File

@@ -52,10 +52,8 @@ import Logger from '@joplin/utils/Logger';
import usePluginEditorView from './utils/usePluginEditorView';
import { stateUtils } from '@joplin/lib/reducer';
import { WindowIdContext } from '../NewWindowOrIFrame';
- import { EditorActivationCheckFilterObject } from '@joplin/lib/services/plugins/api/types';
import PluginService from '@joplin/lib/services/plugins/PluginService';
- import WebviewController from '@joplin/lib/services/plugins/WebviewController';
- import AsyncActionQueue, { IntervalType } from '@joplin/lib/AsyncActionQueue';
+ import EditorPluginHandler from '@joplin/lib/services/plugins/EditorPluginHandler';
import useResourceUnwatcher from './utils/useResourceUnwatcher';
import StatusBar from './StatusBar';
@@ -72,15 +70,6 @@ const toolbarButtonUtils = new ToolbarButtonUtils(CommandService.instance());
const onDragOver: React.DragEventHandler = event => event.preventDefault();
let editorIdCounter = 0;
- const makeNoteUpdateAction = (shownEditorViewIds: string[]) => {
- return async () => {
- for (const viewId of shownEditorViewIds) {
- const controller = PluginService.instance().viewControllerByViewId(viewId) as WebviewController;
- if (controller) controller.emitUpdate();
- }
- };
- };
function NoteEditorContent(props: NoteEditorProps) {
const [showRevisions, setShowRevisions] = useState(false);
const [titleHasBeenManuallyChanged, setTitleHasBeenManuallyChanged] = useState(false);
@@ -90,7 +79,10 @@ function NoteEditorContent(props: NoteEditorProps) {
const titleInputRef = useRef<HTMLInputElement>();
const isMountedRef = useRef(true);
const noteSearchBarRef = useRef(null);
- const viewUpdateAsyncQueue_ = useRef<AsyncActionQueue>(new AsyncActionQueue(100, IntervalType.Fixed));
+ const editorPluginHandler = useMemo(() => {
+ return new EditorPluginHandler(PluginService.instance());
+ }, []);
const shownEditorViewIds = props['plugins.shownEditorViewIds'];
@@ -114,25 +106,15 @@ function NoteEditorContent(props: NoteEditorProps) {
const effectiveNoteId = useEffectiveNoteId(props);
- useAsyncEffect(async (event) => {
+ useAsyncEffect(async (_event) => {
if (!props.startupPluginsLoaded) return;
- let filterObject: EditorActivationCheckFilterObject = {
- activatedEditors: [],
- };
- filterObject = await eventManager.filterEmit('editorActivationCheck', filterObject);
- if (event.cancelled) return;
- for (const editor of filterObject.activatedEditors) {
- const controller = PluginService.instance().pluginById(editor.pluginId).viewController(editor.viewId) as WebviewController;
- controller.setActive(editor.isActive);
- }
- }, [effectiveNoteId, props.startupPluginsLoaded]);
+ await editorPluginHandler.emitActivationCheck();
+ }, [effectiveNoteId, editorPluginHandler, props.startupPluginsLoaded]);
useEffect(() => {
if (!props.startupPluginsLoaded) return;
- viewUpdateAsyncQueue_.current.push(makeNoteUpdateAction(shownEditorViewIds));
- }, [effectiveNoteId, shownEditorViewIds, props.startupPluginsLoaded]);
+ editorPluginHandler.emitUpdate(shownEditorViewIds);
+ }, [effectiveNoteId, editorPluginHandler, shownEditorViewIds, props.startupPluginsLoaded]);
const { editorPlugin, editorView } = usePluginEditorView(props.plugins, shownEditorViewIds);
const builtInEditorVisible = !editorPlugin;
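
The refactor above replaces the inlined `editorActivationCheck` filter and the AsyncActionQueue with a dedicated EditorPluginHandler. Going only by the calls visible in this diff (the class internals are not shown in this compare view), the resulting usage reduces to a sketch like this:

import EditorPluginHandler from '@joplin/lib/services/plugins/EditorPluginHandler';
import PluginService from '@joplin/lib/services/plugins/PluginService';

// One handler per editor; it owns the plugin-view bookkeeping that
// NoteEditor previously did inline.
const handler = new EditorPluginHandler(PluginService.instance());

const syncEditorPlugins = async (shownEditorViewIds: string[]) => {
	// When the active note changes, ask plugins whether one of their
	// custom editor views should replace the built-in editor.
	await handler.emitActivationCheck();
	// Then notify the currently shown editor views by ID.
	handler.emitUpdate(shownEditorViewIds);
};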

View File

@@ -38,6 +38,7 @@ const incompatiblePluginIds = [
'ylc395.noteLinkSystem',
'outline',
'joplin.plugin.cmoptions',
'com.asdibiase.joplin-languagetool',
// cSpell:enable
];

View File

@@ -1,15 +1,11 @@
import { useMemo } from 'react';
import { PluginStates } from '@joplin/lib/services/plugins/reducer';
- import getActivePluginEditorView from '@joplin/lib/services/plugins/utils/getActivePluginEditorView';
+ import getShownPluginEditorView from '@joplin/lib/services/plugins/utils/getShownPluginEditorView';
// If a plugin editor should be shown for the current note, this function will return the plugin and
// associated view.
export default (plugins: PluginStates, shownEditorViewIds: string[]) => {
return useMemo(() => {
- const { editorPlugin, editorView } = getActivePluginEditorView(plugins);
- if (editorView) {
- if (!shownEditorViewIds.includes(editorView.id)) return { editorPlugin: null, editorView: null };
- }
- return { editorPlugin, editorView };
+ return getShownPluginEditorView(plugins, shownEditorViewIds);
}, [plugins, shownEditorViewIds]);
};

View File

@@ -118,6 +118,8 @@ const NoteList = (props: Props) => {
props.notes.length,
listRenderer.flow,
itemsPerLine,
props.showCompletedTodos,
props.uncompletedTodosOnTop,
);
useItemCss(listRenderer.itemCss);

View File

@@ -23,6 +23,8 @@ const useOnKeyDown = (
noteCount: number,
flow: ItemFlow,
itemsPerLine: number,
showCompletedTodos: boolean,
uncompletedTodosOnTop: boolean,
) => {
const scrollNoteIndex = useCallback((visibleItemCount: number, key: KeyboardEventKey, ctrlKey: boolean, metaKey: boolean, noteIndex: number) => {
if (flow === ItemFlow.TopToBottom) {
@@ -142,13 +144,32 @@ const useOnKeyDown = (
const todos = selectedNotes.filter(n => !!n.is_todo);
if (!todos.length) return;
const firstNoteIndex = notes.findIndex(n => n.id === todos[0].id);
let nextSelectedNoteIndex = firstNoteIndex + 1;
if (nextSelectedNoteIndex > notes.length - 1) nextSelectedNoteIndex = notes.length - 1;
const nextSelectedNote = nextSelectedNoteIndex >= 0 ? notes[nextSelectedNoteIndex] : todos[0];
for (let i = 0; i < todos.length; i++) {
const toggledTodo = Note.toggleTodoCompleted(todos[i]);
await Note.save(toggledTodo);
}
// When the settings `uncompletedTodosOnTop` or `showCompletedTodos` are enabled, the
// note that got set as completed or uncompleted is going to disappear from view,
// possibly hidden or moved to the top or bottom of the note list. It is assumed that
// the user does not want to keep that note selected since the to-do is indeed
// "completed". And by keeping that selection, the cursor would jump, making you lose
// context if you have multiple to-dos that need to be ticked. For that reason we set
// the selection to the next note in the list, which also ensures that the scroll
// position doesn't change. This is the same behaviour as when deleting a note.
const maintainScrollPosition = !showCompletedTodos || uncompletedTodosOnTop;
if (maintainScrollPosition) {
dispatch({ type: 'NOTE_SELECT', noteId: nextSelectedNote.id });
}
dispatch({ type: 'NOTE_SORT' });
- focusNote(todos[0].id);
+ if (!maintainScrollPosition) focusNote(todos[0].id);
const wasCompleted = !!todos[0].todo_completed;
announceForAccessibility(!wasCompleted ? _('Complete') : _('Incomplete'));
}
@@ -171,7 +192,7 @@ const useOnKeyDown = (
type: 'NOTE_SELECT_ALL',
});
}
- }, [moveNote, focusNote, visibleItemCount, scrollNoteIndex, makeItemIndexVisible, notes, selectedNoteIds, activeNoteId, dispatch, flow, itemsPerLine]);
+ }, [moveNote, focusNote, visibleItemCount, scrollNoteIndex, makeItemIndexVisible, notes, selectedNoteIds, activeNoteId, dispatch, flow, itemsPerLine, showCompletedTodos, uncompletedTodosOnTop]);
return onKeyDown;

View File

@@ -1,7 +1,6 @@
import * as React from 'react';
import shim from '@joplin/lib/shim';
import { Size } from '@joplin/utils/types';
import { useCallback, useState, useRef, useEffect, useMemo } from 'react';
import { useCallback, useState, useRef, useMemo } from 'react';
const useScroll = (itemsPerLine: number, noteCount: number, itemSize: Size, listSize: Size, listRef: React.MutableRefObject<HTMLDivElement>) => {
const [scrollTop, setScrollTop] = useState(0);
@@ -29,36 +28,36 @@ const useScroll = (itemsPerLine: number, noteCount: number, itemSize: Size, list
// but still fails now and then. Setting it after 500ms would probably work
// reliably but it's too slow so it makes sense to do it in an interval.
- const setScrollTopLikeYouMeanItTimer = useRef(null);
- const setScrollTopLikeYouMeanItStartTime = useRef(0);
- const setScrollTopLikeYouMeanIt = useCallback((newScrollTop: number) => {
- if (setScrollTopLikeYouMeanItTimer.current) shim.clearInterval(setScrollTopLikeYouMeanItTimer.current);
- setScrollTopLikeYouMeanItStartTime.current = Date.now();
+ // const setScrollTopLikeYouMeanItTimer = useRef(null);
+ // const setScrollTopLikeYouMeanItStartTime = useRef(0);
+ // const setScrollTopLikeYouMeanIt = useCallback((newScrollTop: number) => {
+ // if (setScrollTopLikeYouMeanItTimer.current) shim.clearInterval(setScrollTopLikeYouMeanItTimer.current);
+ // setScrollTopLikeYouMeanItStartTime.current = Date.now();
- listRef.current.scrollTop = newScrollTop;
- lastScrollSetTime.current = Date.now();
+ // listRef.current.scrollTop = newScrollTop;
+ // lastScrollSetTime.current = Date.now();
- setScrollTopLikeYouMeanItTimer.current = shim.setInterval(() => {
- if (!listRef.current) {
- shim.clearInterval(setScrollTopLikeYouMeanItTimer.current);
- setScrollTopLikeYouMeanItTimer.current = null;
- return;
- }
+ // setScrollTopLikeYouMeanItTimer.current = shim.setInterval(() => {
+ // if (!listRef.current) {
+ // shim.clearInterval(setScrollTopLikeYouMeanItTimer.current);
+ // setScrollTopLikeYouMeanItTimer.current = null;
+ // return;
+ // }
- listRef.current.scrollTop = newScrollTop;
- lastScrollSetTime.current = Date.now();
+ // listRef.current.scrollTop = newScrollTop;
+ // lastScrollSetTime.current = Date.now();
- if (Date.now() - setScrollTopLikeYouMeanItStartTime.current > 1000) {
- shim.clearInterval(setScrollTopLikeYouMeanItTimer.current);
- setScrollTopLikeYouMeanItTimer.current = null;
- }
- }, 10);
- }, [listRef]);
+ // if (Date.now() - setScrollTopLikeYouMeanItStartTime.current > 1000) {
+ // shim.clearInterval(setScrollTopLikeYouMeanItTimer.current);
+ // setScrollTopLikeYouMeanItTimer.current = null;
+ // }
+ // }, 10);
+ // }, [listRef]);
- useEffect(() => {
- if (setScrollTopLikeYouMeanItTimer.current) shim.clearInterval(setScrollTopLikeYouMeanItTimer.current);
- setScrollTopLikeYouMeanItTimer.current = null;
- }, []);
+ // useEffect(() => {
+ // if (setScrollTopLikeYouMeanItTimer.current) shim.clearInterval(setScrollTopLikeYouMeanItTimer.current);
+ // setScrollTopLikeYouMeanItTimer.current = null;
+ // }, []);
const makeItemIndexVisible = useCallback((itemIndex: number) => {
const lineTopFloat = scrollTop / itemSize.height;
@@ -83,13 +82,17 @@ const useScroll = (itemsPerLine: number, noteCount: number, itemSize: Size, list
if (newScrollTop > maxScrollTop) newScrollTop = maxScrollTop;
setScrollTop(newScrollTop);
- setScrollTopLikeYouMeanIt(newScrollTop);
- }, [itemsPerLine, noteCount, itemSize.height, scrollTop, listSize.height, maxScrollTop, setScrollTopLikeYouMeanIt]);
+ listRef.current.scrollTop = newScrollTop;
+ lastScrollSetTime.current = Date.now();
+ // setScrollTopLikeYouMeanIt(newScrollTop);
+ }, [itemsPerLine, noteCount, itemSize.height, scrollTop, listSize.height, maxScrollTop, listRef]); // , setScrollTopLikeYouMeanIt]);
// eslint-disable-next-line @typescript-eslint/no-explicit-any -- Old code before rule was applied
const onScroll = useCallback((event: any) => {
+ // console.info('ON SCROLL', event.target.scrollTop, 'Ignore:', Date.now() - lastScrollSetTime.current < 500);
// Ignore the scroll event if it has just been set programmatically.
- if (Date.now() - lastScrollSetTime.current < 500) return;
+ if (Date.now() - lastScrollSetTime.current < 10) return;
setScrollTop(event.target.scrollTop);
}, []);

View File

@@ -40,6 +40,12 @@ const useVisibleRange = (itemsPerLine: number, scrollTop: number, listSize: Size
return Math.ceil(noteCount / itemsPerLine);
}, [noteCount, itemsPerLine]);
// Note: Leave this here to test the note list scroll behaviour. Also add "item.index" to the
// rows in defaultListRenderer to check whether the value here matches what's being displayed.
// `useScroll` can also be changed to display the effective scroll value.
// console.info('=======================================');
// console.info('scrollTop', scrollTop);
// console.info('itemsPerLine', itemsPerLine);
// console.info('listSize.height', listSize.height);
// console.info('itemSize.height', itemSize.height);
@@ -52,6 +58,7 @@ const useVisibleRange = (itemsPerLine: number, scrollTop: number, listSize: Size
// console.info('endLineIndex', endLineIndex);
// console.info('totalLineCount', totalLineCount);
// console.info('visibleItemCount', visibleItemCount);
// console.info('=======================================');
return [startNoteIndex, endNoteIndex, startLineIndex, endLineIndex, totalLineCount, visibleItemCount];
};

View File

@@ -37,6 +37,9 @@ interface State {
formNote: FormNote;
// eslint-disable-next-line @typescript-eslint/no-explicit-any -- Old code before rule was applied
editedValue: any;
isValid: {
location: boolean;
};
}
const uniqueId = (key: string) => `note-properties-dialog-${key}`;
@@ -60,6 +63,7 @@ class NotePropertiesDialog extends React.Component<Props, State> {
this.revisionsLink_click = this.revisionsLink_click.bind(this);
this.buttonRow_click = this.buttonRow_click.bind(this);
this.locationOnChange = this.locationOnChange.bind(this);
this.okButton = React.createRef();
this.inputRef = React.createRef();
@@ -67,6 +71,9 @@ class NotePropertiesDialog extends React.Component<Props, State> {
formNote: null,
editedKey: null,
editedValue: null,
isValid: {
location: true,
},
};
this.keyToLabel_ = {
@@ -195,6 +202,17 @@ class NotePropertiesDialog extends React.Component<Props, State> {
borderColor: theme.dividerColor,
};
this.styles_.invalidInput = {
border: '1px solid',
borderColor: theme.colorWarn,
};
this.styles_.invalidMessage = {
marginTop: '0.3em',
color: theme.color,
fontSize: theme.fontSize * 0.9,
};
return this.styles_;
}
@@ -276,6 +294,24 @@ class NotePropertiesDialog extends React.Component<Props, State> {
});
}
public async locationOnChange(event: React.ChangeEvent<HTMLInputElement>) {
this.setState({ editedValue: event.target.value });
if (!event.target.value) {
this.setState({ isValid: { ...this.state.isValid, location: true } });
return;
}
if (event.target.value.includes(',')) {
const [lat, log] = event.target.value.split(',');
if (parseFloat(lat) < 90 && parseFloat(lat) > -90 && parseFloat(log) < 180 && parseFloat(log) > -180) {
this.setState({ isValid: { ...this.state.isValid, location: true } });
return;
}
}
this.setState({ isValid: { ...this.state.isValid, location: false } });
}
// eslint-disable-next-line @typescript-eslint/no-explicit-any -- Old code before rule was applied
public createNoteField(key: keyof FormNote, value: any) {
const styles = this.styles(this.props.themeId);
@@ -288,8 +324,7 @@ class NotePropertiesDialog extends React.Component<Props, State> {
let editCompIcon = null;
let editComDescription = null;
// eslint-disable-next-line @typescript-eslint/no-explicit-any -- Old code before rule was applied
- const onKeyDown = (event: any) => {
+ const onKeyDown = (event: React.KeyboardEvent) => {
if (event.keyCode === 13) {
void this.saveProperty();
} else if (event.keyCode === 27) {
@@ -315,6 +350,30 @@ class NotePropertiesDialog extends React.Component<Props, State> {
};
editCompIcon = 'fa-save';
editComDescription = _('Save changes');
} else if (this.state.editedKey === 'location') {
controlComp = (
<React.Fragment>
<input
defaultValue={value}
type="text"
ref={this.inputRef}
onChange={this.locationOnChange}
onKeyDown={event => onKeyDown(event)}
style={this.state.isValid.location ? styles.input : { ...styles.input, ...styles.invalidInput }}
id={uniqueId(key)}
name={uniqueId(key)}
aria-invalid={!this.state.isValid.location}
/>
{
this.state.isValid.location ? null
: <React.Fragment>
<div aria-live='polite' style={styles.invalidMessage}>
{_('Invalid format. E.g.: 48.8581372, 2.2926735')}
</div>
</React.Fragment>
}
</React.Fragment>
);
} else {
controlComp = (
<input

View File

@@ -1,6 +1,7 @@
import * as React from 'react';
import { AppState } from '../../app.reducer';
import { FolderEntity, TagsWithNoteCountEntity } from '@joplin/lib/services/database/types';
import areAllFoldersCollapsed from '@joplin/lib/models/utils/areAllFoldersCollapsed';
import { PluginStates } from '@joplin/lib/services/plugins/reducer';
import { connect } from 'react-redux';
import { Dispatch } from 'redux';
@@ -41,6 +42,10 @@ const FolderAndTagList: React.FC<Props> = props => {
listItems: listItems,
});
const allFoldersCollapsed = useMemo(() => {
return areAllFoldersCollapsed(props.folders, props.collapsedFolderIds);
}, [props.collapsedFolderIds, props.folders]);
const listContainerRef = useRef<HTMLDivElement|null>(null);
const onRenderItem = useOnRenderItem({
...props,
@@ -67,7 +72,7 @@ const FolderAndTagList: React.FC<Props> = props => {
const listHeight = useElementHeight(itemListContainer);
const listStyle = useMemo(() => ({ height: listHeight }), [listHeight]);
- const onRenderContentWrapper = useOnRenderListWrapper({ selectedIndex, onKeyDown: onKeyEventHandler });
+ const onRenderContentWrapper = useOnRenderListWrapper({ allFoldersCollapsed, selectedIndex, onKeyDown: onKeyEventHandler });
return (
<div

View File

@@ -417,6 +417,7 @@ const useOnRenderItem = (props: Props) => {
key={item.key}
anchorRef={anchorRef}
selected={selected}
item={item}
index={index}
itemCount={itemCount}
/>;
@@ -425,7 +426,7 @@ const useOnRenderItem = (props: Props) => {
<ListItemWrapper
key={item.key}
containerRef={anchorRef}
- depth={0}
+ depth={1}
selected={selected}
itemIndex={index}
itemCount={itemCount}

View File

@@ -6,16 +6,39 @@ import CommandService from '@joplin/lib/services/CommandService';
interface Props {
selectedIndex: number;
onKeyDown: React.KeyboardEventHandler;
allFoldersCollapsed: boolean;
}
const onAddFolderButtonClick = () => {
void CommandService.instance().execute('newFolder');
};
const onToggleAllFolders = (allFoldersCollapsed: boolean) => {
void CommandService.instance().execute('toggleAllFolders', !allFoldersCollapsed);
};
interface CollapseExpandAllButtonProps {
allFoldersCollapsed: boolean;
}
const CollapseExpandAllButton = (props: CollapseExpandAllButtonProps) => {
// To allow it to be accessed by accessibility tools, the new folder button
// is not included in the portion of the list with role='tree'.
const icon = props.allFoldersCollapsed ? 'far fa-caret-square-right' : 'far fa-caret-square-down';
return <button onClick={() => onToggleAllFolders(props.allFoldersCollapsed)} className='sidebar-header-button -collapseall'>
<i
aria-label={_('Collapse / Expand all notebooks')}
role='img'
className={icon}
/>
</button>;
};
const NewFolderButton = () => {
// To allow it to be accessed by accessibility tools, the new folder button
// is not included in the portion of the list with role='tree'.
- return <button onClick={onAddFolderButtonClick} className='new-folder-button'>
+ return <button onClick={onAddFolderButtonClick} className='sidebar-header-button -newfolder'>
<i
aria-label={_('New notebook')}
role='img'
@@ -24,22 +47,23 @@ const NewFolderButton = () => {
</button>;
};
- const useOnRenderListWrapper = ({ selectedIndex, onKeyDown }: Props) => {
+ const useOnRenderListWrapper = (props: Props) => {
return useCallback((listItems: React.ReactNode[]) => {
- const listHasValidSelection = selectedIndex >= 0;
+ const listHasValidSelection = props.selectedIndex >= 0;
const allowContainerFocus = !listHasValidSelection;
return <>
<CollapseExpandAllButton allFoldersCollapsed={props.allFoldersCollapsed}/>
<NewFolderButton/>
<div
role='tree'
className='sidebar-list-items-wrapper'
tabIndex={allowContainerFocus ? 0 : undefined}
- onKeyDown={onKeyDown}
+ onKeyDown={props.onKeyDown}
>
{...listItems}
</div>
</>;
- }, [selectedIndex, onKeyDown]);
+ }, [props.selectedIndex, props.onKeyDown, props.allFoldersCollapsed]);
};
export default useOnRenderListWrapper;

View File

@@ -12,7 +12,6 @@ interface Props {
updateSelectedIndex: SetSelectedIndexCallback;
}
const isToggleShortcut = (keyCode: string, selectedItem: ListItem, collapsedFolderIds: string[]) => {
if (selectedItem.kind !== ListItemType.Header && selectedItem.kind !== ListItemType.Folder) {
return false;
@@ -22,6 +21,10 @@ const isToggleShortcut = (keyCode: string, selectedItem: ListItem, collapsedFold
return false;
}
if (!selectedItem.hasChildren) {
return false;
}
if (keyCode === 'Space') {
return true;
}
@@ -30,6 +33,22 @@ const isToggleShortcut = (keyCode: string, selectedItem: ListItem, collapsedFold
return (keyCode === 'ArrowRight') === isCollapsed;
};
const getParentOffset = (childIndex: number, listItems: ListItem[]): number|null => {
const childItem = listItems[childIndex];
const targetDepth = childItem.depth - 1;
let indexChange = 0;
for (let i = childIndex; i >= 0; i--) {
const otherItem = listItems[i];
if (otherItem.depth === targetDepth) {
return indexChange;
}
indexChange --;
}
return null;
};
const useOnSidebarKeyDownHandler = (props: Props) => {
const { updateSelectedIndex, listItems, selectedIndex, collapsedFolderIds, dispatch } = props;
@@ -48,12 +67,21 @@ const useOnSidebarKeyDownHandler = (props: Props) => {
} else if (selectedItem.kind === ListItemType.Header) {
toggleHeader(selectedItem.id);
}
- } else if ((event.ctrlKey || event.metaKey) && event.code === 'KeyA') { // ctrl+a or cmd+a
- event.preventDefault();
+ } else if (selectedItem && event.code === 'ArrowLeft') { // Jump to parent
+ const isFolderWithParent = selectedItem.kind === ListItemType.Folder && selectedItem.folder.parent_id;
+ // For now, only allow this shortcut for folders with parents -- jumping to the tags or
+ // folders headers could be confusing.
+ if (isFolderWithParent) {
+ indexChange = getParentOffset(selectedIndex, listItems) ?? 0;
+ }
+ } else if (selectedItem?.hasChildren && event.code === 'ArrowRight') { // Jump to first child
+ indexChange = 1;
} else if (event.code === 'ArrowUp') {
indexChange = -1;
} else if (event.code === 'ArrowDown') {
indexChange = 1;
+ } else if ((event.ctrlKey || event.metaKey) && event.code === 'KeyA') { // ctrl+a or cmd+a
+ event.preventDefault();
} else if (event.code === 'Enter' && !event.shiftKey) {
event.preventDefault();
void CommandService.instance().execute('focusElement', 'noteList');

View File

@@ -20,6 +20,8 @@ const useSidebarListData = (props: Props): ListItem[] => {
kind: ListItemType.Tag,
tag,
key: tag.id,
depth: 1,
hasChildren: false,
};
});
}, [props.tags]);
@@ -38,7 +40,9 @@ const useSidebarListData = (props: Props): ListItem[] => {
kind: ListItemType.Folder,
folder,
hasChildren,
- depth,
+ // The toplevel headers have depth 1, so the toplevel notebook needs
+ // depth 2.
+ depth: depth + 1,
key: folder.id,
};
});
@@ -57,11 +61,13 @@ const useSidebarListData = (props: Props): ListItem[] => {
['data-folder-id']: '',
},
supportsFolderDrop: true,
depth: 1,
hasChildren: folderItems.items.length > 0,
};
const foldersSectionContent: ListItem[] = props.folderHeaderIsExpanded ? [
- { kind: ListItemType.AllNotes, key: 'all-notes' },
+ { kind: ListItemType.AllNotes, key: 'all-notes', depth: 2, hasChildren: false },
...folderItems.items,
- { kind: ListItemType.Spacer, key: 'after-folders-spacer' },
+ { kind: ListItemType.Spacer, key: 'after-folders-spacer', depth: 1, hasChildren: false },
] : [];
const tagsHeader: HeaderListItem = {
@@ -74,6 +80,8 @@ const useSidebarListData = (props: Props): ListItem[] => {
onClick: toggleHeader,
extraProps: { },
supportsFolderDrop: false,
depth: 1,
hasChildren: tagItems.items.length > 0,
};
const tagsSectionContent: ListItem[] = props.tagHeaderIsExpanded ? tagItems.items : [];

View File

@@ -11,6 +11,7 @@ import { _ } from '@joplin/lib/locale';
import { connect } from 'react-redux';
import EmptyExpandLink from './EmptyExpandLink';
import ListItemWrapper, { ListItemRef } from './ListItemWrapper';
import { ListItem } from '../types';
const { ALL_NOTES_FILTER_ID } = require('@joplin/lib/reserved-ids');
const Menu = bridge().Menu;
@@ -20,6 +21,7 @@ interface Props {
dispatch: Dispatch;
anchorRef: ListItemRef;
selected: boolean;
item: ListItem;
index: number;
itemCount: number;
}
@@ -53,7 +55,7 @@ const AllNotesItem: React.FC<Props> = props => {
containerRef={props.anchorRef}
key="allNotesHeader"
selected={props.selected}
depth={1}
depth={props.item.depth}
className={'list-item-container list-item-depth-0 all-notes'}
highlightOnHover={true}
itemIndex={props.index}

@@ -52,7 +52,7 @@ const HeaderItem: React.FC<Props> = props => {
itemCount={props.itemCount}
expanded={props.item.expanded}
onContextMenu={onContextMenu}
depth={0}
depth={item.depth}
highlightOnHover={false}
className='sidebar-header-container'
{...item.extraProps}

@@ -40,8 +40,7 @@ const ListItemWrapper: React.FC<Props> = props => {
aria-setsize={props.itemCount}
aria-selected={props.selected}
aria-expanded={props.expanded}
// aria-level is 1-based, where depth is zero-based
aria-level={props.depth + 1}
aria-level={props.depth}
tabIndex={props.selected ? 0 : -1}
onContextMenu={props.onContextMenu}

@@ -5,4 +5,4 @@
@use 'styles/sidebar-expand-link.scss';
@use 'styles/sidebar-header-container.scss';
@use 'styles/sidebar-spacer-item.scss';
@use 'styles/new-folder-button.scss';
@use 'styles/sidebar-header-button.scss';

@@ -5,7 +5,10 @@
display: flex;
flex-direction: row;
align-items: center;
padding-left: calc(var(--joplin-main-padding) + (var(--depth) * 16px) - 16px);
// The top-level folder has depth 2, so we need to subtract for the item
// to have the correct padding
--absolute-depth: calc(var(--depth) - 2);
padding-left: calc(var(--joplin-main-padding) + (var(--absolute-depth) * 16px));
background: none;
transition: 0.1s;

@@ -1,11 +1,11 @@
.new-folder-button {
.sidebar-header-button {
position: absolute;
top: 0;
inset-inline-end: 0;
padding-inline-end: 15px;
padding-top: 4px;
padding-top: 8px;
height: 30px;
border: none;
@@ -22,4 +22,8 @@
color: var(--joplin-color-active2);
background: none;
}
&.-collapseall {
right: 25px;
}
}

@@ -16,9 +16,15 @@ export enum ListItemType {
interface BaseListItem {
key: string;
depth: number;
hasChildren: boolean;
}
export interface HeaderListItem extends BaseListItem {
interface ToplevelListItem extends BaseListItem {
depth: 1;
}
export interface HeaderListItem extends ToplevelListItem {
kind: ListItemType.Header;
label: string;
expanded: boolean;
@@ -42,10 +48,9 @@ export interface FolderListItem extends BaseListItem {
kind: ListItemType.Folder;
folder: FolderEntity;
hasChildren: boolean;
depth: number;
}
export interface SpacerListItem extends BaseListItem {
export interface SpacerListItem extends ToplevelListItem {
kind: ListItemType.Spacer;
}

@@ -1,5 +1,6 @@
import { CommandRuntime, CommandDeclaration, CommandContext } from '@joplin/lib/services/CommandService';
import { _ } from '@joplin/lib/locale';
import { GotoAnythingUserData, Mode, UserDataCallbackReject, UserDataCallbackResolve } from '../../../plugins/GotoAnything';
const PluginManager = require('@joplin/lib/services/PluginManager');
export enum UiType {
@@ -8,6 +9,10 @@ export enum UiType {
ControlledApi = 'controlledApi',
}
export interface GotoAnythingOptions {
mode?: Mode;
}
export const declaration: CommandDeclaration = {
name: 'gotoAnything',
label: () => _('Goto Anything...'),
@@ -24,19 +29,26 @@ function menuItemById(id: string) {
// calling the click() handler.
export const runtime = (): CommandRuntime => {
return {
execute: async (_context: CommandContext, uiType: UiType = UiType.GotoAnything) => {
execute: async (_context: CommandContext, uiType: UiType = UiType.GotoAnything, options: GotoAnythingOptions = null) => {
options = {
mode: Mode.Default,
...options,
};
if (uiType === UiType.GotoAnything) {
menuItemById('gotoAnything').click();
} else if (uiType === UiType.CommandPalette) {
menuItemById('commandPalette').click();
} else if (uiType === UiType.ControlledApi) {
// eslint-disable-next-line @typescript-eslint/ban-types -- Old code before rule was applied
return new Promise((resolve: Function, reject: Function) => {
return new Promise((resolve: UserDataCallbackResolve, reject: UserDataCallbackReject) => {
// eslint-disable-next-line @typescript-eslint/no-explicit-any -- Old code before rule was applied
const menuItem = PluginManager.instance().menuItems().find((i: any) => i.id === 'controlledApi');
menuItem.userData = {
const userData: GotoAnythingUserData = {
callback: { resolve, reject },
mode: options.mode,
};
menuItem.userData = userData;
menuItem.click();
});
}

@@ -8,6 +8,7 @@ import * as exportPdf from './exportPdf';
import * as gotoAnything from './gotoAnything';
import * as hideModalMessage from './hideModalMessage';
import * as leaveSharedFolder from './leaveSharedFolder';
import * as linkToNote from './linkToNote';
import * as moveToFolder from './moveToFolder';
import * as newFolder from './newFolder';
import * as newNote from './newNote';
@@ -28,7 +29,6 @@ import * as restoreNote from './restoreNote';
import * as revealResourceFile from './revealResourceFile';
import * as search from './search';
import * as setTags from './setTags';
import * as showEditorPlugin from './showEditorPlugin';
import * as showModalMessage from './showModalMessage';
import * as showNoteContentProperties from './showNoteContentProperties';
import * as showNoteProperties from './showNoteProperties';
@@ -36,7 +36,6 @@ import * as showPrompt from './showPrompt';
import * as showShareFolderDialog from './showShareFolderDialog';
import * as showShareNoteDialog from './showShareNoteDialog';
import * as showSpellCheckerMenu from './showSpellCheckerMenu';
import * as toggleEditorPlugin from './toggleEditorPlugin';
import * as toggleEditors from './toggleEditors';
import * as toggleLayoutMoveMode from './toggleLayoutMoveMode';
import * as toggleMenuBar from './toggleMenuBar';
@@ -58,6 +57,7 @@ const index: any[] = [
gotoAnything,
hideModalMessage,
leaveSharedFolder,
linkToNote,
moveToFolder,
newFolder,
newNote,
@@ -78,7 +78,6 @@ const index: any[] = [
revealResourceFile,
search,
setTags,
showEditorPlugin,
showModalMessage,
showNoteContentProperties,
showNoteProperties,
@@ -86,7 +85,6 @@ const index: any[] = [
showShareFolderDialog,
showShareNoteDialog,
showSpellCheckerMenu,
toggleEditorPlugin,
toggleEditors,
toggleLayoutMoveMode,
toggleMenuBar,

@@ -0,0 +1,37 @@
import CommandService, { CommandRuntime, CommandDeclaration, CommandContext } from '@joplin/lib/services/CommandService';
import { _ } from '@joplin/lib/locale';
import { Mode } from '../../../plugins/GotoAnything';
import { GotoAnythingOptions, UiType } from './gotoAnything';
import { ModelType } from '@joplin/lib/BaseModel';
import Logger from '@joplin/utils/Logger';
import markdownUtils from '@joplin/lib/markdownUtils';
const logger = Logger.create('linkToNote');
export const declaration: CommandDeclaration = {
name: 'linkToNote',
label: () => _('Link to note...'),
};
export const runtime = (): CommandRuntime => {
return {
execute: async (_context: CommandContext) => {
const options: GotoAnythingOptions = {
mode: Mode.TitleOnly,
};
const result = await CommandService.instance().execute('gotoAnything', UiType.ControlledApi, options);
if (!result) return result;
if (result.type !== ModelType.Note) {
logger.warn('Retrieved item is not a note:', result);
return null;
}
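// Build a Markdown link using Joplin's internal link format: [escaped title](:/noteid)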
const link = `[${markdownUtils.escapeTitleText(result.item.title)}](:/${markdownUtils.escapeLinkUrl(result.item.id)})`;
await CommandService.instance().execute('insertText', link);
return result;
},
enabledCondition: 'markdownEditorPaneVisible || richTextEditorVisible',
};
};

@@ -33,6 +33,10 @@ export const SearchInput = styled(StyledInput)`
padding-right: 20px;
flex: 1;
width: 10px;
&::-webkit-search-cancel-button {
display: none;
}
`;
interface Props {

@@ -59,6 +59,7 @@ export default function() {
'editor.sortSelectedLines',
'editor.swapLineUp',
'editor.swapLineDown',
'linkToNote',
'exportDeletionLog',
'toggleSafeMode',
'showShareNoteDialog',

@@ -44,6 +44,48 @@ test.describe('sidebar', () => {
await expect(mainWindow.locator(':focus')).toHaveText('All notes');
});
test('left/right arrow keys should expand/collapse notebooks', async ({ electronApp, mainWindow }) => {
const mainScreen = await new MainScreen(mainWindow).setup();
const sidebar = mainScreen.sidebar;
// Build the folder hierarchy
const folderAHeader = await sidebar.createNewFolder('Folder A');
await expect(folderAHeader).toBeVisible();
const folderBHeader = await sidebar.createNewFolder('Folder B');
const folderCHeader = await sidebar.createNewFolder('Folder C');
const folderDHeader = await sidebar.createNewFolder('Folder D');
await folderBHeader.dragTo(folderAHeader);
await folderCHeader.dragTo(folderAHeader);
await folderDHeader.dragTo(folderCHeader);
// Folders should have correct initial levels
await expect(folderAHeader).toHaveJSProperty('ariaLevel', '2');
await expect(folderBHeader).toHaveJSProperty('ariaLevel', '3');
await expect(folderCHeader).toHaveJSProperty('ariaLevel', '3');
await expect(folderDHeader).toHaveJSProperty('ariaLevel', '4');
await sidebar.forceUpdateSorting(electronApp);
await folderBHeader.click();
// Pressing [left] on a folder with no children should jump to its parent
await mainWindow.keyboard.press('ArrowLeft');
await expect(mainWindow.locator(':focus')).toHaveText('Folder A');
// Pressing [left] again should collapse the folder
await expect(folderAHeader).toHaveJSProperty('ariaExpanded', 'true');
await mainWindow.keyboard.press('ArrowLeft');
await expect(folderAHeader).toHaveJSProperty('ariaExpanded', 'false');
// Should still be focused
await expect(mainWindow.locator(':focus')).toHaveText('Folder A');
// Pressing [right] on a collapsed folder should expand it
await mainWindow.keyboard.press('ArrowRight');
await expect(folderAHeader).toHaveJSProperty('ariaExpanded', 'true');
// Pressing [right] again should move to the next item
await mainWindow.keyboard.press('ArrowRight');
await expect(mainWindow.locator(':focus')).toHaveText('Folder B');
});
test('should allow changing the parent of a folder by drag-and-drop', async ({ electronApp, mainWindow }) => {
const mainScreen = await new MainScreen(mainWindow).setup();
const sidebar = mainScreen.sidebar;

@@ -25,7 +25,7 @@ const getAndResizeMainWindow = async (electronApp: ElectronApplication) => {
// Setting the viewport size helps keep test environments consistent.
await mainWindow.setViewportSize({
width: 1200,
width: 1300,
height: 800,
});

@@ -29,6 +29,7 @@ const EncryptionService = require('@joplin/lib/services/e2ee/EncryptionService')
const FileApiDriverLocal = require('@joplin/lib/file-api-driver-local').default;
const React = require('react');
const nodeSqlite = require('sqlite3');
const nodeSqliteCipher = require('@journeyapps/sqlcipher');
const initLib = require('@joplin/lib/initLib').default;
const pdfJs = require('pdfjs-dist');
require('@sentry/electron/renderer');
@@ -109,6 +110,7 @@ const main = async () => {
appVersion,
electronBridge: bridge(),
nodeSqlite,
nodeSqliteCipher,
pdfJs,
});

@@ -1,6 +1,6 @@
{
"name": "@joplin/app-desktop",
"version": "3.3.1",
"version": "3.3.2",
"description": "Joplin for Desktop",
"main": "main.js",
"private": true,
@@ -137,7 +137,7 @@
"@playwright/test": "1.45.3",
"@testing-library/react-hooks": "8.0.1",
"@types/jest": "29.5.12",
"@types/node": "18.19.64",
"@types/node": "18.19.67",
"@types/react": "18.3.3",
"@types/react-dom": "18.3.0",
"@types/react-redux": "7.1.33",
@@ -166,6 +166,7 @@
"@joplin/lib": "~3.3",
"@joplin/renderer": "~3.3",
"@joplin/utils": "~3.3",
"@journeyapps/sqlcipher": "5.3.1",
"@sentry/electron": "4.24.0",
"@types/mustache": "4.2.5",
"async-mutex": "0.5.0",

@@ -23,7 +23,6 @@ import Resource from '@joplin/lib/models/Resource';
import { NoteEntity, ResourceEntity } from '@joplin/lib/services/database/types';
import Dialog from '../gui/Dialog';
import AsyncActionQueue from '@joplin/lib/AsyncActionQueue';
import { htmlentities } from '@joplin/utils/html';
const logger = Logger.create('GotoAnything');
@@ -41,6 +40,39 @@ interface GotoAnythingSearchResult {
item_type?: ModelType;
}
// GotoAnything supports several modes:
//
// - Default: Search in note titles and bodies. Can also search for folders, tags, etc. This
//   is the full-featured GotoAnything.
//
// - TitleOnly: Search in note titles only.
//
// These different modes can be set from the `gotoAnything` command.
export enum Mode {
Default = 0,
TitleOnly,
}
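// Illustrative sketch (not part of this diff): a command can request a
// title-only search through the ControlledApi flow, as the new linkToNote
// command does:
//
//     const result = await CommandService.instance().execute(
//         'gotoAnything', UiType.ControlledApi, { mode: Mode.TitleOnly },
//     );
//     // `result` resolves to a UserDataCallbackEvent ({ type, item }) once
//     // the user picks an entry, and is falsy if the dialog is dismissed.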
export interface UserDataCallbackEvent {
type: ModelType;
// eslint-disable-next-line @typescript-eslint/no-explicit-any -- Old code before rule was applied
item: any;
}
export type UserDataCallbackResolve = (event: UserDataCallbackEvent)=> void;
export type UserDataCallbackReject = (error: Error)=> void;
export interface UserDataCallback {
resolve: UserDataCallbackResolve;
reject: UserDataCallbackReject;
}
export interface GotoAnythingUserData {
startString?: string;
mode?: Mode;
callback?: UserDataCallback;
}
interface Props {
themeId: number;
// eslint-disable-next-line @typescript-eslint/ban-types -- Old code before rule was applied
@@ -48,8 +80,7 @@ interface Props {
// eslint-disable-next-line @typescript-eslint/no-explicit-any -- Old code before rule was applied
folders: any[];
showCompletedTodos: boolean;
// eslint-disable-next-line @typescript-eslint/no-explicit-any -- Old code before rule was applied
userData: any;
userData: GotoAnythingUserData;
}
interface State {
@@ -132,8 +163,8 @@ class DialogComponent extends React.PureComponent<Props, State> {
private itemListRef: any;
private listUpdateQueue_: AsyncActionQueue;
private markupToHtml_: MarkupToHtml;
// eslint-disable-next-line @typescript-eslint/no-explicit-any -- Old code before rule was applied
private userCallback_: any = null;
private userCallback_: UserDataCallback|null = null;
private mode_: Mode;
public constructor(props: Props) {
super(props);
@@ -143,6 +174,8 @@ class DialogComponent extends React.PureComponent<Props, State> {
this.userCallback_ = props?.userData?.callback;
this.listUpdateQueue_ = new AsyncActionQueue(100);
this.mode_ = props?.userData?.mode ? props.userData.mode : Mode.Default;
this.state = {
query: startString,
results: [],
@@ -342,6 +375,13 @@ class DialogComponent extends React.PureComponent<Props, State> {
// eslint-disable-next-line @typescript-eslint/no-explicit-any -- Old code before rule was applied
resultsInBody = !!results.find((row: any) => row.fields.includes('body'));
if (this.mode_ === Mode.TitleOnly) {
resultsInBody = false;
results = results.filter(r => {
return r.fields.includes('title');
});
}
const resourceIds = results.filter(r => r.item_type === ModelType.Resource).map(r => r.item_id);
const resources = await Resource.resourceOcrTextsByIds(resourceIds);
@@ -561,9 +601,7 @@ class DialogComponent extends React.PureComponent<Props, State> {
);
};
const titleHtml = item.fragments
? `<span style="font-weight: bold; color: ${theme.color};">${htmlentities(item.title)}</span>`
: wrapKeywordMatches(item.title);
const titleHtml = wrapKeywordMatches(item.title);
const fragmentsHtml = !item.fragments ? null : wrapKeywordMatches(item.fragments);
@@ -587,8 +625,8 @@ class DialogComponent extends React.PureComponent<Props, State> {
aria-posinset={index + 1}
>
<div style={style.rowTitle} dangerouslySetInnerHTML={{ __html: titleHtml }}></div>
{fragmentComp}
{pathComp}
{this.mode_ === Mode.TitleOnly ? null : fragmentComp}
{this.mode_ === Mode.TitleOnly ? null : pathComp}
</div>
);
}
@@ -671,6 +709,14 @@ class DialogComponent extends React.PureComponent<Props, State> {
);
}
private helpText() {
if (this.mode_ === Mode.TitleOnly) {
return _('Type a note title to search for it.');
} else {
return _('Type a note title or part of its content to jump to it. Or type # followed by a tag name, or @ followed by a notebook name. Or type : to search for commands.');
}
}
public render() {
const style = this.style();
const helpTextId = 'goto-anything-help-text';
@@ -681,7 +727,7 @@ class DialogComponent extends React.PureComponent<Props, State> {
id={helpTextId}
style={style.help}
hidden={!this.state.showHelp}
>{_('Type a note title or part of its content to jump to it. Or type # followed by a tag name, or @ followed by a notebook name. Or type : to search for commands.')}</div>
>{this.helpText()}</div>
);
return (

@@ -6,7 +6,7 @@
SCRIPT_DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
TEMP_PATH=~/src/plugin-tests
NEED_COMPILING=1
PLUGIN_PATH=~/src/joplin/packages/app-cli/tests/support/plugins/toast
PLUGIN_PATH=~/src/plugin-yesyoucan
if [[ $NEED_COMPILING == 1 ]]; then
mkdir -p "$TEMP_PATH"

@@ -70,6 +70,13 @@ def enableProguardInReleaseBuilds = false
def jscFlavor = 'org.webkit:android-jsc:+'
android {
externalNativeBuild {
cmake {
path file('src/main/cpp/CMakeLists.txt')
version '3.22.1'
}
}
ndkVersion rootProject.ext.ndkVersion
buildToolsVersion rootProject.ext.buildToolsVersion
compileSdk rootProject.ext.compileSdkVersion
@@ -79,14 +86,19 @@ android {
applicationId "net.cozic.joplin"
minSdkVersion rootProject.ext.minSdkVersion
targetSdkVersion rootProject.ext.targetSdkVersion
versionCode 2097763
versionName "3.3.0"
ndk {
abiFilters "armeabi-v7a", "x86", "arm64-v8a", "x86_64"
}
versionCode 2097764
versionName "3.3.1"
ndk {
abiFilters "armeabi-v7a", "x86", "arm64-v8a", "x86_64"
}
// Needed to fix: The number of method references in a .dex file cannot exceed 64K
multiDexEnabled true
externalNativeBuild {
cmake {
cppFlags '-DCMAKE_BUILD_TYPE=Release'
}
}
}
signingConfigs {
debug {
@@ -95,14 +107,14 @@ android {
keyAlias 'androiddebugkey'
keyPassword 'android'
}
release {
if (project.hasProperty('JOPLIN_RELEASE_STORE_FILE')) {
storeFile file(JOPLIN_RELEASE_STORE_FILE)
storePassword JOPLIN_RELEASE_STORE_PASSWORD
keyAlias JOPLIN_RELEASE_KEY_ALIAS
keyPassword JOPLIN_RELEASE_KEY_PASSWORD
}
}
release {
if (project.hasProperty('JOPLIN_RELEASE_STORE_FILE')) {
storeFile file(JOPLIN_RELEASE_STORE_FILE)
storePassword JOPLIN_RELEASE_STORE_PASSWORD
keyAlias JOPLIN_RELEASE_KEY_ALIAS
keyPassword JOPLIN_RELEASE_KEY_PASSWORD
}
}
}
buildTypes {
debug {
@@ -127,10 +139,6 @@ dependencies {
} else {
implementation jscFlavor
}
// Needed for Whisper speech-to-text
implementation 'com.microsoft.onnxruntime:onnxruntime-android:latest.release'
implementation 'com.microsoft.onnxruntime:onnxruntime-extensions-android:latest.release'
}
apply from: file("../../node_modules/@react-native-community/cli-platform-android/native_modules.gradle"); applyNativeModulesAppBuildGradle(project)

@@ -0,0 +1,64 @@
# For more information about using CMake with Android Studio, read the
# documentation: https://d.android.com/studio/projects/add-native-code.html.
# For more examples on how to use CMake, see https://github.com/android/ndk-samples.
# Sets the minimum CMake version required for this project.
cmake_minimum_required(VERSION 3.22.1)
# Declares the project name. The project name can be accessed via ${PROJECT_NAME}.
# Since this is the top-level CMakeLists.txt, the project name is also accessible
# with ${CMAKE_PROJECT_NAME} (both CMake variables are in sync within the top-level
# build script scope).
project("joplin")
# Creates and names a library, sets it as either STATIC
# or SHARED, and provides the relative paths to its source code.
# You can define multiple libraries, and CMake builds them for you.
# Gradle automatically packages shared libraries with your APK.
#
# In this top level CMakeLists.txt, ${CMAKE_PROJECT_NAME} is used to define
# the target library name; in the sub-module's CMakeLists.txt, ${PROJECT_NAME}
# is preferred for the same purpose.
#
# In order to load a library into your app from Java/Kotlin, you must call
# System.loadLibrary() and pass the name of the library defined here;
# for GameActivity/NativeActivity derived applications, the same library name must be
# used in the AndroidManifest.xml file.
add_library(${CMAKE_PROJECT_NAME} SHARED
# List C/C++ source files with relative paths to this CMakeLists.txt.
whisperWrapper.cpp
utils/WhisperSession.cpp
utils/findLongestSilence.cpp
utils/findLongestSilence_test.cpp
)
set(WHISPER_LIB_DIR ${CMAKE_SOURCE_DIR}/../../../../vendor/whisper.cpp)
# Based on the Whisper.cpp Android example:
set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -O3 ")
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -O3 -fvisibility=hidden -fvisibility-inlines-hidden -ffunction-sections -fdata-sections")
# Whisper: See https://stackoverflow.com/a/76290722
add_subdirectory(${WHISPER_LIB_DIR} ./whisper)
# Directories for header files
target_include_directories(
${CMAKE_PROJECT_NAME}
PUBLIC
${PROJECT_BASE_DIR}/shared
${WHISPER_LIB_DIR}/include
)
# Specifies libraries CMake should link to your target library. You
# can link libraries from various origins, such as libraries defined in this
# build script, prebuilt third-party libraries, or Android system libraries.
target_link_libraries(${CMAKE_PROJECT_NAME}
whisper
# List of libraries to link to the target library
android
log
)

@@ -0,0 +1,154 @@
#include "WhisperSession.h"
#include <utility>
#include <sstream>
#include <algorithm>
#include "whisper.h"
#include "findLongestSilence.h"
#include "androidUtil.h"
WhisperSession::WhisperSession(const std::string& modelPath, std::string lang, std::string prompt)
: lang_ {std::move(lang)}, prompt_ {std::move(prompt)} {
whisper_context_params contextParams = whisper_context_default_params();
// Lifetime(pModelPath): Whisper.cpp creates a copy of pModelPath and stores it in a std::string.
// whisper_init_from_file_with_params doesn't seem to otherwise save pModelPath. As such, it's
// safe to pass a pointer to a std::string's representation:
const char *pModelPath = modelPath.c_str();
pContext_ = whisper_init_from_file_with_params(pModelPath, contextParams);
if (pContext_ == nullptr) {
throw std::runtime_error("Unable to initialize the Whisper context.");
}
}
WhisperSession::~WhisperSession() {
if (pContext_ != nullptr) {
whisper_free(pContext_);
}
}
whisper_full_params
WhisperSession::buildWhisperParams_() {
whisper_full_params params = whisper_full_default_params(WHISPER_SAMPLING_GREEDY);
// WHISPER_SAMPLING_BEAM_SEARCH is an alternative to greedy:
// params.beam_search = { .beam_size = 2 };
params.print_realtime = false;
// Disable timestamps: They make creating custom Whisper models more difficult:
params.print_timestamps = false;
params.no_timestamps = true;
params.print_progress = false;
params.translate = false;
params.offset_ms = 0;
params.single_segment = true;
// Avoid non-speech tokens (e.g. "(crackle)"). For now, this is disabled because it seems to
// cause increased hallucinations (e.g. repeated "Thank you"s).
// params.suppress_nst = true;
params.temperature = 0; // Initial randomness
// There's also a temperature_inc variable, which is used when decoding fails (Whisper increases
// the temperature by temperature_inc and retries).
// Following the whisper streaming example in setting prompt_tokens to nullptr
// when using VAD (Voice Activity Detection)
params.initial_prompt = prompt_.c_str();
params.prompt_tokens = nullptr;
params.prompt_n_tokens = 0;
// Lifetime: lifetime(params) < lifetime(lang_) = lifetime(this).
params.language = lang_.c_str();
return params;
}
std::string
WhisperSession::transcribe_(const std::vector<float>& audio, size_t transcribeCount) {
int minTranscribeLength = WHISPER_SAMPLE_RATE / 2; // 0.5s
if (transcribeCount < minTranscribeLength) {
return "";
}
whisper_full_params params = buildWhisperParams_();
whisper_reset_timings(pContext_);
transcribeCount = std::min(audio.size(), transcribeCount);
if (whisper_full(pContext_, params, audio.data(), transcribeCount) != 0) {
throw std::runtime_error("Failed to run Whisper (non-zero exit status).");
} else {
whisper_print_timings(pContext_);
}
// Tokens to be used as a prompt for the next run of Whisper
unsigned int segmentCount = whisper_full_n_segments(pContext_);
// Build the results
std::stringstream results;
for (int i = 0; i < segmentCount; i++) {
results << " " << whisper_full_get_segment_text(pContext_, i);
}
std::string result = results.str();
LOGD("Transcribed: %s (audio len %.2f)", result.c_str(), audio.size() / (float) WHISPER_SAMPLE_RATE);
return result;
}
std::string
WhisperSession::splitAndTranscribeBefore_(int transcribeUpTo, int trimTo) {
std::string result = transcribe_(audioBuffer_, transcribeUpTo);
// Trim
LOGI("Trim to %.2f s, transcribe to %.2f s", (float) trimTo / WHISPER_SAMPLE_RATE, (float) transcribeUpTo / WHISPER_SAMPLE_RATE);
audioBuffer_ = std::vector(audioBuffer_.begin() + trimTo, audioBuffer_.end());
return result;
}
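// Appends the incoming samples to the in-memory buffer. Once the buffer exceeds
// ~25s it is split at the longest detected silence (or at its middle if no
// suitable pause exists): everything before the split point is transcribed and
// returned, and the buffer is trimmed. After ~3s, a long pause (>= 2s) also
// triggers a split so that paragraphs can end at natural breaks.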
std::string
WhisperSession::transcribeNextChunk(const float *pAudio, int sizeAudio) {
std::string finalizedContent;
// Update the local audio buffer
for (int i = 0; i < sizeAudio; i++) {
audioBuffer_.push_back(pAudio[i]);
}
// Does the audio buffer need to be split somewhere?
int maximumSamples = WHISPER_SAMPLE_RATE * 25;
if (audioBuffer_.size() >= maximumSamples) {
float minSilenceSeconds = 0.3f;
auto silenceRange = findLongestSilence(
audioBuffer_, WHISPER_SAMPLE_RATE, minSilenceSeconds, maximumSamples
);
// In this case, the audio is long enough that it needs to be split somewhere. If there's
// no suitable pause available, default to splitting in the middle.
int halfBufferSize = audioBuffer_.size() / 2;
int transcribeTo = silenceRange.isValid ? silenceRange.start : halfBufferSize;
int trimTo = silenceRange.isValid ? silenceRange.end : halfBufferSize;
finalizedContent = splitAndTranscribeBefore_(transcribeTo, trimTo);
} else if (audioBuffer_.size() > WHISPER_SAMPLE_RATE * 3) {
// Allow brief pauses to create new paragraphs:
float minSilenceSeconds = 2.0f;
auto splitPoint = findLongestSilence(
audioBuffer_, WHISPER_SAMPLE_RATE, minSilenceSeconds, maximumSamples
);
if (splitPoint.isValid) {
int tolerance = WHISPER_SAMPLE_RATE / 20; // 0.05s
bool isCompletelySilent = splitPoint.start < tolerance && splitPoint.end > audioBuffer_.size() - tolerance;
if (isCompletelySilent) {
audioBuffer_.clear();
} else {
finalizedContent = splitAndTranscribeBefore_(splitPoint.start, splitPoint.end);
}
}
}
previewText_ = transcribe_(audioBuffer_, audioBuffer_.size());
return finalizedContent;
}
std::string WhisperSession::getPreview() {
return previewText_;
}

@@ -0,0 +1,27 @@
#pragma once
#include <string>
#include "whisper.h"
class WhisperSession {
public:
WhisperSession(const std::string& modelPath, std::string lang, std::string prompt);
~WhisperSession();
std::string transcribeNextChunk(const float *pAudio, int sizeAudio);
std::string getPreview();
private:
// Current preview state
std::string previewText_;
whisper_full_params buildWhisperParams_();
std::string transcribe_(const std::vector<float>& audio, size_t samplesToTranscribe);
std::string splitAndTranscribeBefore_(int transcribeUpTo, int trimTo);
whisper_context *pContext_;
const std::string lang_;
const std::string prompt_;
std::vector<float> audioBuffer_;
};

@@ -0,0 +1,10 @@
#pragma once
#include <android/log.h>
// Use macros for these rather than functions. Functions generate a "may be unsafe"
// warning because the compiler can't check that the first argument is a string
// literal.
#define LOGW(...) __android_log_print(ANDROID_LOG_WARN, "Whisper::JNI", __VA_ARGS__);
#define LOGI(...) __android_log_print(ANDROID_LOG_INFO, "Whisper::JNI", __VA_ARGS__);
#define LOGD(...) __android_log_print(ANDROID_LOG_DEBUG, "Whisper::JNI", __VA_ARGS__);

@@ -0,0 +1,111 @@
#include "findLongestSilence.h"
#include "androidUtil.h"
static void highpass(std::vector<float>& data, int sampleRate) {
// Highpass filter. See https://en.wikipedia.org/wiki/High-pass_filter and
// the example in whisper.cpp/streaming.
float highpassCutoffHz = 60.0f;
float RC = 1.0f / (2 * 3.1416f * highpassCutoffHz);
float timePerSample = 1.0f / sampleRate;
float alpha = RC / (RC + timePerSample);
float lastInput = data[0];
for (int i = 1; i < data.size(); i++) {
float currentInput = data[i];
data[i] = alpha * data[i - 1] + alpha * (currentInput - lastInput);
lastInput = currentInput;
}
}
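// For reference, the loop above implements the standard first-order discrete
// high-pass recurrence with alpha = RC / (RC + dt):
//
//     y[i] = alpha * y[i-1] + alpha * (x[i] - x[i-1])
//
// which attenuates content below roughly highpassCutoffHz (60 Hz here).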
SilenceRange findLongestSilence(
const std::vector<float>& audioData,
int sampleRate,
float minSilenceLengthSeconds,
int maxSilencePosition
) {
int bestCandidateLength = 0;
int bestCandidateStart = -1;
int bestCandidateEnd = -1;
int currentCandidateStart = -1;
std::vector<float> processedAudio { audioData };
highpass(processedAudio, sampleRate);
// Break into windows of size `windowSize`:
int windowSize = 256;
int windowsPerSecond = sampleRate / windowSize;
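// e.g. at Whisper's 16 kHz sample rate: 16000 / 256 = 62 windows per second
// (integer division).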
int quietWindows = 0;
// Finishes the current candidate for longest silence
auto finalizeCandidate = [&] (int currentOffset) {
bool hasCandidate = currentCandidateStart >= 0;
if (!hasCandidate) {
return;
}
int currentCandidateLength = currentOffset - currentCandidateStart;
if (currentCandidateLength > bestCandidateLength && currentCandidateStart <= maxSilencePosition) {
bestCandidateLength = currentCandidateLength;
bestCandidateStart = currentCandidateStart;
bestCandidateEnd = currentOffset;
LOGD("New best candidate with length %d", currentCandidateLength);
}
currentCandidateStart = -1;
};
int windowOffset;
for (windowOffset = 0; windowOffset < processedAudio.size() && windowOffset <= maxSilencePosition; windowOffset += windowSize) {
int rollingAverageSize = 24;
float threshold = static_cast<float>(rollingAverageSize) / 80.0f;
// Count the number of samples that (when averaged with the nearby samples)
// are below some threshold value.
float absSum = 0;
int silentSamples = 0;
for (int i = windowOffset; i < windowOffset + windowSize && i < processedAudio.size(); i++) {
absSum += std::abs(processedAudio[i]);
bool isSumComplete = i - rollingAverageSize >= windowOffset;
if (isSumComplete) {
absSum -= std::abs(processedAudio[i - rollingAverageSize]);
if (absSum < threshold) {
silentSamples++;
}
}
}
// The window should be considered "quiet" if enough samples were below the threshold.
// Not all samples need to be below it, which tolerates brief clicks and pops.
if (silentSamples >= windowSize * 3 / 4) {
quietWindows ++;
} else {
quietWindows = 0;
}
int minQuietWindows = static_cast<int>(windowsPerSecond * minSilenceLengthSeconds);
if (quietWindows >= minQuietWindows && currentCandidateStart == -1) {
// Found a candidate. Start it.
currentCandidateStart = windowOffset;
} else if (quietWindows == 0) {
// Ended a candidate. Is it better than the best?
finalizeCandidate(windowOffset);
}
}
finalizeCandidate(windowOffset);
// Return the best candidate.
if (bestCandidateLength == 0) {
return { .isValid = false, .start = 0, .end = 0 };
} else {
return {
.isValid=true,
.start=bestCandidateStart,
.end=bestCandidateEnd
};
}
}

@@ -0,0 +1,24 @@
#pragma once
#include <vector>
#include <optional>
#include <tuple>
struct SilenceRange {
bool isValid;
int start;
int end;
};
SilenceRange findLongestSilence(
const std::vector<float>& audioData,
int sampleRate,
// Minimum length of silence in seconds
float minSilenceLengthSeconds,
// Doesn't check for silence at a position greater than maximumSilenceStart
int maximumSilenceStart
);

@@ -0,0 +1,169 @@
#include "findLongestSilence_test.h"
#include "findLongestSilence.h"
#include "androidUtil.h"
#include <string>
#include <vector>
#include <sstream>
#include <cmath>
#include <random>
#include <functional> // for std::function (AudioGenerator)
static void testTones();
static void testToneWithPause();
static void testSilence();
static void testNoise();
static void fail(const std::string& message);
struct GeneratedAudio {
std::vector<float> data;
int sampleRate;
int sampleCount;
};
using AudioGenerator = std::function<float(float)>;
static GeneratedAudio makeAudio(const AudioGenerator& generator, int sampleRate, float duration);
static void expectNoSilence(const GeneratedAudio& audio, const std::string& testLabel);
static void expectSilenceBetween(const GeneratedAudio& audio, float startTimeSeconds, float stopTimeSeconds, const std::string& testLabel);
void findLongestSilence_test() {
testTones();
testToneWithPause();
testSilence();
testNoise();
}
static void testTones() {
for (int frequency = 440; frequency < 1600; frequency += 300) {
std::stringstream messageBuilder;
messageBuilder << "Should not find silence in tone with frequency " << frequency << " HZ.";
auto audioTone = makeAudio([frequency](float t) {
// Also set the amplitude to 0.2f (to more closely match mic input).
return std::sin(t * static_cast<float>(frequency)) * 0.2f;
}, 15000, 10.0f);
expectNoSilence(audioTone, messageBuilder.str());
}
auto lowFrequencyTone = makeAudio([](float t) {
return std::sin(t * 8) * 0.3f;
}, 15000, 10.0f);
expectSilenceBetween(lowFrequencyTone, 0.0f, 10.0f, "Should find silence in a very low-frequency tone");
}
static void testToneWithPause() {
auto audioToneWithPause = makeAudio([](float t) {
if (t < 5.0f || t > 6.0f) {
return std::sin(t * 880);
} else {
return 0.0f;
}
}, 15000, 11.0f);
expectSilenceBetween(audioToneWithPause, 5.0f, 6.0f, "Should find silence when completely silent in a region");
auto audioToneWithTwoPauses = makeAudio([](float t) {
if (t < 1.0f || (t > 8.0f && t < 10.0f)) {
return 0.0f;
} else {
return std::sin(t * 880);
}
}, 15000, 20.0f);
expectSilenceBetween(audioToneWithTwoPauses, 8.0f, 10.0f, "Should find the longest silence when there are multiple silent regions");
}
static void testSilence() {
auto silence = makeAudio([](float t) {
return 0.0f;
}, 16000, 10.0f);
expectSilenceBetween(silence, 0.0f, 10.0f, "Should find silence in a completely silent signal");
}
static void testNoise() {
std::minstd_rand randomness {2};
std::uniform_real_distribution noiseGenerator {-1.0, 1.0};
auto quietNoise = makeAudio([&](float t) {
return noiseGenerator(randomness) * 0.02f;
}, 16000, 5.0f);
expectSilenceBetween(quietNoise, 0.0f, 5.0f, "Should find silence in a tone with low-amplitude noise");
}
static void fail(const std::string& message) {
throw std::runtime_error(message);
}
static GeneratedAudio makeAudio(const AudioGenerator& generator, int sampleRate, float duration) {
std::vector<float> result { };
int numSamples = static_cast<int>(static_cast<float>(sampleRate) * duration);
for (int i = 0; i < numSamples; i++) {
float time = static_cast<float>(i) / static_cast<float>(sampleRate);
result.push_back(generator(time));
}
return {
.data=result,
.sampleRate=sampleRate,
.sampleCount=numSamples,
};
}
static void logTestPass(const std::string& message) {
LOGI("Test PASS: %s", message.c_str());
}
static float samplesToSeconds(int samples, int sampleRate) {
return static_cast<float>(samples) / static_cast<float>(sampleRate);
}
static void expectNoSilence(const GeneratedAudio& audio, const std::string& testLabel) {
auto silence = findLongestSilence(
audio.data,
audio.sampleRate,
0.02f,
audio.sampleCount
);
if (silence.isValid) {
std::stringstream errorBuilder;
float startSeconds = samplesToSeconds(silence.start, audio.sampleRate);
float stopSeconds = samplesToSeconds(silence.end, audio.sampleRate);
errorBuilder << "Error: Found silence between " << startSeconds << "s and " << stopSeconds << "s";
errorBuilder << ": " << testLabel;
fail(errorBuilder.str());
}
logTestPass(testLabel);
}
static void expectSilenceBetween(const GeneratedAudio& audio, float startTimeSeconds, float stopTimeSeconds, const std::string& testLabel) {
auto silenceResult = findLongestSilence(
audio.data,
audio.sampleRate,
0.02f,
audio.sampleCount
);
if (!silenceResult.isValid) {
fail("Error: No silence found: " + testLabel);
}
auto checkEndpoint = [&] (int actualValueSamples, float expectedValueSeconds, const std::string& description) {
float actualValueSeconds = samplesToSeconds(actualValueSamples, audio.sampleRate);
float tolerance = 0.1f; // 100ms
if (std::abs(expectedValueSeconds - actualValueSeconds) > tolerance) {
std::stringstream messageBuilder;
messageBuilder << "Error: Silence " << description << " mismatch: ";
messageBuilder << "got " << actualValueSeconds << "s expected " << expectedValueSeconds << "s. ";
messageBuilder << testLabel;
fail(messageBuilder.str());
}
};
checkEndpoint(silenceResult.start, startTimeSeconds, "start time");
checkEndpoint(silenceResult.end, stopTimeSeconds, "stop time");
logTestPass(testLabel);
}

@@ -0,0 +1,3 @@
#pragma once
void findLongestSilence_test();

@@ -0,0 +1,125 @@
// Write C++ code here.
//
// Do not forget to dynamically load the C++ library into your application.
//
// For instance,
//
// In MainActivity.java:
// static {
// System.loadLibrary("joplin");
// }
//
// Or, in MainActivity.kt:
// companion object {
// init {
// System.loadLibrary("joplin")
// }
// }
#include <jni.h>
#include <memory>
#include <string>
#include <sstream>
#include <android/log.h>
#include "whisper.h"
#include "utils/WhisperSession.h"
#include "utils/androidUtil.h"
#include "utils/findLongestSilence_test.h"
void log_android(enum ggml_log_level level, const char* message, void* user_data) {
android_LogPriority priority = level == 4 ? ANDROID_LOG_ERROR : ANDROID_LOG_INFO;
__android_log_print(priority, "Whisper::JNI::cpp", "%s", message);
}
jstring stringToJava(JNIEnv *env, const std::string& source) {
return env->NewStringUTF(source.c_str());
}
std::string stringToCXX(JNIEnv *env, jstring jString) {
const char *jStringChars = env->GetStringUTFChars(jString, nullptr);
std::string result { jStringChars };
env->ReleaseStringUTFChars(jString, jStringChars);
return result;
}
void throwException(JNIEnv *env, const std::string& message) {
jclass errorClass = env->FindClass("java/lang/Exception");
env->ThrowNew(errorClass, message.c_str());
}
extern "C"
JNIEXPORT jlong JNICALL
Java_net_cozic_joplin_audio_NativeWhisperLib_00024Companion_init(
JNIEnv *env,
jobject thiz,
jstring modelPath,
jstring language,
jstring prompt
) {
whisper_log_set(log_android, nullptr);
try {
auto *pSession = new WhisperSession(
stringToCXX(env, modelPath), stringToCXX(env, language), stringToCXX(env, prompt)
);
return (jlong) pSession;
} catch (const std::exception& exception) {
LOGW("Failed to init whisper: %s", exception.what());
throwException(env, exception.what());
return 0;
}
}
extern "C"
JNIEXPORT void JNICALL
Java_net_cozic_joplin_audio_NativeWhisperLib_00024Companion_free(JNIEnv *env, jobject thiz,
jlong pointer) {
// WhisperSession was allocated with `new`, so release it with `delete` (not std::free):
delete reinterpret_cast<WhisperSession *>(pointer);
}
extern "C"
JNIEXPORT jstring JNICALL
Java_net_cozic_joplin_audio_NativeWhisperLib_00024Companion_fullTranscribe(JNIEnv *env,
jobject thiz,
jlong pointer,
jfloatArray audio_data) {
auto *pSession = reinterpret_cast<WhisperSession *> (pointer);
jfloat *pAudioData = env->GetFloatArrayElements(audio_data, nullptr);
jsize lenAudioData = env->GetArrayLength(audio_data);
std::string result;
try {
LOGD("Starting Whisper, transcribe %d", lenAudioData);
result = pSession->transcribeNextChunk(pAudioData, lenAudioData);
auto preview = pSession->getPreview();
LOGD("Ran Whisper. Got %s (preview %s)", result.c_str(), preview.c_str());
} catch (const std::exception& exception) {
LOGW("Failed to run whisper: %s", exception.what());
throwException(env, exception.what());
}
// JNI_ABORT: "free the buffer without copying back the possible changes", pass 0 to copy
// changes (there should be no changes)
env->ReleaseFloatArrayElements(audio_data, pAudioData, JNI_ABORT);
return stringToJava(env, result);
}
extern "C"
JNIEXPORT jstring JNICALL
Java_net_cozic_joplin_audio_NativeWhisperLib_00024Companion_getPreview(
JNIEnv *env, jobject thiz, jlong pointer
) {
auto *pSession = reinterpret_cast<WhisperSession *> (pointer);
return stringToJava(env, pSession->getPreview());
}
extern "C"
JNIEXPORT void JNICALL
Java_net_cozic_joplin_audio_NativeWhisperLib_00024Companion_runTests(JNIEnv *env, jobject thiz) {
try {
findLongestSilence_test();
} catch (const std::exception& exception) {
LOGW("Failed to run tests: %s", exception.what());
throwException(env, exception.what());
}
}

@@ -21,7 +21,7 @@ class AudioRecorder(context: Context) : Closeable {
private var bufferWriteOffset = 0
// Accessor must not modify result
val bufferedData: FloatArray get() = buffer.sliceArray(0 until bufferWriteOffset)
private val bufferedData: FloatArray get() = buffer.sliceArray(0 until bufferWriteOffset)
val bufferLengthSeconds: Double get() = bufferWriteOffset.toDouble() / sampleRate
init {
@@ -74,11 +74,16 @@ class AudioRecorder(context: Context) : Closeable {
}
// Pulls all available data from the audio recorder's buffer
fun pullAvailable() {
return read(maxBufferSize, AudioRecord.READ_NON_BLOCKING)
fun pullAvailable(): FloatArray {
read(maxBufferSize, AudioRecord.READ_NON_BLOCKING)
val result = bufferedData
buffer.fill(0.0f, 0, maxBufferSize);
bufferWriteOffset = 0
return result
}
fun pullNextSeconds(seconds: Double) {
fun pullNextSeconds(seconds: Double): FloatArray {
val remainingSize = maxBufferSize - bufferWriteOffset
val requestedSize = (seconds * sampleRate).toInt()
@@ -87,7 +92,8 @@ class AudioRecorder(context: Context) : Closeable {
advanceStartBySamples(maxBufferSize / 3)
}
return read(requestedSize, AudioRecord.READ_BLOCKING)
read(requestedSize, AudioRecord.READ_BLOCKING)
return pullAvailable()
}
override fun close() {

@@ -0,0 +1,54 @@
package net.cozic.joplin.audio
import java.io.Closeable
class NativeWhisperLib(
modelPath: String,
languageCode: String,
prompt: String,
) : Closeable {
companion object {
init {
System.loadLibrary("joplin")
}
external fun runTests()
// TODO: The example whisper.cpp project transfers pointers as Longs to the Kotlin code.
// This seems unsafe. Try changing how this is managed.
private external fun init(modelPath: String, languageCode: String, prompt: String): Long
private external fun free(pointer: Long)
private external fun fullTranscribe(pointer: Long, audioData: FloatArray): String
private external fun getPreview(pointer: Long): String
}
private var closed = false
private val pointer: Long = init(modelPath, languageCode, prompt)
fun transcribe(audioData: FloatArray): String {
if (closed) {
throw Exception("Cannot transcribe using a closed session")
}
return fullTranscribe(pointer, audioData)
}
fun getPreview(): String {
if (closed) {
throw Exception("Cannot get preview from a closed session")
}
return getPreview(pointer)
}
override fun close() {
if (closed) {
throw Exception("Cannot close a whisper session twice")
}
closed = true
free(pointer)
}
}

@@ -1,110 +1,33 @@
package net.cozic.joplin.audio
import ai.onnxruntime.OnnxTensor
import ai.onnxruntime.OrtEnvironment
import ai.onnxruntime.OrtSession
import ai.onnxruntime.extensions.OrtxPackage
import android.annotation.SuppressLint
import android.content.Context
import android.util.Log
import java.io.Closeable
import java.nio.FloatBuffer
import java.nio.IntBuffer
import kotlin.time.DurationUnit
import kotlin.time.measureTimedValue
class SpeechToTextConverter(
modelPath: String,
locale: String,
prompt: String,
recorderFactory: AudioRecorderFactory,
private val environment: OrtEnvironment,
context: Context,
) : Closeable {
private val recorder = recorderFactory(context)
private val session: OrtSession = environment.createSession(
modelPath,
OrtSession.SessionOptions().apply {
// Needed for audio decoding
registerCustomOpLibrary(OrtxPackage.getLibraryPath())
},
)
private val languageCode = Regex("_.*").replace(locale, "")
private val decoderInputIds = when (languageCode) {
// Add 50363 to the end to omit timestamps
"en" -> intArrayOf(50258, 50259, 50359)
"fr" -> intArrayOf(50258, 50265, 50359)
"es" -> intArrayOf(50258, 50262, 50359)
"de" -> intArrayOf(50258, 50261, 50359)
"it" -> intArrayOf(50258, 50274, 50359)
"nl" -> intArrayOf(50258, 50271, 50359)
"ko" -> intArrayOf(50258, 50264, 50359)
"th" -> intArrayOf(50258, 50289, 50359)
"ru" -> intArrayOf(50258, 50263, 50359)
"pt" -> intArrayOf(50258, 50267, 50359)
"pl" -> intArrayOf(50258, 50269, 50359)
"id" -> intArrayOf(50258, 50275, 50359)
"hi" -> intArrayOf(50258, 50276, 50359)
// Let Whisper guess the language
else -> intArrayOf(50258)
}
private var whisper = NativeWhisperLib(
modelPath,
languageCode,
prompt,
)
fun start() {
recorder.start()
}
private fun getInputs(data: FloatArray): MutableMap<String, OnnxTensor> {
fun intTensor(value: Int) = OnnxTensor.createTensor(
environment,
IntBuffer.wrap(intArrayOf(value)),
longArrayOf(1),
)
fun floatTensor(value: Float) = OnnxTensor.createTensor(
environment,
FloatBuffer.wrap(floatArrayOf(value)),
longArrayOf(1),
)
val audioPcmTensor = OnnxTensor.createTensor(
environment,
FloatBuffer.wrap(data),
longArrayOf(1, data.size.toLong()),
)
val decoderInputIdsTensor = OnnxTensor.createTensor(
environment,
IntBuffer.wrap(decoderInputIds),
longArrayOf(1, decoderInputIds.size.toLong())
)
return mutableMapOf(
"audio_pcm" to audioPcmTensor,
"max_length" to intTensor(412),
"min_length" to intTensor(0),
"num_return_sequences" to intTensor(1),
"num_beams" to intTensor(1),
"length_penalty" to floatTensor(1.1f),
"repetition_penalty" to floatTensor(3f),
"decoder_input_ids" to decoderInputIdsTensor,
// Required for timestamps
"logits_processor" to intTensor(1)
)
}
// TODO .get() fails on older Android versions
@SuppressLint("NewApi")
private fun convert(data: FloatArray): String {
val (inputs, convertInputsTime) = measureTimedValue {
getInputs(data)
}
val (outputs, getOutputsTime) = measureTimedValue {
session.run(inputs, setOf("str"))
}
val mainOutput = outputs.get("str").get().value as Array<Array<String>>
outputs.close()
Log.i("Whisper", "Converted ${data.size / 16000}s of data in ${
getOutputsTime.toString(DurationUnit.SECONDS, 2)
} converted inputs in ${convertInputsTime.inWholeMilliseconds}ms")
return mainOutput[0][0]
Log.d("Whisper", "Pre-transcribe data of size ${data.size}")
val result = whisper.transcribe(data)
Log.d("Whisper", "Post transcribe. Got $result")
return result;
}
fun dropFirstSeconds(seconds: Double) {
@@ -114,23 +37,26 @@ class SpeechToTextConverter(
val bufferLengthSeconds: Double get() = recorder.bufferLengthSeconds
fun expandBufferAndConvert(seconds: Double): String {
recorder.pullNextSeconds(seconds)
// Also pull any extra available data, in case the speech-to-text converter
// is lagging behind the audio recorder.
recorder.pullAvailable()
return convert(recorder.bufferedData)
fun convertNext(seconds: Double): String {
val buffer = recorder.pullNextSeconds(seconds)
val result = convert(buffer)
dropFirstSeconds(seconds)
return result
}
// Converts as many seconds of buffered data as possible, without waiting
fun expandBufferAndConvert(): String {
recorder.pullAvailable()
return convert(recorder.bufferedData)
fun convertRemaining(): String {
val buffer = recorder.pullAvailable()
return convert(buffer)
}
fun getPreview(): String {
return whisper.getPreview()
}
override fun close() {
Log.d("Whisper", "Close")
recorder.close()
session.close()
whisper.close()
}
}

@@ -1,6 +1,5 @@
package net.cozic.joplin.audio
import ai.onnxruntime.OrtEnvironment
import com.facebook.react.ReactPackage
import com.facebook.react.bridge.LifecycleEventListener
import com.facebook.react.bridge.NativeModule
@@ -24,7 +23,6 @@ class SpeechToTextPackage : ReactPackage {
class SpeechToTextModule(
private var context: ReactApplicationContext,
) : ReactContextBaseJavaModule(context), LifecycleEventListener {
private var environment: OrtEnvironment? = null
private val executorService: ExecutorService = Executors.newFixedThreadPool(1)
private val sessionManager = SpeechToTextSessionManager(executorService)
@@ -32,21 +30,24 @@ class SpeechToTextPackage : ReactPackage {
override fun onHostResume() { }
override fun onHostPause() { }
override fun onHostDestroy() {
environment?.close()
override fun onHostDestroy() { }
@ReactMethod
fun runTests(promise: Promise) {
try {
NativeWhisperLib.runTests()
promise.resolve(true)
} catch (exception: Throwable) {
promise.reject(exception)
}
}
@ReactMethod
fun openSession(modelPath: String, locale: String, promise: Promise) {
fun openSession(modelPath: String, locale: String, prompt: String, promise: Promise) {
val appContext = context.applicationContext
// Initialize environment as late as possible:
val ortEnvironment = environment ?: OrtEnvironment.getEnvironment()
if (environment != null) {
environment = ortEnvironment
}
try {
val sessionId = sessionManager.openSession(modelPath, locale, ortEnvironment, appContext)
val sessionId = sessionManager.openSession(modelPath, locale, prompt, appContext)
promise.resolve(sessionId)
} catch (exception: Throwable) {
promise.reject(exception)
@@ -69,8 +70,8 @@ class SpeechToTextPackage : ReactPackage {
}
@ReactMethod
fun expandBufferAndConvert(sessionId: Int, duration: Double, promise: Promise) {
sessionManager.expandBufferAndConvert(sessionId, duration, promise)
fun convertNext(sessionId: Int, duration: Double, promise: Promise) {
sessionManager.convertNext(sessionId, duration, promise)
}
@ReactMethod
@@ -78,6 +79,11 @@ class SpeechToTextPackage : ReactPackage {
sessionManager.convertAvailable(sessionId, promise)
}
@ReactMethod
fun getPreview(sessionId: Int, promise: Promise) {
sessionManager.getPreview(sessionId, promise)
}
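// Illustrative sketch (assumed JS-side module name, not part of this diff) of
// how these methods might be driven from JavaScript:
//
//     const id = await SpeechToText.openSession(modelPath, 'en_US', prompt);
//     const text = await SpeechToText.convertNext(id, 2.0); // next ~2s of audio
//     const preview = await SpeechToText.getPreview(id);    // live preview text
//     await SpeechToText.closeSession(id);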
@ReactMethod
fun closeSession(sessionId: Int, promise: Promise) {
sessionManager.closeSession(sessionId, promise)

@@ -1,6 +1,5 @@
package net.cozic.joplin.audio
import ai.onnxruntime.OrtEnvironment
import android.content.Context
import com.facebook.react.bridge.Promise
import java.util.concurrent.Executor
@@ -21,13 +20,13 @@ class SpeechToTextSessionManager(
fun openSession(
modelPath: String,
locale: String,
environment: OrtEnvironment,
prompt: String,
context: Context,
): Int {
val sessionId = nextSessionId++
sessions[sessionId] = SpeechToTextSession(
SpeechToTextConverter(
modelPath, locale, recorderFactory = AudioRecorder.factory, environment, context,
modelPath, locale, prompt, recorderFactory = AudioRecorder.factory, context,
)
)
return sessionId
@@ -87,9 +86,9 @@ class SpeechToTextSessionManager(
}
// Waits for the next [duration] seconds to become available, then converts
fun expandBufferAndConvert(sessionId: Int, duration: Double, promise: Promise) {
fun convertNext(sessionId: Int, duration: Double, promise: Promise) {
this.concurrentWithSession(sessionId, promise::reject) { session ->
val result = session.converter.expandBufferAndConvert(duration)
val result = session.converter.convertNext(duration)
promise.resolve(result)
}
}
@@ -97,7 +96,14 @@ class SpeechToTextSessionManager(
// Converts all available recorded data
fun convertAvailable(sessionId: Int, promise: Promise) {
this.concurrentWithSession(sessionId, promise::reject) { session ->
val result = session.converter.expandBufferAndConvert()
val result = session.converter.convertRemaining()
promise.resolve(result)
}
}
fun getPreview(sessionId: Int, promise: Promise) {
this.concurrentWithSession(sessionId, promise::reject) { session ->
val result = session.converter.getPreview()
promise.resolve(result)
}
}

@@ -0,0 +1,9 @@
whisper.cpp/.gitmodules
whisper.cpp/scripts/
whisper.cpp/samples/
whisper.cpp/tests/
whisper.cpp/models/
whisper.cpp/examples/
whisper.cpp/.*/
whisper.cpp/bindings/
whisper.cpp/**/*.Dockerfile

@@ -0,0 +1,7 @@
# Vendored Android packages
This directory contains upstream packages that can't be added as direct dependencies (e.g. through `npm`).
## whisper.cpp
`whisper.cpp` provides voice typing capabilities. It can be updated by replacing the contents of the `whisper.cpp` directory with the latest content from https://github.com/ggerganov/whisper.cpp. To decrease the size of the `whisper.cpp` directory, some files are ignored by the `.gitignore`.

@@ -0,0 +1,60 @@
*.o
*.a
*.d
.cache/
.coreml/
.test/
.venv/
.vs/
.vscode/
.DS_Store
.vimspector.json
/CMakeSettings.json
/talk-llama.dSYM/
build/
build-*/
# SPM
.build/
.swiftpm
*.metallib
ggml-metal-embed.metal
ggml-metal-embed.metal.tmp
/main
/stream
/command
/talk
/talk-llama
/bench
/quantize
/server
/lsp
arm_neon.h
sync.sh
libwhisper.a
libwhisper.so
compile_commands.json
examples/arm_neon.h
examples/whisper.objc/whisper.objc.xcodeproj/xcshareddata
examples/whisper.objc/whisper.objc.xcodeproj/xcuserdata/
examples/whisper.objc/whisper.objc.xcodeproj/project.xcworkspace/xcuserdata
extra/bench-gg.txt
models/*.mlmodel
models/*.mlmodelc
models/*.mlpackage
bindings/java/.gradle/
bindings/java/.idea/
.idea/
benchmark_results.csv
cmake-build-debug/
.cxx/
.gradle/
local.properties

@@ -0,0 +1,510 @@
# date: Tue Feb 4 13:03:35 EET 2025
# this file is auto-generated by scripts/gen-authors.sh
0/0 <zero@imaskeleton.me>
0cc4m <picard12@live.de>
0xsourcecode <134374803+0xsourcecode@users.noreply.github.com>
65a <10104049+65a@users.noreply.github.com>
AIWintermuteAI <32562299+AIWintermuteAI@users.noreply.github.com>
AT <manyoso@users.noreply.github.com>
Aarni Koskela <akx@iki.fi>
Aaron Pham <29749331+aarnphm@users.noreply.github.com>
Aaron Taylor <aaron@exphat.com>
Abhilash Majumder <30946547+abhilash1910@users.noreply.github.com>
Abitofevrything <54505189+abitofevrything@users.noreply.github.com>
Adam Jones <domdomegg+git@gmail.com>
Adrien Gallouët <adrien@gallouet.fr>
Adrien Gallouët <angt@huggingface.co>
AfryMask <AfryMask@163.com>
Ahmad Bilal <ahmad.bilal@empglabs.com>
Ahmad Tameem <113388789+Tameem-10xE@users.noreply.github.com>
AidanBeltonS <87009434+AidanBeltonS@users.noreply.github.com>
AidanBeltonS <aidan.belton@codeplay.com>
Akarshan Biswas <akarshan.biswas@gmail.com>
Akarshan Biswas <akarshanbiswas@fedoraproject.org>
Akash Mahajan <akash7190@gmail.com>
Akash Mahajan <akashmjn@stanford.edu>
Al Hoang <3811822-hoanga@users.noreply.gitlab.com>
Alan <unknown>
Albert Jin <albert.jin@gmail.com>
Alberto Cabrera Pérez <alberto.cabrera@codeplay.com>
Alberto Cabrera Pérez <alberto.cabrera@intel.com>
Aleksander Andrzejewski <18704749+aleksanderandrzejewski@users.noreply.github.com>
Alex Azarov <alex@azarov.by>
Alex Bacart <13940752+alex-bacart@users.noreply.github.com>
Alex Evgrashin <aevgrashin@yandex.ru>
Alex O'Connell <35843486+acon96@users.noreply.github.com>
Alexandr Graschenkov <alexandr.graschenkov91@gmail.com>
Alexandru Mariuti <alex@mariuti.com>
Alexey Kharlamov <alexey@kharlamov.biz>
Alfredo Montesinos <alfredo.montesinos@g.austincc.edu>
Ali Alameh <ali.alameh@isae.edu.lb>
Alter <0x7c48@gmail.com>
Ananta Bastola <anantarajbastola@gmail.com>
Andreas Kieslinger <47689530+aendk@users.noreply.github.com>
Andreas Lubbe <git@lubbe.org>
Andreu Huguet <andreuhuguet@gmail.com>
Andrew Huynh <a5thuynh@gmail.com>
Andrew Minh Nguyen <40281306+amqdn@users.noreply.github.com>
Andrew S <andrews54757@gmail.com>
Andy Maloney <asmaloney@gmail.com>
Anton Kostin <masguit42@users.noreply.github.com>
Artyom Mezin <psycho.fading@gmail.com>
Asad Memon <asad.lionpk@gmail.com>
Ashraful Islam <ashraful.meche@gmail.com>
AsukaMinato <asukaminato@nyan.eu.org>
AustinMroz <austinmroz@utexas.edu>
Avik Sengupta <avik@sengupta.net>
Bader-eddine Ouaich <49657842+baderouaich@users.noreply.github.com>
Baffin Lee <baffinlee@gmail.com>
Ben Ashbaugh <ben.ashbaugh@intel.com>
Ben Nortier <bjnortier@gmail.com>
Benjamin Heiniger <benjamin.heiniger@bluewin.ch>
Bernhard M. Wiedemann <githubbmwprimary@lsmod.de>
Binozo <70137898+Binozo@users.noreply.github.com>
Bo-Yi Wu <appleboy.tw@gmail.com>
Boris Bliznioukov <blib@mail.com>
Borislav Stanimirov <b.stanimirov@abv.bg>
Brad Murray <59848399+bradmurray-dt@users.noreply.github.com>
Brian Murray <brian@bmurray.ca>
CRD716 <crd716@gmail.com>
Canis Lupus <Canis-UK@users.noreply.github.com>
Carlos Zoido <mrgalleta@gmail.com>
Carolinabanana <140120812+Carolinabanana@users.noreply.github.com>
CarterLi999 <664681047@qq.com>
ChangSeok Oh <shivamidow@users.noreply.github.com>
Changyeon Kim <cyzero.kim@samsung.com>
Chaoqun <27287694+OpenWaygate@users.noreply.github.com>
Charles Xu <63788048+chaxu01@users.noreply.github.com>
Charles Xu <charles.xu@arm.com>
Chen Xi <xi2.chen@intel.com>
Chen Xi <xixichen08@foxmail.com>
Chenguang Li <87689256+noemotiovon@users.noreply.github.com>
Chia-Hsiang Cheng <88014292+garychia@users.noreply.github.com>
Chidi Williams <williamschidi1@gmail.com>
Chris Elrod <elrodc@gmail.com>
Christian <12550267+iceychris@users.noreply.github.com>
Christian Kastner <ckk@kvr.at>
Clifford Heath <clifford.heath@gmail.com>
Clint Herron <hanclinto@gmail.com>
Colin <github@whoisc.cc>
Conrad Kramer <conrad@conradkramer.com>
Corey Earwood <iamcgn+github@gmail.com>
CrispStrobe <154636388+CrispStrobe@users.noreply.github.com>
DAN™ <dranger003@gmail.com>
DGdev91 <DGdev91@users.noreply.github.com>
Damian Czaja <trojan295@protonmail.com>
Dan Johansson <164997844+eddnjjn@users.noreply.github.com>
Dan Johansson <dan.johansson@arm.com>
Daniel Bevenius <daniel.bevenius@gmail.com>
Daniel Valdivia <18384552+dvaldivia@users.noreply.github.com>
Daniel Ziegenberg <daniel@ziegenberg.at>
Daniele <57776841+daniandtheweb@users.noreply.github.com>
Dave <dave-fl@users.noreply.github.com>
Dave Airlie <airlied@gmail.com>
Dave Airlie <airlied@redhat.com>
Daven Sanassy <daven@vochlea.co.uk>
David <dnhkng@gmail.com>
David Thorpe <djt@mutablelogic.com>
DavidKorczynski <david@adalogics.com>
Davidson Francis <davidsondfgl@gmail.com>
Dener Stassun <denerstassun@gmail.com>
Dibakar Gope <dibakar.gope@arm.com>
Didzis Gosko <didzis@users.noreply.github.com>
Diego Devesa <slarengh@gmail.com>
Digipom <admin@digipom.com>
Dimo <dimo@ieee.org>
Djip007 <3705339+Djip007@users.noreply.github.com>
Djip007 <djip.perois@free.fr>
Dody Suria Wijaya <dodysw@gmail.com>
Dou Xinpeng <15529241576@163.com>
Dou Xinpeng <81913537+Dou-Git@users.noreply.github.com>
Dr. Tom Murphy VII Ph.D <499244+tom7@users.noreply.github.com>
Duncan McConnell <ddmcconnell4@gmail.com>
Egor Egorov <me@egorfine.com>
Elkana Bardugo <ttv200@gmail.com>
Emmanuel Schmidbauer <eschmidbauer@gmail.com>
Engininja2 <139037756+Engininja2@users.noreply.github.com>
Eric Curtin <ericcurtin17@gmail.com>
Eric Swanson <eswanson@alloscomp.com>
Eric Tendian <erictendian@gmail.com>
Eric Zhang <34133756+EZForever@users.noreply.github.com>
Erik Scholz <Green-Sky@users.noreply.github.com>
Evan Jones <evan.q.jones@gmail.com>
Evan Martin <evan.martin@gmail.com>
Eve <139727413+netrunnereve@users.noreply.github.com>
Evgeny Kuznetsov <evgeny@kuznetsov.md>
F1L1P <78918286+F1L1Pv2@users.noreply.github.com>
Faisal Zaghloul <quic_fzaghlou@quicinc.com>
Fangjun Kuang <csukuangfj@gmail.com>
Felix <stenbackfelix@gmail.com>
Finn Voorhees <finnvoorhees@gmail.com>
FirstTimeEZ <179362031+FirstTimeEZ@users.noreply.github.com>
FlippFuzz <41221030+FlippFuzz@users.noreply.github.com>
Frankie Robertson <frankier@users.noreply.github.com>
Gang Chen <goncha@gmail.com>
Gavin Cai <gavin1818@hotmail.com>
George Hindle <george@georgehindle.com>
Georgi Gerganov <ggerganov@gmail.com>
Gilad S <7817232+giladgd@users.noreply.github.com>
Gilad S <giladgd@users.noreply.github.com>
Gilad S. <7817232+giladgd@users.noreply.github.com>
GitAritron <103900385+GitAritron@users.noreply.github.com>
GiviMAD <GiviMAD@users.noreply.github.com>
Gleicon Moraes <gleicon@gmail.com>
Gregor Jasny <gjasny@googlemail.com>
Guillaume Wenzek <gwenzek@users.noreply.github.com>
HY. Kelvin Lee <34256578+hykelvinlee42@users.noreply.github.com>
Halalaluyafail3 <55773281+Halalaluyafail3@users.noreply.github.com>
Hang <bebound@gmail.com>
Haus1 <haus.xda@gmail.com>
Herman Semenov <GermanAizek@yandex.ru>
HimariO <dsfhe49854@gmail.com>
Hong Bo PENG <penghb@cn.ibm.com>
Hrishikesh Barman <geekodour@users.noreply.github.com>
Hugo <hugo@whynothugo.nl>
Ian Bicking <ian@ianbicking.org>
Ian Bull <irbull@eclipsesource.com>
Ihar Hrachyshka <ihrachys@redhat.com>
Ikko Ashimine <eltociear@gmail.com>
Ikko Eltociear Ashimine <eltociear@gmail.com>
InconsolableCellist <23345188+InconsolableCellist@users.noreply.github.com>
Ismatulla Mansurov <47342870+sapoepsilon@users.noreply.github.com>
Ivan <nekotekina@gmail.com>
Ivan Filipov <159561759+vanaka11@users.noreply.github.com>
Ivan Gorin <ivangorin21@gmail.com>
Ivo von Putzer Reibegg <ivo.putzer@gmail.com>
JJ <103335846+computerscienceiscool@users.noreply.github.com>
Jack Mousseau <jmousseau@users.noreply.github.com>
JacobLinCool <jacoblincool@gmail.com>
Jakub Ráček <blizzcz@gmail.com>
Jared Van Bortel <jared@nomic.ai>
Jay Binks <jaybinks@gmail.com>
Jayant <jayantyadav202@gmail.com>
Jeff Bolz <jbolz@nvidia.com>
Jeroen Mostert <jeroen.mostert@cm.com>
Jhen-Jie Hong <developer@jhen.me>
Jhen-Jie Hong <iainst0409@gmail.com>
JidongZhang-THU <1119708529@qq.com>
Jo Liss <joliss42@gmail.com>
Joe Todd <joe.todd@codeplay.com>
Johan <jr.raffin@gmail.com>
Johannes Gäßler <johannesg@5d6.de>
John Balis <phobossystems@gmail.com>
JohnnyB <jboero@users.noreply.github.com>
Jonathan Soo <jcsoo@agora.com>
Jonno <1160532+razodactyl@users.noreply.github.com>
Joonas Pihlajamaa <joonas.pihlajamaa@iki.fi>
Jose <34888496+Jerry-Master@users.noreply.github.com>
Josh Bleecher Snyder <josharian@gmail.com>
Josscii <jossciiweiyi@gmail.com>
Judd <foldl@users.noreply.github.com>
Jumper775 <78500318+jumpers775@users.noreply.github.com>
Jun Hee Yoo <contact.jhyoo@gmail.com>
Junil Kim <logyourself@gmail.com>
Justina Cho <justcho5@gmail.com>
Justine Tunney <jtunney@gmail.com>
Justine Tunney <jtunney@mozilla.com>
KITAITI Makoto <KitaitiMakoto@gmail.com>
KP Kaiser <kirk@zothcorp.com>
Kamilake <exjang0@gmail.com>
Karol Kontny <82021046+kkontny@users.noreply.github.com>
Karthick <j.karthic2004@gmail.com>
Kartik Saranathan <278928+Kartiku@users.noreply.github.com>
Kasumi <90275229+kasumi-1@users.noreply.github.com>
Kawrakow <48489457+ikawrakow@users.noreply.github.com>
Kendrick Taylor <kendrick@circuitsix.com>
Kevin Brothaler <admin@digipom.com>
Kevin Gibbons <bakkot@gmail.com>
Konosuke Sakai <konosuke@konosuke.work>
Konstantin Zhuravlyov <konstantin.zhuravlyov@amd.com>
Kreijstal <rainb@tfwno.gf>
Kylin <56434533+KyL0N@users.noreply.github.com>
LBlue <153975653+lbluep@users.noreply.github.com>
Larry Battle <larry.battle.tech@gmail.com>
Laytan Laats <laytanlaats@hotmail.com>
Leo Moll <leo.moll@yeasoft.com>
Lexevolution <31176843+Lexevolution@users.noreply.github.com>
LittleLoli <26589867+WhichWho@users.noreply.github.com>
Lucas Zanek <57494138+LucasZNK@users.noreply.github.com>
Luis Herrera <herrera-luis@users.noreply.github.com>
Lukas Rist <glaslos@gmail.com>
M. A. Ali <73258591+MightyStud@users.noreply.github.com>
M. Eren Akbiyik <erenakbiyik@gmail.com>
Ma Mingfei <mingfei.ma@intel.com>
Maciek <maciek.mab122@gmail.com>
Mahesh Madhav <67384846+heshpdx@users.noreply.github.com>
Marcin Mielniczuk <marmistrz.dev@zoho.eu>
Mark Karpelès <MagicalTux@users.noreply.github.com>
Mark Zhuang <zhuangqiubin@gmail.com>
Markus Tavenrath <mtavenrath@users.noreply.github.com>
Martin Delille <martin@delille.org>
Martin Warnaar <martinwarnaar@gmail.com>
Masaya, Kato <62578291+msy-kato@users.noreply.github.com>
Matheus de Sousa <23645013+keyehzy@users.noreply.github.com>
Mathieu Baudier <mbaudier@argeo.org>
Mathijs de Bruin <mathijs@mathijsfietst.nl>
Matija Pevec <mightymatth@users.noreply.github.com>
Matt Stephenson <mstephenson6@users.noreply.github.com>
Max Krasnyansky <max.krasnyansky@gmail.com>
Max Krasnyansky <quic_maxk@quicinc.com>
Maximiliano Levi <8160966+maxilevi@users.noreply.github.com>
Meng, Hengyu <hengyu.meng@intel.com>
Mengqing Cao <cmq0113@163.com>
Michael Podvitskiy <podvitskiymichael@gmail.com>
Michael Rienstra <mrienstra@gmail.com>
Mikhail Grigorev <sleuthhound@gmail.com>
Mohammadreza Hendiani <hendiani.mohammadreza@gmail.com>
Mohit Agarwal <mohit@sdf.org>
Molly Sophia <mollysophia379@gmail.com>
Murilo Santana <mvrilo@gmail.com>
NETZkultur GmbH <mulholland@netzkultur.de>
Natsu <chino@hotococoa.moe>
Neil Chudleigh <nchudleigh@users.noreply.github.com>
Neo Zhang <14088817+arthw@users.noreply.github.com>
Neo Zhang Jianyu <jianyu.zhang@intel.com>
Neuman Vong <neuman.vong@gmail.com>
Nicholai Tukanov <nicholaitukanov@gmail.com>
Nicholas Albion <nalbion@yahoo.com>
Nico Bosshard <nico@bosshome.ch>
Nicolò Scipione <nicolo.scipione@codeplay.com>
Niels Mayer <Niels.Mayer@gmail.com>
Nikita Sarychev <42014488+sARY77@users.noreply.github.com>
Nikolaj Olsson <nikse.dk@gmail.com>
Okabintaro <103938900+Okabintaro@users.noreply.github.com>
Oleg Sidorov <me@whitebox.io>
Oleg Sidorov <oleg@sidorov.nl>
Olivier Chafik <ochafik@users.noreply.github.com>
Ondrej Kokes <ondrej.kokes@gmail.com>
Ouadie EL FAROUKI <ouadie.elfarouki@codeplay.com>
PAB <pierreantoine.bannier@gmail.com>
Paul Tsochantaris <ptsochantaris@icloud.com>
Pedro Probst <pprobst@insiberia.net>
Peng <hzp1024@qq.com>
Peter <peter277@users.noreply.github.com>
Philipp Zabel <philipp.zabel@gmail.com>
Philippe Normand <phil@base-art.net>
Philippe Normand <philn@igalia.com>
Plamen Minev <pacominev@gmail.com>
Prashant Vithule <119530321+Vithulep@users.noreply.github.com>
Przemysław Pawełczyk <przemoc@gmail.com>
Qianhe Chen <54462604+chenqianhe@users.noreply.github.com>
R0CKSTAR <xiaodong.ye@mthreads.com>
R0CKSTAR <yeahdongcn@gmail.com>
Radoslav Gerganov <rgerganov@gmail.com>
Radosław Gryta <radek.gryta@gmail.com>
Rahul Vadhyar <107788610+RahulVadhyar@users.noreply.github.com>
Raiya Araki <83504221+rai62@users.noreply.github.com>
Reinforce-II <fate@eastal.com>
Reinis Muiznieks <muiznieks.reinis@gmail.com>
RelatedTitle <r3latedtitle@gmail.com>
Rémy Oudompheng <oudomphe@phare.normalesup.org>
RhinoDevel <RhinoDevel@users.noreply.github.com>
Rich Jones <miserlou@gmail.com>
Robert Ormandi <52251610+ormandi@users.noreply.github.com>
Robin <robin.xw@hotmail.com>
Roddur Dasgupta <roddurd@gmail.com>
Roland Rabien <figbug@gmail.com>
Romain Biessy <romain.biessy@codeplay.com>
Ronsor <ronsor@ronsor.pw>
Rotem Dan <rotemdan@gmail.com>
Ryan Hitchman <hitchmanr@gmail.com>
Ryan Metcalfe <107415876+RyanMetcalfeInt8@users.noreply.github.com>
RyanChang <ftes90015@gmail.com>
SRHMorris <69468379+SRHMorris@users.noreply.github.com>
SXX <sxx1136965276@gmail.com>
Sacha Arbonel <sacha.arbonel@hotmail.fr>
Salman Faroz <stsfaroz@gmail.com>
Salvatore Mesoraca <s.mesoraca16@gmail.com>
Sam <49637763+Onlyartist9@users.noreply.github.com>
Sam Pullara <spullara@gmail.com>
Samuel Durante <44513615+samueldurantes@users.noreply.github.com>
Sanchit Gandhi <93869735+sanchit-gandhi@users.noreply.github.com>
Sandro Hanea <40202887+sandrohanea@users.noreply.github.com>
Sergio López <slp@redhat.com>
Sergio López <slp@sinrega.org>
Shanshan Shen <467638484@qq.com>
Shijie <821898965@qq.com>
Shupei Fan <dymarkfan@outlook.com>
Siddharth Ramakrishnan <srr2141@columbia.edu>
Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
Simon Moisselin <simon.moisstoll@gmail.com>
Sindre Sorhus <sindresorhus@gmail.com>
Slava Primenko <primenko.s@gmail.com>
Srihari-mcw <96763064+Srihari-mcw@users.noreply.github.com>
Stavros Panakakis <53979866+Stavrospanakakis@users.noreply.github.com>
Stefan Sydow <s.sydow@heinlein-video.de>
Stefan Sydow <stefan@sydow.email>
Syahmi Azhar <prsyahmi@gmail.com>
Syed Jafri <syedjafri97@gmail.com>
Sơn Phan Trung <phantrungson17@gmail.com>
Taisei Mima <bhbstar.me@gmail.com>
Takeshi Inoue <inoue.takeshi@gmail.com>
Tamotsu Takahashi <ttakah+github@gmail.com>
Taras Glek <taras@thegp.com>
Tauseef Mohiuddin <35351464+tauseefmohammed2@users.noreply.github.com>
Thamster <Thamster@users.noreply.github.com>
Thijs Raymakers <thijs@raymakers.nl>
Thomas Fitzsimmons <fitzsim@fitzsim.org>
Tiago Fassoni <tiagofassoni@users.noreply.github.com>
Tienshiao Ma <tienshiao@tienshiao.org>
Tim Miller <drasticactions@users.noreply.github.com>
Timothy Cronin <40186632+4imothy@users.noreply.github.com>
Tobrun <tobrun.van.nuland@gmail.com>
Todd <taf2@users.noreply.github.com>
Toliver <teejae@gmail.com>
Tong Li <31761981+litongjava@users.noreply.github.com>
Tony Wasserka <4840017+neobrain@users.noreply.github.com>
Topping1 <78745143+Topping1@users.noreply.github.com>
Travis Cline <travis.cline@gmail.com>
UEXTM.com <84163508+uextm@users.noreply.github.com>
UsernamesLame <156965854+UsernamesLame@users.noreply.github.com>
Vadim Peretokin <vperetokin@hey.com>
Valentin Gosu <1454649+valenting@users.noreply.github.com>
Vin Misra <vinith@alum.mit.edu>
Vulcan <93451215+trholding@users.noreply.github.com>
WhiteOlivierus <36532695+WhiteOlivierus@users.noreply.github.com>
William Tambellini <william.tambellini@gmail.com>
William Tambellini <wtambellini@sdl.com>
Wilson Silva <wilson.dsigns@gmail.com>
Xiang (Kevin) Li <kevinli020508@gmail.com>
Xiao-Yong Jin <jinxiaoyong@gmail.com>
XiaotaoChen <chenxiaotao1234@gmail.com>
Xingchen Song(宋星辰) <xingchensong1996@163.com>
Xinpeng Dou <81913537+Dou-Git@users.noreply.github.com>
Xuan Son Nguyen <thichthat@gmail.com>
Yajing Tang <phillis@google.com>
Yang Shen <aplshenyang@gmail.com>
Yunès <jean.baptiste.yunes@free.fr>
Yuri Khrustalev <ykhrustalev@users.noreply.github.com>
Yusuf Redžić <48274562+redzic@users.noreply.github.com>
ZaBlazzingZephyrus <119159668+blazingzephyr@users.noreply.github.com>
Zhenwei Jin <109658203+kylo5aby@users.noreply.github.com>
Zhiyuan Li <lizhiyuan@uniartisan.com>
Zhiyuan Li <uniartisan2017@gmail.com>
Zigfrid Zvezdin <ziggerZZ@gmail.com>
Zollner <24618122+Zolliner@users.noreply.github.com>
a3sh <38979186+A3shTnT@users.noreply.github.com>
ag2s20150909 <19373730+ag2s20150909@users.noreply.github.com>
agray3 <agray3@users.noreply.github.com>
ai-at-home <149282006+ai-at-home@users.noreply.github.com>
aldorof <aldorof@users.noreply.github.com>
alonfaraj <alonfaraj@gmail.com>
amd-dwang <dong.wang@amd.com>
amritahs-ibm <amritahs@linux.vnet.ibm.com>
andypayne <apayne@gmail.com>
ardfork <134447697+ardfork@users.noreply.github.com>
arizhih <40765267+arizhih@users.noreply.github.com>
automaticcat <daogiatuank54@gmail.com>
bandoti <141645996+bandoti@users.noreply.github.com>
be-next <jerome.ramette@gmail.com>
bert hubert <bert@hubertnet.nl>
billyct <billy_allen@126.com>
bmwl <brian.marshall@tolko.com>
bobqianic <129547291+bobqianic@users.noreply.github.com>
bocytko <bocytko+github@gmail.com>
boolemancer <48014766+boolemancer@users.noreply.github.com>
boolemancer <boolemancer@gmail.com>
bradmit <151883577+bradmit@users.noreply.github.com>
brunofaustino <b.fa.amorim@gmail.com>
bssrdf <merlintiger@hotmail.com>
byte-6174 <88070277+byte-6174@users.noreply.github.com>
cdosoftei <ciprian.dosoftei@gmail.com>
clach04 <Chris.Clark@actian.com>
compilade <113953597+compilade@users.noreply.github.com>
compilade <git@compilade.net>
conradg <conradjgodfrey@gmail.com>
crummyh <elijah@crums.us>
ddpasa <112642920+ddpasa@users.noreply.github.com>
denersc <denerstassun@gmail.com>
dscripka <dscripka@users.noreply.github.com>
duthils <duthils@duthils.net>
ecneladis <ecneladis@users.noreply.github.com>
faker <nspyia2002@gmail.com>
fitzsim <fitzsim@fitzsim.org>
fj-y-saito <85871716+fj-y-saito@users.noreply.github.com>
fraxy-v <65565042+fraxy-v@users.noreply.github.com>
genevera (she/her) <genevera@users.noreply.github.com>
geniusnut <geniusnut@gmail.com>
gilbertgong <gilbert.gong@gmail.com>
gn64 <yukikaze.jp@gmail.com>
goldwaving <77494627+goldwaving@users.noreply.github.com>
greeshmay <greeshmay@gmail.com>
haopeng <657407891@qq.com>
hipudding <huafengchun@gmail.com>
hsinhoyeh <yhh92u@gmail.com>
hydai <z54981220@gmail.com>
iamthad <thadeus.j.fleming@gmail.com>
issixx <46835150+issixx@users.noreply.github.com>
james wolf <contractorwolf@hotmail.com>
jdomke <28772296+jdomke@users.noreply.github.com>
jettoblack <jettoblack@gmail.com>
jiez <373447296@qq.com>
joecryptotoo <80373433+joecryptotoo@users.noreply.github.com>
jorismertz <35079666+jorismertz@users.noreply.github.com>
junchao-loongson <68935141+junchao-loongson@users.noreply.github.com>
junkfood <69683722+JunkFood02@users.noreply.github.com>
jwijffels <jwijffels@bnosac.be>
k.h.lai <adrian.k.h.lai@outlook.com>
kamranjon <kamranjon@gmail.com>
katsu560 <katsu560oo-@docomo.ne.jp>
kennethge <57784063+kenneth-ge@users.noreply.github.com>
keyehzy <msamuel@aluno.puc-rio.br>
kunnis <kunnis@users.noreply.github.com>
l3utterfly <gc.pthzfoldr@gmail.com>
leejet <leejet714@gmail.com>
leo-pony <nengjunma@outlook.com>
lhez <quic_lih@quicinc.com>
litong <31761981+litongjava@users.noreply.github.com>
liuwei-git <14815172+liuwei-git@users.noreply.github.com>
lnyan <lkwq007@gmail.com>
luoyu-intel <yu.luo@intel.com>
m.bell <m.bell@techsmith.com>
mahorozte <41834471+mahorozte@users.noreply.github.com>
mashizora <30516315+mashizora@users.noreply.github.com>
matt23654 <matthew.webber@protonmail.com>
matteo <matteogeniaccio@yahoo.it>
mgrachten <maarten@grachten.eu>
mkiol <mkiol@users.noreply.github.com>
mky_coder <47767389+mkycoder@users.noreply.github.com>
novag <7754358+novag@users.noreply.github.com>
pajowu <pajowu@pajowu.de>
pengxin99 <pengxin.yuan@intel.com>
petterreinholdtsen <pere-github@hungry.com>
polarmoon <90010972+polarmoon@users.noreply.github.com>
rlapray <lapray.romain@gmail.com>
sandrohanea <40202887+sandrohanea@users.noreply.github.com>
semiformal-net <84111142+semiformal-net@users.noreply.github.com>
shibukazu <61775791+shibukazu@users.noreply.github.com>
shikokuchuo <53399081+shikokuchuo@users.noreply.github.com>
slaren <slarengh@gmail.com>
slashlib <slashlib@users.noreply.github.com>
snadampal <87143774+snadampal@users.noreply.github.com>
someone13574 <81528246+someone13574@users.noreply.github.com>
st-gr <38470677+st-gr@users.noreply.github.com>
stduhpf <stephduh@live.fr>
stormofice <58337328+stormofice@users.noreply.github.com>
texmex76 <40733439+texmex76@users.noreply.github.com>
thefinaldegree <thefinaldegree@gmail.com>
thewh1teagle <61390950+thewh1teagle@users.noreply.github.com>
toboil-features <160222185+toboil-features@users.noreply.github.com>
trixirt <trix@redhat.com>
ulatekh <ulatekh@yahoo.com>
undef <undefdev@gmail.com>
uvos <devnull@uvos.xyz>
uvos <philipp@uvos.xyz>
valVk <valVk@users.noreply.github.com>
venkr <venkateshrameshkumar+1@gmail.com>
vicalloy <zbirder@gmail.com>
wangshuai09 <391746016@qq.com>
woachk <24752637+woachk@users.noreply.github.com>
xctan <axunlei@gmail.com>
xdrudis <xavierdrudis@yahoo.es>
yuri@FreeBSD <yuri@FreeBSD>
zhangjixiong <code.zjx@gmail.com>
zhentaoyu <zhentao.yu@intel.com>
zhouwg <6889919+zhouwg@users.noreply.github.com>
zhouwg <zhouwg2000@gmail.com>
谢乃闻 <sienaiwun@users.noreply.github.com>
布客飞龙 <562826179@qq.com>
Артём Земляк <azemlyak@smart-consulting.ru>

View File

@@ -0,0 +1,185 @@
cmake_minimum_required(VERSION 3.5) # for add_link_options and implicit target directories.
project("whisper.cpp" C CXX)
project("whisper.cpp" VERSION 1.7.4)
include(CheckIncludeFileCXX)
set(SOVERSION 1)
#set(CMAKE_WARN_DEPRECATED YES)
set(CMAKE_WARN_UNUSED_CLI YES)
set(CMAKE_EXPORT_COMPILE_COMMANDS ON)
if (NOT XCODE AND NOT MSVC AND NOT CMAKE_BUILD_TYPE)
set(CMAKE_BUILD_TYPE Release CACHE STRING "Build type" FORCE)
set_property(CACHE CMAKE_BUILD_TYPE PROPERTY STRINGS "Debug" "Release" "MinSizeRel" "RelWithDebInfo")
endif()
# Add path to modules
list(APPEND CMAKE_MODULE_PATH "${CMAKE_CURRENT_SOURCE_DIR}/cmake/")
set(CMAKE_RUNTIME_OUTPUT_DIRECTORY ${CMAKE_BINARY_DIR}/bin)
if (CMAKE_SOURCE_DIR STREQUAL CMAKE_CURRENT_SOURCE_DIR)
set(WHISPER_STANDALONE ON)
include(git-vars)
# configure project version
configure_file(${CMAKE_SOURCE_DIR}/bindings/javascript/package-tmpl.json ${CMAKE_SOURCE_DIR}/bindings/javascript/package.json @ONLY)
else()
set(WHISPER_STANDALONE OFF)
endif()
if (EMSCRIPTEN)
set(BUILD_SHARED_LIBS_DEFAULT OFF)
option(WHISPER_WASM_SINGLE_FILE "whisper: embed WASM inside the generated whisper.js" ON)
# TODO: without these, we get the following error:
# wasm-ld: error: --shared-memory is disallowed by whisper.cpp.o because it was not compiled with 'atomics' or 'bulk-memory' features.
set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -pthread -s TOTAL_STACK=5242880")
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -pthread -s TOTAL_STACK=5242880")
else()
if (MINGW)
set(BUILD_SHARED_LIBS_DEFAULT OFF)
else()
set(BUILD_SHARED_LIBS_DEFAULT ON)
endif()
endif()
option(BUILD_SHARED_LIBS "build shared libraries" ${BUILD_SHARED_LIBS_DEFAULT})
#
# option list
#
# general
option(WHISPER_CCACHE "whisper: use ccache if available" ON)
# debug
option(WHISPER_ALL_WARNINGS "whisper: enable all compiler warnings" ON)
option(WHISPER_ALL_WARNINGS_3RD_PARTY "whisper: enable all compiler warnings in 3rd party libs" OFF)
# build
option(WHISPER_FATAL_WARNINGS "whisper: enable -Werror flag" OFF)
# sanitizers
option(WHISPER_SANITIZE_THREAD "whisper: enable thread sanitizer" OFF)
option(WHISPER_SANITIZE_ADDRESS "whisper: enable address sanitizer" OFF)
option(WHISPER_SANITIZE_UNDEFINED "whisper: enable undefined sanitizer" OFF)
# extra artifacts
option(WHISPER_BUILD_TESTS "whisper: build tests" ${WHISPER_STANDALONE})
option(WHISPER_BUILD_EXAMPLES "whisper: build examples" ${WHISPER_STANDALONE})
option(WHISPER_BUILD_SERVER "whisper: build server example" ${WHISPER_STANDALONE})
# 3rd party libs
option(WHISPER_CURL "whisper: use libcurl to download model from an URL" OFF)
option(WHISPER_SDL2 "whisper: support for libSDL2" OFF)
if (CMAKE_SYSTEM_NAME MATCHES "Linux")
option(WHISPER_FFMPEG "whisper: support building and linking with ffmpeg libs (avcodec, swresample, ...)" OFF)
endif()
option(WHISPER_COREML "whisper: enable Core ML framework" OFF)
option(WHISPER_COREML_ALLOW_FALLBACK "whisper: allow non-CoreML fallback" OFF)
option(WHISPER_OPENVINO "whisper: support for OpenVINO" OFF)
# Required for relocatable CMake package
include(${CMAKE_CURRENT_SOURCE_DIR}/cmake/build-info.cmake)
# override ggml options
set(GGML_CCACHE ${WHISPER_CCACHE})
set(GGML_SANITIZE_THREAD ${WHISPER_SANITIZE_THREAD})
set(GGML_SANITIZE_ADDRESS ${WHISPER_SANITIZE_ADDRESS})
set(GGML_SANITIZE_UNDEFINED ${WHISPER_SANITIZE_UNDEFINED})
set(GGML_ALL_WARNINGS ${WHISPER_ALL_WARNINGS})
set(GGML_FATAL_WARNINGS ${WHISPER_FATAL_WARNINGS})
# transition helpers
function (whisper_option_depr TYPE OLD NEW)
if (${OLD})
message(${TYPE} "${OLD} is deprecated and will be removed in the future.\nUse ${NEW} instead\n")
set(${NEW} ON)
endif()
endfunction()
whisper_option_depr(FATAL_ERROR WHISPER_CUBLAS GGML_CUDA)
whisper_option_depr(WARNING WHISPER_CUDA GGML_CUDA)
whisper_option_depr(WARNING WHISPER_KOMPUTE GGML_KOMPUTE)
whisper_option_depr(WARNING WHISPER_METAL GGML_METAL)
whisper_option_depr(WARNING WHISPER_METAL_EMBED_LIBRARY GGML_METAL_EMBED_LIBRARY)
whisper_option_depr(WARNING WHISPER_NATIVE GGML_NATIVE)
whisper_option_depr(WARNING WHISPER_OPENMP GGML_OPENMP)
whisper_option_depr(WARNING WHISPER_RPC GGML_RPC)
whisper_option_depr(WARNING WHISPER_SYCL GGML_SYCL)
whisper_option_depr(WARNING WHISPER_SYCL_F16 GGML_SYCL_F16)
#
# build the library
#
if (NOT TARGET ggml)
add_subdirectory(ggml)
# ... otherwise assume ggml is added by a parent CMakeLists.txt
endif()
add_subdirectory(src)
#
# install
#
include(GNUInstallDirs)
include(CMakePackageConfigHelpers)
set(WHISPER_BUILD_NUMBER ${BUILD_NUMBER})
set(WHISPER_BUILD_COMMIT ${BUILD_COMMIT})
set(WHISPER_INSTALL_VERSION ${CMAKE_PROJECT_VERSION})
set(WHISPER_INCLUDE_INSTALL_DIR ${CMAKE_INSTALL_INCLUDEDIR} CACHE PATH "Location of header files")
set(WHISPER_LIB_INSTALL_DIR ${CMAKE_INSTALL_LIBDIR} CACHE PATH "Location of library files")
set(WHISPER_BIN_INSTALL_DIR ${CMAKE_INSTALL_BINDIR} CACHE PATH "Location of binary files")
get_directory_property(WHISPER_TRANSIENT_DEFINES COMPILE_DEFINITIONS)
set_target_properties(whisper PROPERTIES PUBLIC_HEADER ${CMAKE_CURRENT_SOURCE_DIR}/include/whisper.h)
install(TARGETS whisper LIBRARY PUBLIC_HEADER)
configure_package_config_file(
${CMAKE_CURRENT_SOURCE_DIR}/cmake/whisper-config.cmake.in
${CMAKE_CURRENT_BINARY_DIR}/whisper-config.cmake
INSTALL_DESTINATION ${CMAKE_INSTALL_LIBDIR}/cmake/whisper
PATH_VARS
WHISPER_INCLUDE_INSTALL_DIR
WHISPER_LIB_INSTALL_DIR
WHISPER_BIN_INSTALL_DIR )
write_basic_package_version_file(
${CMAKE_CURRENT_BINARY_DIR}/whisper-version.cmake
VERSION ${WHISPER_INSTALL_VERSION}
COMPATIBILITY SameMajorVersion)
install(FILES ${CMAKE_CURRENT_BINARY_DIR}/whisper-config.cmake
${CMAKE_CURRENT_BINARY_DIR}/whisper-version.cmake
DESTINATION ${CMAKE_INSTALL_LIBDIR}/cmake/whisper)
configure_file(cmake/whisper.pc.in
"${CMAKE_CURRENT_BINARY_DIR}/whisper.pc"
@ONLY)
install(FILES "${CMAKE_CURRENT_BINARY_DIR}/whisper.pc"
DESTINATION lib/pkgconfig)
#
# programs, examples and tests
#
if (WHISPER_BUILD_TESTS AND NOT CMAKE_JS_VERSION)
#include(CTest)
#add_subdirectory(tests)
endif ()
if (WHISPER_BUILD_EXAMPLES)
add_subdirectory(examples)
endif()

View File

@@ -0,0 +1,21 @@
MIT License
Copyright (c) 2023-2024 The ggml authors
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

View File

@@ -0,0 +1,19 @@
// swift-tools-version:5.5
import PackageDescription
let package = Package(
name: "whisper",
platforms: [
.macOS(.v12),
.iOS(.v14),
.watchOS(.v4),
.tvOS(.v14)
],
products: [
.library(name: "whisper", targets: ["whisper"]),
],
targets: [
.systemLibrary(name: "whisper", pkgConfig: "whisper"),
]
)

View File

@@ -0,0 +1,679 @@
# whisper.cpp
![whisper.cpp](https://user-images.githubusercontent.com/1991296/235238348-05d0f6a4-da44-4900-a1de-d0707e75b763.jpeg)
[![Actions Status](https://github.com/ggerganov/whisper.cpp/workflows/CI/badge.svg)](https://github.com/ggerganov/whisper.cpp/actions)
[![License: MIT](https://img.shields.io/badge/license-MIT-blue.svg)](https://opensource.org/licenses/MIT)
[![Conan Center](https://shields.io/conan/v/whisper-cpp)](https://conan.io/center/whisper-cpp)
[![npm](https://img.shields.io/npm/v/whisper.cpp.svg)](https://www.npmjs.com/package/whisper.cpp/)
> [!NOTE]
> New maintenance roadmap: https://github.com/ggerganov/whisper.cpp/discussions/2788
Stable: [v1.7.4](https://github.com/ggerganov/whisper.cpp/releases/tag/v1.7.4) / [Roadmap | F.A.Q.](https://github.com/ggerganov/whisper.cpp/discussions/126)
High-performance inference of [OpenAI's Whisper](https://github.com/openai/whisper) automatic speech recognition (ASR) model:
- Plain C/C++ implementation without dependencies
- Apple Silicon first-class citizen - optimized via ARM NEON, Accelerate framework, Metal and [Core ML](#core-ml-support)
- AVX intrinsics support for x86 architectures
- [VSX intrinsics support for POWER architectures](#power-vsx-intrinsics)
- Mixed F16 / F32 precision
- [Integer quantization support](#quantization)
- Zero memory allocations at runtime
- [Vulkan support](#vulkan-gpu-support)
- Support for CPU-only inference
- [Efficient GPU support for NVIDIA](#nvidia-gpu-support)
- [OpenVINO Support](#openvino-support)
- [Ascend NPU Support](#ascend-npu-support)
- [C-style API](https://github.com/ggerganov/whisper.cpp/blob/master/include/whisper.h)
Supported platforms:
- [x] Mac OS (Intel and Arm)
- [x] [iOS](examples/whisper.objc)
- [x] [Android](examples/whisper.android)
- [x] [Java](bindings/java/README.md)
- [x] Linux / [FreeBSD](https://github.com/ggerganov/whisper.cpp/issues/56#issuecomment-1350920264)
- [x] [WebAssembly](examples/whisper.wasm)
- [x] Windows ([MSVC](https://github.com/ggerganov/whisper.cpp/blob/master/.github/workflows/build.yml#L117-L144) and [MinGW](https://github.com/ggerganov/whisper.cpp/issues/168))
- [x] [Raspberry Pi](https://github.com/ggerganov/whisper.cpp/discussions/166)
- [x] [Docker](https://github.com/ggerganov/whisper.cpp/pkgs/container/whisper.cpp)
The entire high-level implementation of the model is contained in [whisper.h](include/whisper.h) and [whisper.cpp](src/whisper.cpp).
The rest of the code is part of the [`ggml`](https://github.com/ggerganov/ggml) machine learning library.
Having such a lightweight implementation of the model makes it easy to integrate into different platforms and applications, as the minimal API sketch below illustrates.
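For a sense of what that integration looks like, here is a minimal sketch of calling the C-style API from C++. It is illustrative rather than canonical: it assumes a `models/ggml-base.en.bin` model downloaded as described in the quick start, and it feeds one second of silent 16 kHz mono PCM as placeholder input.
```cpp
#include "whisper.h"

#include <cstdio>
#include <vector>

int main() {
    // load the model (path is an assumption; see "Quick start" for downloading it)
    struct whisper_context * ctx = whisper_init_from_file_with_params(
        "models/ggml-base.en.bin", whisper_context_default_params());
    if (ctx == nullptr) {
        return 1;
    }

    // whisper expects 16 kHz mono float PCM; one second of silence as a placeholder
    std::vector<float> pcm(16000, 0.0f);

    // run the full encoder + decoder pipeline with default (greedy) parameters
    whisper_full_params params = whisper_full_default_params(WHISPER_SAMPLING_GREEDY);
    if (whisper_full(ctx, params, pcm.data(), (int) pcm.size()) != 0) {
        whisper_free(ctx);
        return 1;
    }

    // print the transcribed segments
    for (int i = 0; i < whisper_full_n_segments(ctx); ++i) {
        printf("%s\n", whisper_full_get_segment_text(ctx, i));
    }

    whisper_free(ctx);
    return 0;
}
```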
As an example, here is a video of running the model on an iPhone 13 device - fully offline, on-device: [whisper.objc](examples/whisper.objc)
https://user-images.githubusercontent.com/1991296/197385372-962a6dea-bca1-4d50-bf96-1d8c27b98c81.mp4
You can also easily make your own offline voice assistant application: [command](examples/command)
https://user-images.githubusercontent.com/1991296/204038393-2f846eae-c255-4099-a76d-5735c25c49da.mp4
On Apple Silicon, the inference runs fully on the GPU via Metal:
https://github.com/ggerganov/whisper.cpp/assets/1991296/c82e8f86-60dc-49f2-b048-d2fdbd6b5225
## Quick start
First clone the repository:
```bash
git clone https://github.com/ggerganov/whisper.cpp.git
```
Navigate into the directory:
```
cd whisper.cpp
```
Then, download one of the Whisper [models](models/README.md) converted in [`ggml` format](#ggml-format). For example:
```bash
sh ./models/download-ggml-model.sh base.en
```
Now build the [whisper-cli](examples/cli) example and transcribe an audio file like this:
```bash
# build the project
cmake -B build
cmake --build build --config Release
# transcribe an audio file
./build/bin/whisper-cli -f samples/jfk.wav
```
---
For a quick demo, simply run `make base.en`.
The command downloads the `base.en` model converted to custom `ggml` format and runs the inference on all `.wav` samples in the folder `samples`.
For detailed usage instructions, run: `./build/bin/whisper-cli -h`
Note that the [whisper-cli](examples/cli) example currently runs only with 16-bit WAV files, so make sure to convert your input before running the tool.
For example, you can use `ffmpeg` like this:
```bash
ffmpeg -i input.mp3 -ar 16000 -ac 1 -c:a pcm_s16le output.wav
```
## More audio samples
If you want some extra audio samples to play with, simply run:
```
make -j samples
```
This will download a few more audio files from Wikipedia and convert them to 16-bit WAV format via `ffmpeg`.
You can download and run the other models as follows:
```
make -j tiny.en
make -j tiny
make -j base.en
make -j base
make -j small.en
make -j small
make -j medium.en
make -j medium
make -j large-v1
make -j large-v2
make -j large-v3
make -j large-v3-turbo
```
## Memory usage
| Model | Disk | Mem |
| ------ | ------- | ------- |
| tiny | 75 MiB | ~273 MB |
| base | 142 MiB | ~388 MB |
| small | 466 MiB | ~852 MB |
| medium | 1.5 GiB | ~2.1 GB |
| large | 2.9 GiB | ~3.9 GB |
## POWER VSX Intrinsics
`whisper.cpp` supports POWER architectures and includes code which
significantly speeds up operation on Linux running on POWER9/10, making it
capable of faster-than-realtime transcription on an underclocked Raptor
Talos II. Ensure you have a BLAS package installed, and replace the
standard cmake setup with:
```bash
# build with GGML_BLAS defined
cmake -B build -DGGML_BLAS=1
cmake --build build --config Release
./build/bin/whisper-cli [ .. etc .. ]
```
## Quantization
`whisper.cpp` supports integer quantization of the Whisper `ggml` models.
Quantized models require less memory and disk space and, depending on the hardware, can be processed more efficiently.
Here are the steps for creating and using a quantized model:
```bash
# quantize a model with Q5_0 method
cmake -B build
cmake --build build --config Release
./build/bin/quantize models/ggml-base.en.bin models/ggml-base.en-q5_0.bin q5_0
# run the examples as usual, specifying the quantized model file
./build/bin/whisper-cli -m models/ggml-base.en-q5_0.bin ./samples/gb0.wav
```
## Core ML support
On Apple Silicon devices, the Encoder inference can be executed on the Apple Neural Engine (ANE) via Core ML. This can result in a significant
speed-up - more than 3x faster compared with CPU-only execution. Here are the instructions for generating a Core ML model and using it with `whisper.cpp`:
- Install Python dependencies needed for the creation of the Core ML model:
```bash
pip install ane_transformers
pip install openai-whisper
pip install coremltools
```
- To ensure `coremltools` operates correctly, please confirm that [Xcode](https://developer.apple.com/xcode/) is installed and execute `xcode-select --install` to install the command-line tools.
- Python 3.10 is recommended.
- macOS Sonoma (version 14) or newer is recommended, as older versions of macOS might experience transcription hallucination issues.
- [OPTIONAL] It is recommended to utilize a Python version management system, such as [Miniconda](https://docs.conda.io/en/latest/miniconda.html) for this step:
- To create an environment, use: `conda create -n py310-whisper python=3.10 -y`
- To activate the environment, use: `conda activate py310-whisper`
- Generate a Core ML model. For example, to generate a `base.en` model, use:
```bash
./models/generate-coreml-model.sh base.en
```
This will generate the folder `models/ggml-base.en-encoder.mlmodelc`
- Build `whisper.cpp` with Core ML support:
```bash
# using CMake
cmake -B build -DWHISPER_COREML=1
cmake --build build -j --config Release
```
- Run the examples as usual. For example:
```text
$ ./build/bin/whisper-cli -m models/ggml-base.en.bin -f samples/jfk.wav
...
whisper_init_state: loading Core ML model from 'models/ggml-base.en-encoder.mlmodelc'
whisper_init_state: first run on a device may take a while ...
whisper_init_state: Core ML model loaded
system_info: n_threads = 4 / 10 | AVX = 0 | AVX2 = 0 | AVX512 = 0 | FMA = 0 | NEON = 1 | ARM_FMA = 1 | F16C = 0 | FP16_VA = 1 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 0 | VSX = 0 | COREML = 1 |
...
```
The first run on a device is slow, since the ANE service compiles the Core ML model to a device-specific format.
Subsequent runs are faster.
For more information about the Core ML implementation please refer to PR [#566](https://github.com/ggerganov/whisper.cpp/pull/566).
## OpenVINO support
On platforms that support [OpenVINO](https://github.com/openvinotoolkit/openvino), the Encoder inference can be executed
on OpenVINO-supported devices including x86 CPUs and Intel GPUs (integrated & discrete).
This can result in significant speedup in encoder performance. Here are the instructions for generating the OpenVINO model and using it with `whisper.cpp`:
- First, set up a Python virtual environment and install the Python dependencies. Python 3.10 is recommended.
Windows:
```powershell
cd models
python -m venv openvino_conv_env
openvino_conv_env\Scripts\activate
python -m pip install --upgrade pip
pip install -r requirements-openvino.txt
```
Linux and macOS:
```bash
cd models
python3 -m venv openvino_conv_env
source openvino_conv_env/bin/activate
python -m pip install --upgrade pip
pip install -r requirements-openvino.txt
```
- Generate an OpenVINO encoder model. For example, to generate a `base.en` model, use:
```
python convert-whisper-to-openvino.py --model base.en
```
This will produce ggml-base.en-encoder-openvino.xml/.bin IR model files. It's recommended to relocate these to the same folder as `ggml` models, as that
is the default location that the OpenVINO extension will search at runtime.
- Build `whisper.cpp` with OpenVINO support:
Download OpenVINO package from [release page](https://github.com/openvinotoolkit/openvino/releases). The recommended version to use is [2023.0.0](https://github.com/openvinotoolkit/openvino/releases/tag/2023.0.0).
After downloading and extracting the package onto your development system, set up the required environment by sourcing the `setupvars` script. For example:
Linux:
```bash
source /path/to/l_openvino_toolkit_ubuntu22_2023.0.0.10926.b4452d56304_x86_64/setupvars.sh
```
Windows (cmd):
```powershell
C:\Path\To\w_openvino_toolkit_windows_2023.0.0.10926.b4452d56304_x86_64\setupvars.bat
```
And then build the project using cmake:
```bash
cmake -B build -DWHISPER_OPENVINO=1
cmake --build build -j --config Release
```
- Run the examples as usual. For example:
```text
$ ./build/bin/whisper-cli -m models/ggml-base.en.bin -f samples/jfk.wav
...
whisper_ctx_init_openvino_encoder: loading OpenVINO model from 'models/ggml-base.en-encoder-openvino.xml'
whisper_ctx_init_openvino_encoder: first run on a device may take a while ...
whisper_openvino_init: path_model = models/ggml-base.en-encoder-openvino.xml, device = GPU, cache_dir = models/ggml-base.en-encoder-openvino-cache
whisper_ctx_init_openvino_encoder: OpenVINO model loaded
system_info: n_threads = 4 / 8 | AVX = 1 | AVX2 = 1 | AVX512 = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | VSX = 0 | COREML = 0 | OPENVINO = 1 |
...
```
The first run on an OpenVINO device is slow, since the OpenVINO framework compiles the IR (Intermediate Representation) model to a device-specific 'blob'. This blob is
cached for subsequent runs.
For more information about the OpenVINO implementation please refer to PR [#1037](https://github.com/ggerganov/whisper.cpp/pull/1037).
## NVIDIA GPU support
With NVIDIA cards, the processing of the models is done efficiently on the GPU via cuBLAS and custom CUDA kernels.
First, make sure you have installed `cuda`: https://developer.nvidia.com/cuda-downloads
Now build `whisper.cpp` with CUDA support:
```
cmake -B build -DGGML_CUDA=1
cmake --build build -j --config Release
```
## Vulkan GPU support
Vulkan is a cross-vendor solution that lets you accelerate workloads on your GPU.
First, make sure your graphics card driver provides support for the Vulkan API.
Now build `whisper.cpp` with Vulkan support:
```
cmake -B build -DGGML_VULKAN=1
cmake --build build -j --config Release
```
## BLAS CPU support via OpenBLAS
Encoder processing can be accelerated on the CPU via OpenBLAS.
First, make sure you have installed `openblas`: https://www.openblas.net/
Now build `whisper.cpp` with OpenBLAS support:
```
cmake -B build -DGGML_BLAS=1
cmake --build build -j --config Release
```
## Ascend NPU support
Ascend NPU provides inference acceleration via [`CANN`](https://www.hiascend.com/en/software/cann) and AI cores.
First, check if your Ascend NPU device is supported:
**Verified devices**
| Ascend NPU | Status |
|:-----------------------------:|:-------:|
| Atlas 300T A2 | Support |
Then, make sure you have installed the [`CANN toolkit`](https://www.hiascend.com/en/software/cann/community). The latest version of CANN is recommended.
Now build `whisper.cpp` with CANN support:
```
cmake -B build -DGGML_CANN=1
cmake --build build -j --config Release
```
Run the inference examples as usual, for example:
```
./build/bin/whisper-cli -f samples/jfk.wav -m models/ggml-base.en.bin -t 8
```
*Notes:*
- If you have trouble with your Ascend NPU device, please create an issue with the **[CANN]** prefix/tag.
- If you run successfully with your Ascend NPU device, please help us update the `Verified devices` table.
## Docker
### Prerequisites
- Docker must be installed and running on your system.
- Create a folder to store big models & intermediate files (e.g. `/whisper/models`)
### Images
We have two Docker images available for this project:
1. `ghcr.io/ggerganov/whisper.cpp:main`: This image includes the main executable file as well as `curl` and `ffmpeg`. (platforms: `linux/amd64`, `linux/arm64`)
2. `ghcr.io/ggerganov/whisper.cpp:main-cuda`: Same as `main` but compiled with CUDA support. (platforms: `linux/amd64`)
### Usage
```shell
# download model and persist it in a local folder
docker run -it --rm \
-v path/to/models:/models \
whisper.cpp:main "./models/download-ggml-model.sh base /models"
# transcribe an audio file
docker run -it --rm \
-v path/to/models:/models \
-v path/to/audios:/audios \
whisper.cpp:main "./main -m /models/ggml-base.bin -f /audios/jfk.wav"
# transcribe an audio file in samples folder
docker run -it --rm \
-v path/to/models:/models \
whisper.cpp:main "./main -m /models/ggml-base.bin -f ./samples/jfk.wav"
```
## Installing with Conan
You can install pre-built binaries for whisper.cpp or build it from source using [Conan](https://conan.io/). Use the following command:
```
conan install --requires="whisper-cpp/[*]" --build=missing
```
For detailed instructions on how to use Conan, please refer to the [Conan documentation](https://docs.conan.io/2/).
## Limitations
- Inference only
## Real-time audio input example
This is a naive example of performing real-time inference on audio from your microphone.
The [stream](examples/stream) tool samples the audio every half second and runs the transcription continuously.
More info is available in [issue #10](https://github.com/ggerganov/whisper.cpp/issues/10).
```bash
cmake -B build -DWHISPER_SDL2=ON
cmake --build build --config Release
./build/bin/whisper-stream -m ./models/ggml-base.en.bin -t 8 --step 500 --length 5000
```
https://user-images.githubusercontent.com/1991296/194935793-76afede7-cfa8-48d8-a80f-28ba83be7d09.mp4
## Confidence color-coding
Adding the `--print-colors` argument will print the transcribed text using an experimental color coding strategy
to highlight words with high or low confidence:
```bash
./build/bin/whisper-cli -m models/ggml-base.en.bin -f samples/gb0.wav --print-colors
```
<img width="965" alt="image" src="https://user-images.githubusercontent.com/1991296/197356445-311c8643-9397-4e5e-b46e-0b4b4daa2530.png">
## Controlling the length of the generated text segments (experimental)
For example, to limit the line length to a maximum of 16 characters, simply add `-ml 16`:
```text
$ ./build/bin/whisper-cli -m ./models/ggml-base.en.bin -f ./samples/jfk.wav -ml 16
whisper_model_load: loading model from './models/ggml-base.en.bin'
...
system_info: n_threads = 4 / 10 | AVX2 = 0 | AVX512 = 0 | NEON = 1 | FP16_VA = 1 | WASM_SIMD = 0 | BLAS = 1 |
main: processing './samples/jfk.wav' (176000 samples, 11.0 sec), 4 threads, 1 processors, lang = en, task = transcribe, timestamps = 1 ...
[00:00:00.000 --> 00:00:00.850] And so my
[00:00:00.850 --> 00:00:01.590] fellow
[00:00:01.590 --> 00:00:04.140] Americans, ask
[00:00:04.140 --> 00:00:05.660] not what your
[00:00:05.660 --> 00:00:06.840] country can do
[00:00:06.840 --> 00:00:08.430] for you, ask
[00:00:08.430 --> 00:00:09.440] what you can do
[00:00:09.440 --> 00:00:10.020] for your
[00:00:10.020 --> 00:00:11.000] country.
```
## Word-level timestamp (experimental)
The `--max-len` argument can be used to obtain word-level timestamps. Simply use `-ml 1`:
```text
$ ./build/bin/whisper-cli -m ./models/ggml-base.en.bin -f ./samples/jfk.wav -ml 1
whisper_model_load: loading model from './models/ggml-base.en.bin'
...
system_info: n_threads = 4 / 10 | AVX2 = 0 | AVX512 = 0 | NEON = 1 | FP16_VA = 1 | WASM_SIMD = 0 | BLAS = 1 |
main: processing './samples/jfk.wav' (176000 samples, 11.0 sec), 4 threads, 1 processors, lang = en, task = transcribe, timestamps = 1 ...
[00:00:00.000 --> 00:00:00.320]
[00:00:00.320 --> 00:00:00.370] And
[00:00:00.370 --> 00:00:00.690] so
[00:00:00.690 --> 00:00:00.850] my
[00:00:00.850 --> 00:00:01.590] fellow
[00:00:01.590 --> 00:00:02.850] Americans
[00:00:02.850 --> 00:00:03.300] ,
[00:00:03.300 --> 00:00:04.140] ask
[00:00:04.140 --> 00:00:04.990] not
[00:00:04.990 --> 00:00:05.410] what
[00:00:05.410 --> 00:00:05.660] your
[00:00:05.660 --> 00:00:06.260] country
[00:00:06.260 --> 00:00:06.600] can
[00:00:06.600 --> 00:00:06.840] do
[00:00:06.840 --> 00:00:07.010] for
[00:00:07.010 --> 00:00:08.170] you
[00:00:08.170 --> 00:00:08.190] ,
[00:00:08.190 --> 00:00:08.430] ask
[00:00:08.430 --> 00:00:08.910] what
[00:00:08.910 --> 00:00:09.040] you
[00:00:09.040 --> 00:00:09.320] can
[00:00:09.320 --> 00:00:09.440] do
[00:00:09.440 --> 00:00:09.760] for
[00:00:09.760 --> 00:00:10.020] your
[00:00:10.020 --> 00:00:10.510] country
[00:00:10.510 --> 00:00:11.000] .
```
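At the API level, the same behavior can be approximated by setting a few fields on `whisper_full_params` before calling `whisper_full()`. A sketch, assuming segment timestamps are reported in 10 ms units (which is how the CLI formats them):
```cpp
#include "whisper.h"
#include <cstdio>

// Configure word-level segmentation, then print per-segment timestamps.
// Assumption: t0/t1 are in centiseconds (10 ms units).
void transcribe_with_word_timestamps(struct whisper_context * ctx,
                                     const float * pcm, int n_samples) {
    whisper_full_params params = whisper_full_default_params(WHISPER_SAMPLING_GREEDY);
    params.token_timestamps = true; // enable experimental token-level timestamps
    params.max_len          = 1;    // ~one word per segment, like `-ml 1`
    params.split_on_word    = true; // split at word boundaries rather than tokens

    if (whisper_full(ctx, params, pcm, n_samples) != 0) {
        return;
    }
    for (int i = 0; i < whisper_full_n_segments(ctx); ++i) {
        printf("[%6.2f --> %6.2f] %s\n",
               whisper_full_get_segment_t0(ctx, i) / 100.0,
               whisper_full_get_segment_t1(ctx, i) / 100.0,
               whisper_full_get_segment_text(ctx, i));
    }
}
```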
## Speaker segmentation via tinydiarize (experimental)
More information about this approach is available here: https://github.com/ggerganov/whisper.cpp/pull/1058
Sample usage:
```bash
# download a tinydiarize compatible model
./models/download-ggml-model.sh small.en-tdrz
# run as usual, adding the "-tdrz" command-line argument
./build/bin/whisper-cli -f ./samples/a13.wav -m ./models/ggml-small.en-tdrz.bin -tdrz
...
main: processing './samples/a13.wav' (480000 samples, 30.0 sec), 4 threads, 1 processors, lang = en, task = transcribe, tdrz = 1, timestamps = 1 ...
...
[00:00:00.000 --> 00:00:03.800] Okay Houston, we've had a problem here. [SPEAKER_TURN]
[00:00:03.800 --> 00:00:06.200] This is Houston. Say again please. [SPEAKER_TURN]
[00:00:06.200 --> 00:00:08.260] Uh Houston we've had a problem.
[00:00:08.260 --> 00:00:11.320] We've had a main beam up on a volt. [SPEAKER_TURN]
[00:00:11.320 --> 00:00:13.820] Roger main beam interval. [SPEAKER_TURN]
[00:00:13.820 --> 00:00:15.100] Uh uh [SPEAKER_TURN]
[00:00:15.100 --> 00:00:18.020] So okay stand, by thirteen we're looking at it. [SPEAKER_TURN]
[00:00:18.020 --> 00:00:25.740] Okay uh right now uh Houston the uh voltage is uh is looking good um.
[00:00:27.620 --> 00:00:29.940] And we had a a pretty large bank or so.
```
## Karaoke-style movie generation (experimental)
The [whisper-cli](examples/cli) example provides support for output of karaoke-style movies, where the
currently pronounced word is highlighted. Use the `-wts` argument and run the generated bash script.
This requires `ffmpeg` to be installed.
Here are a few _"typical"_ examples:
```bash
./build/bin/whisper-cli -m ./models/ggml-base.en.bin -f ./samples/jfk.wav -owts
source ./samples/jfk.wav.wts
ffplay ./samples/jfk.wav.mp4
```
https://user-images.githubusercontent.com/1991296/199337465-dbee4b5e-9aeb-48a3-b1c6-323ac4db5b2c.mp4
---
```bash
./build/bin/whisper-cli -m ./models/ggml-base.en.bin -f ./samples/mm0.wav -owts
source ./samples/mm0.wav.wts
ffplay ./samples/mm0.wav.mp4
```
https://user-images.githubusercontent.com/1991296/199337504-cc8fd233-0cb7-4920-95f9-4227de3570aa.mp4
---
```bash
./build/bin/whisper-cli -m ./models/ggml-base.en.bin -f ./samples/gb0.wav -owts
source ./samples/gb0.wav.wts
ffplay ./samples/gb0.wav.mp4
```
https://user-images.githubusercontent.com/1991296/199337538-b7b0c7a3-2753-4a88-a0cd-f28a317987ba.mp4
---
## Video comparison of different models
Use the [scripts/bench-wts.sh](https://github.com/ggerganov/whisper.cpp/blob/master/scripts/bench-wts.sh) script to generate a video in the following format:
```bash
./scripts/bench-wts.sh samples/jfk.wav
ffplay ./samples/jfk.wav.all.mp4
```
https://user-images.githubusercontent.com/1991296/223206245-2d36d903-cf8e-4f09-8c3b-eb9f9c39d6fc.mp4
---
## Benchmarks
In order to have an objective comparison of inference performance across different system configurations,
use the [whisper-bench](examples/bench) tool. The tool simply runs the Encoder part of the model and prints how much time it
took to execute it. The results are summarized in the following GitHub issue:
[Benchmark results](https://github.com/ggerganov/whisper.cpp/issues/89)
Additionally, a script to run whisper.cpp with different models and audio files is provided: [bench.py](scripts/bench.py).
You can run it with the following command; by default it will run against any standard model in the `models` folder.
```bash
python3 scripts/bench.py -f samples/jfk.wav -t 2,4,8 -p 1,2
```
It is written in Python with the intention of being easy to modify and extend for your benchmarking use case.
It outputs a CSV file with the results of the benchmarking.
## `ggml` format
The original models are converted to a custom binary format. This allows packing everything needed into a single file:
- model parameters
- mel filters
- vocabulary
- weights
You can download the converted models using the [models/download-ggml-model.sh](models/download-ggml-model.sh) script
or manually from here:
- https://huggingface.co/ggerganov/whisper.cpp
- https://ggml.ggerganov.com
For more details, see the conversion script [models/convert-pt-to-ggml.py](models/convert-pt-to-ggml.py) or [models/README.md](models/README.md).
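As a quick illustration of the format, a `ggml` model file can be sanity-checked by reading its leading magic bytes. The value `0x67676d6c` below matches what `whisper.cpp` expects at the time of writing; treat it as an assumption rather than a stable contract:
```cpp
#include <cstdint>
#include <cstdio>

// Minimal check that a file looks like a ggml model
// (assumption: the file starts with the little-endian magic 0x67676d6c).
int main(int argc, char ** argv) {
    if (argc < 2) {
        fprintf(stderr, "usage: %s model.bin\n", argv[0]);
        return 1;
    }
    FILE * f = fopen(argv[1], "rb");
    if (!f) {
        perror("fopen");
        return 1;
    }
    uint32_t magic = 0;
    if (fread(&magic, sizeof(magic), 1, f) != 1) {
        fclose(f);
        return 1;
    }
    fclose(f);
    printf("magic = 0x%08x (%s)\n", magic, magic == 0x67676d6c ? "ggml" : "unknown");
    return 0;
}
```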
## [Bindings](https://github.com/ggerganov/whisper.cpp/discussions/categories/bindings)
- [x] Rust: [tazz4843/whisper-rs](https://github.com/tazz4843/whisper-rs) | [#310](https://github.com/ggerganov/whisper.cpp/discussions/310)
- [x] JavaScript: [bindings/javascript](bindings/javascript) | [#309](https://github.com/ggerganov/whisper.cpp/discussions/309)
  - React Native (iOS / Android): [whisper.rn](https://github.com/mybigday/whisper.rn)
- [x] Go: [bindings/go](bindings/go) | [#312](https://github.com/ggerganov/whisper.cpp/discussions/312)
- [x] Java:
  - [GiviMAD/whisper-jni](https://github.com/GiviMAD/whisper-jni)
- [x] Ruby: [bindings/ruby](bindings/ruby) | [#507](https://github.com/ggerganov/whisper.cpp/discussions/507)
- [x] Objective-C / Swift: [ggerganov/whisper.spm](https://github.com/ggerganov/whisper.spm) | [#313](https://github.com/ggerganov/whisper.cpp/discussions/313)
  - [exPHAT/SwiftWhisper](https://github.com/exPHAT/SwiftWhisper)
- [x] .NET: | [#422](https://github.com/ggerganov/whisper.cpp/discussions/422)
  - [sandrohanea/whisper.net](https://github.com/sandrohanea/whisper.net)
  - [NickDarvey/whisper](https://github.com/NickDarvey/whisper)
- [x] Python: | [#9](https://github.com/ggerganov/whisper.cpp/issues/9)
  - [stlukey/whispercpp.py](https://github.com/stlukey/whispercpp.py) (Cython)
  - [AIWintermuteAI/whispercpp](https://github.com/AIWintermuteAI/whispercpp) (Updated fork of aarnphm/whispercpp)
  - [aarnphm/whispercpp](https://github.com/aarnphm/whispercpp) (Pybind11)
  - [abdeladim-s/pywhispercpp](https://github.com/abdeladim-s/pywhispercpp) (Pybind11)
- [x] R: [bnosac/audio.whisper](https://github.com/bnosac/audio.whisper)
- [x] Unity: [macoron/whisper.unity](https://github.com/Macoron/whisper.unity)
## Examples
There are various examples of using the library for different projects in the [examples](examples) folder.
Some of the examples are even ported to run in the browser using WebAssembly. Check them out!
| Example | Web | Description |
| --------------------------------------------------- | ------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------- |
| [whisper-cli](examples/cli) | [whisper.wasm](examples/whisper.wasm) | Tool for translating and transcribing audio using Whisper |
| [whisper-bench](examples/bench) | [bench.wasm](examples/bench.wasm) | Benchmark the performance of Whisper on your machine |
| [whisper-stream](examples/stream) | [stream.wasm](examples/stream.wasm) | Real-time transcription of raw microphone capture |
| [whisper-command](examples/command) | [command.wasm](examples/command.wasm) | Basic voice assistant example for receiving voice commands from the mic |
| [whisper-server](examples/server) | | HTTP transcription server with OAI-like API |
| [whisper-talk-llama](examples/talk-llama) | | Talk with a LLaMA bot |
| [whisper.objc](examples/whisper.objc) | | iOS mobile application using whisper.cpp |
| [whisper.swiftui](examples/whisper.swiftui) | | SwiftUI iOS / macOS application using whisper.cpp |
| [whisper.android](examples/whisper.android) | | Android mobile application using whisper.cpp |
| [whisper.nvim](examples/whisper.nvim) | | Speech-to-text plugin for Neovim |
| [generate-karaoke.sh](examples/generate-karaoke.sh) | | Helper script to easily [generate a karaoke video](https://youtu.be/uj7hVta4blM) of raw audio capture |
| [livestream.sh](examples/livestream.sh) | | [Livestream audio transcription](https://github.com/ggerganov/whisper.cpp/issues/185) |
| [yt-wsp.sh](examples/yt-wsp.sh) | | Download + transcribe and/or translate any VOD [(original)](https://gist.github.com/DaniruKun/96f763ec1a037cc92fe1a059b643b818) |
| [wchess](examples/wchess) | [wchess.wasm](examples/wchess) | Voice-controlled chess |
## [Discussions](https://github.com/ggerganov/whisper.cpp/discussions)
If you have any kind of feedback about this project feel free to use the Discussions section and open a new topic.
You can use the [Show and tell](https://github.com/ggerganov/whisper.cpp/discussions/categories/show-and-tell) category
to share your own projects that use `whisper.cpp`. If you have a question, make sure to check the
[Frequently asked questions (#126)](https://github.com/ggerganov/whisper.cpp/discussions/126) discussion.

View File

@@ -0,0 +1,249 @@
# whisper.cpp for SYCL
[Background](#background)
[OS](#os)
[Intel GPU](#intel-gpu)
[Linux](#linux)
[Environment Variable](#environment-variable)
[Known Issue](#known-issue)
[Todo](#todo)
## Background
SYCL is a higher-level programming model to improve programming productivity on various hardware accelerators, such as CPUs, GPUs, and FPGAs. It is a single-source embedded domain-specific language based on pure C++17.
oneAPI is a specification that is open and standards-based, supporting multiple architecture types including but not limited to GPU, CPU, and FPGA. The spec has both direct programming and API-based programming paradigms.
Intel uses SYCL as the direct programming language to support CPUs, GPUs, and FPGAs.
To avoid re-inventing the wheel, this code refers to other code paths in llama.cpp (such as OpenBLAS, cuBLAS, CLBlast). We use the open-source tool [SYCLomatic](https://github.com/oneapi-src/SYCLomatic) (commercial release: [Intel DPC++ Compatibility Tool](https://www.intel.com/content/www/us/en/developer/tools/oneapi/dpc-compatibility-tool.html)) to migrate to SYCL.
whisper.cpp for SYCL is used to support Intel GPUs.
For Intel CPUs, we recommend using whisper.cpp for x86 (Intel MKL build).
## OS
|OS|Status|Verified|
|-|-|-|
|Linux|Support|Ubuntu 22.04|
|Windows|Ongoing| |
## Intel GPU
|Intel GPU| Status | Verified Model|
|-|-|-|
|Intel Data Center Max Series| Support| Max 1550|
|Intel Data Center Flex Series| Support| Flex 170|
|Intel Arc Series| Support| Arc 770|
|Intel built-in Arc GPU| Support| built-in Arc GPU in Meteor Lake|
|Intel iGPU| Support| iGPU in i5-1250P, i7-1165G7|
## Linux
### Setup Environment
1. Install Intel GPU driver.
a. Please install the Intel GPU driver by following the official guide: [Install GPU Drivers](https://dgpu-docs.intel.com/driver/installation.html).
Note: for iGPUs, please install the client GPU driver.
b. Add your user to the `video` and `render` groups:
```
sudo usermod -aG render username
sudo usermod -aG video username
```
Note: log out and log back in for the change to take effect.
c. Check
```
sudo apt install clinfo
sudo clinfo -l
```
Output (example):
```
Platform #0: Intel(R) OpenCL Graphics
`-- Device #0: Intel(R) Arc(TM) A770 Graphics
Platform #0: Intel(R) OpenCL HD Graphics
`-- Device #0: Intel(R) Iris(R) Xe Graphics [0x9a49]
```
2. Install the Intel oneAPI Base Toolkit.
a. Please follow the procedure in [Get the Intel oneAPI Base Toolkit](https://www.intel.com/content/www/us/en/developer/tools/oneapi/base-toolkit.html).
We recommend installing to the default folder: **/opt/intel/oneapi**.
The following guide uses the default folder as an example. If you use another folder, please adjust the paths in the guide accordingly.
b. Check
```
source /opt/intel/oneapi/setvars.sh
sycl-ls
```
There should be one or more Level-Zero devices, such as **[ext_oneapi_level_zero:gpu:0]**.
Output (example):
```
[opencl:acc:0] Intel(R) FPGA Emulation Platform for OpenCL(TM), Intel(R) FPGA Emulation Device OpenCL 1.2 [2023.16.10.0.17_160000]
[opencl:cpu:1] Intel(R) OpenCL, 13th Gen Intel(R) Core(TM) i7-13700K OpenCL 3.0 (Build 0) [2023.16.10.0.17_160000]
[opencl:gpu:2] Intel(R) OpenCL Graphics, Intel(R) Arc(TM) A770 Graphics OpenCL 3.0 NEO [23.30.26918.50]
[ext_oneapi_level_zero:gpu:0] Intel(R) Level-Zero, Intel(R) Arc(TM) A770 Graphics 1.3 [1.3.26918]
```
3. Build locally:
```
mkdir -p build
cd build
source /opt/intel/oneapi/setvars.sh
#for FP16
#cmake .. -DWHISPER_SYCL=ON -DCMAKE_C_COMPILER=icx -DCMAKE_CXX_COMPILER=icpx -DWHISPER_SYCL_F16=ON
#for FP32
cmake .. -DWHISPER_SYCL=ON -DCMAKE_C_COMPILER=icx -DCMAKE_CXX_COMPILER=icpx
#build example/main only
#cmake --build . --config Release --target main
#build all binary
cmake --build . --config Release -v
```
or
```
./examples/sycl/build.sh
```
Note:
- By default, it builds all binaries, which takes more time. To reduce build time, we recommend building **example/main** only.
### Run
1. Put the model file in the **models** folder.
2. Enable the oneAPI runtime environment:
```
source /opt/intel/oneapi/setvars.sh
```
3. List device IDs.
Run without parameters:
```
./build/bin/ls-sycl-device
or
./build/bin/main
```
Check the IDs in the startup log, for example:
```
found 4 SYCL devices:
Device 0: Intel(R) Arc(TM) A770 Graphics, compute capability 1.3,
max compute_units 512, max work group size 1024, max sub group size 32, global mem size 16225243136
Device 1: Intel(R) FPGA Emulation Device, compute capability 1.2,
max compute_units 24, max work group size 67108864, max sub group size 64, global mem size 67065057280
Device 2: 13th Gen Intel(R) Core(TM) i7-13700K, compute capability 3.0,
max compute_units 24, max work group size 8192, max sub group size 64, global mem size 67065057280
Device 3: Intel(R) Arc(TM) A770 Graphics, compute capability 3.0,
max compute_units 512, max work group size 1024, max sub group size 32, global mem size 16225243136
```
|Attribute|Note|
|-|-|
|compute capability 1.3|Level-Zero runtime, recommended|
|compute capability 3.0|OpenCL runtime, slower than Level-Zero in most cases|
4. Set the device ID and execute whisper.cpp.
Set device ID = 0 via **GGML_SYCL_DEVICE=0**:
```
GGML_SYCL_DEVICE=0 ./build/bin/main -m models/ggml-base.en.bin -f samples/jfk.wav
```
or run via the script:
```
./examples/sycl/run_whisper.sh
```
5. Check the device ID in the output, for example:
```
Using device **0** (Intel(R) Arc(TM) A770 Graphics) as main device
```
## Environment Variable
#### Build
|Name|Value|Function|
|-|-|-|
|WHISPER_SYCL|ON (mandatory)|Enable the SYCL code path. Mandatory for both FP32 and FP16 builds.|
|WHISPER_SYCL_F16|ON (optional)|Enable the FP16 SYCL build. For FP32, do not set it.|
|CMAKE_C_COMPILER|icx|Use the icx compiler for the SYCL code path|
|CMAKE_CXX_COMPILER|icpx|Use the icpx compiler for the SYCL code path|
#### Running
|Name|Value|Function|
|-|-|-|
|GGML_SYCL_DEVICE|0 (default) or 1|Select the device ID to use. Device IDs are listed in the default startup output|
|GGML_SYCL_DEBUG|0 (default) or 1|Enable debug logging via the GGML_SYCL_DEBUG macro|
## Known Issues
- Error: `error while loading shared libraries: libsycl.so.7: cannot open shared object file: No such file or directory`.
Cause: the oneAPI runtime environment is not enabled.
Fix: install the oneAPI Base Toolkit and enable the environment with `source /opt/intel/oneapi/setvars.sh`.
- Hang during startup.
By default, mmap is used to read the model file and copy it to the GPU. On some systems the memcpy can misbehave and block.
Solution: add the **--no-mmap** flag.
## Todo
- Support building on Windows.
- Support multiple GPUs.


@@ -0,0 +1,5 @@
module whisper [system] {
header "whisper.h"
link "whisper"
export *
}


@@ -0,0 +1,4 @@
#pragma once
#include <whisper.h>


@@ -0,0 +1,28 @@
name: Close inactive issues
on:
schedule:
- cron: "42 0 * * *"
# Fine-grained permission
# https://docs.github.com/en/actions/security-for-github-actions/security-guides/automatic-token-authentication#modifying-the-permissions-for-the-github_token
permissions:
issues: write
jobs:
close-issues:
runs-on: ubuntu-latest
permissions:
issues: write
pull-requests: write
steps:
- uses: actions/stale@v5
with:
exempt-issue-labels: "refactor,help wanted,good first issue,research,bug,roadmap"
days-before-issue-stale: 30
days-before-issue-close: 14
stale-issue-label: "stale"
close-issue-message: "This issue was closed because it has been inactive for 14 days since being marked as stale."
days-before-pr-stale: -1
days-before-pr-close: -1
operations-per-run: 10000
repo-token: ${{ secrets.GITHUB_TOKEN }}


@@ -0,0 +1,16 @@
# Set the default compile features and properties for a target.
if (NOT TARGET)
message(FATAL_ERROR "TARGET not set before including DefaultTargetOptions")
endif()
target_compile_features(${TARGET}
PRIVATE
cxx_std_11
)
set_target_properties(${TARGET}
PROPERTIES
EXPORT_COMPILE_COMMANDS ON
RUNTIME_OUTPUT_DIRECTORY "${CMAKE_BINARY_DIR}/bin"
)


@@ -0,0 +1,163 @@
# From
# https://github.com/snikulov/cmake-modules/blob/master/FindFFmpeg.cmake
#
# vim: ts=2 sw=2
# - Try to find the required ffmpeg components(default: AVFORMAT, AVUTIL, AVCODEC)
#
# Once done this will define
# FFMPEG_FOUND - System has the all required components.
# FFMPEG_INCLUDE_DIRS - Include directory necessary for using the required components headers.
# FFMPEG_LIBRARIES - Link these to use the required ffmpeg components.
# FFMPEG_DEFINITIONS - Compiler switches required for using the required ffmpeg components.
#
# For each of the components it will additionally set.
# - AVCODEC
# - AVDEVICE
# - AVFORMAT
# - AVFILTER
# - AVUTIL
# - POSTPROC
# - SWSCALE
# the following variables will be defined
# <component>_FOUND - System has <component>
# <component>_INCLUDE_DIRS - Include directory necessary for using the <component> headers
# <component>_LIBRARIES - Link these to use <component>
# <component>_DEFINITIONS - Compiler switches required for using <component>
# <component>_VERSION - The components version
#
# Copyright (c) 2006, Matthias Kretz, <kretz@kde.org>
# Copyright (c) 2008, Alexander Neundorf, <neundorf@kde.org>
# Copyright (c) 2011, Michael Jansen, <kde@michael-jansen.biz>
#
# Redistribution and use is allowed according to the terms of the BSD license.
# For details see the accompanying COPYING-CMAKE-SCRIPTS file.
include(FindPackageHandleStandardArgs)
# The default components were taken from a survey over other FindFFMPEG.cmake files
if (NOT FFmpeg_FIND_COMPONENTS)
set(FFmpeg_FIND_COMPONENTS AVFORMAT AVCODEC AVUTIL SWRESAMPLE)
endif()
#
### Macro: set_component_found
#
# Marks the given component as found if both *_LIBRARIES AND *_INCLUDE_DIRS is present.
#
macro(set_component_found _component )
if (${_component}_LIBRARIES AND ${_component}_INCLUDE_DIRS)
message(DEBUG " - ${_component} found.")
set(${_component}_FOUND TRUE)
else ()
message(DEBUG " - ${_component} not found.")
endif ()
endmacro()
#
### Macro: find_component
#
# Checks for the given component by invoking pkgconfig and then looking up the libraries and
# include directories.
#
macro(find_component _component _pkgconfig _library _header)
if (NOT WIN32)
# use pkg-config to get the directories and then use these values
# in the FIND_PATH() and FIND_LIBRARY() calls
find_package(PkgConfig)
if (PKG_CONFIG_FOUND)
pkg_check_modules(PC_${_component} ${_pkgconfig})
message(STATUS "Pkgconfig found: ${PC_${_component}_INCLUDEDIR}")
message(STATUS "Pkgconfig found: ${PC_${_component}_INCLUDE_DIRS}")
message(STATUS "${PC_${_component}_CFLAGS}")
endif ()
endif (NOT WIN32)
find_path(${_component}_INCLUDE_DIRS ${_header}
HINTS
${PC_${_component}_INCLUDEDIR}
${PC_${_component}_INCLUDE_DIRS}
PATH_SUFFIXES
ffmpeg
)
# CMake's default is to search first for shared libraries and then for static libraries.
# Todo later: add option to prefer static libs over dynamic:
find_library(${_component}_LIBRARIES NAMES ${_library} lib${_library}.a
HINTS
${PC_${_component}_LIBDIR}
${PC_${_component}_LIBRARY_DIRS}
)
set(${_component}_DEFINITIONS ${PC_${_component}_CFLAGS_OTHER} CACHE STRING "The ${_component} CFLAGS.")
set(${_component}_VERSION ${PC_${_component}_VERSION} CACHE STRING "The ${_component} version number.")
set_component_found(${_component})
mark_as_advanced(
${_component}_INCLUDE_DIRS
${_component}_LIBRARIES
${_component}_DEFINITIONS
${_component}_VERSION)
endmacro()
# Check for cached results. If there are, skip the costly part.
if (NOT FFMPEG_LIBRARIES)
# Check for all possible components.
find_component(AVCODEC libavcodec avcodec libavcodec/avcodec.h)
find_component(AVFORMAT libavformat avformat libavformat/avformat.h)
find_component(AVDEVICE libavdevice avdevice libavdevice/avdevice.h)
#find_component(AVRESAMPLE libavresample avresample libavresample/avresample.h) # old name for swresample
find_component(AVUTIL libavutil avutil libavutil/avutil.h)
find_component(AVFILTER libavfilter avfilter libavfilter/avfilter.h)
find_component(SWSCALE libswscale swscale libswscale/swscale.h)
find_component(POSTPROC libpostproc postproc libpostproc/postprocess.h)
find_component(SWRESAMPLE libswresample swresample libswresample/swresample.h)
# Check if the required components were found and add their stuff to the FFMPEG_* vars.
foreach (_component ${FFmpeg_FIND_COMPONENTS})
if (${_component}_FOUND)
# message(STATUS "Required component ${_component} present.")
set(FFMPEG_LIBRARIES ${FFMPEG_LIBRARIES} ${${_component}_LIBRARIES})
set(FFMPEG_DEFINITIONS ${FFMPEG_DEFINITIONS} ${${_component}_DEFINITIONS})
list(APPEND FFMPEG_INCLUDE_DIRS ${${_component}_INCLUDE_DIRS})
else ()
# message(STATUS "Required component ${_component} missing.")
endif ()
endforeach ()
# Build the include path with duplicates removed.
if (FFMPEG_INCLUDE_DIRS)
list(REMOVE_DUPLICATES FFMPEG_INCLUDE_DIRS)
endif ()
# cache the vars.
set(FFMPEG_INCLUDE_DIRS ${FFMPEG_INCLUDE_DIRS} CACHE STRING "The FFmpeg include directories." FORCE)
set(FFMPEG_LIBRARIES ${FFMPEG_LIBRARIES} CACHE STRING "The FFmpeg libraries." FORCE)
set(FFMPEG_DEFINITIONS ${FFMPEG_DEFINITIONS} CACHE STRING "The FFmpeg cflags." FORCE)
mark_as_advanced(FFMPEG_INCLUDE_DIRS
FFMPEG_LIBRARIES
FFMPEG_DEFINITIONS)
endif ()
# Now set the noncached _FOUND vars for the components.
# whisper.cpp does not need SWSCALE
foreach (_component AVCODEC AVDEVICE AVFORMAT AVRESAMPLE AVUTIL POSTPROC)
set_component_found(${_component})
endforeach ()
# Compile the list of required vars
set(_FFmpeg_REQUIRED_VARS FFMPEG_LIBRARIES FFMPEG_INCLUDE_DIRS)
foreach (_component ${FFmpeg_FIND_COMPONENTS})
list(APPEND _FFmpeg_REQUIRED_VARS ${_component}_LIBRARIES ${_component}_INCLUDE_DIRS)
endforeach ()
# Give a nice error message if some of the required vars are missing.
find_package_handle_standard_args(FFmpeg DEFAULT_MSG ${_FFmpeg_REQUIRED_VARS})


@@ -0,0 +1,60 @@
set(BUILD_NUMBER 0)
set(BUILD_COMMIT "unknown")
set(BUILD_COMPILER "unknown")
set(BUILD_TARGET "unknown")
# Look for git
find_package(Git)
if(NOT Git_FOUND)
find_program(GIT_EXECUTABLE NAMES git git.exe)
if(GIT_EXECUTABLE)
set(Git_FOUND TRUE)
message(STATUS "Found Git: ${GIT_EXECUTABLE}")
else()
message(WARNING "Git not found. Build info will not be accurate.")
endif()
endif()
# Get the commit count and hash
if(Git_FOUND)
execute_process(
COMMAND ${GIT_EXECUTABLE} rev-parse --short HEAD
WORKING_DIRECTORY ${CMAKE_CURRENT_SOURCE_DIR}
OUTPUT_VARIABLE HEAD
OUTPUT_STRIP_TRAILING_WHITESPACE
RESULT_VARIABLE RES
)
if (RES EQUAL 0)
set(BUILD_COMMIT ${HEAD})
endif()
execute_process(
COMMAND ${GIT_EXECUTABLE} rev-list --count HEAD
WORKING_DIRECTORY ${CMAKE_CURRENT_SOURCE_DIR}
OUTPUT_VARIABLE COUNT
OUTPUT_STRIP_TRAILING_WHITESPACE
RESULT_VARIABLE RES
)
if (RES EQUAL 0)
set(BUILD_NUMBER ${COUNT})
endif()
endif()
if(MSVC)
set(BUILD_COMPILER "${CMAKE_C_COMPILER_ID} ${CMAKE_C_COMPILER_VERSION}")
set(BUILD_TARGET ${CMAKE_VS_PLATFORM_NAME})
add_compile_options("$<$<COMPILE_LANGUAGE:C>:/utf-8>")
add_compile_options("$<$<COMPILE_LANGUAGE:CXX>:/utf-8>")
else()
execute_process(
COMMAND sh -c "$@ --version | head -1" _ ${CMAKE_C_COMPILER}
OUTPUT_VARIABLE OUT
OUTPUT_STRIP_TRAILING_WHITESPACE
)
set(BUILD_COMPILER ${OUT})
execute_process(
COMMAND ${CMAKE_C_COMPILER} -dumpmachine
OUTPUT_VARIABLE OUT
OUTPUT_STRIP_TRAILING_WHITESPACE
)
set(BUILD_TARGET ${OUT})
endif()


@@ -0,0 +1,22 @@
find_package(Git)
# the commit's SHA1
execute_process(COMMAND
"${GIT_EXECUTABLE}" describe --match=NeVeRmAtCh --always --abbrev=8
WORKING_DIRECTORY "${CMAKE_SOURCE_DIR}"
OUTPUT_VARIABLE GIT_SHA1
ERROR_QUIET OUTPUT_STRIP_TRAILING_WHITESPACE)
# the date of the commit
execute_process(COMMAND
"${GIT_EXECUTABLE}" log -1 --format=%ad --date=local
WORKING_DIRECTORY "${CMAKE_SOURCE_DIR}"
OUTPUT_VARIABLE GIT_DATE
ERROR_QUIET OUTPUT_STRIP_TRAILING_WHITESPACE)
# the subject of the commit
execute_process(COMMAND
"${GIT_EXECUTABLE}" log -1 --format=%s
WORKING_DIRECTORY "${CMAKE_SOURCE_DIR}"
OUTPUT_VARIABLE GIT_COMMIT_SUBJECT
ERROR_QUIET OUTPUT_STRIP_TRAILING_WHITESPACE)


@@ -0,0 +1,65 @@
set(WHISPER_VERSION @WHISPER_INSTALL_VERSION@)
set(WHISPER_BUILD_COMMIT @WHISPER_BUILD_COMMIT@)
set(WHISPER_BUILD_NUMBER @WHISPER_BUILD_NUMBER@)
set(WHISPER_SHARED_LIB @BUILD_SHARED_LIBS@)
set(GGML_BLAS @GGML_BLAS@)
set(GGML_CUDA @GGML_CUDA@)
set(GGML_METAL @GGML_METAL@)
set(GGML_HIPBLAS @GGML_HIPBLAS@)
set(GGML_ACCELERATE @GGML_ACCELERATE@)
@PACKAGE_INIT@
set_and_check(WHISPER_INCLUDE_DIR "@PACKAGE_WHISPER_INCLUDE_INSTALL_DIR@")
set_and_check(WHISPER_LIB_DIR "@PACKAGE_WHISPER_LIB_INSTALL_DIR@")
set_and_check(WHISPER_BIN_DIR "@PACKAGE_WHISPER_BIN_INSTALL_DIR@")
# Ensure transient dependencies satisfied
find_package(Threads REQUIRED)
if (APPLE AND GGML_ACCELERATE)
find_library(ACCELERATE_FRAMEWORK Accelerate REQUIRED)
endif()
if (GGML_BLAS)
find_package(BLAS REQUIRED)
endif()
if (GGML_CUDA)
find_package(CUDAToolkit REQUIRED)
endif()
if (GGML_METAL)
find_library(FOUNDATION_LIBRARY Foundation REQUIRED)
find_library(METAL_FRAMEWORK Metal REQUIRED)
find_library(METALKIT_FRAMEWORK MetalKit REQUIRED)
endif()
if (GGML_HIPBLAS)
find_package(hip REQUIRED)
find_package(hipblas REQUIRED)
find_package(rocblas REQUIRED)
endif()
find_library(whisper_LIBRARY whisper
REQUIRED
HINTS ${WHISPER_LIB_DIR})
set(_whisper_link_deps "Threads::Threads" "@WHISPER_EXTRA_LIBS@")
set(_whisper_transient_defines "@WHISPER_TRANSIENT_DEFINES@")
add_library(whisper UNKNOWN IMPORTED)
set_target_properties(whisper
PROPERTIES
INTERFACE_INCLUDE_DIRECTORIES "${WHISPER_INCLUDE_DIR}"
INTERFACE_LINK_LIBRARIES "${_whisper_link_deps}"
INTERFACE_COMPILE_DEFINITIONS "${_whisper_transient_defines}"
IMPORTED_LINK_INTERFACE_LANGUAGES "CXX"
IMPORTED_LOCATION "${whisper_LIBRARY}"
INTERFACE_COMPILE_FEATURES cxx_std_11
POSITION_INDEPENDENT_CODE ON )
check_required_components(whisper)


@@ -0,0 +1,10 @@
prefix=@CMAKE_INSTALL_PREFIX@
exec_prefix=${prefix}
libdir=${exec_prefix}/lib
includedir=${prefix}/include
Name: whisper
Description: Port of OpenAI's Whisper model in C/C++
Version: @PROJECT_VERSION@
Libs: -L${libdir} -lggml -lggml-base -lwhisper
Cflags: -I${includedir}


@@ -0,0 +1 @@
src/ggml-metal-embed.metal


@@ -0,0 +1,343 @@
cmake_minimum_required(VERSION 3.14) # for add_link_options and implicit target directories.
project("ggml" C CXX)
include(CheckIncludeFileCXX)
set(CMAKE_EXPORT_COMPILE_COMMANDS ON)
if (NOT XCODE AND NOT MSVC AND NOT CMAKE_BUILD_TYPE)
set(CMAKE_BUILD_TYPE Release CACHE STRING "Build type" FORCE)
set_property(CACHE CMAKE_BUILD_TYPE PROPERTY STRINGS "Debug" "Release" "MinSizeRel" "RelWithDebInfo")
endif()
if (CMAKE_SOURCE_DIR STREQUAL CMAKE_CURRENT_SOURCE_DIR)
set(GGML_STANDALONE ON)
set(CMAKE_RUNTIME_OUTPUT_DIRECTORY ${CMAKE_BINARY_DIR}/bin)
# configure project version
# TODO
else()
set(GGML_STANDALONE OFF)
endif()
if (EMSCRIPTEN)
set(BUILD_SHARED_LIBS_DEFAULT OFF)
option(GGML_WASM_SINGLE_FILE "ggml: embed WASM inside the generated ggml.js" ON)
else()
if (MINGW)
set(BUILD_SHARED_LIBS_DEFAULT OFF)
else()
set(BUILD_SHARED_LIBS_DEFAULT ON)
endif()
endif()
# remove the lib prefix on win32 mingw
if (WIN32)
set(CMAKE_STATIC_LIBRARY_PREFIX "")
set(CMAKE_SHARED_LIBRARY_PREFIX "")
set(CMAKE_SHARED_MODULE_PREFIX "")
endif()
option(BUILD_SHARED_LIBS "ggml: build shared libraries" ${BUILD_SHARED_LIBS_DEFAULT})
option(GGML_BACKEND_DL "ggml: build backends as dynamic libraries (requires BUILD_SHARED_LIBS)" OFF)
#
# option list
#
# TODO: mark all options as advanced when not GGML_STANDALONE
if (APPLE)
set(GGML_METAL_DEFAULT ON)
set(GGML_BLAS_DEFAULT ON)
set(GGML_BLAS_VENDOR_DEFAULT "Apple")
else()
set(GGML_METAL_DEFAULT OFF)
set(GGML_BLAS_DEFAULT OFF)
set(GGML_BLAS_VENDOR_DEFAULT "Generic")
endif()
if (CMAKE_CROSSCOMPILING OR DEFINED ENV{SOURCE_DATE_EPOCH})
message(STATUS "Setting GGML_NATIVE_DEFAULT to OFF")
set(GGML_NATIVE_DEFAULT OFF)
else()
set(GGML_NATIVE_DEFAULT ON)
endif()
# defaults
if (NOT GGML_LLAMAFILE_DEFAULT)
set(GGML_LLAMAFILE_DEFAULT OFF)
endif()
if (NOT GGML_CUDA_GRAPHS_DEFAULT)
set(GGML_CUDA_GRAPHS_DEFAULT OFF)
endif()
# general
option(GGML_STATIC "ggml: static link libraries" OFF)
option(GGML_NATIVE "ggml: optimize the build for the current system" ${GGML_NATIVE_DEFAULT})
option(GGML_LTO "ggml: enable link time optimization" OFF)
option(GGML_CCACHE "ggml: use ccache if available" ON)
# debug
option(GGML_ALL_WARNINGS "ggml: enable all compiler warnings" ON)
option(GGML_ALL_WARNINGS_3RD_PARTY "ggml: enable all compiler warnings in 3rd party libs" OFF)
option(GGML_GPROF "ggml: enable gprof" OFF)
# build
option(GGML_FATAL_WARNINGS "ggml: enable -Werror flag" OFF)
# sanitizers
option(GGML_SANITIZE_THREAD "ggml: enable thread sanitizer" OFF)
option(GGML_SANITIZE_ADDRESS "ggml: enable address sanitizer" OFF)
option(GGML_SANITIZE_UNDEFINED "ggml: enable undefined sanitizer" OFF)
# instruction set specific
if (GGML_NATIVE OR NOT GGML_NATIVE_DEFAULT)
set(INS_ENB OFF)
else()
set(INS_ENB ON)
endif()
option(GGML_CPU_HBM "ggml: use memkind for CPU HBM" OFF)
option(GGML_CPU_AARCH64 "ggml: use runtime weight conversion of Q4_0 to Q4_X_X" ON)
option(GGML_AVX "ggml: enable AVX" ${INS_ENB})
option(GGML_AVX_VNNI "ggml: enable AVX-VNNI" OFF)
option(GGML_AVX2 "ggml: enable AVX2" ${INS_ENB})
option(GGML_AVX512 "ggml: enable AVX512F" OFF)
option(GGML_AVX512_VBMI "ggml: enable AVX512-VBMI" OFF)
option(GGML_AVX512_VNNI "ggml: enable AVX512-VNNI" OFF)
option(GGML_AVX512_BF16 "ggml: enable AVX512-BF16" OFF)
if (NOT MSVC)
# in MSVC, F16C and FMA are implied with AVX2/AVX512
option(GGML_FMA "ggml: enable FMA" ${INS_ENB})
option(GGML_F16C "ggml: enable F16C" ${INS_ENB})
# MSVC does not seem to support AMX
option(GGML_AMX_TILE "ggml: enable AMX-TILE" OFF)
option(GGML_AMX_INT8 "ggml: enable AMX-INT8" OFF)
option(GGML_AMX_BF16 "ggml: enable AMX-BF16" OFF)
endif()
option(GGML_LASX "ggml: enable lasx" ON)
option(GGML_LSX "ggml: enable lsx" ON)
option(GGML_RVV "ggml: enable rvv" ON)
option(GGML_CPU_ALL_VARIANTS "ggml: build all variants of the CPU backend (requires GGML_BACKEND_DL)" OFF)
set(GGML_CPU_ARM_ARCH "" CACHE STRING "ggml: CPU architecture for ARM")
if (WIN32)
set(GGML_WIN_VER "0x602" CACHE STRING "ggml: Windows version")
endif()
# ggml core
set(GGML_SCHED_MAX_COPIES "4" CACHE STRING "ggml: max input copies for pipeline parallelism")
option(GGML_CPU "ggml: enable CPU backend" ON)
# 3rd party libs / backends
option(GGML_ACCELERATE "ggml: enable Accelerate framework" ON)
option(GGML_BLAS "ggml: use BLAS" ${GGML_BLAS_DEFAULT})
set(GGML_BLAS_VENDOR ${GGML_BLAS_VENDOR_DEFAULT} CACHE STRING
"ggml: BLAS library vendor")
option(GGML_LLAMAFILE "ggml: use LLAMAFILE" ${GGML_LLAMAFILE_DEFAULT})
option(GGML_CUDA "ggml: use CUDA" OFF)
option(GGML_MUSA "ggml: use MUSA" OFF)
option(GGML_CUDA_FORCE_MMQ "ggml: use mmq kernels instead of cuBLAS" OFF)
option(GGML_CUDA_FORCE_CUBLAS "ggml: always use cuBLAS instead of mmq kernels" OFF)
option(GGML_CUDA_F16 "ggml: use 16 bit floats for some calculations" OFF)
set (GGML_CUDA_PEER_MAX_BATCH_SIZE "128" CACHE STRING
"ggml: max. batch size for using peer access")
option(GGML_CUDA_NO_PEER_COPY "ggml: do not use peer to peer copies" OFF)
option(GGML_CUDA_NO_VMM "ggml: do not try to use CUDA VMM" OFF)
option(GGML_CUDA_FA_ALL_QUANTS "ggml: compile all quants for FlashAttention" OFF)
option(GGML_CUDA_GRAPHS "ggml: use CUDA graphs (llama.cpp only)" ${GGML_CUDA_GRAPHS_DEFAULT})
option(GGML_HIP "ggml: use HIP" OFF)
option(GGML_HIP_GRAPHS "ggml: use HIP graph, experimental, slow" OFF)
option(GGML_HIP_NO_VMM "ggml: do not try to use HIP VMM" ON)
option(GGML_HIP_UMA "ggml: use HIP unified memory architecture" OFF)
option(GGML_VULKAN "ggml: use Vulkan" OFF)
option(GGML_VULKAN_CHECK_RESULTS "ggml: run Vulkan op checks" OFF)
option(GGML_VULKAN_DEBUG "ggml: enable Vulkan debug output" OFF)
option(GGML_VULKAN_MEMORY_DEBUG "ggml: enable Vulkan memory debug output" OFF)
option(GGML_VULKAN_SHADER_DEBUG_INFO "ggml: enable Vulkan shader debug info" OFF)
option(GGML_VULKAN_PERF "ggml: enable Vulkan perf output" OFF)
option(GGML_VULKAN_VALIDATE "ggml: enable Vulkan validation" OFF)
option(GGML_VULKAN_RUN_TESTS "ggml: run Vulkan tests" OFF)
option(GGML_KOMPUTE "ggml: use Kompute" OFF)
option(GGML_METAL "ggml: use Metal" ${GGML_METAL_DEFAULT})
option(GGML_METAL_USE_BF16 "ggml: use bfloat if available" OFF)
option(GGML_METAL_NDEBUG "ggml: disable Metal debugging" OFF)
option(GGML_METAL_SHADER_DEBUG "ggml: compile Metal with -fno-fast-math" OFF)
option(GGML_METAL_EMBED_LIBRARY "ggml: embed Metal library" ${GGML_METAL})
set (GGML_METAL_MACOSX_VERSION_MIN "" CACHE STRING
"ggml: metal minimum macOS version")
set (GGML_METAL_STD "" CACHE STRING "ggml: metal standard version (-std flag)")
option(GGML_OPENMP "ggml: use OpenMP" ON)
option(GGML_RPC "ggml: use RPC" OFF)
option(GGML_SYCL "ggml: use SYCL" OFF)
option(GGML_SYCL_F16 "ggml: use 16 bit floats for sycl calculations" OFF)
set (GGML_SYCL_TARGET "INTEL" CACHE STRING
"ggml: sycl target device")
set (GGML_SYCL_DEVICE_ARCH "" CACHE STRING
"ggml: sycl device architecture")
option(GGML_OPENCL "ggml: use OpenCL" OFF)
option(GGML_OPENCL_PROFILING "ggml: use OpenCL profiling (increases overhead)" OFF)
option(GGML_OPENCL_EMBED_KERNELS "ggml: embed kernels" ON)
option(GGML_OPENCL_USE_ADRENO_KERNELS "ggml: use optimized kernels for Adreno" ON)
# toolchain for vulkan-shaders-gen
set (GGML_VULKAN_SHADERS_GEN_TOOLCHAIN "" CACHE FILEPATH "ggml: toolchain file for vulkan-shaders-gen")
# extra artifacts
option(GGML_BUILD_TESTS "ggml: build tests" ${GGML_STANDALONE})
option(GGML_BUILD_EXAMPLES "ggml: build examples" ${GGML_STANDALONE})
#
# dependencies
#
set(CMAKE_C_STANDARD 11)
set(CMAKE_C_STANDARD_REQUIRED true)
set(CMAKE_CXX_STANDARD 17)
set(CMAKE_CXX_STANDARD_REQUIRED true)
set(THREADS_PREFER_PTHREAD_FLAG ON)
find_package(Threads REQUIRED)
#
# build the library
#
add_subdirectory(src)
#
# tests and examples
#
if (GGML_BUILD_TESTS)
enable_testing()
add_subdirectory(tests)
endif ()
if (GGML_BUILD_EXAMPLES)
add_subdirectory(examples)
endif ()
#
# install
#
include(GNUInstallDirs)
include(CMakePackageConfigHelpers)
# all public headers
set(GGML_PUBLIC_HEADERS
include/ggml.h
include/ggml-cpu.h
include/ggml-alloc.h
include/ggml-backend.h
include/ggml-blas.h
include/ggml-cann.h
include/ggml-cuda.h
include/ggml-kompute.h
include/ggml-opt.h
include/ggml-metal.h
include/ggml-rpc.h
include/ggml-sycl.h
include/ggml-vulkan.h
include/gguf.h)
set_target_properties(ggml PROPERTIES PUBLIC_HEADER "${GGML_PUBLIC_HEADERS}")
#if (GGML_METAL)
# set_target_properties(ggml PROPERTIES RESOURCE "${CMAKE_CURRENT_SOURCE_DIR}/src/ggml-metal.metal")
#endif()
install(TARGETS ggml LIBRARY PUBLIC_HEADER)
install(TARGETS ggml-base LIBRARY)
if (GGML_STANDALONE)
configure_file(${CMAKE_CURRENT_SOURCE_DIR}/ggml.pc.in
${CMAKE_CURRENT_BINARY_DIR}/ggml.pc
@ONLY)
install(FILES ${CMAKE_CURRENT_BINARY_DIR}/ggml.pc
DESTINATION share/pkgconfig)
endif()
#
# Create CMake package
#
# Generate version info based on git commit.
if(NOT DEFINED GGML_BUILD_NUMBER)
find_program(GIT_EXE NAMES git git.exe REQUIRED NO_CMAKE_FIND_ROOT_PATH)
execute_process(COMMAND ${GIT_EXE} rev-list --count HEAD
WORKING_DIRECTORY ${CMAKE_CURRENT_SOURCE_DIR}
OUTPUT_VARIABLE GGML_BUILD_NUMBER
OUTPUT_STRIP_TRAILING_WHITESPACE
)
if(GGML_BUILD_NUMBER EQUAL 1)
message(WARNING "GGML build version fixed at 1 likely due to a shallow clone.")
endif()
execute_process(COMMAND ${GIT_EXE} rev-parse --short HEAD
WORKING_DIRECTORY ${CMAKE_CURRENT_SOURCE_DIR}
OUTPUT_VARIABLE GGML_BUILD_COMMIT
OUTPUT_STRIP_TRAILING_WHITESPACE
)
endif()
# Capture variables prefixed with GGML_.
set(variable_set_statements
"
####### Expanded from @GGML_VARIABLES_EXPANDED@ by configure_package_config_file() #######
####### Any changes to this file will be overwritten by the next CMake run #######
")
set(GGML_SHARED_LIB ${BUILD_SHARED_LIBS})
get_cmake_property(all_variables VARIABLES)
foreach(variable_name IN LISTS all_variables)
if(variable_name MATCHES "^GGML_")
string(REPLACE ";" "\\;"
variable_value "${${variable_name}}")
set(variable_set_statements
"${variable_set_statements}set(${variable_name} \"${variable_value}\")\n")
endif()
endforeach()
set(GGML_VARIABLES_EXPANDED ${variable_set_statements})
# Create the CMake package and set install location.
set(GGML_INSTALL_VERSION 0.0.${GGML_BUILD_NUMBER})
set(GGML_INCLUDE_INSTALL_DIR ${CMAKE_INSTALL_INCLUDEDIR} CACHE PATH "Location of header files")
set(GGML_LIB_INSTALL_DIR ${CMAKE_INSTALL_LIBDIR} CACHE PATH "Location of library files")
set(GGML_BIN_INSTALL_DIR ${CMAKE_INSTALL_BINDIR} CACHE PATH "Location of binary files")
configure_package_config_file(
${CMAKE_CURRENT_SOURCE_DIR}/cmake/ggml-config.cmake.in
${CMAKE_CURRENT_BINARY_DIR}/ggml-config.cmake
INSTALL_DESTINATION ${CMAKE_INSTALL_LIBDIR}/cmake/ggml
PATH_VARS GGML_INCLUDE_INSTALL_DIR
GGML_LIB_INSTALL_DIR
GGML_BIN_INSTALL_DIR)
write_basic_package_version_file(
${CMAKE_CURRENT_BINARY_DIR}/ggml-version.cmake
VERSION ${GGML_INSTALL_VERSION}
COMPATIBILITY SameMajorVersion)
install(FILES ${CMAKE_CURRENT_BINARY_DIR}/ggml-config.cmake
${CMAKE_CURRENT_BINARY_DIR}/ggml-version.cmake
DESTINATION ${CMAKE_INSTALL_LIBDIR}/cmake/ggml)


@@ -0,0 +1,54 @@
# Add new build types
# ReleaseGG - Release with enabled asserts
SET(CMAKE_CXX_FLAGS_RELEASEGG
"-O3"
CACHE STRING "Flags used by the c++ compiler during release builds with enabled asserts."
FORCE )
SET(CMAKE_C_FLAGS_RELEASEGG
"-O3"
CACHE STRING "Flags used by the compiler during release builds with enabled asserts."
FORCE )
SET(CMAKE_EXE_LINKER_FLAGS_RELEASEGG
""
CACHE STRING "Flags used for linking binaries during release builds with enabled asserts."
FORCE )
SET(CMAKE_SHARED_LINKER_FLAGS_RELEASEGG
""
CACHE STRING "Flags used by the shared libraries linker during release builds with enabled asserts."
FORCE )
MARK_AS_ADVANCED(
CMAKE_CXX_FLAGS_RELEASEGG
CMAKE_C_FLAGS_RELEASEGG
CMAKE_EXE_LINKER_FLAGS_RELEASEGG
CMAKE_SHARED_LINKER_FLAGS_RELEASEGG )
# RelWithDebInfoGG - RelWithDebInfo with enabled asserts
SET(CMAKE_CXX_FLAGS_RELWITHDEBINFOGG
"-O2 -g"
CACHE STRING "Flags used by the c++ compiler during release builds with debug symbols and enabled asserts."
FORCE )
SET(CMAKE_C_FLAGS_RELWITHDEBINFOGG
"-O2 -g"
CACHE STRING "Flags used by the compiler during release builds with debug symbols and enabled asserts."
FORCE )
SET(CMAKE_EXE_LINKER_FLAGS_RELWITHDEBINFOGG
""
CACHE STRING "Flags used for linking binaries during release builds with debug symbols and enabled asserts."
FORCE )
SET(CMAKE_SHARED_LINKER_FLAGS_RELWITHDEBINFOGG
""
CACHE STRING "Flags used by the shared libraries linker during release builds with debug symbols and enabled asserts."
FORCE )
MARK_AS_ADVANCED(
CMAKE_CXX_FLAGS_RELWITHDEBINFOGG
CMAKE_C_FLAGS_RELWITHDEBINFOGG
CMAKE_EXE_LINKER_FLAGS_RELWITHDEBINFOGG
CMAKE_SHARED_LINKER_FLAGS_RELWITHDEBINFOGG )
if (NOT XCODE AND NOT MSVC AND NOT CMAKE_BUILD_TYPE)
set(CMAKE_BUILD_TYPE Release CACHE STRING "Build type" FORCE)
set_property(CACHE CMAKE_BUILD_TYPE PROPERTY STRINGS "Debug" "Release" "MinSizeRel" "RelWithDebInfo" "ReleaseGG" "RelWithDebInfoGG")
endif()


@@ -0,0 +1,22 @@
find_package(Git)
# the commit's SHA1
execute_process(COMMAND
"${GIT_EXECUTABLE}" describe --match=NeVeRmAtCh --always --abbrev=8
WORKING_DIRECTORY "${CMAKE_SOURCE_DIR}"
OUTPUT_VARIABLE GIT_SHA1
ERROR_QUIET OUTPUT_STRIP_TRAILING_WHITESPACE)
# the date of the commit
execute_process(COMMAND
"${GIT_EXECUTABLE}" log -1 --format=%ad --date=local
WORKING_DIRECTORY "${CMAKE_SOURCE_DIR}"
OUTPUT_VARIABLE GIT_DATE
ERROR_QUIET OUTPUT_STRIP_TRAILING_WHITESPACE)
# the subject of the commit
execute_process(COMMAND
"${GIT_EXECUTABLE}" log -1 --format=%s
WORKING_DIRECTORY "${CMAKE_SOURCE_DIR}"
OUTPUT_VARIABLE GIT_COMMIT_SUBJECT
ERROR_QUIET OUTPUT_STRIP_TRAILING_WHITESPACE)


@@ -0,0 +1,147 @@
@GGML_VARIABLES_EXPANDED@
@PACKAGE_INIT@
set_and_check(GGML_INCLUDE_DIR "@PACKAGE_GGML_INCLUDE_INSTALL_DIR@")
set_and_check(GGML_LIB_DIR "@PACKAGE_GGML_LIB_INSTALL_DIR@")
set_and_check(GGML_BIN_DIR "@PACKAGE_GGML_BIN_INSTALL_DIR@")
find_package(Threads REQUIRED)
find_library(GGML_LIBRARY ggml
REQUIRED
HINTS ${GGML_LIB_DIR}
NO_CMAKE_FIND_ROOT_PATH)
add_library(ggml::ggml UNKNOWN IMPORTED)
set_target_properties(ggml::ggml
PROPERTIES
IMPORTED_LOCATION "${GGML_LIBRARY}")
find_library(GGML_BASE_LIBRARY ggml-base
REQUIRED
HINTS ${GGML_LIB_DIR}
NO_CMAKE_FIND_ROOT_PATH)
add_library(ggml::ggml-base UNKNOWN IMPORTED)
set_target_properties(ggml::ggml-base
PROPERTIES
IMPORTED_LOCATION "${GGML_BASE_LIBRARY}")
if (NOT GGML_SHARED_LIB)
if (APPLE AND GGML_ACCELERATE)
find_library(ACCELERATE_FRAMEWORK Accelerate REQUIRED)
list(APPEND GGML_CPU_INTERFACE_LINK_LIBRARIES ${ACCELERATE_FRAMEWORK})
endif()
if (GGML_OPENMP)
find_package(OpenMP REQUIRED)
list(APPEND GGML_CPU_INTERFACE_LINK_LIBRARIES OpenMP::OpenMP_C OpenMP::OpenMP_CXX)
endif()
if (GGML_CPU_HBM)
find_library(memkind memkind REQUIRED)
list(APPEND GGML_CPU_INTERFACE_LINK_LIBRARIES memkind)
endif()
if (GGML_BLAS)
find_package(BLAS REQUIRED)
list(APPEND GGML_CPU_INTERFACE_LINK_LIBRARIES ${BLAS_LIBRARIES})
list(APPEND GGML_CPU_INTERFACE_LINK_OPTIONS ${BLAS_LINKER_FLAGS})
endif()
if (GGML_CUDA)
find_package(CUDAToolkit REQUIRED)
endif()
if (GGML_METAL)
find_library(FOUNDATION_LIBRARY Foundation REQUIRED)
find_library(METAL_FRAMEWORK Metal REQUIRED)
find_library(METALKIT_FRAMEWORK MetalKit REQUIRED)
list(APPEND GGML_METAL_INTERFACE_LINK_LIBRARIES
${FOUNDATION_LIBRARY} ${METAL_FRAMEWORK} ${METALKIT_FRAMEWORK})
endif()
if (GGML_VULKAN)
find_package(Vulkan REQUIRED)
list(APPEND GGML_VULKAN_INTERFACE_LINK_LIBRARIES Vulkan::Vulkan)
endif()
if (GGML_HIP)
find_package(hip REQUIRED)
find_package(hipblas REQUIRED)
find_package(rocblas REQUIRED)
list(APPEND GGML_HIP_INTERFACE_LINK_LIBRARIES hip::host roc::rocblas roc::hipblas)
endif()
if (GGML_SYCL)
find_package(DNNL)
if (${DNNL_FOUND} AND GGML_SYCL_TARGET STREQUAL "INTEL")
list(APPEND GGML_SYCL_INTERFACE_LINK_LIBRARIES DNNL::dnnl)
endif()
if (WIN32)
find_package(IntelSYCL REQUIRED)
find_package(MKL REQUIRED)
list(APPEND GGML_SYCL_INTERFACE_LINK_LIBRARIES IntelSYCL::SYCL_CXX MKL::MKL MKL::MKL_SYCL)
endif()
endif()
endif()
set(_ggml_all_targets "")
foreach(_ggml_backend ${GGML_AVAILABLE_BACKENDS})
string(REPLACE "-" "_" _ggml_backend_pfx "${_ggml_backend}")
string(TOUPPER "${_ggml_backend_pfx}" _ggml_backend_pfx)
find_library(${_ggml_backend_pfx}_LIBRARY ${_ggml_backend}
REQUIRED
HINTS ${GGML_LIB_DIR}
NO_CMAKE_FIND_ROOT_PATH)
message(STATUS "Found ${${_ggml_backend_pfx}_LIBRARY}")
add_library(ggml::${_ggml_backend} UNKNOWN IMPORTED)
set_target_properties(ggml::${_ggml_backend}
PROPERTIES
INTERFACE_INCLUDE_DIRECTORIES "${GGML_INCLUDE_DIR}"
IMPORTED_LINK_INTERFACE_LANGUAGES "CXX"
IMPORTED_LOCATION "${${_ggml_backend_pfx}_LIBRARY}"
INTERFACE_COMPILE_FEATURES c_std_90
POSITION_INDEPENDENT_CODE ON)
string(REGEX MATCH "^ggml-cpu" is_cpu_variant "${_ggml_backend}")
if(is_cpu_variant)
list(APPEND GGML_CPU_INTERFACE_LINK_LIBRARIES "ggml::ggml" "ggml::ggml-base")
set_target_properties(ggml::${_ggml_backend}
PROPERTIES
INTERFACE_LINK_LIBRARIES "${GGML_CPU_INTERFACE_LINK_LIBRARIES}")
if(GGML_CPU_INTERFACE_LINK_OPTIONS)
set_target_properties(ggml::${_ggml_backend}
PROPERTIES
INTERFACE_LINK_OPTIONS "${GGML_CPU_INTERFACE_LINK_OPTIONS}")
endif()
else()
list(APPEND ${_ggml_backend_pfx}_INTERFACE_LINK_LIBRARIES "ggml::ggml" "ggml::ggml-base")
set_target_properties(ggml::${_ggml_backend}
PROPERTIES
INTERFACE_LINK_LIBRARIES "${${_ggml_backend_pfx}_INTERFACE_LINK_LIBRARIES}")
if(${_ggml_backend_pfx}_INTERFACE_LINK_OPTIONS)
set_target_properties(ggml::${_ggml_backend}
PROPERTIES
INTERFACE_LINK_OPTIONS "${${_ggml_backend_pfx}_INTERFACE_LINK_OPTIONS}")
endif()
endif()
list(APPEND _ggml_all_targets ggml::${_ggml_backend})
endforeach()
add_library(ggml::all INTERFACE IMPORTED)
set_target_properties(ggml::all
PROPERTIES
INTERFACE_LINK_LIBRARIES "${_ggml_all_targets}")
check_required_components(ggml)


@@ -0,0 +1,76 @@
#pragma once
#include "ggml.h"
#ifdef __cplusplus
extern "C" {
#endif
typedef struct ggml_backend_buffer_type * ggml_backend_buffer_type_t;
typedef struct ggml_backend_buffer * ggml_backend_buffer_t;
typedef struct ggml_backend * ggml_backend_t;
// Tensor allocator
struct ggml_tallocr {
ggml_backend_buffer_t buffer;
void * base;
size_t alignment;
size_t offset;
};
GGML_API struct ggml_tallocr ggml_tallocr_new(ggml_backend_buffer_t buffer);
GGML_API void ggml_tallocr_alloc(struct ggml_tallocr * talloc, struct ggml_tensor * tensor);
// Graph allocator
/*
Example usage:
ggml_gallocr_t galloc = ggml_gallocr_new(ggml_backend_cpu_buffer_type());
// optional: create a worst-case graph and reserve the buffers to avoid reallocations
ggml_gallocr_reserve(galloc, build_graph(max_batch));
// allocate the graph
struct ggml_cgraph * graph = build_graph(batch);
ggml_gallocr_alloc_graph(galloc, graph);
printf("compute buffer size: %zu bytes\n", ggml_gallocr_get_buffer_size(galloc, 0));
// evaluate the graph
ggml_backend_graph_compute(backend, graph);
*/
// special tensor flags for use with the graph allocator:
// ggml_set_input(): all input tensors are allocated at the beginning of the graph in non-overlapping addresses
// ggml_set_output(): output tensors are never freed and never overwritten
typedef struct ggml_gallocr * ggml_gallocr_t;
GGML_API ggml_gallocr_t ggml_gallocr_new(ggml_backend_buffer_type_t buft);
GGML_API ggml_gallocr_t ggml_gallocr_new_n(ggml_backend_buffer_type_t * bufts, int n_bufs);
GGML_API void ggml_gallocr_free(ggml_gallocr_t galloc);
// pre-allocate buffers from a measure graph - does not allocate or modify the graph
// call with a worst-case graph to avoid buffer reallocations
// not strictly required for single buffer usage: ggml_gallocr_alloc_graph will reallocate the buffers automatically if needed
// returns false if the buffer allocation failed
GGML_API bool ggml_gallocr_reserve(ggml_gallocr_t galloc, struct ggml_cgraph * graph);
GGML_API bool ggml_gallocr_reserve_n(
ggml_gallocr_t galloc,
struct ggml_cgraph * graph,
const int * node_buffer_ids,
const int * leaf_buffer_ids);
// automatic reallocation if the topology changes when using a single buffer
// returns false if using multiple buffers and a re-allocation is needed (call ggml_gallocr_reserve_n first to set the node buffers)
GGML_API bool ggml_gallocr_alloc_graph(ggml_gallocr_t galloc, struct ggml_cgraph * graph);
GGML_API size_t ggml_gallocr_get_buffer_size(ggml_gallocr_t galloc, int buffer_id);
// Utils
// Create a buffer and allocate all the tensors in a ggml_context
GGML_API struct ggml_backend_buffer * ggml_backend_alloc_ctx_tensors_from_buft(struct ggml_context * ctx, ggml_backend_buffer_type_t buft);
GGML_API struct ggml_backend_buffer * ggml_backend_alloc_ctx_tensors(struct ggml_context * ctx, ggml_backend_t backend);
#ifdef __cplusplus
}
#endif
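
The allocator flow described in the comments above can be exercised with a short, self-contained sketch. This is a hedged illustration, not code from the repository: it assumes an already-initialized backend and an already-built graph, and it uses only functions declared in this header and in `ggml-backend.h`.
```
// Minimal sketch of the graph-allocator flow (assumes `backend` and `graph`
// were created elsewhere; error handling kept to a minimum).
#include "ggml.h"
#include "ggml-alloc.h"
#include "ggml-backend.h"
#include <cstdio>

void compute_once(ggml_backend_t backend, struct ggml_cgraph * graph) {
    // graph allocator backed by the CPU buffer type
    ggml_gallocr_t galloc = ggml_gallocr_new(ggml_backend_cpu_buffer_type());
    // allocate all tensors of the graph; returns false on allocation failure
    if (ggml_gallocr_alloc_graph(galloc, graph)) {
        std::printf("compute buffer size: %zu bytes\n",
                    ggml_gallocr_get_buffer_size(galloc, 0));
        ggml_backend_graph_compute(backend, graph);
    }
    ggml_gallocr_free(galloc);
}
```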


@@ -0,0 +1,354 @@
#pragma once
#include "ggml.h"
#include "ggml-alloc.h"
#ifdef GGML_BACKEND_SHARED
# if defined(_WIN32) && !defined(__MINGW32__)
# ifdef GGML_BACKEND_BUILD
# define GGML_BACKEND_API __declspec(dllexport) extern
# else
# define GGML_BACKEND_API __declspec(dllimport) extern
# endif
# else
# define GGML_BACKEND_API __attribute__ ((visibility ("default"))) extern
# endif
#else
# define GGML_BACKEND_API extern
#endif
#ifdef __cplusplus
extern "C" {
#endif
typedef struct ggml_backend_buffer_type * ggml_backend_buffer_type_t;
typedef struct ggml_backend_buffer * ggml_backend_buffer_t;
typedef struct ggml_backend_event * ggml_backend_event_t;
typedef struct ggml_backend * ggml_backend_t;
typedef void * ggml_backend_graph_plan_t;
typedef struct ggml_backend_reg * ggml_backend_reg_t;
typedef struct ggml_backend_device * ggml_backend_dev_t;
//
// Backend buffer type
//
GGML_API const char * ggml_backend_buft_name (ggml_backend_buffer_type_t buft);
GGML_API ggml_backend_buffer_t ggml_backend_buft_alloc_buffer (ggml_backend_buffer_type_t buft, size_t size);
GGML_API size_t ggml_backend_buft_get_alignment (ggml_backend_buffer_type_t buft);
GGML_API size_t ggml_backend_buft_get_max_size (ggml_backend_buffer_type_t buft);
GGML_API size_t ggml_backend_buft_get_alloc_size(ggml_backend_buffer_type_t buft, struct ggml_tensor * tensor);
GGML_API bool ggml_backend_buft_is_host (ggml_backend_buffer_type_t buft);
GGML_API ggml_backend_dev_t ggml_backend_buft_get_device (ggml_backend_buffer_type_t buft);
//
// Backend buffer
//
enum ggml_backend_buffer_usage {
GGML_BACKEND_BUFFER_USAGE_ANY = 0,
GGML_BACKEND_BUFFER_USAGE_WEIGHTS = 1,
GGML_BACKEND_BUFFER_USAGE_COMPUTE = 2,
};
GGML_API const char * ggml_backend_buffer_name (ggml_backend_buffer_t buffer);
GGML_API void ggml_backend_buffer_free (ggml_backend_buffer_t buffer);
GGML_API void * ggml_backend_buffer_get_base (ggml_backend_buffer_t buffer);
GGML_API size_t ggml_backend_buffer_get_size (ggml_backend_buffer_t buffer);
GGML_API void ggml_backend_buffer_init_tensor (ggml_backend_buffer_t buffer, struct ggml_tensor * tensor);
GGML_API size_t ggml_backend_buffer_get_alignment (ggml_backend_buffer_t buffer);
GGML_API size_t ggml_backend_buffer_get_max_size (ggml_backend_buffer_t buffer);
GGML_API size_t ggml_backend_buffer_get_alloc_size(ggml_backend_buffer_t buffer, struct ggml_tensor * tensor);
GGML_API void ggml_backend_buffer_clear (ggml_backend_buffer_t buffer, uint8_t value);
GGML_API bool ggml_backend_buffer_is_host (ggml_backend_buffer_t buffer);
GGML_API void ggml_backend_buffer_set_usage (ggml_backend_buffer_t buffer, enum ggml_backend_buffer_usage usage);
GGML_API enum ggml_backend_buffer_usage ggml_backend_buffer_get_usage (ggml_backend_buffer_t buffer);
GGML_API ggml_backend_buffer_type_t ggml_backend_buffer_get_type (ggml_backend_buffer_t buffer);
GGML_API void ggml_backend_buffer_reset (ggml_backend_buffer_t buffer);
// tensor copy between different backends
GGML_API void ggml_backend_tensor_copy(struct ggml_tensor * src, struct ggml_tensor * dst);
//
// Backend (stream)
//
GGML_API ggml_guid_t ggml_backend_guid(ggml_backend_t backend);
GGML_API const char * ggml_backend_name(ggml_backend_t backend);
GGML_API void ggml_backend_free(ggml_backend_t backend);
GGML_API ggml_backend_buffer_type_t ggml_backend_get_default_buffer_type(ggml_backend_t backend);
GGML_API ggml_backend_buffer_t ggml_backend_alloc_buffer(ggml_backend_t backend, size_t size);
GGML_API size_t ggml_backend_get_alignment(ggml_backend_t backend);
GGML_API size_t ggml_backend_get_max_size(ggml_backend_t backend);
GGML_API void ggml_backend_tensor_set_async(ggml_backend_t backend, struct ggml_tensor * tensor, const void * data, size_t offset, size_t size);
GGML_API void ggml_backend_tensor_get_async(ggml_backend_t backend, const struct ggml_tensor * tensor, void * data, size_t offset, size_t size);
// "offset" refers to the offset in tensor->data for setting/getting data
GGML_API void ggml_backend_tensor_set( struct ggml_tensor * tensor, const void * data, size_t offset, size_t size);
GGML_API void ggml_backend_tensor_get(const struct ggml_tensor * tensor, void * data, size_t offset, size_t size);
GGML_API void ggml_backend_tensor_memset( struct ggml_tensor * tensor, uint8_t value, size_t offset, size_t size);
GGML_API void ggml_backend_synchronize(ggml_backend_t backend);
GGML_API ggml_backend_graph_plan_t ggml_backend_graph_plan_create(ggml_backend_t backend, struct ggml_cgraph * cgraph);
GGML_API void ggml_backend_graph_plan_free (ggml_backend_t backend, ggml_backend_graph_plan_t plan);
GGML_API enum ggml_status ggml_backend_graph_plan_compute (ggml_backend_t backend, ggml_backend_graph_plan_t plan);
GGML_API enum ggml_status ggml_backend_graph_compute (ggml_backend_t backend, struct ggml_cgraph * cgraph);
GGML_API enum ggml_status ggml_backend_graph_compute_async(ggml_backend_t backend, struct ggml_cgraph * cgraph);
// NOTE: will be removed, use device version instead
GGML_API bool ggml_backend_supports_op(ggml_backend_t backend, const struct ggml_tensor * op);
GGML_API bool ggml_backend_supports_buft(ggml_backend_t backend, ggml_backend_buffer_type_t buft);
GGML_API bool ggml_backend_offload_op(ggml_backend_t backend, const struct ggml_tensor * op);
// asynchronous copy
// the copy is performed after all the currently queued operations in backend_src
// backend_dst will wait for the copy to complete before performing other operations
// automatic fallback to sync copy if async is not supported
GGML_API void ggml_backend_tensor_copy_async(ggml_backend_t backend_src, ggml_backend_t backend_dst, struct ggml_tensor * src, struct ggml_tensor * dst);
GGML_API ggml_backend_dev_t ggml_backend_get_device(ggml_backend_t backend);
//
// Events
//
GGML_API ggml_backend_event_t ggml_backend_event_new(ggml_backend_dev_t device);
GGML_API void ggml_backend_event_free(ggml_backend_event_t event);
GGML_API void ggml_backend_event_record(ggml_backend_event_t event, ggml_backend_t backend);
GGML_API void ggml_backend_event_synchronize(ggml_backend_event_t event);
GGML_API void ggml_backend_event_wait(ggml_backend_t backend, ggml_backend_event_t event);
//
// Backend device
//
enum ggml_backend_dev_type {
// CPU device using system memory
GGML_BACKEND_DEVICE_TYPE_CPU,
// GPU device using dedicated memory
GGML_BACKEND_DEVICE_TYPE_GPU,
// accelerator devices intended to be used together with the CPU backend (e.g. BLAS or AMX)
GGML_BACKEND_DEVICE_TYPE_ACCEL
};
// functionality supported by the device
struct ggml_backend_dev_caps {
// asynchronous operations
bool async;
// pinned host buffer
bool host_buffer;
// creating buffers from host ptr
bool buffer_from_host_ptr;
// event synchronization
bool events;
};
// all the device properties
struct ggml_backend_dev_props {
const char * name;
const char * description;
size_t memory_free;
size_t memory_total;
enum ggml_backend_dev_type type;
struct ggml_backend_dev_caps caps;
};
GGML_API const char * ggml_backend_dev_name(ggml_backend_dev_t device);
GGML_API const char * ggml_backend_dev_description(ggml_backend_dev_t device);
GGML_API void ggml_backend_dev_memory(ggml_backend_dev_t device, size_t * free, size_t * total);
GGML_API enum ggml_backend_dev_type ggml_backend_dev_type(ggml_backend_dev_t device);
GGML_API void ggml_backend_dev_get_props(ggml_backend_dev_t device, struct ggml_backend_dev_props * props);
GGML_API ggml_backend_reg_t ggml_backend_dev_backend_reg(ggml_backend_dev_t device);
GGML_API ggml_backend_t ggml_backend_dev_init(ggml_backend_dev_t device, const char * params);
GGML_API ggml_backend_buffer_type_t ggml_backend_dev_buffer_type(ggml_backend_dev_t device);
GGML_API ggml_backend_buffer_type_t ggml_backend_dev_host_buffer_type(ggml_backend_dev_t device);
GGML_API ggml_backend_buffer_t ggml_backend_dev_buffer_from_host_ptr(ggml_backend_dev_t device, void * ptr, size_t size, size_t max_tensor_size);
GGML_API bool ggml_backend_dev_supports_op(ggml_backend_dev_t device, const struct ggml_tensor * op);
GGML_API bool ggml_backend_dev_supports_buft(ggml_backend_dev_t device, ggml_backend_buffer_type_t buft);
GGML_API bool ggml_backend_dev_offload_op(ggml_backend_dev_t device, const struct ggml_tensor * op);
//
// Backend (reg)
//
GGML_API const char * ggml_backend_reg_name(ggml_backend_reg_t reg);
GGML_API size_t ggml_backend_reg_dev_count(ggml_backend_reg_t reg);
GGML_API ggml_backend_dev_t ggml_backend_reg_dev_get(ggml_backend_reg_t reg, size_t index);
GGML_API void * ggml_backend_reg_get_proc_address(ggml_backend_reg_t reg, const char * name);
// Common functions that may be obtained using ggml_backend_reg_get_proc_address
// Split buffer type for tensor parallelism
typedef ggml_backend_buffer_type_t (*ggml_backend_split_buffer_type_t)(int main_device, const float * tensor_split);
// Set the number of threads for the backend
typedef void (*ggml_backend_set_n_threads_t)(ggml_backend_t backend, int n_threads);
// Get additional buffer types provided by the device (returns a NULL-terminated array)
typedef ggml_backend_buffer_type_t * (*ggml_backend_dev_get_extra_bufts_t)(ggml_backend_dev_t device);
// Set the abort callback for the backend
typedef void (*ggml_backend_set_abort_callback_t)(ggml_backend_t backend, ggml_abort_callback abort_callback, void * abort_callback_data);
// Get a list of feature flags supported by the backend (returns a NULL-terminated array)
struct ggml_backend_feature {
const char * name;
const char * value;
};
typedef struct ggml_backend_feature * (*ggml_backend_get_features_t)(ggml_backend_reg_t reg);
//
// Backend registry
//
GGML_API void ggml_backend_device_register(ggml_backend_dev_t device);
// Backend (reg) enumeration
GGML_API size_t ggml_backend_reg_count(void);
GGML_API ggml_backend_reg_t ggml_backend_reg_get(size_t index);
GGML_API ggml_backend_reg_t ggml_backend_reg_by_name(const char * name);
// Device enumeration
GGML_API size_t ggml_backend_dev_count(void);
GGML_API ggml_backend_dev_t ggml_backend_dev_get(size_t index);
GGML_API ggml_backend_dev_t ggml_backend_dev_by_name(const char * name);
GGML_API ggml_backend_dev_t ggml_backend_dev_by_type(enum ggml_backend_dev_type type);
// Direct backend (stream) initialization
// = ggml_backend_dev_init(ggml_backend_dev_by_name(name), params)
GGML_API ggml_backend_t ggml_backend_init_by_name(const char * name, const char * params);
// = ggml_backend_dev_init(ggml_backend_dev_by_type(type), params)
GGML_API ggml_backend_t ggml_backend_init_by_type(enum ggml_backend_dev_type type, const char * params);
// = ggml_backend_dev_init(ggml_backend_dev_by_type(GPU) OR ggml_backend_dev_by_type(CPU), NULL)
GGML_API ggml_backend_t ggml_backend_init_best(void);
// Load a backend from a dynamic library and register it
GGML_API ggml_backend_reg_t ggml_backend_load(const char * path);
// Unload a backend if loaded dynamically and unregister it
GGML_API void ggml_backend_unload(ggml_backend_reg_t reg);
// Load all known backends from dynamic libraries
GGML_API void ggml_backend_load_all(void);
GGML_API void ggml_backend_load_all_from_path(const char * dir_path);
//
// Backend scheduler
//
// The backend scheduler allows for multiple backend devices to be used together
// Handles compute buffer allocation, assignment of tensors to backends, and copying of tensors between backends
// The backends are selected based on:
// - the backend that supports the operation
// - the location of the pre-allocated tensors (e.g. the weights)
/*
Example usage:
// operations that use tensors allocated in a buffer with USAGE_WEIGHTS will be assigned
// preferably to run on the same backend as the buffer
ggml_backend_buffer_set_usage(buf_weights, GGML_BACKEND_BUFFER_USAGE_WEIGHTS);
sched = ggml_backend_sched_new({backend_gpu, backend_gpu2, backend_cpu}, NULL, num_backends, GGML_DEFAULT_GRAPH_SIZE, false);
// initialize buffers from a max size graph (optional)
reserve_graph = build_graph(sched, max_batch_size);
// manually assign nodes to a backend (optional, should not be needed in most cases)
struct ggml_tensor * node = ggml_mul_mat(ctx, ...);
ggml_backend_sched_set_tensor_backend(sched, node, backend_gpu);
ggml_backend_sched_reserve(sched, reserve_graph);
// compute
graph = build_graph(sched); // the graph and its tensors are single-use in terms of allocation, multi-use in terms of computation
for (int i = 0; i < 10; ++i) {
ggml_backend_sched_graph_compute(sched, graph); // on the first iteration the graph is allocated automatically
}
// if there are graph inputs:
graph = build_graph(sched); // get a new graph that is not allocated (the metadata for the old graph is freed once ggml_free is called)
ggml_backend_sched_reset(sched); // clear the allocation of the previous graph
ggml_backend_sched_alloc_graph(sched, graph); // explicitly allocate the new graph but do not execute it
ggml_backend_tensor_set(input_tensor, ...); // copy data to the newly allocated graph tensors
ggml_backend_sched_graph_compute(sched, graph); // execute the graph
// as an alternative to the above it is also possible to assign the inputs to a dedicated context and
// allocate them statically via ggml_backend_alloc_ctx_tensors
}
*/
typedef struct ggml_backend_sched * ggml_backend_sched_t;
// Evaluation callback for each node in the graph (set with ggml_backend_sched_set_eval_callback)
// when ask == true, the scheduler wants to know if the user wants to observe this node
// this allows the scheduler to batch nodes together in order to evaluate them in a single call
//
// when ask == false, the scheduler is passing the node tensor to the user for observation
// if the user returns false, the scheduler will cancel the graph compute
//
typedef bool (*ggml_backend_sched_eval_callback)(struct ggml_tensor * t, bool ask, void * user_data);
// Initialize a backend scheduler; backends with a lower index are given priority over backends with a higher index
GGML_API ggml_backend_sched_t ggml_backend_sched_new(ggml_backend_t * backends, ggml_backend_buffer_type_t * bufts, int n_backends, size_t graph_size, bool parallel);
GGML_API void ggml_backend_sched_free(ggml_backend_sched_t sched);
// Initialize backend buffers from a measure graph
GGML_API bool ggml_backend_sched_reserve(ggml_backend_sched_t sched, struct ggml_cgraph * measure_graph); // returns success
GGML_API int ggml_backend_sched_get_n_backends(ggml_backend_sched_t sched);
GGML_API ggml_backend_t ggml_backend_sched_get_backend(ggml_backend_sched_t sched, int i);
// Get the number of splits of the last graph
GGML_API int ggml_backend_sched_get_n_splits(ggml_backend_sched_t sched);
GGML_API int ggml_backend_sched_get_n_copies(ggml_backend_sched_t sched);
GGML_API size_t ggml_backend_sched_get_buffer_size(ggml_backend_sched_t sched, ggml_backend_t backend);
GGML_API void ggml_backend_sched_set_tensor_backend(ggml_backend_sched_t sched, struct ggml_tensor * node, ggml_backend_t backend);
GGML_API ggml_backend_t ggml_backend_sched_get_tensor_backend(ggml_backend_sched_t sched, struct ggml_tensor * node);
// Allocate and compute graph on the backend scheduler
GGML_API bool ggml_backend_sched_alloc_graph(ggml_backend_sched_t sched, struct ggml_cgraph * graph); // returns success
GGML_API enum ggml_status ggml_backend_sched_graph_compute(ggml_backend_sched_t sched, struct ggml_cgraph * graph);
GGML_API enum ggml_status ggml_backend_sched_graph_compute_async(ggml_backend_sched_t sched, struct ggml_cgraph * graph);
GGML_API void ggml_backend_sched_synchronize(ggml_backend_sched_t sched);
// Reset all assignments and allocators - must be called before changing the node backends or allocating a new graph.
// This in effect deallocates all tensors that were previously allocated and leaves them with dangling pointers.
// The correct way to use this API is to discard the deallocated tensors and create new ones.
GGML_API void ggml_backend_sched_reset(ggml_backend_sched_t sched);
// Set a callback to be called for each resulting node during graph compute
GGML_API void ggml_backend_sched_set_eval_callback(ggml_backend_sched_t sched, ggml_backend_sched_eval_callback callback, void * user_data);
//
// Utils
//
struct ggml_backend_graph_copy {
ggml_backend_buffer_t buffer;
struct ggml_context * ctx_allocated;
struct ggml_context * ctx_unallocated;
struct ggml_cgraph * graph;
};
// Copy a graph to a different backend
GGML_API struct ggml_backend_graph_copy ggml_backend_graph_copy(ggml_backend_t backend, struct ggml_cgraph * graph);
GGML_API void ggml_backend_graph_copy_free(struct ggml_backend_graph_copy copy);
typedef bool (*ggml_backend_eval_callback)(int node_index, struct ggml_tensor * t1, struct ggml_tensor * t2, void * user_data);
// Compare the output of two backends
GGML_API bool ggml_backend_compare_graph_backend(ggml_backend_t backend1, ggml_backend_t backend2, struct ggml_cgraph * graph, ggml_backend_eval_callback callback, void * user_data);
// Tensor initialization
GGML_API void ggml_backend_tensor_alloc(ggml_backend_buffer_t buffer, struct ggml_tensor * tensor, void * addr);
GGML_API void ggml_backend_view_init(struct ggml_tensor * tensor);
// CPU buffer types are always available
GGML_API ggml_backend_buffer_t ggml_backend_cpu_buffer_from_ptr(void * ptr, size_t size);
GGML_API ggml_backend_buffer_type_t ggml_backend_cpu_buffer_type(void);
#ifdef __cplusplus
}
#endif
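
A hedged usage sketch for the registry and device APIs above: enumerate the registered devices, then initialize the best available backend. All calls are declared in this header; the program itself is illustrative, not part of the repository.
```
#include "ggml-backend.h"
#include <cstdio>

int main() {
    ggml_backend_load_all(); // load dynamically-built backends, if any
    for (size_t i = 0; i < ggml_backend_dev_count(); ++i) {
        ggml_backend_dev_t dev = ggml_backend_dev_get(i);
        size_t free_mem = 0, total_mem = 0;
        ggml_backend_dev_memory(dev, &free_mem, &total_mem);
        std::printf("device %zu: %s (%s), %zu/%zu bytes free\n",
                    i, ggml_backend_dev_name(dev), ggml_backend_dev_description(dev),
                    free_mem, total_mem);
    }
    // GPU device if available, otherwise CPU
    ggml_backend_t backend = ggml_backend_init_best();
    if (backend) {
        std::printf("using backend: %s\n", ggml_backend_name(backend));
        ggml_backend_free(backend);
    }
    return 0;
}
```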


@@ -0,0 +1,25 @@
#pragma once
#include "ggml.h"
#include "ggml-backend.h"
#ifdef __cplusplus
extern "C" {
#endif
// backend API
GGML_BACKEND_API ggml_backend_t ggml_backend_blas_init(void);
GGML_BACKEND_API bool ggml_backend_is_blas(ggml_backend_t backend);
// number of threads used for conversion to float
// for openblas and blis, this will also set the number of threads used for blas operations
GGML_BACKEND_API void ggml_backend_blas_set_n_threads(ggml_backend_t backend_blas, int n_threads);
GGML_BACKEND_API ggml_backend_reg_t ggml_backend_blas_reg(void);
#ifdef __cplusplus
}
#endif
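
For illustration, a small hedged sketch of how this backend might be initialized and configured; only the functions declared above are used.
```
#include "ggml-blas.h"

// returns a BLAS backend configured for `n_threads`, or nullptr on failure
ggml_backend_t make_blas_backend(int n_threads) {
    ggml_backend_t backend = ggml_backend_blas_init();
    if (backend && ggml_backend_is_blas(backend)) {
        // for OpenBLAS and BLIS this also sets the BLAS operation thread count
        ggml_backend_blas_set_n_threads(backend, n_threads);
    }
    return backend;
}
```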


@@ -0,0 +1,123 @@
/*
* Copyright (c) 2023-2024 The ggml authors
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to
* deal in the Software without restriction, including without limitation the
* rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
* sell copies of the Software, and to permit persons to whom the Software is
* furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
* AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
* IN THE SOFTWARE.
*/
#pragma once
#include "ggml-backend.h"
#include "ggml.h"
#ifdef __cplusplus
extern "C" {
#endif
/**
* @brief Maximum number of CANN devices supported.
*/
#define GGML_CANN_MAX_DEVICES 16
GGML_BACKEND_API ggml_backend_reg_t ggml_backend_cann_reg(void);
/**
* @brief Initializes the CANN backend for a specified device.
*
* This function initializes the CANN backend for the given device.
* It verifies the device index, allocates a context, and creates a backend
* instance.
*
* @param device The index of the device to initialize.
* @return A pointer to the initialized backend instance, or nullptr on failure.
*/
GGML_BACKEND_API ggml_backend_t ggml_backend_cann_init(int32_t device);
/**
* @brief Checks if a given backend is a CANN backend.
*
* This function verifies if the provided backend is a CANN backend by comparing
* its GUID with the CANN backend's GUID.
*
* @param backend The backend instance to check.
* @return True if the backend is a CANN backend, false otherwise.
*/
GGML_BACKEND_API bool ggml_backend_is_cann(ggml_backend_t backend);
/**
* @brief Retrieves the CANN buffer type for a specified device.
*
* This function initializes and returns the buffer type interface associated
* with the given device. It ensures thread-safe access using a mutex.
*
* @param device The device index for which to retrieve the buffer type.
* @return A pointer to the buffer type interface for the specified device, or
* nullptr if the device index is out of range.
*/
GGML_BACKEND_API ggml_backend_buffer_type_t
ggml_backend_cann_buffer_type(int32_t device);
/**
* @brief Retrieves the number of CANN devices available.
*
* This function returns the number of CANN devices available based on
* information obtained from `ggml_cann_info()`.
*
* @return The number of CANN devices available.
*/
GGML_BACKEND_API int32_t ggml_backend_cann_get_device_count(void);
/**
* @brief pinned host buffer for use with the CPU backend for faster copies between CPU and NPU.
*
* @return A pointer to the host buffer type interface.
*/
GGML_BACKEND_API ggml_backend_buffer_type_t ggml_backend_cann_host_buffer_type(void);
/**
* @brief Retrieves the description of a specific CANN device.
*
* This function sets the specified device, retrieves the SoC name,
* and writes it into the provided description buffer.
*
* @param device The device index to retrieve the description for.
* @param description Pointer to a buffer where the description will be written.
* @param description_size Size of the description buffer.
*/
GGML_BACKEND_API void ggml_backend_cann_get_device_description(
int32_t device, char* description, size_t description_size);
/**
* @brief Retrieves the memory information of a specific CANN device.
*
* This function sets the specified device, retrieves the free and total
* memory information of the specified type (ACL_HBM_MEM), and stores them
* in the provided pointers.
*
* @param device The device index to retrieve memory information for.
* @param free Pointer to a variable where the free memory size will be stored.
* @param total Pointer to a variable where the total memory size will be
* stored.
*/
GGML_BACKEND_API void ggml_backend_cann_get_device_memory(int32_t device,
size_t* free,
size_t* total);
#ifdef __cplusplus
}
#endif
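
As a hedged illustration of the device-query functions documented above (assuming a build with CANN support linked in; the function names are exactly those declared in this file):
```
#include "ggml-cann.h"
#include <cstdio>

void list_cann_devices() {
    const int32_t count = ggml_backend_cann_get_device_count();
    for (int32_t i = 0; i < count; ++i) {
        char desc[128];
        size_t free_mem = 0, total_mem = 0;
        ggml_backend_cann_get_device_description(i, desc, sizeof(desc));
        ggml_backend_cann_get_device_memory(i, &free_mem, &total_mem);
        std::printf("CANN device %d: %s, %zu/%zu bytes free\n",
                    i, desc, free_mem, total_mem);
    }
}
```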


@@ -0,0 +1,39 @@
#pragma once
#ifndef __cplusplus
#error "This header is for C++ only"
#endif
#include "ggml.h"
#include "ggml-alloc.h"
#include "ggml-backend.h"
#include "gguf.h"
#include <memory>
// Smart pointers for ggml types
// ggml
struct ggml_context_deleter { void operator()(ggml_context * ctx) { ggml_free(ctx); } };
struct gguf_context_deleter { void operator()(gguf_context * ctx) { gguf_free(ctx); } };
typedef std::unique_ptr<ggml_context, ggml_context_deleter> ggml_context_ptr;
typedef std::unique_ptr<gguf_context, gguf_context_deleter> gguf_context_ptr;
// ggml-alloc
struct ggml_gallocr_deleter { void operator()(ggml_gallocr_t galloc) { ggml_gallocr_free(galloc); } };
typedef std::unique_ptr<ggml_gallocr_t, ggml_gallocr_deleter> ggml_gallocr_ptr;
// ggml-backend
struct ggml_backend_deleter { void operator()(ggml_backend_t backend) { ggml_backend_free(backend); } };
struct ggml_backend_buffer_deleter { void operator()(ggml_backend_buffer_t buffer) { ggml_backend_buffer_free(buffer); } };
struct ggml_backend_event_deleter { void operator()(ggml_backend_event_t event) { ggml_backend_event_free(event); } };
struct ggml_backend_sched_deleter { void operator()(ggml_backend_sched_t sched) { ggml_backend_sched_free(sched); } };
typedef std::unique_ptr<ggml_backend, ggml_backend_deleter> ggml_backend_ptr;
typedef std::unique_ptr<ggml_backend_buffer, ggml_backend_buffer_deleter> ggml_backend_buffer_ptr;
typedef std::unique_ptr<ggml_backend_event, ggml_backend_event_deleter> ggml_backend_event_ptr;
typedef std::unique_ptr<ggml_backend_sched, ggml_backend_sched_deleter> ggml_backend_sched_ptr;
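As a sketch of the intended usage — assuming the usual ggml.h core API (ggml_init, ggml_new_tensor_1d), which is not part of this diff — a context wrapped in ggml_context_ptr is freed automatically on scope exit:

#include "ggml-cpp.h"   // include path assumed

void example() {
    ggml_init_params params = {
        /*mem_size   =*/ 16 * 1024 * 1024,
        /*mem_buffer =*/ nullptr,
        /*no_alloc   =*/ false,
    };
    ggml_context_ptr ctx { ggml_init(params) };   // owned; ggml_free() runs on scope exit
    ggml_tensor * t = ggml_new_tensor_1d(ctx.get(), GGML_TYPE_F32, 8);
    (void) t;   // tensor memory lives inside the context
}   // ctx destroyed here, no manual ggml_free() needed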

View File

@@ -0,0 +1,135 @@
#pragma once
#include "ggml.h"
#include "ggml-backend.h"
#ifdef __cplusplus
extern "C" {
#endif
// the compute plan that needs to be prepared for ggml_graph_compute()
// since https://github.com/ggerganov/ggml/issues/287
struct ggml_cplan {
size_t work_size; // size of work buffer, calculated by `ggml_graph_plan()`
uint8_t * work_data; // work buffer, to be allocated by caller before calling to `ggml_graph_compute()`
int n_threads;
struct ggml_threadpool * threadpool;
// abort ggml_graph_compute when true
ggml_abort_callback abort_callback;
void * abort_callback_data;
};
// numa strategies
enum ggml_numa_strategy {
GGML_NUMA_STRATEGY_DISABLED = 0,
GGML_NUMA_STRATEGY_DISTRIBUTE = 1,
GGML_NUMA_STRATEGY_ISOLATE = 2,
GGML_NUMA_STRATEGY_NUMACTL = 3,
GGML_NUMA_STRATEGY_MIRROR = 4,
GGML_NUMA_STRATEGY_COUNT
};
GGML_BACKEND_API void ggml_numa_init(enum ggml_numa_strategy numa); // call once for better performance on NUMA systems
GGML_BACKEND_API bool ggml_is_numa(void); // true if init detected that system has >1 NUMA node
GGML_BACKEND_API struct ggml_tensor * ggml_new_i32(struct ggml_context * ctx, int32_t value);
GGML_BACKEND_API struct ggml_tensor * ggml_new_f32(struct ggml_context * ctx, float value);
GGML_BACKEND_API struct ggml_tensor * ggml_set_i32 (struct ggml_tensor * tensor, int32_t value);
GGML_BACKEND_API struct ggml_tensor * ggml_set_f32 (struct ggml_tensor * tensor, float value);
GGML_BACKEND_API int32_t ggml_get_i32_1d(const struct ggml_tensor * tensor, int i);
GGML_BACKEND_API void ggml_set_i32_1d(const struct ggml_tensor * tensor, int i, int32_t value);
GGML_BACKEND_API int32_t ggml_get_i32_nd(const struct ggml_tensor * tensor, int i0, int i1, int i2, int i3);
GGML_BACKEND_API void ggml_set_i32_nd(const struct ggml_tensor * tensor, int i0, int i1, int i2, int i3, int32_t value);
GGML_BACKEND_API float ggml_get_f32_1d(const struct ggml_tensor * tensor, int i);
GGML_BACKEND_API void ggml_set_f32_1d(const struct ggml_tensor * tensor, int i, float value);
GGML_BACKEND_API float ggml_get_f32_nd(const struct ggml_tensor * tensor, int i0, int i1, int i2, int i3);
GGML_BACKEND_API void ggml_set_f32_nd(const struct ggml_tensor * tensor, int i0, int i1, int i2, int i3, float value);
GGML_BACKEND_API struct ggml_threadpool * ggml_threadpool_new (struct ggml_threadpool_params * params);
GGML_BACKEND_API void ggml_threadpool_free (struct ggml_threadpool * threadpool);
GGML_BACKEND_API int ggml_threadpool_get_n_threads (struct ggml_threadpool * threadpool);
GGML_BACKEND_API void ggml_threadpool_pause (struct ggml_threadpool * threadpool);
GGML_BACKEND_API void ggml_threadpool_resume (struct ggml_threadpool * threadpool);
// ggml_graph_plan() has to be called before ggml_graph_compute()
// when plan.work_size > 0, caller must allocate memory for plan.work_data
GGML_BACKEND_API struct ggml_cplan ggml_graph_plan(
const struct ggml_cgraph * cgraph,
int n_threads, /* = GGML_DEFAULT_N_THREADS */
struct ggml_threadpool * threadpool /* = NULL */ );
GGML_BACKEND_API enum ggml_status ggml_graph_compute(struct ggml_cgraph * cgraph, struct ggml_cplan * cplan);
// same as ggml_graph_compute() but the work data is allocated as a part of the context
// note: the drawback of this API is that you must have ensured that the context has enough memory for the work data
GGML_BACKEND_API enum ggml_status ggml_graph_compute_with_ctx(struct ggml_context * ctx, struct ggml_cgraph * cgraph, int n_threads);
//
// system info
//
// x86
GGML_BACKEND_API int ggml_cpu_has_sse3 (void);
GGML_BACKEND_API int ggml_cpu_has_ssse3 (void);
GGML_BACKEND_API int ggml_cpu_has_avx (void);
GGML_BACKEND_API int ggml_cpu_has_avx_vnni (void);
GGML_BACKEND_API int ggml_cpu_has_avx2 (void);
GGML_BACKEND_API int ggml_cpu_has_f16c (void);
GGML_BACKEND_API int ggml_cpu_has_fma (void);
GGML_BACKEND_API int ggml_cpu_has_avx512 (void);
GGML_BACKEND_API int ggml_cpu_has_avx512_vbmi(void);
GGML_BACKEND_API int ggml_cpu_has_avx512_vnni(void);
GGML_BACKEND_API int ggml_cpu_has_avx512_bf16(void);
GGML_BACKEND_API int ggml_cpu_has_amx_int8 (void);
// ARM
GGML_BACKEND_API int ggml_cpu_has_neon (void);
GGML_BACKEND_API int ggml_cpu_has_arm_fma (void);
GGML_BACKEND_API int ggml_cpu_has_fp16_va (void);
GGML_BACKEND_API int ggml_cpu_has_dotprod (void);
GGML_BACKEND_API int ggml_cpu_has_matmul_int8(void);
GGML_BACKEND_API int ggml_cpu_has_sve (void);
GGML_BACKEND_API int ggml_cpu_get_sve_cnt (void); // sve vector length in bytes
// other
GGML_BACKEND_API int ggml_cpu_has_riscv_v (void);
GGML_BACKEND_API int ggml_cpu_has_vsx (void);
GGML_BACKEND_API int ggml_cpu_has_wasm_simd (void);
GGML_BACKEND_API int ggml_cpu_has_llamafile (void);
// Internal types and functions exposed for tests and benchmarks
typedef void (*ggml_vec_dot_t) (int n, float * GGML_RESTRICT s, size_t bs, const void * GGML_RESTRICT x, size_t bx,
const void * GGML_RESTRICT y, size_t by, int nrc);
struct ggml_type_traits_cpu {
ggml_from_float_t from_float;
ggml_vec_dot_t vec_dot;
enum ggml_type vec_dot_type;
int64_t nrows; // number of rows to process simultaneously
};
GGML_BACKEND_API const struct ggml_type_traits_cpu * ggml_get_type_traits_cpu(enum ggml_type type);
GGML_BACKEND_API void ggml_cpu_init(void);
//
// CPU backend
//
GGML_BACKEND_API ggml_backend_t ggml_backend_cpu_init(void);
GGML_BACKEND_API bool ggml_backend_is_cpu (ggml_backend_t backend);
GGML_BACKEND_API void ggml_backend_cpu_set_n_threads (ggml_backend_t backend_cpu, int n_threads);
GGML_BACKEND_API void ggml_backend_cpu_set_threadpool (ggml_backend_t backend_cpu, ggml_threadpool_t threadpool);
GGML_BACKEND_API void ggml_backend_cpu_set_abort_callback(ggml_backend_t backend_cpu, ggml_abort_callback abort_callback, void * abort_callback_data);
GGML_BACKEND_API ggml_backend_reg_t ggml_backend_cpu_reg(void);
#ifdef __cplusplus
}
#endif
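Putting the plan/compute pair together — a minimal sketch, assuming the core graph-building API from ggml.h (ggml_init, ggml_add, ggml_new_graph, ggml_build_forward_expand), which is outside this diff:

#include "ggml.h"
#include "ggml-cpu.h"
#include <cstdint>
#include <cstdio>
#include <vector>

int main(void) {
    ggml_cpu_init();
    struct ggml_init_params ip = {
        /*mem_size   =*/ 16 * 1024 * 1024,
        /*mem_buffer =*/ NULL,
        /*no_alloc   =*/ false,
    };
    struct ggml_context * ctx = ggml_init(ip);
    struct ggml_tensor * a = ggml_new_tensor_1d(ctx, GGML_TYPE_F32, 4);
    struct ggml_tensor * b = ggml_new_tensor_1d(ctx, GGML_TYPE_F32, 4);
    struct ggml_tensor * c = ggml_add(ctx, a, b);
    struct ggml_cgraph * gf = ggml_new_graph(ctx);
    ggml_build_forward_expand(gf, c);
    ggml_set_f32(a, 1.0f);   // fill helpers declared above
    ggml_set_f32(b, 2.0f);
    // plan first; when work_size > 0 the caller provides the work buffer
    struct ggml_cplan plan = ggml_graph_plan(gf, /*n_threads =*/ 4, /*threadpool =*/ NULL);
    std::vector<uint8_t> work(plan.work_size);
    plan.work_data = work.data();
    ggml_graph_compute(gf, &plan);
    printf("c[0] = %f\n", ggml_get_f32_1d(c, 0));   // expected: 3.0
    ggml_free(ctx);
    return 0;
}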

View File

@@ -0,0 +1,47 @@
#pragma once
#include "ggml.h"
#include "ggml-backend.h"
#ifdef __cplusplus
extern "C" {
#endif
#ifdef GGML_USE_HIP
#define GGML_CUDA_NAME "ROCm"
#define GGML_CUBLAS_NAME "hipBLAS"
#elif defined(GGML_USE_MUSA)
#define GGML_CUDA_NAME "MUSA"
#define GGML_CUBLAS_NAME "muBLAS"
#else
#define GGML_CUDA_NAME "CUDA"
#define GGML_CUBLAS_NAME "cuBLAS"
#endif
#define GGML_CUDA_MAX_DEVICES 16
// backend API
GGML_BACKEND_API ggml_backend_t ggml_backend_cuda_init(int device);
GGML_BACKEND_API bool ggml_backend_is_cuda(ggml_backend_t backend);
// device buffer
GGML_BACKEND_API ggml_backend_buffer_type_t ggml_backend_cuda_buffer_type(int device);
// split tensor buffer that splits matrices by rows across multiple devices
GGML_BACKEND_API ggml_backend_buffer_type_t ggml_backend_cuda_split_buffer_type(int main_device, const float * tensor_split);
// pinned host buffer for use with the CPU backend for faster copies between CPU and GPU
GGML_BACKEND_API ggml_backend_buffer_type_t ggml_backend_cuda_host_buffer_type(void);
GGML_BACKEND_API int ggml_backend_cuda_get_device_count(void);
GGML_BACKEND_API void ggml_backend_cuda_get_device_description(int device, char * description, size_t description_size);
GGML_BACKEND_API void ggml_backend_cuda_get_device_memory(int device, size_t * free, size_t * total);
GGML_BACKEND_API bool ggml_backend_cuda_register_host_buffer(void * buffer, size_t size);
GGML_BACKEND_API void ggml_backend_cuda_unregister_host_buffer(void * buffer);
GGML_BACKEND_API ggml_backend_reg_t ggml_backend_cuda_reg(void);
#ifdef __cplusplus
}
#endif
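A small sketch of backend init plus host-buffer registration — assuming a CUDA (or ROCm/MUSA) build and that ggml_backend_free() from ggml-backend.h is available; registration failure is treated as non-fatal, which matches how the bool return reads here:

#include "ggml-cuda.h"
#include <cstdio>
#include <cstdlib>

int main(void) {
    if (ggml_backend_cuda_get_device_count() == 0) {
        fprintf(stderr, "no %s devices found\n", GGML_CUDA_NAME);
        return 1;
    }
    ggml_backend_t backend = ggml_backend_cuda_init(/*device =*/ 0);
    // pin an existing host allocation so host<->device copies of it are faster
    const size_t size = 1 << 20;
    void * host = malloc(size);
    bool pinned = ggml_backend_cuda_register_host_buffer(host, size);
    // ... build graphs and compute on `backend` ...
    if (pinned) ggml_backend_cuda_unregister_host_buffer(host);
    free(host);
    ggml_backend_free(backend);   // assumed available from ggml-backend.h
    return 0;
}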

View File

@@ -0,0 +1,50 @@
#pragma once
#include "ggml.h"
#include "ggml-backend.h"
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#ifdef __cplusplus
extern "C" {
#endif
#define GGML_KOMPUTE_MAX_DEVICES 16
struct ggml_vk_device {
int index;
int type; // same as VkPhysicalDeviceType
size_t heapSize;
const char * name;
const char * vendor;
int subgroupSize;
uint64_t bufferAlignment;
uint64_t maxAlloc;
};
struct ggml_vk_device * ggml_vk_available_devices(size_t memoryRequired, size_t * count);
bool ggml_vk_get_device(struct ggml_vk_device * device, size_t memoryRequired, const char * name);
bool ggml_vk_has_vulkan(void);
bool ggml_vk_has_device(void);
struct ggml_vk_device ggml_vk_current_device(void);
//
// backend API
//
// forward declaration
typedef struct ggml_backend * ggml_backend_t;
GGML_BACKEND_API ggml_backend_t ggml_backend_kompute_init(int device);
GGML_BACKEND_API bool ggml_backend_is_kompute(ggml_backend_t backend);
GGML_BACKEND_API ggml_backend_buffer_type_t ggml_backend_kompute_buffer_type(int device);
GGML_BACKEND_API ggml_backend_reg_t ggml_backend_kompute_reg(void);
#ifdef __cplusplus
}
#endif
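A device-enumeration sketch; whether the caller owns (and must free()) the array returned by ggml_vk_available_devices() is an assumption here:

#include "ggml-kompute.h"
#include <cstdio>
#include <cstdlib>

int main(void) {
    if (!ggml_vk_has_vulkan()) {
        fprintf(stderr, "Vulkan is not available\n");
        return 1;
    }
    size_t count = 0;
    struct ggml_vk_device * devs = ggml_vk_available_devices(/*memoryRequired =*/ 0, &count);
    int first = count > 0 ? devs[0].index : -1;
    for (size_t i = 0; i < count; i++) {
        printf("device %d: %s %s, heap %zu bytes, subgroup size %d\n",
               devs[i].index, devs[i].vendor, devs[i].name, devs[i].heapSize, devs[i].subgroupSize);
    }
    free(devs);   // caller ownership assumed
    if (first >= 0) {
        ggml_backend_t backend = ggml_backend_kompute_init(first);
        // ... compute on the backend ...
        ggml_backend_free(backend);   // assumed available from ggml-backend.h
    }
    return 0;
}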

View File

@@ -0,0 +1,66 @@
// Note: this description is outdated
//
// An interface for computing a ggml_cgraph with Metal
//
// This is a fully functional interface that extends ggml with GPU support for Apple devices.
// A similar interface can be created for other GPU backends (e.g. Vulkan, CUDA, etc.)
//
// How does it work?
//
// As long as your program can create and evaluate a ggml_cgraph on the CPU, you can use this
// interface to evaluate the same graph on the GPU. Instead of using ggml_graph_compute(), you
// use ggml_metal_graph_compute() (or ggml_vulkan_graph_compute(), etc.)
//
// You only need to make sure that all memory buffers that you used during the graph creation
// are mapped to the device memory with the ggml_metal_add_buffer() function. This mapping is
// used during the graph evaluation to determine the arguments of the compute kernels.
//
// Synchronization between device and host memory (for example for input and output tensors)
// is done with the ggml_metal_set_tensor() and ggml_metal_get_tensor() functions.
//
#pragma once
#include "ggml.h"
#include "ggml-backend.h"
#include <stddef.h>
#include <stdbool.h>
struct ggml_tensor;
struct ggml_cgraph;
#ifdef __cplusplus
extern "C" {
#endif
//
// backend API
// user code should use only these functions
//
GGML_BACKEND_API ggml_backend_t ggml_backend_metal_init(void);
GGML_BACKEND_API bool ggml_backend_is_metal(ggml_backend_t backend);
GGML_DEPRECATED(
GGML_BACKEND_API ggml_backend_buffer_t ggml_backend_metal_buffer_from_ptr(void * data, size_t size, size_t max_size),
"obsoleted by the new device interface - https://github.com/ggerganov/llama.cpp/pull/9713");
GGML_BACKEND_API void ggml_backend_metal_set_abort_callback(ggml_backend_t backend, ggml_abort_callback abort_callback, void * user_data);
GGML_BACKEND_API ggml_backend_buffer_type_t ggml_backend_metal_buffer_type(void);
// helper to check if the device supports a specific family
// ideally, the user code should be doing these checks
// ref: https://developer.apple.com/metal/Metal-Feature-Set-Tables.pdf
GGML_BACKEND_API bool ggml_backend_metal_supports_family(ggml_backend_t backend, int family);
// capture all command buffers committed the next time `ggml_backend_graph_compute` is called
GGML_BACKEND_API void ggml_backend_metal_capture_next_compute(ggml_backend_t backend);
GGML_BACKEND_API ggml_backend_reg_t ggml_backend_metal_reg(void);
#ifdef __cplusplus
}
#endif
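A minimal sketch; the integer passed to ggml_backend_metal_supports_family() is assumed to follow Apple's MTLGPUFamily numbering from the feature-set tables linked above (7 == Apple7):

#include "ggml-metal.h"
#include <cstdio>

int main(void) {
    ggml_backend_t backend = ggml_backend_metal_init();
    if (backend == NULL) {
        fprintf(stderr, "Metal backend init failed\n");
        return 1;
    }
    if (ggml_backend_metal_supports_family(backend, 7)) {
        printf("Apple7 GPU family or newer detected\n");   // family numbering assumed
    }
    ggml_backend_free(backend);   // assumed available from ggml-backend.h
    return 0;
}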

View File

@@ -0,0 +1,26 @@
#ifndef GGML_OPENCL_H
#define GGML_OPENCL_H
#include "ggml.h"
#include "ggml-backend.h"
#ifdef __cplusplus
extern "C" {
#endif
//
// backend API
//
GGML_BACKEND_API ggml_backend_t ggml_backend_opencl_init(void);
GGML_BACKEND_API bool ggml_backend_is_opencl(ggml_backend_t backend);
GGML_BACKEND_API ggml_backend_buffer_type_t ggml_backend_opencl_buffer_type(void);
GGML_BACKEND_API ggml_backend_buffer_type_t ggml_backend_opencl_host_buffer_type(void);
GGML_BACKEND_API ggml_backend_reg_t ggml_backend_opencl_reg(void);
#ifdef __cplusplus
}
#endif
#endif // GGML_OPENCL_H
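The same init/check pattern, sketched for OpenCL; ggml_backend_free() from ggml-backend.h is assumed:

#include "ggml-opencl.h"
#include <cstdio>

int main(void) {
    ggml_backend_t backend = ggml_backend_opencl_init();
    if (backend == NULL) {
        fprintf(stderr, "OpenCL backend init failed\n");
        return 1;
    }
    // device tensors would use ggml_backend_opencl_buffer_type(),
    // staging copies ggml_backend_opencl_host_buffer_type()
    printf("backend is OpenCL: %d\n", ggml_backend_is_opencl(backend));
    ggml_backend_free(backend);
    return 0;
}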

View File

@@ -0,0 +1,216 @@
// This file contains functionality for training models using GGML.
// It is not strictly needed compared to vanilla GGML, but it provides a higher-level interface for common needs such as datasets.
// In particular, the functions at the bottom of this file are relatively high-level and suitable for use or adaptation in user code.
//
// Module maintainer: Johannes Gäßler (@JohannesGaessler, johannesg@5d6.de)
#pragma once
#include "ggml.h"
#include "ggml-backend.h"
#include <stdint.h>
#ifdef __cplusplus
extern "C" {
#endif
struct ggml_opt_dataset;
struct ggml_opt_context;
struct ggml_opt_result;
typedef struct ggml_opt_dataset * ggml_opt_dataset_t;
typedef struct ggml_opt_context * ggml_opt_context_t;
typedef struct ggml_opt_result * ggml_opt_result_t;
// ====== Loss ======
// built-in loss types, i.e. the built-in quantities minimized by the optimizer
// custom loss types can be defined via mean or sum, which simply reduce the outputs for all datapoints to a single value
enum ggml_opt_loss_type {
GGML_OPT_LOSS_TYPE_MEAN,
GGML_OPT_LOSS_TYPE_SUM,
GGML_OPT_LOSS_TYPE_CROSS_ENTROPY,
GGML_OPT_LOSS_TYPE_MEAN_SQUARED_ERROR,
};
// ====== Dataset ======
GGML_API ggml_opt_dataset_t ggml_opt_dataset_init(
int64_t ne_datapoint, // number of elements per datapoint
int64_t ne_label, // number of elements per label
int64_t ndata, // total number of datapoints/labels
int64_t ndata_shard); // number of datapoints/labels per shard (unit at which the dataset is shuffled/copied)
GGML_API void ggml_opt_dataset_free(ggml_opt_dataset_t dataset);
// get underlying tensors that store the data
GGML_API struct ggml_tensor * ggml_opt_dataset_data (ggml_opt_dataset_t dataset); // shape = [ne_datapoint, ndata]
GGML_API struct ggml_tensor * ggml_opt_dataset_labels(ggml_opt_dataset_t dataset); // shape = [ne_label, ndata]
// shuffle idata first datapoints from dataset with RNG from opt_ctx, shuffle all datapoints if idata is negative
GGML_API void ggml_opt_dataset_shuffle(ggml_opt_context_t opt_ctx, ggml_opt_dataset_t dataset, int64_t idata);
// get batch at position ibatch from dataset and copy the data to data_batch and labels_batch
GGML_API void ggml_opt_dataset_get_batch(
ggml_opt_dataset_t dataset,
struct ggml_tensor * data_batch, // shape = [ne_datapoint, ndata_batch]
struct ggml_tensor * labels_batch, // shape = [ne_label, ndata_batch]
int64_t ibatch);
// ====== Model / Context ======
enum ggml_opt_build_type {
GGML_OPT_BUILD_TYPE_FORWARD,
GGML_OPT_BUILD_TYPE_GRAD,
GGML_OPT_BUILD_TYPE_OPT,
};
// parameters that control which optimizer is used and how said optimizer tries to find the minimal loss
struct ggml_opt_optimizer_params {
// AdamW optimizer parameters
struct {
float alpha; // learning rate
float beta1;
float beta2;
float eps; // epsilon for numerical stability
float wd; // weight decay for AdamW, use 0.0f to disable
} adamw;
};
// callback to calculate optimizer parameters prior to a backward pass
// userdata can be used to pass arbitrary data
typedef struct ggml_opt_optimizer_params (*ggml_opt_get_optimizer_params)(void * userdata);
// returns the default optimizer params (constant)
// userdata is not used
GGML_API struct ggml_opt_optimizer_params ggml_opt_get_default_optimizer_params(void * userdata);
// parameters for initializing a new optimization context
struct ggml_opt_params {
ggml_backend_sched_t backend_sched; // defines which backends are used to construct the compute graphs
struct ggml_context * ctx_compute; // created in user code, holds non-static tensors
// the forward graph is defined by inputs and outputs
// those tensors and all tensors in between are not intended to be reusable between multiple optimization contexts
struct ggml_tensor * inputs;
struct ggml_tensor * outputs;
enum ggml_opt_loss_type loss_type;
enum ggml_opt_build_type build_type;
int32_t opt_period; // after how many gradient accumulation steps an optimizer step should be done
ggml_opt_get_optimizer_params get_opt_pars; // callback for calculating optimizer parameters
void * get_opt_pars_ud; // userdata for calculating optimizer parameters
};
// get parameters for an optimization context with defaults set where possible
// parameters for which no sensible defaults exist are supplied as arguments to this function
GGML_API struct ggml_opt_params ggml_opt_default_params(
ggml_backend_sched_t backend_sched,
struct ggml_context * ctx_compute,
struct ggml_tensor * inputs,
struct ggml_tensor * outputs,
enum ggml_opt_loss_type loss_type);
GGML_API ggml_opt_context_t ggml_opt_init(struct ggml_opt_params params);
GGML_API void ggml_opt_free(ggml_opt_context_t opt_ctx);
// set gradients to zero, initialize loss, and optionally reset the optimizer
GGML_API void ggml_opt_reset(ggml_opt_context_t opt_ctx, bool optimizer);
// get underlying tensors that store data
GGML_API struct ggml_tensor * ggml_opt_inputs( ggml_opt_context_t opt_ctx); // forward graph input tensor
GGML_API struct ggml_tensor * ggml_opt_outputs( ggml_opt_context_t opt_ctx); // forward graph output tensor
GGML_API struct ggml_tensor * ggml_opt_labels( ggml_opt_context_t opt_ctx); // labels to compare outputs against
GGML_API struct ggml_tensor * ggml_opt_loss( ggml_opt_context_t opt_ctx); // scalar tensor that contains the loss
GGML_API struct ggml_tensor * ggml_opt_pred( ggml_opt_context_t opt_ctx); // predictions made by outputs
GGML_API struct ggml_tensor * ggml_opt_ncorrect(ggml_opt_context_t opt_ctx); // number of matching predictions between outputs and labels
GGML_API struct ggml_tensor * ggml_opt_grad_acc(ggml_opt_context_t opt_ctx, struct ggml_tensor * node);
// ====== Optimization Result ======
GGML_API ggml_opt_result_t ggml_opt_result_init(void);
GGML_API void ggml_opt_result_free(ggml_opt_result_t result);
GGML_API void ggml_opt_result_reset(ggml_opt_result_t result);
// get data from result, uncertainties are optional and can be ignored by passing NULL
GGML_API void ggml_opt_result_ndata( ggml_opt_result_t result, int64_t * ndata); // writes 1 value, number of datapoints
GGML_API void ggml_opt_result_loss( ggml_opt_result_t result, double * loss, double * unc); // writes 1 value
GGML_API void ggml_opt_result_pred( ggml_opt_result_t result, int32_t * pred); // writes ndata values
GGML_API void ggml_opt_result_accuracy(ggml_opt_result_t result, double * accuracy, double * unc); // writes 1 value
// ====== Computation ======
// do forward pass, increment result if not NULL
GGML_API void ggml_opt_forward(ggml_opt_context_t opt_ctx, ggml_opt_result_t result);
// do forward pass, increment result if not NULL, do backward pass
GGML_API void ggml_opt_forward_backward(ggml_opt_context_t opt_ctx, ggml_opt_result_t result);
// ############################################################################
// ## The high-level functions start here. They do not depend on any private ##
// ## functions or structs and can be copied to and adapted for user code. ##
// ############################################################################
// ====== Intended Usage ======
//
// 1. Select the appropriate loss for your problem.
// 2. Create a dataset and set the data for the "data" tensor. Also set the "labels" tensor if your loss needs them.
// Setting the shard size to 1 is fine; it is the granularity at which data is shuffled/loaded (bigger values are faster).
// 3. Create a GGML graph for your model with no_alloc == true. Use two separate contexts for the tensors.
// The first context should contain the model parameters and inputs and be allocated statically in user code.
// The second context should contain all other tensors and will be (re)allocated automatically.
// Due to this automated allocation the data of the second context is not defined when accessed in user code.
// Note that the second dimension of the inputs/outputs is interpreted as the number of datapoints in those tensors.
// 4. Call ggml_opt_fit. If you need more control you can use ggml_opt_epoch instead.
// signature for a callback while evaluating opt_ctx on dataset, called after an evaluation
typedef void (*ggml_opt_epoch_callback)(
bool train, // true after training evaluation, false after validation evaluation
ggml_opt_context_t opt_ctx,
ggml_opt_dataset_t dataset,
ggml_opt_result_t result, // result associated with the dataset subsection
int64_t ibatch, // number of batches that have been evaluated so far
int64_t ibatch_max, // total number of batches in this dataset subsection
int64_t t_start_us); // time at which the evaluation on the dataset subsection was started
// do training on the front of the dataset, do evaluation only on the back of the dataset
GGML_API void ggml_opt_epoch(
ggml_opt_context_t opt_ctx,
ggml_opt_dataset_t dataset,
ggml_opt_result_t result_train, // result to increment during training, ignored if NULL
ggml_opt_result_t result_eval, // result to increment during evaluation, ignored if NULL
int64_t idata_split, // data index at which to split training and evaluation
ggml_opt_epoch_callback callback_train,
ggml_opt_epoch_callback callback_eval);
// callback that prints a progress bar on stderr
GGML_API void ggml_opt_epoch_callback_progress_bar(
bool train,
ggml_opt_context_t opt_ctx,
ggml_opt_dataset_t dataset,
ggml_opt_result_t result,
int64_t ibatch,
int64_t ibatch_max,
int64_t t_start_us);
// fit model defined by inputs and outputs to dataset
GGML_API void ggml_opt_fit(
ggml_backend_sched_t backend_sched, // backend scheduler for constructing the compute graphs
struct ggml_context * ctx_compute, // context with temporarily allocated tensors to calculate the outputs
struct ggml_tensor * inputs, // input tensor with shape [ne_datapoint, ndata_batch]
struct ggml_tensor * outputs, // output tensor, must have shape [ne_label, ndata_batch] if labels are used
ggml_opt_dataset_t dataset, // dataset with data and optionally also labels
enum ggml_opt_loss_type loss_type, // loss to minimize
ggml_opt_get_optimizer_params get_opt_pars, // callback to get optimizer params, userdata is pointer to epoch (of type int64_t)
int64_t nepoch, // how many times the dataset should be iterated over
int64_t nbatch_logical, // datapoints per optimizer step, must be a multiple of ndata_batch in inputs/outputs
float val_split, // fraction of the dataset to use for validation, must be in [0.0f, 1.0f)
bool silent); // whether or not info prints to stderr should be suppressed
#ifdef __cplusplus
}
#endif
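Condensing steps 1-4 into code — a sketch, not the module's own example: backend_sched, ctx_compute, inputs and outputs are assumed to have been built by the caller as described above, and the MNIST-sized dimensions are purely illustrative:

#include "ggml-opt.h"   // include path assumed

void train_sketch(ggml_backend_sched_t  backend_sched,
                  struct ggml_context * ctx_compute,
                  struct ggml_tensor  * inputs,    // shape [ne_datapoint, ndata_batch]
                  struct ggml_tensor  * outputs) { // shape [ne_label,     ndata_batch]
    ggml_opt_dataset_t dataset = ggml_opt_dataset_init(
        /*ne_datapoint =*/ 784, /*ne_label =*/ 10,
        /*ndata =*/ 60000, /*ndata_shard =*/ 1);
    // ... copy training data into ggml_opt_dataset_data(dataset)
    //     and ggml_opt_dataset_labels(dataset) ...
    ggml_opt_fit(backend_sched, ctx_compute, inputs, outputs, dataset,
                 GGML_OPT_LOSS_TYPE_CROSS_ENTROPY,
                 ggml_opt_get_default_optimizer_params,   // default AdamW settings
                 /*nepoch =*/ 2, /*nbatch_logical =*/ 500,
                 /*val_split =*/ 0.05f, /*silent =*/ false);
    ggml_opt_dataset_free(dataset);
}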

View File

@@ -0,0 +1,28 @@
#pragma once
#include "ggml.h"
#include "ggml-backend.h"
#ifdef __cplusplus
extern "C" {
#endif
#define GGML_RPC_MAX_SERVERS 16
// backend API
GGML_BACKEND_API ggml_backend_t ggml_backend_rpc_init(const char * endpoint);
GGML_BACKEND_API bool ggml_backend_is_rpc(ggml_backend_t backend);
GGML_BACKEND_API ggml_backend_buffer_type_t ggml_backend_rpc_buffer_type(const char * endpoint);
GGML_BACKEND_API void ggml_backend_rpc_get_device_memory(const char * endpoint, size_t * free, size_t * total);
GGML_BACKEND_API void ggml_backend_rpc_start_server(ggml_backend_t backend, const char * endpoint, size_t free_mem, size_t total_mem);
GGML_BACKEND_API ggml_backend_reg_t ggml_backend_rpc_reg(void);
GGML_BACKEND_API ggml_backend_dev_t ggml_backend_rpc_add_device(const char * endpoint);
#ifdef __cplusplus
}
#endif
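A connection sketch; the endpoint address is a placeholder and ggml_backend_free() from ggml-backend.h is assumed:

#include "ggml-rpc.h"
#include <cstdio>

int main(void) {
    const char * endpoint = "127.0.0.1:50052";   // placeholder server address
    ggml_backend_t backend = ggml_backend_rpc_init(endpoint);
    if (backend == NULL) {
        fprintf(stderr, "failed to connect to %s\n", endpoint);
        return 1;
    }
    size_t free_mem = 0, total_mem = 0;
    ggml_backend_rpc_get_device_memory(endpoint, &free_mem, &total_mem);
    printf("remote device: %zu / %zu bytes free\n", free_mem, total_mem);
    ggml_backend_free(backend);
    return 0;
}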

View File

@@ -0,0 +1,49 @@
//
// MIT license
// Copyright (C) 2024 Intel Corporation
// SPDX-License-Identifier: MIT
//
#pragma once
#include "ggml.h"
#include "ggml-backend.h"
#define GGML_SYCL_NAME "SYCL"
#define GGML_SYCL_MAX_DEVICES 48
#ifdef __cplusplus
extern "C" {
#endif
// backend API
GGML_BACKEND_API ggml_backend_t ggml_backend_sycl_init(int device);
GGML_BACKEND_API bool ggml_backend_is_sycl(ggml_backend_t backend);
// device buffer
GGML_BACKEND_API ggml_backend_buffer_type_t ggml_backend_sycl_buffer_type(int device);
// split tensor buffer that splits matrices by rows across multiple devices
GGML_BACKEND_API ggml_backend_buffer_type_t ggml_backend_sycl_split_buffer_type(const float * tensor_split);
// pinned host buffer for use with the CPU backend for faster copies between CPU and GPU
GGML_BACKEND_API ggml_backend_buffer_type_t ggml_backend_sycl_host_buffer_type(void);
GGML_BACKEND_API void ggml_backend_sycl_print_sycl_devices(void);
GGML_BACKEND_API void ggml_backend_sycl_get_gpu_list(int *id_list, int max_len);
GGML_BACKEND_API void ggml_backend_sycl_get_device_description(int device,
char *description,
size_t description_size);
GGML_BACKEND_API int ggml_backend_sycl_get_device_count(void);
GGML_BACKEND_API void ggml_backend_sycl_get_device_memory(int device, size_t *free, size_t *total);
// SYCL doesn't support registering host memory; kept here for reference
// GGML_BACKEND_API bool ggml_backend_sycl_register_host_buffer(void * buffer, size_t size);
// GGML_BACKEND_API void ggml_backend_sycl_unregister_host_buffer(void * buffer);
GGML_BACKEND_API ggml_backend_reg_t ggml_backend_sycl_reg(void);
#ifdef __cplusplus
}
#endif
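A short device-listing sketch for a SYCL build; where ggml_backend_sycl_print_sycl_devices() writes its output is an assumption (stdout here):

#include "ggml-sycl.h"
#include <cstdio>

int main(void) {
    ggml_backend_sycl_print_sycl_devices();   // one line per device, assumed on stdout
    const int n = ggml_backend_sycl_get_device_count();
    for (int dev = 0; dev < n && dev < GGML_SYCL_MAX_DEVICES; dev++) {
        char desc[128];
        ggml_backend_sycl_get_device_description(dev, desc, sizeof(desc));
        printf("%s device %d: %s\n", GGML_SYCL_NAME, dev, desc);
    }
    return 0;
}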

View File

@@ -0,0 +1,31 @@
#pragma once
#include "ggml.h"
#include "ggml-backend.h"
#ifdef __cplusplus
extern "C" {
#endif
#define GGML_VK_NAME "Vulkan"
#define GGML_VK_MAX_DEVICES 16
GGML_BACKEND_API void ggml_vk_instance_init(void);
// backend API
GGML_BACKEND_API ggml_backend_t ggml_backend_vk_init(size_t dev_num);
GGML_BACKEND_API bool ggml_backend_is_vk(ggml_backend_t backend);
GGML_BACKEND_API int ggml_backend_vk_get_device_count(void);
GGML_BACKEND_API void ggml_backend_vk_get_device_description(int device, char * description, size_t description_size);
GGML_BACKEND_API void ggml_backend_vk_get_device_memory(int device, size_t * free, size_t * total);
GGML_BACKEND_API ggml_backend_buffer_type_t ggml_backend_vk_buffer_type(size_t dev_num);
// pinned host buffer for use with the CPU backend for faster copies between CPU and GPU
GGML_BACKEND_API ggml_backend_buffer_type_t ggml_backend_vk_host_buffer_type(void);
GGML_BACKEND_API ggml_backend_reg_t ggml_backend_vk_reg(void);
#ifdef __cplusplus
}
#endif
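And the equivalent sketch for the Vulkan backend; note that init takes a size_t device number while the query functions take int, which the cast below papers over:

#include "ggml-vulkan.h"
#include <cstdio>

int main(void) {
    const int n = ggml_backend_vk_get_device_count();
    for (int dev = 0; dev < n && dev < GGML_VK_MAX_DEVICES; dev++) {
        char desc[128];
        ggml_backend_vk_get_device_description(dev, desc, sizeof(desc));
        printf("%s device %d: %s\n", GGML_VK_NAME, dev, desc);
    }
    if (n > 0) {
        ggml_backend_t backend = ggml_backend_vk_init((size_t) 0);
        // ... compute on the backend ...
        ggml_backend_free(backend);   // assumed available from ggml-backend.h
    }
    return 0;
}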

Some files were not shown because too many files have changed in this diff.