Show HN: Whispering – Open-source, local-first dictation you can trust
Hey HN! Braden here, creator of Whispering, an open-source speech-to-text app.
I really like dictation. For years, I relied on transcription tools that were almost good, but they were all closed-source. Even a lot of them that claimed to be “local” or “on-device” were still black boxes that left me wondering where my audio really went.
So I built Whispering. It’s open-source, local-first, and most importantly, transparent with your data. Your data is stored locally on your device, and your audio goes directly from your machine to a local provider (Whisper C++, Speaches, etc.) or your chosen cloud provider (Groq, OpenAI, ElevenLabs, etc.). For me, the features were good enough that I left my paid tools behind (I used Superwhisper and Wispr Flow before).
Productivity apps should be open-source and transparent with your data, but they also need to match the UX of paid, closed-software alternatives. I hope Whispering is near that point. I use it for several hours a day, from coding to thinking out loud while carrying pizza boxes back from the office.
Here’s an overview: https://www.youtube.com/watch?v=1jYgBMrfVZs, and here’s how I personally am using it with Claude Code these days: https://www.youtube.com/watch?v=tpix588SeiQ.
There are plenty of transcription apps out there, but I hope Whispering adds some extra competition from the OSS ecosystem (one of my other OSS favorites is Handy https://github.com/cjpais/Handy). Whispering has a few tricks up its sleeve, like a voice-activated mode for hands-free operation (no button holding), and customizable AI transformations with any prompt/model.
Whispering used to be in my personal GH repo, but I recently moved it as part of a larger project called Epicenter (https://github.com/epicenter-so/epicenter), which I should explain a bit...
I’m basically obsessed with local-first open-source software. I think there should be an open-source, local-first version of every app, and I would like them all to work together. The idea of Epicenter is to store your data in a folder of plaintext and SQLite, and build a suite of interoperable, local-first tools on top of this shared memory. Everything is totally transparent, so you can trust it.
Whispering is the first app in this effort. It’s not there yet regarding memory, but it’s getting there. I’ll probably write more about the bigger picture soon, but mainly I just want to make software and let it speak for itself (no pun intended in this case!), so this is my Show HN for now.
I just finished college and was about to move back with my parents and work on this instead of getting a job…and then I somehow got into YC. So my current plan is to cover my living expenses and use the YC funding to support maintainers, our dependencies, and people working on their own open-source local-first projects. More on that soon.
Would love your feedback, ideas, and roasts. If you would like to support the project, star it on GitHub here (https://github.com/epicenter-so/epicenter) and join the Discord here (https://go.epicenter.so/discord). Everything’s MIT licensed, so fork it, break it, ship your own version, copy whatever you want!
If anyone's interested in a janky-but-works-great dictation setup on Linux, here's mine:
On key press, start recording the microphone to /tmp/dictate.mp3. On key release, stop recording, transcribe with whisper.cpp, trim whitespace, and print to stdout (see the sketch below).
I keep these in a dictate.sh script and bind them to press/release on a single key. A programmable keyboard helps here. I use https://git.sr.ht/%7Egeb/dotool to turn the transcription into keystrokes. I've also tried ydotool and wtype, but they seem to swallow keystrokes. This gives a very functional push-to-talk setup.
I'm very impressed with https://github.com/ggml-org/whisper.cpp. Transcription quality with large-v3-turbo-q8_0 is excellent IMO, and a Vulkan build is very fast on my 6600XT. It takes about 1s for an average sentence to appear after I release the hotkey.
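Roughly, the two halves might look like this (a minimal sketch, not the poster's exact script: it assumes PulseAudio, a whisper.cpp CLI build named whisper-cli, and dotool's "type" command, and records to 16 kHz WAV since that is the format whisper.cpp accepts most reliably):

    #!/bin/sh
    # dictate.sh -- rough push-to-talk sketch; paths, device names, and flags
    # are assumptions, adjust for your own build and audio setup.
    case "$1" in
      start)
        # On key press: record the default mic in the background
        ffmpeg -y -f pulse -i default -ar 16000 -ac 1 /tmp/dictate.wav >/dev/null 2>&1 &
        echo $! > /tmp/dictate.pid
        ;;
      stop)
        # On key release: stop ffmpeg cleanly, transcribe, trim whitespace
        pid=$(cat /tmp/dictate.pid)
        kill -INT "$pid"
        while kill -0 "$pid" 2>/dev/null; do sleep 0.1; done
        text=$(whisper-cli -m models/ggml-large-v3-turbo-q8_0.bin -f /tmp/dictate.wav -nt 2>/dev/null \
          | tr -s '[:space:]' ' ' | sed 's/^ //; s/ $//')
        # Turn the transcription into keystrokes via dotool
        printf 'type %s\n' "$text" | dotool
        ;;
    esac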
I'm keeping an eye on the NVidia models, hopefully they work on ggml soon too. E.g. https://github.com/ggml-org/whisper.cpp/issues/3118.
I'd love to find a tool which could recognise a few different speakers so that I could automatically dictate 1:1 sessions. In addition, I definitely would want to feed that to an LLM to clean up the notes (to remove all the "umm"s and similar nonsense) and to do context-aware spell checking.
The LLM part should be very much doable, but I'm not sure if speaker recognition exists in a sufficiently working state?
Speaker "diarization" is what you're looking for, and currently the most popular solution is pyannote.audio.
Eventually I'm trying to get around to using it in conjunction with a fine-tuned whisper model to make transcriptions. Just haven't found the time yet.
Does this support using the Parakeet model locally? I'm a MacWhisper user and I find that Parakeet is way better and faster than Whisper for on-device transcription. I've been using push-to-transcribe with MacWhisper through Parakeet for a while now and it's quite magical.
Not yet, but I want it too! Parakeet looks incredible (saw that leaderboard result). My current roadmap is: finish stabilizing whisper.cpp integration, then add Parakeet support. If anyone has bandwidth to PR the connector, I’d be thrilled to merge it.
Unfortunately, because it's Nvidia, Parakeet doesn't work with whisper.cpp as far as I'm aware. You need ONNX.
Some lovely folks have mentioned some other open-source projects that implement Parakeet in this thread. I would recommend checking those out! I'll also work on my own implementation in the meantime :D
Parakeet is amazing - 3000x real-time on an A100 and 5x real-time even on a laptop CPU, while being more accurate than whisper-large-v3 (https://huggingface.co/spaces/hf-audio/open_asr_leaderboard). NeMo is a little awkward though; I'm amazed it runs locally on Mac (for MacWhisper).
Yeah, Parakeet runs great locally on my M1 laptop (through MacWhisper). Transcription speed on recordings feels at least 10x faster than Whisper, and the accuracy is better as well. Push-to-talk for dictation is pretty seamless since the model is so fast. I've observed no downside to Parakeet if you're speaking English.
A bit of a tangential statement about Parakeet and other Nvidia NeMo models: I never found actual architecture implementations as PyTorch/TF code; it seems like all such models are instantiated from a binary blob, making it difficult to experiment! Maybe I missed something; does anyone here have more experience with .nemo models to shed some more light on this?
+1 for MacWhisper. Very full featured, nice that it's a one time purchase, and the developer is constantly improving it.
seems like "not yet" is the answer from other comments
> "I think there should be an open-source, local-first version of every app, and I would like them all to work together. The idea of Epicenter is to store your data in a folder of plaintext and SQLite, and build a suite of interoperable, local-first tools on top of this shared memory. Everything is totally transparent, so you can trust it."
Yes! This. I have almost no experience w/ tts, but if/when I explore the space, I'll start w/ Whispering -- because of Epicenter. Starred the repo, and will give some thought to other apps that might make sense to contribute there. Bravo, thanks for publishing these and sharing, and congrats on getting into YC! :)
That's a good idea... Just git repo your whole knowledge base and build on top of it.
Thanks so much for the support! Really appreciate the feedback, and it’s great to hear the vision resonates. No worries on the STT/TTS experience; it’s just awesome to connect with someone who shares the values of open-source and owning our data :) I’m hoping my time in YC can be productive and, along the way, create more support for other OSS developers too. Keep in touch!
I think we're talking about STT (speech-to-text) here, not TTS.
IF you do want to then ALSO have a cloud version, you can just use the AgentDB API and upload them there and just change where the SQL runs.
For those checking out the repo this morning, I'm in the middle of a release that adds Whisper C++ support!
https://github.com/epicenter-so/epicenter/pull/655
After this pushes, we'll have far more extensive local transcription support. Just fixing a few more small things :)
You mentioned that you got into YC .. what is the road to profitability for your project(s) if everything is open source and local?
As many other people commented on similar projects, one of the issues of trying to use voice dictation instead of typing is the lack of real-time visual indication. When we write, we immediately see the text, which helps to keep the thought (especially in longer sentences/paragraphs). But with dictation, it either comes with a delay or only when dictation is over, and it doesn't feel as comfortable as writing. Tangentially, many people "think as they write" and dictation doesn't offer that experience.
I wonder if it changes with time for people who use dictation often.
I think there is still some use to dictation. For me it's a great way to get screenplays on paper. I can type fast, but I can think and speak faster. I just record a stream of thought for the story/video I want; even if I jump all over the place it doesn't matter, it's just a nice stream of consciousness. Afterwards I spend time editing, putting things in the right order, and cleaning up. I find this much faster than just writing.
I use whisperfile which is a multiplatform implementation of whisper that works really well.
https://huggingface.co/Mozilla/whisperfile
> I’m basically obsessed with local-first open-source software.
We all should be.
Agreed!
This is a cool project and I want to give it a go in my spare time.
However what gives me pause is the sheer number of possibly compromised microphones all around me (phones, tablets, laptops, tv etc) at all times, which makes spying much easier than if I use a keyboard.
I’ve been using whispering for about a year now, it has really changed how I interact with the computer. I make sure to buy mice or keyboards that have programmable hotkeys so that I can use the shortcuts for whispering. I can’t go back to regular typing at this point, just feels super inefficient. Thanks again for all your hard work!
Thank you so much for your support! It really means a lot :) Happy to hear that it's helped you, and keep in touch if you ever have any issues!
I've been interested in dictation for a while, but I don't want to be sending any audio to a remote API, it all has to be local. Having tried just a couple of models (namely the one used by the FUTO Keyboard), I'm kinda feeling like we're not quite there yet.
My biggest gripe perhaps is not being able to get decent content out of a thought stream; the models can't properly filter out the pauses, "uuuuhmms", and much less so handle on the fly corrections to what I've been saying, like going back and repeating something with a slight variation and whatnot.
This is a challenging problem I'd love to see being tackled well by open models I can run on my computer or phone. Are there new models more capable of this? Is it not just a model thing, and I missing a good app too?
In the meanwhile, I'll keep typing, even though it can be quite a bit less convenient to do; especially true for note taking on the go.
Have you tried Whisper itself? It's open-weights.
One of the features of the project posted above is "transformations" that you can run on transcripts. They feed the text into an LLM to clean it up. If you're willing to pay for the tokens, I think you could not only remove filler-words, but could probably even get the semantically-aware editing (corrections) you're talking about.
^Yep, unfortunately, the best option right now seems to be to pipe the output into another LLM to do some cleanup, which we try to help you do in Whispering. Recent transcription models don't have very good built-in inference/cleanup, with Whisper having only the very weak "prompt" parameter. It seems like this is probably by design, to keep these models lean/specialized/performant at their task.
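For instance, a cleanup transformation boils down to one chat-completion call over the transcript; here's a rough sketch against the OpenAI API (any OpenAI-compatible endpoint works, and the model name is just an example):

    curl https://api.openai.com/v1/chat/completions \
      -H "Authorization: Bearer $OPENAI_API_KEY" \
      -H "Content-Type: application/json" \
      -d '{
        "model": "gpt-4o-mini",
        "messages": [
          {"role": "system", "content": "Clean up this dictated text: remove filler words, apply any self-corrections the speaker makes, and fix punctuation. Return only the cleaned text."},
          {"role": "user", "content": "so um the meeting is at three no wait four pm on thursday"}
        ]
      }'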
By try to help, do you mean that it currently does so or that functionality is otw
This is wonderful, thank you for sharing!
Do you have any sense of whether this type of model would work with children's speech? There are plenty of educational applications that would value a privacy-first locally deployed model. But, my understanding is that Whisper performs pretty poorly with younger speakers.
Thank you! And you're right, I think Whisper struggles with younger voices. Haven't tested Parakeet or other models for this yet, but that's a great use case (especially since privacy matters in education). I would also shout out Hyprnote! (https://hyprnote.com/) They might be expanding their model options, as they have shown with OWhisper (https://docs.hyprnote.com/owhisper/what-is-this).
Thanks, looks like great work! Hope you continue to cater for those of us with Intel Macs who need the off-device capability…
Thank you for sharing such a great product. Last week, after getting fed up with a lot of slow commercial products, I wrote my own similar app that works locally in the loop and can record everything I say at the push of a button, transcribe it, and put it into the app itself. And for me it was really important to create a second mode so I could speak everything I want in my mother tongue and have it translated into English automatically. Of course, it all works with formatting, with the placement of commas, quotes, etc. It is hard to believe that this hasn't been done in a native dictation app on macOS yet.
Thank you so much for the support, really means a lot! Happy to hear that it has helped you with translation, and agreed, it's kinda crazy native dictation hasn't caught on yet. In the meantime, we have OSS to fill in the gaps.
Thanks for sharing! Transcription suddenly became useful to me when LLMs started being able to generate somewhat useful code from natural language. (I don't think anybody wants to dictate code.) Now my workflow is similar to yours.
I have mixed feelings about OS-integration. I'm currently working on a project to use a foot-pedal for push-to-transcribe - it speaks USB-HID so it works anywhere without software, and it doesn't clobber my clipboard. That said, an app like yours really opens up some cool possibilities! For example, in a keyboard-emulation strategy like mine, I can't easily adjust the text prompt/hint for the transcription model.
With an application running on the host though, you can inject relevant context/prompts/hints (either for transcription, or during your post-transformations). These might be provided intentionally by the user, or, if they really trust your app, this context could even be scraped from what's currently on-screen (or which files are currently being worked on).
Another thing I've thought about doing is using a separate keybind (or button/pedal) that appends the transcription directly to a running notes file. I often want to make a note to reference later, but which I don't need immediately. It's a little extra friction to have to actually have my notes file open in a window somewhere.
Will keep an eye on epicenter, appreciate the ethos.
Thank you for the support, and agreed on OS-level integration. At least for me, I have trouble trusting any app unless they are open source and have a transparent codebase for audit :)
If you want a rabbit hole to go down, look into cursorless, talonvoice, and that whole sphere.
They're actually dictating code, but they do it in a rather smart way.
Do additional scripts/other tools exist that can do the following:
Record the voice permanently (without a hotkey), e.g. "run" compiles and runs a script, "code" switches back to the code editor.
Under Windows I use AutoHotkey 2, but I would like to replace it with simple voice commands.
Does this functionality exist on iOS ? I'm looking for an iOS app that wraps Parakeet or whisper in a custom iOS keyboard.
That way I can switch to the dictation keyboard, press dictate, and have the transcription inserted in any application (first or third party).
MacWhisper is fantastic for macOS system dictation but the same abilities don't exist on iOS yet. The native iOS dictation is quite good but not as accurate with bespoke technical words / acronyms as Whisper cpp.
superwhisper has that functionality.
Right but not running locally on device. No privacy
I really want to run it locally on a phone, but as a developer it's scary to think about making a native mobile app and having to work with the iOS toolchain. I don't have bandwidth at the moment, but if anyone knows of any OSS mobile alternatives, feel free to drop them!
Not a fan of high resource use or reliance on proprietary vendors/services. DeepSpeech/Vosk were pre-AI and still worked well on local devices, but they were a huge pain to set up and use. Anyone have better versions of those? Looks like one successor was Coqui STT, which then evolved into Coqui TTS which seems still maintained. Kaldi seems older but also still maintained.
edit: nvm, this overview explains the different options: https://www.gladia.io/blog/best-open-source-speech-to-text-m... and https://www.gladia.io/blog/thinking-of-using-open-source-whi...
Sorry for the delayed response, thank you for sharing these articles! I agree. I hope that we get a lot better open-source STT options in the future.
Wait, I'm confused. The text here says all data remains on device and emphasises how much you can trust that, that you're obsessed with local-first software, etc. Clicking on the demo video, step one is... configuring access tokens for external services? Are the services shown at 0:21 (Groq, OpenAI, Anthropic, Google, ElevenLabs) doing the actual transcription, listening to everything I say, and is it only the resulting text they give us that is subject to "it all stays on your device"? Because that's not at all what I expected after reading this description.
Great catch Aachen, I should have clarified this better. The app supports both external APIs (Groq, OpenAI, etc.), and more recently local transcription (via whisper.cpp, OWhisper, Speaches, etc.), which never leaves your device.
Like Leftium said, the local-first Whisper C++ implementation just posted a few hours ago.
The local transcription feature via whisper.cpp was just released 2 hours ago: https://github.com/epicenter-so/epicenter/releases/tag/v7.3....
We've edited the top text to make this clearer now. Thanks for pointing this out!
> All your data is stored locally on your device, and your audio goes directly from your machine to your chosen cloud provider (Groq, OpenAI, ElevenLabs, etc.) or local provider (Speaches, owhisper, etc.)
Their point is they aren’t a middleman with this, and you can use your preferred supplier or run something locally.
The issue is
> All your data is stored locally on your device,
is fundamentally incompatible with half of the following sentence.
I'd write it as
> All your data is stored locally on your device, unless you explicitly decide to use a cloud provider for dictation.
Great correction, wish I could edit the post! Updated the README to reflect this.
I’m beginning to like the idea in this space — local first with a backup with your own tool. Recently, https://hyprnote.com was popular here on Hacker News and it is pretty good. They also do the same, works local-first but you can use your preferred tool too.
Totally agreed, huge fan of Hyprnote as well. We work on two slightly different problems, but a lot of our tech has overlap, and our missions especially overlap :)
Cool! I just started becoming interested in local transcription myself.
If you add Deepgram listen API compatibility, you can do live transcription via either Deepgram (duh) or OWhisper: https://news.ycombinator.com/item?id=44901853
(I haven’t gotten the Deepgram JS SDK working with it yet, currently awaiting a response by the maintainers)
Thank you for checking it out! Coincidentally, it's on the way:
https://github.com/epicenter-so/epicenter/pull/661
In the middle of a huge release that sets up FFMPEG integration (OWhisper needs very specifically formatted files), but hoping to add this after!
Are there any speech-to-text models that are fully OSS for everything from training data/code to model weights?
https://salsa.debian.org/deeplearning-team/ml-policy
Not that I know of. I think the two most prominent open-source models that we hear about are Whisper and Parakeet!
Looks like a really cool project. Do you have any opinions on which transcription models are the best, from a quality perspective? I have heard a lot of mixed opinions on this. Curious what you've found in your development process?
I'm a huge fan of using Whisper hosted on Groq since the transcription is near instantaneous. ElevenLabs' Scribe model is also particularly great with accuracy, and I use it for high-quality transcriptions or manually upload files to their API to get diarization and timestamps (https://elevenlabs.io/app/speech-to-text). That being said, I'm not the biggest expert on models. In my day-to-day workflow, I usually swap between Whisper C++ for local transcription or Groq if I want the best balance of speed/performance, unless I'm working on something particularly sensitive.
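For reference, "Whisper hosted on Groq" is just a multipart POST to their OpenAI-compatible audio endpoint; a rough sketch (endpoint path and model name from memory, so check Groq's docs):

    curl https://api.groq.com/openai/v1/audio/transcriptions \
      -H "Authorization: Bearer $GROQ_API_KEY" \
      -F "file=@recording.mp3" \
      -F "model=whisper-large-v3-turbo"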
How does this compare to VoiceInk which is also open-source and been there much longer and supports all the features that you have? https://github.com/Beingpax/VoiceInk
One thing that immediately stands out is VoiceInk is macOS only, while Whispering supports Linux and Windows in addition to macOS
I really like VoiceInk!
For the Whispering dev: would it be possible to set "right shift" as a toggle? also do it like VoiceInk which is:
- either a short right-shift press -> it starts, and a short right-shift press again stops it
- or a "long right-shift press" (e.g. when pressed for at least 0.5s) -> it starts and just waits for you to release right shift to stop
it's quite convenient
another really cool feature would be to have the same "mini-recorder" which pops up on screen like VoiceInk when you record; once you're done it would display the current transcript and any of your "transformation" actions, and let you choose which one (or multiple) you want to apply, each time pasting the result in the clipboard
Related, just as a heads up. I've been using this for 100% local offline transcription for a while, works well: https://github.com/pluja/whishper
Awesome, thank you so much for bringing this to my attention and including it in the thread! Always cool to see other open source projects :)
I've been interested in a tool like this for a while. I currently have tried whisprflow and aqua voice but wanted to use my API key and store more context locally. How does all the data get stored and how can I access it?
The data is currently stored in IndexedDB, and you can currently only access it through the user interface (or digging into system files). However, I'm hoping in future updates, all of the transcriptions will instead be stored as markdown files in your local file system. More on that later!
Great work! I've been using Willow Voice, but I think I will migrate to this (much cheaper). They do have great UI/UX, though: just hit a key to start recording and the text goes into whatever text input you want. I haven't installed Whispering yet but will do so. P.S
Amazing, thanks for giving it a try! Let me know how it goes and feel free to message me any time :) Happy to add any features that you miss from closed-source alternatives!
excellent tool and easy to get started.
on win11, i installed ffmpeg using winget but it's not detecting it. running ffmpeg -version works but the app doesn't detect it.
one thing, how can we reduce the number of notifications received?
i like the system prompt option too.
Thank you for the support! Sorry for the issues with FFmpeg. This is an active issue that we're tracking:
https://github.com/epicenter-so/epicenter/issues/674
We hope to fix the notifications too. Thank you for the feedback, and happy to hear you liked the system prompt!
Am I not getting it correctly? It says local is possible, but I can't find any information about how to run it without any API key.
I get the Whisper models, and then do what? How do I run it on a device without internet? There's no documentation about it...
Commented this earlier, but I'm in the middle of a release that adds Whisper C++ support! https://github.com/epicenter-so/epicenter/pull/655
After this pushes, we'll have far more extensive local transcription support. Just fixing a few more small things :)
The docs are pretty clear that you need to use speaches if you want entirely local operation.
https://speaches.ai/
It’s not very clear, rather just a small mention. Given OP’s extensive diatribe about local-first, the fact that it prefers online providers is quite a big miss tbh.
Yeah, I agree; I neglected to update the docs and demo. This post was made anticipating that the local transcription feature would drop earlier, but it took some time due to some bugs. Before, the default option was using Groq for transcription, but that was admittedly before I figured out local transcription and wanted something to work in the meantime. Will be changing local to the default strategy in the documentation.
Agreed.
On the other hand, kudos to developer, already working to make it happen!
Are these all just Whisper wrappers? I don't get it; the underlying model still isn't as good as paid custom models from companies. Is there an actual open-source/open-weights alternative to Whisper for speech to text? I know only of Parakeet.
Voxtral mini is a bit bigger but their mixed language demos looked super impressive https://mistral.ai/news/voxtral
We like Whisper because it's open-source :) but we also support OpenAI 4o-transcribe/ElevenLabs/Deepgram APIs that all use non-Whisper models (presumably) under the hood. Speaches also supports other models that are not Whisper. Hopefully adding Parakeet support later too!
Tried it with AppImage on Linux, attempted to download a model and "Failed to download model. An error occurred." but nothing that helps me track down the error :(
Same with the deb. :(
Thanks for flagging this, and sorry that this is happening! Does downloading the model manually work? I wonder if it's related to this:
https://github.com/epicenter-so/epicenter/issues/669
Does Whispering support semantic correction? I was unable to find confirmation while doing a quick search.
Hmm, we support prompts at both 1. the model level (Whisper supports a "prompt" parameter that sometimes works) and 2. the transformations level (inject the transcribed text into a prompt and get the output from an LLM model of your choice). Unsure how else semantic correction could be implemented, but always open to expanding the feature set greatly over the next few weeks!
They might not know how Whisper works. I suspect that the answer to their question is 'yes' and the reason they can't find a straightforward answer through your project is that the answer is so obvious to you that it's hardly worth documenting.
Whisper for transcription tries to transform audio data into LLM output. The transcripts generally have proper casing, punctuation and can usually stick to a specific domain based on the surrounding context.
Can it tell voices apart?
Speaker diarization is the term you are looking for, and this is more difficult than simple transcription. I'm rather confident that someone probably has a good solution by now (if you want to pay for an API), but I haven't seen an open-source/open-weights tool for diarization/transcription. I looked a few months ago, but things move fast...
Diarization is on the roadmap; some providers support it but some don't and the adapter for that could be tricky. Whispering is not meant for meeting notes for now; for something like that or diarization I would recommend trying Hyprnote: https://hyprnote.com or interfacing with the Elevenlabs Scribe API https://elevenlabs.io/app/speech-to-text
I'm not looking for attributed meeting notes, so much as making it harder for a passing child to inject content.
Thanks, that, yeah. I've looked occasionally but it's been a bit. Necessary feature in a house with a 9yo. I've been thinking about taking a swing at solving my problem without solving the general problem.
Earlier today I discovered Vibe: https://github.com/thewh1teagle/vibe
Local, using WhisperX. Precompiled binaries available.
I'm hoping to find and try a local-first version of an nvidia/canary-like model (like https://huggingface.co/nvidia/canary-qwen-2.5b) since it's almost twice as fast as Whisper with an even lower word error rate.
Awesome, thank you so much for bringing this to my attention! Always cool to see other open source projects that have better implementations :) much to learn!
Been using WhisperX myself for years. The big factor is the diarization it offers through pyannote in a single package. I do like the software even if it makes some weird choices and has configuration issues.
Allegedly Groq will be offering diarization with their cloud offering and super fast API which will be huge for those willing to go off-local.
Yes yes yes please so much yes.
I love the idea of epicenter. I love open source local-first software.
Something I've been hacking on for a minute would fit so well, if encryption wasn't a requirement for the profit model.
But uh yes thank you for making my life easier, and I hope to return the favor soon
Thank you so much for the support! It really means a lot to me. And I can't wait to hear about what you're building. Feel free to DM me and Discord when the time comes :)
Really nice!
For macOS there is also the great VoiceInk, which is similar and open-source: https://github.com/Beingpax/VoiceInk/
Very similar and works well. It's bring-your-own-API-key if you want/need. Also works with local Whisper.
Awesome, thank you so much for bringing this to my attention! Cool to see another open source project that has different implementations :) much to learn with their Parakeet implementation!
Is there speaker detection?
Diarization is on the roadmap! Some providers support it, but some don't and the adapter for that could be tricky. Currently, for diarization I use the Elevenlabs Scribe API https://elevenlabs.io/app/speech-to-text, but there are surely other options
are there any non-Whisper-based voice models/tech/APIs?
Yes, we currently support OpenAI/ElevenLabs/Deepgram APIs that all use non-Whisper models (presumably) under the hood. Speaches also supports other models that are not Whisper. Hopefully adding Parakeet support later too!
Now we just need text to speech so we can truly interact with our computers hands free.
If you're on Mac, you can use `say`, e.g.,
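    say "Hello from Whispering"   # speaks the text aloud with the system voice
    say -v "?"                    # lists the available voices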
EDIT: I'm having way too much fun with this lol
LOL that's pretty funny, thank you for the share!
If there's a more modern package that sounds more human that's a similar no-brainer to install, I'd be interested, but just to note that this part of the problem has been solved for many years now, even if the better-sounding models are usually not as openly licensed, orders of magnitude more resource-intensive, limited to a few languages, and often less reliable/predictable in their pronunciation of new or compound words (usually not all of these issues at once)
Will give it a spin, thanks!
I also just found something that sounds genuinely realistic: Piper (https://github.com/OHF-Voice/piper1-gpl/tree/main). It's slow but apparently you can run it as a daemon to be faster, and it integrates with Home Assistant and Speech Dispatcher.
To manage the install graphically, you can use Pied (https://pied.mikeasoft.com/), which has a snap and a flatpak. That one's really cool because you can choose the voice graphically, which makes it easy to try them out or switch voices. To play sound you just use "spd-say 'Hello, world!'".
More crazy: Home Assistant did a "Year of Voice" project (https://www.home-assistant.io/blog/2022/12/20/year-of-voice/) that culminated in a real open-source voice assistant product (https://www.home-assistant.io/voice-pe/)!!! And it's only $60??
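If you want to script Piper directly, the older rhasspy/piper CLI reads text on stdin and writes a WAV; the piper1-gpl rewrite linked above may use slightly different flags, so treat this as a sketch:

    # synthesize to a WAV file, then play it with ALSA's aplay
    echo 'Hello from Piper' | piper --model en_US-lessac-medium.onnx --output_file hello.wav
    aplay hello.wav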
Windows Defender says it is infected.
Ahh, that's unfortunate. This is most likely related to the Rust `enigo` crate, which we use to write text at the cursor. You can see the lines in question here: https://github.com/epicenter-so/epicenter/blob/60f172d193d88...
If it's still an issue, feel free to build it locally on your machine to ensure your supply chain is clean! I'll add more instructions in the README in the future.
I'm no expert, but since it acts as a keyboard wedge it's likely to be unpopular with security software.
This needs to be higher, the installer on the README has a trojan.
More details please? Which installer?
--- 7.3.0 --- This release popped up just a few minutes ago, so here are the VirusTotal results for the 7.3.0 EXE and MSI installers.
EXE (still running behavior checks but Arctic Wolf says Unsafe and AVG & Avast say PUP): https://www.virustotal.com/gui/file/816b21b7435295d0ac86f6a8...
MSI nothing flags immediately, still running behavior checks (https://www.virustotal.com/gui/file/e022a018c4ac6f27696c145e...)
--- 7.2.2/7.2.1 below --- I do note one bit of weirdness: the Windows downloads show 7.2.2, but the download links themselves are 7.2.1. 7.2.1 is also what shows on the release from 3 days ago, even though it's numbered 7.2.2.
I didn't check the Mac or Linux installers, but for Windows VirusTotal flags nothing on the 7.2.1/7.2.2 MSI (https://www.virustotal.com/gui/file/7a2d4fec05d1b24b7deda202...) and 3 flags on the EXE (ArcticWolf Unsafe, AVG & Avast PUP) (https://www.virustotal.com/gui/file/a30388127ad48ca8a42f9831...)
Need to run a diff of the 7.2.2 tag against 7.3.0; I suspect the issue might be related to an edit I made to `tauri.conf.json` or one of my Rust dependencies.
We're actively tracking this issue here:
https://github.com/epicenter-so/epicenter/issues/440
Thank you again for bringing this to my attention! Need to step up my Windows development.
What does Virustotal say?