Show HN: Whispering – Open-source, local-first dictation you can trust

github.com

402 points by braden-w 17 hours ago

Hey HN! Braden here, creator of Whispering, an open-source speech-to-text app.

I really like dictation. For years, I relied on transcription tools that were almost good, but they were all closed-source. Even a lot of them that claimed to be “local” or “on-device” were still black boxes that left me wondering where my audio really went.

So I built Whispering. It’s open-source, local-first, and most importantly, transparent with your data. Your data is stored locally on your device, and your audio goes directly from your machine to a local provider (Whisper C++, Speaches, etc.) or your chosen cloud provider (Groq, OpenAI, ElevenLabs, etc.). For me, the features were good enough that I left my paid tools behind (I used Superwhisper and Wispr Flow before).

Productivity apps should be open-source and transparent with your data, but they also need to match the UX of paid, closed-software alternatives. I hope Whispering is near that point. I use it for several hours a day, from coding to thinking out loud while carrying pizza boxes back from the office.

Here’s an overview: https://www.youtube.com/watch?v=1jYgBMrfVZs, and here’s how I personally am using it with Claude Code these days: https://www.youtube.com/watch?v=tpix588SeiQ.

There are plenty of transcription apps out there, but I hope Whispering adds some extra competition from the OSS ecosystem (one of my other OSS favorites is Handy https://github.com/cjpais/Handy). Whispering has a few tricks up its sleeve, like a voice-activated mode for hands-free operation (no button holding), and customizable AI transformations with any prompt/model.

Whispering used to be in my personal GH repo, but I recently moved it as part of a larger project called Epicenter (https://github.com/epicenter-so/epicenter), which I should explain a bit...

I’m basically obsessed with local-first open-source software. I think there should be an open-source, local-first version of every app, and I would like them all to work together. The idea of Epicenter is to store your data in a folder of plaintext and SQLite, and build a suite of interoperable, local-first tools on top of this shared memory. Everything is totally transparent, so you can trust it.

Whispering is the first app in this effort. It’s not there yet regarding memory, but it’s getting there. I’ll probably write more about the bigger picture soon, but mainly I just want to make software and let it speak for itself (no pun intended in this case!), so this is my Show HN for now.

I just finished college and was about to move back with my parents and work on this instead of getting a job…and then I somehow got into YC. So my current plan is to cover my living expenses and use the YC funding to support maintainers, our dependencies, and people working on their own open-source local-first projects. More on that soon.

Would love your feedback, ideas, and roasts. If you would like to support the project, star it on GitHub here (https://github.com/epicenter-so/epicenter) and join the Discord here (https://go.epicenter.so/discord). Everything’s MIT licensed, so fork it, break it, ship your own version, copy whatever you want!

pstroqaty 2 hours ago

If anyone's interested in a janky-but-works-great dictation setup on Linux, here's mine:

On key press, start recording microphone to /tmp/dictate.mp3:

  # Save up to 10 mins. Minimize buffering. Save pid
  ffmpeg -f pulse -i default -ar 16000 -ac 1 -t 600 -y -c:a libmp3lame -q:a 2 -flush_packets 1 -avioflags direct -loglevel quiet /tmp/dictate.mp3 &
  echo $! > /tmp/dictate.pid
On key release, stop recording, transcribe with whisper.cpp, trim whitespace and print to stdout:

  # Stop recording
  kill $(cat /tmp/dictate.pid)
  # Transcribe
  whisper-cli --language en --model $HOME/.local/share/whisper/ggml-large-v3-turbo-q8_0.bin --no-prints --no-timestamps /tmp/dictate.mp3 | tr -d '\n' | sed 's/^[[:space:]]*//;s/[[:space:]]*$//'
I keep these in a dictate.sh script and bind to press/release on a single key. A programmable keyboard helps here. I use https://git.sr.ht/%7Egeb/dotool to turn the transcription into keystrokes. I've also tried ydotool and wtype, but they seem to swallow keystrokes.

  bindsym XF86Launch5 exec dictate.sh start
  bindsym --release XF86Launch5 exec echo "type $(dictate.sh stop)" | dotoolc
This gives a very functional push-to-talk setup.
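
For reference, here's a sketch of how those two snippets could be combined into the dictate.sh script mentioned above (the case wrapper and shebang are mine; the commands, paths, and model file are the ones from the snippets):

  #!/bin/sh
  # dictate.sh start|stop -- push-to-talk dictation via ffmpeg + whisper.cpp
  MODEL="$HOME/.local/share/whisper/ggml-large-v3-turbo-q8_0.bin"

  case "$1" in
    start)
      # Record the mic to /tmp/dictate.mp3 (max 10 mins), remember the pid
      ffmpeg -f pulse -i default -ar 16000 -ac 1 -t 600 -y -c:a libmp3lame -q:a 2 \
        -flush_packets 1 -avioflags direct -loglevel quiet /tmp/dictate.mp3 &
      echo $! > /tmp/dictate.pid
      ;;
    stop)
      # Stop recording, transcribe, trim whitespace, print to stdout
      kill "$(cat /tmp/dictate.pid)"
      whisper-cli --language en --model "$MODEL" --no-prints --no-timestamps /tmp/dictate.mp3 \
        | tr -d '\n' | sed 's/^[[:space:]]*//;s/[[:space:]]*$//'
      ;;
  esac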

I'm very impressed with https://github.com/ggml-org/whisper.cpp. Transcription quality with large-v3-turbo-q8_0 is excellent IMO and a Vulkan build is very fast on my 6600XT. It takes about 1s for an average sentence to appear after I release the hotkey.
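
For anyone who wants to try the Vulkan path, the build is roughly the standard cmake flow from the whisper.cpp README (flag names have moved around between releases, so worth double-checking there):

  git clone https://github.com/ggml-org/whisper.cpp
  cd whisper.cpp
  cmake -B build -DGGML_VULKAN=1
  cmake --build build -j --config Release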

I'm keeping an eye on the NVidia models, hopefully they work on ggml soon too. E.g. https://github.com/ggml-org/whisper.cpp/issues/3118.

mrgaro 21 minutes ago

I'd love to find a tool which could recognise a few different speakers so that I could automatically transcribe 1:1 sessions. In addition, I definitely would want to feed that to an LLM to clean up the notes (to remove all the "umm"s and similar nonsense) and to do context-aware spell checking.

The LLM part should be very much doable, but I'm not sure if speaker recognition exists in a sufficiently working state?

  • torstenvl 5 minutes ago

    Speaker "diarization" is what you're looking for, and currently the most popular solution is pyannote.audio.

    Eventually I'm trying to get around to using it in conjunction with a fine-tuned whisper model to make transcriptions. Just haven't found the time yet.

wkcheng 15 hours ago

Does this support using the Parakeet model locally? I'm a MacWhisper user and I find that Parakeet is way better and faster than Whisper for on-device transcription. I've been using push-to-transcribe with MacWhisper through Parakeet for a while now and it's quite magical.

  • braden-w 10 hours ago

    Not yet, but I want it too! Parakeet looks incredible (saw that leaderboard result). My current roadmap is: finish stabilizing whisper.cpp integration, then add Parakeet support. If anyone has bandwidth to PR the connector, I’d be thrilled to merge it.

    • Bolwin 8 hours ago

      Unfortunately, because it's Nvidia, parakeet doesn't work with Whisper.cpp as far as I'm aware. You need onnx

      • braden-w 5 hours ago

        Some lovely folks have shared other open-source projects that implement Parakeet elsewhere in this thread. I would recommend checking those out! I'll also work on my own implementation in the meantime :D

  • daemonologist 12 hours ago

    Parakeet is amazing - 3000x real-time on an A100 and 5x real-time even on a laptop CPU, while being more accurate than whisper-large-v3 (https://huggingface.co/spaces/hf-audio/open_asr_leaderboard). NeMo is a little awkward though; I'm amazed it runs locally on Mac (for MacWhisper).

    • wkcheng 10 hours ago

      Yeah, Parakeet runs great locally on my M1 laptop (through MacWhisper). Transcription of recordings feels at least 10x faster than Whisper, and the accuracy is better as well. Push to talk for dictation is pretty seamless since the model is so fast. I've observed no downside to Parakeet if you're speaking English.

  • warangal 3 hours ago

    A bit of a tangential statement about Parakeet and other NVIDIA NeMo models: I never found actual architecture implementations as PyTorch/TF code; it seems like all such models are instantiated from a binary blob, which makes it difficult to experiment! Maybe I missed something. Does anyone here have more experience with .nemo models to shed some more light on this?

  • polo 12 hours ago

    +1 for MacWhisper. Very full featured, nice that it's a one time purchase, and the developer is constantly improving it.

  • mark212 12 hours ago

    seems like "not yet" is the answer from other comments

chrisweekly 14 hours ago

> "I think there should be an open-source, local-first version of every app, and I would like them all to work together. The idea of Epicenter is to store your data in a folder of plaintext and SQLite, and build a suite of interoperable, local-first tools on top of this shared memory. Everything is totally transparent, so you can trust it."

Yes! This. I have almost no experience w/ tts, but if/when I explore the space, I'll start w/ Whispering -- because of Epicenter. Starred the repo, and will give some thought to other apps that might make sense to contribute there. Bravo, thanks for publishing these and sharing, and congrats on getting into YC! :)

  • dev0p an hour ago

    That's a good idea... Just git repo your whole knowledge base and build on top of it.

  • braden-w 5 hours ago

    Thanks so much for the support! Really appreciate the feedback, and it’s great to hear the vision resonates. No worries on the STT/TTS experience; it’s just awesome to connect with someone who shares the values of open-source and owning our data :) I’m hoping my time in YC can be productive and, along the way, create more support for other OSS developers too. Keep in touch!

  • sebastiennight 7 hours ago

    I think we're talking about STT (speech-to-text) here, not TTS.

  • spullara 13 hours ago

    If you do want to then also have a cloud version, you can just use the AgentDB API, upload them there, and just change where the SQL runs.

braden-w 16 hours ago

For those checking out the repo this morning, I'm in the middle of a release that adds Whisper C++ support!

https://github.com/epicenter-so/epicenter/pull/655

After this pushes, we'll have far more extensive local transcription support. Just fixing a few more small things :)

  • teiferer 4 hours ago

    You mentioned that you got into YC... what is the road to profitability for your project(s) if everything is open source and local?

divan 2 hours ago

As many other people commented on similar projects, one of the issues of trying to use voice dictation instead of typing is the lack of real-time visual indication. When we write, we immediately see the text, which helps to keep the thought (especially in longer sentences/paragraphs). But with dictation, it either comes with a delay or only when dictation is over, and it doesn't feel as comfortable as writing. Tangentially, many people "think as they write" and dictation doesn't offer that experience.

I wonder if it changes with time for people who use dictation often.

  • archerx 2 hours ago

    I think there is still some use to dictation. For me it's a great way to get screenplays on paper. I can type fast, but I can think and speak faster. I just record a stream of thought of the story/video I want; even if I jump all over the place it doesn't matter, just a nice stream of consciousness. Afterwards I spend time editing, putting things in the right order, and cleaning up. I find this much faster than just writing.

    I use whisperfile which is a multiplatform implementation of whisper that works really well.

    https://huggingface.co/Mozilla/whisperfile

marcodiego 14 hours ago

> I’m basically obsessed with local-first open-source software.

We all should be.

okasaki 12 minutes ago

This is a cool project and I want to give it a go in my spare time.

However, what gives me pause is the sheer number of possibly compromised microphones all around me at all times (phones, tablets, laptops, TVs, etc.), which makes spying much easier than if I use a keyboard.

dumbmrblah 15 hours ago

I’ve been using whispering for about a year now, it has really changed how I interact with the computer. I make sure to buy mice or keyboards that have programmable hotkeys so that I can use the shortcuts for whispering. I can’t go back to regular typing at this point, just feels super inefficient. Thanks again for all your hard work!

  • braden-w 5 hours ago

    Thank you so much for your support! It really means a lot :) Happy to hear that it's helped you, and keep in touch if you ever have any issues!

Tmpod 10 hours ago

I've been interested in dictation for a while, but I don't want to be sending any audio to a remote API, it all has to be local. Having tried just a couple of models (namely the one used by the FUTO Keyboard), I'm kinda feeling like we're not quite there yet.

My biggest gripe perhaps is not being able to get decent content out of a thought stream; the models can't properly filter out the pauses, "uuuuhmms", and much less so handle on the fly corrections to what I've been saying, like going back and repeating something with a slight variation and whatnot.

This is a challenging problem I'd love to see tackled well by open models I can run on my computer or phone. Are there newer models more capable of this? Or is it not just a model thing, and am I missing a good app too?

In the meantime, I'll keep typing, even though it can be quite a bit less convenient; especially for note-taking on the go.

  • hephaes7us 8 hours ago

    Have you tried Whisper itself? It's open-weights.

    One of the features of the project posted above is "transformations" that you can run on transcripts. They feed the text into an LLM to clean it up. If you're willing to pay for the tokens, I think you could not only remove filler-words, but could probably even get the semantically-aware editing (corrections) you're talking about.

    • braden-w 5 hours ago

      ^Yep, unfortunately, the best option right now seems to be piping the output into another LLM to do some cleanup, which we try to help you do in Whispering. Recent transcription models don't have very good built-in inference/cleanup; Whisper only has the very weak "prompt" parameter. It seems like this is probably by design, to keep these models lean/specialized/performant at their task.
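
      As a rough sketch of what that kind of cleanup pass looks like outside the app (assuming an OpenAI-compatible chat endpoint; the model name, prompt, and transcript path here are just illustrative):

        # clean up a raw transcript with an LLM
        TRANSCRIPT="$(cat /tmp/dictate.txt)"
        curl -s https://api.openai.com/v1/chat/completions \
          -H "Authorization: Bearer $OPENAI_API_KEY" \
          -H "Content-Type: application/json" \
          -d "$(jq -n --arg t "$TRANSCRIPT" '{
            model: "gpt-4o-mini",
            messages: [
              {role: "system", content: "Clean up this dictation: remove filler words, fix punctuation, keep the meaning."},
              {role: "user", content: $t}
            ]
          }')" | jq -r '.choices[0].message.content'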

      • _345 4 hours ago

        By "try to help", do you mean that it currently does so, or that the functionality is on the way?

glial 15 hours ago

This is wonderful, thank you for sharing!

Do you have any sense of whether this type of model would work with children's speech? There are plenty of educational applications that would value a privacy-first locally deployed model. But, my understanding is that Whisper performs pretty poorly with younger speakers.

  • braden-w 10 hours ago

    Thank you! And you’re right, I think Whisper struggles with younger voices. Haven’t tested Parakeet or other models for this yet, but that’s a great use case (especially since privacy matters in education). I would also shout out Hyprnote! (https://hyprnote.com/) They might be expanding their model options, as they have shown with OWhisper (https://docs.hyprnote.com/owhisper/what-is-this).

hn_throw2025 2 hours ago

Thanks, looks like great work! Hope you continue to cater for those of us with Intel Macs who need the off-device capability…

g48ywsJk6w48 6 hours ago

Thank you for sharing such a great product. Last week, after getting fed up with a lot of slow commercial products, I wrote my own similar app that works locally in the loop and can record everything I say at the push of a button, transcribe it, and put the text into the app itself. For me it was really important to add a second mode so I could say everything I want in my mother tongue and have it translated into English automatically. Of course, it all works with formatting, with the placement of commas, quotes, etc. It is hard to believe that this hasn't been done in a native dictation app on macOS yet.

  • braden-w 5 hours ago

    Thank you so much for the support, really means a lot! Happy to hear that it has helped you with translation, and agreed, it's kinda crazy native dictation hasn't caught up yet. In the meantime, we have OSS to fill in the gaps.

hephaes7us 11 hours ago

Thanks for sharing! Transcription suddenly became useful to me when LLMs started being able to generate somewhat useful code from natural language. (I don't think anybody wants to dictate code.) Now my workflow is similar to yours.

I have mixed feelings about OS-integration. I'm currently working on a project to use a foot-pedal for push-to-transcribe - it speaks USB-HID so it works anywhere without software, and it doesn't clobber my clipboard. That said, an app like yours really opens up some cool possibilities! For example, in a keyboard-emulation strategy like mine, I can't easily adjust the text prompt/hint for the transcription model.

With an application running on the host though, you can inject relevant context/prompts/hints (either for transcription, or during your post-transformations). These might be provided intentionally by the user, or, if they really trust your app, this context could even be scraped from what's currently on-screen (or which files are currently being worked on).

Another thing I've thought about doing is using a separate keybind (or button/pedal) that appends the transcription directly to a running notes file. I often want to make a note to reference later, but which I don't need immediately. It's a little extra friction to have to actually have my notes file open in a window somewhere.
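
(With a shell setup like the one described upthread, that notes keybind could be as simple as a second binding; the key name and notes path here are just placeholders:)

  # hypothetical second key: same push-to-talk, but append to a notes file instead of typing
  bindsym XF86Launch6 exec dictate.sh start
  bindsym --release XF86Launch6 exec sh -c 'printf "%s\n\n" "$(dictate.sh stop)" >> "$HOME/notes/inbox.md"'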

Will keep an eye on epicenter, appreciate the ethos.

  • braden-w 5 hours ago

    Thank you for the support, and agreed on OS-level integration. At least for me, I have trouble trusting any app unless they are open source and have a transparent codebase for audit :)

  • NDxTreme 7 hours ago

    If you want a rabbit hole to go down, look into cursorless, talonvoice, and that whole sphere.

    They're actually dictating code, but they do it in a rather smart way.

progx 3 hours ago

Do any additional scripts or other tools exist that can do the following:

Record voice permanently (without a hotkey), e.g. "run" compiles and runs a script, "code" switches back to the code editor.

Under Windows I use AutoHotkey 2, but I would like to replace it with simple voice commands.

jryio 6 hours ago

Does this functionality exist on iOS ? I'm looking for an iOS app that wraps Parakeet or whisper in a custom iOS keyboard.

That way I can switch to the dictation keyboard, press dictate, and have the transcription inserted in any application (first or third party).

MacWhisper is fantastic for macOS system dictation but the same abilities don't exist on iOS yet. The native iOS dictation is quite good but not as accurate with bespoke technical words / acronyms as Whisper cpp.

  • nchudleigh 6 hours ago

    superwhisper has that functionality.

    • jryio 5 hours ago

      Right, but not running locally on device. No privacy.

      • braden-w 4 hours ago

        I really want to run it locally on a phone, but as a developer it's scary to think about making a native mobile app and having to work with the iOS toolchain. I don't have bandwidth at the moment, but if anyone knows of any OSS mobile alternatives, feel free to drop them!

0xbadcafebee 12 hours ago

Not a fan of high resource use or reliance on proprietary vendors/services. DeepSpeech/Vosk were pre-AI and still worked well on local devices, but they were a huge pain to set up and use. Anyone have better versions of those? Looks like one successor was Coqui STT, which then evolved into Coqui TTS which seems still maintained. Kaldi seems older but also still maintained.

edit: nvm, this overview explains the different options: https://www.gladia.io/blog/best-open-source-speech-to-text-m... and https://www.gladia.io/blog/thinking-of-using-open-source-whi...

  • braden-w 4 hours ago

    Sorry for the delayed response, thank you for sharing these articles! I agree. I hope that we get a lot better open-source STT options in the future.

Aachen 12 hours ago

Wait, I'm confused. The text here says all data remains on device and emphasises how much you can trust that, that you're obsessed with local-first software, etc. Clicking on the demo video, step one is... configuring access tokens for external services? Are the services shown at 0:21 (Groq, OpenAI, Anthropic, Google, ElevenLabs) doing the actual transcription, listening to everything I say, and is only the resulting text that they give us subject to "it all stays on your device"? Because that's not at all what I expected after reading this description

  • braden-w 10 hours ago

    Great catch Aachen, I should have clarified this better. The app supports both external APIs (Groq, OpenAI, etc.), and more recently local transcription (via whisper.cpp, OWhisper, Speaches, etc.), which never leaves your device.

    Like Leftium said, the local-first Whisper C++ implementation just posted a few hours ago.

  • dang 4 hours ago

    We've edited the top text to make this clearer now. Thanks for pointing this out!

  • IanCal 11 hours ago

    > All your data is stored locally on your device, and your audio goes directly from your machine to your chosen cloud provider (Groq, OpenAI, ElevenLabs, etc.) or local provider (Speaches, owhisper, etc.)

    Their point is they aren’t a middleman with this, and you can use your preferred supplier or run something locally.

    • bangaladore 10 hours ago

      The issue is

      > All your data is stored locally on your device,

      is fundamentally incompatible with half of the following sentence.

      I'd write it as

      > All your data is stored locally on your device, unless you explicitly decide to use a cloud provider for dictation.

      • braden-w 10 hours ago

        Great correction, wish I could edit the post! Updated the README to reflect this.

Brajeshwar 6 hours ago

I’m beginning to like the idea in this space: local-first, with backup via your own tool. Recently, https://hyprnote.com was popular here on Hacker News and it is pretty good. They do the same thing: works local-first, but you can use your preferred tool too.

  • braden-w 5 hours ago

    Totally agreed, huge fan of Hyprnote as well. We work on two slightly different problems, but a lot of our tech has overlap, and our missions especially overlap :)

solarkraft 16 hours ago

Cool! I just started becoming interested in local transcription myself.

If you add Deepgram listen API compatibility, you can do live transcription via either Deepgram (duh) or OWhisper: https://news.ycombinator.com/item?id=44901853

(I haven’t gotten the Deepgram JS SDK working with it yet, currently awaiting a response by the maintainers)

jnmandal 12 hours ago

Looks like a really cool project. Do you have any opinions on which transcription models are the best, from a quality perspective? I have heard a lot of mixed opinions on this. Curious what you've found in your development process?

  • braden-w 5 hours ago

    I'm a huge fan of using Whisper hosted on Groq since the transcription is near instantaneous. ElevenLabs' Scribe model is also particularly great with accuracy, and I use it for high-quality transcriptions or manually upload files to their API to get diarization and timestamps (https://elevenlabs.io/app/speech-to-text). That being said, I'm not the biggest expert on models. In my day-to-day workflow, I usually swap between Whisper C++ for local transcription or Groq if I want the best balance of speed/performance, unless I'm working on something particularly sensitive.

michael-sumner 14 hours ago

How does this compare to VoiceInk, which is also open-source, has been around much longer, and supports all the features that you have? https://github.com/Beingpax/VoiceInk

  • phainopepla2 13 hours ago

    One thing that immediately stands out is VoiceInk is macOS only, while Whispering supports Linux and Windows in addition to macOS

  • oulipo 13 hours ago

    I really like VoiceInk!

    For the Whispering dev: would it be possible to set "right shift" as a toggle? also do it like VoiceInk which is:

    - either a short right shift press -> it starts, and another short right shift press stops it
    - or a "long right shift press" (e.g. held for at least 0.5s) -> it starts and just waits for you to release right shift to stop

    it's quite convenient

    another really cool feature would be to have the same "mini-recorder" which pops up on screen like VoiceInk when you record, and once you're done it would display the current transcript and any of your "transformation" actions, and let you choose which one (or multiple) you want to apply, each time pasting the result to the clipboard

tummler 14 hours ago

Related, just as a heads up. I've been using this for 100% local offline transcription for a while, works well: https://github.com/pluja/whishper

  • braden-w 5 hours ago

    Awesome, thank you so much for bringing this to my attention and including it in the thread! Always cool to see other open source projects :)

ayushrodrigues 13 hours ago

I've been interested in a tool like this for a while. I have tried Wispr Flow and Aqua Voice, but wanted to use my own API key and store more context locally. How does all the data get stored, and how can I access it?

  • braden-w 5 hours ago

    The data is currently stored in IndexedDB, and you can currently only access it through the user interface (or digging into system files). However, I'm hoping in future updates, all of the transcriptions will instead be stored as markdown files in your local file system. More on that later!

Johnny_Bonk 16 hours ago

Great work! I've been using Willow Voice, but I think I will migrate to this (much cheaper). They do have great UI/UX: just hit a key to start recording and the text goes into whatever text input you want. I haven't installed Whispering yet but will do so. P.S.

  • braden-w 15 hours ago

    Amazing, thanks for giving it a try! Let me know how it goes and feel free to message me any time :) happy to add any features that you miss from closed-source alternatives!

hn1986 7 hours ago

Excellent tool and easy to get started.

On Win11, I installed ffmpeg using winget but it's not being detected. Running ffmpeg -version works, but the app doesn't detect it.

One thing: how can we reduce the number of notifications received?

I like the system prompt option too.

mrs6969 15 hours ago

Am I not getting it correctly? It says local is possible, but I can't find any information about how to run it without any API key.

I get the Whisper models, and then do what? How do I run it on a device without internet? There's no documentation about it...

  • rpdillon 14 hours ago

    The docs are pretty clear that you need to use speaches if you want entirely local operation.

    https://speaches.ai/

    • yunohn 13 hours ago

      It’s not very clear, rather just a small mention. Given OP’s extensive diatribe about local-first, the fact that it prefers online providers is quite a big miss tbh.

      • braden-w 10 hours ago

        Yeah, I agree; I neglected to update the docs and demo. This post was made anticipating that the local transcription feature would drop earlier, but it took some time due to some bugs. Before, the default option was using Groq for transcription, but that was admittedly before I figured out local transcription and wanted something to work in the meantime. Will be updating the documentation to make local the default strategy.

      • mrs6969 3 hours ago

        Agreed.

        On the other hand, kudos to developer, already working to make it happen!

satvikpendem 13 hours ago

Are all these just Whisper wrappers? I don't get it; the underlying model still isn't as good as paid custom models from companies. Is there an actual open source / open weights alternative to Whisper for speech to text? I know only of Parakeet.

  • sa-code 12 hours ago

    Voxtral mini is a bit bigger but their mixed language demos looked super impressive https://mistral.ai/news/voxtral

    • braden-w 5 hours ago

      We like Whisper because it's open-source :) but we also support OpenAI 4o-transcribe/ElevenLabs/Deepgram APIs that all use non-Whisper models (presumably) under the hood. Speaches also supports other models that are not Whisper. Hopefully adding Parakeet support later too!

emacsen 11 hours ago

Tried it with the AppImage on Linux, attempted to download a model and got "Failed to download model. An error occurred." but nothing that helps me track down the error :(

newman314 16 hours ago

Does Whispering support semantic correction? I was unable to find confirmation while doing a quick search.

  • braden-w 15 hours ago

    Hmm, we support prompts at both 1. the model level (Whisper supports a "prompt" parameter that sometimes works) and 2. the transformations level (inject the transcribed text into a prompt and get the output from an LLM model of your choice). Unsure how else semantic correction could be implemented, but always open to expanding the feature set greatly over the next few weeks!
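
    For concreteness, the model-level prompt is just Whisper's built-in parameter, e.g. via whisper.cpp's CLI or the OpenAI transcription endpoint (the domain terms below are only an example; flag names are worth double-checking against --help):

      # whisper.cpp: bias the decoder toward domain vocabulary
      whisper-cli --model ggml-large-v3-turbo-q8_0.bin --prompt "Epicenter, Whispering, diarization, Groq" audio.wav

      # OpenAI API: same idea via the prompt field
      curl -s https://api.openai.com/v1/audio/transcriptions \
        -H "Authorization: Bearer $OPENAI_API_KEY" \
        -F file=@audio.wav -F model=whisper-1 \
        -F prompt="Epicenter, Whispering, diarization, Groq"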

    • joshred 15 hours ago

      They might not know how Whisper works. I suspect that the answer to their question is 'yes' and the reason they can't find a straightforward answer through your project is that the answer is so obvious to you that it's hardly worth documenting.

      Whisper for transcription tries to transform audio data into LLM output. The transcripts generally have proper casing, punctuation and can usually stick to a specific domain based on the surrounding context.

dllthomas 13 hours ago

Can it tell voices apart?

  • hephaes7us 11 hours ago

    Speaker diarization is the term you are looking for, and this is more difficult than simple transcription. I'm rather confident that someone probably has a good solution by now (if you want to pay for an API), but I haven't seen an open-source/open-weights tool for diarization/transcription. I looked a few months ago, but things move fast...

    • braden-w 10 hours ago

      Diarization is on the roadmap; some providers support it but some don't and the adapter for that could be tricky. Whispering is not meant for meeting notes for now; for something like that or diarization I would recommend trying Hyprnote: https://hyprnote.com or interfacing with the Elevenlabs Scribe API https://elevenlabs.io/app/speech-to-text

      • dllthomas 9 hours ago

        I'm not looking for attributed meeting notes, so much as making it harder for a passing child to inject content.

    • dllthomas 11 hours ago

      Thanks, that, yeah. I've looked occasionally but it's been a bit. Necessary feature in a house with a 9yo. I've been thinking about taking a swing at solving my problem without solving the general problem.

hereme888 12 hours ago

Earlier today I discovered Vibe: https://github.com/thewh1teagle/vibe

Local, using WhisperX. Precompiled binaries available.

I'm hoping to find and try a local-first version of something like nvidia/canary (e.g. https://huggingface.co/nvidia/canary-qwen-2.5b), since it's almost twice as fast as Whisper with an even lower word error rate.

  • braden-w 5 hours ago

    Awesome, thank you so much for bringing this to my attention! Always cool to see other open source projects that have better implementations :) much to learn!

  • icelancer 12 hours ago

    Been using WhisperX myself for years. The big factor is the diarization they offer through pyannote in a single package. I do like the software, even if it makes some weird choices and has configuration issues.

    Allegedly Groq will be offering diarization with their cloud offering and super fast API which will be huge for those willing to go off-local.

Jarwain 8 hours ago

Yes yes yes please so much yes.

I love the idea of epicenter. I love open source local-first software.

Something I've been hacking on for a minute would fit so well, if encryption wasn't a requirement for the profit model.

But uh yes thank you for making my life easier, and I hope to return the favor soon

  • braden-w 5 hours ago

    Thank you so much for the support! It really means a lot to me. And I can't wait to hear about what you're building. Feel free to DM me on Discord when the time comes :)

oulipo 13 hours ago

Really nice!

For OsX there is also the great VoiceInk which is similar and open-source https://github.com/Beingpax/VoiceInk/

  • jiehong 13 hours ago

    Very similar and works well. It's bring-your-own-API-key if you want/need. Also with local Whisper.

    • braden-w 5 hours ago

      Awesome, thank you so much for bringing this to my attention! Cool to see another open source project that has different implementations :) much to learn with their Parakeet implementation!

ideashower 14 hours ago

Is there speaker detection?

  • braden-w 10 hours ago

    Diarization is on the roadmap! Some providers support it, but some don't and the adapter for that could be tricky. Currently, for diarization I use the Elevenlabs Scribe API https://elevenlabs.io/app/speech-to-text, but there are surely other options

random3 14 hours ago

are there any non-Whisper-based voice models/tech/APIs?

  • braden-w 12 hours ago

    Yes, we currently support OpenAI/ElevenLabs/Deepgram APIs that all use non-Whisper models (presumably) under the hood. Speaches also supports other models that are not Whisper. Hopefully adding Parakeet support later too!

codybontecou 15 hours ago

Now we just need text to speech so we can truly interact with our computers hands free.

  • PyWoody 14 hours ago

    If you're on Mac, you can use `say`, e.g.,

        say "This is a test message" --voice="Bubbles"
    
    EDIT: I'm having way too much fun with this lol

        say "This is a test message" --voice="Organ"
        say "This is a test message" --voice="Good News"
        say "This is a test message" --voice="Bad News"
        say "This is a test message" --voice="Jester"
    • braden-w 13 hours ago

      LOL that's pretty funny, thank you for the share!

  • Aachen 12 hours ago

        $ apt install espeak-ng
        $ espeak-ng 'Hello, World!'
    
    It takes some adjustment and sounds a lot worse than what e.g. Google ships proprietarily on your phone, but after ~30 seconds of listening (if I haven't used it recently) I understand it just as well as I understand the TTS engine on my phone

    If there's a more modern package that sounds more human that's a similar no-brainer to install, I'd be interested, but just to note that this part of the problem has been solved for many years now, even if the better-sounding models are usually not as openly licensed, orders of magnitude more resource-intensive, limited to a few languages, and often less reliable/predictable in their pronunciation of new or compound words (usually not all of these issues at once)

    • 0xbadcafebee 11 hours ago

        $ apt install festival
        $ echo "Hello, World!" | festival --tts
      
      Not impressively better, but I find festival slightly more intelligible.
      • Aachen 11 hours ago

        Will give it a spin, thanks!

        • 0xbadcafebee 7 hours ago

          I also just found something that sounds genuinely realistic: Piper (https://github.com/OHF-Voice/piper1-gpl/tree/main). It's slow but apparently you can run it as a daemon to be faster, and it integrates with Home Assistant and Speech Dispatcher.

            $ sudo apt update
            $ sudo apt install -y python3 python3-pip libsndfile1 ffmpeg
            $ python3 -m venv venv/piper-tts
            $ ./venv/piper-tts/bin/pip install piper-tts
            $ ./venv/piper-tts/bin/python3 -m piper.download_voices en_US-lessac-medium
            $ ./venv/piper-tts/bin/piper -m en_US-lessac-medium -- 'This will play on your speakers.'
          
          To manage the install graphically, you can use Pied (https://pied.mikeasoft.com/), which has a snap and a flatpak. That one's really cool because you can choose the voice graphically which makes it easy to try them out or switch voices. To play sound you just use "spd-say 'Hello, world!'"

          More crazy: Home Assistant did a "Year of Voice" project (https://www.home-assistant.io/blog/2022/12/20/year-of-voice/) that culminated in a real open-source voice assistant product (https://www.home-assistant.io/voice-pe/) !!! And it's only $60??

satisfice 15 hours ago

Windows Defender says it is infected.