This is amazing. I’ve been having this problem with live STT (mainly for voice assistants). I’m curious whether your model + Whisper tiny would outperform Whisper small or even medium. I’ve been having issues where even faster-whisper’s small model takes too long.
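For what it’s worth, a quick way to check whether tiny-plus-cleanup beats small outright is to time the same clip through each size. A minimal sketch with faster-whisper on CPU; "sample.wav" is a placeholder for your own audio:

```python
import time
from faster_whisper import WhisperModel

# Compare transcription latency across model sizes on CPU.
# "sample.wav" is a placeholder; point it at your own clip.
for size in ("tiny", "small"):
    model = WhisperModel(size, device="cpu", compute_type="int8")
    start = time.perf_counter()
    segments, info = model.transcribe("sample.wav", beam_size=1)
    # segments is a lazy generator; consume it so decoding actually finishes
    text = " ".join(seg.text for seg in segments)
    elapsed = time.perf_counter() - start
    print(f"{size}: {elapsed:.2f}s -> {text[:80]}")
```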
Also bummed that a purely non-thinking Qwen3-1.7B hasn’t been released. Otherwise, I’m curious about “how low can you go”.
What hardware are you running? Parakeet runs on NVIDIA and Mac and it’s way faster than Whisper. And I’ve had issues with training Qwen3 (and even Qwen2.5, though I think I was masking stop tokens wrong). I’ve had success with Gemma 3, though, and it comes in some really small sizes (270M and 1B). Maybe 270M for just transcript cleaning? I wonder if the 1B model can handle the transcript analysis…
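If someone wants to try the 270M-for-cleaning idea, a minimal sketch with the transformers chat pipeline could look like this. The google/gemma-3-270m-it model ID and the prompt are assumptions; swap in whichever checkpoint and instruction you actually use:

```python
from transformers import pipeline

# Small instruction-tuned model used only to clean raw STT output.
# Model ID is an assumption; use whatever Gemma 3 checkpoint you have locally.
cleaner = pipeline("text-generation", model="google/gemma-3-270m-it")

raw = "um so yeah turn the the living room lights off at like ten pm"
messages = [
    {"role": "user",
     "content": f"Clean up this voice transcript, removing fillers and fixing punctuation:\n{raw}"},
]

out = cleaner(messages, max_new_tokens=64)
# The pipeline returns the full chat history; the last message is the model's reply.
print(out[0]["generated_text"][-1]["content"])
```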
I’m running on a Jetson Orin Nano. Do you know if there is a Parakeet + Wyoming repo?
Unfortunately I have zero experience with the Jetson family, and Parakeet itself is a pain to get running IMO, so I took the easy option and used the ONNX version.
Try the inkvoice app, for example. It can run Parakeet with a simple click.
Thanks for sharing. It’s impressive to see how much of a difference fine-tuning makes. Whether you use a large LLM or fine-tune a small one really comes down to cost.
As a noob at fine-tuning, a question: how did you decide the values of the hyperparameters?
Thank you for reading!
Cost is a big factor - I really want to make models that can run on average CPU-only machines so most of the world can benefit, rather than needing expensive GPUs or an internet connection plus subscriptions. Another big factor is privacy (you don't need to trust a third party with your inputs).
As for the hyperparameters: pure brute-force trial and error. It feels more like a dark art than a science. You roll the dice and then start tweaking things until the loss looks like it's dropping nicely and consistently and the checkpoints start to output things resembling what we want. I sometimes run inference on intermediate checkpoints just to get a feel for whether the model is actually learning (regardless of the loss).
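If anyone wants to make the dice-rolling slightly more systematic, the same trial-and-error can be wrapped in a small grid loop. Everything below is hypothetical scaffolding: train_and_eval is a stand-in for your actual fine-tuning run, and the grid values are illustrative, not recommendations:

```python
import itertools
import random

def train_and_eval(learning_rate: float, lora_rank: int, epochs: int) -> float:
    """Hypothetical stand-in for a real fine-tuning run.

    Replace the body with your training script and return the final eval loss;
    the random value here only keeps the sketch runnable.
    """
    return random.uniform(0.5, 2.0)

# Small grid of values to brute-force.
grid = {
    "learning_rate": [1e-5, 5e-5, 2e-4],
    "lora_rank": [8, 16, 32],
    "epochs": [1, 2, 3],
}

results = []
for lr, rank, ep in itertools.product(*grid.values()):
    loss = train_and_eval(lr, rank, ep)
    results.append(((lr, rank, ep), loss))
    print(f"lr={lr} rank={rank} epochs={ep} -> eval_loss={loss:.3f}")

best_config, best_loss = min(results, key=lambda r: r[1])
print("best so far:", best_config, best_loss)
```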
I can't trust Llama 3, because I have no idea what they did to the model to make it "less woke".