The team has also applied state-of-the-art research to improve voice-labeling accuracy, using speaker fingerprinting and diarization to produce error-free transcripts and crystal-clear audio.
I don't get why people prefer the app: it's so much friction to unlock my phone (wait for unlock), open an app (wait for it to load), and then press the record button. I'm very excited for hardware devices like this. Let's see if they can actually make it useful.
Our core belief is to build only a few features that are incredibly accurate. People have tons of complaints about past AI wearables: the UI is unclear about when it's recording, battery life lasts only a day, the button is poorly placed, the transcript fails to detect more than two speakers. These are all things we've put a lot of thought into and fixed.
We're running models for transcription and voice classification. The data policies for those model providers are SOC 2 compliant. On top of that, we made the important decision to open source the device firmware and app code, which will be public at github.com/open-vision-engineering. The audio files are stored on the standalone device's 64 GB of storage and then synced to your phone's on-device storage. We use AWS S3 to generate a signed URL so only temporary access is enabled.
Instabuy. This feature set and open-source, privacy-first mindset is exactly what I want.
Extra disk space, a tactile switch, a microphone for when you're on calls, extra battery life. Some prefer an app, but I see the appeal.
"Unlimited cloud storage"? We've heard that story before...
I don't get it; I already have a phone that can record.
Super cool! Excited to see how you can leverage external compute while maintaining privacy.
Pretty neat! Excited to see how you can leverage Apple compute too.
Yes, the grand vision Apple is going for with personal compute is the iPhone running models that act on wearable sensor data. Think of an invisible OS driven by agents.