42 comments

  • artavazdsm 6 hours ago

    Co-founder of Krisp here. There are 1.5B non-native English speakers in the workforce — 4x the number of native speakers — yet all comms infra is optimized for native accents. We spent 3 years building listener-side, on-device accent understanding. The hard parts: no parallel training data exists, the accent space is infinite, accent is entangled with voice identity, and it has to run on CPU under 250ms of latency. Built in Yerevan, Armenia. The beta is live and free. Happy to go deep on the ML side.
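
    To make the 250ms constraint concrete, here is a toy sketch of how a streaming latency budget decomposes. Every number and stage name below is a made-up illustration for intuition, not Krisp's actual pipeline:

```python
# Hypothetical per-chunk latency budget for on-device streaming speech
# processing (all numbers are illustrative, not real measurements).
CHUNK_MS = 20       # audio captured per frame
LOOKAHEAD_MS = 80   # limited future context the model waits for
INFERENCE_MS = 60   # CPU forward pass per chunk
BUFFER_MS = 40      # jitter buffer / resynthesis overlap

def end_to_end_latency_ms() -> int:
    """Sum the stages a chunk passes through before playback."""
    return CHUNK_MS + LOOKAHEAD_MS + INFERENCE_MS + BUFFER_MS

print(end_to_end_latency_ms())  # 200 ms, inside a 250 ms target
```

    The point of the decomposition: inference time is only one term, so a model that benchmarks fast in isolation can still blow the budget once lookahead and buffering are added.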

    • AlexeyBelov 6 hours ago

      What do you think about the misuse potential (by scammers for example)?

      Aside from that, I like that this exists now.

      • davitb 4 hours ago

        This is listener-side, not speaker-side, so there's no misuse case here.

  • aris_hovsepyan 5 hours ago

    The real achievement here isn't just quality, it's doing it streaming with tight latency on CPU while preserving speaker identity. Most voice-conversion (VC) work looks great offline, then falls apart once you go real-time. Nice work getting this to hold up in streaming.

  • 1ilit 3 hours ago

    On-device CPU inference is the real flex here. Optimization probably mattered as much as modeling.

  • Narek21 2 hours ago

    This feels adjacent to voice conversion research, but with stricter latency constraints.

  • zkhalapyan 3 hours ago

    Yeah, this would be helpful for my Singlish-speaking friends out there!

  • KarineS 6 hours ago

    Finally, Krisp built it! I'll understand my users in interviews better, without the cognitive load or the constant "could you please repeat that?" phrasing.

  • lu_mn 6 hours ago

    Kinda wild to think accent friction is basically a tech problem. Doing this in real time on CPU sounds tough. Curious how well it holds up in messy, real calls.

  • amartiro 6 hours ago

    The parallel data is a problem here — you can’t crowdsource ground truth because no one can record themselves with a different accent.

  • rasjonell 5 hours ago

    Latency can destroy conversational rhythm. What's your p95 inference time? Also, are there any benchmarks we can see?
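
    For anyone wanting to measure this themselves, p95 is cheap to compute from raw per-chunk timings. A minimal nearest-rank sketch with simulated numbers (nothing below reflects Krisp's actual performance):

```python
import random

def percentile(values, p):
    """Nearest-rank percentile of a list of measurements."""
    ordered = sorted(values)
    k = max(0, min(len(ordered) - 1, round(p / 100 * len(ordered)) - 1))
    return ordered[k]

# Simulated per-chunk inference timings in milliseconds.
random.seed(0)
timings = [random.gauss(60, 10) for _ in range(1000)]

print(f"p50={percentile(timings, 50):.1f}ms  p95={percentile(timings, 95):.1f}ms")
```

    Tail percentiles matter more than the mean here: a handful of slow chunks is what the listener actually perceives as stutter.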

  • MarAraqelyan 3 hours ago

    Really cool to see accent adaptation in real time. Curious about benchmarks and how well this handles messy, real Zoom calls.

  • armsuro 6 hours ago

    This feels adjacent to voice conversion research, but with stricter latency constraints.

  • snek26 6 hours ago

    Curious whether wav2vec-style embeddings played a role in your representation learning.

  • achobanyan 3 hours ago

    Local CPU inference stands out. Careful optimization likely rivaled the modeling effort.

  • tritont 5 hours ago

    Nice to finally see this direction of accent conversion (that is, on incoming calls) in the Krisp app. This is a very meaningful feature.

  • Ani_Kh1 4 hours ago

    Curious whether wav2vec-style embeddings played a role in your representation learning.

  • sssnowgirl 6 hours ago

    This is a game-changer! I remember every single investor call where I felt shy asking "can you repeat that?"... Thanks, Krisp, you changed my life!!!

  • nareksardaryann 5 hours ago

    Great work. Natural + clear is the combo that matters.

  • bebelovejan 6 hours ago

    I'd like to use such a model, but only if it really preserves my voice; otherwise people would realize it's not me, or I'd have to use it all the time.

  • arshakarap 6 hours ago

    This is built for international, privacy-first teams!

  • gyumjibashyan 6 hours ago

    How did you estimate the number of IQ points?

  • Tatevik_H 5 hours ago

    Streaming constraint under 200ms changes everything. Causal modeling in speech is brutal to get right.
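
    As a toy illustration of that causality constraint (not Krisp's actual model): a causal convolution only ever reads past samples, so it can run on a live stream without waiting for future audio:

```python
def causal_conv1d(signal, kernel):
    """1-D convolution where output[t] depends only on
    signal[t - k + 1 .. t], never on future samples.
    Left-padding with zeros keeps output length equal to input length."""
    k = len(kernel)
    padded = [0.0] * (k - 1) + list(signal)
    return [
        sum(kernel[j] * padded[t + k - 1 - j] for j in range(k))
        for t in range(len(signal))
    ]

# Moving average over the current and previous sample:
print(causal_conv1d([1, 2, 3, 4], [0.5, 0.5]))  # [0.5, 1.5, 2.5, 3.5]
```

    The brutal part the comment alludes to: every layer built this way trades away future context, and quality losses from that compound across a deep stack.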

  • aharutyunyan 5 hours ago

    Accent space is effectively infinite. Generalization must rely on invariants rather than enumeration.

  • Flora_H42 6 hours ago

    Streaming constraint under 200ms changes everything. Causal modeling in speech is brutal to get right.

  • imuradyan 6 hours ago

    On-device CPU inference is the real flex here! Optimization probably mattered as much as modeling.

  • sohanyan 6 hours ago

    Accent space is effectively infinite. Generalization must rely on invariants rather than enumeration.

  • astipili 6 hours ago

    Will it finally help the barista at Starbucks get my name right?

  • Hripsimeh 5 hours ago

    This is a huge game changer!
