They claim they're surpassing the latest DeepSeek model on many benchmarks. I'm wondering when we'll see a GGUF release of this model; since it's a MoE, it should run fine on local machines.
BTW, here's the blog post: https://mimo.xiaomi.com/mimo-v2-5-pro. They state that "DeepSeek V4 Pro numbers are with its max effort setting," so I'm wondering what setting they used for this one.
This is the most underrated release we tested at https://gertlabs.com
I'm surprised they open-sourced it. It's very comparable to Kimi K2.6 performance-wise, slightly better with tools, and cheaper.
Wow, China has so many open-source LLMs.