The article is better than the headline might suggest. Key paragraph:
> It’s not just the performance of NVLM, but Nvidia’s decision to make it available as an open-source project. The likes of OpenAI, Claude, and Google aren’t expected to do that anytime soon. Nvidia’s approach could benefit AI researchers and smaller firms, as they’d get access to a seemingly powerful multimodal LLM without having to pay for it.
Sort of an obvious move to open source it--they're selling shovels and just showed everyone where the mines are.
This release will generate more hype and competition around AI. With data and training increasingly becoming the moat (rather than model-level innovations), the result is more people needing Nvidia GPUs.
Nonetheless, I really like how competitive OSS is in LLMs compared to other major innovations.
Let's check the Chatbot Arena in a bit; it's a more useful benchmark than any self-reported numbers.
https://huggingface.co/spaces/lmsys/chatbot-arena-leaderboar...
related: https://news.ycombinator.com/item?id=41716975