25 comments

  • danielhanchen 2 days ago

    I made some dynamic GGUFs for the 32B MoE model! Try:

    ./llama.cpp/llama-cli -hf unsloth/granite-4.0-h-small-GGUF:UD-Q4_K_XL

    Also a support-agent finetuning notebook with Granite 4: https://colab.research.google.com/github/unslothai/notebooks...

    • anshumankmr 2 days ago

      You guys are lightning fast. Did you folks have access to the model weights beforehand or something, if you don't mind me asking?

      • danielhanchen 10 hours ago

        Oh thanks! Yes sometimes we get early access to some models!

    • incomingpain a day ago

      As always, you're awesome. Keep up the great work!

  • baobun 2 days ago

    The IBM announcement post is more informative than the VentureBeat one:

    IBM Granite 4.0: hyper-efficient, high performance hybrid models for enterprise

    https://www.ibm.com/new/announcements/ibm-granite-4-0-hyper-...

    • flowerthoughts 2 days ago

      ISO 42001 certified.

      > ISO/IEC 42001 is an international standard that specifies requirements for establishing, implementing, maintaining, and continually improving an Artificial Intelligence Management System (AIMS) within organizations. It is designed for entities providing or utilizing AI-based products or services, ensuring responsible development and use of AI systems.

      https://www.iso.org/standard/42001

      If anyone has access to ISO standards, I'm really curious what the practical effects of that certification are. I.e., what things does Granite have that others don't, because IBM had to add or do them to fulfill the certification?

      The committee was formed in 2017, chaired by an AI expert: https://www.iso.org/committee/6794475.html

      • PeterStuer a day ago

        Depends. In my experience, some countries, e.g. Spain, are very into certs, while others just ignore them.

    • magicalhippo 2 days ago

      They also have a nice write-up on the Mamba architecture:

      https://www.ibm.com/think/topics/mamba-model
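
      For background, the core of a Mamba layer is a discretized linear state-space recurrence (this is the standard SSM formulation Mamba builds on, not anything specific to IBM's write-up):

      h_t = \bar{A} h_{t-1} + \bar{B} x_t
      y_t = C h_t

      Mamba's key change over earlier SSMs is making B, C, and the discretization step \Delta functions of the current input, so the state update can selectively keep or forget information per token.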

  • thawab a day ago

    After getting burned by Watson, I am not touching any AI from IBM.

    • arthurcolle a day ago

      It's a file you can run on your computer.

    • stirfish a day ago

      Tell us more about how you were burned, please!

  • aetherspawn 2 days ago

    I really just want to know how it compares to ChatGPT and Claude at various tasks, but there aren’t any graphs for that.

    • KronisLV a day ago

      It will probably take a few days to a week for some in-depth benchmarks to start popping up.

      The IBM article has this image showing that it's supposed to be a bit ahead of GPT OSS 120B for at least some tasks (horrible URL but oh well): https://www.ibm.com/content/dam/worldwide-content/creative-a...

      So in general it's going to be worse than GPT-5 and Sonnet 4.5, but closer to GPT-5 mini. At least you can run this one on-prem, which you can't do with any of the others. Pretty good, and it could possibly replace Qwen3 for quite a few use cases!

      • KronisLV a day ago

        Edit: or perhaps not; it seems third-party benchmarks aren't as positive.

  • incomingpain a day ago

    "Small" is 32b a9b for 19GB @ Q4_K_XL

    20GB @ 100,000 context.

    But for some reason... LM Studio isn't loading it onto the GPU for me?

    I just updated to 0.3.28 and it still won't load onto the GPU.

    Switched from Vulkan to ROCm, and now it's working properly.
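
    If a frontend won't offload, running the same GGUF through llama.cpp directly lets you force it. A minimal sketch (flag values are illustrative; tune the layer count and context to your VRAM):

    ./llama.cpp/llama-server -hf unsloth/granite-4.0-h-small-GGUF:UD-Q4_K_XL -ngl 99 -c 100000

    -ngl sets how many layers go to the GPU (99 here effectively means all of them) and -c sets the context size.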

    https://docs.unsloth.ai/new/ibm-granite-4.0

    Fantastic work from unsloth folks as usual.

    Running it in Roo Code, it uses more like 26GB of VRAM at ~30 TPS. Roo Code does not work with it, though.

    Kilo Code next: it uses about 22GB of VRAM, and it works great.

    The model, however, didn't one-shot my first benchmark. That's pretty bad news for this model, given Magistral 2509 and Apriel 15B do better.

    Better on pass 2, but still not 100%. It finally passed on the third attempt.

    I'm predicting it'll be around 30% on LiveCodeBench and probably around 15% on Aider polyglot. Very disappointed in its coding capability.

    I just found:

    https://artificialanalysis.ai/models/granite-4-0-h-small

    25.1% on LiveCodeBench. Absolutely deserved.

    2% on Terminal-Bench.

    16% on the coding index. Completely deserved.

  • anshumankmr 2 days ago

    Also worth checking out is Codestral Mamba... I think that had a 256k context and used Mamba, even if it's a slightly older model now... it worked great for a Text2SQL use case we worked on.

    • incomingpain a day ago

      Magistral 2509 just came out. It slows down drastically when you go over 40,000 context. It's quite a fantastic model.

  • EagnaIonat 2 days ago

    Tried out the Ollama version and it's insanely fast, with really good results for its 1.9GB size. It's supposed to have a 1M context window; I'd be interested in where the speed goes at that length.

    No Mamba in the Ollama version though.

    • mehdibl a day ago

      Ollama usually defaults to Q4 and an 8/16k context, not the 1M context.
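
      If you want a bigger window, you can bake one into a Modelfile. A sketch, assuming the granite4 micro tag on the Ollama registry (tag names may differ):

      cat > Modelfile <<'EOF'
      FROM granite4:micro
      PARAMETER num_ctx 131072
      EOF
      ollama create granite4-long -f Modelfile
      ollama run granite4-long

      PARAMETER num_ctx is standard Modelfile syntax; how far you can actually push it is bounded by your RAM well before the advertised 1M.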

    • Flere-Imsaho 2 days ago

      (I've only just started running local LLMs, so excuse the dumb question.)

      Would Granite run with llama.cpp and use Mamba?

      • RossBencina 2 days ago

        Last I checked, Ollama inference is based on llama.cpp, so either Ollama has not caught up yet or the answer is no.

        EDIT: Looks like Granite 4 hybrid architecture support was added to llama.cpp back in May: https://github.com/ggml-org/llama.cpp/pull/13550

        • magicalhippo 2 days ago

          > Last I checked Ollama inference is based on llama.cpp

          Yes and no. They've written their own "engine" using GGML libraries directly, but fall back to llama.cpp for models the new engine doesn't yet support.

  • serioussecurity 2 days ago

    Every technical paper I've read that IBM has published at an ML conference has been p-hacked to hell. Stay away.

    • soganess 2 days ago

      Links? Maybe just paper titles?