14 comments

  • jotux an hour ago

    Not precisely what you were looking for, but this was going around yesterday: https://rasim.pro/blog/how-to-install-deepseek-r1-locally-fu...

  • billconan 2 days ago

    I guess I would buy Nvidia Digits https://www.nvidia.com/en-us/project-digits/

    • tomcam 13 hours ago

      Is it available for purchase?

    • satvikpendem 2 days ago

      This is exactly what I'd recommend too; it's cheaper than buying GPUs individually, and Digits has way more VRAM.

    • siltcakes 2 days ago

      Thank you! This looks awesome!

  • mikewarot 6 hours ago

    I used to think GPUs were the way to go, but now my goal is to get a used server with a terabyte of RAM so I can run the full-size DeepSeek R1.
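A back-of-the-envelope sketch of why a terabyte-class server is the target here: DeepSeek R1 has 671B parameters, so weight storage alone dominates the RAM budget. The quantization figures below are illustrative assumptions, and the estimate ignores KV cache and runtime overhead.

```python
# Rough RAM estimate for hosting the full DeepSeek R1 (671B parameters)
# at a couple of common precisions. Excludes KV cache and overhead.

PARAMS = 671e9  # DeepSeek R1 total parameter count

def model_ram_gb(bits_per_weight: float) -> float:
    """Approximate weight storage in GiB for a given bits-per-weight."""
    return PARAMS * bits_per_weight / 8 / 2**30

for name, bits in [("FP8 (native)", 8), ("Q4_K_M (~4.5 bpw, assumed)", 4.5)]:
    print(f"{name}: ~{model_ram_gb(bits):.0f} GiB")
```

Even a 4-bit quant still needs hundreds of GiB, which is why commenters reach for big-RAM servers or 192GB Macs rather than consumer GPUs.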

  • cameron_b 2 days ago

    An unpopular but highly promising way to go if training is on your mind: 4x 7900 XTX cards, plus the nuts and bolts to feed them, could be a high point in GPU memory per dollar. There are folks using ROCm on that setup who are putting up interesting numbers for wall-clock time and power required per training run.
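A quick sketch of the memory-per-dollar argument. The street prices below are assumptions for illustration, not quotes; the VRAM figures (24GB per 7900 XTX, 32GB for the RTX 5090) are the published specs.

```python
# Rough $/GB-of-VRAM comparison: 4x 7900 XTX vs. a single RTX 5090.
# Card prices are illustrative assumptions.

def dollars_per_gb(price_usd: float, vram_gb: float) -> float:
    """Cost per GB of GPU memory."""
    return price_usd / vram_gb

amd_build = dollars_per_gb(4 * 900, 4 * 24)  # assumed ~$900/card, 24 GB each
rtx_5090 = dollars_per_gb(2000, 32)          # assumed $2000 MSRP, 32 GB
print(f"4x 7900 XTX: ${amd_build:.2f}/GB, RTX 5090: ${rtx_5090:.2f}/GB")
```

Under those assumptions the AMD build lands well below the 5090 on cost per GB, at the price of dealing with ROCm and multi-card plumbing.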

  • monroewalker 2 days ago
  • vunderba 2 days ago

    Be more specific - AI is a very broad field.

    Nvidia GPUs have the best inference speed (particularly for SDXL, Hunyuan, Flux, etc.), but unless you're buying several used 3090s SLI-style, you're going to have to split larger LLM GGUFs across main memory and the GPU. I'm excluding the RTX 5090, since two of them (plus tax) would basically blow your budget out of the water.

    With Apple I think you can get up to 192GB of shared memory allowing for very large LLMs.

    Another consideration is your experience level. Unless you want to shell out even more money, you'll likely have to build the PC yourself. It's not hard, but it's definitely more work than just grabbing a Mac Studio from the nearest Apple Store.
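The main-memory/GPU split mentioned above can be sketched as a layer-offload calculation, the kind llama.cpp performs when you ask it to put some transformer layers on the GPU. All sizes here are illustrative assumptions, not measured numbers.

```python
# Sketch: how many transformer layers of a GGUF model fit in VRAM,
# leaving headroom for KV cache and buffers. The rest stays in main memory.

def layers_on_gpu(model_gb: float, n_layers: int,
                  vram_gb: float, reserve_gb: float = 1.5) -> int:
    """Number of layers that fit in VRAM, keeping reserve_gb free."""
    per_layer_gb = model_gb / n_layers
    usable_gb = max(vram_gb - reserve_gb, 0.0)
    return min(n_layers, int(usable_gb / per_layer_gb))

# e.g. an assumed 40 GB 70B-class GGUF with 80 layers on a 24 GB 3090:
print(layers_on_gpu(40, 80, 24))
```

Anything that doesn't fit is streamed from system RAM each token, which is why the split matters so much for larger models.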

    • zippyman55 8 hours ago

      I bought a fully tricked-out Mac Studio and had planned to use it, but I found I was far too busy in the final stretch of my career, and it actually sits idle. While I don't regret my decision based on what I knew at the time, my life's trajectory has left it unused. Now I want to get back to doing LLM work on it for fun, along with Julia and R programming.

  • giardini a day ago

    https://www.amazon.com/Yassk-Fortune-Telling-Floating-Answer...

    AI in the palm of your hand! Best deal evarrr!

  • PaulHoule 2 days ago

    I just got a Mac Mini with maximum specs (can't believe how small the box it came in was!), and that's not a bad choice. As you say, it has the advantage of handling large models. I think the 5090 will outperform it in terms of FLOPS, but it only comes with 32GB compared to the 64GB you can get on an M4 Mini. The 5090 itself will be $2000 (if you can get it at that price) compared to $2500 for the maxed-out M4 Mini, and you'll probably spend at least $1k on the rest of a PC worthy of the 5090 card.

  • throwaway519 16 hours ago

    Why get an Apple? Even the keyboard lacks keys required for development. They're purely tech-bro poser machines.

    • tomcam 13 hours ago

      What keys does it lack for development? I haven’t noticed any missing in 40 years of coding on Macs.