Nvidia Stock Rises. AMD's New AI Chip Is Not Competitive

(barrons.com)

34 points | by nabla9 17 hours ago

48 comments

  • markhahn 15 hours ago

    the stock market is just a casino, and this article is just an attempt to pump NVDA.

    especially if you line up availability dates, AMD is competitive. not to mention price!

    if there's any news here, it's that the recent announcements are just small tweaks of the MI300. then again, nvidia has announced nothing revolutionary either. does the market (people doing AI, not biz/stock morons) actually want something revolutionary?

    • hashtag-til 15 hours ago
    • vladimirralev 14 hours ago

      It used to be a casino. Now it's a matter of monetary policy transmission and a national security issue. They fully and shamelessly embraced the wealth effect as the driving force of the economy. The market always has to go up, or else bad things happen.

    • jl6 14 hours ago

      Yes, the market wants a revolution in the amount of VRAM you can fit on a GPU.

      • doctorpangloss 14 hours ago

        Maybe DRAM manufacturers should be making GPUs.

        • to11mtm 14 hours ago

          TBH, it would be interesting to think about some variant of 3D XPoint on an inference-oriented GPU device.

    • bigdickinurmum 15 hours ago

      [flagged]

  • nabla9 17 hours ago

    Nvidia’s Blackwell GPUs are sold out for the next 12 months. This likely means their profit margins jump again when sales from those chips come in.

    Bernstein Research:

    MI325X: "Training performance seems 1 year behind Blackwell (on par with H200) while inferencing is only slightly better,"

    MI350X: "Even the company's MI350X tease shows raw performance that, while on par with Blackwell on paper, arrives a year later, just in time to compete against Nvidia's next offerings. Hence we do not see even AMD's accelerated road map closing the competitive gap."

    https://www.businessinsider.com/amd-latest-gpu-still-lags-be...

    • linotype 16 hours ago

      Blackwell being sold out for 12 months sounds like market opportunity for AMD. A chip is better than none.

      • nabla9 14 hours ago

        Nvidia will compete on pricing, using the H100 and H200 against AMD's latest. Basically, AMD will get sales, but its profit margins are nowhere near Nvidia's.

        Operating income to sales (ttm):

          AMD:   4%
          NVDA: 62%
        
        Nvidia and AMD are both competing as TSMC customers for the fab supply they need to order 1-2 years in advance. Apple and Nvidia are served first because they are the best-paying customers.

        ps. When Intel was the big dog, it almost killed AMD every time AMD made an x86 chip that was better than Intel's. All Intel had to do was sacrifice a little profit margin, wiping out AMD's. This time demand is so high that it's not going to happen, so AMD can enjoy the piece of the pie it has.

        • kolbe 13 hours ago

          How did that strategy work out for Intel?

      • gdiamos 15 hours ago

        H100s are available at increasingly affordable prices

        • electronbeam 14 hours ago

          Depends on what yield AMD has; they may be able to undercut that if aiming for market share rather than revenue.

          The marginal cost of each chip is measured in dollars. The 5-digit prices for H100s are just margins to be undercut.

        • robotnikman 15 hours ago

          Now if only they could be affordable for the average consumer... A man can dream...

          • blihp 14 hours ago

            When we get to the end of the hype cycle, they will be. The only question is if people will be interested in footing the power bill for any of the ocean of obsolete data center GPUs that companies will be dumping.

  • dboreham 16 hours ago

    Does it need to be performance competitive if the price is right?

    • 15 hours ago
      [deleted]
    • nabla9 14 hours ago

      Operating income to sales:

        AMD:  4%
        NVDA: 62%
      
      
      Who do you think has the pricing power?

      • manquer 13 hours ago

        Hardly a 1:1 comparison. AMD is not only a GPU maker; GPUs aren't even its largest revenue contributor, and the margins on its x86 CPUs and the various custom processors it makes (like for the PlayStation) are wafer thin.

    • transcriptase 15 hours ago

      Gestures broadly at Intel ARC

      • moffkalast 15 hours ago

        The A770 launched costing 100 USD more than the RTX 4060, and it pulls twice the wattage while underperforming it in every way.

        • to11mtm 14 hours ago

          Intel continues to toss a stick in their own front wheel and blame whatever.

          If they made an A775 or whatever with 32GB and sold it for 500, hell even 600 bucks, a lot of people would buy it, myself likely included. Lots of people would be happy with a 'slow but can fit big models and still faster than falling back to the CPU' card.

          • throwaway48476 13 hours ago

            They used the same stick on the homelab users who wanted desktop SR-IOV in the A770, which was fused off. Intel is a very uncompetitive company.

          • moffkalast 14 hours ago

            Yeah, I mean getting 24GB on one card is extremely expensive, and it's not the raw GDDR costs; the price is just artificially inflated. Intel could easily do that, and even if prompt processing is supposedly really lackluster on the Arcs right now, people would move literal heaven and earth to get it optimized.

            • to11mtm 13 hours ago

              > Yeah I mean getting 24GB on one card is extremely expensive and it's not the raw GDDR costs, it's just artificially inflated

              I've gotten on this thought train enough times that I started doing some digging...

              They might need a -little- extra work. The Ada 5000 had 32GB on a 256-bit bus, but it's a bit of an outlier... I say it that way because as I did my digging, I found that most boards use 8x16Gb (gigabit) modules, resulting in a 256-bit width and 16GB of memory. A 4090 gets to 24GB by going to 384-bit.

              Obviously, upping the width would potentially be a redesign, but we can again point to the Ada 5000 as a case where 32GB was done on a 256-bit bus. Might be some extra work somewhere, but it's doable.

              Even my quoted price is likely giving Intel and/or board partners some margin, unless I'm missing something about DRAM costs and the ability to get the densities required. But as it stands, a 16GB A770 is somewhere in the 250-300 USD range. A 32GB version for $600 should actually give them good margin compared to the 16GB A770s.
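The chip-count arithmetic in that subthread can be sketched as a quick back-of-the-envelope check. This is a sketch under the usual GDDR6 assumptions (32-bit per-chip interface, densities quoted in gigabits); `board_config` is a made-up helper name, not a real API:

```python
# Rough GDDR board math: each GDDR6 chip exposes a 32-bit interface, and
# per-chip density is quoted in gigabits (Gb).
def board_config(num_chips: int, chip_density_gbit: int, clamshell: bool = False):
    """Return (bus_width_bits, capacity_GB) for a GDDR board layout.

    In clamshell mode two chips share each 32-bit channel, doubling
    capacity without widening the bus.
    """
    channels = num_chips // 2 if clamshell else num_chips
    bus_width = channels * 32
    capacity_gb = num_chips * chip_density_gbit // 8  # gigabits -> gigabytes
    return bus_width, capacity_gb

# Typical 16GB card: 8 x 16Gb chips -> 256-bit bus, 16GB
print(board_config(8, 16))                    # (256, 16)
# 4090-style: 12 x 16Gb chips -> 384-bit bus, 24GB
print(board_config(12, 16))                   # (384, 24)
# 32GB on a 256-bit bus: 16 x 16Gb chips clamshelled, as on the Ada 5000
print(board_config(16, 16, clamshell=True))   # (256, 32)
```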

  • petermcneeley 15 hours ago

    Can someone summarize what they mean by not competitive? Yes, a new chip from AMD will not compete with CUDA (a software ecosystem).

  • 2c2c2c 15 hours ago

    Is it naive to look at the market and just assume there's 500B of market cap screaming for AMD to throw everything at a competent CUDA competitor, and eventually see commoditization here? Is this not possible (why hasn't this happened)?

    • zmmmmm 15 hours ago

      My take in a nutshell: when raw performance and micro-optimisation are the core value proposition, portability and alternative equivalent technologies stop being viable competitive levers. There is just too much sunk into micro-optimisation around Nvidia's architecture at every layer of the stack.

      The only thing that will save us, I think, is competition authorities finally waking up and forcing Nvidia to share its tech at some level: the equivalent of the cross-licensing deals between Intel and AMD that kept the x86 architecture from being a monopoly (sort of).

    • jjmarr 15 hours ago

      That takes time, which is why AMD is making acquisitions and hiring like crazy.

      • p1esk 14 hours ago

        AMD had 12 years to become competitive. The deep learning revolution started in 2012.

        • trynumber9 14 hours ago

          AMD was nearly bankrupt for the first half of that. In my opinion it was a herculean feat that they survived at all.

          • ErneX 14 hours ago

            Agree. AMD wasn't even at a 2 billion market cap back then; now it's almost 272 billion.

            The first Ryzens launched just 7 years ago.

          • p1esk 14 hours ago

            Whose fault is that?

            • kuschku 9 hours ago

              According to the US and EU's highest courts, Intel. Not entirely sure what you're trying to argue.

            • to11mtm 13 hours ago

              Whoever thought sticking with Bulldozer was a good idea while the GloFo thing was happening. The move towards more 'normal' process tech, vs the tighter coupling when they owned the fabs, led to probably at least a couple of missteps. And then there was all the other weirdness with Bulldozer...

              Jaguar saved their butts via the XB1/PS4 to a large extent (and my Puma laptop was way nicer than the Atom laptops of its day), but Bulldozer was a huge stumble for the company.

              I -will- say, around 2014-2015 I tossed together a 'low-end' 15h build (probably a Steamroller) and it was a competent machine, albeit relegated to 'retro-ish' Steam games and DVR purposes. The Radeon core's 3D performance at least did a good job of balancing real-world performance against a Core i3.

            • 13 hours ago
              [deleted]
            • throwaway48476 13 hours ago

              A lot of shady exclusivity-tied MDF deals.

        • eptcyka 14 hours ago

          12 years ago, Nvidia cared more about gamers than GPGPU, and 8-bit floats were definitely not something anyone optimized for.

          • manquer 13 hours ago

            And 6 years ago they cared about crypto miners (whether they wanted to admit it publicly or not).

            Nvidia really has thick plot armor to be able to ride two massive hype waves.

        • 13 hours ago
          [deleted]
    • dkasper 15 hours ago

      Sure it’s possible, but it’s also incredibly difficult.

  • 14 hours ago
    [deleted]
  • steeve 15 hours ago

    I mean, we (zml) clocked the MI300X ($20k) at +30% over the H100 ($30k).

    So…
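Taking those figures at face value (the +30% and the two prices are from the comment above; this is a sanity check, not a benchmark), the implied perf-per-dollar gap works out to roughly 2x:

```python
# Comment's numbers: MI300X ~1.3x H100 throughput, at $20k vs $30k.
h100_price, mi300x_price = 30_000, 20_000
h100_perf, mi300x_perf = 1.0, 1.3  # throughput normalized to H100 = 1.0

ratio = (mi300x_perf / mi300x_price) / (h100_perf / h100_price)
print(f"MI300X: {ratio:.2f}x the performance per dollar of an H100")  # ~1.95x
```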

    • wmf 14 hours ago

      That was then. Now it's about MI325 vs. B100.

    • peterhhchan 15 hours ago

      What about power consumption? Edit: My understanding from about a year ago is that AMD's and Nvidia's chips were priced similarly in terms of performance per watt.