the stock market is just a casino, and this article is just an attempt to pump NVDA.
especially if you line up availability dates, AMD is competitive. not to mention price!
if there's any news here, it's that the recent announcements are just small tweaks of the MI300. then again, nvidia has announced nothing revolutionary either. does the market (people doing AI, not biz/stock morons) actually want something revolutionary?
It reminds me of this classic cartoon:
https://static01.nyt.com/images/2011/08/11/opinion/081111kru...
It used to be a casino. Now it's a matter of monetary policy transmission and a national security issue. They fully and shamelessly embraced the wealth effect as the driving force of the economy. Market always has to go up or else bad things are going to happen.
Yes, the market wants a revolution in the amount of VRAM you can fit on a GPU.
Maybe DRAM manufacturers should be making GPUs.
TBH, it would be interesting to think about some variant of 3D XPoint on an inference-oriented GPU device.
Nvidia’s Blackwell GPUs are sold out for the next 12 months. This likely means their profit margins will jump again when sales from those chips come in.
Bernstein Research:
MI325X: "Training performance seems 1 year behind Blackwell (on par with H200) while inferencing is only slightly better,"
MI350X: "Even the company's MI350X tease shows raw performance that, while on par with Blackwell on paper, arrives a year later, just in time to compete against Nvidia's next offerings. Hence we do not see even AMD's accelerated road map closing the competitive gap."
https://www.businessinsider.com/amd-latest-gpu-still-lags-be...
Blackwell being sold out for 12 months sounds like a market opportunity for AMD. A chip is better than none.
Nvidia will compete on pricing with the H100 and H200 against AMD's latest. Basically, AMD will get sales, but its profit margins are nowhere near where Nvidia's are.
Operating income to sales (ttm):
AMD: 4%
NVDA: 62%

Nvidia and AMD are both competing as customers for TSMC fab supply, which they need to order 1-2 years in advance. Apple and Nvidia are served first because they are the best-paying customers.

ps. When Intel was the big dog, it almost killed AMD every time AMD made an x86 chip that was better than Intel's. All Intel had to do was sacrifice a little profit margin, wiping out AMD's. This time demand is so high that that's not going to happen, so AMD can enjoy the piece it has.
How did that strategy work out for Intel?
H100s are available at increasingly affordable prices
Depends on what yield AMD has, they may be able to undercut that if aiming for marketshare rather than revenue.
The marginal cost of each chip is dollars. The 5 digit prices for H100s are just margins to be undercut
Now if only they could be affordable for the average consumer... A man can dream...
When we get to the end of the hype cycle, they will be. The only question is if people will be interested in footing the power bill for any of the ocean of obsolete data center GPUs that companies will be dumping.
Does it need to be performance competitive if the price is right?
Operating income to sales (ttm):
AMD: 4%
NVDA: 62%
Who do you think has the pricing power?

Hardly a 1:1 comparison. AMD is not only a GPU maker; GPUs are not even its largest revenue contributor, and the margins on its x86 CPUs and the various custom processors it makes (like for the PlayStation) are wafer thin.
Gestures broadly at Intel ARC
The A770 launched costing 100 USD more than the RTX 4060; it pulls twice the wattage while underperforming it in every way.
Intel continues to toss a stick in their own front wheel and blame whatever.
If they made an A775 or whatever with 32GB and sold it for 500, hell, even 600 bucks, a lot of people would buy it, myself likely included. Lots of people would be happy with a 'slow, but fits big models and still faster than falling back to the CPU' card.
They used the same stick on homelab users who wanted desktop SR-IOV in the A770, which was fused off. Intel is a very uncompetitive company.
Yeah, I mean, getting 24GB on one card is extremely expensive, and it's not the raw GDDR costs; the price is just artificially inflated. Intel could easily do that, and even if prompt processing is supposedly really lackluster on the Arcs right now, people would move literal heaven and earth to get it optimized.
> Yeah I mean getting 24GB on one card is extremely expensive and it's not the raw GDDR costs, it's just artificially inflated
I've gotten on this thought train enough times I started doing some digging...
They might need a -little- extra work. The RTX 5000 Ada had 32GB with a 256-bit bus, but it's a bit of an outlier... I say it that way because, as I did my digging, I found that most boards use 8x 16Gb (gigabit) modules, giving a 256-bit width and 16GB of memory. A 4090 gets to 24GB by going to 384-bit.

Obviously, upping the width would potentially be a redesign, but we can again point to the RTX 5000 Ada as a case where 32GB was done on a 256-bit bus. Might be some extra work somewhere, but it's doable.
Even my quoted price likely gives Intel and/or board partners some margin, unless I'm missing something about DRAM costs and the ability to get the required densities. As it stands, a 16GB A770 is somewhere in the 250-300 USD range. A 32GB version for $600 should actually give them good margin compared to 16GB A770s.
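The module math above can be sketched quickly. This is my own back-of-envelope model: the 32-bit-per-module interface and clamshell doubling are standard GDDR6 behavior, but the specific board configs are assumptions, not confirmed layouts.

```python
def vram_config(modules: int, density_gbit: int, clamshell: bool = False):
    """Return (bus_width_bits, capacity_GB) for a GDDR board layout.

    Each GDDR6 module exposes a 32-bit interface; in clamshell mode two
    modules share one 32-bit channel, doubling capacity without widening
    the bus.
    """
    per_module_gb = density_gbit / 8  # 16Gb module -> 2GB
    channels = modules // 2 if clamshell else modules
    return channels * 32, modules * per_module_gb

# Typical A770-style board: 8x 16Gb modules
print(vram_config(8, 16))                    # (256, 16.0)
# 4090-style board: 12x 16Gb modules
print(vram_config(12, 16))                   # (384, 24.0)
# RTX 5000 Ada-style clamshell: 16x 16Gb modules on a 256-bit bus
print(vram_config(16, 16, clamshell=True))   # (256, 32.0)
```

The last line is the interesting one: a hypothetical 32GB Arc wouldn't need a wider bus at all, just a clamshell layout with modules on both sides of the board.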
Just gonna drop this here: https://www.techpowerup.com/gpu-specs/radeon-pro-vii.c3575
That absolute bandwidth monster demonstrates that any bus width is entirely possible with some effort.
Can someone summarize what they mean by not competitive? Obviously a new chip from AMD will not compete with CUDA (a software ecosystem).
AMD's MI325 is slower (maybe 2x slower) than Nvidia's B100. Sure, it's cheaper and maybe consumes less power, but you need more racks, more networking, and more labor to get the same performance.
I can't find any information that show a difference as large as 2x. Do you have a specific comparison point in mind?
From Nvidia and AMD, I read sparse fp8 at 7 PFLOPs for B100 [0] vs 5.22 PFLOPs for mi325x [1]
Nvidia doesn't give the dense fp8 so that's the easiest comparison I could get.
[0] https://resources.nvidia.com/en-us-blackwell-architecture [1] https://www.amd.com/en/products/accelerators/instinct/mi300/...
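For what it's worth, the ratio of those two quoted peaks is easy to check. These are vendor peak numbers, not measured throughput:

```python
# Ratio of the vendor-quoted sparse fp8 peaks (PFLOPs) cited above.
b100_pflops = 7.0
mi325x_pflops = 5.22

ratio = b100_pflops / mi325x_pflops
print(f"B100 / MI325X = {ratio:.2f}x")  # ~1.34x on paper, not 2x
```

So on the one comparable spec the vendors publish, the gap is about a third, well short of the "2x slower" claim upthread.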
Is it naive to look at the market and just assume there's $500B of market cap screaming for AMD to throw everything at a competent CUDA competitor, and eventually see commoditization here? Is this not possible (why hasn't this happened)?
My take in a nutshell: when raw performance and micro-optimisation are the core value proposition, portability and equivalent alternative technologies stop being viable competitive levers. There is just too much sunk into micro-optimisation around Nvidia's architecture at every layer up and down the stack.

The only thing that will save us, I think, is when competition authorities finally wake up on this and force Nvidia to share its tech at some level. The equivalent of the cross-licensing deals between Intel and AMD that kept the x86 architecture from being a monopoly (sort of).
That takes time which is why AMD is making acquisitions and hiring like crazy.
AMD had 12 years to become competitive. The deep learning revolution started in 2012.
AMD was nearly bankrupt for the first half of that. In my opinion it was a herculean feat that they survived at all.
Agreed; they weren't even at a 2 billion market cap back then, and now it's almost 272 billion.
The first Ryzens launched just 7 years ago.
Whose fault is that?
According to the US and EU's highest courts, Intel. Not entirely sure what you're trying to argue.
Whoever thought sticking with Bulldozer was a good idea while the GloFo thing was happening. The move toward more 'normal' process tech, versus the tighter coupling when they owned the fabs, led to probably at least a couple of missteps. Of course, then there was all the other weirdness with Bulldozer...

Jaguar saved their butts via the XB1/PS4 to a large extent (and my Puma laptop was way nicer than the Atom laptops of its day), but Bulldozer was a huge stumble for the company.

I -will- say, around 2014-2015 I tossed together a 'low-end' family 15h build (probably a Steamroller) and it was a competent machine, albeit relegated to 'retro-ish' Steam games and DVR purposes. The Radeon core's 3D performance at least did a good job of balancing real-world performance against a Core i3.
A lot of shady exclusivity tied MDF deals.
12 years ago, Nvidia cared more about gamers than GPGPU, and 8-bit floats were definitely not something anyone optimized for.
And 6 years ago they cared about crypto miners (whether they wanted to admit it publicly or not).
Nvidia really has thick plot armor to be able to ride two massive hype waves.
Sure it’s possible, but it’s also incredibly difficult.
I mean, we (zml) clocked the MI300X ($20k) at +30% over the H100 ($30k).
So…
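Taking those numbers at face value (they're one shop's benchmark, not a general result), the perf-per-dollar gap compounds:

```python
# Perf-per-dollar from the zml figures above: MI300X ~1.3x the perf of
# an H100, at $20k vs $30k.
mi300x = {"rel_perf": 1.3, "price_usd": 20_000}
h100 = {"rel_perf": 1.0, "price_usd": 30_000}

ppd_mi300x = mi300x["rel_perf"] / mi300x["price_usd"]
ppd_h100 = h100["rel_perf"] / h100["price_usd"]
print(f"MI300X perf/$ advantage: {ppd_mi300x / ppd_h100:.2f}x")  # ~1.95x
```

A 30% perf edge at two-thirds the price works out to nearly double the performance per dollar, which is why the raw-speed comparison alone doesn't settle the argument.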
That was then. Now it's about MI325 vs. B100.
What about power consumption? edit: My understanding from about a year ago is that AMD and NVDA's chips were priced similarly in terms of performance per watt.
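One rough way to frame power consumption, using the sparse fp8 peaks quoted upthread and commonly cited board TDPs (700W for the B100, 1000W for the MI325X; treat both figures as assumptions, since real workloads rarely sit at peak FLOPs or full TDP):

```python
# Rough perf-per-watt sketch from vendor peak numbers; not measured data.
chips = {
    "B100":   {"pflops_fp8_sparse": 7.0,  "tdp_w": 700},
    "MI325X": {"pflops_fp8_sparse": 5.22, "tdp_w": 1000},
}

for name, c in chips.items():
    tflops_per_w = c["pflops_fp8_sparse"] * 1000 / c["tdp_w"]
    print(f"{name}: {tflops_per_w:.1f} TFLOPs/W")
```

On these paper numbers the perf-per-watt gap is closer to 2x, even though the raw peak gap is only ~1.34x, which matters a lot once you're paying the data-center power bill.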