The extent to which the TPU architecture is purpose-built also isn't something that happens in a single design generation. Ironwood is the seventh generation of TPU, and that matters a lot.
The Scaling ML textbook also has an excellent section on TPUs. https://jax-ml.github.io/scaling-book/tpus/
I also enjoyed https://henryhmko.github.io/posts/tpu/tpu.html https://news.ycombinator.com/item?id=44342977 .
The work that XLA & schedulers are doing here is wildly impressive.
This feels drastically harder to work with than Itanium must have been: a ~400-bit VLIW word issued across extremely diverse execution units. The workload is different, it's not general purpose, but it's still awe-inspiring to know not just that they built the chip but that the software folks can actually use such a wildly weird beast.
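To make the problem concrete, here's a toy sketch of the bundle-packing a VLIW compiler has to do: pack independent ops into wide instruction words, one slot per functional unit. The slot names and the op list below are invented for illustration and don't reflect the real TPU ISA.

```python
# Toy VLIW bundle packer: greedily pack ops into bundles where each
# functional-unit slot can issue at most one op per cycle, and an op
# may issue only after all of its dependencies issued in earlier bundles.
# Slot set and ops are made up; nothing here is the actual TPU format.

SLOTS = ("scalar", "vector", "matrix", "load", "store")

def pack(ops):
    """ops: list of (name, slot, deps) tuples. Returns a list of bundles,
    each a dict mapping slot -> op name."""
    done = set()          # ops issued in previous bundles
    pending = list(ops)
    bundles = []
    while pending:
        bundle = {}
        issued_now = []
        for op in list(pending):
            name, slot, deps = op
            # Issue if the slot is free and every dependency already ran.
            if slot not in bundle and all(d in done for d in deps):
                bundle[slot] = name
                issued_now.append(op)
                pending.remove(op)
        if not bundle:
            raise ValueError("dependency cycle")
        done.update(name for name, _, _ in issued_now)  # visible next cycle
        bundles.append(bundle)
    return bundles

ops = [
    ("ld_a", "load",   []),
    ("ld_b", "load",   []),            # fights ld_a for the single load slot
    ("mm",   "matrix", ["ld_a", "ld_b"]),
    ("relu", "vector", ["mm"]),
    ("st_c", "store",  ["relu"]),
]
print(pack(ops))
```

Even in this tiny example the two loads serialize on the shared load slot, so the schedule stretches to five bundles; the compiler's whole job is finding orderings where the slots stay full instead.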
I wish we saw more industry uptake for XLA. Uptake's not bad, per se: there's a bunch of different hardware it can target! But what amazing secret sauce: it's open source, and it doesn't feel like there's the industry rally behind it that it deserves. It feels like Nvidia is only barely beginning to catch up, to dig a new moat, with the just-announced Nvidia Tiles. Such huge overlap. AFAIK (please correct me if wrong), XLA isn't at present particularly useful for scheduling across machines, is it? https://github.com/openxla/xla
I do think it's a lot simpler than the problem Itanium was trying to solve. Neural nets are just way more regular in nature, even with block sparsity, compared to generic consumer pointer-hopping code. I wouldn't call it "easy", but we've found that writing performant NN kernels for a VLIW chip is in practice a lot more straightforward than for other architectures.
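A concrete way to see that regularity: every address a matmul kernel touches is a pure function of the loop indices, so the entire access stream can be enumerated (and therefore scheduled and prefetched) before the program ever runs. A toy illustration, with a made-up row-major layout and tile size:

```python
# Enumerate the full memory-access trace of a tiled matmul ahead of time.
# The addresses depend only on loop bounds, never on the data, which is
# exactly what a static (VLIW-friendly) scheduler needs. Sizes, layout,
# and tile shape here are arbitrary for illustration.

def matmul_trace(M, N, K, T):
    """Return (kind, flat_index) for every A-read, B-read, and C-write
    of an MxK @ KxN matmul processed in TxT output tiles, row-major."""
    trace = []
    for i0 in range(0, M, T):                 # tile rows
        for j0 in range(0, N, T):             # tile cols
            for i in range(i0, min(i0 + T, M)):
                for j in range(j0, min(j0 + T, N)):
                    for k in range(K):
                        trace.append(("A", i * K + k))
                        trace.append(("B", k * N + j))
                    trace.append(("C", i * N + j))
    return trace

t = matmul_trace(4, 4, 4, 2)
print(len(t))   # (2*K + 1) accesses per output element, 16 elements
```

A pointer-chasing workload (linked lists, B-trees) has no analogue of this: each address there is known only after the previous load returns, which is exactly what killed static scheduling on Itanium-style general-purpose code.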
JAX/XLA does offer some really nice tools for doing automated sharding of models across devices, but for really large performance-optimized models we often handle the comms stuff manually, similar in spirit to MPI.
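For flavor, the collective we most often hand-roll is the classic ring all-reduce (the workhorse behind gradient averaging). Here's a single-process toy that simulates devices as list entries; there's no real transport, the indices just follow the textbook reduce-scatter + all-gather formulation:

```python
# Toy ring all-reduce: reduce-scatter then all-gather over n simulated
# devices. Afterwards every device's buffer equals the elementwise sum.
# Pure Python simulation; a real implementation moves chunks over links.

def ring_allreduce(grads):
    """grads: one equal-length list per simulated device. Returns the
    per-device buffers, each now holding the elementwise sum."""
    n = len(grads)
    buf = [list(g) for g in grads]
    size = len(buf[0])
    assert size % n == 0, "pad so the vector splits into n equal chunks"
    c = size // n

    def add_chunk(dst, src, idx):       # dst's chunk idx += src's chunk idx
        lo = (idx % n) * c
        for i in range(lo, lo + c):
            buf[dst][i] += buf[src][i]

    def copy_chunk(dst, src, idx):      # dst's chunk idx  = src's chunk idx
        lo = (idx % n) * c
        buf[dst][lo:lo + c] = buf[src][lo:lo + c]

    # Phase 1, reduce-scatter: after n-1 steps, device d holds the fully
    # summed chunk (d+1) % n.
    for step in range(n - 1):
        for d in range(n):
            add_chunk((d + 1) % n, d, d - step)
    # Phase 2, all-gather: circulate each finished chunk around the ring.
    for step in range(n - 1):
        for d in range(n):
            copy_chunk((d + 1) % n, d, d + 1 - step)
    return buf

grads = [[1, 2, 3, 4], [10, 20, 30, 40]]
print(ring_allreduce(grads))   # [[11, 22, 33, 44], [11, 22, 33, 44]]
```

The appeal of the ring is that each device sends and receives only 2·(n-1)/n of the data regardless of n, which is why it shows up in both MPI implementations and NCCL-style GPU collectives.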
Thanks for sharing this. I agree w.r.t. XLA. I've been moving to JAX after many years of using torch and XLA is kind of magic. I think torch.compile has quite a lot of catching up to do.
> XLA isn't at present particularly useful at scheduling across machines,
I'm not sure if you mean compiler-based distributed optimizations, but JAX does this with XLA: https://docs.jax.dev/en/latest/notebooks/Distributed_arrays_...
In Itanium's heyday, the compilers and libraries were pretty good at handling HPC workloads, which is really the closest anyone was running then to modern NN training/inference. The problem with Itanium and its compilers was that people obviously wanted to run workloads that looked nothing like HPC (databases, web servers, etc), and the architecture and compilers weren't very good at that. There have always been very successful VLIW-style architectures in more specialized domains (graphics, HPC, DSP, now NPUs); it just hasn't worked out well for general-purpose processors.
This was a nice breakdown. I always feel most TPU articles skip over the practical parts. This one actually connects the concepts in a way that clicks.
Are TPUs still stuck to their weird Google bucket thing when using GCP? I hated that.
I'm surprised the perspective of China making TPUs at scale in a couple of years is not bigger news. It could be a deadly blow for Google, NVIDIA, and the rest. Combine it with China's nuclear base and labor pool. And the cherry on top, America will train 600k Chinese students as Trump agreed to.
The TPUv4 and TPUv6 docs were stolen by a Chinese national in 2022/2023: https://www.cyberhaven.com/blog/lessons-learned-from-the-goo... https://www.justice.gov/opa/pr/superseding-indictment-charge...
And that's just 1 guy that got caught. Who knows how many other cases were there.
A Chinese startup is already making clusters of TPUs and has revenue https://www.scmp.com/tech/tech-war/article/3334244/ai-start-...
Manufacturing is the hard part. China certainly has the knowledge to build a TPU architecture without needing to steal the plans. What they don't have is the ability to actually build the chips, and that holds even though lithography plans have also been stolen.
There is a dark art to semiconductor manufacturing that pretty much only TSMC really has the wizards for. Maybe Intel and Samsung a bit too.
> What they don't have is the ability to actually build the chips.
China has fabs. Most are older nodes and are used to manufacture chips used in cars and consumer electronics. They have companies that design chips (manufactured by TSMC), like the Ascend 910, which are purpose built for AI. They may be behind, but they’re not standing still.
The software is the hard part. Western software still outclasses what Chinese companies produce by a good amount.
This. The amount of investment in CUDA is high enough that most companies won't even consider the competition, even at a lower cost.
We desperately need more open frameworks for competition to work
For China there is no plan B for semiconductor manufacturing. Invading Taiwan would be a dice roll and the consequences would be severe. They will create their own SOTA semiconductor industry. Same goes for their military.
The question is when? Does that come in time to deflate the US tech stock bubble? Or will the bubble start to level out and reality catch up, or will the market crash for another reason beforehand?
China has their own fabs. They are behind TSMC in terms of technology, but that doesn't mean they don't have fabs. They're currently ~7nm AFAIK. That's behind TSMC, but also not useless. They are obviously trying hard to catch up. I don't think we should just imagine that they never will. China has a lot of smart engineers and they know how strategically important chip manufacturing is.
This is like that funny idea people had in the early 2000s that China would continue to manufacture most US technology but could never design their own competitive tech. Why would anyone think that?
Wrt invading Taiwan, I don't think there is any way China can get TSMC intact. If they do invade Taiwan (please God no), it would be a horrible bloodbath. Deaths in the hundreds of thousands and probably relentless bombing. Taiwan would likely destroy its own fabs to avoid them being taken. It would be sad and horrible.
> Why would anyone think that?
That'd be the belief in good old American exceptionalism. Up until recently, a common meme on HN was "freedom" is fundamental to innovation, and naturally the country with the most Freedom(TM) wins. This even persisted after it was clear that DJI was kicking all kinds of ass, outcompeting multiple western drone companies.
> Wrt invading Taiwan, I don't think there is any way China can get TSMC intact.
There are so many trade and manufacturing links between China and Taiwan that an outright war would be economically disastrous for both countries.
That doesn't mean they won't try anyway; political ideology often trumps rational planning.
If they invade Taiwan, we will scuttle the plants and direct ASML to disable their machines which they will do because that’s the condition under which we gave them the tech. They’re not going to get it this way.
They’ll just catch the next wave of tech or eventually break into EUV.
IMO the most likely answer is that ASML funds a second source for the optics that isn't US-controlled and starts shipping to China. The US is losing influence fast.
Lot of retired fab folks in the Austin area if you needed to spin up a local fab. It's really not a dark art, there are plenty of folks that have experience in the industry.
This is sort of like saying there are lots of kids in the local community college shop class if you want to spin up an F1 team.
Knowing how to make 2008-era chips doesn't get you to making a handful of atoms function as a transistor in current SOTA chips. There are probably 100 people on earth who know how to do that, and the majority of them are in Taiwan.
Again, China has literally stolen the plans for EUV lithography, years ago, and still cannot get it to work. Even Samsung and Intel, using the same machines as TSMC, cannot match what they are doing.
It's a dark art in the most literal sense.
Never mind that these new cutting-edge fabs cost ~$50 billion each.
I've always wondered: if you have fuck-you money, wouldn't it be possible to build GPUs to do LLM matmuls with 2008 technology? Again, assuming energy and cooling costs don't matter.
Building clean rooms at this scale is a limitation in itself. Just getting the factory set up and the machines installed so they don't generate particulate matter in operation is an art that compares in difficulty to making the chips themselves.
Energy, cooling, and how much of the building you're taking up do matter. They matter less, and in a more manageable way, for hyperscalers with long-established resource-management practices across lots of big data centers, because they can phase in new technologies as they phase out the old. But it's a lot more daunting to think about building a data center big enough to compete with one full of Blackwell systems that are more than 10 times more performant per watt and per square foot.
IIRC people have gotten LLMs to run on '80s hardware. Inference isn't overly compute heavy.
The killer really is training, which is insanely compute-intensive and has only recently become practical in hardware at the scale needed.
You could probably train a GPT-2-sized model with a SOTA architecture on a 2008 supercomputer. It would take a while, though.
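Rough math on that, hedged heavily: the 6 · params · tokens training-FLOPs rule of thumb, GPT-2's 1.5B parameters, a Chinchilla-style token budget, and Roadrunner's ~1 PFLOP/s peak from 2008 are all order-of-magnitude assumptions.

```python
# Back-of-envelope: GPT-2-scale training on a 2008 petaflop machine.
# Every input here is an assumed round number, not a measurement.

params = 1.5e9          # GPT-2 (2019) parameter count
tokens = 30e9           # ~20 tokens per parameter, Chinchilla-style
train_flops = 6 * params * tokens            # ~2.7e20 FLOPs total

peak = 1.0e15           # Roadrunner (2008): first ~1 PFLOP/s machine
utilization = 0.2       # generous for hardware never built for this
effective = peak * utilization               # sustained FLOP/s

days = train_flops / effective / 86400
print(f"~{days:.0f} days")                   # -> "~16 days"
```

So "a while", but weeks, not decades; the real blockers would be memory capacity, interconnect, and keeping a 2008 machine alive that long.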
The mask shops at TSMC and Samsung kind of are a dark art. It's one of the interesting things about the contract manufacturing business in chips. It's not just a matter of having access to state of the art equipment.
Yeah I'm terrified that TPUs will get cheaper, that would be awful.
>It could be a deadly blow for Google, NVIDIA, and the rest.
How would this be a deadly blow to Google? Google makes TPUs for their own services and products, avoiding paying the expensive nvidia tax. If other people make similar products, this has effectively zero impact on Google.
nvidia knew their days were numbered, at least as owners of the whole market. And China hardly had to steal the plans for a TPU to make one; an FMA/MAC unit is actually a surprisingly simple bit of hardware to design. Everyone is adding "TPUs" to their chips: Apple, Qualcomm, Google, AMD, Amazon, Huawei, nvidia (that's what tensor cores are), and everyone else.
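To illustrate how simple the core really is: a TPU-style systolic array is essentially an N x N grid of MAC units, each doing one multiply-accumulate per cycle while operands stream through. A toy simulation (output-stationary, no pipelining or operand skewing modeled):

```python
# Toy output-stationary systolic array: one accumulator per MAC cell,
# one "wavefront" of operands per cycle. The single line doing
# acc += a * b is the entire compute story of the array.

def systolic_matmul(A, B):
    n = len(A)
    acc = [[0.0] * n for _ in range(n)]   # one accumulator per MAC cell
    for k in range(n):                    # one wavefront per cycle
        for i in range(n):
            for j in range(n):
                # The whole functional unit: one fused multiply-add.
                acc[i][j] += A[i][k] * B[k][j]
    return acc

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(systolic_matmul(A, B))   # [[19.0, 22.0], [43.0, 50.0]]
```

The hard engineering is everything around this grid (feeding it from memory at full rate, numerics, packaging), not the MAC itself, which is why so many vendors have shipped one.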
And that startup isn't sitting on some big secret. Huawei already has solutions matching the H20. Once the specific need that can be serviced by an ASIC is clear, everyone starts building it.
>America will train 600k Chinese students as Trump agreed to
What great advantage do you think this is?
America isn't remotely the great gatekeeper on this. If anything, Taiwan + the Netherlands (ASML) are. China would gain far more value from learning manufacturing and fabrication secrets than from cloning some specific ASIC.
>Combine it with China's nuclear base and labor pool. And the cherry on top, America will train 600k Chinese students as Trump agreed to.
I don't understand this part. What has a nuclear base got to do with chip manufacturing? And surely not all 600k students are learning chip design or stealing plans.
I assume the nuclear reactors are to power the data centers using the new chips. There have been a few mentions on HN about the US being very behind in building enough power plants to run LLM workloads
The frenetic pace of data center construction in the US means that nuclear is not a short-term option. No way are they going to wait a decade or more for generation to come on line. It’s going to be solar, batteries, and gas (turbines, and possibly fuel cells).
We should ask ourselves: is it worth ruining local communities in order to beat China in the global sphere?
Nuclear power is what they are talking about, not weapons.
I mean they have the power grid to run TPUs at 10x the scale of USA.
About students, have you seen the microelectronic labs in American universities lately? A huge chunk are Chinese already. Same with some of the top AI labs.
Thankfully LLMs are a dead end, so nobody will make it to AGI by just throwing more electricity at the problem. Now if we could only have a new AI winter we could postpone the end of mankind as the dominant species on earth by another couple of decades.