While in general I am very much in favour of playing both sides of an abstraction, I would argue that RTL is nowhere near the top of the stack.
In the GPU world, you have games, which are built on game engines, which are built against an API, which is implemented in a driver (which contains a compiler!), which communicates through OS abstractions with hardware, which is written in an HDL (this is where you write RTL!), which is then laid out in silicon.
Now each of these parts of the stack has ridiculous complexity in it, and there are definitely things in the upper layers of the stack that impact how you want to write your RTL (and vice versa). So if your stack knowledge stops at RTL (which, honestly, there is absolutely nothing wrong with!), there is still lots of fun complexity left in the stack.
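(For anyone whose picture of the stack stops above RTL: "register-transfer level" just means describing which values get loaded into which registers on each clock edge. A toy sketch of that idea in Python, purely illustrative; real designs are written in an HDL such as Verilog or VHDL:)

    # Toy register-transfer behaviour: combinational logic computes the next
    # value of every register from the current values, then everything updates
    # at once on the clock edge. Real RTL expresses this in Verilog/VHDL.
    def tick(state, enable, data_in):
        nxt = dict(state)                                 # next-state logic
        if enable:
            nxt["acc"] = (state["acc"] + data_in) & 0xFF  # 8-bit accumulator register
        return nxt                                        # "clock edge": registers update

    state = {"acc": 0}
    for cycle, sample in enumerate([3, 5, 7, 11]):
        state = tick(state, enable=True, data_in=sample)
        print(f"cycle {cycle}: acc = {state['acc']}")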
OpenROAD comes to mind, as the main open source self-optimizing RTL-to-GDS effort. Started in part with DARPA help. Lots of derivative efforts/projects!
There was a ton of energy & hype & visibility for the first couple of years after launch (2018), but it's been pretty quiet since, with a lot less PR. I wish there was more visibility into the evolution of this and related efforts. https://github.com/The-OpenROAD-Project/OpenROAD?tab=readme-...
Unlike software product development, chip manufacture requires 7-8 figure (USD) development budgets even when using a foundry. That is per iteration. And unlike JS development, there aren't massive volumes of internet resources to train LLMs on to produce usable RTL and similar code.
> chip manufacture requires 7-8 figure (USD) development budgets
I'm quietly holding out hope for atomicsemi.com or someone like that.
> Unlike JS development, there aren't massive volumes of internet resources to train LLMs on to produce usable RTL and similar code.
Each major manufacturer/designer has local data that they can use to train their LLMs. I'm sure Synopsys/Mentor/Siemens are also working towards providing solutions their customers can use to train their own LLMs.
This is pretty much the business model of Adobe in creating their 'copyright compliant' image generation features.
The companies with the biggest treasure trove of non-public training data will likely have a technical advantage. _If_ they can purposefully use that data to create better models for their niche problem domain.
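As a rough illustration of what "train their own LLMs on local data" could look like in practice (nothing vendor-specific; the model name and corpus path below are placeholders, not anyone's actual workflow), the usual recipe is just continued fine-tuning of an open code model on the in-house RTL corpus:

    # Sketch only: fine-tune an open causal code model on proprietary Verilog.
    # BASE_MODEL and the corpus path are assumptions, not a vendor product.
    from datasets import load_dataset
    from transformers import (AutoModelForCausalLM, AutoTokenizer,
                              DataCollatorForLanguageModeling, Trainer,
                              TrainingArguments)

    BASE_MODEL = "bigcode/starcoderbase-1b"      # any open causal LM for code
    CORPUS = {"train": "internal_rtl/**/*.v"}    # hypothetical in-house Verilog files

    tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
    tokenizer.pad_token = tokenizer.eos_token
    model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)

    # Treat each source file as plain text and tokenize it into training samples.
    raw = load_dataset("text", data_files=CORPUS)
    tokenized = raw.map(
        lambda batch: tokenizer(batch["text"], truncation=True, max_length=1024),
        batched=True, remove_columns=["text"])

    Trainer(
        model=model,
        args=TrainingArguments(output_dir="rtl-lm",
                               per_device_train_batch_size=2,
                               num_train_epochs=1),
        train_dataset=tokenized["train"],
        data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
    ).train()

The hard part isn't the training loop; it's curating and cleaning the proprietary corpus and evaluating the result against real design tasks, which is exactly where the data advantage matters.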
Full stack-ish, when you have an in-house layout guy a few cubes over and are old enough to have done schematic-captured ASICs (Cadence Edge!) and gate-level emergency fixes. But alas, Catapult C is the new religion. Old dog, new tricks and all that.
Almost everyone on my team is "full stack" (nobody has ever called it that). I'm not convinced by the article's argument, though. I guess it would allow us to hire worse people, but I'm not sure that's a good thing to aim for.
Interesting, I thought (from the title) this would be about analogue vs digital designers. But the article is written in the context of a "fully digital" chip (i.e. the analogue stuff is abstracted away; all chips are analogue at the end of the day).
"Fullstack chip designers" exist in the mixed-signal world. Where the analogue component is the majority of the chip, and the digital is fairly simple, it's sometimes done by single person to save money. At least it was definitely a thing in the late 00's and early 2010's in small fabless design centers. Not sure about nowadays.
Of course it has to be another article about "AI". It wouldn't be on HN if it's not about "AI". /s