How AlphaChip transformed computer chip design

(deepmind.google)

294 points | by isof4ult 3 days ago ago

174 comments

  • vighneshiyer 3 days ago ago

    This work from Google (original Nature paper: https://www.nature.com/articles/s41586-021-03544-w) has been credibly criticized by several researchers in the EDA CAD discipline. These papers are of interest:

    - A rebuttal by a researcher within Google who wrote this at the same time as the "AlphaChip" work was going on ("Stronger Baselines for Evaluating Deep Reinforcement Learning in Chip Placement"): http://47.190.89.225/pub/education/MLcontra.pdf

    - The 2023 ISPD paper from a group at UCSD ("Assessment of Reinforcement Learning for Macro Placement"): https://vlsicad.ucsd.edu/Publications/Conferences/396/c396.p...

    - A paper from Igor Markov which critically evaluates the "AlphaChip" algorithm ("The False Dawn: Reevaluating Google's Reinforcement Learning for Chip Macro Placement"): https://arxiv.org/pdf/2306.09633

    In short, the Google authors did not fairly evaluate their RL macro placement algorithm against other SOTA algorithms: rather, they claim to perform better than a human at macro placement, which falls far short of what mixed-size placement algorithms are capable of today. The RL technique also requires significantly more compute than other algorithms, and it ultimately learns a surrogate function for placement iteration rather than any novel representation of the placement problem itself.

    In full disclosure, I am quite skeptical of their work and wrote a detailed post on my website: https://vighneshiyer.com/misc/ml-for-placement/

    • negativeonehalf 2 days ago ago

      FD: I have been following this whole thing for a while, and know personally a number of the people involved.

      The AlphaChip authors address criticism in their addendum, and in a prior statement from the co-lead authors: https://www.nature.com/articles/s41586-024-08032-5 , https://www.annagoldie.com/home/statement

      - The 2023 ISPD paper didn't pre-train at all. This means no learning from experience, for a learning-based algorithm. I feel like you can stop reading there.

      - The ISPD paper and the MLcontra paper both used much older, larger technology nodes, which have quite different physical properties. TPU is on a sub-10nm node, whereas the ISPD paper uses 45nm and 12nm; these are really different from a physical design perspective. Even worse, MLcontra uses a truly ancient benchmark on a >100nm node.

      Markov's paper just summarizes the other two.

      (Incidentally, none of ISPD / MLcontra / Markov were peer reviewed - ISPD 2023 was an invited paper.)

      There's a lot of other stuff wrong with the ISPD paper and the MLcontra paper - happy to go into it - and a ton of weird financial incentives lurking in the background. Commercial EDA companies do NOT want a free open-source tool like AlphaChip to take over.

      Reading your post, I appreciate the thoroughness, but it seems like you are too quick to let ISPD 2023 off the hook for failing to pre-train and using less compute. The code for pre-training is just the code for training --- you train on some chips, and you save and reuse the weights between runs. There's really no excuse for failing to do this, and the original Nature paper described at length how valuable pre-training was. Given how different TPU is from the chips they were evaluating on, they should have done their own pre-training, regardless of whether the AlphaChip team released a pre-trained checkpoint on TPU.
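
      To make that concrete, here's roughly what the distinction amounts to, as a toy sketch (not the actual circuit_training API - every name below is invented for illustration):

          import random

          def train(weights, tasks, steps):
              # Stand-in for an RL training loop: nudge the weights toward
              # whatever the sampled task rewards.
              for _ in range(steps):
                  task = random.choice(tasks)
                  weights = [w + 0.01 * (t - w) for w, t in zip(weights, task)]
              return weights

          past_chips = [[1.0, 2.0], [1.2, 1.8]]   # stand-ins for prior designs
          new_chip = [[1.1, 1.9]]

          # Pre-training: train on past chips and keep the resulting weights...
          pretrained = train([0.0, 0.0], past_chips, steps=1000)
          # ...then fine-tune those saved weights on the new chip.
          finetuned = train(list(pretrained), new_chip, steps=100)

          # What the ISPD 2023 setup amounts to: fresh random weights every time.
          from_scratch = train([random.random(), random.random()], new_chip, steps=100)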

      (Using less compute isn't just about making it take longer - ISPD 2023 used half as many GPUs and 1/20th as many RL experience collectors, which may screw with the dynamics of the RL job. And... why not just match the original authors' compute, anyway? Isn't this supposed to be a reproduction attempt? I really do not understand their decisions here.)

      • isotypic 2 days ago ago

        Why does pre-training (or the lack of it) matter for the ISPD 2023 paper? The circuit_training repo, as noted in the rebuttal of the rebuttal by the ISPD 2023 paper authors, claims training from scratch gives results "comparable or better" than fine-tuning the pre-trained model. So no matter your opinion on the importance of the pre-training step, this result isn't replicable, at which point the ball is in Google's court to release code/checkpoints to show otherwise.

        • negativeonehalf 2 days ago ago

          The quick-start guide in the repo said you don't have to pre-train for the sample test case, meaning that you can validate your setup without pre-training. That does not mean you don't need to pre-train! Again, the paper talks at length about the importance of pre-training.

          • marcinzm 2 days ago ago

            This is what the repo says:

            > Results
            > Ariane RISC-V CPU
            > View the full details of the Ariane experiment on our details page. With this code we are able to get comparable or better results training from scratch as fine-tuning a pre-trained model.

            The paper includes a graph showing that it takes longer for Ariane to train without pre-training; however, the results in the end are the same.

            • negativeonehalf 13 hours ago ago

              See their ISPD 2022 paper, which goes into more detail about the value of pre-training (e.g. Figure 7): https://dl.acm.org/doi/pdf/10.1145/3505170.3511478

              Sometimes training from scratch is able to match the results of pre-training, given ~5X more time to converge. Other times, though, it never does as well as a pre-trained model, converging to a worse final result.

              This isn't too surprising -- the whole point of the method is to be able to learn from experience.

          • anna-gabriella 2 days ago ago

            That does not mean you need to pre-train either. Common sense, no?

      • wegfawefgawefg 2 days ago ago

        In reinforcement learning, pre-training reduces peak performance. We can argue about this, but on its own it is not a strong enough reason to stop reading.

        • 317070 2 days ago ago

          Do you have a citation for this? I did my PhD on this topic 8 years ago and haven't closely followed the field since. I'm curious to learn more.

      • dogleg77 a day ago ago

        The problem with the Google Nature paper is that its results were not reproduced outside Google. You can attack the attempts to reproduce them, but that only reinforces the point: those claimed results cannot be trusted.

        Other commenters already addressed the pre-training issue. Please kindly include a link to Kahng's 2023 discussion addressing your complaints. Otherwise, you are unfairly supporting those people you know.

        Kahng's placer is open-source and was used in the Nature paper. It does not make sense to accuse Kahng of colluding with companies against open-source.

        • negativeonehalf 10 hours ago ago

          For a more thorough discussion on pre-training, see this ISPD 2022 paper by the AlphaChip people: https://dl.acm.org/doi/pdf/10.1145/3505170.3511478

          As for external usage of the method - MediaTek is one of the largest chip design companies in the world, and they built on AlphaChip. There's a quote from a MediaTek SVP at the bottom of the GDM blog post:

          "AlphaChip's groundbreaking AI approach revolutionizes a key phase of chip design. At MediaTek, we've been pioneering chip design's floorplanning and macro placement by extending this technique in combination with the industry's best practices. This paradigm shift not only enhances design efficiency, but also sets new benchmarks for effectiveness, propelling the industry towards future breakthroughs."

      • clickwiseorange 2 days ago ago

        Oh, man... this is the same old stuff from the 2023 Anna Goldie statement (is this Anna Goldie's comment?). This was all addressed by Kahng in 2023 - no valid criticisms. Where do I start?

        Kahng's ISPD 2023 paper is not in dispute - no established experts objected to it. The Nature paper is in dispute. Dozens of experts objected to it: Kahng, Cheng, Markov, Madden, Lienig, and Swartz objected publicly.

        The fact that Kahng's paper was invited doesn't mean it wasn't peer reviewed. I checked with ISPD chairs in 2023 - Kahng's paper was thoroughly reviewed and went through multiple rounds of comments. Do you accept it now? Would you accept peer-reviewed versions of other papers?

        Kahng is the most prominent active researcher in this field. If anyone knows this stuff, it's Kahng. There were also five other authors in that paper, including another celebrated professor, Cheng.

        The pre-training thing was disclaimed in the Google release. No code, data, or instructions for pre-training were provided by Google for years. The instructions said clearly: you can get results comparable to Nature without pre-training.

        The "much older technology" complaint is also a bogus issue, because HPWL scales linearly and is reported by all commercial tools. Rectangles are rectangles. This is textbook material. But Kahng et al. prepared some very fresh examples, including NVDLA, with two recent technologies. Guess what, RL did poorly on those. Are you accepting this result?

        The bit about financial incentives and open-source is blatantly bogus, as Kahng leads OpenROAD - the main open-source EDA framework. He is not employed by any EDA companies. It is Google who has huge incentives here; see Demis Hassabis's tweet "our chips are so good...".

        The "Stronger Baselines" paper matched compute resources exactly. Kahng and his coauthors performed fair comparisons between annealing and RL, giving the same resources to each. Giving greater resources is unlikely to change the results. This was thoroughly addressed in Kahng's FAQ - if only you would read it.

        The resources used by Google were huge. The Cadence tools in Kahng's paper ran hundreds of times faster and produced better results. That is as conclusive as it gets.

        It doesn't take a Ph.D. to understand fair comparisons.

        • smokel 2 days ago ago

          Wow, you seem to be pretty invested in this topic. Care to clarify?

          • anna-gabriella a day ago ago

            Reposting, as someone is flagging my comments.

            > People in the know are following this topic - big-wow surprise!

        • negativeonehalf 2 days ago ago

          For AlphaChip, pre-training is just training. You train, and save the weights in between. This has always been supported by Google's open-source repository. I've read Kahng's FAQ, and he fails to address this, which is unsurprising, because there's simply no excuse for cutting out pre-training for a learning-based method. In his setup, every time AlphaChip sees a new chip, he re-randomizes the weights and makes it learn from scratch. This is obviously a terrible move.

          HPWL (half-perimeter wirelength) is an approximation of wirelength, which is only one component of the chip floorplanning objective function. It is relatively easy to crunch all the components together and optimize HPWL --- minimizing actual wirelength while avoiding congestion issues is much harder.
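
          For the unfamiliar, HPWL is popular as a proxy precisely because it is so cheap to compute - a sketch of the textbook definition (not any particular tool's implementation):

              def hpwl(nets):
                  # Half-perimeter wirelength: for each net, half the perimeter
                  # of the bounding box of its pin coordinates, summed over nets.
                  total = 0.0
                  for pins in nets:                  # pins: list of (x, y) tuples
                      xs = [x for x, _ in pins]
                      ys = [y for _, y in pins]
                      total += (max(xs) - min(xs)) + (max(ys) - min(ys))
                  return total

              print(hpwl([[(0, 0), (4, 1), (2, 2)]]))  # one 3-pin net: 4 + 2 = 6.0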

          Simulated annealing is good at quickly converging on a bad solution to the problem, with relatively little compute. So what? We aren't compute-limited here. Chip design is a lengthy, expensive process where even a few-percent wirelength reduction can be worth millions of dollars. What matters is the end result, and ML has SA beat.

          (As for conflict of interest, my understanding is Cadence has been funding Kahng's lab for years, and Markov's LinkedIn says he works for Synopsys. Meanwhile, Google has released a free, open-source tool.)

          • clickwiseorange 2 days ago ago

            It's not that one needs an excuse. The Google CT repo said clearly that you don't need to pre-train. "Supported" usually includes at least an illustration, some scripts to get it going - no such thing was there before Kahng's paper. Pre-training was not recommended and was not supported.

            Everything optimized in the Nature RL is an approximation. HPWL is where you start, and RL uses it in the objective function too. As shown in "Stronger Baselines", RL loses a lot on HPWL - so much that nothing else can save it. If your wires are very long, you need routing tracks to route them, and you end up with congestion too.

            SA consistently produces better solutions than RL for various time budgets. That's what matters. Both papers have shown that SA produces competent solutions. You give SA more time, you get better solutions. In a fair comparison, you give equal budgets to SA and RL. RL loses. This was confirmed using Google's RL code and two independent SA implementations, on many circuits. Very definitively. No, ML did not have SA beat - please read the papers.

            Cadence hasn't funded Kahng for a long time. In fact, Google funded Kahng more recently, so he has all the incentives to support Google. Markov's LinkedIn page says he worked at Google before. Even Chatterjee, of all people, worked at Google.

            Google's open-source tool is a head fake; it's practically unusable.

            Update: I'll respond to the next comment here, since there's no Reply button.

            1. The Nature paper said one thing, the code did something else, as we've discovered. The RL method does some training as it goes. So, pre-training is not the same as training. Hence "pre". Another problem with pre-training in Google's work is data contamination - we can't compare test and training data. The Google folks admitted to training and testing on different versions of the same design. That's bad. Rejection-level bad.

            2. HPWL is indeed a nice, simple objective. So nice that Jeff Dean's recent talks use it. It is chip design. All commercial circuit placers without exception optimize it and report it. All EDA publications report it. Google's RL optimized HPWL + density + congestion.

            3. This shows you aren't familiar with EDA. Simulated annealing was the king of placement from the mid-1980s to the mid-1990s. Most chips were placed by SA. But you don't have to go far - as I recall, the Nature paper says they used SA to postprocess macro placements.

            SA can indeed find mediocre solutions quickly, but it keeps on improving them, just like RL. Perhaps you aren't familiar with SA. I am. There are provable results showing that SA finds the optimal solution given enough time. There are no such results for RL.
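
            For those who haven't seen it, the entire SA skeleton fits in a dozen lines. A generic sketch (not either paper's implementation):

                import math, random

                def anneal(layout, cost, perturb, t=1.0, cooling=0.9999, steps=10**6):
                    # Accept worsening moves with probability exp(-delta/T), so the
                    # search can escape local minima; as T -> 0 it becomes greedy.
                    # Slower cooling (more steps) generally yields better solutions.
                    best = cur = layout
                    for _ in range(steps):
                        cand = perturb(cur)              # e.g. move or swap two macros
                        delta = cost(cand) - cost(cur)
                        if delta < 0 or random.random() < math.exp(-delta / t):
                            cur = cand
                            if cost(cur) < cost(best):
                                best = cur
                        t *= cooling
                    return best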

            • negativeonehalf 2 days ago ago

              The Nature paper describes the importance of pre-training repeatedly. The ability to learn from experience is the whole point of the method. Pre-training is just training and saving the weights -- this is ML 101.

              I'm glad you agree that HPWL is a proxy metric. Optimizing HPWL is a fun applied math puzzle, but it's not chip design.

              I am unaware of a single instance of someone using SA to generate real-world, usable macro layouts that were actually taped out, much less for modern chip design, in part due to SA's struggles to manage congestion, resulting in unusable layouts. SA converges quickly to a bad solution, but this is of little practical value.

              • AshamedCaptain 2 days ago ago

                SA and HPWL are most definitely used as of today for the chips that power the GPUs used for "ML 101". But frankly this has the same value as saying "some sorting algorithm is used somewhere" -- they're well-entrenched basics of the field. To claim that SA produces "bad congestion" is like claiming that using steel pans produces bad cooking -- it needs a shitton of context and qualification, since you cannot generalize this way.

            • foobarqux 2 days ago ago

              I think clicking the time stamp field in the comment will allow you to get a reply box.

        • djmips 2 days ago ago

          > Kahng is the most prominent active researcher in this field. If anyone knows this stuff, it's Kahng.

          This reads like a textbook example of the appeal-to-authority fallacy.

          • ok_dad 2 days ago ago

            For that fallacy, the GP would have had to appeal only to the expert's opinion, with no actual evidence; instead, the GP gave plenty of evidence of the researcher's expertise in the form of peer-reviewed papers and other links. That's not an appeal to authority at all.

    • porphyra 3 days ago ago

      The DeepMind chess paper was also criticized for unfair evaluation, as they were using an older version of Stockfish for comparison. Apparently, the gap between AlphaZero and that old version of Stockfish (about 50 Elo, IIRC) was about the same as the gap between consecutive versions of Stockfish.
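
      For scale: under the standard Elo logistic model, a 50-point gap corresponds to an expected score of only about 57% for the stronger side (a quick check, assuming the usual formula):

          def expected_score(elo_gap):
              # Standard Elo model: expected score of the higher-rated player.
              return 1 / (1 + 10 ** (-elo_gap / 400))

          print(expected_score(50))   # ~0.571, i.e. roughly 57 points per 100 games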

      • lacker 2 days ago ago

        Indeed, six years later, the AlphaZero algorithm is not the best-performing algorithm for chess. LCZero (which uses the AlphaZero algorithm) won some TCECs after it came out, but for the past few years Stockfish (which does not) has been winning consistently.

        https://en.wikipedia.org/wiki/Top_Chess_Engine_Championship

        So perhaps the critics had a point there.

        • vlovich123 2 days ago ago

          There's a lot of co-development happening in the space, where positions are evaluated by Leela and then used to train the NNUE net within Stockfish. And Leela comes from AlphaZero. So basically AlphaZero was directly responsible for opening up new avenues of research that let a more specialized chess engine reach levels it could not have reached without it.

          > Generally considered to be the strongest GPU engine, it continues to provide open data which is essential for training our NNUE networks. They released version 0.31.1 of their engine a few weeks ago, check it out!

          [1]

          I'd say the impact AlphaZero has had on chess and Go can't be overstated, considering it's a general algorithm that at worst is highly competitive with purpose-built engines. And that's ignoring the actual point of why DeepMind is doing any of this, which is AGI (that's why they're not constantly trying to compete with existing engines).

          [1] https://lichess.org/@/StockfishNews/blog/stockfish-17-is-her...

        • theodorthe5 2 days ago ago

          Do you understand that Stockfish closed the gap only after using NNs in its evaluation function as well? And that was a direct consequence of the AlphaZero research.

    • Workaccount2 3 days ago ago

      To be fair, some of these criticisms are a few years old. Which normally would be fair game, but the progress in AI has been breakneck. Criticisms of other AI tech from 2021 or 2022 are pretty dated today.

      • jeffbee 3 days ago ago

        It certainly looks like the criticism at the end of the rebuttal that DeepMind has abandoned their EDA efforts is a bit stale in this context.

      • anna-gabriella 2 days ago ago

        Dated or not, if half of the criticisms are right, the original paper may need to be retracted. No progress on RL for chip design has been published by Google since 2022, as far as I can tell. So it looks like most if not all criticisms remain valid.

        • joshuamorton a day ago ago

          > No progress on RL for chip design was published by Google since 2022, as far as I can tell.

          This makes sense given that both authors of the paper left Google in 2022. And one no longer seems to work in the chip design space, plausibly because of the bullying by entrenched folks.

          Then again, since rejoining Google the other author has produced around one patent per month in chip design with RL in 2023 and 2024, so perhaps they feel there is a marketable tool here that they don't want to share.

    • smokel 2 days ago ago

      I don't really understand all the fuss about this particular paper. Nearly all papers on AI techniques are pretty much impossible to reproduce, due to details that the authors don't understand or are trying to cover up.

      This is what you get if you make academic researchers compete for citation counts.

      Pre-training seems to be an important aspect here, and it makes sense that such pre-training requires good examples, which, unfortunately for the free-lunch people, are not available to the public.

      That's what you get when you let big companies do fundamental research. Would it be better if the companies did not publish anything about their research at all?

      It all feels a bit unproductive to attack one another.

    • jeffbee 3 days ago ago

      It seems like this is multiple parties pursuing distinct arguments. Is Google saying that this technique is applicable in the way that the rebuttals are saying it is not? When I read the paper and the update I did not feel as though Google claimed that it is general, that you can just rip it off and run it and get a win. They trained it to make TPUs, then they used it to make TPUs. The fact that it doesn't optimize whatever "ibm14" is seems beside the point.

      • clickwiseorange 2 days ago ago

        Good question. It's not just ibm14: everything people outside Google tried shows that RL is much worse than prior methods. NVDLA, BlackParrot, etc. There is a strong possibility that Google pre-trained RL on certain TPU designs, then tested on them, and submitted that to Nature.

    • gdiamos 3 days ago ago

      Criticism is an important part of the scientific process.

      Whichever approach ends up winning is improved by careful evaluation and replication of results.

    • s-macke 3 days ago ago

      When I first read about AlphaChip yesterday, my first question was how it compares to other optimization algorithms such as genetic algorithms or simulated annealing. Thank you for confirming that my questions are valid.

    • nemonemo 3 days ago ago

      What is your opinion of the addendum? I think the addendum and the pre-trained checkpoint are the substance of the announcement, and it is surprising to see little mention of those here.

    • bsder 2 days ago ago

      EDA claims in the digital domain are fairly easy to evaluate. Look at the picture of the layout.

      When you see a chip that has the datapath identified and laid out properly by a computer algorithm, you've got something. If not, it's vapor.

      So, if your layout still looks like a random rat's nest? Nope.

      If even a random person can see that your layout actually follows the obvious symmetric patterns from bit 0 to bit 63, maybe you've got something worth looking at.

      Analog/RF is a little tougher to evaluate because the smaller number of building blocks means you can use Moore's Law to brute-force things much more exhaustively, but if things "look pretty" then you've got something. If it looks weird, you don't.

      • glitchc 2 days ago ago

        That doesn't mean the fabricated netlist doesn't work. I'm not supporting Google by any means, but the test should be: does it fabricate and function as intended? If not, clearly gibberish. If so, we now have computers building computers, which is one step closer to Skynet. The truth is probably somewhere in between. But if even some of the samples with the terrible layouts are actually functional, then we might learn something new. Maybe the gibberish design has reduced crosstalk, which would be fascinating.

  • lordswork 3 days ago ago

    Some interesting context on this work: two researchers were bullied by a senior researcher (who has now been terminated himself) to the point of leaving Google for Anthropic: https://www.wired.com/story/google-brain-ai-researcher-fired...

    They must feel vindicated by their work turning out to be so fruitful now.

    • gabegobblegoldi 2 days ago ago

      Vindicated indeed. The senior researcher and others on the project were bullied for raising concerns of fraud by the two researchers [1]. They filed a lawsuit against Google that has a lot of detailed allegations of fraud [2].

      [1] https://www.theregister.com/AMP/2023/03/27/google_ai_chip_pa...

      [2] https://regmedia.co.uk/2023/03/26/satrajit_vs_google.pdf

      • negativeonehalf 13 hours ago ago

        You are now using multiple new accounts based on the name of one of the authors (Anna Goldie) and her husband (Gabriel). First this one ('gabegobblegoldi'), and then 'anna-gabriella'.

        I think it is time for you to take a deep breath and think about what you are doing and why.

        You seem to be obsessed with the idea that this work is overrated. MediaTek and Google don't think so, and use it in production for their chips, including TPU, Dimensity, Axion, and others. If you're right and they're wrong, using this method loses them money. If it's the other way around, then using this method makes them gain money.

        Please read PG's post and ask yourself if it applies to you: https://www.paulgraham.com/fh.html

        Chatterjee settled his case. He has moved on. This is not some product being sold -- it is a free, open-source tool. People who see value in it use it; others don't, and so they don't. This is how it always works, and it's fine.

    • clickwiseorange 2 days ago ago

      It's actually not clear who was bullied. The two researchers ganged up on Chatterjee and got him fired because he used the word "fraud" - a wrongful termination of a whistleblower. Only recently did Google settle with Chatterjee for an undisclosed amount.

  • hinkley 3 days ago ago

    TSMC made a point of calling out that their latest generation of software for automating chip design has features that allow you to select logic designs for TDP over raw speed. I think that’s our answer to keep Dennard scaling alive in spirit if not in body. Speed of light is still going to matter, so physical proximity of communicating components will always matter, but I wonder how many wins this will represent versus avoiding thermal throttling.

    • therealcamino 2 days ago ago

      EDA software has long allowed trading off power, delay, and area during optimization. But TSMC doesn't produce those tools, as far as I'm aware.

      • hinkley 2 days ago ago

        https://www.tsmc.com/english/dedicatedFoundry/oip/eda_allian...

        They don't produce the tools, but the tools are tailored for them just the same. "We have" doesn't have to mean "we made". They don't say it as such here, but elsewhere they refer to the IP they can make available, which can also be made in-house or cross-licensed and still count as "we have".

        • therealcamino 2 days ago ago

          Used in that sense, the same software could be called Samsung's and Intel's and any other foundry's, since it is qualified for use with those processes as well. But that's not really the main point I was making, which was that there have been 20+ years of cooperative effort in both process design and EDA software to optimize for power and trade it off against other optimization goals. While there are design and packaging approaches that are only coming into use because of "end of Moore's law, what do we do now" reactions, and some may have power implications, power optimization predates that by a good while.

  • pfisherman 3 days ago ago

    Questions for those in the know about chip design. How are they measuring the quality of a chip design? Does the metric that Google is reporting make sense? Or is it just something to make themselves look good?

    Without knowing much, my guess is that “quality” of a chip design is multifaceted and heavily dependent on the use case. That is, the ideal chip for a data center would look very different from one for a mobile phone camera or an automobile.

    So again, what does “better” mean in the context of this particular problem/task?

    • Drunk_Engineer 2 days ago ago

      I have not read the latest paper, but their previous work was really unclear about metrics being used. Researchers trying to replicate results had a hard time getting reliable details/benchmarks out of Google. Also, my recollection is that Google did not even compute timing, just wirelength and congestion; i.e. extremely primitive metrics.

      Floorplanning/placement/synthesis is a billion dollar industry, so if their approach were really revolutionary they would be selling the technology, not wasting their time writing blog posts about it.

      • rossjudson 2 days ago ago

        Like when Google wasted its time writing publicly about Spanner?

        https://research.google/pubs/spanner-googles-globally-distri...

        or Bigtable?

        https://research.google/pubs/bigtable-a-distributed-storage-...

        or GFS?

        or MapReduce?

        or Borg?

        or...I think you get the idea.

        • thenoblesunfish 2 days ago ago

          I am not sure these publications were intended to generate sales of these technologies. My assumption is that they mostly help the company in terms of recruitment. This lets potential employees see cool stuff Google is doing, and see them as an industry leader.

          • vlovich123 2 days ago ago

            Spanner is literally a Google Cloud product you can buy, ignoring that it underpins a good amount of Google tech internally. The same is true of other stuff. Dismissing it as a recruitment tool indicates you haven't worked at Google and don't really know much about their product lines.

            • ebalit 2 days ago ago

              He didn't say that Spanner is only a recruitment tool but that the blog posts about Spanner (and other core technologies of Google) might be.

              • vlovich123 2 days ago ago

                More people see the blog posts, as they're a gentler introduction than the paper itself. Sure, they might generate interest in Google, but they also generate interest in looking further into the research. They are not for sales of the tech, but I'm not sure the impact is just recruitment, even if that's how Google justified the work to itself.

        • bushbaba 2 days ago ago

          The Spanner research paper was in 2012, Bigtable in 2006, GFS in 2003. The last decade has been a 'lost decade' for Google. Not much innovation, to be honest.

      • IshKebab 2 days ago ago

        > Floorplanning/placement/synthesis is a billion dollar industry

        Maybe all together, but I don't think automatic placement algorithms are a billion dollar industry. There's so much more to it than that.

        • Drunk_Engineer 2 days ago ago

          Yes in combination. Customers generally buy these tools as a package deal. If the placer/floorplanner blows everything else out of the water, then a CAD vendor can upsell a lot of related tools.

      • negativeonehalf 2 days ago ago

        The original paper reports P&R metrics (WNS, TNS, area, power, wirelength, horizontal congestion, vertical congestion) - https://www.nature.com/articles/s41586-021-03544-w

        (no paywall): https://www.cl.cam.ac.uk/~ey204/teaching/ACS/R244_2021_2022/...

        • Drunk_Engineer 2 days ago ago

          From what I saw in the rebuttal papers, the Google cost-function is wirelength based. You can still get good TNS from that if your timing is very simplistic -- or if you choose your benchmark carefully.

          • negativeonehalf 2 days ago ago

            They optimize using a fast heuristic based on wirelength, congestion, and density, but they evaluate with full P&R. It is definitely interesting that they get good timing without explicitly including it in their reward function!

            • ithkuil 2 days ago ago

              Yeah; it means the heuristic they use is a good one

          • clickwiseorange 2 days ago ago

            The odd thing is that they don't compute timing in RL, but claim that somehow TNS and WNS improved. Does anyone believe this? With five circuits and three wins, the results are a coin toss.
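
            The arithmetic on that last point, assuming each circuit is an independent 50/50 toss-up:

                from math import comb

                # Probability of at least 3 "wins" out of 5 fair coin flips:
                p = sum(comb(5, k) for k in (3, 4, 5)) / 2 ** 5
                print(p)   # 0.5 exactly - indistinguishable from chance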

    • q3k 3 days ago ago

      This is just floorplanning, which is a problem with fairly well defined quality metrics (max speed and chip area used).

      • Drunk_Engineer 2 days ago ago

        Oh man, if only it were that simple. A floorplanner has to guesstimate what the P&R tools are going to do with the initial layout. That can be very hard to predict -- even if the floorplanner and the P&R tool are from the same vendor.

  • Upvoter33 2 days ago ago

    To me, there is an underlying issue: why are so many DeepX papers being sent to Nature, instead of appropriate CS forums? If you are doing better work in chip design, send it to ISPD or ISCA or whatever, and then you will get the types of reviews needed for this work. I have no idea what Nature does with a paper like this.

  • thesz 2 days ago ago

    Eurisko [1], if I remember correctly, was once used to perform a placement-and-route task and was pretty good at it.

    [1] https://en.wikipedia.org/wiki/Eurisko

    What's more, Eurisko was then used to design a fleet of battle spaceships for the Traveller TCS game. And Eurisko applied symmetry-based placement heuristics learned from VLSI design to the design of the spaceship fleet.

    Can AlphaChip's heuristics be used anywhere else?

    • gabegobblegoldi 2 days ago ago

      Doesn’t look like it. In fact the original paper claimed that their RL method could be used for all sorts of combinatorial optimization problems. Yet they chose an obscure problem in chip design and showed their results on proprietary data instead of standard public benchmarks.

      Instead, they could have demonstrated their amazing method on any number of standard NP-hard optimization problems, e.g., traveling salesman, bin packing, ILP, etc., where we can generate tons of examples and easily verify whether it produces better results than other solvers.

      This is why many in the chip design and optimization community felt that the paper was suspicious. Even with this addendum they adamantly refuse to share any results that can be independently verified.

      • AshamedCaptain 2 days ago ago

        > Yet they chose an obscure problem in chip design

        It is not obscure (in chip design). If anything it is one of the most easily reachable problems. Almost every other PhD student in the field has implemented a macro placer, even if just for fun, and there are frequent academic competitions. A lot of design houses also roll their own macro placers since it's not a difficult problem and generally adding a bit of knowledge of your design style can help you gain an extra % over the generic commercial tools.

        It does not surprise me at all that they decided to start with this for their foray into chip EDA. It's the minimum effort route.

        • gabegobblegoldi 2 days ago ago

          Sorry. I meant obscure relative to the large space of combinatorial optimization problems not just chip design.

          Most design houses don’t write their own macro placers but customize commercial flows for their designs.

          The problem with macro placement as an RL technology demonstrator is that to evaluate quality you need to go through large parts of the design flow which involves using other commercial tools. This makes it incredibly hard to evaluate superiority since all those steps and tools add noise.

          Easier problems would have been to use RL to minimize the number of gates in a logic circuit or just focus on placement with half perimeter wirelength (I think this is what you mean with your grad student example). Essentially solving point problems in the design flow and evaluating quality improvements locally.

          They evaluated quality globally, and only globally, and that destroys credibility in this business due to the noise involved - unless you have lots of examples, can show statistical significance, and (unfortunately for the authors) can also show local improvements.

          That’s what the follow on studies did and that’s why the community has lost faith in this particular algorithm.

          • AshamedCaptain 2 days ago ago

            > Most design houses don’t write their own macro placers but customize commercial flows for their designs.

            Most I don't know, but all the mid-to-large ones have automated macro placers. Obviously, the output is introduced into the commercial flow, generally by setting placement constraints. The larger houses go much further and may even override specific parts of the flow, but not basing it on a commercial flow is out of the question right now.

            > The problem with macro placement as an RL technology demonstrator is that to evaluate quality you need to go through large parts of the design flow which involves using other commercial tools.

            Not really, not any more than any other optimization, e.g. the frontend, which I'm more familiar with. If you don't want to go through the full design flow (which, I agree, introduces noise more than anything else), then benchmark your floorplans on some easily calculable metric (e.g., HPWL). Likewise, if you want to test the quality of some logic simplification, _in theory_ you'd also have to go through the entire flow (backend included), but no one does that; you just evaluate some easily calculable metric, e.g., number of gates. These distinctions are traditional more than anything else.

            Academic macro placers generally have limited access to commercial flows (due to either licensing issues or computing resource availability), so it is rather common to benchmark them on other metrics. The Google paper tried to be too smart for its own good, and is therefore incomparable to anything academic.

  • AshamedCaptain 2 days ago ago

    What is Google doing here? At best, the quality of their "computer chip design" work can be described as "controversial": https://spectrum.ieee.org/chip-design-controversy . What is there to gain from putting out a PR piece now, without doing anything new?

    • negativeonehalf 10 hours ago ago

      In the blog post, they announce MediaTek's widespread usage, the deployment in multiple generations of TPU with increasing performance each generation, Axion, etc.

      Chips designed with the help of AlphaChip are in datacenters and Samsung phones, right now. That's pretty neat!

  • yeahwhatever10 3 days ago ago

    Why do they keep saying "superhuman"? Algorithms are used for these tasks, humans aren't laying out trillions of transistors by hand.

    • fph 3 days ago ago

      My state-of-the-art bubblesort implementation is also superhuman at sorting numbers.
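
      Reference implementation, for reproducibility:

          def bubblesort(xs):
              # State-of-the-art superhuman number sorting (O(n^2) compute budget).
              xs = list(xs)
              for i in range(len(xs)):
                  for j in range(len(xs) - 1 - i):
                      if xs[j] > xs[j + 1]:
                          xs[j], xs[j + 1] = xs[j + 1], xs[j]
              return xs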

      • xanderlewis 3 days ago ago

        Nice. Do you offer API access for a monthly fee?

        • int0x29 3 days ago ago

          I'll need seven 5-gigawatt datacenters in the middle of major urban areas, or we might lose the Bubble Sort race with the Chinese.

          • dgacmu 3 days ago ago

            Surely you'll be able to reduce this by getting TSMC to build new fabs to construct your new Bubble Sort Processors (BSPs).

          • qingcharles 2 days ago ago

            I'll give you US$7Tn in investment. Just don't ask where it's coming from.

          • gattr 3 days ago ago

            Surely a 1.21-GW datacenter would suffice!

          • therein 3 days ago ago

            Have we decided when we are deprecating it? I'm already cultivating another team in a remote location to work on a competing product that we will include in Google Cloud a month before deprecating this one.

      • HPsquared 3 days ago ago

        Nice. Still true though! We are in the bubble sort era of AI.

        • kevindamm 2 days ago ago

          When we get better quantum computers we can start using spaghetti sort.

    • jeffbee 3 days ago ago

      This is floorplanning the blocks, not every feature. We are talking dozens to hundreds of blocks, not billions-trillions of gates and wires.

      I assume that the human benchmark is a human using existing EDA tools, not a guy with a pocket protector and a roll of tape.

    • thomasahle 2 days ago ago

      Believe it or not, there was a time when algorithms were worse than humans at laying out transistors, in particular at the higher-level design decisions.

      • justsid 16 hours ago ago

        That's still somewhat the case: humans could do a much better job at efficient layout. The problem is that humans don't scale as well; laying out billions of transistors is hard for humans. But computers can do it if you forgo some efficiency by switching to standard cells and then throw compute at the problem.

    • epistasis 3 days ago ago

      Google is good at many things, but perhaps their strongest skill is media positioning.

      • jonas21 3 days ago ago

        I feel like they're particularly bad at this, especially compared to other large companies.

        • pinewurst 3 days ago ago

          Familiarity breeds contempt. They've been pushing the Google==Superhuman thing since the Internet Boom with declining efficacy.

      • lordswork 3 days ago ago

        The media hates Google.

        • epistasis 3 days ago ago

          It's a love/hate relationship, which benefits both Google and the media greatly.

    • deelowe 2 days ago ago

      I read the paper. "Superhuman" is a metric they define in the paper, which has to do with how long it takes a human to do certain tasks.

      • anna-gabriella 2 days ago ago

        Does this make any sense, really? - Define some common words and then let the media run wild with them. How about we redefine "better" and "revolutionize"? Oh, wait, I think people are doing that already...

    • negativeonehalf 2 days ago ago

      Prior to AlphaChip, macro placement was done manually by human engineers in any production setting. Prior algorithmic methods especially struggled to manage congestion, resulting in chips that weren't manufacturable.

      • AshamedCaptain 2 days ago ago

        > macro placement was done manually by human engineers in any production setting

        To quote a certain popular TV series... sorry, are you from the past? Do your "production" chips only have a couple dozen macros, or what?

    • jayd16 3 days ago ago

      "superhuman or comparable"

      What nonsense! XD

  • cobrabyte 3 days ago ago

    I'd love a tool like this for PCB design/layout

    • onjectic 3 days ago ago

      That's the first thing my mind went to as well. I'm sure this is already being worked on; I think it would be even more impactful than this.

      • bgnn 3 days ago ago

        why do you think that?

        • bittwiddle 2 days ago ago

          Far more people / companies are designing PCBs than there are designing custom chips.

          • foota 2 days ago ago

            I think the real value would be in ease of use. I imagine the top N chip creators represent a fair bit of the marginal value in pushing the state of the art forward. E.g., for hobbyists or small shops, there's likely not much value in tiny marginal improvements, but for the big ones it's worth the investment.

  • ninetyninenine 2 days ago ago

    What occupation is there that is purely intellectual and has no chance of an AI ever progressing to the point where it can take it over?

    • Zamiel_Snawley 2 days ago ago

      I think only sentimentality can prevent takeover by a sufficiently competent AI.

      I don’t want art that wasn’t made by a human, no matter how visually stunning or indistinguishable it is.

      • ninetyninenine 2 days ago ago

        Discounting fraud... what if the AI produces something genuinely better? Something that genuinely moves you to tears? What then?

        Imagine your favorite movie, the most moving book. You read it, it changed you, then you found out it was an AI that generated it in a mere 10 seconds.

        Artificial sentimentality is useless in the face of reality. That human endeavor is simply data points along a multi-dimensional best-fit curve.

        • Zamiel_Snawley 2 days ago ago

          That’s a challenging hypothetical.

          I think it would feel hollowed out, disingenuous.

          It feels too close to being a rat with a dopamine button, meaningless hedonism.

          I haven't thought it through particularly thoroughly, though; I'd be interested in hearing other opinions. These philosophical questions quickly approach the unanswerable.

          • ninetyninenine 2 days ago ago

            >These philosophical questions quickly approach unanswerable.

            Given the trendline of AI progress over the last decade, there is a high possibility the question will be answered by being actualized in reality.

            It's not a random question either. With AI quickly entrenching itself in every aspect of human creation, from art and music to chip design, this is all I can think about.

    • alexyz12 2 days ago ago

      Anything that needs very real-time info. AIs will always be limited by us feeding them info, or by them collecting it themselves. But humans can travel to more places than an AI can - until robots are everywhere too, I suppose.

  • ilaksh 3 days ago ago

    How far are we from memory-based computing going from research into competitive products? I get the impression that we are already well past the point where it makes sense to invest very aggressively in scaling up experiments with things like memristors, given that they are talking about how many new nuclear reactors they are going to need just for the AI datacenters.

    • mikewarot 2 days ago ago

      The cognitive mismatch between von Neumann's folly and other compute architectures is vast. He slowed down the ENIAC by 66% when he got ahold of it.

      We're in the timeline that took the wrong path. The other world has isolinear memory, which can be used for compute or as memory, down to the LUT level. Everything runs at a consistent speed, and LUTs with hardware faults can be routed around easily.

    • sroussey 3 days ago ago

      The problem is that the competition (our current von neumann architecture) has billions of dollars of R&D per year invested.

      Better architectures without that yearly investment train quickly stop being better.

      You would need to be 100x to 1000x better in order to pull the investment train onto your tracks.

      That has been impossible for decades.

      Even so, I think we will see such a change in my lifetime.

      AI could be that use case that has a strong enough demand pull to make it happen.

      We will see.

      • therealcamino 2 days ago ago

        If you don't worry about the programming model, it's pretty easy to be way better than existing methodologies in terms of pure compute.

        But if you do pay attention to the programming model, they're unusable. You'll see that dozens of these approaches have come and gone, because it's impossible to write software for them.

        • sanxiyn 2 days ago ago

          GPGPU is instructive. It is not easy, but possible to write software for it. That's why it succeeded.

      • ilaksh 2 days ago ago

        I think it's just ignorance and timidity on the part of investors. Memristor or memory-computing startups are surely the next trend in investing within a few years.

        I don't think it's necessarily demand or any particular calculation that makes things happen. I think people including investors are just herd animals. They aren't enthusiastic until they see the herd moving and then they want in.

        • foota 2 days ago ago

          I don't think it's ignorant to not invest in something that has a decade long path towards even having a market, much less a large market.

          • ilaksh 2 days ago ago

            I have seen at least one experiment running a language model or other neural network on (small scale) memory-based computing substrates. That suggests less than 1-2 years to apply them immediately to existing tasks once they are scaled up in terms of compute capacity.

            • foota 2 days ago ago

              I would have assumed it would take many years longer than that to scale something like this up, based on how long it takes traditional CPU manufacturers to design state of the art chips and manufacturing processes.

    • HPsquared 3 days ago ago

      And think of the embedded applications.

  • mirchiseth 3 days ago ago

    I must be old, because the first thing I thought reading AlphaChip was: why is DeepMind talking about chips in DEC Alpha? :-) https://en.wikipedia.org/wiki/DEC_Alpha.

    • sedatk 3 days ago ago

      I first used Windows NT on a PC with a DEC Alpha AXP CPU.

    • lamontcg 2 days ago ago

      I miss Digital Unix, too (I don't really miss the "Tru64" rebrand...)

    • kQq9oHeAz6wLLS 2 days ago ago

      Same!

    • mdtancsa 3 days ago ago

      haha, same!

  • dreamcompiler 3 days ago ago

    Looks like this is only about placement. I wonder if it can be applied to routing?

    • amelius 3 days ago ago

      Exactly what I was thinking.

      Also: when is this coming to KiCad? :)

      PS: It would also be nice to apply a similar algorithm to graph drawing (e.g. trying to optimize for human readability instead of electrical performance).

      • tdullien 3 days ago ago

        The issue is that in order to optimize for human readability you'll need a huge number of human evaluations of graphs?

        • amelius 3 days ago ago

          Maybe start with minimizing some metric based on the number of edge crossings, edge lengths, and edge bends?
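
          Such a cost function is only a few lines. A toy sketch (weights are arbitrary; bends would need polyline edges, and collinear overlaps are ignored):

              def crosses(a, b, c, d):
                  # True if segments ab and cd properly cross (shared endpoints
                  # don't count).
                  def orient(p, q, r):
                      v = (q[0]-p[0])*(r[1]-p[1]) - (q[1]-p[1])*(r[0]-p[0])
                      return (v > 0) - (v < 0)
                  return (orient(a, b, c) != orient(a, b, d)
                          and orient(c, d, a) != orient(c, d, b)
                          and 0 not in (orient(a, b, c), orient(a, b, d)))

              def layout_cost(pos, edges, w_cross=10.0, w_len=1.0):
                  # Readability score: weighted crossings plus total edge length.
                  segs = [(pos[u], pos[v]) for u, v in edges]
                  n_cross = sum(crosses(*s1, *s2)
                                for i, s1 in enumerate(segs) for s2 in segs[i+1:])
                  length = sum(((x1-x2)**2 + (y1-y2)**2) ** 0.5
                               for (x1, y1), (x2, y2) in segs)
                  return w_cross * n_cross + w_len * length

              # Crossing diagonals of a unit square vs. two parallel sides:
              pos = {0: (0, 0), 1: (1, 0), 2: (1, 1), 3: (0, 1)}
              print(layout_cost(pos, [(0, 2), (1, 3)]))  # ~12.8 (one crossing)
              print(layout_cost(pos, [(0, 1), (2, 3)]))  # 2.0 (no crossings)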

  • red75prime 3 days ago ago

    I hope I'll still be alive when they announce AlephZero.

  • QuadrupleA 2 days ago ago

    How good are TPUs in comparison with state of the art Nvidia datacenter GPUs, or Groq's ASICs? Per watt, per chip, total cost, etc.? Is there any published data?

  • FrustratedMonky 3 days ago ago

    So, AI designing its own chips. Now that is moving towards exponential growth - like at the end of the movie "Colossus".

    Forget LLMs. What DeepMind is doing seems more like how an AI will rule in the world: building real-world models and applying game logic like winning.

    LLMs will just be the text/voice interface to what DeepMind is building.

    • anna-gabriella 2 days ago ago

      I can tell you get excited by sci-fi; that's where Google's work belongs - people have been unable to reproduce it outside Google, by a long shot.

      • FrustratedMonky 2 days ago ago

        AlphaGo was not sci-fi. And that was 2016.

        Protein Folding? That was against a defined data set and other organizations.

        Nobody can reproduce it? Isn't that the definition of a competitive advantage?

        They are building something others can't, and that is bad? That is what companies do.

        • anna-gabriella a day ago ago

          We are discussing AlphaChip in 2024, not AlphaGo from 2016. I don't know much about protein folding (there were some controversies there, but that's not relevant). Neither of these has been related to product claims.

          As for "nobody can reproduce", no, that's not the definition. Imaginary things are not a competitive advantage. They are exaggerating, and that's bad. But yeah, that's what companies do, you are right.

          • FrustratedMonky a day ago ago

            "Imaginary things"

            I get the impression you just aren't keeping up with DeepMind.

            They have made huge breakthroughs in science, and they publish their results in Nature. Just because the parent company Google had some bad demos doesn't mean it is all bunk.

            So I guess if you are of the ilk that just doesn't trust anything anymore - that there is no peer review, that all science is a fraud - I really can't help that.

            • anna-gabriella a day ago ago

              DeepMind made huge breakthroughs, agreed. AlphaGo beat a sitting go champion, which was very cool. AlphaFold solved a large number of proteins with verified results. Are we clear on this? Hope you are taking back your ad hominem.

              The team that did the RL-for-chips work was at Google Brain, and you already pointed out that Google had bad demos. The fact that this team was absorbed into DeepMind does not magically rub the successes of DeepMind onto them.

              The RL for chips results were nothing like AlphaGo. Imagine if AlphaGo claimed to beat unknown go players, you would laugh. But the Nature paper on RL for chips claims to outperform unknown chip engineers. Also, imagine if AlphaFold claimed to fold only proprietary proteins. The Nature paper on RL for chips reports results on a small set of proprietary chip blocks (they released one design, and the results are not great on that one). That's where imaginary results come up. One of these things is not like the others.

              • FrustratedMonky 20 hours ago ago

                And AlphaStar. And recently a silver medal in the Geometry Olympiad. It didn't beat all humans, but it got a silver in one of those tasks that seemingly would remain in the domain of humans for a while - like Go was once considered.

                Really, I wasn't arguing about the chips so much. I mentioned DeepMind and you said I must like sci-fi, so I assumed you were implying that DeepMind's results were not that extraordinary.

                And I can't keep up with the internal re-orgs now that DeepMind has been merged with the other groups at Google. So maybe I am assuming too much, if this wasn't the same DeepMind group. Though I think when companies merge groups like this, they are definitely hoping some 'magic success rubs off on them'.

                I guess for the chip design, your argument is that it was compared against a generic human engineer. So if they set up a competition with some humans, would that satisfy your issue with the results?

                So my original, more flippant post was sci-fi; it's just that things are changing fast enough that the lines have blurred, and DeepMind has real results that aren't sci-fi:

                Take games as simplified world models: DeepMind has made a lot more progress in winning games than other companies. Then take some of the other companies that have had breakthroughs in video-to-real-world models, where video can be broken down into categories and fed into a 'game' function. Now put that on a loop (the default mode network in the brain) and in a robot body (so it has embodied, subjective experience of the world, where actions have consequences). I am making a bit of a sci-fi leap that you can get human behavior from this. And if they can then make the leap to designing chips, they can reach that hockey stick of increasing intelligence.

                So, I guess I am making a sci-fi leap. But I think the actual results from DeepMind already seem sci-fi-like, and they are real. So are we really that far away, given that things we thought would take hundreds of years are falling by the wayside?

                OK, I take back the ad hominem, but it is hard to tell on the internet. It seemed like you were questioning verified results, and you must know that there is a large contingent on the internet that casts doubt on all science. Once someone goes down that path, it is easier to ignore them.

  • ur-whale 2 days ago ago

    Seems to me the article is claiming a lot of things, but is very light on actual comparisons that matter to you and me, namely: how does one of those fabled AI-designed chop compare to their competition ?

    For example, how much better are these latest-gen TPUs compared to Nvidia's equivalent offering?

  • colesantiago 3 days ago ago

    A marvellous achievement from DeepMind as usual. I am quite surprised that Google acquired them for as little as $400M, when I would have expected something in the range of $20BN; but then again, DeepMind wasn't making any money back then.

    • dharma1 3 days ago ago

      It was very early. Probably one of their all-time best acquisitions, alongside YouTube.

      Re: using RL and other types of AI assistance for chip design, Nvidia and others are doing this too.

      • sroussey 3 days ago ago

        Applied Semantics for $100M, which gave them their advertising business, seems like their best deal.

      • hanwenn 2 days ago ago

        don't forget Android.

  • bankcust08385 2 days ago ago

    The technological singularity is around the corner as soon as the chips (mostly) design themselves. There will be a few engineers, zillions of semiskilled maintenance people making a pittance, and most of the world will be underemployed or unemployed. Technical people had better understand this and unionize, or they will find themselves going the way of piano tuners and Russian physicists. Slow-boiling frog...

  • loandbehold 3 days ago ago

    Every generation of chips is used to design the next generation. That seems to be the root of the exponential growth in Moore's law.

    • AshamedCaptain 2 days ago ago

      I'm only tangential to the area, but my impression over the decades is that, eventually, designing the next generation will require more resources than the current generation can provide, putting a hard stop to the exponential growth.

      I'd even dare to claim we are already at the point where the growth has stopped, but even then you will only see the effect in a decade or so, as there is still plenty of small low-hanging fruit left to pick, just no big improvements.

    • negativeonehalf 2 days ago ago

      Definitely a big part of it. Chips enable better EDA tools, which enable better chips. First it was analytic solvers and simulated annealing, now ML. Exciting times!
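
      (For the curious, a minimal sketch of the simulated-annealing style of placer mentioned above; the netlist, grid, and cost function here are toy stand-ins, not any real tool's internals:)

        import math, random

        # Toy placement: 8 cells on a 4x4 grid, minimizing total
        # half-perimeter wirelength (HPWL) over a ring of 2-pin nets.
        cells = list(range(8))
        nets = [(i, (i + 1) % 8) for i in range(8)]
        slots = [(x, y) for x in range(4) for y in range(4)]
        random.shuffle(slots)
        pos = {c: slots[c] for c in cells}  # cell -> (x, y)

        def hpwl(p):
            return sum(abs(p[a][0] - p[b][0]) + abs(p[a][1] - p[b][1])
                       for a, b in nets)

        T, cost = 5.0, hpwl(pos)
        for step in range(20000):
            a, b = random.sample(cells, 2)
            pos[a], pos[b] = pos[b], pos[a]      # propose: swap two cells
            new = hpwl(pos)
            if new <= cost or random.random() < math.exp((cost - new) / T):
                cost = new                       # accept (sometimes uphill)
            else:
                pos[a], pos[b] = pos[b], pos[a]  # reject: swap back
            T *= 0.9995                          # cool down
        print("final wirelength:", cost)

      Real annealers use the same accept/reject skeleton, just with much richer move sets and cost models.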

    • bgnn 2 days ago ago

      That's wrong. Chip design and Moore's law have nothing to do with each other.

      • smaddox 2 days ago ago

        To clarify what the parent is getting at: Moore's law is an observation about the density (and, really, about the cost) of transistors. So it's about the fabrication process, not about the logic design.

        Practically speaking, though, maintaining Moore's law would have been economically prohibitive if circuit design and layout had not been automated.

        • bgnn 2 days ago ago

          That's true. The impact on design is the reverse of what the post I replied to suggested, though. Since we got more density, we had more compute available to automate more, which made it economically viable: every generation had enough compute to design the next generation. Now that device scaling has stagnated, we have more (financially viable) compute available to us than before, relative to design complexity. This is why these AI-generated floorplans have become viable, I think. I'm not sure it would have been the same if device scaling were still continuing at its peak.

          I want to emphasize the biggest barrier to IC design for outsiders: prohibitively expensive software licenses. IC design software costs are much higher than the compute and production costs, and often a similar order of magnitude to, but definitely higher than, engineer salaries. This is because of the monopoly of the three big companies (Synopsys, Cadence, and Mentor Graphics). What excites me the most about stuff like the OP isn't the AI (everyone is doing that); it's the promise of more competition and even open-source tool options. In the good old days companies used to have their in-house tools. They were all sacrificed (and pretty much none were made open source) because investors thought it wasn't a core business, so it was inefficient. Now even Nvidia or Apple have no alternative.

  • bachback 2 days ago ago

    DeepMind is producing science vapourware while OpenAI is changing the world.

  • amelius 3 days ago ago

    Can this be abstracted into a more generally applicable optimization method?

  • kayson 3 days ago ago

    I'm pretty sure Cadence and Synopsys have both released reinforcement-learning-based placement and floorplanning tools. How do they compare...?

    • RicoElectrico 2 days ago ago

      Synopsys tools can use ML, but not for the layout itself; rather, for tuning the variables that go into the physical design flow.

      > Synopsys DSO.ai autonomously explores multiple design spaces to optimize PPA metrics while minimizing tradeoffs for the target application. It uses AI to navigate the design-technology solution space by automatically adjusting or fine-tuning the inputs to the design (e.g., settings, constraints, process, flow, hierarchy, and library) to find the best PPA targets.
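
      In other words, black-box tuning of flow knobs rather than placing cells. A rough sketch of the idea; run_flow, the knob names, and the scoring here are made-up stand-ins, not DSO.ai's actual interface:

        import random

        # Hypothetical stand-in for a full synthesis/place/route run
        # returning (power, performance, area). Real runs take hours.
        def run_flow(knobs):
            power = 1.0 + 0.3 * knobs["effort"] - 0.2 * knobs["vt_mix"]
            perf = 2.0 * knobs["effort"] + knobs["target_mhz"] / 1000.0
            area = 1.5 - 0.4 * knobs["utilization"]
            return power, perf, area

        def ppa_score(power, perf, area):
            return perf - power - area  # toy scalarization of the PPA tradeoff

        space = {
            "effort": lambda: random.uniform(0.0, 1.0),
            "vt_mix": lambda: random.uniform(0.0, 1.0),
            "utilization": lambda: random.uniform(0.5, 0.9),
            "target_mhz": lambda: random.choice([800, 1000, 1200]),
        }

        # Plain random search; the real tool reportedly explores the
        # space far more cleverly than this.
        best, best_score = None, float("-inf")
        for trial in range(200):
            knobs = {k: draw() for k, draw in space.items()}
            score = ppa_score(*run_flow(knobs))
            if score > best_score:
                best, best_score = knobs, score
        print(best_score, best)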

    • hulitu 2 days ago ago

      They don't. You cannot compare reality (Cadence, Synopsys) with hype (Google).

      • pelorat 2 days ago ago

        So you're basically saying that Google should have used existing tools to layout their chip designs, instead of their ML solution, and that these existing tools would have produced even better chips than the ones they are actually manufacturing?

        • dsv3099i 2 days ago ago

          It’s more like no one outside of Google has been able to reproduce Google’s results. And not for lack of trying. So if you’re outside of Google, at this moment, it’s vapor.

        • hulitu 2 days ago ago

          > So you're basically saying that Google should have used existing tools to layout their chip designs, instead of their ML solution

          Did they test their ML solution? With real-world chips? Are there any benchmarks showing that their chip performs better?

  • idunnoman1222 3 days ago ago

    So one other designer plus Google is using AlphaChip for their layouts? Not sure about that title; call me when AMD and Nvidia are using it.

  • 7e 2 days ago ago

    Did it, though? Google’s chips still aren’t very good compared with competitors.

  • DrNosferatu 3 days ago ago

    Yet, their “frontier” LLM lags all the others…

  • abc-1 3 days ago ago

    Why aren’t they using this technique to design better transformer architectures, or completely novel machine learning architectures in general? Are plain (or mostly plain) transformers really the peak? I find that hard to believe.

    • jebarker 2 days ago ago

      Because chip placement and the design of neural network architectures are entirely different problems, so this solution won't magically transfer from one to the other.

      • abc-1 2 days ago ago

        And AlphaGo is trained to play Go? The point is training a model through self-play to build neural network architectures. If it can play Go and do chip placement, I don’t see why it couldn’t be trained to build novel ML architectures.
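
          (Something in this vein exists: Zoph & Le's RL-based neural architecture search trains a controller whose reward is each candidate network's validation accuracy. A toy sketch of that loop, with a fake proxy score standing in for actually training each candidate:)

            import math, random

            # Search space: one choice per slot. The reward below is a
            # fake proxy; real NAS trains each candidate network and
            # uses its validation accuracy instead.
            slots = {
                "depth": [2, 4, 8, 16],
                "width": [64, 128, 256],
                "attention": [True, False],
            }
            logits = {k: [0.0] * len(v) for k, v in slots.items()}

            def sample_arch():
                arch, idx = {}, {}
                for k, opts in slots.items():
                    w = [math.exp(l) for l in logits[k]]
                    i = random.choices(range(len(opts)), weights=w)[0]
                    arch[k], idx[k] = opts[i], i
                return arch, idx

            def proxy_reward(arch):  # pretend bigger + attention is better
                return (0.1 * arch["depth"] + 0.001 * arch["width"]
                        + (0.5 if arch["attention"] else 0.0))

            baseline, lr = 0.0, 0.1
            for step in range(2000):
                arch, idx = sample_arch()
                r = proxy_reward(arch)
                baseline += 0.01 * (r - baseline)  # moving-average baseline
                for k, i in idx.items():
                    # crude score-following update; proper REINFORCE
                    # uses the full softmax gradient
                    logits[k][i] += lr * (r - baseline)
            best = {k: slots[k][max(range(len(v)), key=v.__getitem__)]
                    for k, v in logits.items()}
            print(best)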

        • jebarker 2 days ago ago

          Sure, they could choose to work on that problem. But why do you think that's a more important or worthwhile problem than chip design or any other problem they might choose to work on? My point was that it's not trivial to make self-play work for some other problem, so given all the problems in the world, why did you single out neural network architecture design? Especially since it's not the transformer architecture that is really holding back AI progress.

          • abc-1 a day ago ago

            Recursive self-improvement.

  • mikewarot 3 days ago ago

    I understand the achievement, but I can't square it with my belief that uniform systolic arrays will prove to be the best general-purpose compute engines for neural networks. Those are almost trivial to route, by their nature.

    • ilaksh 3 days ago ago

      Isn't this already the case for large portions of GPUs? Like, many of the blocks would be systolic arrays?

      I think the next step is arrays of memory-based compute.

      • mikewarot 3 days ago ago

        Imagine a bit-level systolic array: just a sea of LUTs, with latches that let the magic of graph coloring remove all timing concerns by clocking everything in two phases.

        GPUs still treat memory as separate from compute; they just have wider bottlenecks than CPUs.
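
        To make the "trivial to route" point concrete: in a systolic array each cell only ever talks to its immediate neighbors. A toy word-level, output-stationary matmul in that style (a bit-level LUT array as described above pushes the same idea further down):

          import numpy as np

          # A streams in from the left, B from the top, each skewed by
          # one cycle per row/column; every cell reads only its left and
          # top neighbors, so the wiring is pure nearest-neighbor.
          def systolic_matmul(A, B):
              n = A.shape[0]
              acc = np.zeros((n, n))    # one accumulator per cell
              a_reg = np.zeros((n, n))  # value arriving from the left
              b_reg = np.zeros((n, n))  # value arriving from above
              for t in range(3 * n - 2):           # cycles to fill + drain
                  for i in reversed(range(n)):     # update far cells first
                      for j in reversed(range(n)):
                          a_reg[i][j] = (a_reg[i][j - 1] if j else
                                         A[i][t - i] if 0 <= t - i < n else 0.0)
                          b_reg[i][j] = (b_reg[i - 1][j] if i else
                                         B[t - j][j] if 0 <= t - j < n else 0.0)
                          acc[i][j] += a_reg[i][j] * b_reg[i][j]
              return acc

          A, B = np.random.rand(4, 4), np.random.rand(4, 4)
          assert np.allclose(systolic_matmul(A, B), A @ B)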