AI chipmaker Cerebras files for IPO

(cnbc.com)

198 points | by TradingPlaces 15 hours ago

108 comments

  • knowitnone 14 hours ago

    NVIDIA is pretty established, but there's also Intel, AMD, and Google to contend with. Sure, Cerebras is unique in that they make one large chip out of an entire wafer, but nothing prevents these other companies from doing the same thing. Currently they choose not to because of wafer economics, but if they did, Cerebras would pretty much lose their advantage. https://www.servethehome.com/cerebras-wse-3-ai-chip-launched... 56x the size of an H100 but only 8x the performance isn't something I would brag about. I expected much higher performance since all processing is on one wafer. Something doesn't add up (I'm no system designer). Also, at $3.13 million per node, one could buy 100 H100s at $30k each (not including system, cooling, cluster, etc.). Based on price/performance, Cerebras loses IMO.
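    A quick back-of-envelope check of that comparison (a sketch only: the prices and the 8x speedup are this thread's rough figures, not vendor-confirmed numbers):

```python
# Price/performance sketch. All inputs are rough figures quoted in the
# discussion above, not vendor-confirmed numbers.
cs3_node_price = 3.13e6        # quoted Cerebras node price, USD
h100_price = 30e3              # rough per-card H100 street price, USD
claimed_speedup_vs_h100 = 8    # claimed speedup over a single H100

h100s_for_same_budget = cs3_node_price / h100_price
print(f"H100s for one node's budget: {h100s_for_same_budget:.0f}")
print(f"Claimed speedup vs one H100: {claimed_speedup_vs_h100}x")
# Roughly 104 H100s of budget vs ~8 H100s of claimed throughput, though
# this ignores the interconnect, cooling, and rack infrastructure that
# the Cerebras node price already includes.
```

    As the replies below point out, the conclusion changes a lot depending on what baseline and what supporting infrastructure you count.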

    • fnordpiglet 13 hours ago

      I think the wafer itself isn’t the whole deal. If you watch their videos and read the link you posted the wafer size allows them to stack them in a block with integrated power and cooling at a higher density than blades and attach enormous amounts of memory. Not including the system, cooling, cluster, etc seems like a relatively unfair comparison too given the node includes all of those things - which are very expensive when considering enterprise grade data center hardware.

      I don’t think their value add is simply “single wafer” with all other variables the same. In fact I think the block and system that gets the most out of that form factor is the secret sauce, and not as easily replicated - especially since the innovations are almost certainly protected by an enormous moat of patents and guarded by a legion of lawyers.

      • billconan 13 hours ago

        Can their system attach memory? From what I read, it doesn't seem to be able to: https://www.reddit.com/r/mlscaling/comments/1csquky/with_waf...

        • wmf 12 hours ago

          I think they do have external memory that they use for training.

        • winwang 12 hours ago

          Surprising. DRAM (and more importantly high-bandwidth DRAM) seems to be scaling significantly better than SRAM -- and I'm not sure if that could be seriously expected to shift.

      • 7e 13 hours ago

        At the end of the day, Cerebras has not submitted any MLPerf results (that I am aware of). That means they are hiding something. Something not very competitive.

        So, performance is iffy. Density for density sake doesn’t matter since clusters are power limited.

        • twothreeone 11 hours ago
          • rajnathani 7 hours ago

            Nothing for the training part of MLPerf's benchmark. If they're competing just on inference, then they have stiff competition from specialized NPU-for-inference makers like Hailo (it's even part of the official Raspberry Pi AI kit), Qualcomm, and tons of other players; from some players using optics instead of electrons for inference, such as Lightmatter; and from SIMD on highly abundant CPU servers, which are never in shortage unlike GPUs (and have recently gained support for specialized inference ops beyond plain SIMD ones).

        • KeplerBoy 6 hours ago

          I guess it's a software problem.

          Without optimized implementations their performance will look like shit, even if their chip were years ahead of the competition.

          Building efficient implementations with an immature ecosystem and toolchain doesn't sound like a good time. But yeah, huge red flag. If they can't get their chip to perform there's no hope for customers.

          • beng-nl 6 hours ago

            This hypothesis is an eerily exact instance of the tinygrad (tinycorp) thesis, along the lines of

            “nvidia’s chip is better than yours. If you can’t make your software run well on nvidia’s chip, you have no hope of making it run well on your chip, least of all the first version of your chip.”

            That’s why tinycorp is betting on a simple ML framework (tinygrad, which they develop and make available open source) whose promise is that, because the framework needs so few primitive operations, it’ll be very easy to get it running on a new chip (e.g. yours), and then you can run ML workloads.

            I’m not a (real) expert in the field but find the reasoning compelling. And it might be a good explanation for why competition for Nvidia exists in hardware, but seemingly not in practice (i.e. including software that actually does something with the hardware).

            • KeplerBoy 5 hours ago

              Yes, sure. I'm occasionally reading up on what George Hotz is doing with tinygrad and him ranting about AMD hardware certainly has influenced my opinion on non-Nvidia hardware to some degree - even though I take his opinion with a grain of salt, he and his team are clearly encountering some non-trivial issues.

              I would love to try some of the stuff I do with CUDA on AMD hardware to get some first-hand experience, but it's a tough sell: they are not as widely available to rent, and telling my boss to order a few GPUs just so we can inspect that potential mess for ourselves isn't convincing either.

    • artemisart 13 hours ago

      Correction: it's 8x the TFLOPS of a DGX (8x H100), not of 1 H100. But it's true that if it stays at $3M it's probably too much; I don't think the memory bottleneck on GPUs is large enough to justify this price/performance.

      • Tepix 4 hours ago

        So, the corrected statement is:

        "56x the size of H100 but only 64x the performance improvement"

        Doesn't sound too shabby.
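        For concreteness, the corrected arithmetic (using the thread's 8x-vs-DGX and 56x-area figures, which are claims from this discussion, not measured numbers):

```python
# The 8x speedup is versus a DGX containing 8 H100s, so per single
# H100 it works out to 8 * 8 = 64x. Figures are from this thread.
speedup_vs_dgx = 8
h100s_per_dgx = 8
die_area_ratio = 56    # claimed WSE-3 area vs one H100

speedup_vs_h100 = speedup_vs_dgx * h100s_per_dgx
print(speedup_vs_h100)                              # 64
print(round(speedup_vs_h100 / die_area_ratio, 2))   # perf per unit area
```

        By these numbers the wafer delivers slightly more performance per unit of silicon area than the H100, not less.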

      • throwup238 13 hours ago

        The company started in 2015 so I think they are (were?) banking on SRAM scaling better than it has in recent years.

      • bee_rider 9 hours ago

        If you have a problem that you can’t easily split up into 64 chunks, I guess it makes more sense, right?

    • jfoster 11 hours ago

      > 56x the size of H100 but only 8x the performance improvement isn't something I would brag about.

      It doesn't sound too bad for a 9-year-old company. Nvidia had a 20-year head start. I would expect that they will continue to shrink it and increase performance. At some point, that might become compelling?

      • statguy 6 hours ago

        Nvidia is also going to keep improving, so it will be a moving target.

        • jfoster 6 hours ago

          That's true, but the advantage of having a head start does eventually diminish. They won't catch up to Nvidia in the next couple of years, but they could eventually be a real competitor.

    • Zandikar 10 hours ago

      Comparing a WSE-3 to an H100 without considering the systems they go in, or the cooling, networking, etc. that supports them, means little when doing cost analysis, be it CapEx or TCO. A better (but still flawed) comparison would be a DGX H200 (a cluster of H100's and their essential supporting infra) to a CS-3 (a cluster of WSE-3's and their essential supporting infra in a similar form factor/volume).

      Now, is Cerebras going to eventually beat Nvidia, or at least compete healthily with Nvidia and the other tech titans in the general market or a given lucrative niche of it? No idea. That'd be a cool plot twist, but hard to say. But it's worth acknowledging that investing in a company and buying its products are two entirely separate decisions. Many of Silicon Valley's success stories came from people investing in the potential of what a company could become, not because it was already the best on the market, and if nothing else, Cerebras' approach is certainly novel and promising.

    • pants2 13 hours ago

      Agreed, it just seems like Nvidia chips are going to be easier to produce at scale. Cerebras will be limited to a few niche use-cases, like HFT where hedge funds are using LLMs to analyze SEC filings as fast as possible.

      • BuckYeah an hour ago

        Where/how did you learn of the hedge fund usages?

    • dzhiurgis 12 hours ago

      > wafer economics

      What are they?

      Is this related to defects? Can't they disable parts of defective chip just like other CPUs do? Sounds cheaper than cutting up and packaging chips individually!

      • donavanm 11 hours ago

        Process development, feature size, and ultimate yield are probably what they're after. Yes, for the past 30+ years everyone has used a combination of disabling (“fusing”) unused/unreliable logic on the die. In addition, everyone also “bins” the chips from the same wafer into different SKUs based on stable clock speed, available/fused components, test results, etc. This can be very effective in increasing yield and salable parts.

        My recollection is that there's speculation Cerebras is building in significant duplicate features to account for defects. They can't “bin” their wafers the same way as packaged chips. That will reduce total yield/utilization of the surface area.

        The actual packaging steps are relatively low tech/cost compared to the semiconductor manufacturing. They're commonly outsourced somewhere like Malaysia or Thailand.
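        As a toy illustration of the binning idea described above (the SKU names and thresholds here are invented for illustration, not any vendor's real criteria):

```python
# Toy binning model: dies from one wafer are tested and assigned to
# SKUs by stable clock and working-core count. Names and thresholds
# are invented for illustration.
def bin_die(stable_ghz: float, good_cores: int) -> str:
    if stable_ghz >= 2.0 and good_cores >= 16:
        return "flagship"
    if stable_ghz >= 1.8 and good_cores >= 12:
        return "mainstream"   # weaker/defective cores fused off
    if good_cores >= 8:
        return "budget"
    return "scrap"

wafer_tests = [(2.1, 16), (1.9, 14), (1.6, 10), (1.5, 4)]
print([bin_die(f, c) for f, c in wafer_tests])
# -> ['flagship', 'mainstream', 'budget', 'scrap']
```

        The point is that a wafer-scale part can't be sorted into multiple SKUs this way: every die region has to end up in the same product.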

    • yieldcrv 13 hours ago

      they don’t need an advantage, they just need orders and inventory

      get extorted by nvidia sales people for a 2026 delivery date that gets pushed out if you say anything about it or decline cloud services

      or another provider delivering earlier

      that's what the market wants, and even then, who cares? this company is trying to IPO at what valuation? this article didn't say, but the last valuation was like $1.5bn? so you mean a ~300x delta between this and Nvidia’s valuation if these guys get a handful of orders? ok

      • rwmj 3 hours ago

        At the end of the day it's all made in the same factory. If Nvidia has problems delivering then so does Cerebras.

    • hulitu 7 hours ago

      > Sure Cerebras is unique in that they make one large chip out of the entire wafer

      I'm sure they test it thoroughly. /s

  • cootsnuck 28 minutes ago

    Concerning in terms of the hype bubble now having even more exposure to the stock market. Perhaps less concerning since it's a hardware startup? Nah, nvm, I think this will end up cratered within 3 years.

  • tempusalaria 14 hours ago

    On the one hand, the financials are terrible for an IPO in this market.

    On the other, Nvidia is worth 3trn so they can sell a pretty good dream of what success looks like to investors.

    Personally I would expect them to get a valuation well above the $4bn from the 2021 round, despite the financials not coming close to justifying it.

    • imdoxxingme 2 hours ago

      Saying the financials are terrible is a bit of a stretch. Rapidly growing revenue, decreasing loss/share and a loss/share similar to other companies that IPO'ed this year.

      The more concerning thing is just not having diversity of revenue, since most of it comes from G42.

    • m00x 6 hours ago

      IPOs are coming back. Expect pretty big ones in 2025.

    • TrapLord_Rhodo 8 hours ago

      Rev for last 2 years:

      $24.6M, $78.7M, and a ~$270M run rate ($136.4M in H1 2024)

      Sounds like a rocketship. You also get a better Sharpe ratio if you take some money off the table in the form of leverage and put it in other firms within the industry, e.g. leveraging your NVDA shares and buying Cerebras.

      • JumpCrisscross 7 hours ago

        > take some money off the table in the form of leverage and put it in other firms within the industry. E.G. Leveraging your NVDA shares and buying Cerebras

        Please don't do this. Sell your Nvidia shares and rebalance to Cerebras, whatever. But financially leveraging a high-multiple play to buy a correlated asset (which is also high multiple) is begging for a margin call. You may wind up having been right. But leverage forces you to be right and on time.

        • short_sells_poo 3 hours ago

          You are so on point! A huge number of amateur investors get obliterated on this. Your call may be right, but that's no help if you don't survive to see it realized.

          You may have a hugely profitable idea that could realize crazy gains over a 5 year horizon, but if you get margin called and liquidated in year 3, you'll end up with nothing.

          The magic of investment is compound returns, not crazy leverage. Take some of the crazy Nvidia profits and reinvest it elsewhere where you expect geometric growth. Keep things decently diversified.

    • lobochrome 10 hours ago

      It’ll pop. Then it’ll rot.

  • alecco 4 hours ago

    If Cerebras keeps improving it will be a decent contender to Nvidia. Nvidia's VRAM-to-SRAM transfer is a bottleneck: for inference alone, the chip needs to stream the whole model from VRAM at least once per token (amortized over the batch size). The bottleneck is not the Tensor Cores but the memory transfers. They say it themselves. Cerebras fixes that (at the cost of software complexity and a narrower target solution).
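    A rough sketch of that memory-bound limit (the bandwidth and model-size numbers below are illustrative assumptions, not measured figures):

```python
# Why decode is memory-bound: each generated token requires streaming
# all model weights from off-chip memory once per batch. Numbers are
# illustrative ballparks, not measurements.
hbm_bandwidth = 3.35e12   # ~3.35 TB/s, H100 SXM HBM3 ballpark
params = 70e9             # e.g. a 70B-parameter model
bytes_per_param = 2       # fp16/bf16 weights

ceiling = hbm_bandwidth / (params * bytes_per_param)
print(f"~{ceiling:.0f} tokens/s ceiling at batch size 1")
# Bigger batches amortize the weight traffic; keeping weights in
# on-wafer SRAM removes the off-chip round trip entirely, at a much
# higher cost per byte of memory.
```

    That ceiling is per batch slot, which is why throughput-oriented GPU serving batches aggressively while SRAM-based designs can post high single-stream numbers.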

  • fancyfredbot 6 hours ago

    The only way for Cerebras to actually succeed in the market is to raise funds. They need better software, better developer relations, and better hardware too. It's a gamble, but if they can raise enough money then there's a chance of success, whereas if they can't it's pretty hopeless.

  • GeorgeTirebiter 14 hours ago

    Cerebras is well-known in the AI chip market. They make chips that are an entire wafer.

    https://spectrum.ieee.org/cerebras-chip-cs3

    • throwup238 14 hours ago

      Cerebras made a great (now deleted) video on the whole computer hosting the wafer: https://web.archive.org/web/20230812020202/https://www.youtu...

      It’s fascinating.

      • botro 11 hours ago

        This is a great video, thank you for sharing. My favorite part:

        "...next we have this rubber sheet, which is very clever, and very patented!"

      • dzhiurgis 12 hours ago

        TIL - web archive saves youtube videos

        Wow 200k amps in a chip. Whole thing looks like an early computer from 50s.

    • alephnerd 14 hours ago

      Yep! Them, SambaNova, and Groq are super exciting mid-late stage startups imo.

      • Der_Einzige 12 hours ago

        Shhhhh, stop telling the normies about the future!

        And especially don't tell them to start looking into who "sovereign clouds" actually are!

    • ericd 14 hours ago

      Interesting that they’ve scaled on-chip memory sublinearly with the growth of transistors between their generations, I would’ve thought they would try to bump that number up. Maybe it’s not a major bottleneck for their training runs?

    • tonetegeatinst 13 hours ago

      I'd bet that making a chip the size of the wafer has the benefit of not losing any silicon to dicing the wafer up, the way desktop CPU or GPU chips cut from a wafer do. The major downside is you need either a massive X and Y exposure size or to break the wafer into smaller exposures, which means you still need to focus on alignment between the steps. And if a defect can't be corrected, is that wafer just scrap?

      • throwup238 13 hours ago

        They fuse off sections of the wafer with defects just like other manufacturers do in monolithic CPUs (as opposed to chiplets like AMD).

    • bigmattystyles 14 hours ago

      How does one cool that!? Heck power it...

    • idiotsecant 13 hours ago

      Monolithic silicon doesn't just get 2x as expensive when it gets 2x as large; bigger silicon is massively more expensive. I'm not sure that making each part require a large chunk of perfect wafer is a fantastic idea, especially when you're looking to unseat juggernauts who have a great deal of experience making a high quality product already.

  • mlboss 9 hours ago

    The real winner in chip war is TSMC. Everyone is using them to make chips.

    • bjornsing 5 hours ago

      Yeah I also have a feeling more value will gravitate towards the really hard stuff once we’ve got the NN architectures fairly worked out and stable.

      To put my money where my mouth is I’m long TSMC and ASML among others, and (moderately) short NVidia. Very long the industry as a whole though.

  • zone411 12 hours ago

    They have a cloud platform. I just ran a test query on their version of Llama 3.1 70B and got 566 tokens/sec.

    • greesil 11 hours ago

      Is that a lot? Do they have MLPerf submissions?

      • zone411 11 hours ago

        Yes, that's very fast. The same query on Groq, which is known for its fast AI inference, got 249 tokens/s, and 25 tokens/s on Together.ai. However, it's unclear what (if any) quantization was used and it's just a spot check, not a true benchmark.

        https://www.zdnet.com/article/cerebras-did-not-spend-one-min...

        • Tetraslam 11 hours ago

          Met them at an MIT event last week, they don't quantize any models.

  • Nokinside 4 hours ago

    They use the whole wafer for a chip (wafer scale). The WSE-3 chip is optimized for sparse linear algebra ops and uses TSMC's 5nm process.

    Their idea is to have 44 GB of SRAM per chip. SRAM is _very_ expensive compared to DRAM (about two orders of magnitude).

    It's easy to design a larger chip. What determines the price/performance ratio are things like:

    - performance per chip area.

    - yield per chip area.
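    To put rough numbers on that cost gap (the $/GB figures below are loose assumptions for illustration; only the ~two-orders-of-magnitude ratio comes from the comment above):

```python
# SRAM-vs-DRAM cost sketch. The $/GB values are loose illustrative
# assumptions; only the ~100x ratio is taken from the comment above.
dram_usd_per_gb = 3.0
sram_usd_per_gb = dram_usd_per_gb * 100   # ~two orders of magnitude
wse3_sram_gb = 44

print(f"44 GB as DRAM: ~${dram_usd_per_gb * wse3_sram_gb:,.0f}")
print(f"44 GB as SRAM: ~${sram_usd_per_gb * wse3_sram_gb:,.0f}")
# What the premium buys is bandwidth and latency: on-die SRAM is
# reachable in nanoseconds at enormous aggregate bandwidth, versus
# DRAM sitting behind a comparatively narrow off-chip bus.
```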

  • parentheses 8 hours ago

    So many things here smell funny...

    I have never heard of any models trained on this hardware. How does a company IPO on the basis of having the "best tech" in this industry when all the top models are trained on other hardware?

    It just doesn't add up.

    • cootsnuck 25 minutes ago

      I thought they were for inference, not training... either way, it kind of is concerning that I've heard about them plenty from the hype bubble but apparently still don't really understand what they do.

    • ClassyJacket 8 hours ago

      Plenty of companies IPO before releasing anything, or before building a large audience. That's how lots of things that require a long lead time and a large initial investment get made. It's just a bigger risk for the investors.

      Tesla IPOed in 2010 after selling only a few hundred Roadsters.

  • lamontcg 10 hours ago

    Kind of vaguely reminds me of Transmeta vs Intel/AMD back in ~2000.

  • gdiamos 12 hours ago

    Cerebras has a real technical advantage in development of wafer scale.

  • gyre007 7 hours ago

    I don’t know enough to say whether they’ll fail or be successful, but I am wondering who will underwrite this IPO — they must have balls of steel and confidence galore

  • ggm 12 hours ago

    Wafer scale integration has been a thing since wafers. Yet, I almost never read of anyone taking it the full distance to a product. I don't know if it turns out the yield per die per wafer or the associated technology problems were the glitch, but it feels like a good idea which never quite makes it out the door.

    • desertrider12 12 hours ago

      They don't give yield numbers but this says that they get acceptable yields by putting extra cores on the silicon and then routing around the defective ones. https://cerebras.ai/blog/wafer-scale-processors-the-time-has...
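      The spare-core idea can be sketched with a simple defect model (the per-core defect probability and core counts below are made up for illustration, not Cerebras' real numbers):

```python
# Binomial defect model: a wafer region with spare cores is usable if
# at most `spares` cores are defective. Parameters are made up for
# illustration, not Cerebras' real numbers.
from math import comb

def region_yield(n_cores: int, spares: int, p_defect: float) -> float:
    # P(at most `spares` of n_cores are defective)
    return sum(
        comb(n_cores, k) * p_defect**k * (1 - p_defect)**(n_cores - k)
        for k in range(spares + 1)
    )

p = 0.01  # assumed per-core defect probability
print(f"no spares, 100 cores: {region_yield(100, 0, p):.3f}")
print(f"5 spares, 105 cores:  {region_yield(105, 5, p):.3f}")
# A handful of spare cores takes the usable fraction from ~37% to
# ~99.9%, which is the essence of routing around defects.
```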

      • ggm 10 hours ago

        I found this bit interesting: They worked with TSMC to ensure the off-die areas used for test and other foundry purposes have been more clearly circumscribed so they can use the blanks between the chips for the inter-chip connects. The distances are kept short and they can avoid a lot of encode/decode logic costs associated with how people used to do this:

        "The cross scribe line wiring has been developed by Cerebras in partnership with TSMC. TSMC allowed us to use the scribe lines for tens of thousands of wires. We were also allowed to create certain keep-out zones with no TSCM test structures where we could embed Cerebras technology. The short wires (inter-die spacing is less than a millimeter) enable ultra-high bandwidth with low latency. The wire pitch is also comparable to on-die, so we can run the inter-die wires at the same clock as the normal wires, with no expensive serialization/deserialization. The overheads and performance of this homogeneous communication are far more attractive than those of multi-chip systems that involve communication through package boundaries, transceivers, connecters or cables, and communication software interfaces."

      • mastax 10 hours ago

        I believe I’ve heard them say they have 100% yield. They haven’t made very many yet though, on the order of 100.

  • mgh2 12 hours ago
  • brk 14 hours ago

    I’m going to go ahead and predict this flubs long term. Not only is what they are doing very challenging, I’ve had some random brokerage house reach out to me multiple times about investing in this IPO. When your IPO process resorts to cold calling, I don’t think it’s a good sign. Granted, I have some associations with AI startups, but I don’t think that had anything to do with the outreach from the firm.

    • drcode 14 hours ago

      Agreed, it seems like NVIDIA would be happy to make whole-wafer chips if it seemed like a good play.

      My guess is there are a lot of bespoke limitations that the software has to work around to run on a "whole wafer" chip, and even companies that have 99% similar designs to Nvidia already are struggling to deal with software incompatibilities, even with such a tiny difference.

    • imdoxxingme 2 hours ago

      You do realize that brokerages earn commissions on selling shares, so why wouldn't they contact people who may be interested?

  • metadat 14 hours ago

    Does cerebras make gaming GPUs, or is it enterprise-only?

    • ericd 14 hours ago

      Very solidly enterprise-only. They make single chips that take an entire wafer, use something like 10 kilowatts, and have liquid cooling channels that go through the chip. Systems are >$1M.

      • ianbicking 11 hours ago

        It's the return of the supercomputer! I really didn't think the supercomputer would come back as a thing, for so long it seemed stuck as a weird research project that only made sense for a tiny set of workloads... but it does make sense now

    • elorant 14 hours ago

      The chip is huge. It wouldn't fit in any conceivable PC form factor.

    • eikenberry 14 hours ago

      They sound more like NPUs or TPUs than GPUs. Though that doesn't answer the question about the market they are targeting.

  • system2 9 hours ago

    Is it a good idea to go IPO when the balance sheet looks terrible?

  • brcmthrowaway 11 hours ago

    How does Cerebras compare to D-Matrix?

  • bloqs 12 hours ago

    They have zero moat

  • will-burner 14 hours ago

    This is the first I've heard of Cerebras Systems.

    From the article

    >Cerebras had a net loss of $66.6 million in the first six months of 2024 on $136.4 million in sales, according to the filing.

    That doesn't sound very good.

    What makes them think they can compete with Nvidia, and why IPO right now?

    Are they trying to get government money to make chip fabs like Intel or something?

    • hn_throwaway_99 14 hours ago

      You seem surprised that this company is having an IPO to actually raise funds for operations and expansion, as opposed to just an "exit" where VCs and other insiders can dump their shares onto the broader public.

      I might be a bit suspicious if a company in some low-capital-intensive industry was IPOing while unprofitable, but this is chip making. Even if they're not making their own fabs this is still an industry with high capital requirements.

      We should be thrilled at a company actually using an IPO for its original intended purpose as opposed to some financialization scheme.

      • knowitnone 13 hours ago

        they don't make chips. they design and contract TSMC to fab the chips. The high capital is in design tools and engineers.

        • hn_throwaway_99 13 hours ago

          Thanks - I said that in my comment, but then just realized I had a typo of "fans" where it should have said "Even if they're not making their own fabs..."

      • theogravity 13 hours ago

        Does this mean that they couldn't find VCs to raise more cash?

        • ketzo 11 hours ago

          VCs offer cash on different terms than the public does. This just means Cerebras believes it can get capital more cheaply (or on otherwise better terms) than it can from VCs.

          That might mean VCs are turning them down, yeah, but that’s just one of many possible factors into “where do we raise money”

        • fakedang 8 hours ago

          Cerebras is currently heavily backed by the Emirati government's sovereign wealth fund.

    • est31 14 hours ago

      Nvidia's moat is real but not big enough that one can't surpass it with a lot of engineering. It's not the only company making AI accelerators, and this has been the case for many years already. The first TPU was introduced in 2015. Nvidia has just managed to get a leader position in the race.

      • AlotOfReading 14 hours ago

        Saying it's "just" a lot of engineering effort to catch up isn't wrong, but it understates the reality. There are very few organizations on earth that have the technical and financial resources to meaningfully compete with even small parts of Nvidia's portfolio. Nvidia's products benefit from that breadth of strengths and the volumes they ship.

        They don't just make accelerators; they'll sell you the hardware too (unlike TPUs). They don't just sell you the hardware; the software ecosystem will work too (unlike AMD or Intel). That hardware won't just do a lot of computations; it'll also have a lot of off-chip memory bandwidth (vs Cerebras and others). Need to embed those capabilities in a device that can't fit a wafer cabinet or a server rack's worth of compute? Nvidia will sell you similar hardware that uses a similar stack, certified for your industry (e.g. automotive). Take any of that away and you're left with a significantly weaker offering.

        Also they benefit from the priority of paying fabs a lot of money and placing a lot of orders.

        If anything, Nvidia is less dominant than they should be because they've managed to ensure absolutely no one wants to buy from them when there are viable alternatives.

        • kortilla 13 hours ago

          People said the same about Cisco, Intel, IBM etc. It will only be a matter of time for companies to eat into the high margin stuff for specific use-cases and grow from there.

          • skeptrune 12 hours ago

            There's something weird about the market right now in that all the AI budgets being used to buy GPUs are loss-leading. Orgs are treating the spend as a waste anyways, so I suspect they aren't going to be looking to cut costs. Makes Cerebras a hard sell imo.

      • talldayo 11 hours ago

        > Nvidia's moat is real but not big enough that one can't surpass it with a lot of engineering.

        Yes, but you also need a lot of capital if you want node parity with them. Nvidia (supposedly) spent an estimated $9 billion getting onto TSMC's 4nm node. https://www.techspot.com/news/93490-nvidia-reportedly-spent-...

    • will-burner 14 hours ago

      > Taiwan Semiconductor Manufacturing Company makes the Cerebras chips. Cerebras warned investors that any possible supply chain disruptions may hurt the company.

      They get their chips from the same company that Nvidia does.

      • ajb 14 hours ago

        Virtually any competitors to Nvidia would be in the same position.

        It's not necessarily to TSMC's advantage for Nvidia to become a monopolist either, although they wouldn't be totally dependent on Nvidia even if they did because TSMC serves every chip market.

      • alephnerd 14 hours ago

        They both contract TSMC to fabricate their chips.

        The actual design and R&D is still done by Nvidia, Cerebras, AMD, Groq, etc.

        Think of TSMC like Kinko's - they do printing and fabrication which is very low margins.

        The main PMF for Cerebras is in simulations, drug discovery, and ofc ML.

        As I've mentioned before on HN, Public-Private Drug Discovery and NatLab research has been a major driver for HPC over the past 20 years.

        • est31 14 hours ago

          TSMC has a market cap of $0.9T. It would be the 7th largest US company by market cap if it were one. Manufacturing chips is extremely profitable, at least in the current climate. It used to be that software was more profitable than hardware, which is more commoditized, but AI gave hardware companies a renaissance of sorts.

          It's not a simple process at all but requires a lot of engineering and engineers to do it.

          https://companiesmarketcap.com/usa/largest-companies-in-the-... https://companiesmarketcap.com/tsmc/marketcap/

          • alephnerd 14 hours ago

            > Manufacturing chips is extremely profitable

            It only became profitable NOW in the last 2-3 years.

            Before that, foundry after foundry was shutting down or merging.

            TSMC, UMC, Samsung, Intel Foundry Services, and GloFo are the last men standing after the severe contraction of the foundry model in the 2000s-2010s due to its extremely high upfront costs and lack of moat to prevent commodification.

        • extesy 14 hours ago

          TSMC margins are over 30% and growing [1] - that's very far from "low".

          [1] https://www.macrotrends.net/stocks/charts/TSM/taiwan-semicon...

          • alephnerd 14 hours ago

            30% net due to a near monopoly and a recent upswing due to Nvidia.

            Almost every other foundry system died because of low net margins.

            Software (and fabless hardware like chip design) is expected to have 60-70% gross margins or the ability to reach that.

            Semiconductors is part of TMT just like Software or Telecom, and this has an impact on available liquidity.

            This is why TSMC is heavily subsidized by the Taiwanese government.

            • extesy 14 hours ago

              TSMC is neither software nor fabless. I'm not sure we are talking about the same company; there seems to be some disconnect here. For a hardware business, 30% margins are high; Apple is one of the most famous exceptions.

              • alephnerd 14 hours ago

                > For hardware business

                When a foundry wishes to raise capital from the private or public markets, it's bucketed under TMT - which includes software and fabless hardware as well.

                This means it's almost impossible to raise capital without a near monopoly and/or government support and intervention - which is what Taiwan did for TSMC and UMC - because the upfront costs are too high and the margins are much lower compared to other subsegments in the same sector.

                This is why industrial subsidies like the CHIPS Act are enacted: to minimize the upfront cost of some very CapEx-heavy projects (which almost everything foundry-related is).

        • Cyph0n 14 hours ago

          > Think of TSMC like Kinko’s

          What an amazingly reductive analogy :)

        • typon 13 hours ago

          Kinko's is not the pinnacle of human engineering - TSMC is. A slight difference there.

    • ericd 14 hours ago

      Compare it to the same period last year ($8.7M in sales). That’s a pretty solid growth rate.

    • macawfish 14 hours ago

      Their tech is very impressive, look it up.

      • pldpb 4 hours ago

        It's a dead end. SRAM doesn't scale on advanced nodes.

        Similar to Tenstorrent, who chose GDDR instead of HBM; they thought production AI models wouldn't get bigger than GPT-3.5 due to cost.