What happened to WebAssembly

(emnudge.dev)

321 points | by enz 16 hours ago ago

304 comments

  • Ameo 15 hours ago ago

    It seems to me that Wasm largely succeeded and meets most, if not all, of the goals it was created for. The article backs this up by listing the many niches in which it's found support, and I personally have deployed dozens of projects (both personal and professional) that use Wasm as a core component.

    I'm personally a big fan of Wasm; it has been one of my favorite technologies ever since the first time I called malloc from the JS console when experimenting with an early version of Emscripten. Modern JS engines can be almost miraculously fast, but Wasm still offers the best performance and much higher levels of control over what's actually running on the CPU. I've written about this in the past.
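
    To give a flavor, here's a minimal sketch of that kind of console experiment (assuming an Emscripten build that exports _malloc/_free and the HEAPU8 heap view; names follow Emscripten conventions, not any specific project of mine):

        const ptr = Module._malloc(1024);                           // reserve 1 KiB of wasm linear memory
        new Uint8Array(Module.HEAPU8.buffer, ptr, 1024).fill(42);  // write to it directly from JS
        console.log(Module.HEAPU8[ptr]);                            // 42
        Module._free(ptr);                                          // give it back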

    The only way it really fell short is that a lot of people were predicting it would become a sort of total replacement for JS+HTML+CSS for building web apps, and in that regard I'd have to agree with the article. It could be the continued lack of DOM bindings, which have been considered a key missing piece for several years now, or maybe something else more fundamental.

    I've tried out some of the Wasm-powered web frameworks like Yew and not found them to provide an improvement for me at all. It just feels like an awkwardly bolted-on layer on top of JS and CSS without adding any new patterns or capabilities. Like you still have to keep all of the underlying semantics of the way JS events work, you still have to keep the whole DOM and HTML element system, and you also have to deal with all the new stuff the framework introduces on top of that.

    Things may be different with other frameworks like Blazor which I've not tried, but I just find myself wanting to write JS instead. I openly admit that it might just be my deep experience and comfort building web apps using React or Svelte though.

    Anyway, I strongly feel that Wasm is a successful technology. It's probably in a lot more places than you think, silently doing its job behind the scenes. That, to me, is a hallmark of success for something like Wasm.

    • xipix 14 hours ago ago

      The article seems to evaluate Wasm as if it were a framework upon which apps are built. It's not that; it's an orthogonal technology allowing CPU optimisations and reuse of native code in the browser. Against that expectation, it has been a huge success despite not yet reaching bare-metal levels of performance and energy efficiency.

      One such example: audio time stretch in the browser based upon a C++ library [1]. There is no way an implementation in JS could deliver (a) similar performance or (b) source-code portability to native apps.

      [1] https://bungee.parabolaresearch.com/change-audio-speed-pitch

      • coldtea 13 hours ago ago

        >despite not yet reaching bare-metal levels of performance and energy efficiency.

        "Not yet"? It will never reach "bare-metal levels of performance and energy efficiency".

        • flohofwoe 11 hours ago ago

          FWIW the native and WASM versions of my home computer emulators are within about 5% of each other (on an ARM Mac), e.g. more or less 'measuring noise':

          https://floooh.github.io/tiny8bit/

          You can squeeze out a bit more by building with -march=native, but then there's no reason that a WASM engine couldn't do the same.

          • jasonjmcghee 9 hours ago ago

            SIMD and multithreading support really helped with closing the performance gap.

            Still surprised about the 5% though - I've generally seen quite a bit more of a gap.

            • flohofwoe 7 hours ago ago

              Maybe the emulator code is particularly WASM friendly ... it's mostly bit twiddling on 64-bit integers with very little regular integer math (except incrementing counters) and relatively few memory load/stores.

        • kannanvijayan 10 hours ago ago

          I'd have to take a contrary view on that. It'll take some time for the technologies to be developed, but ultimately managed JIT compilation has the potential to exceed native compiled speeds. It'll be a fun journey getting there though.

          The initial order-of-magnitude jump in perf that JITs provided took us from the 2-5x overhead typical of managed runtimes down to some (1 + delta)x. That was driven by runtime type inference combined with a type-aware JIT compiler.

          I expect that there's another significant, but smaller perf jump that we haven't really plumbed out - mostly to be gained from dynamic _value_ inference that's sensitive to _transient_ meta-stability in values flowing through the program.

          Basically you can gather actual values flowing through code at runtime, look for patterns, and then inline / type-specialize those by deriving runtime types that are _tighter_ than the annotated types.
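
          A toy sketch of that idea in JS (hypothetical names; this is roughly what a value-specializing JIT might effectively emit, not any existing engine's output):

              function scaleGeneric(xs, k) { return xs.map(x => x * k); }

              // if profiling shows k is (transiently) almost always 2, emit a guarded fast path:
              function scaleSpecialized(xs, k) {
                if (k === 2) return xs.map(x => x + x);   // specialized on the observed value
                return scaleGeneric(xs, k);               // guard failed: fall back to the generic path
              }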

          I think there's a reasonable amount of juice left in combining those techniques with partial specialization and JIT compilation, and that should get us over the hump from "slightly slower than native" to "slightly faster than native".

          I get it's an outlier viewpoint though. Whenever I hear "managed jitcode will never be as fast as native", I interpret that as a friendly bet :)

          • rudedogg 8 hours ago ago

            > JIT compilation has the potential to exceed native compiled speeds

            The battlecry of Java developers riding their tortoises.

            Don’t we have decades of real-world experience showing native code almost always performs better?

            For most things it doesn't matter, but it always rubs me the wrong way when people mention this about JIT, since it almost never works that way in the real world (you can look at web framework benchmarks as an easy example).

            • kannanvijayan 7 hours ago ago

              It's not that surprising to people who are old enough to have lived through the "reality" of "interpreted languages will never get within about 2x of compiled languages".

              The idea that an absurdly dynamic language like JS, where all objects are arbitrary property bags with prototypical dependency chains that are runtime mutable, would execute within 2x of raw native performance was treated as a matter-of-fact impossibility.

              Until it wasn't. And the technology reason it ended up happening was research that was done in the 80s.

              It's not surprising to me that it hasn't happened yet. This stuff is not easy to engineer and implement. Even the research isn't really there yet. Most of the modern dynamic language JIT ideas which came to the fore in the mid-2000s were directly adapting research work on Self from about two decades prior.

              Dynamic runtime optimization isn't too hot in research right now, and it never was to be honest. Most of the language theory folks tend to lean more in the type theory direction.

              The industry attention too has shifted away. Browsers were cutting edge a while back and there was a lot of investment in core research tech associated with that, but that's shifting more to the AI space now.

              Overall the market value prop and the landscape for it just doesn't quite exist yet. Hard things are hard.

              • DonHopkins 5 hours ago ago

                You nailed it -- the tech enabling JS to match native speed was Self research from the 80s, adapted two decades later. Let me fill in some specifics from people whose papers I highly recommend, and who I've asked questions of and had interesting discussions with!

                Vanessa Freudenberg [1], Craig Latta [2], Dave Ungar [3], Dan Ingalls, and Alan Kay had some great historical and fresh insights. Vanessa passed recently -- here's a thread where we discussed these exact issues:

                https://news.ycombinator.com/item?id=40917424

                Vanessa had this exactly right. I asked her what she thought of using WASM with its new GC support for her SqueakJS [1] Smalltalk VM.

                Everyone keeps asking why we don't just target WebAssembly instead of JavaScript. Vanessa's answer -- backed by real systems, not thought experiments -- was: why would you throw away the best dynamic runtime ever built?

                To understand why, you need to know where V8 came from -- and it's not where JavaScript came from.

                David Ungar and Randall B. Smith created Self [3] in 1986. Self was radical, but the radicalism was in service of simplicity: no classes, just objects with slots. Objects delegate to parent objects -- multiple parents, dynamically added and removed at runtime. That's it.

                The Self team -- Ungar, Craig Chambers, Urs Hoelzle, Lars Bak -- invented most of what makes dynamic languages fast: maps (hidden classes), polymorphic inline caches, adaptive optimization, dynamic deoptimization [4], on-stack replacement. Hoelzle's 1992 deoptimization paper blew my mind -- they delivered simplicity AND performance AND debugging.

                That team built Strongtalk [5] (high-performance Smalltalk), got acquired by Sun and built HotSpot (why Java got fast), then Lars Bak went to Google and built V8 [6] (why JavaScript got fast). Same playbook: hidden classes, inline caching, tiered compilation. Self's legacy is inside every browser engine.

                Brendan Eich claims JavaScript was inspired by Self. This is an exaggeration based on a deep misunderstanding that borders on insult. The whole point of Self was simplicity -- objects with slots, multiple parents, dynamic delegation, everything just another object.

                JavaScript took "prototypes" and made them harder than classes: __proto__ vs .prototype (two different things that sound the same), constructor functions you must call with "new" (forget it and "this" binds wrong -- silent corruption), only one constructor per prototype, single inheritance only. And of course == -- type coercion so broken you need a separate === operator to get actual equality. Brendan has a pattern of not understanding equality.
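
                For anyone who hasn't hit it, the __proto__ vs .prototype confusion in one small snippet (just an illustration, not from any of the cited papers):

                    function Point(x, y) { this.x = x; this.y = y; }   // constructor function
                    const p = new Point(1, 2);
                    Object.getPrototypeOf(p) === Point.prototype;      // true: instances delegate to Point.prototype
                    Point.prototype === Object.getPrototypeOf(Point);  // false: the function itself delegates to Function.prototype
                    // Forget "new" and "this" becomes globalThis (or undefined in strict mode) -- silent breakage.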

                The ES6 "class" syntax was basically an admission that the prototype model was too confusing for anyone to use correctly. They bolted classes back on top -- but it's just syntax sugar over the same broken constructor/prototype mess underneath. Twenty years to arrive back at what Smalltalk had in 1980, except worse.

                Self's simplicity was the point. JavaScript's prototype system is more complicated than classes, not less. It's prototype theater. The engines are brilliant -- Self's legacy. The language design fumbled the thing it claimed to borrow.

                Vanessa Freudenberg worked for over two decades on live, self-supporting systems [9]. She contributed to Squeak EToys, Scratch, and Lively. She was co-founder of Croquet Corp and principal engineer of the Teatime client/server architecture that makes Croquet's replicated computation work. She brought Alan Kay's vision of computing into browsers and multiplayer worlds.

                SqueakJS [7] was her masterpiece -- a bit-compatible Squeak/Smalltalk VM written entirely in JavaScript. Not a port, not a subset -- the real thing, running in your browser, with the image, the debugger, the inspector, live all the way down. It received the Dynamic Languages Symposium Most Notable Paper Award in 2024, ten years after publication [1].

                The genius of her approach was the garbage collection integration. It amazed me how she pulled a rabbit out of a hat -- representing Squeak objects as plain JavaScript objects and cooperating with the host GC instead of fighting it. Most VM implementations end up with two garbage collectors in a knife fight over the heap. She made them cooperate through a hybrid scheme that allowed Squeak object enumeration without a dedicated object table. No dueling collectors. Just leverage the machinery you've already paid for.

                But it wasn't just technical cleverness -- it was philosophy. She wrote:

                "I just love coding and debugging in a dynamic high-level language. The only thing we could potentially gain from WASM is speed, but we would lose a lot in readability, flexibility, and to be honest, fun."

                "I'd much rather make the SqueakJS JIT produce code that the JavaScript JIT can optimize well. That would potentially give us more speed than even WASM."

                Her guiding principle: do as little as necessary to leverage the enormous engineering achievements in modern JS runtimes [8]. Structure your generated code so the host JIT can optimize it. Don't fight the platform -- ride it.

                She was clear-eyed about WASM: yes, it helps for tight inner loops like BitBlt. But for the VM as a whole? You gain some speed and lose readability, flexibility, debuggability, and joy. Bad trade.

                This wasn't conservatism. It was confidence.

                Vanessa understood that JS-the-engine isn't the enemy -- it's the substrate. Work with it instead of against it, and you can go faster than "native" while keeping the system alive and humane. Keep the debugger working. Keep the image snapshotable. Keep programming joyful. Vanessa knew that, and proved it!

                [1] Freudenberg et al. SqueakJS paper (DLS 2014, Most Notable Paper Award 2024). https://freudenbergs.de/vanessa/publications/Freudenberg-201...

                [2] Craig Latta, Caffeine. Smalltalk livecoding in the browser. https://thiscontext.com/

                [3] Self programming language. Prototype-based OO with multiple inheritance. https://selflanguage.org/

                [4] Hoelzle, Chambers & Ungar. Debugging Optimized Code with Dynamic Deoptimization (1992). https://bibliography.selflanguage.org/dynamic-deoptimization...

                [5] Strongtalk. High-performance Smalltalk with optional types. http://strongtalk.org/

                [6] Lars Bak. Architect of Self VM, Strongtalk, HotSpot, V8. https://en.wikipedia.org/wiki/Lars_Bak_(computer_programmer)

                [7] SqueakJS. Bit-compatible Squeak/Smalltalk VM in pure JavaScript. https://squeak.js.org/

                [8] SqueakJS JIT design notes. Leveraging the host JS JIT. https://squeak.js.org/docs/jit.md.html

                [9] Vanessa Freudenberg. Profile and contributions. https://conf.researchr.org/profile/vanessafreudenberg

            • anotherhue 8 hours ago ago

              Yeah I've heard this my whole career, and while it sounds great it's been long enough that we'd be able to list some major examples by now.

              What are the real world chances that a) one's compiled code benefits strongly from runtime data flow analysis AND b) no one did that analysis at the compilation stage?

              Some sort of crazy off-label use is the only situation I think qualifies, and that's not enough.

              • IggleSniggle 7 hours ago ago

                Compiled Lua vs LuaJIT is a major example imho, but maybe it's not especially pertinent given the looseness of the Lua language. I do think it demonstrates that having a tighter type system at runtime than at compile time (which can in turn yield real performance benefits) is a sound concept, however.

                • drysart 5 hours ago ago

                  The major Javascript engines already have the concept of a type system that applies at runtime. Their JITs will learn the 'shapes' of objects that commonly go through hot-path functions and will JIT against those with appropriate bailout paths to slower dynamic implementations in case a value with an unexpected 'shape' ends up being used instead.

                  There's a lot of lore you pick up with Javascript when you start getting into serious optimization with it; and one of the first things you learn in that area is to avoid changing the shapes of your objects because it invalidates JIT assumptions and results in your code running slower -- even though it's 100% valid Javascript.
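
                  A tiny illustration of the kind of shape change that bites you (engine-agnostic; the exact behavior is an implementation detail):

                      function lengthSq(p) { return p.x * p.x + p.y * p.y; }

                      lengthSq({ x: 1, y: 2 });   // shape {x, y}: the call site stays monomorphic
                      lengthSq({ x: 3, y: 4 });   // same shape, same fast JITted path

                      const p = { x: 5, y: 6 };
                      p.z = 7;                     // adding a property creates a new shape
                      lengthSq(p);                 // still 100% valid JS, but the JIT's assumption is now invalidated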

            • pjmlp 8 hours ago ago

              Only if it doesn't make use of dynamic linking or reflection, and is written to take advantage of value types.

              AOT compilers without PGO data usually tend to perform worse when those conditions aren't met.

              Which is why the best of both worlds is using JIT caches that survive execution runs.

          • xylophile 7 hours ago ago

            Any optimizations discovered at runtime by a JIT can also be applied to precompiled code. The precompiled code is then not spending runtime cycles looking for patterns, or only doing so in the minimally necessary way. So for projects which are maximally sensitive to performance, native will always be capable of outperforming JIT.

            It's then just a matter of how your team values runtime performance vs other considerations such as workflow, binary portability, etc. Virtually all projects have an acceptable range of these competing values, which is where JIT shines, in giving you almost all of the performance with much better dev economics.

            • kannanvijayan 6 hours ago ago

              I think you can capture that constraint as "anything that requires finely deterministic high performance is out of reach of JIT-compiled outputs".

              Obviously JITting means you'll have a compiler executing sometimes along with the program which implies a runtime by construction, and some notion of warmup to get to a steady state.

              Where I think there's probably untapped opportunity is in identifying these meta-stable situations in program execution. My expectation is that there are execution "modes" that cluster together more finely than static typing would allow you to infer. This would apply to runtimes like wasm too - where the modes of execution would be characterized by the actual clusters of numeric values flowing to different code locations and influencing different code-paths to pick different control flows.

              You're right that, on the balance of things, trying to, say, allocate registers at runtime will necessarily allow for less optimization scope than doing it prior.

              But, if you can be clever enough to identify, at runtime, preferred code-paths with higher resolution than what (generic) PGO allows (because now you can respond to temporal changes in those code-path profiles), then you can actually eliminate entire codepaths from the compiler's consideration. That tends to greatly affect the register pressure (for the better).

              It might be interesting just to profile some wasm executions of common programs, to see if there are transient clusterings of control-flow paths that manifest during execution. It'd be a fun exercise...

        • creata 13 hours ago ago

          Why? My only guess is that the instructions don't match x86 instructions well (way too few Wasm instructions) and the runtime doesn't have enough time to compile them to x86 instructions as well as, say, GCC could.

          • HPsquared 13 hours ago ago

            To be fair, x86 instructions don't match internal x86 processor architecture either.

            • 201984 9 hours ago ago

              How don't they? Most x86 instructions map to just one or two uops as you can see at https://uops.info

      • pjmlp 13 hours ago ago

        Yes there is: WebGPU compute shaders, or misusing WebGL fragment shaders.

    • lenkite 8 hours ago ago

      > It could be the continued lack of DOM bindings that have been considered a key missing piece for several years now, or maybe something else or more fundamental.

      No, it is NOT "something else or more fundamental" - it is most certainly the lack of proper, performant access to the DOM without having to use crazy, slow hacks. Do that and frontend web-apps will throw JS into the gutter within a decade.

      • codelikeawolf 6 hours ago ago

        > Do that and frontend web-apps will throw JS into the gutter within a decade.

        Why though? What's wrong with JS? I feel like it's gotten a lot better over the years. I don't really understand all the hate.

        • homarp 4 hours ago ago

          It is not hate. It is the same reason people like Node on the backend: one language to do everything.

          Wasm with 'fast' DOM manipulation opens the door to every language compiling to wasm to be used to build a web app that renders HTML.

          • codelikeawolf 4 hours ago ago

            I don't mean to split hairs here, but considering the wording of "throw something in the gutter", I would argue that "hate" isn't really too far off the mark.

            > Wasm with 'fast' DOM manipulation opens the door to every language compiling to wasm to be used to build a web app that renders HTML.

            This was never the goal of Wasm. To quote this article [1]:

            > What should be relevant for working software developers is not, "Can I write pure Wasm and have direct access to the DOM while avoiding touching any JavaScript ever?" Instead, the question should be, "Can I build my C#/Go/Python library/app into my website so it runs with good performance?"

            Swap out "pure Wasm" with <your programming language> and the point still stands. If you really want to use one language to do everything, I'm pretty sure just about every popular programming language has a way of transpiling to JS.

            [1] https://queue.acm.org/detail.cfm?id=3746174

        • tpm 6 hours ago ago

          > What's wrong with JS?

          Let's not go into that for the millionth time and instead perhaps ask yourself why TS is wildly successful and why, even before that, everyone was trying to use anything-but-JS.

          • codelikeawolf 5 hours ago ago

            > Let's not go into that for the millionth time

            Ok, that's fair. My goal with this question wasn't to open a can of worms. But whenever I see a strong averse reaction to JS, I assume that the person hasn't tried using _modern_ JS.

            > why is TS wildly successful

            From my perspective, it stops me from making stupid mistakes, improves autocomplete, and adds more explicitness to the code, which is incredibly beneficial for large teams and big projects. But I don't think that answers my original question, because if you strip away the types, it's JS.

            > even before that everyone was trying to use anything-but-js

            Because JS used to suck a lot more, but it sucks a lot less now.

            • homarp 4 hours ago ago

              > [...] sucks less

              so does c, zig, c++, go, rust, python, ruby, php, ada,...

              • codelikeawolf 4 hours ago ago

                I'm not sure if this is meant to be snarky or if you're saying that the languages you listed have improved over time. If you're being snarky, you've proven my point by saying several random programming languages are better than JS while providing zero justification.

                • homarp 7 minutes ago ago

                  It's a complement to my other answer to you (about your question on why people would not want to learn/use JS and would prefer WASM if there were fast DOM access: because not everyone wants to be multilingual). I was listing a few languages that people are comfortable with and would rather use through WASM than learn idiomatic JS/TS (it's easy to learn the syntax; it takes practice to learn the idiomatic way).

                  And yes, I did mean that the languages I listed have gotten better, just like JS/TS.

                  As for why not compile/transpile to JS: it's my impression that WASM was born out of that (compiling to a subset of JS, asm.js) and is an evolution of compiling to JS.

        • SR2Z 5 hours ago ago

          Then why not allow WASM to access the DOM?

          • codelikeawolf 5 hours ago ago

            Wasm is essentially a CPU in the browser. It's very barebones in terms of its capabilities. The DOM API is pretty beefy, so adding DOM support to Wasm would be a massive undertaking. So why add all that complexity when you already have a perfectly capable mechanism for interacting with the DOM?
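
            That mechanism is just a small JS import layer. A rough sketch of the usual pattern (hypothetical import/export names like set_text and run; wasmBytes assumed already fetched), where the wasm module does the computing and the JS shim does the DOM writes:

                let memory;                                   // filled in after instantiation
                const imports = {
                  env: {
                    // the wasm module calls this when it wants to update the page
                    set_text: (ptr, len) => {
                      const bytes = new Uint8Array(memory.buffer, ptr, len);
                      document.querySelector("#out").textContent = new TextDecoder().decode(bytes);
                    },
                  },
                };
                const { instance } = await WebAssembly.instantiate(wasmBytes, imports);
                memory = instance.exports.memory;
                instance.exports.run();                       // wasm computes, JS touches the DOM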

            • bloppe 2 hours ago ago

              That "perfectly capable mechanism" is one-off JS glue code, which is so cumbersome that approximately nobody actually uses it even though it's been an option for at least 6 years. It would be silly to mistake that for a satisfactory solution.

              From my (outsider) perspective, I think the main roadblock atm is standardizing the component model, which would open the door to WIT translations for all the web APIs, which would then allow browsers to implement support for those worlds directly in their engines, perhaps with some JS polyfill during the transition. Some people really don't like how slowly component model standardization has progressed, hence all the various glue solutions, but the component model is basically just the best glue solution, and it's important to get it right for all the various languages and environments they want to support.

          • drysart 5 hours ago ago

            There's not some conspiracy that's stopped it from happening. Nobody, anywhere, has ever said "DOM access from WASM isn't allowed". It's not a matter of 'allow', it's a matter of capability.

            There's a lot of prerequisites for DOM access from WASM that need to be built first before there can be usable DOM access from within WASM, and those are steadily being built and added to the WASM specification. Things like value references, typed object support, and GC support.

      • ggiigg 6 hours ago ago

        They will not. E.g. WordPress and Django are 20 years old and still very popular. People don't just jump to hype because Hacker News does.

    • WorldMaker 7 hours ago ago

      > Things may be different with other frameworks like Blazor which I've not tried, but I just find myself wanting to write JS instead.

      Blazor WASM probably is among the best approaches to what is possible with WASM today, for better and worse. C# is a great language to write domain code in. A lot of companies like C# for their backends so you get same-language sharing between backend and frontend. The Razor syntax is among the better somewhat type-safe template languages in the wild, with reasonably good IDE support. C# was designed with FFI in mind (as compared to Java and some other languages) so JS imports and exports fit reasonably well in C#; the boundaries aren't too hairy.

      That said, C# by itself isn't always that big of a leap from Typescript. C# has better pattern matching today, but overall the languages feel like step-brothers, and in general the overhead of shipping an entire .NET CLR, most of the BCL, and your C# code as WebAssembly is a lot more than just writing things more vanilla in Typescript. You can also push C# more functional with libraries like LanguageExt (though you also fight the reasons to pick C# by doing so, as so many engineers don't think LanguageExt feels enough like C# to justify using C#).

      I'm curious to try Bolero [0] as F# would be a more interesting jump/reason for WASM, but I don't think I could sell it to engineering teams at my day job. (Especially as it can't use Razor syntax, because Razor is pretty deeply tied to C# syntax, and has its own very different template languages.)

      With WASM not having easy direct access to the DOM, Blazor's renderer is basically what you would expect it to be: it tosses simple objects over to a tiny Virtual DOM renderer on the JS side. It has most of the advantages and disadvantages of just using something like React or Preact directly, but obviously a smaller body of existing performance optimizations. Blazor's Virtual DOM has relatively great performance given the WASM to JS and back data management and overhead concerns, but it's still not going to out-compete hand written Vanilla JS any time soon.

      [0] https://fsbolero.io/

      • Shalomboy 7 hours ago ago

        I found Blazor WASM to be extremely helpful if you have to start from the opposite side of the spectrum. I was working in a self-proclaimed gov agency "Microsoft Shop" whose head of development was adamantly opposed to any sort of JS-driven web app development, but kept accepting requests for apps that fit perfectly into the SPA model. .NET 6 released a few months after I started, and with it came a huge amount of progress with Blazor WASM. I had plenty of experience with Vue and Typescript, so Blazor WASM and C# mapped really easily to my existing model of how to build. That similarity also made it easy to onboard new grads who had experience in web dev but weren't familiar with C#. After enough evangelizing, we built a critical mass of projects leveraging Blazor WASM and convinced leadership to reconsider its position on Typescript. I can't say enough nice things about the work Steve Sanderson has done to bring Blazor to the public.

    • eek2121 13 minutes ago ago

      It won't ever replace anything, because most folks don't understand the tech. I've no other way to explain this except to present this question: which is more popular, ASM, or literally any other higher level language (C, C++, etc.)?

      Compilers, languages, and frameworks were built for ease of use for the end developer, specifically so that any kind of ASM could be avoided. Web technologies/frameworks, along with operating system APIs, etc., were a FURTHER level of abstraction. WASM has its place, just the same as ASM has its place. Trying to replace React with x86 ASM sounds foolish, does it not? The same goes for WASM. Why?

      WASM is designed for situations where performant, low latency compute is needed, along with low level control, etc. Even IF they integrated DOM, very few would use it. Most of today's developers don't even know ASM for any platform, and they aren't about to learn it. They want to be productive, not rewrite basic stuff.

      I mean shoot, as much as I dislike the AI bubble (AI/LLMs are great, corporate America is the issue), it is SHOWING what people want, which most of us already knew: we want to automate away the boring stuff and focus on the hard stuff.

      • cush 7 minutes ago ago

        > WASM is designed for situations where performant, low latency compute is needed

        I don't get this argument at all. When is performance not needed? Every website can benefit from being faster than it currently is.

    • stkdump 14 hours ago ago

      I never thought building entire web apps in Wasm would be a promising approach. You don't just have to make it possible to interact with the DOM. You also have to have the right high-level language to do this kind of DOM interaction and application logic. JS isn't bad for that purpose, and it would probably take a lot to find something much better (a language that compiles to WASM, rather than to JS the way TS and Svelte do).

      The only real avenue for JS-free web applications would be to completely abandon the browser rendering path and have everything render into a canvas. There are experiments to use UI toolkits designed for the desktop. But even that I see as more of a niche solution, unlikely to become very widely used. HTML/CSS/JS have become the lingua franca of UI development and they are taking over desktop applications as well. Why should that trend reverse?

      • yencabulator 2 hours ago ago

        > You also have to have the right high level language to do this kind of DOM interaction and application logic.

        That just means you personally like JS. In my opinion many languages are better than it.

      • lynx97 12 hours ago ago

        > completely abandon the browser rendering path and have everything render into a canvas

        Yeah, go ahead and trash the little bit of accessibility we still have. <canvas> by itself already asks webdevs to shit on people with visual disabilities. But getting rid of the DOM (for vague reasons) would really nail the coffin of these pesky blind users. After all, why should they be able to use anything on the internet?

        This, along with AI making webdevs consider obfuscating things for scraping reasons, and Microsoft Recall making devs play with the idea of obfuscating OS-level access to their (privacy-sensitive) apps, which in essence would also trash accessibility, are the new nightmares that will haunt me for the next few years.

        • tcoff91 8 hours ago ago

          Unfortunately this is how Flutter web apps work.

      • codeflo 13 hours ago ago

        Maybe that's not the dominant mindset anymore, but I for one would love to use a language that's actually built for functional/reactive programming instead of inventing half-baked JavaScript dialects for that purpose. Elm was a language in that spirit, but it never felt complete.

        • lynx97 12 hours ago ago

          You can probably build something in PureScript.

          • ddellacosta 3 hours ago ago

            Makes me sad that PureScript doesn't have more adoption, not that I'm surprised. It's orders of magnitude better than Elm and even improves upon Haskell in some meaningful ways (row polymorphism).

          • pjmlp 11 hours ago ago

            Gone are the days it used to show up routinely on sites like HN; another proof of how language adoption cycles go.

    • afavour 9 hours ago ago

      > The only way it really fell short is in the way that a lot of people were predicting that it would become a sort of total replacement for JS+HTML+CSS for building web apps.

      Agreed and I’m personally glad progress on that hasn’t moved quickly. My biggest fear with WASM is that even the simplest web site would end up needing to download a multi MB Python runtime just because the author didn’t want to use JS!

      The sad reality is that the slowness very often comes from the DOM, not from JavaScript. Don’t get me wrong, there could be improvements, e.g. VDOM diffing would be a cinch with tuples and records, but ultimately you have to interact with the DOM at some point.

    • catapart 9 hours ago ago

      Agreed. This article feels like someone asking "What happened to ffmpeg?"

      It's like...ah, yeah, I see how you might not hear about it, but uh... it's everywhere.

    • pmontra 10 hours ago ago

      About building web apps:

      > It could be the continued lack of DOM bindings that have been considered a key missing piece for several years now, or maybe something else or more fundamental.

      More fundamentally, every front-end developer uses more or less the same JS language (Typescript included) and every module is more or less interoperable. As WASM is a compilation target, every developer could be using a different language and different tools and libraries. One of them could have reached critical mass, but there is a huge incumbent (JS) that shadows everything else. So special-purpose parts of web apps can be written in one of those other languages, but there is still a JS front end between them and the user, and GUIs can be huge apps. It ends up looking like a technology targeted at optimizations.

      And for the backend, if one writes Rust or any other compiled language that can target WASM, why compile to WASM and not to native code?

      • CuriouslyC 10 hours ago ago

        Using WASM lets you bundle native stuff in NPM packages without cross compiling.

    • rob74 15 hours ago ago

      > The only way it really fell short is in the way that a lot of people were predicting that it would become a sort of total replacement for JS+HTML+CSS for building web apps.

      I for one hope that doesn't happen anytime soon. YouTube or Spotify could theoretically switch to Wasm drawing to a canvas right now (with a lot of development effort), but that would make the things that are currently possible thanks to the DOM (scraping, ad blockers etc.) harder or impossible.

      • gf000 15 hours ago ago

        > DOM (scraping, ad blockers etc.) harder or impossible.

        This is a cat-and-mouse game, and Facebook already does some ultra-shady stuff like rendering a word as a list of randomly ordered divs, one per character, and only using CSS to display it in a readable way.

        But it can't be made impossible; in the worst case we can always just capture the screen and use an AI to recognize ads, wasting a lot of energy. The same is true for cheating in video games and many forms of online integrity problems - I can just hire a good player to play in my place, and no technology could recognize that.

        • palata 14 hours ago ago

          > ultra-shady stuff like rendering a word as a list of randomly ordered divs for each character, and only using CSS to display in a readable way.

          I wonder how much the developers writing that are being paid to be complete assholes.

          • strangegecko 12 hours ago ago

            I can't speak for FB. But I know a local (non-US) real estate company which does crap like this (they also love to disable right click, detect when browser tools are open, and programmatically close the tab/page when that happens), and they're not paying much. I'm guessing it's double the minimum wage, which isn't high here.

          • pxc 9 hours ago ago

            Shouldn't this kind of thing be illegal as a matter of accessibility?

            • CyberDildonics 9 hours ago ago

              Can you link to the law you're talking about?

              • pxc 8 hours ago ago

                I'm not making a legal argument.

                If someone else would like to make one, though, I'd be happy to read it.

                • CyberDildonics an hour ago ago

                  > Shouldn't this kind of thing be illegal

                  > I'm not making a legal argument.

                  Why would someone else make a legal argument for you? You're the one saying it should be illegal.

                • panzi 5 hours ago ago

                  Since this is about something nobody wants to see (ads) my guess would be that it might be legal here.

          • ljm 13 hours ago ago

            Knowing what total comp is like for those companies, I'm sure Facebook more than exceeded the price one might put on ethics.

            I've personally resigned from positions for less and it hasn't cost me much comfort in life (maybe some career progression, but meh).

        • smallnix 14 hours ago ago

          > no technology could recognize that.

          Perhaps require monitoring of the arm muscle electrical signals, build a profile, match the readings to the game actions and check that the profile matches the advertised player

        • jfengel 7 hours ago ago

          I think that's hilarious. Can you point me to some documentation on that? Such as why they'd do it?

          (To make scraping and automation harder, perhaps?)

        • DonHopkins 9 hours ago ago

          >I can just hire a good player who would play in my place, and no technology could recognize that.

          Just like Elon does.

          Elon Musk stands accused of pretending to be good at video games. The irony is delicious:

          https://www.theguardian.com/games/2025/jan/20/elon-musk-stan...

          >Musk desperately wants to appropriate gamer credibility, but he may be faking it – and doing exactly what toxic nerds have been accusing women of doing for decades

      • ivell 15 hours ago ago

        I suspect this will be coming soon. For ad-driven companies, having an opaque deployment which would prevent ad-blockers would be ideal.

        However ads still need to be delivered over the net so there is still some way to block them (without resorting to router/firewall level blocking).

        • creata 14 hours ago ago

          They'd be raked over the coals for the lack of accessibility, I hope.

          • javcasas 13 hours ago ago

            That's like the mafia being raked over the coals for not having accessibility ramps for wheelchairs in their clandestine distilleries.

            Not gonna happen.

            • lynx97 12 hours ago ago

              You are probably right. What will happen is that ad-blocker people will indirectly kill accessibility. That would make a lot of sense in this world. It's a recurring pattern. Spam killed a part of accessibility indirectly via CAPTCHA. And the "it is my god-given right to block ads of free services I use" people will indirectly kill accessibility for good, now that we have <canvas>.

      • p_l 14 hours ago ago

        Multiple web apps already work by rendering everything to canvas - for example Google Docs and O365.

      • lynx97 12 hours ago ago

        Add accessibility to that list. Morally speaking, it is likely more important than scraping and ad-blockers.

        • willtemperley 11 hours ago ago

          Yes, however I reject the idea that a full WASM app would be strictly worse for accessibility in the long term. Native UI frameworks do have accessibility APIs and browsers could implement something similar.

          I see it as an opportunity to do better.

          • lynx97 8 hours ago ago

            So far, huge rewrites/rearchitecturings have typically worsened the end-user experience from an a11y POV. I even know people personally who have lost their job of 20 years because their employer decided to redo their IT, "accidentally" leaving the disabled employee behind. It is naive to think a big rewrite will NOT make things much worse for years.

      • m00dy 14 hours ago ago

        >>possible thanks to the DOM (scraping, ad blockers etc.) harder or impossible.

        lol, you can scrape anything visible on your screen.

    • gritzko 15 hours ago ago

      That is like Linux on a laptop. When you buy a laptop, you pay for Windows anyway.

      • graemep 14 hours ago ago

        Not necessarily. I bought a laptop with Linux preinstalled, and it's the best thing to do if you buy one with the intent of using Linux on it.

        • zdragnar 6 hours ago ago

          That's what I thought when I bought a Dell XPS. Probably the worst laptop I've owned.

          There's lots of good options that come with windows preinstalled.

  • thecupisblue 13 hours ago ago

    As someone who has worked actively with WebAssembly for the last few years, and is about to drop a WASM-based framework, here's what happened:

    - The ecosystem evolved fast, then slow. This caused adoption problems, especially for things such as WASI and Component model, as a lot of folks did it their own way/using 3rd party, which now meant they had to rewrite to this new thing that still isn't fully properly supported everywhere.

    - The way it's "developed" means a lot of things are distributed, unsynced and have different support levels based on the engine you're using. This causes confusion among developers, especially since you have to go from reading an article, to reading a spec, to reading a github issue, then you're 3 repositories deep reading random rust code at 2 AM trying to figure out if you can rely on this stranger's fork just to try something out that should have been dead simple.

    - Both of these combined can lead to even greater confusion for our LLMs, as they are trained on varied data which is by now stale, so they can often misunderstand things or look for things that aren't there anymore, just like us humans would.

    - And now let's focus on the biggest and most important one IMO: Javascript/Typescript support. That is the holy grail for any technology that wants to be a widely adopted intermediary. While it is possible, you are layering hacks on hacks and begging that the next user won't break it all. Until my users can bring whatever they're using with them, the transition isn't really worth it, and writing my own wiring for every possible combination/need is quite unnecessary. We got a step closer with Web Containers, but by that time a lot of folks already moved onto Bun.

    • matt_kantor 9 hours ago ago

      I don't think I understand your last point. Could you elaborate? What does "Javascript/Typescript support" mean to you (i.e. what specific features/capabilities are missing from the current engines)?

      • thecupisblue 8 hours ago ago

        I mean compiling JS/TS to WASM, or running a JS runtime like Bun inside WASM.

        • tennex 7 hours ago ago

          JS/TS interop, sure--but what are the use cases for running a JS runtime inside WASM?

          • potsandpans 7 hours ago ago

            Iirc, Shopify uses this to execute storefront code on the edge in a sandboxed environment.

            People who want to write JavaScript for backend store functionality can, and then Shopify deploys that code into containers with small IO semantics.

  • benrutter 16 hours ago ago

    I think one of the big things with WebAssembly is that its sheer potential is huge.

    In theory, WASM could be a single cross-platform compile target, which is kind of a CS holy grail. It's easy to let your mind spin up a world where everything is WebAssembly: a desktop environment, a server, day-to-day software applications.

    After I've imagined all of that, being told WebAssembly helps some parts of Figma run faster feels like a big letdown. Of course that isn't fair; almost nothing could live up to the expectations we have for WASM.

    Its development is also by committee, which is maybe the best option for our current landscape, but isn't famous for getting things going quickly.

    • torginus 15 hours ago ago

      Like the gifted kid who lives with his mom at 30, at some point in time, we have to stop talking about potential and start talking about results.

      Theory and practice don't match in this case, and many people have remarked that the companies that sit on the WHATWG board have a vested interest in making sure their lucrative app stores are not threatened by a platform that can run any app just as well.

      I remember when Native Client came to the scene and allowed people to compile complex native apps to the web that run at like 95% of native speed. While it was in many ways an inelegant solution, it worked better than WebAssembly does today.

      Another one of WebAssembly's killer features was supposed to be native web integration. The way JS engines work is that you have an IDL that describes the interface of JS classes, which is then used to generate code that binds to the underlying C++ implementations. You could probably bind those to WebAssembly just as well.

      I don't think cross-platform as in cross-CPU-arch matters that much; if you meant 'runs on everything' then I concur.

      Also the dirty secret of WebAssembly is that it's not really faster than JS.

      • PunchyHamster 15 hours ago ago

        I'm starting to think that's why there is still no DOM access for WASM and we have to ping-pong over JS.

        > Also the dirty secret of WebAssembly is that it's not really faster than JS.

        That is almost purely due to the amount of work it took to make that shitty language run fast. A naive WebAssembly implementation will beat interpreted JS many times over, but modern JIT implementations are a wonder.

        • torginus 12 hours ago ago

          For WASM, the performance target isn't Javascript - it's native code and NaCl. The fact that WASM has had tremendously more time and effort invested into it and still underperforms NaCl (and JS) signals to me that this is not the right approach.

          The WASM runtime went from something that ingests pseudo-assembly, validates it, and turns it into machine code, to a full-fledged multi-tiered JIT like what JS has, with crazy engineering complexity per browser and similar startup performance woes (alleviating the load-time issues of huge applications was one of the major goals of NaCl/Wasm).

        • gritzko 15 hours ago ago

          Yep. These things have been solved by massive investments. The question is, can WASM as a language (not an implementation) do something JavaScript can't?

          • CryZe 14 hours ago ago

            Wasm can do 64-bit integers, SIMD and statically typed GC classes.

            • davidmurdoch 8 hours ago ago

              JS could have had support for SIMD and 64-bit ints by now, and progress was actually being made (mostly just through the asm.js experiments), but it was deprioritized specifically to work on WASM.

            • DonHopkins 9 hours ago ago

              WASM can even do 32-bit integers, which JavaScript can't (it uses floats instead).

              • WorldMaker 7 hours ago ago

                JS has had typed arrays like Int32Array for a while. The JS engines will try to optimize math done into them/with them as integer math rather than float math, but yeah, you still can't use an integer directly outside of array math.
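
                Roughly the asm.js-era tricks in question (a small sketch; whether the engine actually keeps things as int32 is a heuristic, not a guarantee):

                    const ints = new Int32Array(2);
                    ints[0] = 2 ** 31;               // stored as a real int32: wraps to -2147483648
                    const sum = (ints[0] + 1) | 0;   // "| 0" hints that the result should stay an int32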

          • moralestapia 15 hours ago ago

            The answer to that is no. But innovating at the language level was never a goal for WASM; quite the opposite: keep it as simple as possible so it can be compiled and run anywhere.

        • WorldMaker 7 hours ago ago

          > I start to think that's why there is still no DOM for the WASM and we have to pingping over JS

          I don't think you need conspiracy theories for that. DOM involves complex JS objects, and you have to have an entirely working multi-language garbage collection model if you are expecting other languages to work with DOM objects; otherwise you run the risk of leaking some of the most expensive objects in a browser.

          The path to that is long and slow, especially since the various committees' general interest is in not requiring non-JS languages to entirely conform to JS GC (either implementing themselves on top of JS GC alone or having to implement their own complex subset of JS GC to interop correctly), so the focus has been on very low-level tools over complex GC patterns. The first basics have only just been standardized. The next step (sharing strings) seems close but probably still has months to go. The steps after that (sharing simple structs) seem pretty complex, with a lot of heated debate still to happen, and DOM objects are a further complexity step past that (as they involve complex reference cycles and other such things).

        • moralestapia 15 hours ago ago

          I don't dislike JS but the reason why it's fast is because billions were poured into making that happen.

          V8 is a modern engineering marvel.

          • rob74 15 hours ago ago

            Yeah, and like many engineering marvels, it was instantly misused for purposes its creators didn't intend and became a scourge on humanity (looking at you NodeJS & co).

          • Findecanor 12 hours ago ago

            And WASM hasn't been around for as long, so WASM implementations are not as mature.

            There is no reason why WASM couldn't be as fast as, or faster than, JS, especially now with WASM 3.0. Before, every program in a managed language had to ship with its own GC and exception-handling framework in WASM, which was probably crippled by size constraints.

            • pjmlp 11 hours ago ago

              They still need to, because WASM GC is an MVP that only covers a subset.

              Any language with advanced GC algorithms, or interior pointers, will run poorly with current WASM GC.

              It works as long as their GC model overlaps with JS GC requirements.

              • WorldMaker 7 hours ago ago

                It's also currently only a subset of JS GC requirements at that. It's the bare minimum to share references between JS and WASM to typed arrays like Int32Array. For now it's like basic OS-level memory page sharing only.

                Some of the real GC tests will be strings support (because immutability/interning) and higher-level composite objects, which is all still in various draft/proposal states.

                • pjmlp 7 hours ago ago

                  Oh, even worse than I thought.

      • CuriouslyC 10 hours ago ago

        WASM is way way way faster if you need explicit memory management. It's only 100% a wash if you're doing DOM stuff.

        • torginus 9 hours ago ago

          Not necessarily. I found a benchmark that you can run yourself, that's doing pretty much just raw compute (JS vs C/C++ in Wasm):

          https://takahirox.github.io/WebAssembly-benchmark/

          JS is not always faster, but in a good chunk of cases it is.

          • azakai 7 hours ago ago

            It is easy to make benchmarks where JS is faster. JS inlines at runtime, while wasm typically does not, so if you have code where the wasm toolchain makes a poor inlining decision at compile time, then JS can easily win.

            But that is really only common in small computational kernels. If you take a large, complex application like Adobe Photoshop or a Unity game, wasm will be far closer to native speed, because its compilation and optimization approach is much closer to native builds (types known ahead of time, no heavy dependency on tiering and recompilation, etc.).

          • CuriouslyC 8 hours ago ago

            Things might be getting better for JS, but just looking over those briefly, they don't look memory constrained, which is the main place where I've seen significant speedups. Also, simpler code makes JIT optimizations look better, but that level of performance won't be consistent in real world code.

            • torginus 8 hours ago ago

              You might be right in your use case, but still, JS is not the benchmark to beat. Native Client was already almost as fast as native code, started up almost instantly, and didn't need a decade of engineering and who knows how much money invested into it.

              WebAssembly, which was supposed to replace it, needs to be at least as good; that was the promise. We're a decade in, and Wasm is still nowhere near that, while it has accumulated an insane amount of engineering complexity in its compilers, and its ability to run native apps without tons of constraints and modifications is still meh, as is the performance.

              • azakai 7 hours ago ago

                To be fair, Native Client achieved much of its speed by reusing LLVM and the decades of work put into that excellent codebase.

                Also, Native Client started up so fast because it shipped native binaries, which was not portable. To fix that, Portable Native Client shipped a bytecode, like wasm, which meant slower startup times - in fact, the last version of PNaCl had a fast baseline compiler to help there, just like wasm engines do today, so they are very similar.

                And, a key issue with Native Client is that it was designed for out-of-process sandboxing. That is fine for some things, but not when you need synchronous access to Web APIs, which many applications do (NaCl avoided this problem by adding an entirely new set of APIs to the web, PPAPI, which most vendors were unhappy about). Avoiding this problem was a major principle behind wasm's design, by making it able to coexist with JS code (even interleaving stack frames) on the main thread.

                • torginus 4 hours ago ago

                  I think you're referring to PNaCl (as opposed to Native Client), which did away with the arch-specific assembly, and I think shipped the code as LLVM IR. These are two completely separate things; I am referring to the original Native Client.

                  I don't see an issue with shipping uArch-specific assembly; nowadays only two architectures are really in heavy use, and I think managing that level of complexity is tenable, considering the monster the current Wasm implementation became, which is still lacking in key ways.

                  As for out-of-process sandboxing, I think for a lot of things it's fine - if you want to run a full-fat desktop app or game, you can cram it into an iframe, and the tab (renderer) process is isolated, so Chrome's approach was quite tenable from an IRL perspective.

                  But if seamless interaction with Web APIs is needed, that could be achieved as well, and I think quite similarly to how Wasm does it - you designate a 'slab' of native memory and make sure no pointer access goes outside by using base-relative addressing and masking the addresses.

                  For access to outside APIs, you permit jumps to validated entry points which can point to browser APIs. I also don't see why you couldn't interleave stack frames, by making a few safety and sanity checks, like making sure the asm code never accesses anything outside the current stack frame.

                  Personally I thought that WebAssembly was what its name suggested: an architecture-independent assembly language that was already heavily optimized, where only the register allocation passes and the machine instruction translation were missing; those sit at the end of the compiler pipeline and can be done fairly fast compared to a whole compile.

                  But it seems to me Wasm engines are more like LLVM, an entire compiler consuming IR, and doing fancy optimization for it - if we view it in this context, I think sticking to raw assembly would've been preferable.

                  • azakai 3 hours ago ago

                    Sorry, yes, I meant PNaCl.

                    > I don't see an issue with shipping uArch specific assembly, nowadays you only have 2 really in heavy use today,

                    That is true today, but it would prevent other architectures from getting a fair shot. Or, if another architecture exploded in popularity despite this, it would mean fragmentation.

                    This is why the Portable version of NaCl was the final iteration, and the only one even Google considered shippable, back then.

                    I agree the other stuff is fixable - APIs etc. It's really portability that was the sticking point. No browser vendor was willing to give that up.

          • AlienRobot 7 hours ago ago

            I would take these benchmarks with a pinch of salt. Within a single function, it's very easy to optimize JS because you know every way a single variable will be defined. When you have to call a function, the data type of the argument can be anything the caller passes to the function, which makes optimization far more complex.

            In practice, WASM codebases won't be simply running a single pure function in WASM from JS but instead will have several data structures being passed around from one WASM function to another, and that's going to be faster than doing the same in JS.

            By the way, if I remember correctly V8 can optimize function calls heuristically if every call always passes the same argument types, but because this is an implementation detail it's difficult to know what scenarios are actually optimized and which are not.
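
            A rough illustration of that heuristic (hidden classes and inline caches are V8 implementation details, so the exact behaviour isn't guaranteed):

                function area(rect) {
                  return rect.w * rect.h;
                }

                // This call site only ever sees one object shape, so the engine can keep it
                // monomorphic, cache the property lookups, and specialize the math to numbers.
                for (let i = 0; i < 1e6; i++) {
                  area({ w: 2, h: 3 });
                }

                // Mixing shapes or types at the same call site makes it polymorphic (or
                // megamorphic) and the specialized fast path is thrown away.
                area({ h: 3, w: 2 });     // same fields, different order: a different shape
                area({ w: "2", h: "3" }); // strings instead of numbers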

      • avadodin 15 hours ago ago

        As someone with no intersection with the web, it is disconcerting to attend the burial of wasm when she was still getting her first compiler backends.

      • z3t4 10 hours ago ago

        Also WebAssembly is meant to be a compiler target with the biggest advantage being that it's sandboxed. The problem is that the JS engines can do that too. Just like JS engines, WebAssembly can run outside browsers. I think in theory Wasm is better than JS in those areas, but not better enough.

      • throwaway314155 8 hours ago ago

        > Like the gifted kid who lives with his mom at 30, at some point in time, we have to stop talking about potential and start talking about results.

        This is an entirely unnecessary jab. There’s a whole generation dealing with stuff like this because of economic and other forces outside their control.

        • axus 5 hours ago ago

          WASM not taking over the world is probably also due to forces outside its control; I guess that's only relevant if money was being spent to accomplish that goal.

      • EGreg 15 hours ago ago

        Actually the great secret of wasm (that will piss off a lot of people on HN I am sure) is that it is deterministic and can be used to build decentralized smart contracts and Byzantine Fault Tolerant distributed systems :)

        Some joker who built Solana actually thought Berkeley Packet Filter language would be better than WASM for their runtime. But besides that dude, everyone is discovering how great WASM can be to run deterministic code right in people’s browsers!

        • torginus 12 hours ago ago

          I don't think you need WASM for that, I'm sure you can write a language that transpiles to JS and is still deterministic.

          • circuit10 12 hours ago ago

            But WASM already exists and has many languages that are able to compile to it, why reinvent the wheel?

          • undeveloper 12 hours ago ago

            JS isn't deterministic in its performance

          • EGreg 9 hours ago ago

            They tried their best: https://deterministic.js.org/

            No, WASM is deterministic, JS is fundamentally not. Your dislike of all things blockchain makes you say silly things.

    • shevy-java 15 hours ago ago

      Well - the problem is... the "in theory" means that nobody will bet on WASM if it is not really going to be useful. People use HTML, CSS, JavaScript - that has been shown to be very useful. WASM is not useless but how can people relate to it? It is like an alien stack for most people.

      • azakai 7 hours ago ago

        It is totally fine if most people don't relate to wasm - it's good for some things, but not most things. As another example, most web devs don't use the video or audio tag, I'd bet, and that's fine too.

        Media, and wasm, are really important when you need them, but usually you don't.

      • frez1 15 hours ago ago

        The way things usually gain traction is when a big tech company has success experimenting with it. It happened with Node way back and is happening with Rust now.

        The fact we haven't heard much about Wasm use is probably because it isn't as valuable as we think, or no one has played around with it enough yet to find out.

        • matt_kantor 9 hours ago ago

          > when a big tech company has success experimenting with it

          TFA has many examples of big tech companies using Wasm in production. It's not exhaustive either, e.g. the article doesn't mention:

          - Google using it as a backend for Flutter and to implement parts of Google Maps, Earth, Meet, Sheets, Keep, YouTube, etc

          - Microsoft using it in Copilot Studio

          - eBay using it in their mobile app

          - MongoDB using it for Compass

          - Amazon supporting it in EKS

          - 1Password using it in their browser extension

          - Unity having it as a build target

          (And this was just what I found with some quick web searches; I'm sure there are many other examples.)

          ---

          > The fact we haven't heard much about Wasm use is probably because it isn't as valuable as we think

          One of the conclusions of the article is that it's mostly used in ways that aren't very visible.

    • x3haloed 16 hours ago ago

      There’s no reason we shouldn’t be replacing our containers with WASI. Containers are absolutely miserable things that should just be VMs (in the WASM sense, not in the “run Linux in a virtual X86” sense)

      The tooling is just not there yet. Everyone is just stuck on supporting Docker still.

      • mike_hearn 13 hours ago ago

        There are a thousand reasons, which is why nobody is doing it. They're orthogonal. Problems WASM/WASI doesn't solve:

        - Building / moving file hierarchies around

        - Compatibility with software that expects Linux APIs like /proc

        - Port binding, DNS, service naming

        - CLI / API tooling for service management

        And about a gazillion other things. WASI, meanwhile, is just a very small subset of POSIX but with a bunch of stuff renamed so nothing works on it. It's not meaningfully portable in any way outside of UNIX so you might as well just write a real Linux app. WASI buys you nothing.

        WASM is heavily overfit to the browser use case. I think a lot of the dissipated excitement is due to people not appreciating how much that is true. The JVM is a much more general technology than WASM is, which is why it was able to move between such different use cases successfully (starting on smart TV boxes, then applets, then desktop apps, then servers + smart cards, then Android), whereas WASM never made it outside the browser in any meaningful way.

        WASM seems to exist mostly because Mozilla threw up over the original NaCL proposal (which IMO was quite elegant). They said it wasn't 'webby', a quality they never managed to define IMO. Before WASM Google also had a less well known proposal to formally extend the web with JVM bytecode as a first class citizen, which would have allowed fast DOM/JS bindings (Java has had an official DOM/JS bindings API for a long time due to the applet heritage). The bytecode wouldn't have had full access to the entire Java SE API like applets did, so the security surface area would have been much smaller and it'd have run inside the renderer sandbox like V8. But Mozilla rejected that too.

        So we have WASM. Ignoring the new GC extensions, it's basically just regular assembly language with masked memory access and some standardized ABI stuff, with the major downside that no CPU vendor uses it so it has to be JIT compiled at great expense. A strange animal, not truly excellent at anything except pleasing the technical aesthetic tastes of the Mozillians. But if you don't have to care about what Mozilla think it's hard to come up with justifications for using it.

        • Findecanor 11 hours ago ago

          > WASI, meanwhile, is just a very small subset of POSIX but with a bunch of stuff renamed so nothing works on it.

          WASI fixed well-known flaws in the POSIX API. That's not a bad thing.

          > the major downside that no CPU vendor uses it so it has to be JIT compiled at great expense.

          WASM was designed to be JIT-compiled into its final form at the speed it is downloaded by a web browser. JS JIT-compilers in modern web browsers are much more complex, often having multiple compilers in tiers so that they spend time optimising only the hottest functions.

          Outside web browsers, I'd think there are few use-cases where WASM couldn't be AOT-compiled.

        • azakai 7 hours ago ago

          > WASM seems to exist mostly because Mozilla threw up over the original NaCL proposal (which IMO was quite elegant). They said it wasn't 'webby', a quality they never managed to define IMO.

          No, Mozilla's concerns at the time were very concrete and clear:

          - NaCl was not portable - it shipped native binaries for each architecture.

          - PNaCl (Portable Native Client, which came later) fixed that, but it only ran out of process, making it depend on PPAPI, an entirely new set of APIs for browsers to implement.

          Wasm was designed to be PNaCl - a portable bytecode designed to be efficiently compiled - but able to run in-process, calling existing Web APIs through JS.
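
          For illustration, this is roughly what that looks like from the JS side (assuming a JS module, for the top-level await); the file name, import name, and signature below are made up:

              // JS functions handed to the module as imports; the module can call them
              // synchronously, on the same thread, to reach Web APIs on its behalf.
              const importObject = {
                env: {
                  now_ms: () => performance.now(), // hypothetical import
                },
              };
              const { instance } = await WebAssembly.instantiateStreaming(
                fetch('module.wasm'), // illustrative file name
                importObject
              );
              // Calls flow both ways on the same stack: JS calls instance.exports.*,
              // and the module calls env.now_ms mid-computation without leaving the thread.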

          • mike_hearn 6 hours ago ago

            I don't think their concerns were concrete or clear. What does "portable" mean? There are computers out there that can't support the existing feature set of HTML5, e.g. because they lack a GPU. But WebGPU and WebGL are a part of the web's feature set. There's lots of stuff like that in the web platform. It's easy to write HTML that is nearly useless on mobile devices, it's actually the default state. You have to do extra work to ensure a web page is portable even just with basic HTML to mobile. So we can't truly say the web is always "portable" to every imaginable device.

            And was NPAPI not a part of the web, and a key part of its early success? Was ActiveX not a part of the web? I think they both were.

            So the idea of portability is not and never has been a requirement for something to be "the web". There have been non-portable web pages for the entire history of the web. The sky didn't fall.

            The idea that everything must target an abstract machine whether the authors want that or not is clearly key to Mozilla's idea of "webbyness", but there's no historical precedent for this, which is why NaCL didn't insist on it.

            • azakai 6 hours ago ago

              > What does "portable" mean?

              In the context of the web, portability means that you can, ideally at least, use any browser on any platform to access any website. Of course that isn't always possible, as you say. But adding a big new restriction, "these websites only run on x86" was very unpopular in the web ecosystem - we should at least aim to increase portability, not reduce it.

              > And was NPAPI not a part of the web, and a key part of its early success? Was ActiveX not a part of the web? I think they both were.

              Historically, yes, and Flash as well. But the web ecosystem moved away from those things for a reason. They brought not only portability issues but also security risks.

              • mike_hearn 5 hours ago ago

                Why should we aim to increase portability? There's a lot of unstated ideological assumptions underlying that goal, which not everyone shares. Large parts of the industry don't agree with the goal of portability or even explicitly reject it, which is one reason why so much software isn't distributed as web apps.

                Security is similar. It sounds good, but is always in tension with other goals. In reality the web doesn't have a goal of ever increasing security. If it was, then they'd take features out, not keep adding new stuff. WebGPU expands the attack surface dramatically despite all the work done on Dawn and other sandboxing tech. It's optional, hardly any web pages need it. Security isn't the primary goal of the web, so it gets added anyway.

                This is what I mean by saying it was vague and unclear. Portability and security are abstract qualities. Demanding them means sacrificing other things, usually innovation and progress. But the sort of people who make portability a red line never discuss that side of the equation.

                • azakai 5 hours ago ago

                  > Why should we aim to increase portability? There's a lot of unstated ideological assumptions underlying that goal, which not everyone shares.

                  As far back as I can remember well (~20 years) it was an explicitly stated goal to keep the web open. "Open" including that no single vendor controls it, neither in terms of browser vendor nor CPU vendor nor OS vendor nor anything else.

                  You are right that there has been tension here: Flash was very useful, once, despite being single-vendor.

                  But the trend has been towards openness: Microsoft abandoned ActiveX and Silverlight, Google abandoned NaCl and PNaCl, Adobe abandoned Flash, etc.

                  • mike_hearn 5 hours ago ago

                    There are shades of the old GPL vs BSD debates here.

                    Portability and openness are opposing goals. A truly open system allows or even encourages anyone to extend it, including vendors, and including with vendor specific extensions. Maximizing the number of devices that can run something necessarily requires a strong central authority to choose and then impose a lowest common denominator: to prevent people adding their own extensions.

                    That's why the modern web is the most closed it's ever been. There are no plugin APIs. Browser extension APIs are the lowest power they've ever been in the web's history. The only way to meaningfully extend browsers is to build your own and then convince everyone to use it. And Google uses various techniques to ensure that whilst you can technically fork Chromium, in practice hardly anyone does. It's open source but not designed to actually be forked. Ask anyone who has tried.

                    So: the modern web is portable for some undocumented definition of portable because Google acts as that central authority (albeit is willing to compromise to keep Mozilla happy). The result is that all innovation happens elsewhere on more open platforms like Android or Linux. That's why exotic devices like VR headsets or AI servers run Android or Linux, not ChromeOS or WebOS.

        • creata 12 hours ago ago

          > with a bunch of stuff renamed

          And a capability system and a brand new IDL, although I'm not sure who the target audience is...

          > it's basically just regular assembly language

          This doesn't affect your point at all, but it's much closer to a high-level language than to regular assembly language, isn't it? Nonaddressable, automatically managed stack, mandatorily structured control flow, local variables instead of registers, etc.

          • mike_hearn 12 hours ago ago

            Some hardware in the past has had a hidden/cpu managed stack. Modern CPUs with features like CFG have mandatorily structured control flow. Using a stack machine instead of a register machine is indeed a key difference but the actual CPU is a register machine so that just means WASM has to be converted first, hence the JIT. Stack based assembly languages are still assembly languages.

      • HendrikHensen 14 hours ago ago

        It helps if you actually qualify statements such as "Containers are absolutely miserable things". I'm in a world where we're using containers extensively, and I don't experience any issues whatsoever about which one might think "WASI would be the solution to this".

      • torginus 15 hours ago ago

        Imo stuff like Flatpak has the right idea - provide a rich but controllable set of features, API/ABI compatibility, while providing zero overhead isolation (same as docker since it relies on the same APIs).

        I also rather like the idea of deploying programs rather than virtual machines.

        Docker's cardinal sin imo is that it was designed as a monetizable SaaS product, and suffers from the inner-platform effect, reinventing stuff (package management, lifecycle management, etc.) that didn't need to be reinvented.

      • creata 14 hours ago ago

        But what's the benefit of replacing containers with WASI?

        The performance would be worse, and it would be harder to integrate with everything else. It might be more secure, I guess.

      • IshKebab 15 hours ago ago

        Yeah the real answer is that all of this stuff is still a work in progress. Last I checked WASI doesn't have a concept of "current directory" for example, so porting software is not trivial.

        Also WASI is a way of running a single process. If your app needs to run subprocesses you'll need to do more work.

    • daef 15 hours ago ago

      I recommend you watch [0] if you haven't seen it yet, it describes the history of javascript, iirc until 2035.

      [0] https://www.destroyallsoftware.com/talks/the-birth-and-death...

      • afandian 12 hours ago ago

        Thanks! There's a video I've been looking for, for years. It's about web technologies and recursion ("I put a VM inside a VM"), and it's satire / comedy.

        It might be this one I'm thinking of, as it closely fits the bill. But something is telling me it's not, and that it was published earlier.

        Any ideas?

    • vbezhenar 16 hours ago ago

      You can use javascript as a single cross platform compile target. What's the difference?

      • yencabulator an hour ago ago

        WASM, and asm.js before it, roughly exist because Javascript is such a bad compile target.

      • lxgr 15 hours ago ago

        Javascript comes with mandatory garbage collection. I suppose you could compile any language to an allocation-free semantic subset of Javascript, but it's probably going to be even less pretty than transpiling to Javascript already is.

        • creata 14 hours ago ago

          > it's probably going to be even less pretty than transpiling to Javascript already is.

          I don't see how it'd be much different to compiling to JavaScript otherwise. Isn't it usually pretty clear where allocations are happening and how to avoid them?

          • lxgr 14 hours ago ago

            “Pretty clear” is good, “guaranteed by language specifications” is better.

            Why reverse-engineer each JS implementation if you can just target a non-GC runtime instead?

      • hnb2137 16 hours ago ago

        WASM allows you to run some parts of the application a bit faster. ;)

      • merlindru 16 hours ago ago

        WASM works with any language and can be much faster than javascript

        • vbezhenar 16 hours ago ago

          You can compile any language to JavaScript. jslinux compiled x86 machine code to JavaScript.

          So basically wasm is some optimisation. That's fine but it's not something groundbreaking.

          And if we remove the web from the platform list, there were many portable bytecodes: P-code from the Pascal era, JVM bytecode from the modern era, and plenty of others.

          • IshKebab 15 hours ago ago

            > some optimisation

            That's underselling it a bit IMO. There's a reason asm.js was abandoned.

            • creata 14 hours ago ago

              Wikipedia mentions that Wasm is faster to parse than asm.js, and I'm guessing Wasm might be smaller, but is there any other reason? I don't think there's any reason for asm.js to have resulted in slower execution than Wasm.

              • IshKebab 12 hours ago ago

                > I don't think there's any reason for asm.js to have resulted in slower execution than Wasm

                The perfect article: https://hacks.mozilla.org/2017/03/why-webassembly-is-faster-...

                Honestly the differences are less than I would have expected, but that article is also nearly a decade old so I would imagine WASM engines have improved a lot since then.

                Fundamentally I think asm.js was a fragile hack and WASM is a well-engineered solution.

                • gr4vityWall 11 hours ago ago

                  After reading it, I don't feel convinced about the runtime performance advantages of WASM over asm.js. The CPU features mentioned could be added to JS runtimes. Toolchain improvements could go both ways, and I expect asm.js would benefit from JIT improvements over the years.

                  I agree 100% with the startup time arguments made by the article, though. No way around it if you're going through the typical JS pipeline in the browser.

                  The argument for better load/store addressing on WASM is solid, and I expect this to have higher impact today than in 2017, due to the huge caches modern CPUs have. But it's hard to know without measuring it, and I don't know how hard it would be to isolate that in a benchmark.

                  Thank you for linking it. It was a fun read. I hope my post didn't sound adversarial to any arguments you made. I wonder what asm.js could have been if it was formally specified, extended and optimized for, rather than abandoned in favor of WASM.

                  • IshKebab an hour ago ago

                    Whatever it would have ended up like it would have been a big hack so I'm glad everyone agreed to go with a proper solution for once!

            • gf000 15 hours ago ago

              Both undersell and oversell. There are still cases where vanilla JS will be faster.

              And AFAIK asm.js is the precursor to WASM, like the early implementations just built on top of asm.js's primitives.

    • xnx 13 hours ago ago

      *sheer

      shear potential = likely to break apart

      • benrutter 12 hours ago ago

        Haha, shear potential seems like a great accidental pun. I'll have to find an excuse to use it deliberately over the next week.

    • lionkor 11 hours ago ago

      Java and JVM all over again

    • jiggawatts 14 hours ago ago

      > WASM could be a single cross platform compile target, which is kind of a CS holy grail.

      The JVM says "Hello!" from 1995.

      • benrutter 14 hours ago ago

        Hello back!

        The JVM is a great parallel example. Anyone listening to the hype in the early days based around what the JVM could be would surely be disappointed now. It isn't faster than C, it doesn't see use everywhere due to practical constraints, etc.

        But you'd be hard pushed to say the JVM is a total failure. It's used by lots of people all around the world and solves real problems, just not the ones we were hoping it would solve. I suspect the future of WASM looks something like that.

        • DonHopkins 9 hours ago ago

          Now JVM's sole purpose is to solve Larry Ellison's problems, so if you're not Larry Ellison and you don't have the same problems he does, then it's a total failure caging you, but a predatory trap serving him.

          None of the technical arguments for JVM matter any more. It's just bait to trick you into sticking your hand under the lawnmower and helping Larry Ellison solve his problems.

          • pjmlp 8 hours ago ago

            Except JetBrains, Red-Hat, SAP, Azul, Oracle, Google, Microsoft, PTC, Aicas, Cisco, Ricoh, microEJ, Bluejay,... are also part of the Java party.

            • jiggawatts 3 hours ago ago

              Microsoft largely cloned the Java Runtime to create the .NET Runtime and similarly cloned Java to create C#.

              The two are so similar that Java bytecode to .NET bytecode translators exist. With some, it is possible to take a class defined in Java, subclass it with C#, call it from Java, etc...

              • pjmlp 2 hours ago ago

                I was at an MSFT gold partner in 2001; you are skipping quite a few relevant steps on that timeline.

    • whywhywhywhy 14 hours ago ago

      > being told web assembly helps some parts of Figma run faster feels like a big let down.

      Not really when tools like Figma were not really possible before it

      • creata 14 hours ago ago

        What was preventing the development of Figma before Wasm?

        For developing brand new code, I don't think there's anything fundamentally impossible without Wasm, except SIMD.

        • azakai 7 hours ago ago

          Performance. JS can be as fast as wasm, but generally isn't on huge, complex applications. Wasm was designed for things like Unity games, Adobe Photoshop, and Figma - that is why they all use it. Benchmarks on such applications usually show a 2x speedup for wasm, and much faster startup (by avoiding JS tiering).

          Also, the ability to recompile existing code to wasm is often important. Unity or Photoshop could, in theory, write a new codebase for the Web, but recompiling their existing applications is much more appealing, and it also reuses all their existing performance work there.

      • pjmlp 8 hours ago ago

        Yet Figma-like tools do exist without Wasm.

  • pjmlp 13 hours ago ago

    Tooling is holding back WebAssembly.

    It is very hard to debug WebAssembly applications, depending on the source language, we are still on printf debugging kind of experience.

    Even the DWARF plugin for Chrome (Chrome only, nowhere else) hasn't been updated since 2023.

    Then there is the whole experience, again depending on the language, to produce a .wasm file, alongside the set of imports/exports for the functions, instead of a plain "-arch=wasm".

    GC support is now available, however it is a "yes, but": it doesn't support all kinds of GC requirements, so some ecosystems like .NET still need to ship their own.

    Finally we have WIT trying to be yet another go at COM/CORBA/gRPC.

    • valadaptive 12 hours ago ago

      I second this; the tooling is somehow still not there after 10 years.

      - The main toolchain for compiling existing C codebases to WebAssembly is Emscripten. It still hasn't escaped its tech-demo origins, and it's a rats' nest of compiler flags and janky polyfills. There are at least 3 half-finished implementations of everything. It doesn't follow semver, so every point release tends to have some breaking changes.

      - The "modern" toolchain, wasi-sdk, is much more barebones. It's getting to the point of being usable, but I can't use it myself because it ships a precompiled libc and libc++ that use `-O3`, whereas Emscripten recompiles and caches the sysroot and uses `-Oz` if I tell it to. This increases the code size, which is already quite large.

      - LLVM is still not very good at emitting optimized WebAssembly bytecode.

      - Engines are still not very good at compiling WebAssembly bytecode to optimized machine code.

      - Debug info, as you mentioned, is a total mess.

      - Rust's WebAssembly tooling is on life support. The rustwasm GitHub organization was "sunset" in mid-2025 after years of inactivity.

      - There is still no official way to import WebAssembly modules from JavaScript in a cross-platform manner, in the year of our lord 2026. If you're deploying to the browser and using Vite or raw ES modules, you can use `WebAssembly.instantiateStreaming(fetch(new URL('./foo.wasm', import.meta.url)))` and eat the top-level await. Vite recognizes the `new URL('...', import.meta.url)` pattern and will include the asset in the build output, but most other bundlers (e.g. Rollup and esbuild) do not. If you're on Node, you can't do this, because `fetch` does not work for local files. Most people just give up and embed the WebAssembly binary as a huge Base64 string, which increases the filesize by 33% and greatly reduces the compression ratio. (A sketch of the two loading paths is below, after this list.)

      - If you want multithreaded WebAssembly, you need to set the COOP/COEP headers in order to gain access to `SharedArrayBuffer`. GitHub Pages still doesn't let you do this, although it's the third-most-upvoted feature request. There's a janky workaround that installs a service worker. All bets are off on how that workaround interacts with PWAs.
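
      For reference, a sketch of the two loading paths from the module-import point above (foo.wasm is illustrative; the two halves belong in separate files, and the Node half assumes ES modules):

          // browser.js – works with Vite and with raw ES modules.
          const wasmUrl = new URL('./foo.wasm', import.meta.url);
          const { instance } = await WebAssembly.instantiateStreaming(fetch(wasmUrl));

          // node.js – fetch() can't read local files, so read the bytes directly.
          import { readFile } from 'node:fs/promises';
          const bytes = await readFile(new URL('./foo.wasm', import.meta.url));
          const { instance: nodeInstance } = await WebAssembly.instantiate(bytes);
          // (pass an import object as the second argument if the module has imports)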

      If the tooling situation had advanced past "tech demo" in the past 8 years since WebAssembly first shipped, a lot more people would be using it.

      • adrian17 12 hours ago ago

        > LLVM is still not very good at emitting optimized WebAssembly bytecode.

        Like I said in the other comment, I find it incredibly weird that wasm-opt can still squeeze like 10% better code (as in, both smaller binary and somehow faster code) on top of what LLVM does. And it hasn't changed much within the last 5 years.

        And in general, the tooling ecosystem is doing... weirdly. Rust is doing badly yeah, but for example there was also a long stretch of time (I think it's solved now?) when you couldn't pass a .wasm with bulk-memory or other extensions to webpack, as its builtin wasm parser (why was it parsing the binary anyway?) didn't recognize new opcodes.

    • embedding-shape 12 hours ago ago

      > Then there is the whole experience, again depending on the language, to produce a .wasm file, alongside the set of imports/exports for the functions, instead of a plain "-arch=wasm".

      Doesn't the "WASM Component Model" kind of solve this? I've been hacking on a WASM-app runner (in Rust) which basically loads tiny apps that are compiled into "Components", and it seems simple enough to me to use and produce those.

      • pjmlp 11 hours ago ago

        Not really; you only get a blessed experience by using the language most people in the WebAssembly ecosystem are using nowadays.

  • jillesvangurp 15 hours ago ago

    Many of the build tools Javascript people use are written in Rust now. Some of them can be made to run in browsers, via WASM. React, the de facto UI framework for Javascript, has a lot of WebAssembly components. A lot of the npm ecosystem has quietly brought in WebAssembly. And a lot of UI stuff gets packaged up as web components these days; some of that uses WASM as well.

    If you pulled the plug on WASM, a lot would stop working and it would heavily impact much of the JS frontend world.

    What hasn't caught on is modern UI frameworks that are native wasm. We have plenty of old ones that can be made to work via WASM but it's not the same thing. They are desktop UI toolkits running in a browser. The web is still stuck with CSS and DOM trees. And that's one of the areas where WASM is still a bit weak because it requires interfacing with the browser APIs via javascript. This is a fixable problem. But for now that's relatively slow and not very optimal.

    Solutions are coming, but that's not going to happen overnight. Web frontend teams being able to swap JavaScript out for something else is going to require more work. Mobile frontend developers cross-compiling to the web is becoming a thing, though: JetBrains' Compose Multiplatform has native Android/iOS support now, with a canvas-rendered web frontend currently in Beta.

    You can actually drive the DOM from WASM. There are some Rust frameworks. I've dabbled with using Kotlin's wasm support to talk to browser DOM APIs. It's not that hard. It's just that Rust is maybe not ideal (too low-level/hard) for frontend work and a lot of languages lack frameworks that target low-level browser APIs. That's going to take years to fix. But a lot compiles to wasm at this point. And you kind of have access to most of the browser APIs when you do. Even if there is a little performance penalty.

    • ChocolateGod 15 hours ago ago

      I think you're confusing CLI tools for React with web components.

    • marisen 15 hours ago ago

      > React, the defacto UI framework for Javascript has a lot of web assembly components.

      I'm pretty sure this is just plain false. Do you have an example?

      • gf000 15 hours ago ago

        They might mean build dependencies? Or I'm sure there are ready-built components in wasm, but they are most definitely third-party ones.

  • apignotti 15 hours ago ago

    We use WebAssembly aggressively at Leaning Technologies across our tools.

    WebAssembly makes it possible to:

    * Run x86 binaries in the browser via JIT-ting (https://webvm.io)

    * Run Java applications in the browser, including Minecraft (https://browsercraft.cheerpj.com)

    * Run node.js containers in the browser (https://browserpod.io)

    It's an incredibly powerful tool, but very much a power-user one. Expecting your average front-end logic to be compiled in WebAssembly does not make much sense.

    • doodlesdev 11 hours ago ago

      Any reason why browsercraft doesn't work in Firefox? Ended up opening it in Brave and had a lot of fun hahaha

      It's pretty impressive how far along CheerpJ is right now. I kinda wish this existed about five or ten years ago with this level of performance, maybe it would've allowed some things in the web platform to pan out differently.

      • apignotti 10 hours ago ago

        Firefox should work just fine, I tried it right now.

        Consider dropping in our Discord for further help: https://discord.leaningtech.com

        • doodlesdev 9 hours ago ago

          Tried it again in a private window and it worked indeed. The problem is probably caused by one of my extensions (not sure which, though, seems a bit unexpected to me). Thank you anyway, and sorry for bothering!

    • sfn42 15 hours ago ago

      > Expecting your average front-end logic to be compiled in WebAssembly does not make much sense.

      Why not? .NET Blazor and others already do that. In my eyes this was the whole hype of WASM. Replace JS. I don't give a crap about running node/java/whatever in the browser, why would I want that? I can run those outside the browser. I mean sure, if you have some use case for it that's fine and I'm glad WASM lets you do it, but I really don't see why most devs would care about that. We use the browser for browsing the web and displaying our websites.

      To me the browser is for displaying websites and I make websites but I loathe JS. So being able to make websites without JS is awesome.

      • gf000 14 hours ago ago

        Because people don't want to load 300MB for a simple website (and this is blocking the first render, not just loading in the background).

        Not every language is a good source for targeting WASM, in the sense that you don't want to bring a whole standard library, custom runtime etc with you.

        High-level languages may fare better if their GC is compatible with Wasm's GC model, though, as in that case the resulting binaries could be quite small. I believe Java-to-wasm binaries can be quite lean for that reason.

        In c#'s case, it's probably mostly blazor's implementation, but it's not a good fit in this form for every kind of website (but very nice for e.g. an internal admin site and the like)

        • sfn42 14 hours ago ago

          A modern blazor wasm app is nowhere near 300mb. There are techniques to reduce this size like tree shaking. There's no need to include lots of unused libraries.

          Modern Blazor can do server side rendering for SEO/crawlers and fast first load similar to next.js, and seamlessly transition to client side rendering or interactive server side rendering afterwards.

          Your info/opinion may be based on earlier iterations of Blazor.

        • michalsustr 12 hours ago ago

          300MB is nonsense, we are at 2MB compressed with https://minfx.ai

          • doodlesdev 9 hours ago ago

               > we are at 2MB compressed with https://minfx.ai
             
            That's still pretty bloated. That's enough size to fit an entire Android application a few years ago (before AndroidX) and simple Windows/Linux applications. I'll agree that it's justified if you're optimizing for runtime performance rather than first-load, which seems to be appropriate for your product, right?!

            What is this 2 MB for? It would be interesting to hear about your WebAssembly performance story!

            Regarding the website homepage itself: it weighs around 767.32 kB uncompressed in my testing, most of which is an unoptimized 200+kB JPEG file and some insanely large web fonts (which honestly are unnecessary, the website looks _pretty good_ and could load much faster without them).

      • pjmlp 13 hours ago ago

        It does, but honestly besides people missing out on WebForms and Silverlight, it has very little uptake.

        • sfn42 11 hours ago ago

          That's not too surprising, as most web developers are JS developers. I'm sure JS will stay dominant at least a while longer, but in the .NET world Blazor is quite popular as far as web frameworks go. I imagine it will keep gaining popularity.

          • pjmlp 10 hours ago ago

            Not really, most of my .NET project assignments use Angular/React with .NET MVC/Minimal APIs.

            Additionally, Blazor is a bad fit for .NET CMS and commerce platforms; none of them supports it for rendering components.

  • jkelleyrtp 15 hours ago ago

    I work on Dioxus (Rust WASM framework).

    WASM for frontend, at least, has been held back by missing fundamentals like bundle splitting, hot-reload, debugger symbols, asset integration, etc. We spent a lot of 2025 working on improving this. Vite and friends are really good!

    I've been working on a big Dioxus project recently and am pretty happy with where WASM is now. The AI tools make working with Rust code much faster. I'm hopeful people gravitate towards WASM frameworks more now that the tools are better.

  • guntis_dev 8 hours ago ago

    I've worked with WebAssembly on several real-world use cases:

    Codec support: Built video and audio decoding in Wasm to bring codec support to browsers that didn't have it natively. Also helped with a custom video player to work around HLS latency issues on Safari.

    Code sharing: We had business logic written in C that needed to run both frontend and backend. Compiled it to Wasm for the frontend, which guaranteed identical behaviour across environments.

    Obfuscation: Currently exploring Wasm for "hiding" some JavaScript logic by rewriting critical parts in Rust and compiling to Wasm. We tried JS obfuscators (including paid ones), but they killed performance. Wasm gives us both obfuscation and better performance.

    • austin-cheney 7 hours ago ago

      To hide parts of JavaScript, my best recommendation is to just not send the undesirable JavaScript to the browser in the first place. There are performance and security improvements to that approach which are lost if you instead try to remove the same code after it has already arrived in the browser.

      That modification could be as simple as opening the concerned code file in your back end application as a large string and slicing out the parts you don't want. This will likely require some refactoring of the JavaScript code first to ensure the parts you wish to remove are islands whose absence won't break other things.

      • guntis_dev 6 hours ago ago

        Without revealing too much, the business logic must remain client side for this use case, and it's a common problem across our industry.

        I've explained the security reality to the business many times - any JavaScript sent to the client can be read, executed, proxied, or tampered with. That's just how browsers work.

        The current directive is - make it as difficult to understand as reasonably possible. We're not trying to stop determined adversaries (that's impossible), but we can raise the bar high enough to deter script kiddies and casual attackers from easily abusing it.

    • socalgal2 6 hours ago ago

      Are any of those codecs open source? An idea for a side project is a browser-based VLC (play any format). More ideally, a library that lets you play any old format in the browser.

      • guntis_dev 4 hours ago ago

        For open implementations, look at ffmpeg.wasm - it's FFmpeg compiled to WebAssembly and supports a wide range of codecs. It's open source and actively maintained.

        Some truly open/royalty-free codecs you could use - video: VP8, VP9, AV1. audio: Opus, Vorbis, FLAC.

        That said, building a VLC in the browser gets complicated quickly because of licensing - even if the decoder implementation is open source, some codecs have patent licensing requirements depending on jurisdiction and use case. For example, H.264's basic patents have mostly expired, but I'd verify the specific profiles you need.

  • charcircuit 15 hours ago ago

    A big thing overlooked when talking about speed is binary size. WebAssembly is incredibly inefficient in terms of storage space. People still on DSL will have to wait seconds (or minutes, in the case of Godot) for the blob to download before execution can start.

    Meanwhile javascript will be much faster to download since it is smaller and javascript can execute while it is downloading.

    • chrismorgan 15 hours ago ago

      This is badly wrong.

      • WebAssembly is not huge. Fundamentally it’s generally smaller than JavaScript, but JavaScript comes with more of a standard library and more of a runtime, which unbalances comparisons. If you use something like Rust, it’s not difficult to get the basic overhead down to something like 10 kB, or for a larger project still well under 100 kB, until you touch things that need Unicode or CLDR tables; and it will generally scale similarly to JavaScript, once you take transport compression into account. If you use something like Go or .NET, sure, then there’s a heavier runtime, maybe a megabyte, maybe two, also depends on whether Unicode/CLDR tables are needed, and then JS will probably win handily on bundle size and startup time.

      • JavaScript can’t execute while it’s downloading. In theory speculative parsing and even limited speculative execution is possible, but I don’t think any engine has tried that seriously. As for WebAssembly, it can be compiled and instantiated while streaming, generally at a faster rate than you can download it. The end result is that in an apples-to-apples comparison WebAssembly is significantly faster to start than JavaScript.

      • charcircuit 14 hours ago ago

        >WebAssembly is not huge

        I always feel like I'm downloading megabytes of it whenever someone uses it. In practice it is. Even a basic hello world in rust will set you back a few megabytes compared to the tens of bytes it takes in javascript.

        >JavaScript comes with more of a standard library and more of a runtime, which unbalances comparisons.

        Being able to make programs in a few bytes is a legitimate strength. You can't discount it because it's an effective way javascript saves size.

        • chrismorgan 13 hours ago ago

          > Even a basic hello world in rust will set you back a few megabytes

          Lies. It’s 35 kB:

            $ cargo new x
            …
          
            $ cd x
          
            $ cat src/main.rs
            fn main() {
                println!("Hello, world!");
            }
          
            $ cargo build --release --target=wasm32-unknown-unknown
            …
          
            $ ls -l target/wasm32-unknown-unknown/release/x.wasm
            … 34597 …
          
          And that’s with the default allocator and all the std formatting and panic machinery. Without too much effort, you can get it to under 1 kB, if I remember correctly.

          For the rest: I mention comparisons being unbalanced because people often assume it will scale at the rate they’ve seen—twice as much code, twice as much size. Runtimes and heavy tables make for non-scaling overhead. That 35 kB you’ve paid for once, and now can use as much as you like without further growth.

          • hajile 7 hours ago ago

            35kb is the size of a lot of entire JS frameworks.

            • chrismorgan 5 hours ago ago

              I should also mention that that 35 kB is uncompressed—gzipped, it’s 13 kB, and with brotli it’s 11.3 kB.

              Meanwhile, an empty React project seems to be up to 190 kB now, 61 kB gzipped.

              For startup performance, it’s fairly well understood that image bytes are cheap while JavaScript bytes are expensive. WebAssembly bytes cost similar to images.

        • adrian17 13 hours ago ago

          > Even a basic hello world in rust will set you back a few megabytes compared to a the tens of bytes it takes in javascript.

          That's definitely not true.

          A debug build of a "hello wasm-bindgen" style Rust program indeed takes ~2MB, but most of that is debug info; disabling that and/or stripping gets it down to 45-80kB (depending how I did it). And a release build starts at 35kB, and after `wasm-opt -O` gets down to 25kB. AFAIK most of the remaining space is used by wasm-bindgen boilerplate, malloc and panic machinery.

          ...and then, running wasm-bindgen to generate JS bindings somehow strips most of that boilerplate too, down to 1.4kB.

          Side note, I never understood how wasm-opt is able to squeeze so much on top of what LLVM already did (it's a relatively fast post-build step and somehow reduces our production binaries by 10-20% and gives measurable speedups).

      • room271 13 hours ago ago

        Just to correct slightly, I suspect most people who write Go WebAssembly are using https://tinygo.org/, which also achieves starting binaries in the 10kb range.

      • kg 6 hours ago ago

        I think you're understating the cost of having to ship your own standard library with every wasm application - the chunk of stdlib used by a real app is bigger than 10kb. ICU data files are in the tens of megabytes, TZDB is a chunk of data too.

        Lots of people pretend they don't need ICU or TZDB but that means leaving non-english-speakers or people outside of the US in the cold without support, which isn't the case for JS applications.

        I still think this is a major unsolved problem for WebAssembly and I've previously raised it. I understand why it's not solved though - specifying and freezing the bitstream for ICU databases is a big task, etc.

        • chrismorgan 5 hours ago ago

          A lot of the stuff WebAssembly is obviously good at is stuff where you never need any of those tables—algorithms, computations, not UI stuff. I would like to see improvements like the ones you describe to make it more generally useful, but there's still plenty of scope for WASM even without running into these issues. Also I think you're probably overestimating the size of the stuff you actually need to include. You tend to include specific tables rather than all the data. More typical figures are like "oh, you used regex with its default features… that'll cost 500 kB". And that'll also be a pre-compression 500 kB. Though on the other hand you can also end up including multiple copies of things when you're careful about optimising what each one has!

    • tannhaeuser 15 hours ago ago

      This, plus after the Godot runtime, the game assets themselves have to be downloaded, often making use of ZIP-like archive formats that may have made sense with DLCs or physical media but require huge downloads (like GBs) to access a single sprite, whereas browser DOM rendering itself is pretty much about prioritizing resources as they're viewed.

      Plus, WASM game runtimes need to bundle redundant 2D or 3D stacks, audio, fonts, harfbuzz, etc. yet don't expose eg. text rendering capabilities on par with those that browsers already have natively.

      The whole thing prioritizes developer experience over user experience.

    • s-macke 15 hours ago ago

      WebAssembly itself is not that inefficient in storage. It is mostly the usual bloat that comes with binaries. For example, Go binaries have to provide a full runtime, including garbage collection.

      If size is your top priority, you can produce very small binaries, for example with C. Project [0] emulates an x86 architecture, including hardware, BIOS, and DOS compatibility, and ends up with a WebAssembly size of 78 kB uncompressed and a 24 kB transfer size.

      [0] https://github.com/s-macke/FSHistory

      • charcircuit 14 hours ago ago

        >you can produce very small binaries, for example with C

        Not many people are going to want to be rolling their own libc like that author. Most people just compile their app and ship megabytes of webassembly at the expense of their users. To me webassembly is just a shortcut to ship faster because you don't have to port existing code.

        • valadaptive 12 hours ago ago

          > Not many people are going to want to be rolling their own libc like that author.

          Emscripten provides a libc implementation based on musl, and so does wasi-libc (https://github.com/WebAssembly/wasi-libc).

          If you explicitly list which functions you want to export from your WebAssembly module, the linker will remove all the unused code, in the same way that "tree-shaking" works for JS bundlers.

          In my experience, a WebAssembly module (even with all symbols exported) is smaller than the equivalent native library. The bytecode is denser.

          WebAssembly modules tend to be larger than JavaScript because AOT-compiled languages don't care as much about code size--they assume you only download the program/library once. In particular, LLVM (which I believe is the only mainstream WebAssembly-emitting backend) loves inlining everything.

          Judicious use of `-Oz`, stripping debug info, and other standard code size techniques really help here. The app developer does have to care about code size, of course.

    • qouteall 12 hours ago ago

      The WebAssembly standard was designed with binary size optimization in mind; the format itself is quite compact. But porting native code to Wasm often brings in many large existing libraries, and all that code makes the binary large.

      The native ecosystem never paid attention to binary size optimization, but the JS ecosystem paid attention to code size from the very beginning.

      • pjmlp 11 hours ago ago

        Yes, because compilers and linkers have had optimize-for-size switches since the 1980s just for fun.

    • CryZe 15 hours ago ago

      It depends. If you are compiling a high level GC language to WasmGC then there's really close to no reason why it would be larger than JS.

      • gf000 14 hours ago ago

        That is, if the source language's GC model is compatible with Wasm's.

        • WorldMaker 7 hours ago ago

          WASM's current GC model is mostly about sharing large byte buffers. It's on about the order of OS-level memory page management. Mostly it is getting used to share memory surfaces to JSON serialization/deserialization without copying that memory across the WASM to JS boundary anymore.

          It will be a while before WASM GC will look close to any language's GC.

    • IshKebab 15 hours ago ago

      It's more that JavaScript comes with a large standard library already available. You don't need to ship code to print integers or parse JSON or Unicode tables etc.

    • michalsustr 12 hours ago ago

      It’s not an issue. When we run LTO optimizations, strip all symbols, we get 2MB compressed for decently complex GPU-accelerated rendering (minfx.ai)

  • Tepix 6 hours ago ago

    Here is a minimal example of inline WebAssembly: a function "a" that adds two numbers. Can someone make the entire example shorter? (Linebreaks added for readability.)

        <!DOCTYPE html>
        <p id=r>
        <script>
        WebAssembly.instantiateStreaming(fetch(
        'data:application/wasm;base64,AGFzbQEAAAABBwFgAn9/AX8DAgEABwUBAWEAAAoJAQcAIAAgAWoL'))
        .then(x=>r.append(x.instance.exports.a(51,4)))
        </script>
    
    
    And here is the wat code that we can turn into wasm with wat2wasm and then into base64 for a data URL:

        (module
          (func (export "a") (param i32 i32) (result i32)
            local.get 0
            local.get 1
            i32.add))

  • fourside 16 hours ago ago

    > On every WebAssembly discussion, there is inevitably one comment (often near the top) asking what happened

    The meat of the article is informative, but the headline and motivation are based on this statement. It doesn't reflect my experience, but maybe I just don't hang out in the same internet spots as the OP.

    > We don’t yet see major websites entirely built with webassembly-based frameworks

    I don’t know why this entered into the zeitgeist. I don’t think this was ever a stated goal of the WebAssembly project. I get the sense that some people assumed it and then keep wondering why this non-goal hasn’t been realized.

  • qouteall 12 hours ago ago

    I've written about limitations of WebAssembly https://qouteall.fun/qouteall-blog/2025/WebAsembly%20Limitat...

    WebAssembly still doesn't provide a way to release memory back to the browser (unless using Wasm GC). The linear memory can only grow.
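
    For illustration, the JS-visible side of that limitation, using only APIs that exist today:

        const memory = new WebAssembly.Memory({ initial: 1 }); // 1 page = 64 KiB
        memory.grow(15);                                       // now 16 pages = 1 MiB
        // There is no memory.shrink(); the pages stay reserved until the Memory
        // object itself becomes unreachable and is collected.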

    The Wasm GC limits memory layout and doesn't yet support multi-threading.

    Wasm multithreading has many limitations, such as not being able to block on the main thread, not being able to share the function table, etc. And web workers have an "impedance mismatch" with native threads.

    And tooling is also immature (debugging requires print debugging)

    • turnsout 10 hours ago ago

      Honestly lack of true multithreading (without the Web Worker hack) is the biggest downside for me. Every major project I work on needs the concept of a main thread for UI and a separate thread for processing.
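
      A minimal sketch of that split (the file names and the process export are made up for the example):

          // main.js – keep the UI thread free and hand heavy work to a worker.
          const worker = new Worker('worker.js');
          worker.onmessage = (e) => console.log('result:', e.data);
          worker.postMessage(42);

          // worker.js – instantiate once and run the Wasm module off the main thread.
          const wasmReady = WebAssembly.instantiateStreaming(fetch('compute.wasm'));
          self.onmessage = async (e) => {
            const { instance } = await wasmReady;
            // process() and its numeric argument are made up for the sketch; real code
            // usually copies data into instance.exports.memory first.
            self.postMessage(instance.exports.process(e.data));
          };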

  • ubavic 15 hours ago ago

    From my experience, WASM is great for easily porting existing codebases to the browser. It took me less than a day to download Emscripten, learn a little about WASM, make one toy project, and then port a 20-year-old, 40-KLOC C++ project to the browser [1]. The last part only took me half an hour, and I don't even write C++.

    [1] https://poincare.matf.bg.ac.rs/~janicic/gclc/

  • radarsat1 10 hours ago ago

    Something I wonder is, what happened to asm.js? It got killed by WASM. In a way this is good; WASM is a "better" solution, being a formal bytecode machine description. But on the other hand, asm.js would not have the same limitations, e.g. with respect to DOM interaction or debates on how to integrate garbage collection: since you stay squarely in the JS VM, you get these things for free.

    Basically in some ways it was a superior idea: benefit from the optimizations we are already doing for JS, but define a subset that is a good compilation target and for which we know the JS VM already performs pretty optimally. So apart from defining the subset there is no extra work to do. On the other hand I'm sure there are JS limitations that you inherit. And probably your "binaries" are a bit larger than WASM. (But, I would guess, highly compressible.)
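
    For reference, this is roughly what that subset looked like: a toy asm.js-style module (it still runs as plain JavaScript in engines that don't special-case asm.js):

        function AsmAdder(stdlib, foreign, heap) {
          "use asm";          // marks the function body as the asm.js subset
          function add(a, b) {
            a = a | 0;        // the |0 coercions pin the values to int32
            b = b | 0;
            return (a + b) | 0;
          }
          return { add: add };
        }

        // Engines without asm.js-specific optimization simply run it as normal JS.
        console.log(AsmAdder(globalThis).add(2, 3)); // 5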

    I guess the good news is that you can still use this approach. Just that no one does, because WASM stole the thunder. Again, not sure if this is a good or bad thing, but interesting to think about... for instance, whether we could have gotten to the current state much faster by just fully adopting asm.js instead of diverting resources into a new runtime.

    • pjmlp 8 hours ago ago

      Which only existed because Mozilla was against adopting PNaCl.

  • MORPHOICES 15 hours ago ago

    WebAssembly was supposed to be the first “universal runtime” that could literally run anywhere at lightning speed, and while it was certainly an impressive achievement, it was clear to me that the friction was mostly in how it was integrated: with tools, with debugging, with interop, and so on.

    Despite all the web technologies and frameworks we have today for building solutions, a lot of developers still rely on JavaScript. It may be an outdated language with a lot of issues and problems, but it is still the most popular programming language today. Platforms tend to fail because the workflow surrounding them doesn't offer the flexibility to make the most of the platform.

    The most valuable lesson to learn is that potential alone doesn't drive widespread use of a technology. The flexible integration offered by JavaScript is what made its widespread use possible. What is the most valuable thing WebAssembly has offered you? What is the missing element that makes it hard to use? And does whatever comes next repeat the same patterns as WebAssembly?

  • fxj 13 hours ago ago

    WebAssembly in the browser does feel great when you look at things like Pyodide/Pyolite, JupyterLite, xeus, webR and even small tools like texlyre – you get a full language/runtime locally with zero server, just WASM and some JS glue. The sad part is that VS Code for the Web never really became that kind of self-contained WASM IDE: the WASI story is focused on extensions and special cases, and running real toolchains (Emscripten, full Python, etc.) keeps breaking or depending on opaque backend magic. So right now the best “pure browser” experiences are these focused notebook/tool stacks, not the general-purpose web IDE people were hoping vscode.dev would become.

  • nabla9 14 hours ago ago

    Technical details only verify Wasm's potential. Wide adoption is not a technical matter.

    Just like with the JVM and other better options before and after it, it's politics, interests and momentum. The JVM in the browser was not killed by technology; it was killed by Microsoft. Similarly, we should look at who gains and who loses relative to others if Wasm becomes mainstream.

    Easy portability and less platform dependence. Who wants it, who does not? Apple, Microsoft, Google, ...

    Just like the JVM, Wasm can be killed with the wrong kind of embrace. The Microsoft Java Virtual Machine (MSJVM) was named in the United States v. Microsoft Corp. antitrust civil actions as an implementation of Microsoft's "Embrace, extend and extinguish" strategy: adopt the JVM, remove portability with extensions.

  • johnfn 16 hours ago ago

    I'm actually using WASM (from Rust) on an image editor project. It's pretty good: I see around a 4x perf improvement over JS, depending on the benchmark.

    But what happened? Why am I not using it for all of my other random side projects? I posit that the JS ecosystem got so incredibly good that it's a no-brainer for a very large percentage of workflows. React + Vite + TypeScript is an incredibly productive stack. I can use it to build all but the most demanding apps productively. Additionally, JS is pretty fast these days, so the speed boost from WASM isn't actually that meaningful for most use cases. Only really heavy use cases like media editing or Figma-like apps truly benefit from what WASM has to offer.

  • herobird 12 hours ago ago

    > There is a lot of desire for advancement, but standardization means decisions are hard to reverse. For many, things are moving too quickly and in the wrong direction.

    Most Wasm proposals are very elegantly designed and effective - meaning they provide lots of value for relatively minor specification bloat. Examples are tail-calls, multi-value, custom-page-sizes, memory64 and even gc.

    However, simd and relaxed-simd increased spec bloat by a lot, are not future-proof, and caused more fragmentation due to non-determinism. In my opinion, work should have focused on flexible-vectors (SVE-like), which was more aligned with Wasm's original goal of near-native performance. The reason for this development was that simd was simpler to implement, so users could reap benefits earlier. Unfortunately, it seems the existence of simd completely stalled development of the superior flexible-vectors proposal.

    If flexible-vectors (or something similar) is ever stabilized, we will end up in one of two (bad) scenarios:

    1) People will have to decide between simd and flexible-vectors for their compilation, depending on their target hardware, which is totally against Wasm's original goals.

    2) The simd proposal will be mostly unused and deprecated. Dead weight.

    • whizzter 11 hours ago ago

      From what viewpoint do you view them?

      simd128 fills a common need (most games use vector operations) and was a viable option with _broad hardware support_. Yes, it adds a ton of instructions and impacts a ton of places with regard to memory ops, but vec4 operations commonly use many of those instructions. Better something useful than something that never had a chance of standardization.

      On the other end of the spectrum, things like custom-page-sizes seem like a simple, flexible solution but smell like an implementation nightmare if you already have a runtime, since that really impacts things on a far deeper level (64k pages were probably a mistake, but reading up on the issues of emulating x86 with 4k vs 16k pages on Macs kind of hints at how devious "small" things like that are). I'm not surprised if it never becomes an official part (only 3 runtimes supporting it so far).

      I can understand the need for tail-calls, but at the same time it's also an annoying can of worms to implement in compilers that weren't prepared for it (which could be a large part of why it took so long for Safari to support).

      wasm-gc really hit a real-world need (bindings did really suck; they're better but not perfect now) but also came in a bit half-assed in some respects (languages like C# needing workarounds to use it); same with memory64 being a real-world need.

      I can see different camps (popular/functional languages pushing for gc, multi-value and tail-calls; games pushing simd128, multithreading and memory64; embedded pushing flexible pages; etc.) all competing and focusing on what they want, but all camps also need to understand that pushing _everything_ pushes up the risks for the web (security), and in the end that's what wasm was for: providing a runtime to run non-JS code on the web.

      • herobird 9 hours ago ago

        My view on specifications is that their long-term success depends on the value they provide relative to their complexity. Complexity inevitably grows over time, so spending that complexity budget carefully is crucial, especially since a specification is only useful if it remains implementable by a broad set of engines.

        WebAssembly MVP is a good example: it offered limited initial value but was exceptionally simple. Overall, I am happy with how the spec evolved with the exceptions of 128-bit simd and relaxed-simd.

        The main issue I see with 128-bit simd is that it was always clear it would not be the final vector extension. Modern hardware already widely supports 256-bit vector widths, with 512-bit becoming more common. Thus, 128-bit simd increasingly delivers only a fraction of native performance rather than the often-cited "near-native" performance. A flexible-vectors design (similar to ARM SVE or the RISC-V vector extension) could have provided a single, future-proof SIMD model and preserved "near-native" performance for much longer.

        From a long-term perspective, this feels like a trade-off of short-term value for a large portion of the spec's complexity budget. Though, I may be underestimating the real challenges for JIT implementers, and I am likely biased being the author of a Wasm interpreter where flexible-vectors would be far more beneficial than 128-bit simd.

        Why do you think flexible-vectors might never have a realistic path to standardization?

  • misiek08 10 hours ago ago

    For me the biggest issue is all the articles and videos showing how people run entire companies on WASM, and then sample code with fn(i32, i32) i32. The interoperability between languages and pre-defined APIs like WASI are just not there yet, and it's just rough to use.

    Of course crazy things can be (and already have been!) done with WASM, but it's more like early Rust while still being advertised as Go ;)

  • koito17 12 hours ago ago

    To add to the author's list of examples, the regex test site Regex 101 (http://regex101.com/) relies on WebAssembly. To verify this in Firefox, you can set javascript.options.wasm to false, and you will be instructed to use a browser supporting WebAssembly.
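
    The check a site like that does is typically nothing more than this (a hypothetical sketch, not regex101's actual code):

        // if the engine exposes no usable WebAssembly object (or it's disabled),
        // fall back to a message asking for a browser that supports it
        if (typeof WebAssembly !== "object" ||
            typeof WebAssembly.instantiate !== "function") {
          document.body.textContent = "Please use a browser that supports WebAssembly.";
        }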

    If you're willing to risk some safety guarantees, then you can embed SQLite in Go without cgo by using WASM builds of SQLite. In particular, this package: https://github.com/ncruces/go-sqlite3

    Note: the risk here is that it's unclear how well-tested SQLite WASM builds are compared to native builds for things like data integrity. With that said, in most of my personal projects using Go, I frequently reach for the WASM builds because it keeps builds fast and easy.

    Also, I want to mention that I have seen web apps that are C# / Blazor programs compiled into WebAssembly. The accessibility is predictably terrible, but I have seen at least one such web app in the wild. I assume this is largely why one doesn't encounter WASM web frameworks often. In any case, WASM is surprisingly useful in many niches, and that's kind of the problem for WASM's visibility: the niches where I find WASM useful are almost completely disjoint from each other. But it's a solid technology nowadays. The only real gripe I have is the fact that only wasmtime seems to fully support wasm32-wasip2. You can actually compile quite a lot of Rust backend stuff into WASM and run that instead of a container. Not that this is particularly useful, but I've found it interesting as an exercise.

  • singularity2001 11 hours ago ago

    The biggest blunder was not adding UTF-8 strings as a first-class citizen.

    The second blunder was not allowing any kind of direct memory mapping. I know it's against the security model, but if you have to copy every pixel one by one to the host, that won't be efficient (see the sketch below).

    The third blunder was, when they finally added GC objects, not making any of the objects' properties readable from the host.
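
    A rough sketch of the copy I mean in the second point (render and framebuffer_ptr are hypothetical exports): every frame, the RGBA bytes have to be copied out of linear memory into an ImageData before the canvas can show them.

        WebAssembly.instantiateStreaming(fetch("renderer.wasm")).then(({ instance }) => {
          const { memory, render, framebuffer_ptr } = instance.exports; // hypothetical exports
          const ctx = document.querySelector("canvas").getContext("2d");
          const w = 320, h = 240;
          render(w, h); // module writes RGBA pixels into its own linear memory
          // no mapping or sharing: make a view over linear memory and copy it out
          const pixels = new Uint8ClampedArray(memory.buffer, framebuffer_ptr(), w * h * 4);
          ctx.putImageData(new ImageData(pixels, w, h), 0, 0);
        });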

  • liampulles 12 hours ago ago

    At the last few places I've worked, we've seen most users engaging with our mobile app, so it has made sense to develop a mobile app with Flutter or Kotlin Multiplatform (or similar) for our broad userbase, and then to use good ol' backend-templated HTML for administrative sites rather than an SPA. Doing backend templating with good ol' forms and whatnot is still a pretty good way to develop normal, boring websites.

  • adamdecaf 9 hours ago ago

    Compilation target support from Go and other languages makes it really easy to provide your library to websites, which we use for demos. It's quick to compile Go code to WASM and show folks what your library offers.

    Plus the demo's computation happens client side so no data is sent to a server.

    We can offer our full payment parsing libraries to the web as developer tools without any code changes. I don't have to care about the details of WASM because it "just works".

    https://moov-io.github.io/ach/webui/

  • socalgal2 6 hours ago ago

    Two more examples of WebAssembly:

    Photoshop Online:

    https://www.adobe.com/products/photoshop/online.html

    And just announced

    Unity Online:

    https://youtu.be/xJONoHr1N6A?t=1717

  • troad 15 hours ago ago

    The folks behind WASM are wilfully blind to the big honking use case that everyone wants (a fully-featured JS replacement, targetable in any language), in favour of an abstract adventure in chasing some ideal Platonic ISA, except there's no obvious market or practical use case for such a thing.

    WASM will live and die in the browser. I wish the folks behind it would acknowledge that fact and give it sufficient browser interop to finally render JS unnecessary.

    • creata 14 hours ago ago

      > a fully-featured JS replacement, targetable in any language

      Any language can target a combination of JS and Wasm, right now, to get a "fully-featured JS replacement". How would adding more features to Wasm improve that situation?

      • troad 10 hours ago ago

        How does substituting JS for some other JS achieve a goal of replacing JS? If someone wanted to replace C, would you suggest C as a C replacement? That's nonsensical in context.

        Adding browser interop to Wasm that obviated the need for JS would, obviously, achieve that goal. Hence the improvement.

        • creata 2 hours ago ago

          > If someone wanted to replace C, would you suggest C as a C replacement?

          If someone wanted to replace C, I would strongly suggest compiling something else to C, yes. That seems kind of obvious. It's how many programming languages that aim to replace C get their start.

          To put it another way: I can see how [replacing JS as the interface that the human deals with] can be a valuable goal, but why is [replacing JS so completely that it doesn't even exist as generated code] so valuable to you?

          • troad an hour ago ago

            If someone wanted to replace C, they are likely seeking to do so for reasons intrinsic to C, such as the risk of memory over/underflow, use after free, problematic numeric promotion, lack of proper strings and bools, etc.

            Transpiling to C is the worst possible way of trying to address these issues. If you write `x + y` in your newlang, and that transpiles to `x + y` in C, you have simply inherited all of C's implicit type conversion nonsense. If, alternatively, you write a whole bunch of machinery to ensure x + y is always safe, congratulations, you're now writing a VM in C, which is even harder to get right and do safely. If you're trying to reduce your use of C, and you wind up maintaining a VM in C, I rather think you've somewhat catastrophically failed in your objective.

            It's for this reason that languages trying to replace C don't generally transpile to C, despite your claim. The biggest C replacement candidates right now are Zig and Rust, both of which target LLVM IR, not C. There are precious few use cases where you'd want to leave C in as an unnecessary, problematic middleman, when LLVM IR is available.

            Similarly, transpiling to JS inherits all of JS' baggage and issues, of which much has accumulated in the last thirty years. It would be an undoubted improvement to be able to bypass that layer when it isn't useful, regardless of one's opinion on JS.

            Just as C replacements compile to pure LLVM IR, a JS replacement should be able to compile to pure WASM.

            • creata 33 minutes ago ago

              > Transpiling to C is the worst possible way of trying to address these issues.

              It's really not. Compiler writers, in your words, "write a whole bunch of machinery" so that the code has the intended semantics. It's not fundamentally that different to generating LLVM IR.

              I never said every single language compiles via C. Sure, you're right, Zig and Rust generate LLVM IR instead (edit: iirc Zig is moving to their own backend, but I don't think that's relevant), and there isn't much reason not to target LLVM these days, unless you want to target niche platforms that LLVM doesn't support.

              > It would be an undoubted improvement to be able to bypass that layer when it isn't useful, regardless of one's opinion on JS.

              I will ask you clearly: you are asking for a lot of work from browser makers. What use cases, concretely, will that work actually enable?

              If you hate the baggage of JS -- which is fair enough, there's a big mismatch between it and many other languages -- Wasm can be used for most of the heavy lifting, and it lacks that baggage. The JS only needs to be a little layer between Wasm and the browser.

              But what will getting rid of that layer enable? What will it let people do that couldn't be done before?

              I'm not trying to be mean, so I'm sorry if it came off that way.

    • runarberg 15 hours ago ago

      I see a lot of people declaring that, but I immediately disregard it. In my circles this is a fringe belief. Wasm's most obvious use case is running native code in your browser. It is pretty good at that, and that is what most people are using it for. Personally I don't see why Wasm needs a different future than the one it is already on.

  • dfabulich 13 hours ago ago

    > Many seem to think there is a path to Wasm replacing JavaScript within the browser—that they might not need to include a .js file at all. This is very unlikely.

    This article didn't even seriously entertain replacing JavaScript as an idea, saying nothing about why it's "very unlikely." But it's the #1 thing most devs are excited about in WASM: maybe they could ditch JS and use another language instead for browser UI, at least Rust, but maybe Go or even Python.

    The reason that's unlikely is that browser UI is defined in standards as a JavaScript API; restandardizing an ABI for low-level languages would take years (perhaps decades). https://danfabulich.medium.com/webassembly-wont-get-direct-d...

  • tdrz 15 hours ago ago

    One thing that happened to WebAssembly is that it allowed npm packages like PGlite to be created. With a simple `npm install` you now have a PostgreSQL instance in your web or Node app, no (connection) strings attached (pun intended). Full disclosure: I am one of the maintainers.
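
    Usage is roughly this (going from memory of the README, so check the docs for the exact API):

        import { PGlite } from "@electric-sql/pglite";

        // Postgres compiled to WASM, running entirely in the page / Node process
        const db = new PGlite();
        const result = await db.query("select 'Hello world' as message;");
        console.log(result.rows); // [{ message: "Hello world" }]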

  • stanac 15 hours ago ago

    Does anyone know why Docker is dropping wasm workloads? I never heard of anyone using it; I thought it was because WASI hasn't reached "1.0" yet, so the ecosystem is still small.

    Wasm and WASI are very promising: as stated in the article, it's safe/isolated by default, it can target different hardware, and almost any popular language (in theory) can be compiled to wasm. It sounds perfect on paper. It's quick to start (quicker than Docker). Maybe it will be a replacement/supplement for lambda-esque workloads.

    https://docs.docker.com/desktop/features/wasm/

  • currywurst 14 hours ago ago

    WASM (and WebGL) seems to have powered Figma to a $20 billion acquisition offer from Adobe a while ago ; )

    It's a fantastic item for the browser toolbox, and I agree with Ameo that the "hallmark of success" has been achieved by this technology.

  • hollowturtle 8 hours ago ago

    I just wish wasm had some kind of API for drawing on a canvas, managing pointer/touch events, and providing some accessibility APIs. That's all I want, and I believe all that's needed, to ditch the DOM and other horrendous APIs altogether and start making real native-like apps in the browser.

  • winternewt 12 hours ago ago

    The use case I always envisioned from the sidelines was that I could ditch a poorly designed, garbage-collected mess of a language (JavaScript) for something typesafe, with predictable performance, cache locality by default (as in not making everything a reference), no GC, generics, etc. But WASM won't be there until it has first-class access to the DOM, in my opinion.

  • pdubroy 14 hours ago ago

    Shameless plug: if you're interested in learning WebAssembly — like really learning the bytecode format and how it works — you might like our book, WebAssembly from the Ground Up: https://wasmgroundup.com.

    It starts with handcrafted bytecode for a minimal Wasm module in JavaScript, and then guides you through the creation of a simple compiler for a toy language.
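
    To give a taste: the smallest valid module is just eight handcrafted bytes (the magic number plus a version), which you can instantiate directly:

        const bytes = new Uint8Array([
          0x00, 0x61, 0x73, 0x6d, // magic: "\0asm"
          0x01, 0x00, 0x00, 0x00, // binary format version: 1
        ]);
        WebAssembly.instantiate(bytes).then(({ module }) => {
          console.log(WebAssembly.Module.exports(module)); // [] -- empty but valid
        });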

  • zcw100 11 hours ago ago

    I'm usually pretty good at explaining new technologies to people but WebAssembly has got to be the most difficult to try and explain. The sheer number of misunderstandings about it is amazing. Luckily the misunderstandings serve my purpose for right now so I'm glad to see all the noise.

  • subset 11 hours ago ago

    I recently wrote an eigenvalue solver for an interactive component on my blog with Rust compiled to WebAssembly. Being able to write-once and compile for the web and desktop felt like the future. But then, I'm no fan of JavaScript and wouldn't have attempted it if WASM didn't exist.

    • Avamander 11 hours ago ago

      I've just recently done the same, turned Rust into WASM and it does feel great. Being able to compile mature and well-tested libraries into WASM instead of trying to find a JS equivalent is incredible value.

  • TN1ck 12 hours ago ago

    > Figma runs untrusted user plugins in your browser by running them in a QuickJS engine that is compiled to Wasm.

    According to the linked blog article, this is not what they are doing, but rather an option they explored. They use JavaScript Realm shims to isolate the execution.

  • rho4 14 hours ago ago

    The author makes beautiful concise statements that make me feel like he has a deep, big-picture kind of understanding of computing.

    I think this person would be very satisfying to work with, because decisions would be based on a discussion of tradeoffs, and an awareness of similar technologies and approaches throughout computing history.

  • psychoslave 15 hours ago ago

    >It is almost 1:1 in that you can compile WAT to Wasm and then back to WAT with barely any loss in information (you may lose variable names and some metadata).

    I love it! It reads like, "you can put your snowman in an oven to obtain water, and then feed the water to a snow machine to get back to your initial material state with almost no information lost."

    • everfrustrated 7 hours ago ago

      I was reading a research paper that benchmarked some wasm compilers, and the fastest was converting wasm back to C and recompiling it!

      • vitalnodo 6 hours ago ago

        Can you recall the link?

        • everfrustrated 5 hours ago ago

          see w2c2 in this paper

          https://www.opencloudification.com/wp-content/uploads/2025/0...

          Though I misremembered it slightly: they transpile wasm back to C and compile that to a native binary.

          • yencabulator 36 minutes ago ago

            w2c2 has only 2 mentions. wasm2c is not a clear winner; it specifically loses several of their benchmarks.

            In general, using a preexisting compiler as a JIT backend is an old hack, there's nothing new there. It's just another JIT/AoT backend. For example, databases have done query compilation for probably decades by now.

  • michalsustr 15 hours ago ago

    We love wasm! You can get pretty far with it. We're building a new machine learning experiment tracker using wasm on the front end. (If you know what Wandb or Neptune is, you should give us a try!)

    As far as I know, we are the fastest on the market. The multithreaded support is a pain though.

    https://minfx.ai

  • Ono-Sendai 12 hours ago ago

    I use webassembly for Substrata (https://substrata.info/). It works pretty well, allows building a c++ app using OpenGL for the web.

  • austin-cheney 14 hours ago ago

    Going back a decade I remember numerous comments, in here and Reddit, from developers (typically Java developers) completely desperate for WASM to be a JavaScript replacement. This was despite the design goals of WASM literally stating the opposite.

    Otherwise, WASM looks like a complete success.

    • pjmlp 8 hours ago ago

      Java developers already had GWT as alternative, no one cares that much.

  • childintime 12 hours ago ago

    There is an intriguing alternative to WASM for many use cases: a RISC-V VM.

  • weinzierl 15 hours ago ago

    "We don’t yet see major websites entirely built with webassembly-based frameworks."

    The more telling question to me is:

    Do we see real-world websites that are not just tech demos coming out of WASM aficionado circles? Sites that are actually useful to a significant number of people, even if we wouldn't necessarily call them major websites.

    https://cbva.com/

    comes to my mind, but there must be more.

    • Jweb_Guru 9 hours ago ago

      This website made me suddenly have a huge feeling of loss for what the web could be like. It is so snappy (in the way old static sites could be) but without the page transitions that made them fall out of fashion.

    • okokwhatever 11 hours ago ago

      The "Loading..." message makes it so 90's. I like it.

  • cjs_ac 15 hours ago ago

    The weakest point in any computer system is the bag of meat operating the thing; the second weakest point is the network. Most web apps that are slow are slow because of the endless chit-chat between client and server across the network, and because too much business logic runs on the client machine, which might be a ten-year-old smartphone. For these apps, improving performance is about minimising the number of HTTP request-response pairs and moving logic to the server, not making the frontend code run faster.

    > I figure most are under the impression that the advancement of this technology would have had a more visible impact on their work. That they would intentionally reach for and use Wasm tools.

    > Many seem to think there is a path to Wasm replacing JavaScript within the browser—that they might not need to include a .js file at all. This is very unlikely.

    This is because most of us are not writing fancy browser-based 3D game engines; we're writing boring enterprisey CRUD apps, and the only things we want from our frontend code are HTTP request-response handling and DOM manipulation. Consequently, WASM evangelism feels irrelevant and, frankly, boorish.

  • mickael-kerjean 10 hours ago ago

    I use WASM extensively in my OSS work with Filestash (https://github.com/mickael-kerjean/filestash ) in 3 main areas:

    1. to create web versions of applications that are traditionally desktop only to render things like Parquet, PSD, TIFF, SQLite, EPS, ZIP, TGZ, and many more, where C libraries are often the reference implementations. There are almost a hundred supported file formats, most of which are supported through WASM: https://github.com/mickael-kerjean/filestash?tab=readme-ov-f...

    2. to create plugins that extend the core application. As of today, you can add your own endpoint or middleware in Filestash, package it with its own manifest, and run server-side code in a constrained environment. For example, there is a LibreOffice wasm edition that can run from your browser but requires a couple of HTTP headers to be sent by the server to work, so the plugin has a bit that runs server-side to add those HTTP headers: https://github.com/mickael-kerjean/filestash/blob/master/ser...

    3. in the workflow engine to enable people to run their own code in actions while ensuring they can't fuck everything up

  • WhereIsTheTruth 15 hours ago ago

    The sandboxification of WASM is what happened

    Instead of building a true portable binary format with system access, we got a JavaScript VM from TEMU:

    - Reference Types

    - Exception Handling

    - GC

    Makes GC'd languages compile better; it does nothing for systems programming.

    Meanwhile, the actually needed capabilities remain blocked forever:

    - Memory: Still can't mmap, still can't allocate outside linear memory

    - Networking: Still needs JS interop bullshit (sketch below)

    - Device: Still need JS interop bullshit and still sandboxed behind browser security model

    The Result: WASM isn't a serious systems target, it's a compilation artifact for managed languages that could've just targeted JS directly
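
    To make the "JS interop bullshit" concrete: the module has no network access of its own, so the host has to hand it a JS callback as an import. Roughly (import/export names and the readStringFromWasmMemory helper are hypothetical):

        const imports = {
          env: {
            // the only way the Wasm side "does networking" is by calling back into JS
            http_get: (urlPtr, urlLen) => {
              const url = readStringFromWasmMemory(urlPtr, urlLen); // copy out of linear memory
              fetch(url); // the response has to be marshalled back in the same way
            },
          },
        };
        WebAssembly.instantiateStreaming(fetch("app.wasm"), imports)
          .then(({ instance }) => instance.exports.main());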

    • pjmlp 13 hours ago ago

      Correction: some GC languages. The GC doesn't support interior pointers, for example.

  • ceving 15 hours ago ago

    The data exchange between host and guest is still unspecified. You cannot access host objects from Wasm. Most do just string serialization, which is not fast. Or they write libraries for particular languages, which damages the universal idea of Wasm. And WASI seems to be quite controversial: https://www.assemblyscript.org/standards-objections.html
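
    The usual workaround is something like this (a sketch; the alloc and take_string exports are hypothetical, since every module invents its own ABI for this):

        function passString(instance, str) {
          const bytes = new TextEncoder().encode(str);        // JS string -> UTF-8
          const ptr = instance.exports.alloc(bytes.length);   // ask the guest for a buffer
          new Uint8Array(instance.exports.memory.buffer).set(bytes, ptr); // copy into linear memory
          return instance.exports.take_string(ptr, bytes.length);         // pass (ptr, len) as i32s
        }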

    • saghm 14 hours ago ago

      Reading through that link is very confusing as someone who hasn't been actively following all of the various standards and organizations in detail. Quite a lot of the technical claims seem a bit vague and hard to evaluate without being more of an expert (like the repeated complaints about WASI being bad for Java/JavaScript/"the Web"). Pretty much the only concrete technical concern I could glean is that whoever wrote it was extremely unhappy about the use of UTF-8 over UTF-16, which I can understand feeling strongly about, but I also feel like the situation where there's a choice between the way Java/JavaScript/Windows do it and the way a lot of other things do it isn't exactly a problem original to WebAssembly. There's merit in the idea of sticking with what a lot of web stuff already uses, but it's not really that crazy for a new standard to be the place where you try to design with an eye toward shaping the future rather than always following the past.

      More so than anything technical, though, there sure seems to be a lot of bad blood between the group of people behind AssemblyScript and the people behind WASI. This feels like a classic case of small initial technical disagreements spiraling out of control and turning into a larger conflict fueled by personalities and organizational politics. I agree that overall this doesn't add confidence to the WebAssembly ecosystem as a whole, but it's not clear to me that the conclusion is so much "WASI is controversial" as "WebAssembly seems like it might have a problem with infighting".

      • nightpool 8 hours ago ago

        "the group of people behind AssemblyScript" is just one person, as far as I can tell from this doc / the relevant Github threads. I wouldn't necessary call it infighting per se, at least not from this interaction.

      • ceving 13 hours ago ago

        Wasm has a fundamental problem: int64 is an insufficient data type for real use cases. If you want to create some kind of plugin system based on Wasm, you need to exchange structured data. But most languages disagree about the memory layout. Dynamic languages do tagging, compiled languages do not. And the UTF issue shows that even with strings, there's still no real agreement.

        Furthermore, there are now competing interest groups within the Wasm camp. Wasm originally launched as a web standard: an extension of the JavaScript environment. However, some now want to use Wasm as the basis for replacing containers: an extension of a POSIX environment.

  • neomantra 12 hours ago ago

    November 2025 was when AntiGravity and Gemini 3 came out and everything changed for me. Six months earlier, I had tried to vibe-code the 21+ verification page for AgentDank (an OSS cannabis MCP server connecting LLMs to open data via DuckDB SQL). After hours, I couldn't get the page fully working.

    I tried it with AG+G3, prompting both the age 21+ screen AND the chat interface. It one-shotted a working version of both in less than a minute.

    I was immediately free to start exploring my idea! Adding multiple personality Budtenders, a Stash box for frequent items; it would create mocks and tests. So liberating.

    Then there was this other idea I'd had in my head for a while: since DuckDB is broadly portable and can target WASM, I could play with the datasets in the browser, and much of the app wouldn't need an MCP-connected LLM or any backend services.

      next up, there is another mode where we will browse and visualize the cannabinoid contents.  the dataset will be the data here https://github.com/AgentDank/dank-data    we will use apache echarts for visualizations.  we can probably embed duckdb in the browser and do it all the queries there.   we can have some simple UI for exploring, as well as raw SQL query
    
    
    And it one-shotted an entire DBA application interface with a custom UI and visualization to explore the data. Then I asked for some 3D WebGL charts with echarts and we got that working too.

    So WASM is gonna be as important as ever because we have tons of software which can be compiled to WASM, Web is the UX meeting point, and LLMs can help bring it all together.

  • hexo 5 hours ago ago

    Nothing, it is still disabled on all my devices as it always was and will be.

  • mirzap 15 hours ago ago

    Feels like most of the disappointment comes from the wrong expectation. Wasm was never going to replace HTML/CSS/JS for normal frontend work, and JS got good enough that most apps don’t need it anyway. On the other hand, Wasm as a universal runtime (WASI replacing containers, etc.) is clearly still unfinished.

    Where it has worked is as infrastructure: fast, sandboxed, portable code for the parts that actually need it. A lot of people are already using it indirectly without realizing. So it’s less "what happened to Wasm?" and more "it didn’t become the silver bullet people imagined."

  • coldtea 13 hours ago ago

    >But I think this alone is not very convincing. We don’t yet see major websites entirely built with webassembly-based frameworks

    Unless it's something like Figma or a game, why the fuck would they be?

    So that you get the joy of writing your website in some language that compiles to WebAssembly (and for which it's much more difficult to find frameworks and developers) rather than in native JavaScript?

  • syrusakbary 13 hours ago ago

    Great analysis. I'm Syrus, from Wasmer, and I've been working on WebAssembly professionally for the last 7 years (we are, in fact, the first Wasm-first company!)... hopefully my point of view will be useful to read!

    Why are things not as heated? Simply because many of the big players are no longer making big bets on the technology, nor are they spending any marketing on making it successful, mainly because most of their bets have been unsuccessful: WASI, the Component Model. Many of the small players that raised money in the space either died or ended up acqui-hired by bigger players. The only ones that survive are the ones that truly understand that the tech doesn't matter a thing; it's the product (what you are enabling with WebAssembly).

    In my view, this happens because there's a great mismatch between technical capabilities and the go-to-market skills that bringing the tech to the masses requires.

    The developers who tend to be technically great and understand the value of WebAssembly are usually not as good at the go-to-market needed to make it successful. For example, WASI proponents wanted to completely break the POSIX model (because in their view it is completely wrong... and they are partially right!). But they don't only want Wasm to succeed... they also want their mental model of new operating system calls to go along with it (thus, you tie the success of one to the success of the other).

    AI only amplifies the importance of go-to-market skills even further, by accelerating the tech even more. When your moat is fully built around the tech but there's nothing that sustains it (a product), then you have an issue. The market is what sustains it, nothing else. People in the ecosystem cared way more about politics (creating a working group to control other companies) than they cared about creating something that many people could use tomorrow.

    At Wasmer, it took us a bit of time to understand this, but over time we have been able to improve our skills and keep capturing value from it.

    So, it's possible to create something successful with WebAssembly. You just need to make something people want (tl;dr: it's not the tech!)

    • adrian17 11 hours ago ago

      > Mainly because most of their bets have been unsuccessful: WASI, Component Model.

      Can you expand on that? I've only been using wasm for the web (and the current status quo of JS bindings to the DOM is working just fine for me), so I haven't been following that closely, but for the last couple of months I was under the impression that people are still trying to push WASI.

      • syrusakbary 11 hours ago ago

        WASI in its first attempt has been a success (what is now called wasip1). wasip2 and p3 are tied into the component model.

        I'd say that wasip1 has been successful, but not any future version. You can check which version is the most popular just by looking at the Rust WASI crate versions and how many downloads each one has:

        wasip1:

          https://crates.io/crates/wasi/0.11.1+wasi-snapshot-preview1
          https://crates.io/crates/wasi/0.10.2+wasi-snapshot-preview1
          https://crates.io/crates/wasi/0.10.0+wasi-snapshot-preview1
          https://crates.io/crates/wasi/0.9.0+wasi-snapshot-preview1
        
        wasip2, p3:

          https://crates.io/crates/wasi/0.14.7+wasi-0.2.4  
          https://crates.io/crates/wasi/0.13.3+wasi-0.2.2
          https://crates.io/crates/wasi/0.12.1+wasi-0.2.0
  • Zedriv 16 hours ago ago

    Love WASM! Still hoping for proper multithreading someday.

    • creata 14 hours ago ago

      Emscripten has pthreads. You can multithread it yourself using web workers and shared memory. What's missing?
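
      A sketch of the shared-memory setup (file and export names are hypothetical, and the page has to be cross-origin isolated via COOP/COEP headers before shared memory is allowed):

          // a Memory backed by a SharedArrayBuffer, visible to both threads
          const memory = new WebAssembly.Memory({ initial: 16, maximum: 256, shared: true });

          const worker = new Worker("wasm-worker.js");
          worker.postMessage({ memory }); // the worker instantiates the same module with it

          WebAssembly.instantiateStreaming(fetch("threads.wasm"), { env: { memory } })
            .then(({ instance }) => instance.exports.main_thread_entry());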

  • gaanbal 11 hours ago ago

    happened? past tense? it's still being worked on. expect massive things in the near future.

  • perryizgr8 7 hours ago ago

    > Separately, I think the community is not helped by the philosophy of purposely obfuscating teaching material around Wasm

    What does the author mean by this?

    • azakai 6 hours ago ago

      Yes, that puzzles me too. Not only do I not know what the author means, I'm not sure what it could mean: teaching material for wasm is generated by many independent people, each for their own tools and purposes. There is no organization behind all that, much less a philosophy.

  • guluarte 8 hours ago ago

    Most managers don't care about performance

  • doloop 16 hours ago ago

    I was wondering if wasm was used to transpile jsx in browserland.

  • classified 11 hours ago ago

    AI has eaten so much attention that actually good ideas are being forgotten or overlooked.

  • jokethrowaway 12 hours ago ago

    As someone who's going to write frontend in Leptos in the next 2 weeks, what stops me from recommending wasm for every frontend application is the bundle size. I don't want to ship compiled megabytes to the user to render UI.

    If there was a rust frontend framework that compiles to JS, I'd use it for all my frontend code.

  • BiteCode_dev 14 hours ago ago

    The success of webassembly has not been the "universal language for the web clients" we all expected.

    But it has been "damn, that's a pretty good sandbox we all can compile to".

    And of course, it means we can now have safe Python execution services from user input thanks to stuff like pyodide now.

  • forrestthewoods 15 hours ago ago

    WASM works. Except for all the convoluted edge cases where it doesn’t.

    Also, wasm doesn’t solve enough real problems. JavaScript sucks but is plenty good enough for most things. Wasm unlocks a few things. But it makes no sense for, say, Steam games that are tens of gigabytes.

    If wasm didn't exist, the internet and the world would be... fine? Use JavaScript in a browser or go actual native. The space in between for wasm exists but is extremely small, especially for anything other than cool visualization widgets.

  • rvz 15 hours ago ago

    A solution in search of a very very very tiny problem to solve.

    Which almost no-one cares about.

  • dboreham 8 hours ago ago

    I think it's more productive to analyze the adoption (or not) of a technology from the perspective of it being a cult, rather than with strict technical factors. A cult grows as an emergent phenomenon where the conditions on the ground create incentives for new members to join faster than old ones leave.

    Through this lens there are actually two cults with two cult rallying cries:

    1. The browser is, arguably, a terrible program execution environment. You have to use a stupid language and there's a ton of pretty standard things you can't do (e.g. have proper concurrency). Let's fix that by baking a proper program execution environment into the browser.

    2. There are lots of places where someone builds an application (a real application that runs as a process on an OS) that then needs to support some sort of embedded programmability. Historically there have been many ways to do this: embed Lua, embed a Python interpreter, embed a JS interpreter, write the application in a language that inherently supports runtime dynamic binding (Java, Lisp, ...). Let's make a better version of that thing such that it supports all common languages.

    My take is that while WASM was developed by people in the #1 cult, it has actually been adopted by people in the #2 cult. I see WASM used all over the place as a way to host user-provided code inside things. Blockchain nodes are a common use case, for example.

    Then there's a third use case that I think motivates many of the comments here which is: back in the day we could make an application and distribute it to users who would run it on their computers. That pretty much isn't possible now for various reasons, but primarily because computers are locked down (particularly mobile). If only we could be allowed to run regular code inside the one execution environment that's not locked down (the browser), imagine what we could do then. Problem is that WASM doesn't have all the features necessary for this use case. Experience in the past with similar things (ActiveX, Java, ...) suggests that if it did, it would also become locked down.

  • zb3 10 hours ago ago

    WebAssembly sucks with regard to emulation speed, it doesn't even support native JIT. If you disagree, go and make a QEMU port where TempleOS doesn't take 5+ minutes to load.

    WebKVM is what we need..

    • butterisgood 4 hours ago ago

      Was that not basically in the same vein as Google's NaCl? And was that not largely abandoned due to the success of WASM?

      lol?

  • Yizahi 11 hours ago ago

    This article misses one important point, maybe even the most important one: WebAssembly didn't get traction because of the risk of theft. Making a game or professional software in it essentially equals publishing the full source and assets online, ripe for taking by any unscrupulous party. SaaS may endure that, but games will not. And that's why we can't have nice things.

    • bigfishrunning 9 hours ago ago

      > Making a game or professional software in it essentially equals to publishing full source and assets online, ripe for taking by any unscrupulous party.

      How is this true? It seems to me that WebAssembly is roughly equivalent to the output you'd get from a disassembler for an x86 native program: sure, it's editable, but it's certainly not equivalent to the original source used to produce it.

      To put it another way -- Webassembly encourages theft exactly as much as any other kind of DRM-free publishing; and you can add anti-piracy measures to it in the same way you can with other software.

  • shevy-java 15 hours ago ago

    I am kind of disappointed with regard to WebAssembly.

    There were several articles that promoted it heavily - aka the hype phase.

    And then ... nothing really materialized. If you look at, for instance, Ruby WASM, https://github.com/ruby/ruby.wasm - there is virtually zero real documentation. Granted, this is a specific problem of Ruby, and Japanese devs not writing much in English; but when you search for WebAssembly, contrast it with the numerous tutorials we have for HTML, CSS and JavaScript. I get it, it is younger, it is harder than the other three tech stacks, but virtually nothing really improves here. It feels like a stillborn technology that has only a tiny niche, e.g. Rust developers. That's about it. And I fear this is also not going to change anymore. After a while, if the hype fails to deliver, people lose interest, and a technology eventually subsides. That also happened to, e.g., XHTML and the heavy use of XML in general around 2000. I also don't think WebAssembly can be brought back now that the hype stage has passed.