XSLT RIP

(xslt.rip)

693 points | by edent 3 days ago ago

469 comments

  • koito17 3 days ago ago

    I was hoping the site itself would be an XML document. Thankfully, it is an XML document.

      % curl https://xslt.rip/
      <?xml version="1.0" encoding="UTF-8"?>
      <?xml-stylesheet href="/index.xsl" type="text/xsl"?>
      <html>
        <head>
          <title>XSLT.RIP</title>
        </head>
        <body>
          <h1>If you're reading this, XSLT was killed by Google.</h1>
          <p>Thoughts and prayers.</p>
          <p>Rest in peace.</p>
        </body>
      </html>
    • ktpsns 3 days ago ago

      This is actually a clever way to distinguish if the browser supports XSLT or not. Actual content is XHTML in https://xslt.rip/index.xsl
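
      The mechanism can be sketched like this — a hypothetical, minimal version of what an index.xsl for such a page might look like (the element names and content are illustrative, not the site's actual stylesheet):

        <?xml version="1.0" encoding="UTF-8"?>
        <xsl:stylesheet version="1.0"
            xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
          <!-- Browsers with XSLT apply this template and render the
               real page; browsers without it ignore the stylesheet PI
               and show the raw-XML "killed by Google" fallback. -->
          <xsl:template match="/html">
            <html>
              <head><title>XSLT.RIP</title></head>
              <body>
                <h1>XSLT works in this browser.</h1>
                <!-- actual XHTML content goes here -->
              </body>
            </html>
          </xsl:template>
        </xsl:stylesheet>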

      The author is a frontend designer and has a nice website, too: https://dbushell.com/

      I like the personal, individual style of both pages.

      • konimex 3 days ago ago

        > https://dbushell.com

        Heh, I honestly thought the domain name stood for "D-Bus Hell" and not their own name.

        • bpoyner 3 days ago ago

          Chuckling at the disclaimer 'No AI made by a human.' I doubt many web devs could tell you that because so many use AI now. I was speaking with a web dev this summer and he told me AI made him at least twice as productive. It's an arms race to the bottom imo.

          • Cthulhu_ 3 days ago ago

            Which begs the question, are people consciously measuring their productivity? If so, how? And did they do it the same way before and after using AI tooling?

            Anecdotal, but I don't measure my productivity, because it's immeasurable. I don't want to be reduced to lines of code produced or JIRA tickets completed. We don't even measure velocity, for that matter. Plus when I do end up with a task that involves writing something, my productivity depends entirely on focus, energy levels and motivation.

          • tracker1 3 days ago ago

            I tried Github CoPilot for about a year, I may try Claude for a bit sooner or later.

            It felt like it got in the way about half the time. The only place I really liked it was for boilerplate SQL code... when I was generating schema migration files, it did pretty good at a few things based on what I was writing. Outside that, I don't feel like it helped me much.

            For the Google search results stuff, Gemini, I guess... It's hit or miss... sometimes you'll get a function or few things that look like they should work, but no references to the libraries you need to install/add and even then may contain errors.

            I watched a friend who is really good with the vibe coding thing, but it just seemed like a frustrating exercise in feeding it the errors/mistakes and telling it to fix them. It's like having a brilliant 10yo with ADD for a jr developer.

            • DANmode 3 days ago ago

              …that never stops asking you for more work.

              And doesn’t bother you when the tab is closed.

              I can see why a lot of high school and college kids are going to need to claw.

              • tracker1 a day ago ago

                The issue is that you can't give AI a task, let it go off and actually perform said task in a couple of hours, and then have it come ask for more... you have to pretty much babysit it.

                Now, I could see a single person potentially managing 2-3 AI sessions across different projects as part of a larger application. Such as a UI component/section along with one or more backend pieces. But then, you're going to need 2-3x the AI resources, network use, etc. Which is something I wouldn't mind experimenting with on someone else's dime.

                • DANmode a day ago ago

                  Are you using project architecture, and rules documents?

          • jeltz 3 days ago ago

            One of the only studies made so far showed lower actual productivity despite higher self-reported productivity. That study was quite limited, but I would take self-reported productivity with a huge grain of salt.

          • rekabis 3 days ago ago

            > he told me AI made him at least twice as productive.

            He’s not only lying to you, he’s also lying to himself.

            Recent 12-month studies show that less than 2% of AI users saw an increase in work velocity, and those were only the very top-skilled workers. Projections also indicated that of the other 98%, over 90% will never work faster with AI than without, no matter how long they work with it.

            TL;DR: the vast majority of people will only ever be slower with AI, not faster.

        • andrelaszlo 3 days ago ago

          I guess we've both been traumatized by modern linux distros?

        • leberknecht 3 days ago ago

          For those wondering: no, it's not "d-bus hell" ^^

          • cachius 3 days ago ago

            It's David Bushell

        • egorfine 3 days ago ago

          Same here. This is f..n hilarious.

        • alsotang 3 days ago ago

          funny segmentation.

      • qrios 3 days ago ago

        > This is actually a clever way to distinguish if the browser supports XSLT or not. Actual content is XHTML in https://xslt.rip/index.xsl

        I agree it is a clever way. But it also shows exactly how hard it is to use XML and XSLT in a "proper way": formally, everything is fine done this way (except that the server sends 'content-type: application/xml' for /index.xsl; it should be 'application/xslt+xml').

        Almost all implementations in XML and XSLT that I have seen in my career showed a nearly complete lack of understanding of how they were intended to be used and how they should work together: starting with completely pointless key/value XMLs (I'm looking at you, Apple and Nokia), through call-template orgies (IBM), to ‘yet-another-element-open/-close’ implementations (almost every in-house application written in PHP, Java or .NET).

        I started using XSLT before the first specification had been published. Initially, I only used it in the browser. Years later, I was able to use XSLT to create XSDs and modify them at runtime.

    • blablabla123 3 days ago ago

      To me, XSLT came with a flood of web complexity that led to having effectively only 2 possible web browsers. It seems a bit funny, because the website looks like it's straight out of the '90s, when "everything was better".

      • mrguyorama 3 days ago ago

        But this is wrong.

        It was not rendering that killed other browsers. Rendering isn't the hard part. Getting most of rendering working gets you about 99% of the internet working.

        The hard part, the thing that killed alternative browsers, was javascript.

        React came out in 2013, and everyone was already knee-deep in earlier generation javascript frameworks by then. Google had already shipped the V8 engine in 2008, which brought the sluggish web back to some sense of usable. Similarly, Mozilla had to spend that decade engineering its javascript engine to claw itself out of the "Firefox is slow" gutter people insisted it was in.

        Which is funny, because if you had adblock, I'm not convinced Firefox was ever slow.

        A modern web browser doesn't JUST need to deal with rendering complexity, which is manageable and doable.

        A modern web browser has to do that AND spin up a JIT compiler engineering team to rival the best teams behind V8 or the JVM. There's also no partial credit, as javascript is used for everything.

        You can radically screw up rendering a page and it will probably still be somewhat usable to a person. If you get something wrong about javascript, the user cannot interact with most of the internet. If you get it 100% right and it's just kind of slow, it is "unusable".

        Third party web browsers were still around when HTML5 was just an idea. They died when React was a necessity.

        • MrJohz 3 days ago ago

          Conveniently, all three of the major JS engines can be extracted from the browsers they are developed for and used in other projects. Node famously uses V8, Bun uses JavaScriptCore (WebKit's engine), and Servo I believe embeds SpiderMonkey.

          If you want to start a new browser project, and you're not interested in writing a JS engine from scratch, there are three off-the-shelf options there to choose from.

        • deafpolygon 2 days ago ago

          This tracks: most simpler browsers run great until anything more than basic JS is introduced. Then they slow to a crawl.

      • api 3 days ago ago

        I have the same mixed feelings. Complexity is antidemocratic in a sense. The more complex a spec gets the fewer implementations you get and the more easily it can be controlled by a small number of players.

        It’s the extend part of embrace, extend, extinguish. The extinguish part comes when smaller and independent players can’t keep up with the extend part.

        A more direct way of saying it is: adopt, add complexity cost overhead, shake out competition.

        • FredPret 3 days ago ago

          This is also the argument against overregulation.

          A little bit can be very good, a lot can strangle everyone but the biggest players

          • api 3 days ago ago

            Yes, it is. Complexity is a regressive tax.

      • varjag 3 days ago ago

        We can only thank the millennials for killing the whole XML tech stack for good. That and the blood diamond industry.

        • layer8 3 days ago ago

          It's far from dead, though. XML is deeply ingrained in many industries and stacks, and will remain so for decades to come, probably until something better than JSON comes along.

          • varjag 3 days ago ago

            Yes, kind of like COBOL. Dead.

            • layer8 3 days ago ago

              You have no idea. New projects with XML-based formats and interfaces are being implemented all the time. XML isn’t going anywhere.

              • varjag 2 days ago ago

                There was fresh COBOL code written up until the early 1990s too, long past its heyday.

                Thing is, you couldn't swing a dead cat in the '00s without hitting XML. Nearly every job opening had XML listed in the requirements. But since the mid-2010s you can live your entire career without needing to work on anything XML-related.

                • latexr 2 days ago ago

                  Apple’s operating systems to this day make heavy use of XML by way of plists. You can’t have an app without it.

                  https://en.wikipedia.org/wiki/Property_list
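
                  For the unfamiliar, a plist is just an XML document. A minimal launchd agent, for example, looks roughly like this (label and program are hypothetical):

                    <?xml version="1.0" encoding="UTF-8"?>
                    <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
                      "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
                    <plist version="1.0">
                    <dict>
                      <!-- reverse-DNS label identifying the job -->
                      <key>Label</key>
                      <string>com.example.hello</string>
                      <!-- program to run, and whether to start it at load -->
                      <key>ProgramArguments</key>
                      <array>
                        <string>/usr/bin/true</string>
                      </array>
                      <key>RunAtLoad</key>
                      <true/>
                    </dict>
                    </plist>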

                  • varjag 2 days ago ago

                    I guess? Although in the few iOS/watchOS apps I made I never edited it manually.

                    • latexr 2 days ago ago

                      But it’s still there and needs to be supported by the OS and tooling. Wether you edit it manually isn’t relevant (and as counterpoint, I do it all the time, for both apps and launchd agents).

                      • varjag 2 days ago ago

                        Of course it's there. Expecting all the stuff laid down in the '00s to disappear overnight would be unrealistic.

                        COBOL code is also still there.

        • efreak 3 days ago ago

          There's still EPUB and tons of other standards built on XML and XHTML. Ironically, the last EPUB file I downloaded, a comic book from Humble Bundle, had a 16 MB CSS file composed entirely of duplicate definitions of the same two styles, none of which was needed at all (set each page and image to the size of the image itself, basically).

        • grishka 3 days ago ago

          On the web. I, among other things, make Android apps, and Android and XML are one and the same. There is no such thing as Android development without touching XML files.

          • varjag 2 days ago ago

            I did the Android Developer Challenge back in 2008 and honestly don't remember doing that much XML. Although it is technology from peak-XML days, so perhaps you're right.

          • spixy 3 days ago ago

            Flutter? React Native? Maui?

            • grishka 3 days ago ago

              None of that is what I would call "Android development".

              But even if you use one of those terrible technologies, your app still needs a manifest and some native resources.

        • MetroWind 2 days ago ago

          RSS, MusicXML, SVG, Docbook, Epub, JATS, XMP, ...

          Sorry, web frontend is not the "whole XML tech stack", despite popular belief.

          And yes all of the above are mainstream in their respective industry.

        • data-ottawa 3 days ago ago

          Some of it deserved to die, mostly because it was misused.

          I don’t know how many times I had to manually write <![CDATA[ … ]]>

          I know all markup languages have their quirks, but XML could become impressively complex and inscrutable.
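
          For context, CDATA sections exist so literal markup characters don't need entity-escaping; compare these two (hypothetical) fragments:

            <!-- without CDATA, markup characters must be escaped: -->
            <code>if (a &lt; b &amp;&amp; c &gt; 0) { run(); }</code>

            <!-- with CDATA, everything is taken literally until ]]> : -->
            <code><![CDATA[
            if (a < b && c > 0) { run(); }
            ]]></code>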

          • shadowgovt 3 days ago ago

            It has, I think, one nice feature that few markups I use these days have: every node is strongly-typed, which makes things like XSLT much cleaner to implement (you can tell what the intended semantics of a thing is so you aren't left guessing or hacking it in with __metadata fields).

            ... but the legibility and hand-maintainability was colossally painful. Having to tag-match the closing tags even though the language semantics required that the next closing tag close the current context was an awful, awful amount of (on the keyboard) typing.

    • shiomiru 3 days ago ago

      Ironically, that text is all you get if you load the site from a text browser (Lynx etc.). It doesn't feel too different from <noscript>This website requires JavaScript</noscript>...

      I now wonder if XSLT is implemented by any browser that isn't controlled by Google (or derived from one that is).

      • le-mark 3 days ago ago

        > now wonder if XSLT is implemented by any browser that isn't controlled by Google (or derived from one that is).

        Edge IE 11 mode is still there for you. Which also supports IE 6+ like it always did, presumably. They didn’t reimplement IE in Edge; IE is still there. Microsoft was all in on xml technologies back in the day.

      • auscompgeek 3 days ago ago

        Firefox hasn't removed XSLT support yet.

        • shiomiru 3 days ago ago

          I should've worded it differently. By the narrative of this website, Google is "paying" Mozilla & Apple to remove XSLT, thus they are "controlled" by Google.

          I personally don't quite believe it's all that black and white, just wanted to point out that the "open web" argument is questionable even if you accept this premise.

      • layer8 3 days ago ago

        I suspect that it wouldn't actually be that difficult to add XSLT support to a textmode browser, given that XSLT libraries exist and that XSLT in the browser is a straightforward application of it. They just haven't bothered with it.

      • StilesCrisis 3 days ago ago

        The page works fine in Mobile Safari.

      • meindnoch 3 days ago ago

        Opera.

  • gucci-on-fleek 3 days ago ago

    I'm strongly against the removal of XSLT support from browsers—I use both the JavaScript "XSLTProcessor" functions [0] and "<?xml-stylesheet …?>" [1] on my personal website, I commented on the original GitHub thread [2], and I use XSLT for non-web purposes [3].

    But I think that this website is being hyperbolic: I believe that Google's stated security/maintenance justifications are genuine (but wildly misguided), and I certainly don't believe that Google is paying Mozilla/Apple to drop XSLT support. I'm all in favour of trying to preserve XSLT support, but a page like this is more likely to annoy the decision-makers than to convince them to not remove XSLT support.

    [0]: https://www.maxchernoff.ca/tools/Stardew-Valley-Item-Finder/

    [1]: https://www.maxchernoff.ca/atom.xml

    [2]: https://github.com/whatwg/html/pull/11563#issuecomment-31909...

    [3]: https://github.com/gucci-on-fleek/lua-widow-control/blob/852...

    • coldtea 3 days ago ago

      >I use both the JavaScript "XSLTProcessor" functions [0] and "<?xml-stylesheet …?>" [1] on my personal website

      You are on some very very small elite team of web standards users then

      • ndriscoll 3 days ago ago

        Small, sure, but not elite. xml-stylesheet is by far the easiest way to make a simple templated website full of static pages. You almost could not make it any simpler.
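
        As a sketch of that approach (filenames hypothetical): each page is a small XML file pointing at one shared stylesheet, and the browser does the templating.

          <?xml version="1.0" encoding="UTF-8"?>
          <?xml-stylesheet href="/template.xsl" type="text/xsl"?>
          <page>
            <title>Hello, world</title>
            <body>Only the data lives here; the shared template.xsl
            supplies the layout for every page.</body>
          </page>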

      • einpoklum 3 days ago ago

        FYI: Many Firefox and Thunderbird extensions use <?xml-stylesheet?> . Perhaps not XSLTProcessor though.

        • mimasama 2 days ago ago

          WebExtensions still have them? I thought the move to HTML (for better or worse) would've killed that. Even install.rdf got replaced IIRC, so there shouldn't be many traces of XML left in the new extensions system...

    • bazoom42 3 days ago ago

      Can’t you just do the xslt transformation server-side? Then you can use the newest and best xslt tools, and the output will work in any browser, even browsers that never had any built-in xslt support.

      • gucci-on-fleek 3 days ago ago

        > Can't you just do the xslt transformation server-side?

        For my Atom feed, sure. I'm already special-casing browsers for my Atom feed [0], so it wouldn't really be too difficult to modify that to just return HTML instead. And as others mentioned, you can style RSS/Atom directly with CSS [1].
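
        For reference, styling a feed with CSS only needs a different stylesheet PI on the feed itself (href hypothetical):

          <?xml version="1.0" encoding="UTF-8"?>
          <?xml-stylesheet href="/feed.css" type="text/css"?>
          <feed xmlns="http://www.w3.org/2005/Atom">
            <title>Example feed</title>
            <!-- entries are styled directly by feed.css; no transform -->
          </feed>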

        For my Stardew Valley Item Finder web app, no. I specifically designed that web app to work offline (as an installable PWA), so anything server-side won't work. I'll probably end up adding the JS/wasm polyfill [2] to that when Chrome finally removes support, but the web app previously had zero dependencies, so I'm a little bit annoyed that I'll have to add a 2MB dependency.

        [0]: https://github.com/gucci-on-fleek/maxchernoff.ca/blob/8d3538...

        [1]: https://news.ycombinator.com/item?id=45874305

        [2]: https://github.com/mfreed7/xslt_polyfill

      • mmis1000 3 days ago ago

        That is actually Mozilla's stance in the linked issue, except the replacement runs on the client. They would rather replace it with a non-native implementation (so there are no surprising security issues anymore) if removing it outright is impractical.

        There is actually an example of such a situation: Mozilla removed the Adobe PDF plugin a long time ago and replaced it with pdf.js. It's still a slight performance regression for very large PDFs, but it is enough for most use cases.

        But the bottom line is "it's actually worth doing because people are using it". They won't actively support a feature that few people use, because they don't have the people to support it.

        • wombatpm 3 days ago ago

          > They won't actively support a feature that few people use, because they don't have the people to support it.

          Companies always cut too deep. If only they were making enough money to properly support Chrome.

          /sarcasm

      • Fileformat 3 days ago ago

        On my blog that uses a static site generator?

        • crazygringo 3 days ago ago

          Yes, it would be part of the static site generator.

          • Fileformat 3 days ago ago

            Huh? How would a static site generator serve both RSS and the HTML view of the RSS from the same file?

            To be extra clear: I want to have <a href="feed.xml">My RSS Feed</a> link on my blog so everyone can find my feed. I also want users who don't know about RSS to see something other than a wall of plain-text XML.

            • crazygringo 3 days ago ago

              You don't serve them from the same file. You serve them from separate files.

              As I mention in my other comment to you, I don't know why you want an RSS file to be viewable. That's not an expected behavior. RSS is for aggregators to consume, not for viewing.

            • FateOfNations 3 days ago ago

              Technically, the web server can do content negotiation based on Accept headers with static files. But in theory, you shouldn't need a direct link to the RSS feed on your web page. Most feed readers support a link-alternate in the HTML header:

              <link rel="alternate" type="application/rss+xml" title="Blog Posts" href="/feed.xml">

              Someone who wants to subscribe can just drop example.com/blog into the feed reader and it will do the right thing. The "RSS Feed" interactive link could then go to an HTML web page with instructions for subscribing and/or a preview.

            • Too 2 days ago ago

              Apply that argument to any other file format and it quickly becomes absurd.

    • f33d5173 3 days ago ago

      >But I think that this website is being hyperbolic

      Intentionally, in a humorous way, yes.

      • glenstein 3 days ago ago

        I think also literally, independent of the cheeky tone.

        Where it lost me was:

        >RSS is used to syndicate NEWS and by killing it Google can control the media. XSLT is used worldwide by multiple government sites. Google are now trying to control LEGISLATION. With these technologies removed what is stopping Google?

        I mean, yes, Google lobbies, and certainly can lobby for bad things. And though I personally didn't know much of anything about XSLT, from reading a bit about it I am certainly ready to accept the premise that we want it. But... is Google lobbying for an XSLT law? Does "control legislation" mean deprecate a tool for publishing info on government sites?

        I actually love the cheeky style overall, and would say it's a brilliant signature style to get attention, but I think implying this is tied to a campaign to control laws is rhetorical overreach even by its own intentionally cheeky standards.

        • Vinnl 3 days ago ago

          I think the reason you're considering it rhetorical overreach is because you're taking it seriously. If the author doesn't actually mind the removal of XSLT support (i.e. possibly rues its removal, but understands and accepts the reasons), then it's really a perfectly fine way to just be funny.

        • degamad 3 days ago ago

          > Does "control legislation" mean deprecate a tool for publishing info on government sites?

          I believe the intended meaning, in context, is "... for publishing the literal text of laws on government sites".

          • glenstein 3 days ago ago

            Right, my quote and your clarification are saying the same thing (at least that's what I had in mind when I wrote that).

            But that leaves us back where we started, because characterizing that as "control the laws" is an instance of the rhetorical overreach I'm talking about, strongly implying something like literal control over the policy-making process.

            • necovek 3 days ago ago

              Laws that are designed to help you but you can't easily access, or laws that are designed to control/restrict you and that get shoved in your face: once you manage "consumption" of laws, you can push your agenda too.

              At least, this is how I read that part.

              • glenstein 3 days ago ago

                I agree that you would have to believe something like that to make sense of what it's implying. But by the same token, that very contention is so implausible that it's exactly what makes this rhetorical overreach.

                It would be ridiculous to suggest that anyone's access to published legislation would be threatened by its deprecation.

                This is probably the part where someone goes "aha, exactly! That's why it's okay to be deprecated!" Okay, but the point was supposed to be what would a proponent of XSLT mean by this that wouldn't count as them engaging in rhetorical overreach. Something that makes the case against themselves ain't it.

        • idatum 3 days ago ago

          > actually love the cheeky style overall

          Also towards the bottom of the site:

          > Tell your friends and family about XSLT.

          It's hard enough telling them to also get off Instagram and WhatsApp and switch to Signal to maintain privacy. I'm going to have a hard time explaining what XSLT is!

    • littlestymaar 3 days ago ago

      > but a page like this is more likely to annoy the decision-makers than to convince them to not remove XSLT support.

      You cannot “convince decision-makers” with a webpage anyway. The goal of this one is to raise awareness on the topic, which is pretty much the only thing you can do with a mere webpage.

      • bawolff 3 days ago ago

        For some reason people seem to think raising awareness is all you need to do. That only works if people already generally agree with you on the issue. Want to save endangered animals? Raising awareness is great. However, if you're on an issue where people are generally aware but unconvinced, raising more awareness does not help. Having better arguments might.

        • glenstein 3 days ago ago

          >For some reason people seem to think raising awareness is all you need to do.

          I guess I'm not seeing how that follows. It can still be complementary to the overall goal rather than a failure to understand the necessity of persuasion. I think the needed alchemy is a serving of both, and I think it actually is trying to persuade at least to some degree.

          I take your point with endangered animal awareness as a case of a cause where more awareness leads to diminishing returns. But if anything, that serves to emphasize how XSLT is, by contrast, nowhere near the "save the animals" level of oversaturation. Because "save the animals" (in some variation) is on the bumper sticker of at least one car in any grocery store parking lot, and I don't think XSLT is close to that.

          • CamouflagedKiwi 3 days ago ago

            I think it's the other way around. Simply raising awareness about endangered animals may be enough to gain traction since many/most people are naturally sympathetic about it. Conversely, XSLT being deprecated has lower awareness initially, but when you raise it many people hearing that aren't necessarily sympathetic - I don't think most engineers think particularly fondly about XSLT, my reaction to it being deprecated is basically "good riddance, I didn't think anyone was really using it in browsers anyway".

            • bawolff 3 days ago ago

              As an open source developer, I also have a lot of sympathy for Google in this situation. Having a legacy feature hold the entire project back, despite almost nobody using it, because the tiny fraction that do are very vocal and think it's fine to be abusive to developers to get what they want, despite the fact it's free software they didn't pay a dime for, is something I think a lot of open source devs can sympathize with.

              • necovek 3 days ago ago

                I think all that you say applies to a random open source project done by volunteer developers, but really doesn't in the case of Google.

                Google has used its weight to build a technically better product, won the market, and is now driving the whole web platform forward the way it likes.

                This has nothing to do with the cost of maintaining the browser for them.

                • CamouflagedKiwi 3 days ago ago

                  It seems likely to me that it is about the 'cost' - not literally monetary cost but one or two engineers periodically have to wrangle libxslt for Chrome and they think it's a pain in the ass and not widely used, and are now responding by saying "What if I didn't have to deal with this any more".

                  I'm not sure what else it would be about - I don't see why they would especially care about removing XSLT support if cost isn't a factor.

                • bawolff 3 days ago ago

                  Google is still made up of people, who work a finite number of hours in a day, and maybe have other things they want to spend their time on than maintaining legacy cruft.

                  There is this weird idea that wealthy people & corporations aren't like the rest of us, and no rules apply to them. And to a certain extent it's true that things are different if you have that kind of wealth. But at the end of the day, everyone is still human, and the same restrictions still generally apply. At most they are just pushed a little further out.

                  • necovek 3 days ago ago

                    My comment is not about that at all: it's a response to the claim that Google's engineering team is feeling the heat just like any other free software project, and that we should therefore be sympathetic to them.

                    I am sure they've got good reasons to want to do this: having the same problems as an unstaffed open source project fielding vocal user requests is not one of them.

            • glenstein 3 days ago ago

              >I think it's the other way around. Simply raising awareness about endangered animals may be enough to gain traction since many/most people are naturally sympathetic about it.

              You're completely right in your literal point quoted above, but note what I was emphasizing. In this example, "save the animals" was offered as an example of a problem oversaturated in awareness to the point of diminishing returns. If you don't think animal welfare illustrates that particular idea, insert whatever your preferred example is: Free Tibet, stop the diamond trade, don't eat too much sodium, Nico Harrison shouldn't be a GM in NBA basketball, etc.

              I think everyone on all sides agrees with these messages and agrees that there's value in broadcasting them up to a point, but beyond that it becomes not an issue of awareness but of the willpower of the relevant actors.

              You also may well be right that developers would react negatively; honestly, I'm not sure. But the point here was supposed to be that this page's author wasn't making the mistake of strategically misunderstanding the point about oversaturating an audience with a message. Though perhaps they made the mistake of thinking they would reach a sympathetic audience.

        • littlestymaar 3 days ago ago

          > For some reason people seem to think raising awareness is all you need to do.

          I don't think many do.

          It's just that raising awareness is the first step (and likely the only one you'll ever see anyway, because for most topics you aren't in a position where convincing *you* in particular has any impact).

          • bawolff 3 days ago ago

            Convincing me personally does not have any impact. Convincing people like me, en masse, does.

            • littlestymaar 3 days ago ago

              A mass doesn't move because it's convinced (i.e. rationally) of something, but because they are emotionally impacted.

              Rational arguments come later, and mostly behind closed doors.

              • bawolff 3 days ago ago

                Sure, but translating that movement into actual policy change usually depends on how sympathetic uninvolved people are to the protestors, which usually involves how rational the protestors are perceived as being. Decision makers are affected by public sentiment, but the sentiment of the uninvolved public generally carries more weight.

                That's why the other side usually tries to smear protests as crazy mobs who would never be happy. The moment you convince uninvolved people of this, the protestors lose most of their power.

                > Rational arguments come later, and mostly behind closed doors.

                I disagree with this. Rational arguments behind closed doors happen before resorting to protest, not after. If you're resorting to protest, you are trying to leverage public support into a more powerful position. That's about how much power you have, not the soundness of your argument.

                • littlestymaar 3 days ago ago

                  > Sure, but translating that movement to actual policy change usually depends on how much uninvolved people are sympathetic to the protestors

                  No, that's the exception rather than the rule. That's a convenient thing to teach the general public, and it's why people like MLK Jr. and Gandhi are celebrated, but most movements that achieve actual policy changes do so while disregarding bystanders entirely (or even actively hurting them; that's why terrorism, very unfortunately, is effective in practice).

                  > which usually involves how rational the protestors are precieved as

                  I'm afraid most people don't really care about how rational anyone is perceived as being. Trump wouldn't have been elected twice if that were the case.

                  > Decision makers are affected by public sentiment, but public sentiment of the uninvolved public generally carries more weight.

                  They only care about the sentiment of the people that can cause them nuisance. A big crowd of passively annoyed people will have much less bargaining power than a mob of angry male teenagers doxxing and mailing death threats: see the gaming industry.

                  > I disagree with this. Rational arguments behind closed doors happen before resorting to protest not after.

                  Bold claim that contradicts the entire history of social conflicts…

              • jeltz 3 days ago ago

                My emotional response to XSLT being removed was: "finally!". You would need some good arguments to convince me that, despite my emotions applauding this decision, it is actually a bad thing.

                • littlestymaar 3 days ago ago

                  You're simply not a good target to advocate to on this particular topic. And it's fine, actually.

      • ludicrousdispla 3 days ago ago

        >> You cannot “convince decision-makers” with a webpage anyway.

        They should probably be called "decision-maders"

    • IshKebab 3 days ago ago

      > but wildly misguided

      Why? Last time this came up the consensus was that libxslt was barely maintained, never intended to be used in a secure context, and full of bugs.

      I'm fully in favour of removing such insecure features that barely anyone uses.

      I think if the XSLT people really wanted to save it the best thing to do would have been to write a replacement in Rust. But good luck with that.

      • gucci-on-fleek 3 days ago ago

        > Last time this came up the consensus was that libxslt was barely maintained and never intended to be used in a secure context and full of bugs.

        Sure, I agree with you there, but removing XSLT support entirely doesn't seem like a very good solution. The Chrome developer who proposed removing XSLT developed a browser extension that embeds libxslt [0], so my preferred solution would be to bundle that by default with the browser. This would:

        1. Fix any libxslt security issues immediately, instead of leaving it enabled for 18 months until it's fully deprecated.

        2. Solve any backwards compatibility concerns, since it's using the exact same library as before. This would avoid needing to get "consensus" from other browser makers, since they wouldn't be removing any features.

        3. Be easy and straightforward to implement and maintain, since the extension is already written and browsers already bundle some extensions by default. Writing a replacement in Rust/another memory-safe language is certainly a good idea, but this solution requires far less effort.

        This option was proposed to the Chrome developers, but was rejected for vague and uncompelling reasons [1].

        > I think if the XSLT people really wanted to save it the best thing to do would have been to write a replacement in Rust.

        That's already been done [2], but maintaining that and integrating it into the browsers is still lots of work, and the browser makers clearly don't have enough time/interest to bother with it.

        [0]: https://github.com/mfreed7/xslt_extension

        [1]: https://github.com/whatwg/html/issues/11523#issuecomment-315...

        [2]: https://gitlab.gnome.org/World/Rust/markup-rs/xrust

        • chrismorgan 3 days ago ago

          From your [1] “rejected for vague and uncompelling reasons”:

          >>> To see how difficult it would be, I wrote a WASM-based polyfill that attempts to allow existing code to continue functioning, while not using native XSLT features from the browser.

          >> Could Chrome ship a package like this instead of using native XSLT code, to address some of the security concerns? (I'm thinking about how Firefox renders PDFs without native code using PDF.js.)

          > This is definitely something we have been thinking about. However, our current feeling is that since the web has mostly moved on from XSLT, and there are external libraries that have kept current with XSLT 3.0, it would be better to remove 1.0 from browsers, rather than keep an old version around with even more wrappers around them.

          The bit that bothers me is that Google continue to primarily say they’re removing it for security reasons, although they have literally made a browser extension which is a drop-in replacement and removes 100% of the security concerns. The people that are writing about the reasons know this (one of them is the guy that wrote it), which makes the claim a blatant lie.

          I want people to call Google specifically out on this (and Apple and Mozilla if they ever express it that way, which they may have done but I don’t know): their “security” argument is deceit, trickery, dishonesty, a grossly misleading, bald-faced lie. If they said they want to remove it because barely anyone uses it and it will shrink their distribution by one megabyte, I would still disagree, because I value the ability to apply XSLT to feeds and other XML documents (my Atom and RSS feed stylesheets are the most comprehensive I know of), but I would at least listen to such honest arguments. But falsely hiding behind “security”? I impugn their honour.

          (If their extension is not, as their descriptions have implied, a complete, drop-in replacement with no caveats, I invite correction and may amend my expressed opinion.)

          • surajrmal 3 days ago ago

            You still need to maintain that sandbox. Ultimately no one wants to spend energy maintaining software that isn't used very heavily. That's why feature deprecation happens. If someone cares enough, they should step in and offer to take over long-term maintenance and fix the problems. Ideally a group of people, and perhaps more ideally a group with some financial backing (e.g. a company); otherwise it may be difficult to trust that they will live up to the commitment.

            Even projects like Linux deprecate old, underused features all the time. At least the Internet has real metrics about API usage, which allow for making informed decisions. Folks describing how they are part of that small fraction of users doesn't really change the data. What's also interesting is that a very similar group of people seem to lament how it's impossible to write a new browser these days because there are too many features to support.

            • svieira 3 days ago ago

              "The sandbox" in this case is their ability to execute WASM securely. It's a necessary part of the "modern" web. If they were planning on also nuking WASM from orbit because it couldn't be made secure, this would be another topic entirely. There's nothing they're maintaining just-for-xslt-1.0-support beyond a simple build of libxslt to WASM, a copy block in their build code, and a line in a JSON list to load WASM provided built-ins (which they would want anyway for other code).

          • IshKebab 3 days ago ago

            I think their logic makes sense. They're removing support because of security concerns, and they're not adding support back using an extension because approximately nobody uses this feature.

            Adding the support back via an extension isn't cost free.

            • chrismorgan 3 days ago ago

              I suppose that’s a legitimate framing. But I will still insist that, at the very least, their framing is deliberately misleading, and that saying “you can’t have XSLT because security” is dishonest.

              But when it “isn’t cost-free”… they’ve already done 99.9% of the work required (they already have the extension, and I believe they already have infrastructure to ship built-in functionality in the form of Web Extensions—definitely Firefox does that), and I seem to recall hearing of them shifting one or two things from C/C++ to WASM before already, so really it’s only a question of whether it will increase installer/installed size, which I don’t know about.

              • IshKebab 2 days ago ago

                According to the extension's README there are still issues with it, so they definitely would have to do more work.

                And yeah Chrome is really strict about binary size these days. Every kB has to be justified. It doesn't support brotli compression because it would have added like 16kB to the binary size.

          • arccy 3 days ago ago

            an insecure mess contained in a sandbox is still an insecure mess

            it just has slightly less chance of affecting something else

            • lunar_mycroft 3 days ago ago

              "Affecting something else" (i.e. escaping the sandbox) is the core issue. JavaScript (and WASM) engines have to be designed to defend against the user running outright malicious scripts, without those scripts being able to gain access to the rest of the browser or the host system. By comparison, potentially exploitable but non-malicious, messy code is basically a non-issue. Any attacker who found a bug in a sandboxed XSLT polyfill that allowed them to escape the sandbox or do anything else malicious would be able to just ship the same code to the browser themselves to achieve the same effect.

      • Klonoar 3 days ago ago

        The easier thing might have been for Chrome & co. to include the necessary polyfills in JS bundled with the browser, instead of creating an odd situation where things just break.

        I think you can recognize that the burden of maintaining a proven security nightmare is annoying while simultaneously being annoyed at them for over-grabbing on this.

      • rhdunn 3 days ago ago

        libxslt != XSLT.

        It's like removing JPEG support because libjpeg is insecure!

        • jeltz 3 days ago ago

          Which would be a totally sensible thing to do. Especially if JPEG were a rarely used image format with few libraries supporting it, the main one being unmaintained.

          • nflekkhnnn 3 days ago ago

            Google is on a trajectory to replace JPEG with WebP, haven't you noticed?

        • TingPing 3 days ago ago

          If this were true you could fix this today with the other library. That library is the only implementation used and its features are relied upon.

          • chrismorgan 3 days ago ago

            Firefox doesn’t use libxslt. I presume IE didn’t either. It’s only WebKit-heritage browsers that use libxslt.

            • TingPing 3 days ago ago

              TIL about Firefox. They have their own in-tree solution. Very interesting but not trivial to use for external projects.

      • panny 3 days ago ago

        >Last time this came up the consensus was that libxslt was barely maintained and never intended to be used in a secure context and full of bugs.

        Being this is HN, did anyone suggest rewriting it in Rust? :)

      • righthand 3 days ago ago

        There is already a replacement in Rust, but people like you and the Google engineers have ignored that fact. "Good luck" they all say, turning their noses away from reality so they can kill it. Thanks for your support.

  • eftpotrm 3 days ago ago

    I'm aware I'm in a minority, but I find it sad that XSLT stalled and is mostly dead in the market. The amount of effort put into replicating most of the XML+XPath+XSLT ecosystem we had as open standards 25 years ago, using ever-changing libraries with their own host of incompatible limitations rather than improving what we already had, has been a colossal waste of talent.

    Was SOAP a bad system that misunderstood HTTP while being vastly overarchitected for most of its use cases? Yes. Could overuse of XML schemas render your documents unreadable and overcomplex to work with? Of course. Were early XML libraries well designed around the reality of existing programming languages? No. But was JSON's early implementation of 'you can just eval() it into memory' ever good engineering? No, and by the time you've written a JSON parser that beats that, you could've produced an equally improved XML system while retaining the much greater functionality it already had.

    RIP a good tech killed by committees overembellishing it and engineers failing to recognise what they already had over the high of building something else.

    • jeltz 3 days ago ago

      There are still virtually zero good XML parsers but plenty of good JSON parsers, so I do not buy your assertion. Writing a good JSON parser can be done by most good engineers, but I have yet to use a good XML parser.

      This is based on my personal experience of having to parse XML in Ruby, Perl, Python, Java and Kotlin. It is a pain every time; I have run into parser bugs at least twice in my career, while I have never experienced a bug in a JSON parser. Implementing a JSON parser correctly is way simpler. And they are also generally more user-friendly.

      • gwbas1c 3 days ago ago

        Take a look at C# / dotnet. The XML parser that's been around since the early 2000s is awesome, but the JSON libraries are just okay. The official JSON library leaves so much to be desired that the older, 3rd party library is often better.

        • nflekkhnnn 3 days ago ago

          The old and new JSON libs were written by the same person. The newer one is a bit more low-level; that's intentional, as the old one was too bloated.

          • gwbas1c 3 days ago ago

            Oooh, then it makes sense why there isn't a good set of layers:

            XmlReader -> (XmlDocument or XmlSerializer) generally hits all use cases for serialization well. XmlReader is super-low-level streaming, when you need it. XmlDocument is great when you need to reason with Xml as the data structure, and XmlSerializer quickly translates between Xml and data structures as object serialization. There are a few default options that are wrong, but overall the API is well thought out.

            In Newtonsoft I couldn't find a low-level JsonReader; then in System.Text.Json I couldn't find an equivalent of a mutable JObject. Both are great libraries, but neither is comprehensive the way System.Xml is.

      • taeric 3 days ago ago

        JSON parsing is pretty much guaranteed to be a nightmare if you try and use the numeric types. Or if you repeat keys. Neither of which are uncommon things to do.

        My favorite is when people start reimplementing schema ideas in json. Or, worse, namespaces. Good luck with that.
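
        Both failure modes are easy to reproduce with JavaScript's built-in JSON.parse (note that the JSON spec leaves duplicate-key behavior unspecified; keeping the last value is what JavaScript and most other parsers happen to do):

        ```javascript
        // Numeric types: integers beyond 2^53 silently lose precision,
        // because JSON.parse maps every number onto an IEEE-754 double.
        const big = JSON.parse("9007199254740993"); // 2^53 + 1
        console.log(big === 9007199254740992);      // true: off by one

        // Repeated keys: the spec doesn't say what happens;
        // JavaScript silently keeps the last value.
        const dup = JSON.parse('{"a": 1, "a": 2}');
        console.log(dup.a); // 2
        ```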

    • VMG 3 days ago ago

      > by the time you've written a JSON parser that beats that you could've equally produced an equally improved XML system while retaining the much greater functionality it already had.

      Here is where you lose me

      The JSON spec fits on two screen pages https://www.json.org/json-en.html

      The XML spec is a book https://www.w3.org/TR/xml/

      • geocar 3 days ago ago

        > The JSON spec fits on two screen pages https://www.json.org/json-en.html

        It absolutely does not. From the very first paragraph:

        It is based on a subset of the JavaScript Programming Language Standard ECMA-262 3rd Edition - December 1999.

        which is absolutely a book you can download and read here: https://ecma-international.org/publications-and-standards/st...

        Furthermore, JSON has so many dangerously incompatible implementations that the errata for JSON implementations fill multiple books, such as advice to "always" treat numbers as strings, popular datetime "extensions" that know nothing of timezones, and so on.

        > The XML spec is a book https://www.w3.org/TR/xml/

        Yes, but that's also everything you need to know in order to understand XML, and my experience implementing APIs is that every XML implementation is obviously-correct, because anyone making a serious XML implementation has demonstrated the attention span to read a book, while every JSON implementation is going to have some fucking weird thing I'm going to have to experiment with, because the author thought they could "get the gist" from reading two pages on a blog.

        • MrJohz 3 days ago ago

          I think you are misreading the phrase "based on". The author, I believe, intends it to mean something like "descends from", "has its origins in", or "is similar to" and not that the ECMAScript 262 spec needs to be understood as a prerequisite for implementing a JSON parser. Indeed, IIRC the JSON spec defined there differs in a handful of respects from how JavaScript would parse the same object, although these might since have been cleaned up elsewhere.

          JSON as a standalone language requires only the information written on that page.

          • geocar 3 days ago ago

            > JSON as a standalone language requires only the information written on that page.

                JSON.parse("{\"a\":9999999999999999.0}")
            
            Either no browsers implement JSON as written on that page, or you need to read ECMAScript-262 to understand what is going on.
            • MrJohz 2 days ago ago

              Well yes, if you're writing a JSON parser in a language based on ECMAScript-262, then you will need to understand ECMAScript-262 as well as the specification for the language you're working with. The same would also apply if you were writing an XML parser in a language based on ECMAScript-262.

              If you write a JSON parser in Python, say, then you will need to understand how Python works instead.

              In other words, I think you are confusing "json, the specified format" and "the JSON.parse function as specified by ECMAScript-262". These are two different things.

              • geocar 2 days ago ago

                > The same would also apply if you were writing an XML parser in a language based on ECMAScript-262.

                Thankfully XML specifies what a number is, and anything that gets this wrong is not implementing XML. Very simple. No wonder I have fewer problems with people who implement XML.

                > In other words, I think you are confusing "json, the specified format" and "the JSON.parse function as specified by ECMAScript-262". These are two different things.

                I'm glad you noticed that after it was pointed out to you.

                The implications of JSON.parse() not being an implementation of JSON are serious though: If none of the browser vendors can get two pages right, what hope does anyone else have?

                I do prefer to think of them as the same thing, and JSON as more complicated than two pages, because this is a real thing I have to contend with: the number of developers who do not seem to understand that JSON is much, much more complicated than they think.

                • MrJohz 2 days ago ago

                  XML does not specify what a number is, I think you might be misinformed there. Some XML-related standards define representations for numbers on top of what the basic XML spec defines, but that's true of JSON as well (e.g. JSON Schema).

                  If we go with the XML Schema definition of a number (say an integer), then even then we are at the mercy of different implementations. An integer according to the specification can be of arbitrary size, and implementations need to decide themselves which integers they support and how. The specification is a bit stricter than JSON's here and at least specifies a minimum precision that must be supported, and that implementations should clearly document the maximum precisions that they support, but this puts us back in the same place we were before, where to understand how to parse XML, I need to understand both the XML spec (and any additional specs I'm using to validate my XML), plus the specific implementation in the parser.

                  (And again, to clarify, this is the XML Schema specification we're talking about here — if I were to just use an XML-compliant parser with no extensions to handle XSD structures, then the interpretation of a particular block of text into "number" would be entirely implementation-specific.)

                  I completely agree with you that there are plenty of complicated edge cases when parsing both JSON and XML. That's a statement so true, it's hardly worth discussion! But those edge cases typically crop up — for both formats — in the places where the specification hits the road and gets implemented. And there, implementations can vary plenty. You need to understand the library you're using, the language, and the specification if you want to get things right. And that is true whether you're using JSON, XML, or something else entirely.

                • 2 days ago ago
                  [deleted]
        • chrismorgan 3 days ago ago

          > my experience implementing APIs is that every XML implementation is obviously-correct

          This is not my experience. Just this week I encountered one that doesn’t decode entity/character references in attribute values <https://news.ycombinator.com/item?id=45826247>, which seems a pretty fundamental error to me.

          As for doctypes and especially entities defined in doctypes, they’re not at all reliable across implementations. Exclude doctypes and processing instructions altogether and I’d be more willing to go along with what you said, but “obviously-correct” is still too far.

          Past what is strictly the XML parsing layer, in the interpretation of documents, things get worse in a way that they can’t with JSON due to its more limited model: when people use event-driven parsing, or even occasionally when they traverse trees, they very frequently fail to handle reasonable documents, due to things like assuming a single text node, or ignoring the possibility of CDATA or comments.

          • drob518 3 days ago ago

            Exactly. In my experience, XML has thousands of ways to trip yourself while JSON is pretty simple. I always choose JSON APIs over XML if given the choice.

          • geocar 3 days ago ago

            > This is not my experience.

            Try not to confuse APIs that you are implementing for work to make money, with random "show HN AI slop" somebody made because they are looking for a job.

        • VMG 3 days ago ago

          The "References" section of the XML spec is almost longer than the JSON spec itself

          > [...] serious XML implementation [...]

          You are cherry-picking here

        • marcosdumay 3 days ago ago

          > advice to "always" treat numbers as strings

          FFS, have your parser fail on inputs it cannot handle.

          Anyway, the book defining XML doesn't tell you how your parser will handle values you can't represent on your platform either. And it also won't tell you how your parser will read timestamps. Both are completely out of scope there.

          The only common issue in JSON that entire book covers is comments.

          The SOAP specification does tell you how to write timestamps. It's not a single book, and doesn't cover things like platform limitations, or arrays. If you want to compare, OpenAPI's spec fills a booklet:

          https://swagger.io/docs/specification/v3_0/about/

          • vbezhenar 3 days ago ago

            > FFS, have your parser fail on inputs it can not handle.

            I wish browser developers would understand that.

                JSON.parse("9007199254740993") === 9007199254740992
      • josefx 3 days ago ago

        > The JSON spec fits on two screen pages https://www.json.org/json-en.html

        The beloved minimalist spec. No way anything could be wrong with that: https://seriot.ch/projects/parsing_json.html

        Turns out there are at least half a dozen more specs trying and failing to clarify that mess.

      • Mikhail_Edoshin 3 days ago ago

        But the part of XML that is equivalent to JSON is basically five special symbols: angle brackets, quotes and ampersand. Syntax-wise this is less than JSON (and it even has two kinds of quotes). All the rest are extras: grammar, inclusion of external files (with name and position based addressing), things like element IDs and references, or a way to formally indicate that contents of an element are written in some other notation (e. g. "markdown").

      • eftpotrm 3 days ago ago

        Aside from the other commenter's point about this being a misleading comparison, you didn't need to reinvent the whole XML ecosystem from scratch; it was already there and functional. One of the big claims I've seen for JSON, though, is that it has array support, which XML doesn't. That's correct as far as it goes, but it would have been far from impossible to code up a serializer/deserializer that let you treat a collection of identically typed XML nodes as an array. Heck, for all I know it exists; it's not conceptually difficult.
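
        Such a deserializer really is conceptually simple. A toy sketch (operating on a hypothetical pre-parsed `{tag, children, text}` node structure rather than any real DOM or XML library): repeated child elements with the same tag name get promoted to an array.

        ```javascript
        // Toy XML-to-object deserializer: identically named siblings become arrays.
        // "node" is a hypothetical pre-parsed structure, not a real DOM element.
        function xmlNodeToObject(node) {
          if (node.children.length === 0) return node.text ?? "";
          const out = {};
          for (const child of node.children) {
            const value = xmlNodeToObject(child);
            if (child.tag in out) {
              // Second occurrence of the same tag: promote to an array.
              if (!Array.isArray(out[child.tag])) out[child.tag] = [out[child.tag]];
              out[child.tag].push(value);
            } else {
              out[child.tag] = value;
            }
          }
          return out;
        }

        // <order><item>a</item><item>b</item><total>2</total></order>
        const order = {
          tag: "order",
          children: [
            { tag: "item", children: [], text: "a" },
            { tag: "item", children: [], text: "b" },
            { tag: "total", children: [], text: "2" },
          ],
        };
        console.log(xmlNodeToObject(order)); // → { item: ["a", "b"], total: "2" }
        ```

        The obvious wrinkle, as the reply below notes in another form, is the single-element case: without a schema, `<item>a</item>` on its own deserializes as a scalar, not a one-element array.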

        • vbezhenar 3 days ago ago

          You need to distinguish between the following cases: `{}`, `{a: []}`, `{a: [1]}`, `{a: [1, 2]}`, `{a: 1}`. It is impossible to express these in XML in a universal way.
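
          A concrete way to see the distinction: all five cases round-trip through JSON as distinct values, whereas a schema-less XML encoding would typically render both `{a: [1]}` and `{a: 1}` as the same `<a>1</a>` markup.

          ```javascript
          // Each of the five cases survives a JSON round trip verbatim.
          const cases = ["{}", '{"a":[]}', '{"a":[1]}', '{"a":[1,2]}', '{"a":1}'];
          for (const s of cases) {
            console.log(JSON.stringify(JSON.parse(s))); // prints each input back unchanged
          }

          // In particular, a one-element array and a scalar stay distinguishable:
          console.log(Array.isArray(JSON.parse('{"a":[1]}').a)); // true
          console.log(Array.isArray(JSON.parse('{"a":1}').a));   // false
          ```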

          • josefx a day ago ago

            XSD lets you explicitly specify whether you are dealing with one or more elements; no need to encode that information in the data itself. It also gives you access to concrete number types, so you don't have to rely on the implementation to actually support values like 1 and 2.

            • vbezhenar 17 hours ago ago

              Not every XML document has an associated XSD. You need to transfer the XSD. You need to write a code generator for that XSD or otherwise make use of it. A lot of work which is unnecessary when you can just write `JSON.parse(string)`.

              • 13 hours ago ago
                [deleted]
          • Mikhail_Edoshin 2 days ago ago

            XML is not a data serialisation tool; it is a language tool. It creates notations and should be used to create phrase-like structures. So if a user needs these distinctions, he makes a notation that expresses them.

            • vbezhenar 2 days ago ago

              JSON is immediately usable without any notations.

              Basically the difference is that underlying data structures are different.

              JSON supports arrays of arbitrary items and dictionaries with string keys and arbitrary values. It aligns well with commonly used data structures.

              An XML node supports a dictionary with string keys and string values (attributes), one dedicated string attribute (the name), and an array of child nodes. This is a very unusual structure and requires dedicated effort to map to programming-language objects and structures. There were even so-called "OXM" frameworks (Object-XML Mappers), similar to ORMs.

              Of course in the end it is possible to build a mapping between array, dictionary and DOM. But JSON is much more natural fit.

              • Mikhail_Edoshin 2 days ago ago

                XML is immediately usable if you need to mark up text. You can literally just write or edit it and invent tags as needed. As long as they are consistent and mark what needs to be marked, any set of tags will do; you can always change them later.

                XML is meant to write phrase-like structures. Structures like this:

                    int myFunc(int a, void *b);
                
                This is a phrase. It is not data, not an array or a dictionary, although technically something like that will be used in the implementation. Here it is written in a C-like notation. The idea of XML was to introduce a uniform substrate for notations. The example above could be like:

                    <func name="myFunc">
                      <data type="int"/>
                      <args>
                        <data type="int"/>
                        <addr/>
                      </args>
                    </func>
                
                This is, of course, less convenient to write than a specific notation. But you don't need a parser, and you can have tools that process any notation. (And technically a parser can produce its results in XML; it is a very natural form, basically an AST.) Parsers are usually part of a tool and do not work on their own, so first there is a parser for C, then an indexer for C, then a syntax highlighter for C and so on: each does some parsing for its own purpose, thus doing the same job several times. With XML the processing scenario is not limited to anything: the above example can be used for documentation, indexing, code generation, etc.

                XML is a very good fit for niche notations written by a few professionals: interface specifications, keyboard layouts, complex drawings, and so on. And it is being used there right now, because there is no other tool like it, aside from a full-fledged language with a parser. E.g. there is an XML notation that describes numerous bibliography styles. How many people need to describe bibliography styles? Right. With XML they start getting usable descriptions right away and can fine-tune them as they go. And these descriptions will be immediately usable by generic XML tools that actually produce bibliographies in those styles.

                Processing XML is like parsing a language, except that the parser is generic. Assuming you have no text content it goes in two steps: first you get an element header (name and attributes), then the child elements. By the time you get these children they are no longer XML elements, but objects created by your code from these elements. Having all that you create another object and return it so that it will be processed by the code that handles the parent element. The process is two-step so that before parsing you could alter the parsing rules based on the element header. This is all very natural as long as you remember it is a language, not a data dump. Text complicates this only a little: on the second step you get objects interspersed with text, that's all.

                People cannot author data dumps. E.g. the relational model is a very good fit for internal data representation, much better than JSON. But there is no way a human could author a set of interrelated tables aside from tiny toy examples. (The same thing happens with state machines.) Yet a human can produce tons of phrase-like descriptions of anything without breaking a sweat. XML is such an authoring tool.

    • klodolph 3 days ago ago

      Having used XSLT, I remember hating it with the passion of a thousand suns. Maybe we could have improved what we had, but anything I wanted to do was better done somehow else.

      I'm glad to have all sorts of specialists on our team, like DBAs, security engineers, and QA. But we had XSLT specialists, and I thought it was just a waste of effort.

    • 3 days ago ago
      [deleted]
    • immibis 3 days ago ago

      Not the minority. People can be sad that XSLT failed and also recognize that removing it from browsers is quite sensible, given the current situation.

    • altmind 3 days ago ago

      You can do some cool stuff, like serving an RSS file that is also styled/rendered in the browser. A great loss for the 2010 idea of the semantic web. One corporation is unhappy because it does not cover their use cases.

    • theoryaway 3 days ago ago

      > RIP a good tech killed by committees overembellishing it and engineers failing to recognise what they already had over the high of building something else.

      Hope I can quote this about the Transformer architecture one day.

    • gwbas1c 3 days ago ago

      IMO, XSLT seems like something that should be handled on the server, not in the browser.

  • shevy-java 3 days ago ago

    I don't really need or use XSLT (I think), so I am not really affected either way. But I am also growing mightily tired of Google thinking "I am the web" now. This is annoying to no end. I really don't want Google to dictate to mankind what the web is or should be. Them killing off uBlock Origin also shows this corporate mindset at work.

    This is also why I dislike AI browsers in general. They generate a view to the user that may not be real. They act like a proxy-gate, intercepting things willy-nilly. I may be oldschool, but I don't want governments or corporations to jump in as middle-men and deny me information and opportunities of my own choosing. (Also Google Suck, I mean Google Search, has sucked for at least 5 years now. That was not accidental - that was deliberate by Google.)

    • alt187 3 days ago ago

      That sums up pretty much how I think about it. I don't have any opinion about XSLT either way... I'm just so tired. If Google decided to kill HTML tomorrow, who could stop them?

    • echelon 3 days ago ago

      Google needs to be broken up into three or more companies. Search, Android, Chrome, and AdSense should not live together.

      Lina Khan had the right idea and mandate, but she was too fucking slow.

      When the Dems swing back into power, the gutting of big tech needs to be swift and thorough. The backbone needs to be severed. I'm screaming at my representatives to do this.

      Google took over web tech, turned the URL bar into their Search product. They force brands to buy ads for their name brands - think about how much money they make by selling ads on the keywords "Airpods" or "Nintendo Switch". They forced removal of ad blocking tech unilaterally. They buy up all the panes of glass they don't already own. They don't allow you to install your own software on mobile anymore. And you have to buy ads for your app too, otherwise your competitor gets installed. If you develop software, you're perpetually taxed and have to do things their way. They're increasingly severing the customer relationship. They're putting themselves in as middle men in the payments industry, the automotive industry, the entertainment industry...

      Look at how many products they've built and thrown away in the game of trying to broker your daily life.

      I could go on and on and on... They're leeches. Giant, Galactus-sized leeches.

      The bulk of the money they make is from installing themselves as middlemen.

      And anyone thinking they're your friends - they conspired to suppress wages, and they're actively cutting jobs and rebuilding the teams in India. Congrats, they love you. They're gutting America and are 100% anti-American. I love India and have nothing against its people, I'm just furious that this domestic company - this giant built on the backs of American labor and its population - hates its own country so much. (You know they hate us because they're still stuffing Corporate Memphis down our throat.)

      Edit: I have to say one thing positively because Google makes me so negative. This website is beautiful. I was instantly transported back in time. But it's also a nice modern reinterpretation of retro web design. I love it so much.

      • phantasmish 3 days ago ago

        Antitrust needs to make a comeback in general. I've been seeing meme-graphics about consolidation in various industries (like how most of the stuff on the shelves in grocery stores comes from like a half-dozen companies; even if there are 20 "brands" on the shelf making the market look healthier than it is, 18 of those 20 will actually be owned by those very few companies; ditto media, telecom, etc.) my entire life.

        Why did everything consolidate terribly in the '80s and '90s? Because we basically stopped enforcing antitrust in the '70s, due to Chicago School jackasses influencing policy and jurisprudence.

        We need to undo their fake-pro-markets horse-shit and get back to having robust markets in every sector, not just software (but yes, certainly in software too). That'll require a spree of breaking up big companies across the economy.

      • gjvc 3 days ago ago

        What is "Corporate Memphis"?

    • drob518 3 days ago ago

      I’m old-school right there with you. The fact that there are really only three browser codebases now is concerning.

      • fithisux 3 days ago ago

        Extremely concerning.

        On the other hand ... with the danger of sounding funny...

        there are way more browsers for gemini or gopher!

        • drob518 3 days ago ago

          At this point, however, there are probably more browsers for gopher than there are servers for gopher. /sarc

    • shadowgovt 3 days ago ago

      What is the better alternative model?

      One of the things that startled me when working for Google is how much of their decisionmaking actually looks like "This sucks and we don't want to be responsible for it... But there isn't anyone else who can be, so I guess it's us."

      I'm not saying this is optimal or that it should be the way it is, but I am saying there are problems with alternative approaches that need to be addressed.

      To give a comparison: OpenGL tried a collaborative and semi-open approach to governance for years, and what happened was they got more-or-less curb-stomped by DirectX, so much so that it drove Windows adoption for years as "the architecture for playing videogames." The mechanism was simple: while OpenGL's committee tried to find common ground among disparate teams with disparate needs, Microsoft went

      1) we control this standard; here are the requirements you must adhere to

      2) we control the "DirectX" trademark, if you fail to adhere to the standards we decertify your product.

      As a result, you could buy a card with "DirectX" stamped on it, slap it into your Windows machine, and it would work. You couldn't do anything like that with OpenGL hardware; the standard was so loose (and enforcement so nonexistent) that companies could, via the "gestalt" feature-detection layer, claim a feature was supported if they had polyfilled a CPU-side software renderer for it. Useless for games (or basically any practical application), but who's gonna stop them from lying?

      Browsers aren't immune to market forces; a standard that is too inflexible or fails to reflect the actual implementation pressures and user needs will be undercut by alternative approaches.

      I'm not saying current governance of the web is that bad, but I bring up the history of OpenGL as an example of why an open, cooperative approach can fail and the pitfalls to watch out for. In the case of this specific decision regarding XSLT, it appears from the outside looking in that the decision is being made in consensus by the three largest browser engine developers and maintainers. What voice is missing from that table, and who should speak for them?

      (Quick side-note: Apple managed to dodge a lot of the OpenGL issues by owning the hardware stack and playing a similar card to Microsoft's with different carrots and sticks: "This is the kernel-level protocol you must implement in hardware. We will implement OpenGL in software. And if your stuff doesn't work we just won't sell laptops with your card in them; nobody in this ecosystem replaces their graphics hardware anyway").

      • Sidnicious 3 days ago ago

        Not suggesting an alternative model here, but I think that Google et al. (based on my own time working on Chrome) don't take that responsibility quite as seriously as they should. Being responsible may be an accident, but being dominant in any given area is not. The forces inside Google which take over parts of the world do so without really caring about the long-term commitment.

        It is entirely possible to preserve XSLT and other web features, e.g. by wrapping them in built-in (potentially even standardized) polyfills, but that kind of work isn't incentivized over new features and big flashy refactors.

        • shadowgovt 3 days ago ago

          Completely agree. Among the reasons I no longer work for Google is that I could not escape the perception that they were the 800-lb gorilla in the room and deeply uncomfortable with taking on any responsibility given that circumstance.

          When you are the biggest organization in a space, it's your space whether you feel qualified to lead or not. The right course of action is "get qualified, fast." The top-level leadership did not strike me as willing to shoulder that responsibility.

          My personal preferred outcome to address the security concerns with XSLT would probably be to replace the native implementation with a JavaScript-sandboxed implementation in-browser. This wouldn't solve all issues (such an implementation would almost certainly be operating in a privileged state, so there would still be security concerns), but it would take all the "this library is living at a layer that does direct unchecked memory manipulation, with all the consequences therein" off the table. There is, still, a case to be made perhaps that if you're already doing that, the next logical step is to make the whole feature optional by jettisoning the sandboxed implementation into a JavaScript library.

  • yoz-y 3 days ago ago

    With browser being as complicated as they are, I kind of support this decision.

    That said, I never used XSLT for anything, and I don’t see how its support in browsers is tied to RSS. (Sure, you could render your page from your RSS feed, but that seems like a marginal use case to me.)

    • randunel 3 days ago ago

      Would you be willing to entertain the idea that, perhaps, you haven't noticed you actually used XSLT during your mundane browsing? Sample page, how would you tell? https://www.europarl.europa.eu/politicalparties/index_en.xml

      • monerozcash 3 days ago ago

        There exists a much better html version of that page, which also comes up as the first google result and is easier to discover on the website. https://www.europarl.europa.eu/about-parliament/en/organisat...

        • 8organicbits 3 days ago ago

          The lack of the jump scare cookie banner on the XSLT version is certainly an improvement, but I otherwise agree. Google search burying XSLT-driven pages isn't a surprise given their stance.

          • shadowgovt 3 days ago ago

            I don't think there's any evidence to suggest that Chromium's position on this impacts Google's Pagerank algorithm at all.

            • 8organicbits 2 days ago ago

              I think Google has a general philosophy of the web that promotes crawlable HTML over other formats. I noticed recently that traditional job aggregators like XML job feeds, yet Google promotes JobSchema as an incompatible standard. So less that Chromium directs pagerank, and more that Google's general view of the web is HTML over XML. I hope JobSchema fails because it is harder to aggregate, unless you already index web pages at scale.

              Although I don't have firm evidence, haven't worked at Google, and you likely know company dynamics better than I.

        • 3 days ago ago
          [deleted]
      • cedilla 3 days ago ago

        Sure, there are examples of websites using XSLT, but so far I've only seen a dozen or maybe two dozen, and it really looks like they are extremely rare. And I'm pretty sure the EU parliament et al. will find someone to rework their page.

        This really is just a storm in a teacup. Nothing like the tens or hundreds of thousands of Flash and Java applet based web pages that went defunct when we deprecated those technologies.

        • glenstein 3 days ago ago

          Those had good rationale for deprecating that I would say don't apply in this instance. Flash and Java applets were closed, insecure plugins outside the web's open standards, so removing them made sense. XSLT is a W3C standard built into the web's data and presentation layer. Dropping it means weakening the open infrastructure rather than cleaning it up.

        • gucci-on-fleek 3 days ago ago

          > This really is just a storm in a waterglass. Nothing like the hundreds or tens of thousands of flash and java applet based web pages that went defunct when we deprecated those technologies.

          Sure, but Flash and Java were never standards-compliant parts of the web platform. As far as I'm aware, this is the first time that something has been removed from the web platform without any replacements—Mutation Events [0] come close, but Mutation Observers are a fairly close replacement, and it took 10 years for them to be fully deprecated and removed from browsers.

          [0]: https://developer.mozilla.org/en-US/docs/Web/API/MutationEve...

        • drob518 3 days ago ago

          They are definitely rare. And I suspect that if you eliminate government web sites where usage of standards is encouraged, if not mandated, the sightings “in the wild” are very low. My guess would be less than 1% of sites use XSLT.

        • efilife 3 days ago ago

          You ignored the argument (though probably not intentionally). You talk about how many you've seen, but you've probably seen way more and never realized it.

          • cedilla 3 days ago ago

            If there were that many, why do people only list the same handful again and again? And where are all the /operators/ of those websites complaining? Is it possible that installing an XSLT processor on the server is not as big a hassle as everyone pretends?

            Again: this is nothing like Flash or Java applets (or even ActiveX). People seriously considered Apple's decision not to support Flash on the iPhone a strategic blunder due to the number of sites using it. Your local news station probably had video or a stock market ticker using Flash. You didn't have to hunt for examples.

            • basscomm 3 days ago ago

              > If there were that many, why do people only list the same handful again and again? And where are all the /operators/ of those websites complaining?

              I've spent the last several years making a website based on XML and XSLT. I complain about the XML/XSLT deprecation from browsers all the time. And the announcements in August that Google was exploring getting rid of XSLT in the browser (which, it turned out, wasn't exploratory at all, it was a performative action that led to a foregone conclusion) was so full of blowback that the discussion got locked and Google forged ahead anyway.

              > Is it possible that installing an XSLT processor on the server is not as big a hassle as everyone pretends?

              This presumes that everyone interested in making something with XML and XSLT has access to configure the web server it's hosted on. With support in the browser, I can throw some static files up just about anywhere and it'll Just Work(tm)

              • shadowgovt 3 days ago ago

                If the server behavior can't be changed, there's a couple JavaScript engines to do the rendering client-side.

                • basscomm 3 days ago ago

                  Running a script that interprets a different script to transform a document just complicates things. What do I do when the transform fails? I have to figure out how to debug both XSLT and JavaScript to figure out what broke.

                  I don't have any desire to learn JavaScript (or use someone else's script) just to do some basic templating.

                  • shadowgovt 3 days ago ago

                    What does one do when transform fails right now? You have to debug both XSLT and a binary you don't have the source for; debugging JavaScript seems like a step up, right?

                    • basscomm 3 days ago ago

                      I used to be able to load the local XML and XSLT files in a browser and try it. When the XSLT blew up, I'd get a big ASCII arrow pointing to the part that went 'bang'. It still only kind of works in Firefox:

                        XML Parsing Error: mismatched tag. Expected: </item>.
                        Location: https://example.org/rss.xml
                        Line Number 71, Column 3:
                        </channel>
                        --^
                      
                      Chrome shows a useless white void.

                      I enabled the nginx XSLT module on a local web server and serve the files to myself that way. Now when it fails I can check the logs to see what instruction it failed on. It's a bad experience, and I'm not arguing otherwise, but it's just about the only workaround left.

                      It's a circular situation: nobody wants to use XSLT because the tools are bad and nobody wants to make better tools because XSLT usage is too low.

      • jeltz 3 days ago ago

        Battle.net's forums used to use XSLT and be a buggy mess, but not sure if that was related to their use of XSLT.

        • basscomm 3 days ago ago

          It's possible to write buggy software in every language.

          • javcasas 3 days ago ago

            Some programming languages and runtimes encourage writing more bugs.

          • DonHopkins 3 days ago ago

            Then write it in languages that have debuggers, instead of XSLT.

            • basscomm 3 days ago ago

              > Then write it in languages that have debuggers, instead of XSLT.

              Up until a few years ago, I could debug basic stuff in Firefox. If Firefox encountered an XSLT parsing error, it would show an error page with a big ASCII arrow pointing to the instruction that failed. That was a useful clue. Now it shows a blank page, which is not useful at all.

      • yoz-y 3 days ago ago

        Naturally I meant as a developer. I don’t doubt I came across XSLT-rendered pages.

    • Maxious 3 days ago ago

      If you view an RSS or Atom feed in chrome today you just get a screen of xml eg. https://developer.wordpress.org/news/feed/

      In the golden old days of 2018, browsers at least applied some styling https://evertpot.com/firefox-rss/

      You can still manually apply styling using xslt https://www.cedricbonhomme.org/blog/index.xml
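
      For anyone who hasn't seen the mechanism: it is a single processing instruction at the top of the feed. A sketch (the filenames and feed contents here are hypothetical):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- hypothetical feed.xml: the PI below tells an XSLT-capable browser to
     render this feed through /feed.xsl as ordinary HTML; browsers without
     XSLT fall back to showing the raw XML (or a blank page) -->
<?xml-stylesheet href="/feed.xsl" type="text/xsl"?>
<rss version="2.0">
  <channel>
    <title>Example Blog</title>
    <link>https://example.com/</link>
    <description>An example feed</description>
    <item>
      <title>Hello, world</title>
      <link>https://example.com/hello</link>
    </item>
  </channel>
</rss>
```

      Feed readers ignore the processing instruction entirely, which is why the same file can serve both audiences.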

      • yoz-y 3 days ago ago

        In Safari, at least, clicking an RSS link prompts you to open it in an RSS reader, which I think is a superior experience. Reading an RSS feed in the browser is not without use, but I’d argue that that’s mostly the job of the site itself.

        • zerkten 3 days ago ago

          The sites sometimes want to provide some special formatting on top of the RSS without modifying it. For example, you might point people to available RSS readers which may not be installed or provide other directions to end users. RSS feeds are used in places other than reading apps. I've seen people suggest that this transformation could be done server-side, but that would modify the RSS feed which needs to be consumed.

      • internetter 3 days ago ago

        > You can still manually apply styling using xslt

        Unless I'm using XSLT without knowing, you can do this with the xml-stylesheet processing instruction

        https://boehs.org/in/blog.xml

      • lifthrasiir 3 days ago ago

        But XSLT is not strictly required for styling. In fact, Firefox also supports an out-of-band stylesheet inclusion via the `Link` HTTP header [1]:

            Link: </style.css>; rel=stylesheet
        
        (Yes, this works even without <?xml-stylesheet?> PI others have mentioned.)

        I think the best strategy for Google is to support this and simultaneously ditch XSLT. This way nothing is truly lost.

        [1] You can test your browser from: https://annevankesteren.nl/test/html-element/style-header.ph...

        • kevin_thibedeau 3 days ago ago

          > nothing is truly lost.

          XSLT does much more than CSS.

          • lifthrasiir 3 days ago ago

            Technically yes, but so what? The RSS use case is almost the only thing XSLT can uniquely provide (at the moment). Every other use case of XSLT can be done in other ways, including the use of server-side XSLT processors.

    • sltkr 3 days ago ago

      For RSS feeds, XSLT stylesheets are used to display a human-readable version in the browser.

      Random example: https://lepture.com/en/feed.xml

      This is useful because feed URLs look the same as web page URLs, so users are inclined to click on them and open them in a web browser instead of an RSS reader. (Many users these days don't even know what an RSS reader is). The stylesheet allows them to view the feed in the browser, instead of just being shown the XML source code.

      • bawolff 3 days ago ago

        Why is this so critical? We don't do this for any other format. If you put an MS Office document on a page, we don't have the browser render it; we download it and pass it off to a dedicated program. Why is RSS so special here?

        • Fileformat 3 days ago ago

          Because we want RSS to be friendly to new users. If you display an RSS feed as a wall of XML text, no new user will understand. If you instead make it so clicking an RSS link brings up a blurb about what RSS is & links on how to use it, they might understand.

          And we have done it for other formats: PDF is now quite well supported in browsers without plugins/etc.

          • crazygringo 3 days ago ago

            An RSS feed is not a document meant for viewing. It's not like PDF or HTML or a video.

            It's a format intended to be consumed like an API call. It's like JSON. The link is something you import into an aggregator.

            RSS feeds shouldn't even be displayed as XML at all. They should just be download links that open in an aggregator application. The same way .torrent files are imported into a torrenting client, not viewed.

            • Fileformat 3 days ago ago

              Well, I do agree with you, but...

              1. This is pretty difficult for someone who doesn't know about RSS. How would they ever learn what to do with it?

              2. Browsers don't do that. There used to be an icon in the URL bar when they detected an RSS feed. It would be wonderful if browsers did support doing exactly what you suggest. I'm not holding my breath.

              I'm not looking to replicate my blog via XSLT of the RSS feed: that's what the blog's HTML pages are. I just don't want to alienate non-RSS users.

              • crazygringo 3 days ago ago

                People learn what to do with RSS the same as with anything else. They look it up or someone tells them. It's not like a .psd file tells you what it is, if you don't have Photoshop installed.

                I don't think you need to worry about "alienating" non-RSS users. If somebody clicks on an RSS link without knowing what RSS is and sees gibberish, that's not really on you. They can just look it up. Or if you want, you can put a little question-mark icon next to the RSS link if you want to educate people. But mostly, for feeds and social media links, people just ignore the icons/acronyms they don't recognize.

        • johannes1234321 3 days ago ago

          Because the "semantic web" was an interesting idea.

          And: Because it exists/existed and thus people relied upon it.

          With the amount of sites on the web, even a small number relying on features, each having just a bunch of users, it becomes a big number of impacted.

          • bawolff 3 days ago ago

            I dont see how xslt is connected to semantic web

            • Mikhail_Edoshin 3 days ago ago

              "Semantic" means making all distinctions you care about and not making any distinctions you do not care about. This means a custom notation for nearly every case. XML is such a tool. And XSLT is a key component to make all these notations compatible with each other.

              • bawolff 3 days ago ago

                That is not what "semantic web" means. Semantic web was a series of standards (RDF and friends) made by the W3C in the early 2000s that didn't really catch on.

            • johannes1234321 3 days ago ago

              The GP asked "Why is RSS so special here?"

              And XSLT in that context is interesting as one can ship the RSS file, the web browser renders it with XSLT to human readable and a smart browser can do smart things with it. All from the same file.

          • hk__2 3 days ago ago

            Ok, but maintaining a web browser that supports a ton of small features that nobody-except-me-and-my-cousin is using has a huge cost; you don’t support obscure features just because someone somewhere is relying on them (relevant: https://xkcd.com/1172/).

        • _heimdall 3 days ago ago

          Why would Google keep supporting AMP if the line is drawn only by use?

          They chose to kill off a spec and have it removed from every browser because they don't like it. They choose to keep maintaining AMP because it's their pet project and spec. It's as simple as that; it has nothing to do with limited resources forcing them to trim features rather than maintain or improve them.

        • NoboruWataya 3 days ago ago

          Well, IMO it would be cool if we could do that, but the MS Office formats are a lot more complicated so it's a lot more work to implement. Also, quite often the whole point of sharing a file in MS Office format is so that the user can take it and edit it, which would require a dedicated program anyway.

        • ruszki 3 days ago ago

          If you think about it, basically nothing except HTML is a critical function of browsers. You can solve everything just with that. We don’t even need CSS, or any custom styling at all. JavaScript is absolutely not necessary.

          • yoz-y 3 days ago ago

            Yes and no.

            You can have a document without CSS but you can’t style it.

            You can have a document without JavaScript but only a static one (still interactive, but only through forms).

            On the other hand, you can replace XSLT with server side rendering, or JavaScript. It does not serve a truly unique function.

            • basscomm 3 days ago ago

              > You can have a document without CSS but you can’t style it.

              What? CSS didn't come around until several years after HTML did. And you can certainly style an HTML document without CSS.

              > On the other hand, you can replace XSLT with server side rendering, or JavaScript.

              You can also execute JavaScript on the server to make browsers more secure, but I don't see browser makers clamoring to remove JavaScript support.

              > It does not serve a truly unique function.

              It does, though. It lets someone do some basic programming of some web pages without having to become a developer

              • yoz-y 3 days ago ago

                Inline styles came and went and were replaced by CSS. (style attribute is still just CSS). font, color, and others are no longer in HTML5 spec.

                > You can also execute JavaScript on the server to make browsers more secure, but I don't see browser makers clamoring to remove JavaScript support.

                JS is not there just for client side static DOM rendering. Something like Google Maps or an IRC chat would be a much poorer experience without it.

                • basscomm 3 days ago ago

                  > font, color, and others are no longer in HTML5 spec.

                  Sometimes browsers are asked to render HTML documents that were written decades ago to conform to older specs and are still on the internet. That still works

                  > JS is not there just for client side static DOM rendering. Something like Google Maps or an IRC chat would be a much poorer experience without it.

                  Of course they would. That's most of the point. You can do a lot more damage with JavaScript than you currently can with XSLT, but XSLT has to go because of 'security concerns'

        • sltkr 3 days ago ago

          I don't think it's a critical feature, but it is nice-to-have.

          Imagine if you opened a direct link to a JPEG image and instead of the browser rendering it, you'd have to save it and open it in Photoshop locally. Wouldn't that be inconvenient?

          Many browsers do support opening web-adjacent documents directly because it's convenient for users. Maybe not Microsoft Word documents, but PDF files are commonly supported.

          • bawolff 3 days ago ago

            Yeah, but browsers actually make use of that format. And it's not like you can add a special header to JPEG files to do custom reformatting of the JPEG via a Turing-complete language. Browsers just display the file.

      • miki123211 3 days ago ago

        You can do the same by checking Accept headers, or User-Agent if you truly must.

      • nomercy400 3 days ago ago

        Aren't there other ways to load and parse a technical format like RSS to a human-readable format? Like you would do with JSON.

        Or can't you polyfill this / use a library to parse this?

        • sltkr 3 days ago ago

          You can do the transformation server-side, but it's not trivial to set it up. It would involve detecting the web browser using the "Accept" header (hopefully RSS readers don't accept text/html), then using XSLT to transform the XML to XHTML that is sent to the client instead, and you probably need to cache that for performance reasons. And that's assuming the feed is just a static file, and not dynamically generated.

          In theory you could do the transformation client side, but then you'd still need the server to return a different document in the browser, even if it's just a stub for the client-side code, because XML files cannot execute Javascript on their own.

          Another option is to install a browser extension but of course the majority of users will never do that, which minimizes the incentive for feed authors to include a stylesheet in the first place.
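
          The detection itself can be cheap, even if the overall setup isn't. A minimal sketch of the Accept-header heuristic described above (hedged: real Accept headers vary, and a feed reader that advertises text/html would be misclassified):

```python
def wants_html(accept_header: str) -> bool:
    """Crude content negotiation: browsers send text/html in their
    Accept header; most feed readers ask for XML media types or */*.
    This is a heuristic, not a complete Accept parser (q-values are
    stripped and ignored)."""
    media_types = [part.split(";")[0].strip()
                   for part in accept_header.split(",")]
    return "text/html" in media_types or "application/xhtml+xml" in media_types

# A browser's typical Accept header vs. a feed reader's
browser = "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8"
reader = "application/rss+xml, application/atom+xml, application/xml;q=0.9"
assert wants_html(browser) is True
assert wants_html(reader) is False
```

          A server using this would send the transformed XHTML when `wants_html` is true and the raw feed otherwise, with the caching caveats mentioned above still applying.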

          • nomercy400 3 days ago ago

            How about using JavaScript to fetch the XML, and then parse/transform it with a JavaScript or wasm XSLT library? Just like you would do with JSON.

            You need a server to serve JSON as well. Basically, see XML as a data format.

            RSS readers are not Chrome, so they have their own libraries for parsing/transforming with XSLT.

        • _heimdall 3 days ago ago

          Not without servers rendering the HTML or depending on client-side JS for parsing and rendering the content.

          It's also worth noting that the latest XSLT spec actually supports JSON as well. Had browsers decided to implement that spec rather than remove support altogether, you'd be able to render JSON content to HTML entirely client-side without JS.

  • pseudosavant 3 days ago ago

    This site is a bit of a Rorschach test as it plays both sides of this argument: bad Google for killing XSLT, and the silliness of pushing for XSLT adoption in 2025.

    "Tell your friends and family about XSLT. Keep XSLT alive! Add XSLT to your website and weblog today before it is too late!"

    • karel-3d 3 days ago ago

      It's clearly making fun of the hyperbole.

    • James_K 3 days ago ago

      I already have XSLT in my website because I have an Atom feed and XSLT is the only way to serve formatted Atom/RSS feeds in a static site. Perhaps you have never considered the idea that someone might want to purchase some cheap static hosting to serve their personal website, but it is a fine way to do things. This change pries the web ever further out of the hands of common people and into the big websites that just want the browser to serve their apps.
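
      For anyone curious, the whole setup is one processing instruction at the top of the feed (file names here are illustrative). Feed readers ignore the instruction and parse the feed as usual; browsers fetch the stylesheet and render the result:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="/feed.xsl"?>
<feed xmlns="http://www.w3.org/2005/Atom">
  <title>Example Blog</title>
  <link href="https://example.com/"/>
  <updated>2025-01-01T00:00:00Z</updated>
  <id>https://example.com/feed.xml</id>
  <entry>
    <title>Hello</title>
    <link href="https://example.com/hello"/>
    <id>https://example.com/hello</id>
    <updated>2025-01-01T00:00:00Z</updated>
  </entry>
</feed>
```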

      • javcasas 3 days ago ago

        There's plenty of cheap hosting out there, in most cases with PHP support for the odd dynamic thing.

        • James_K 3 days ago ago

          How do you intend to put PHP in an RSS document? If it serves an HTML one instead, then the RSS will no longer be available. You could try checking the HTTP headers to determine if the page is being fetched by an RSS reader or a browser, but such an approach is much more brittle than XSLT, which solves the problem exactly and easily. Not to mention it allows users to download browser extensions that override the provided formatting of XSLT documents with a custom standard one if they desire.

          • javcasas 2 days ago ago

            You know, the thing about standards is that there are many.

            https://www.rfc-editor.org/rfc/rfc7231#section-5.3.2

            Checking the HTTP headers is the HTTP standard.

            • James_K 2 days ago ago

              Not every application will set these correctly. It is less reliable than simply serving a static page. And that's the core of this issue. Before XSLT was removed, your website could be a directory of static content. You can put it in a zip file and send it anywhere you want. Now, even the most basic website (blog + feed) will require some dynamic content to work properly. We go from a world where static hosting is possible to one where it's less possible, and all because some browser implementors couldn't be bothered to upgrade a library to a safe version.

              This would also break the workflow I have for my site, where I build it as a static directory locally during development and point Python's trivial HTTP server at it to access the content over localhost.

              And it's totally insulting because the people removing this have created a (memory safe!) browser extension that lets you view XSLT documents, and put special logic in the browser to show users a message telling them to download that extension when an XSLT-styled document is loaded. They should bundle the extension with the browser instead of breaking my website and telling users where to fix it.

      • DonHopkins 3 days ago ago

        It's perfectly reasonable (and much more maintainable and powerful) to use client side JavaScript on a static site to transform Atom or RSS into HTML.

        If your argument is that you don't want to use JavaScript because it's Turing complete and insecure and riddled with bugs and security holes, then why the fuck are you using XSLT?

        • James_K 3 days ago ago

          RSS documents do not support JavaScript. Also, XSLT is not Turing complete as far as I know, though some implementations extend the spec to become Turing complete. Even if it is, a potentially Turing-complete XSLT document does not present the same kinds of risks as JavaScript does. Do you think someone will be able to fingerprint your browser using XSLT? I'll file that under “highly unlikely”. Spectre and Meltdown also aren't exactly going to work in XSLT. There are memory-safe XSLT parsers available, and existing parsers can be run in a memory-safe WASM sandbox, so that's not really a concern either.

          • DonHopkins 3 days ago ago

            But as you obviously know, HTML documents do support JavaScript, and there's no reason to link to a raw XML RSS or Atom document directly, so problem solved. If you're so cautious you refuse to enable JavaScript, then you have absolutely no justification for enabling XSLT.

            Handwaving that vulnerabilities are "highly unlikely" is dangerous security theater. It doesn't matter how unlikely you guess and wish they are; they just have to be possible. And the fact that the XSLT 1.0 implementations built into browsers are antique, un-sandboxed, memory-unsafe C++ code makes vulnerabilities "highly likely", not "highly unlikely", which the record clearly proves.

            Browsers only natively support the ancient XSLT 1.0, so if you need a less antiquated version, you should use a modern memory-safe sandboxed polyfill, or process it on the server side, or more safely not use XSLT at all and simply use JavaScript instead. Transforming RSS to HTML directly with JavaScript is a MUCH smaller and harder attack surface than the massive overkill of including an entire sandboxed general-purpose Turing-complete XSLT processor, and certainly better than foolishly relying on non-sandboxed, old, untrustworthy, poorly maintained C++ code built into the browser.

            Of course all versions of XSLT are Turing complete, as you can easily confirm on Wikipedia, which is quite obvious if you have ever read the manual and used it. It has recursive template calls, conditionals, variables and parameters, pattern matching and selection, text and node construction, unbounded input and recursion depth, etc. So how could it possibly not be Turing complete, given that it has the same expressive power as functional programming languages? That should be quite obvious to anyone who knows XSLT and basic CS101, at a glance, without a formal proof.

            https://en.wikipedia.org/wiki/XSLT

            >While XSLT was originally designed as a special-purpose language for XML transformation, the language is Turing-complete, making it theoretically capable of arbitrary computations.
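
            The ingredients listed above (recursive template calls, conditionals, parameters) look like this even in plain XSLT 1.0; an illustrative sketch of a named template calling itself with a smaller parameter:

```xml
<!-- XSLT 1.0 sketch: recursion plus a conditional, the ingredients
     behind the Turing-completeness claim. -->
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template name="countdown">
    <xsl:param name="n"/>
    <xsl:if test="$n &gt; 0">
      <xsl:value-of select="$n"/>
      <xsl:text> </xsl:text>
      <!-- recursive call with a smaller parameter -->
      <xsl:call-template name="countdown">
        <xsl:with-param name="n" select="$n - 1"/>
      </xsl:call-template>
    </xsl:if>
  </xsl:template>
</xsl:stylesheet>
```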

            Do you recall the title of Chrome's web page explaining why they're removing XSLT? "Removing XSLT for a more secure browser" (aka "Bin Ladin Determined To Strike in XSLT" ;). Didn't you read that article, and the recent HN discussion about it? You can't just claim nobody warned you, like GW Bush tried to do.

            https://news.ycombinator.com/item?id=45823059

            https://developer.chrome.com/docs/web-platform/deprecating-x...

            >Why does XSLT need to be removed?

            >The continued inclusion of XSLT 1.0 in web browsers presents a significant and unnecessary security risk. The underlying libraries that process these transformations, such as libxslt (used by Chromium browsers), are complex, aging C/C++ codebases. This type of code is notoriously susceptible to memory safety vulnerabilities like buffer overflows, which can lead to arbitrary code execution. For example, security audits and bug trackers have repeatedly identified high-severity vulnerabilities in these parsers (e.g., CVE-2025-7425 and CVE-2022-22834, both in libxslt). Because client-side XSLT is now a niche, rarely-used feature, these libraries receive far less maintenance and security scrutiny than core JavaScript engines, yet they represent a direct, potent attack surface for processing untrusted web content. Indeed, XSLT is the source of several recent high-profile security exploits that continue to put browser users at risk. The security risks of maintaining this brittle, legacy functionality far outweighs its limited modern utility. [...]

            Your overconfidence in XSLT's security in browsers is unjustified and unsupported by its track record and reputation, its complexity is extremely high, it's written in unsafe un-sandboxed C/C++, it gets vastly less attention and hardening and use than JavaScript, and its vulnerabilities are numerous and well documented.

            Examples:

            CVE-2025-7425: A heap use-after-free in libxslt caused by corruption of the attribute type (atype) flags during key() processing and tree-fragment generation. This corruption prevents proper cleanup of ID attributes, enabling memory corruption and possibly arbitrary code execution.

            CVE-2024-55549: Another use-after-free in libxslt (specifically xsltGetInheritedNsList) disclosed via a Red Hat advisory.

            CVE-2022-22834: An XSLT injection vulnerability in a commercial application (OverIT Geocall) allowing remote code execution from a “Test Trasformazione XSL” feature. Shows how XSLT engines/processors can be attack surfaces in practice.

            CVE-2019-18197: (libxslt 1.1.33) In the function xsltCopyText (file transform.c) a pointer variable isn’t reset in certain flows; if the memory area was freed and reused, a bounds check could fail and either write outside a buffer or disclose uninitialised memory.

            CVE-2008-2935: buffer overflows in crypto.c for libexslt.

            CVE-2019-5815: type confusion in xsltNumberFormatGetMultipleLevel, repeated memory safety flaws (heap/stack corruption, improper bounds checks, pointer reuse) in the library over many years.

            • James_K 3 days ago ago

              >and there's no reason to link to a raw XML RSS or Atom document directly

              What's the point of having an Atom feed if I can't give people a link to it? Do you just expect me to write “this website has an atom feed” and have only the <link> element invisibly pointing at it? That is terrible UX. And then what if I want to include a link to my feed in a message to share it with someone?

              >Handwaving that vulnerabilities are "highly unlikely" is dangerous security theater

              No it isn't. There are memory safe XSLT implementations. Not so for JavaScript. This is because XSLT is a simple language and JavaScript a complicated one. You are trying to make the case that XSLT is inherently unsafe because poor implementations of it exist, yet it is actually much safer because safe implementations exist and are easy to write. It can initiate no outgoing internet connections, cannot read from memory directly, cannot do any of the things that makes JavaScript inherently dangerous.

              >simply transforming RSS to HTML directly with JavaScript is a MUCH smaller and harder attack surface than the massive overkill of including an entire sandboxed general purpose Turing complete XSLT processor

              Firstly, you can't include JavaScript tags in RSS or Atom, so my website would not conform to any web standard. Secondly, by using JavaScript, I'm demanding that my users enable a highly dangerous web feature that has been the basis for many attacks. By using XSLT, I'm giving them the option to use a much smaller interface with safer implementations available. How many CVEs have there been in JavaScript runtimes compared with XSLT? And finally, browser developers should just bundle one of these JavaScript polyfills and activate it for documents with stylesheets if they are so easy to use. Demanding that users deviate from web standards to get simple features like XML styling is ridiculous, and it would clearly be little effort for them to silently append a polyfill script to documents with XSLT automatically. If that's the only way they can make it secure, that's what they should do.

              >Your overconfidence in XSLT's security in browsers is unjustified and unsupported by its track record and reputation, its complexity is extremely high, it's written in unsafe un-sandboxed C/C++, it gets vastly less attention and hardening and use than JavaScript, and its vulnerabilities are numerous and well documented.

              I have no confidence at all in browsers' implementations of XSLT because they admit they use a faulty library. I have absolute confidence that it would be little effort to replace the faulty library with a correct one, and that doing so would be miles safer than expecting users to enable JavaScript.

              >Of course all versions of XSLT are Turing complete, as you can easily confirm on Wikipedia

              Do not quote Wikipedia as a source. In this case, the provided source in the Wikipedia page claims only that version 2.0 is Turing complete, and this claim is erroneous, based on a proprietary extension of certain XSLT processors but not that used in Chrome.

              http://tkachenko.com/blog/archives/000275.html

              It is quite frankly ridiculous to me that people are bending over backwards to suggest that XSLT is somehow an inherent security risk when you can include a JavaScript fragment in pages to trigger an XSLT processor. Whatever risk is posed by XSLT is a clear subset of that posed by JavaScript for this reason alone. You will never see a complete JavaScript implementation in XSLT because it isn't possible. One language is given greatly more privileged access to the resources and capabilities of the user's computer than the other.

              The decision of browser vendors to include faulty XSLT libraries when safe ones exist is the source of risk here. And now these same people, who have been putting users at risk in a billion different ways over the years, come to me and suggest that I have to remove a completely innocuous feature from my website and replace it with a more dangerous one, breaking standards compliance, because they can't be bothered to switch from an unsafe implementation to a safe one.

  • charcircuit 3 days ago ago

    > XSLT will soon enter the Google graveyard.

    The google graveyard is for products Google has made. It's not for features that were unshipped. XSLT will not enter the Google graveyard for that reason.

    >We must conclude Google hates XML & RSS!

    Google Reader was shut down due to declining usage and Google's unwillingness to keep investing resources in the product. It's not that Google hates XML and RSS. It's that end users and developers don't use XSLT and RSS enough to warrant investing in them.

    >by killing [RSS] Google can control the media

    The vast majority of people in the world do not get their news by RSS. It never would have taken over the media complex. There are other surfaces for news, like X, that Google is not able to control; Google is not the only place where news can surface.

    > Google are now trying to control LEGISLATION. With these technologies removed what is stopping Google?

    It is quite a reach to say that Google removing XSLT will give them control over government legislation. They are completely unrelated.

    >How much did Google pay for this support?

    Google is not paying for support. These browsers essentially have revenue-sharing agreements with Google for the traffic they provide; the payments are for the traffic to Google.

  • susam 3 days ago ago

    End of an era! I remember going through XSLT tutorials many decades ago and learning everything there was to learn about this curious technology that could make boring XML documents come 'alive'. I still use it to style my RSS feeds, for example, <https://susam.net/feed.xml>. It always felt satisfying that an XML file with a stylesheet could serve as both data and presentation.

    Keeping links to the original announcements for future reference:

    1) <https://groups.google.com/a/chromium.org/g/blink-dev/c/CxL4g...>

    2) <https://developer.chrome.com/docs/web-platform/deprecating-x...>

    I know that every such feature adds significant complexity and maintenance burden, and most people probably don't even know that many browsers can render XSLT. Nevertheless, it feels like yet another interesting and niche part of the web, still used by us old-timers, is going away.

  • redbell 3 days ago ago

    IMHO, Google has become the most powerful tech company out there! It has a strong hold on almost every aspect of our lives, and it is becoming extremely difficult to completely decouple from it. My problem with this is that it now dictates and influences what can be done, what is allowed and what is not, and, with its latest Android saga (https://news.ycombinator.com/item?id=45017028), that's become worrying.

    I strongly encourage building a website entitled something like keepXSLTAlive.tld to advocate for XSLT, as the other guys did with https://keepandroidopen.org/ for Android (https://news.ycombinator.com/item?id=45742488), or keep this current site (https://xslt.rip/) but update the UI a little bit to better reflect the protest vibe.

    • rahkiin 3 days ago ago

      What you say about Google might be true. And its changes to Android might be bad…

      But that does not mean XSLT should be kept alive just because of that. It should be judged on its own merits.

      • _heimdall 3 days ago ago

        And that's part of the problem: they didn't judge it on its merits.

        Google judged a 25 year old spec that is now 2 major versions out of date.

        • jeltz 3 days ago ago

          So why is almost nobody here actually defending it on its own merits? In my opinion XSLT was a bad idea ~20 years ago when I started in web development. It was convoluted, not nice to work with and the implementations buggy.

          Most people seem to think it is bad because it is Google who want to remove it. Personally I just see Google finally doing something good.

          • _heimdall 3 days ago ago

            There are plenty of defenders of XSLT around here. More importantly though, this thread isn't focused on debating the pros and cons the tech.

          • righthand 3 days ago ago

            There is so much defense of XSLT it’s crazy you assume no one is here defending it. This thread isn’t the single defense point against Google.

            Not only that, Google engineer Mason Freed has shown pretty forcefully that he will not listen to defense, reason, or logic. This is further evidenced by Google repeatedly trying to kill it for 25 years.

            Personally I just see you licking Google’s boot.

  • tuveson 3 days ago ago

    If Google cured cancer tomorrow, there's someone that would be complaining about it and adding "cancer" to the "killed by Google" list. I would be very surprised if smaller browser vendors were happy about having to maintain ancient XSLT code, and I doubt new vendors were planning on ever adding support. Good riddance.

    • hrimfaxi 3 days ago ago

      Smaller browser vendors already pick and choose the features they support. Which companies do you have in mind that are cheering for this initiative?

      • tuveson 3 days ago ago

        The post specifically calls out Apple and Mozilla as wanting to get rid of XSLT support, but just insinuates that this is because Google is paying them off. Obviously I think Google's monopoly position and backroom dealings are bad, but I also think that's completely unrelated, and that the more likely explanation for the other mainstream vendors wanting to get rid of XSLT is that it's a feature virtually no one uses and is likely a maintenance burden for the other non-Chromium browsers.

        > Smaller browser vendors already pick and choose the features they support.

        If there weren't a gazillion features to support, maybe there would be more browsers. I think criticizing Google and other vendors for _adding_ tons of bloat would be a better use of time.

    • mouse_ 3 days ago ago

      > smaller browser vendors

      Such as???

  • SvenL 3 days ago ago

    Boy is this an awesome web page. Suddenly I have the urge to create an HTML page with iframes, blink, marquee and table tags (for layout of course)

    • altfredd 3 days ago ago

      You can always render blink and marquee with Canvas.

      Just kidding, Canvas is obsolete technology, this should obviously be done with WebGPU

      • paavohtl 3 days ago ago

        I know you're being sarcastic, but to be pedantic WebGPU (usually) uses canvas. Canvas is the element, WebGPU is one of the ways of rendering to a canvas, in addition to WebGL and CanvasRenderingContext2D.

        • lukan 3 days ago ago

          And also don't expect smooth sailing with WebGPU yet, unless all your users have modern mainstream browsers with up to date hardware.

          • paavohtl 3 days ago ago

            And even that isn't enough; no browser supports WebGPU on all platforms out of the box. https://caniuse.com/webgpu

            Chrome supports it on Windows and macOS, Linux users need to explicitly enable it. Firefox has only released it for Windows users, support on other platforms is behind a feature flag. And you need iOS 26 / macOS Tahoe for support in Safari. On mobile the situation should be a bit better in theory, though in my experience mobile device GPU drivers are so terrible they can't even handle WebGL2 without huge problems.

    • blitzar 3 days ago ago

      Needs an "under construction" banner

      • xpe 3 days ago ago

        And webring buttons at the bottom!

        • blitzar 3 days ago ago

          also lacking a visitor counter and guestbook

          • bean469 a day ago ago

            There actually is a guestbook on the bottom of the page

            • blitzar a day ago ago

              amazing! i was so fixated by the explosion animation I missed it. sad it leads to a contact me page instead.

    • ctm92 3 days ago ago

      Recently had to grab content from a page that was layouted with tables. Just nested tables over tables, not even ids for the elements.

      • sethaurus 3 days ago ago

        I invite you to view the source of the very page we're on right now.

        • znort_ 3 days ago ago

          thanks for this, you made my day! i never bothered to look.

          i still remember when tables were forced out of fashion by hordes of angry div believers! they became anathema and instantly made you a pariah. the arguments were very passionate but never made any sense to me: the preaching was separating structure from presentation, mostly to enable semantics, and then semantics became all swamped with presentation so you could get those damned divs aligned in a sensible way :-)

          just don't use (or abuse) them for layout but tables still seem to me the most straightforward way to render, well, tabular content.

      • VerifiedReports 3 days ago ago

        laid out

  • GaryBluto 3 days ago ago

    While I agree with the sentiment, I loathe these "retro" websites that don't actually look like how most websites looked back then. It's like how people remember the 80s as neon blue and pink when it was more of a brownish beige.

    • coldtea 3 days ago ago

      >While I agree with the sentiment, I loathe these "retro" websites that don't actually look like how most websites looked back then.

      Countless websites on Geocities and elsewhere looked just like that. MY page looked like that (but more edgy, with rotating neon skull gifs). All those silly GIFs were popular and there were sites you could find and download some for personal use.

      >It's like how people remember the 80s as neon blue and pink when it was more of a brownish beige.

      In North Platte or Yorkshire maybe. Otherwise plenty of neon blue and pink in the 80s. Starting from video game covers, arcades, neon being popular with bars and clubs, far more colorful clothing being popular, "Memphis" style graphic design, etc.

      • NoGravitas 3 days ago ago

        The brown, beige, and dark orange were extremely prevalent in the 80s --- but a lot of that was a result of the fact that most things in your environment are never brand new; the first half of the 80s was mostly built in the second half of the 70s.

    • cpach 3 days ago ago

      This look with animations and bright text on dark repeated backgrounds was definitely popular for a while in the late 90s. You wouldn’t see it on larger sites like Yahoo or CNN, but it was definitely not unheard of for personal sites.

      Gray backgrounds were also popular, with bright blue for unvisited links and purple for visited links. IIRC this was inspired by the default colors of Netscape Navigator 2.

      • johannes1234321 3 days ago ago

        > IIRC this was inspired by the default colors of Netscape Navigator 2.

        "Inspired" is an interesting word for "didn't set custom values." And I believe Mosaic used the same colors before. I'm not even sure when HTML introduced the corresponding attributes (this was all before CSS ...)

    • arcanemachiner 3 days ago ago

      Now that you mention it, something did seem a little off about the thinking-butt emoji...

    • Moosturm 3 days ago ago

      My old website from the 90s looks disturbingly similar to this one.

    • jameslk 3 days ago ago

      You’re right, there isn’t even any marquee or blinking text

      • mrspuratic 3 days ago ago

        A marquee, animated work-in-progress GIF and a visit counter CGI would have nailed it.

    • antonvs 3 days ago ago

      Could just be the author’s personal style?

      I once got into a cab in NYC on Halloween and the driver said to me, hey, you really nailed that 80s hairstyle, thinking I had styled it for Halloween. I had to tell him dude, I’m from the 80s.

    • psychoslave 3 days ago ago

      Well, the page doesn't tell anything about its style, so all these impressions are really what people interpret.

    • rusk 3 days ago ago

      > it was more of a brownish beige.

      Bleed through from the 70s

    • 3 days ago ago
      [deleted]
    • themafia 3 days ago ago

      > don't actually look like how most websites looked back then

      https://geocities.restorativland.org/Area51/

      > was more of a brownish beige.

      Did you never watch MTV?

    • dist-epoch 3 days ago ago

      Maybe not most, but there were plenty of black/blue/dark sites.

    • galkk 3 days ago ago

      Exactly.

      If there is no white 1x1 pixel that is stretched in an attempt to make something that resembles actual layout, or multiple weird tables, I always ask: are they even trying.

      In all seriousness- they got quite a good run with xslt. Time to let it rest.

      • mickeyp 3 days ago ago

        1x1 pixels for padding and aligning? That came later. Your memory is off.

        In the 90s, sites did kinda look like that.

        • coldtea 3 days ago ago

          1x1 pixels for padding and aligning were absolutely a thing in the late 90s (1997+). Don't know what alternative history you have in mind, but it was used in the "table layout" era.

          What came later was the float layout hell- sorry, "solution".

        • bazoom42 3 days ago ago

          The 1x1 pixel gif hack arrived shortly after Netscape 1.1 introduced tables. I believe this was before colored text and tiled backgrounds became available. So the hack is definitely part of the “golden age” of web design.

  • tannhaeuser 3 days ago ago

    Worth noting XSLT is actually based on DSSSL, the Scheme-based document transformation and styling language of SGML. Core SGML already has "link processes" as a means to associate simple transforms/renames reusing other markup machinery concepts such as attributes, but it also introduces a rather low-level automaton construct to describe context-dependent and stateful transformations (the kind which would've been used for recto/verso rendering on even/odd print pages).

    I think it's interesting because XSLT, based on DSSSL, is already Turing-complete and thus the XML world lacked a "simple" sub-Turing transformation, templating, and mapping macro language that could be put in the hands of power users without going all the way to introduce a programming language requiring proper development cycles, unit testing, test harnesses, etc. to not inevitably explode in the hands of users. The idea of SGML is very much that you define your own little markup vocabulary for the kind of document you want to create at hand, including powerful features for ad-hoc custom Wiki markup such as markdown, and then create a canonical mapping to a rendering language such as HTML; a perspective completely lost in web development with nonsensical "semantic HTML" postulates and delivery of absurd amounts of CSS microsyntax.

    • necovek 3 days ago ago

      As a youngster entering the IT professional circles, I was enamoured with SGML: creating my own DTDs for humane entry for my static site generator, editing my SGML source document with Emacs sgml-mode. I worked on TEI and DocBook documents too (and was there something related to Dewey coding system for libraries?).

      However, processing fully compliant SGML, before you even introduce DSSSL into the picture, was a nightmare. With only one open-source, fully compliant parser (nsgmls), which was hard to build on contemporary systems, let alone run, really using SGML for anything was an exercise in frustration.

      As an engineering mind, I loved the fact you could create documents that are concise yet meaningful, and really express the semantics of your application as efficiently as possible. But I created my own parsers for my subset, and did not really support all of the features.

      HTML was also redefined to be an SGML application with 4.0.

      I originally frowned on XML as a simplification to make it work for computers vs for humans, but with XML, XSLT, Xpath... specs, even that was too complex for most. And I heavily used libxml2 and libxslt to develop some open source tooling for documentation, and it was full of landmines.

      All this to say that SGML has really spectacularly failed (IMO) due to sheer flexibility and complexity. And going for "semantic HTML" in lieu of SGML + DSSSL or XML + XSLT was really an attempt to find that balance of meaning and simplicity.

      It's the common cycle as old as software engineering itself.

      • tannhaeuser 3 days ago ago

        > HTML was also redefined to be an SGML application with 4.0

        Nope, it was intended as SGML from the get go; cf [1].

        > SGML has really spectacularly failed (IMO) due to sheer flexibility and complexity

        HTML (and thus SGML) is the most used document language there ever has been, by far.

        [1]: https://info.cern.ch/hypertext/WWW/MarkUp/MarkUp.html

        • necovek 3 days ago ago

          I stand corrected: HTML was defined as an SGML application from the very first published version in 1993 (https://www.w3.org/MarkUp/draft-ietf-iiir-html-01.txt), but I know the original draft in 1990-91 was heavily SGML inspired even if it didn't really conform to the spec (nor provide a DTD). Thanks for pointing this out, it's funny how memory can play games on us :)

          While HTML is clearly the most used document markup language there has ever been, almost nobody is using an SGML-compliant parser to parse and process it, and most are not even bothering with the DTD itself; not to mention that HTML5 does not provide a DTD and really can't even be expressed with an SGML DTD.

          So while HTML used to be one of SGML "applications" (document types, along with a formal definition), on the web it was never treated as such, but as a very specific language that is inspired by SGML and only inspired by the spec too (since day 1, all browsers accepted "invalid" HTML and they still do).

          Ascribing the success to SGML is completely backwards, IMHO: HTML was successful despite it being based on SGML, and for all intents and purposes, majority never really cared about the relationship.

    • user3939382 3 days ago ago

      Completely correct and the operative phrase here is “absurd amounts” which actually captures our entire contemporary computing stack in almost every dimension that matters.

      • tannhaeuser 3 days ago ago

        The entire point of markup attributes is to contain rendering hints that themselves aren't rendered to the user as such. Hell, angle-bracket markup itself was introduced to unify and put a limit to syntactic proliferation. But somehow "we" arrived at creating the monstrosity that is CSS and then even to put CSS and JS into inline element content with bogus special escaping and comment parsing rules rather than into attributes and external resources.

        The enormous proliferation of syntax and super-complicated layout models doesn't stop markup haters from crying wolf because entities (text macros) represent a security risk in markup, however; go figure.

    • jeltz 3 days ago ago

      But did it ever actually work in practice? As I remember it, the XSLT-backed websites still needed "absurd amounts of CSS microsyntax". You could not do everything you needed with XSLT, so you needed to use both XSLT and CSS. Also, coding in XSLT was generally painful, even more so than writing CSS (which I think is another poorly designed language).

      It is all well and good to talk about theoretical alternatives that would have been better but we are talking here about a concrete attempt which never worked beyond trivial examples. Why should we keep that alive because of something theoretical which in my opinion never existed?

      • vbezhenar 3 days ago ago

        XSLT is template language. CSS is styling language. They have nothing to do with each other. You have data in some XML-based format. You write template using XSLT to transform that data into HTML. And then you use CSS to make that HTML look pretty. These technologies work very well with each other.
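
        A minimal sketch of that division of labor (file names and element names here are illustrative): the XSLT template turns the data into HTML, and a separate CSS file styles the result.

        ```xml
        <!-- books.xsl: transform a hypothetical <books> data document into HTML -->
        <xsl:stylesheet version="1.0"
            xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
          <xsl:template match="/books">
            <html>
              <!-- styling stays in CSS, where it belongs -->
              <head><link rel="stylesheet" href="style.css"/></head>
              <body>
                <ul>
                  <xsl:for-each select="book">
                    <li class="book"><xsl:value-of select="title"/></li>
                  </xsl:for-each>
                </ul>
              </body>
            </html>
          </xsl:template>
        </xsl:stylesheet>
        ```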

  • 8organicbits 3 days ago ago

    It's interesting that we don't have a replacement for this use case. For me, XSLT hits a sweet spot where I can send a machine-parsable XML document and a small XSLT sheet from dirt cheap static web hosting (where I cannot perform server-side transforms, or control HTTP headers). This is fairly minimal and avoids needing to keep multiple files in sync.

    I could add a polyfill, but that adds multiple MB, making this approach heavyweight.

  • vbezhenar 3 days ago ago

    XSLT was the only convenient way to create a static website without JS. Other ways either require build step or server-side applications. With XSLT, you could write data into XML files, templating into XSL files and it'll just work.
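
    As a sketch of that pattern (file names illustrative): the data file just points at the stylesheet with a processing instruction, and the browser does the rest.

    ```xml
    <?xml version="1.0" encoding="UTF-8"?>
    <?xml-stylesheet href="template.xsl" type="text/xsl"?>
    <!-- The browser fetches template.xsl, applies it to this document,
         and renders the resulting HTML: no build step, no server code, no JS. -->
    <books>
      <book><title>An Example Title</title></book>
    </books>
    ```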

    Of course you can achieve similar effects with JS, by downloading data files and rendering them into whatever HTML you want. But that cuts off users without JS enabled.

    Not a huge loss, I guess, given the lack of popularity of these technologies. But loss nonetheless. One more step to bloated overengineered web.

    • DonHopkins 3 days ago ago

      > But that cuts off users without JS enabled.

      Users who disable JS are insane and hypocritical if they don't also disable XSLT, which is even worse. So I wouldn't bend over too far backwards to support insane hypocrites. There aren't enough of them to matter, they enjoy having something to complain about, and they're much louder and more performative than the overwhelming majority of users. Not a huge loss cutting them out at all.

      • vbezhenar 3 days ago ago

        I don't think you can disable XSLT without patching browser. For disabling JavaScript, there are dedicated checkboxes in every browser.

      • 3 days ago ago
        [deleted]
  • conartist6 3 days ago ago

    I haven't been too chatty about it but the furor over this being removed has, I suspect, everything to do with there being no real plan to replace what it does. No I don't just mean styling RSS feeds. I mean writing websites as semantic documents!! The whole thing the web is (was) about!

  • NoboruWataya 3 days ago ago

    > Tell your friends and family about XSLT.

    I had a good chuckle at the idea of sitting around the dinner table at Christmas telling my parents and in-laws all about XSLT.

    • alfiedotwtf 3 days ago ago

      Don’t… you’re forgetting the Christmas of ’02 when cousin Marvin brought up the issue of Tabs vs Spaces!! Uncle Frank still holds a grudge and he’s still not on speaking terms with Adam

  • sdovan1 3 days ago ago

    I've worked with a hospital; their electronic medical records are written in XML and use XSLT to render HTML.

    • coldtea 3 days ago ago

      They will be able to do that in perpetuity.

      It's just direct browsing support for rendering using XSLT that's removed.

    • LumielGR 3 days ago ago

      XSLT is terrible though, at least XQuery is a nice language.

    • atemerev 3 days ago ago

      Which is one excellent use of XSLT. It is not that useful for general web.

      • CaliforniaKarl 3 days ago ago

        From https://chromeenterprise.google:

        > For over a decade, Chrome has supported millions of organizations with more secure browsing – while pioneering a safer, more productive open web for all.

        … and …

        > Our commitment to Chromium and open philosophy to integration means Chrome works well with other parts of your tech stack, so you can continue building the enterprise ecosystem that works for you.

        Per the current version of https://developer.chrome.com/docs/web-platform/deprecating-x..., by August 17, 2027, XSLT support is removed from Chrome Enterprise. That means even Chrome's enterprise-targeted, non-general-web browser is going to lose support for XSLT.

        • bawolff 3 days ago ago

          Most people who use XSLT like the grandparent described were never using it on the client side but on the server side. Nothing Google Chrome does will affect the server side.

      • atemerev 3 days ago ago

        To clarify: initially, the first web browser evolved from an SGML-based documentation browser at CERN. This was the first vision of the web: well-structured content pages, connected via hyperlinks (the "hyper" part meaning that links could point beyond the current set of pages). So, something like a global library. Many people are still nostalgic for this past.

        Surprisingly, the "hyperlinked documents" structure was universal enough to allow rudimentary interactive web applications like shops or reservation forms. The web became useful to commerce. At first, interactive functionality was achieved by what amounted to hacks: nav blocks repeated at every page, frames and iframes, synchronous form submissions. Of course, web participants pushed for more direct support for application building blocks, which included Javascript, client-side templates, and ultimately Shadow DOM and React.

        XSLT is ultimately a client-side template language too (can be used at the server side just as well, of course). However, this is a template language for a previous era: non-interactive web of documents (and it excels at that). It has little use for the current era: web of interactive applications.

        • eftpotrm 3 days ago ago

          What makes XSLT inherently unsuitable for an interactive application in your mind? All it does is transform one XML document into another; there's no earthly reason why you can't ornament that XML output in a way that supports interactive JS-driven features, or use XSLT to build fragments of dynamically created pages that get compiled into the final rendered artifact elsewhere.
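
          For instance, nothing stops a stylesheet from emitting event handlers or data attributes for client-side script to pick up. A hypothetical sketch (expand() would be an ordinary JS function defined elsewhere):

          ```xml
          <!-- Inside a stylesheet: the output is just HTML, and HTML can be
               interactive. {@id} is an attribute value template copying the
               source element's id into the output attribute. -->
          <xsl:template match="item">
            <button type="button" onclick="expand(this)" data-id="{@id}">
              <xsl:value-of select="name"/>
            </button>
          </xsl:template>
          ```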

        • the_other 3 days ago ago

          My only use of XSLT (2000-2003) was to make interactive e-learning applications. I'd have used it in 2014 too, for an interactive "e-brochure", if I could have worked out a cross-browser solution for runtime transformation of XML fragments. (I suspect it was possible then but I couldn't work it out in the time I had for the job...)

          If you can use it to generate HTML, you can use it to generate an interactive experience.

        • cluckindan 3 days ago ago

          What if you used JS to make XSLT interactive? :-)

  • beardyw 3 days ago ago

    XSLT has a life outside the browser and remains valuable where XML is the way data is exchanged. And RSS does not demand XSLT in the browser so far as I know. I think RIP is a bit excessive.

  • eterevsky 3 days ago ago

    In all seriousness, XSLT looked stillborn even 25 years ago when it was introduced.

    • alexdowad 3 days ago ago

      Agree. It always seemed like a strange and poorly conceived technology to me.

      • DonHopkins 3 days ago ago

        It was just castrated DSSSL.

  • 3 days ago ago
    [deleted]
  • zkmon 3 days ago ago

    Looks like more of a retro-fun site than a protest. Most serious websites of the '90s had more of a light brownish background with black text with an occasional small image on the side, double borders for table cells, Times font, horizontal rules, links in bold blue, a side-bar with navigation links, bread-crumbs at the top telling where you are now, maybe also next-prev links at the bottom, and a title banner at the top.

    Game sites and other "desperate-for-attention" sites have the animated gifs all over, scrolling or blinking text, dark background with bright multi-colored text with different font sizes and types and sound as well, looking pretty chaotic.

  • cm-t 3 days ago ago

    Killing RSS = killing the decentralized internet (blogs, podcasts, etc) = empowering centralized platforms such as YouTube, Spotify (etc)

    • jeroenhd 3 days ago ago

      Youtube has pretty much always supported RSS and still does. Google killed their RSS reader, but if they wanted to kill RSS they wouldn't put it in their video platform.

      When it comes to killing web technology, Google is mostly killing their own weird APIs that nobody ended up using or pruning away code that almost nobody uses according to their statistics.

      • themafia 3 days ago ago

        > Youtube has pretty much always supported RSS and still does.

        It has RSS feeds for individual channels. It does not _support_ RSS in any meaningful way.

        • pvdebbe 3 days ago ago

          Can you please clarify? For me, maintaining my own watch lists, that is, per channel RSS feeds, all neatly organized in my RSS aggregator's folders, is the only way to fly.

  • boesboes 3 days ago ago

    Got to love the GitHub issue; it shows exactly the sad state of things. Google owns the internet now and we are all chumps for even thinking there is anything open left.

    Dissenting opinions will be marked as abuse!

  • lopsotronic 3 days ago ago

    As a man locked inside of a closet made mostly of XSL, my only regret is that I can't drown it in a bathtub myself.

    The XML Priesthood will immediately jump down your throat about "XSL 3 Fixes All Things" or "But You're Not Doing It Correctly", and then point towards a twenty year old project that has five different proprietary dependencies, only two of which even still have a public cost. "Email Jack for Pricing".

    And all this time, the original publishing requirement for these stone age pipelines is completely subsumed by the lightweight markup ecosystem of the last decade, or, barring that, that of TeX. So much complexity for no reason whatsoever, I am watching man-centuries go up in frickin' smoke, to satisfy a clique of semantic academics who think all human thought is in the form of a tree.

    The horror, the horror.

  • littlecranky67 3 days ago ago

    Website is overly dramatic. Google doesn't hate XSLT; it is simply that no one wants to maintain libxslt and it is full of security issues. Given how rarely it is used, it is just not worth the time + money. If the author wants to raise money to pay a developer willing to maintain libxslt, Google might revise the decision.

    • troupo 3 days ago ago

      > it is simply that no one wants to maintain libxslt and it is full of security issues. Given how rarely it is used, it is just not worth the time + money.

      As for money: Remind me what was Google's profit last year?

      As for usage: XSLT is used on about 10x more sites [1] than Chrome-only non-standards like USB, WebTransport and others that Google has no trouble shoving into the browser

      [1] Compare XSLT https://chromestatus.com/metrics/feature/timeline/popularity... with USB https://chromestatus.com/metrics/feature/timeline/popularity... or WebTransport: https://chromestatus.com/metrics/feature/timeline/popularity... or even MIDI (also supported by Firefox) https://chromestatus.com/metrics/feature/timeline/popularity...

      • vladms 3 days ago ago

        For me the usage argument sounds like an argument to kill the other standards rather than to keep this one.

        Browsers should try things. But if after many years there is no adoption they should also retire them. This would be no different if the organization is charity or not.

        • troupo 3 days ago ago

          > For me the usage argument sounds like an argument to kill the other standards rather than to keep this one.

          Google themselves have a document on why killing anything in the web platform is problematic: e.g. Chrome stats severely under-report corporate usage. See "Blink principles of web compatibility" https://docs.google.com/document/d/1RC-pBBvsazYfCNNUSkPqAVpS...

          It has great examples for when removal didn't break things, and when it did break things etc.

          I don't know if anyone pays attention to this document anymore. Someone from Chrome linked to this document when they wanted to remove alert/prompt, and it completely contradicted their narrative.

      • bawolff 3 days ago ago

        > Remind me what was Google's profit last year?

        Last I checked, Google isn't a charity.

        • basscomm 3 days ago ago

          > Last I checked, Google isn't a charity.

          Last I checked, Google isn't supposed to be able to unilaterally decide how the World Wide Web is supposed to work

        • mschuster91 3 days ago ago

          Their products are built on open source. Android and Chrome come to my mind, but also their core infrastructure, it's all Linux and other FOSS under the hood.

          Besides, xkcd #2347 [1] is talking about precisely that situation - there is a shitload of very small FOSS libraries that underpin everything and yet, funding from the big dogs, for whom even ten full-time developer salaries would be a sneeze, has historically been sorely lacking.

          [1] https://xkcd.com/2347/

          • bawolff 3 days ago ago

            The thing is, xslt isn't underpinning much of anything, that is why google is removing it instead of fixing it.

            Google does contribute to software that it uses. When I say Google is not a charity, I mean: why would they continue to use a library that is not useful to them, just so they can have an excuse to contribute to it? It makes very little sense.

            • mschuster91 3 days ago ago

              > The thing is, xslt isn't underpinning much of anything

              An awful lot of stuff depends on xslt under the hood. Web frontend, maybe not much any more, that ship has long since sailed. But anything Java? Anything XML-SOAP? That kind of stuff breathes XML and XSLT. And, at least MS Office's new-generation file formats are XML... and I'm pretty sure OpenOffice is just the same.

              • bawolff 3 days ago ago

                Let me rephrase that: client-side XSLT in the browser isn't underpinning much of anything. I agree there are more uses in the enterprise world, although I think most of your examples are more XML than XSLT (people really shouldn't conflate the two; XML underpins half the world). I've never heard of anyone using XSLT on a Microsoft Office docx file.

                I'd also assume the java world is using xalan-j or saxon, not libxslt.

            • troupo 3 days ago ago

              > The thing is, xslt isn't underpinning much of anything

              Neither do huge complicated standards that Chrome pushed in recent years.

              > that is why google is removing it instead of fixing it.

              And yet Google has no issues supporting, deploying and fixing features that see 10x less usage. Also, see this comment: https://news.ycombinator.com/item?id=45874740

              > I mean: why would they continue to use a library that is not useful to them, just so they can have an excuse to contribute to it? It makes very little sense.

              They took upon themselves the role of benevolent stewards of the web. According to their own principles they should exercise extreme care when adding or removing features to the web.

              However, since they dominate the browser market, and have completely subsumed all web-related committees, they have turned into arrogant uncaring dictators.

              • bawolff 3 days ago ago

                > However, since they dominate the browser market, and have completely subsumed all web-related committees, they have turned into arrogant uncaring dictators.

                Apple and Firefox agree with them. They did not do this unilaterally. By some accounts it was actually Firefox originally pushing for this.

          • maple3142 3 days ago ago

            To be honest, there are two ways to solve the problem of xkcd 2347: either put effort into the very small library, or stop depending on it. Both solutions are fine to me, and Google apparently just chose the latter one here.

            • bawolff 3 days ago ago

              If not depending on a library is an option, then you dont really have an xkcd 2347 problem. The entire point of that comic is that some undermaintained dependencies are critical, without reasonable alternatives.

            • 1718627440 3 days ago ago

              Except it's not Google whose "products" stop working by removing that dependency.

    • verytrivial 3 days ago ago

      "Full of security issues" is similarly overly dramatic, haha. Fil-C appears to already compile libxml2 [1], so I wonder how far off libxslt would be?

      [1] https://github.com/pizlonator/fil-c/tree/deluge/projects/lib...

      • JimDabell 3 days ago ago

        > Full of security issues is similarly overly dramatic

        It doesn’t seem dramatic at all:

        > Finding and exploiting 20-year-old bugs in web browsers

        > Although XSLT in web browsers has been a known attack surface for some time, there are still plenty of bugs to be found in it, when viewing it through the lens of modern vulnerability discovery techniques. In this presentation, we will talk about how we found multiple vulnerabilities in XSLT implementations across all major web browsers. We will showcase vulnerabilities that remained undiscovered for 20+ years, difficult to fix bug classes with many variants as well as instances of less well-known bug classes that break memory safety in unexpected ways. We will show a working exploit against at least one web browser using these bugs.

        https://www.offensivecon.org/speakers/2025/ivan-fratric.html

        https://www.youtube.com/watch?v=U1kc7fcF5Ao

        > libxslt -- unmaintained, with multiple unfixed vulnerabilities

        https://vuxml.freebsd.org/freebsd/b0a3466f-5efc-11f0-ae84-99...

    • themafia 3 days ago ago

      > no one wants to maintain libxslt

      For $0? Probably not. For $40m/year, I bet you could create an entire company that just maintains and supports all these "abandoned" projects.

      • ExoticPearTree 3 days ago ago

        > For $0? Probably not. For $40m/year, I bet you could create an entire company

        No sane commercial entity will dump even a cent into supporting an unused technology.

        You have better luck pitching this idea to your senator to set up an agency for dead stuff - it will create tens or hundreds of jobs. And what's $40mm in the big picture?

        • themafia 3 days ago ago

          > your senator

          Funny you should mention that. US Title Code uses XSLT.

          https://simonwillison.net/2025/Aug/19/xslt/

          • ExoticPearTree 3 days ago ago

            I know it is there. I am more curious as to why no one updated all that to modern browser technology.

            • phantasmish 3 days ago ago

              Until these recent rumblings out of Google, it was modern browser technology.

              • ExoticPearTree 3 days ago ago

                > Until these recent rumblings out of Google, it was modern browser technology.

                It is supported technology. That's all it is. And it will be no more.

                No one is stopping you from rendering your XML to HTML server side using XSLT.

            • wpm 3 days ago ago

              Why change it? It's only an update if it improves something.

    • basscomm 3 days ago ago

      > Google doesn't hate XSLT, it is simply no one wants to maintain libxslt and it is full of security issues. Given how rarely it is used, it is just not worth the time + money. If the author wants to raise money to pay a developer willing to maintain libxslt, Google might revise the decision.

      Counterpoint: google hates XML and XSLT. I've been working on a hobby site using XML and XSLT for the last five years. Google refused to crawl and index anything on it. I have a working sitemap, a permissive robots.txt, a googlebot html file proving that I'm the owner of the site, and I've jumped through every hoop I can find, and they still refused to crawl or index anything except a snippet of the main index.xml page and they won't crawl any links on that.

      I switched everything over to a static site generator a few weeks ago, and Google immediately crawled the whole thing and started showing snippets of the entire site in less than a day.

      My guess is that their usage stats are skewed because they've designed their entire search apparatus to ignore it.

    • cubefox 3 days ago ago

      Why not switch the browser to use a JavaScript implementation internally instead of the old C++ implementation?

    • testdelacc1 3 days ago ago

      I think they’re being dramatic for laughs.

    • yxhuvud 3 days ago ago

      Honestly, let it die. Perhaps the standard will die, or perhaps someone will make an open source solution that actually supports XSLT 2 and 3.

  • dinkelberg 3 days ago ago

    If you want to keep XSLT in browsers alive, you should develop an XSLT processor in Rust and either integrate it into Blink, Webkit, Gecko directly, or provide a compatible API to what they use now (libxslt for Blink/Webkit, apparently; Firefox seems to have its own processor).

    • krackers 3 days ago ago

      There's no need; they already have a polyfill for XSLT, I believe. They could ship that as part of the browser, or compile libxslt to WebAssembly.

    • sedatk 3 days ago ago

      > you should

      or a multi-trillion dollar company should.

      • dinkelberg 3 days ago ago

        Fair point. But probably not going to happen...

  • aaronrobinson 3 days ago ago

    Google isn’t killing XSLT. They just don’t want to support it in their browser any more. The site is misleading.

    • itsgrimetime 3 days ago ago

      When you have 70+% browser market share, stopping support for something _is_ killing it.

      • eXpl0it3r 3 days ago ago

        It is misleading in so far that XSLT is an independent standard [1] and isn't owned by Google, so they cannot "kill it", or rather they'd have to ask W3C to mark it as deprecated.

        What they can do is remove support for XSLT in Chrome and thus basically kill XSLT for websites. Which until now I didn't even know was supported and used.

        XSLT can be used in many other areas as well, e.g. for XSL-FO [2]

        [1] https://www.w3.org/TR/xslt-30/ [2] https://en.wikipedia.org/wiki/XSL_Formatting_Objects

        • James_K 3 days ago ago

          You say they cannot kill it, and yet they are about to. We'll see who wins, reality or your word games.

          • aaronrobinson 2 days ago ago

            What are you talking about? They can’t kill it. Most use is outside the browser. It’s not about word games.

      • mortarion 3 days ago ago

        I don't think XSLT was invented for the purpose of rendering XML into HTML in the first place. Perhaps it never should have been introduced in browsers to begin with?

        • vbezhenar 3 days ago ago

          XSLT was invented to transform one XML document to another XML document.

          Browser can render XHTML which is also a valid XML.

          So it's pretty natural to use XSLT to convert XML into XHTML which is rendered by browser. Of course you can do it on the server side, but client side support enables some interesting use-cases.

        • righthand 3 days ago ago

          Wrong. Can you dissenters at least provide proof of your nonsense lies? XSLT is a part of the HTML standard.

      • aaronrobinson 2 days ago ago

        Are you aware that XSLT is mostly used outside the browser?

  • lambdaone 3 days ago ago

    There is absolutely nothing to prevent anyone from generating arbitrary DOM content from XML using JS; indeed, there's nothing stopping them from creating a complete XSLT implementation. There's just no need to have it in the core of the browser.

    • phantasmish 3 days ago ago

      You don’t need to generate anything with JavaScript, aside from one call to build an entire DOM object from your XML document. Boom, whole thing’s a DOM.

      I guess the fact that it’s obscure knowledge that browsers have great, fast tools for working directly with XML is why we’re losing nice things and will soon be stuck in a nothing-but-JavaScript land of shit.

      Lots of protocols are based on XML and browsers are (though, increasingly, “were”) very capable of handling them, with little more than a bridge on the server to overcome their inability to do TCP sockets. Super cool capability, with really good performance because all the important stuff’s in fast and efficient languages rather than JS.

  • troupo 3 days ago ago

    The web site should also use terms like "arrogant priests rule the web" from browsers' attempt to kill alert/prompt: https://www.quirksmode.org/blog/archives/2021/08/breaking_th...

    Also: "the needs of users and authors (i.e. developers) should be treated as higher priority than those of implementors (i.e. browser vendors), yet the higher priority constituencies are at the mercy of the lower priority ones": https://dev.to/richharris/stay-alert-d

  • creatonez 3 days ago ago

    The RSS argument makes no sense to me. Viewing styled RSS feeds in your browser is not a conventional way to use RSS, and is not what hardcore RSS users actually want (which is, a unified UI for all their news, without any fancy style, and without any place to even put ads). The styled version of an RSS feed, in the rare circumstances it even exists, is specifically for the non-technical users, who will be perfectly happy with a polyfilled or backend implementation.

  • zgk7iqea 3 days ago ago

    Why not just write an XSLT implementation in JS/WASM, or compile the existing one to WASM? This is the same approach that Firefox uses for PDFs and Ruffle for Flash. That way it is still supported by the browser and sandboxed.

    • gucci-on-fleek 3 days ago ago

      This already exists, and I agree that it's the best solution here, but for some reason this was rejected by the Chrome developers. I discussed this solution a little more elsewhere in the thread [0].

      [0]: https://news.ycombinator.com/item?id=45874461

      • zgk7iqea 3 days ago ago

        Very interesting, thanks.

        One point from one of the linked threads I find particularly puzzling:

        > I think the issue with XSLT isn't necessarily the size of the attack surface, it's the lack of attention and usage.

        > I.e. nearly 100% of sites use JS, while 1/10000 of those use XSLT. So all of the engineering energy (rightfully) goes to JS, not XSLT.

        XSLT is a finished standard. Not everything needs to evolve. If the implementation works and is safe, what speaks against keeping it?

  • jll29 3 days ago ago

    Google cannot kill anything on its own.

    If people continue to use XML-supporting technology, these open standards will continue to thrive.

    I'm sure this site will be supported eventually by the Ladybird Web browser - can't wait to switch to it next August.

    • 0x073 3 days ago ago

      Google = Chrome = they can for most users

  • zerkten 3 days ago ago

    Did a JS polyfill ever go anywhere? There is a comment on https://groups.google.com/a/chromium.org/g/blink-dev/c/zIg2K... which suggests that it might be possible, but a lot has changed. I suspect any effort died with continued availability after the first attempt to kill XSLT.

  • eversor1 3 days ago ago

    XSLT was once described to me as "Pain wrapped in Hate", and I fully agree. I'm truly shocked that there is ANY opposition to its removal and retirement.

    • nashashmi 3 days ago ago

      Stockholm Syndrome: we went through the torture of learning it. And now we love it

      • wpm 3 days ago ago

        Or more charitably: this hard to master tool is hard to master but incredibly useful once you learn it

  • joeturki 3 days ago ago

    This is unfortunate and sad but understandable. Slightly off-topic: a friend dared me to look for a sandbox CSP bypass and I discovered one using XSLT. I reported it to Mozilla a few months ago, CVE-2025-8032. https://www.mozilla.org/en-US/security/advisories/mfsa2025-5...

  • tomaytotomato 3 days ago ago

    My first graduate job at a large British telco involved a lot of XML...

    - WSDL files that were used to describe Enterprise services on a bus. These were then stored and shared in the most convoluted way in a Sharepoint page <shudders>

    - XSD definitions of our custom XML responses to be validated <grimace>

    - XSLTs to allow us to manipulate and display XML from other services, just so it would display properly on Oracle Siebel CRM <heavy sweats>

  • shadowgovt 3 days ago ago

    Poe's Law fully in effect here. Given the 90s-era eye-gouge layout, I can't tell if the author endorses continued support of XSLT or is doing a "Modest Proposal"-style satire by conflating those who support continued native implementation of XSLT with those who pine for the days when most of the web looked like this.

    • 3 days ago ago
      [deleted]
  • paulirish 3 days ago ago

    A counterpoint to the idea that this is entirely Google's doing: https://meyerweb.com/eric/thoughts/2025/08/22/no-google-did-...

    • jannes 3 days ago ago

      I think you should disclose that you work on the Google Chrome team in a post like this.

      • paulirish 3 days ago ago

        Yeah my bad; I was on the go. I'm on the Chrome team, I work on DevTools.

    • righthand 3 days ago ago

      This is in no way a counterpoint. You don’t get to be a billion-dollar company that can fix XSLT, ignore other libraries’ security issues, and then tell us it’s broken.

      Fuck Google you tyrants, all the dissenting opinions in this thread about XSLT are clearly Google employees.

  • Tepix 3 days ago ago

    Since the XSLTProcessor feature can be realized with a Polyfill (https://github.com/mfreed7/xslt_polyfill), I find myself agreeing with Google.

    Btw, I love this page! Highly entertaining, yet at the same time an actual use of XSLT.

    • NoGravitas 3 days ago ago

      If they were going to ship the xslt polyfill by default with Chrome, I wouldn't disagree.

    • axus 3 days ago ago

      Does it uh work with RSS?

  • supermatt 3 days ago ago

    > XSLT will soon enter the Google graveyard.

    AFAIK the "google graveyard" is just for google products they have killed off.

    • altfredd 3 days ago ago

      Given that Google owns the Web, it can be argued that any web tech killed by Google is a part of the Google Graveyard

  • rpigab 3 days ago ago

    If they have security in mind, they should intend to deprecate and remove HTML. The benefits of keeping it are slowly disappearing as AI content takes over the web, and HTML contains far more quirks than XSLT; and let's not talk about the aging C codebases for HTML...

    • jeltz 3 days ago ago

      What security vulnerabilities do you think of? Modern html5 parsers are really good and secure. The html5 standard largely solved the issues.

  • LtdJorge 3 days ago ago

    Cool usage of XSLT:

    https://tomi.vanek.sk/ a WSDL viewer implemented as a set of XSLT transformations that translate the original XML definitions into HTML.

  • righthand 3 days ago ago

    This is about forcing everyone into JSON. Incredibly sad, the amount of “just take Google’s word for it” in this thread. We have truly lost our way as a tech-embracing society, eschewing reason.

    There is a reason the lead Google engineers initials are “MF”.

  • pipeline_peak 3 days ago ago

    >RSS is used to syndicate NEWS and by killing it Google can control the media

    They can also avoid wasting resources on a format only used by “Raspberry Pi guys”.

    “I just made an app that tracks local tandem bikes in the San Francisco Bay area”

  • richard_todd 3 days ago ago

    I think the problem with XSLT is that it's only a clear win to represent the transform in XML to the extent that it is declarative. But, as transformations get more complex, you are going to need functions/variables/types/loops, etc. These got better support in XSLT 2 and 3, but it's telling that many xslt processors stuck with 1.0 (including libxslt and the Microsoft processors). I think most people realized that once they need a complex, procedural transformation, they'd prefer using a traditional language and a good XML library to do it.

    I don't like seeing any backward compatibility loss on the web though, so I do wish browsers would reconsider and use a js-based compatibility shim (as other comments have mentioned).

  • Devasta 3 days ago ago

    I know that XSLT can be implemented in JS (and I have used Saxon-JS; it's good!) but the loss of the XML processing instruction's functionality will be a shame.

    There is nothing like it in the modern web stack. Such a pity.

  • j45 3 days ago ago

    It's hard enough to get all browsers to agree on standards in the same way, let alone to stay on top of removing things that already exist and might later be discovered to be useful.

  • lloydatkinson 3 days ago ago

    Given that XSLT transforms XML into HTML, why has no one simply built a server side XSLT system? So these existing sites that use XSLT can just adopt that, and not need to rely on browser support.

    • pferde 3 days ago ago

      I remember Gentoo Linux had all its official documentation in a system just like that, maybe 15-20 years ago. It was written and stored as XML, XSLT-processed and rendered into HTML on the webservers.

      They moved everything into a wiki later.

      EDIT: Oh, their developers' manual is still done like that: https://github.com/gentoo/devmanual into https://devmanual.gentoo.org/

    • Fileformat 3 days ago ago

      I want to use it on an RSS feed: to make it sensible when a new user clicks on an RSS link.

      I specifically want it to be served as XML so it can still be an RSS feed. I don't even need the HTML to look that great: I have the actual website for that.

      Example: https://www.fileformat.info/news/rss.xml
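
      For context, the whole technique is one processing instruction at the top of the feed: the document stays a valid RSS feed for readers, while browsers that honor the instruction render it through the stylesheet. A minimal sketch (the /feed.xsl path is illustrative):

      ```xml
      <?xml version="1.0" encoding="UTF-8"?>
      <?xml-stylesheet href="/feed.xsl" type="text/xsl"?>
      <rss version="2.0">
        <channel>
          <title>Example Feed</title>
          <link>https://example.com/</link>
          <description>Feed readers parse this as plain RSS; browsers
            apply /feed.xsl and can show a friendly HTML page instead.</description>
        </channel>
      </rss>
      ```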

    • JimDabell 3 days ago ago

      Server-side XSLT tools have existed for 25 years or so. The people complaining about this want existing websites using XSLT on the client to continue to work without changes.

      • mmis1000 3 days ago ago

        It's actually possible to support it by re-implementing it in JS, or compiling it to WASM and running it on the client side. There are extensions that do this for PDF (pdf.js), Flash (Ruffle), and MHT (UnMHT), so the same should be possible for XSLT. The real question is: who wants to? Does XSLT have a large user base like PDF, Flash, or MHT?

  • guerrilla 3 days ago ago

    So sad. I love XSLT. I wish XML had been the thing instead of JSON.

  • 01-_- 3 days ago ago

    What a beautiful look. I really like websites with this design :)

  • insin 3 days ago ago

    Show people what looping over a range looks like in XSLT, you cowards!

    I used to generate a blog and tumblelog entirely from XML files using an XSLT processor, it will not be missed.
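
    Taking up the dare: here is a sketch of what counting from 1 to N looks like in XSLT 1.0, which has no loop construct, so a "loop" is recursion through a named template (the template and parameter names here are invented for illustration):

    ```xml
    <xsl:stylesheet version="1.0"
                    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
      <!-- Emit <li>1</li> through <li>$n</li> by recursing until $i exceeds $n. -->
      <xsl:template name="loop">
        <xsl:param name="i" select="1"/>
        <xsl:param name="n"/>
        <xsl:if test="$i &lt;= $n">
          <li><xsl:value-of select="$i"/></li>
          <xsl:call-template name="loop">
            <xsl:with-param name="i" select="$i + 1"/>
            <xsl:with-param name="n" select="$n"/>
          </xsl:call-template>
        </xsl:if>
      </xsl:template>

      <xsl:template match="/">
        <ul>
          <xsl:call-template name="loop">
            <xsl:with-param name="n" select="5"/>
          </xsl:call-template>
        </ul>
      </xsl:template>
    </xsl:stylesheet>
    ```

    XSLT 2.0 reduced this to `<xsl:for-each select="1 to $n">`, but as noted elsewhere in the thread, libxslt and the browser processors never moved past 1.0.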

  • mvdtnz 3 days ago ago

    It has been absolutely bizarre to watch people here pretend they like or even care about XSLT just because the big bad google is killing it.

  • juliangmp 3 days ago ago

    Hearing about this again and again and I still need to ask: who actually uses that, and for what?

    And how does it break RSS? (Which I at least heard of people using it before)

    • jeltz 3 days ago ago

      Some people used XSLT to style their RSS feeds when displaying them in the browser. An alternative is to use CSS to style the feeds. Personally I don't see why I would want styled feeds.
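
      For comparison, the CSS route is the same processing instruction with a different type. The trade-off is that CSS can restyle the feed's elements in place, say hide guids and enlarge titles, but cannot restructure the document or turn each item into a clickable link, which is what XSLT gets used for. A sketch (the /feed.css path is illustrative):

      ```xml
      <?xml version="1.0" encoding="UTF-8"?>
      <?xml-stylesheet href="/feed.css" type="text/css"?>
      <rss version="2.0">
        <channel>
          <title>Example Feed</title>
        </channel>
      </rss>
      ```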

  • egorfine 3 days ago ago

    I truly loved XSLT back in the day and I strongly believe it to be an ingenious technology.

    And I truly believe it's time to retire this monstrosity.

  • rob 3 days ago ago

    I tried to use a PHP CMS called Symphony (not the Symfony framework) that was built around XSLT back in the early-to-mid 2000s. It was definitely interesting, and a learning curve.

  • James_K 3 days ago ago

    To make the web safer, they will replace simple static web pages with remote code execution on the user's machine. Yet another “fuck you” to people who don't want to shove JavaScript in everything. God forbid I serve a simple static site to people. Nonono. XSLT is fantastic for people who actually want to write XML documents like the good old days, or add styling to Atom feeds.

    Edit: and for a slightly calmer response: Google has like, a bajillion dollars. They could address any security issues with XSLT by putting a few guys on making a Rust port and have it out by next week. Then they could update it to support the modern version in two weeks if it being out of date is a concern. RSS feeds need XSLT to display properly, they are a cornerstone of the independent web, yet Google simply does not care.

  • 3 days ago ago
    [deleted]
  • notepad0x90 3 days ago ago

    OP, I love the site. teach me this dead art of HTML! :)

    Also, doesn't Excel use XSLT or am I thinking of something else?

  • imiric 3 days ago ago

    It's truly troubling to see a trillion dollar corporation claim that the reason for removing a web browser feature that has existed since the 90s is because the library powering it was unmaintained for 6 months, and has security issues. The same library that has been maintained by a single developer for years, without any corporate support, while corporations reaped the benefits of their work.

    Say what you will about how this is technically allowed in open source, it is nothing short of morally despicable. A real https://xkcd.com/2347/ situation.

    It would cost Google practically nothing to step up and fix all security issues, and continue maintenance if they wanted to. To say nothing of simply supporting the original maintainer financially.

    But IMO the more important topic within this whole saga is that libxml2 maintenance will also end this year. Will we also see support for XML removed?

    • bawolff 3 days ago ago

      > Say what you will about how this is technically allowed in open source, it is nothing short of morally despicable. A real https://xkcd.com/2347/ situation.

      I think https://xkcd.com/1172/ is more fitting.

      > But IMO the more important topic within this whole saga is that libxml2 maintenance will also end this year. Will we also see support for XML removed?

      No, because xml has meaningful usage on the web. The situations are very different.

      • imiric 3 days ago ago

        > No, because xml has meaningful usage on the web. The situations are very different.

        They're really not. If "meaningful usage" was a factor, Google should stop maintaining AMP, USB, WebTransport, etc.[1]

        If security and maintenance are a concern, then they should definitely also remove XML, since libxml2 has the same issues as libxslt.

        [1]: https://news.ycombinator.com/item?id=45873787

        • jll29 3 days ago ago

          Google says:

          > Similar to the severe security issues in libxslt, severe security issues were recently reported against libxml2 which is used in Chromium for parsing, serialization and testing the well-formedness of XML. To address future security issues with XML parsing In Chromium we plan to phase out the usage of libxml2 and replace XML parsing with a memory-safe XML parsing library written in Rust

          Perhaps there are some Rust gurus out there that can deliver a XSLT crate in a similar fashion, which other folks can then integrate?

          The problem seems to be that the current libxslt library is buggy due to being written in C, an unsafe language (use-after-free etc.).

          [BTW, David Hanson's old book "C: Interfaces and Implementations" demonstrated how to code in C in a way that avoids use-after-free: use pointers to pointers instead of plain pointers, and set them to NULL upon freeing memory blocks; e.g.

            /* source: https://github.com/drh/cii/blob/master/src/arena.c */
            void Arena_dispose(T *ap) {
              assert(ap && *ap);
              Arena_free(*ap);
              free(*ap);
              *ap = NULL; /* avoid use after free */
            }
          
          ]
          • bawolff 3 days ago ago

            > Perhaps there are some Rust gurus out there that can deliver a XSLT crate in a similar fashion, which other folks can then integrate?

            Even if one existed right now, i would be surprised if that changed googles mind.

            • jll29 3 days ago ago

              Just fork out an OpenChromium branch that adds in the new implementation. Whoever will want to remain compatible to open Web W3C recommendations can develop that branch.

            • imiric 3 days ago ago

              Agreed. Because this decision has nothing to do with safety or low usage, like they claim. It's just another example of a corporation abusing their dominance to shape the web according to their interests.

              • bawolff 3 days ago ago

                At the request of their competitor...

          • jll29 3 days ago ago

            Porting libxslt to Fil-C could also be an option: https://fil-c.org/invisicaps

        • bawolff 3 days ago ago

          > They're really not. If "meaningful usage" was a factor, Google should stop maintaining AMP, USB, WebTransport, etc.[1]

          Meaningful usage being a factor does not mean it is the only factor.

          I think it goes without saying that google isn't going to remove support for xml (including things like SVG) anytime soon.

        • TingPing 3 days ago ago

          There are more xml parsers than just that and it’s a smaller scope to rewrite or maintain.

    • ExoticPearTree 3 days ago ago

      > Will we also see support for XML removed?

      Hopefully YES.

      Let the downvotes come, I know there are XML die hard fans here on HN.

  • 3 days ago ago
    [deleted]
  • nflekkhnnn 3 days ago ago

    Is it possible to disable xslt rendering today already, perhaps with a browser flag? For security.

  • exploderate 3 days ago ago

    I wish there was a native XSLT library for Golang. Every model wants to shell out or requires CGO.

  • ivolimmen 3 days ago ago

    Hmm, I agree with the statement, but why does the website need to look like it's from the early 90's?

  • johndubchak 3 days ago ago

    I knew XSLT was just a passing fad...it only took 30 years of my career for it to pass...lol.

  • ChrisMarshallNY 3 days ago ago

    That’s a “classic”-looking site!

    Lots of Comic Sans and animated GIFs (which means that I still have XSLT, I guess).

  • neilv 3 days ago ago

    I have a little bit of skepticism about the move by Google (and you should usually be very skeptical, any time Web standards or Web "security" are talked about, lately), but...

    The gaudy retro amateur '95 design of this page might suggest the idea "anyone only cares about this for strange nostalgia reasons".

    Content-wise, I think this argument is missing a key piece:

    > Why does Google hate XML?

    > RSS is used to syndicate NEWS and by killing it Google can control the media. XSLT is used worldwide by [multiple government sites](https://github.com/whatwg/html/issues/11582). Google are now trying to control LEGISLATION. With these technologies removed what is stopping Google?

    Google wanting RSS/Atom dead, presumably for control/profit reasons, is very old news. And it's old news that Big Tech eventually started playing ball with US-style lobbying (to influence legislation) after resisting for a long time.

    But what does the writer think is Google's motivation for temporarily breaking access to US Congress legislative texts and misc. other gov't sites in this way (as alleged by that `whatwg` issues link)? What do they want, and how does this move advance that?

    We can imagine conspiracy theories, including some that would be right at home on a retro site with animated GIFs and a request to sign their guestbook, but the author should really spell out what they are asserting.

  • skrebbel 3 days ago ago

    I love everything about this site. The design, the vibe, the rhetoric.. It’s a work of art!

  • SuperHeavy256 3 days ago ago

    But what is XSLT? Why is it important?

    These points should be addressed first on the website.

  • jedimastert 2 days ago ago

    Wow, what an absolute blast from the past.

    The page styling harkens back to the style of some EARLY early personal amateur niche sites. It reminds me of Time Cube <https://web.archive.org/web/20150506055228/http://www.timecu...> or Neocities pages, even TempleOS in its earlier days.

    It's really taking me back, I'm actually getting a little emotional...

  • nashashmi 3 days ago ago

    Does anyone know of any XML viewing apps that support XSLT 3.0?

  • AlienRobot 3 days ago ago

    I have never seen a more trustworthy website in my life.

  • bravetraveler 3 days ago ago

    Great neuron exercise seeing Flaming Text again

  • saltysalt 3 days ago ago

    That website is delightfully old-school. Love it.

  • criticalfault 3 days ago ago

    > Google pays Mozilla up to $420 million per year...

    What the hell is Mozilla doing with that money? How useless are all those people?

  • layer8 3 days ago ago

    Oh man, it even has a custom mouse cursor!

  • gregjw 3 days ago ago

    they are playing us for fools!

  • postepowanieadm 3 days ago ago

    Love the aesthetics.

  • xg15 3 days ago ago

    Now that XSLT has the power of Comic Sans on its side, I don't know what could possibly go wrong anymore.

  • rurban 3 days ago ago

    It's not dead yet, a new maintainer showed up. But, Google Chrome decided to ditch it, which is fine by me. It was a cluster fuck, similar to libxml2, but even worse.

    Good old DSSSL days, sigh.

  • lloydatkinson 3 days ago ago

    Who on earth approved .rip as a TLD? Stupid

  • 6thbit 3 days ago ago

    OP, how dare you make the guestbook button fake :(

  • adzm 3 days ago ago

    I can't even tell if this is satire or just hyperbole.

  • codeulike 3 days ago ago

    Add XSLT to your website and weblog today before it is too late!

    I cannot tell if this is satire or not, very well done

  • NooneAtAll3 3 days ago ago

    I wish websites like these would actually explain what the thing being talked about is.

    wtf is XSLT?

  • raminf 3 days ago ago

    Many years ago, I was leading a team that implemented a hyperfast XML, XSLT, XPath parser/processor from the ground up in C/C++. This was for a customer project. It also pulled some pretty neat optimizations, like binary XSLT compilation, both in-mem and FS caching, and threading. On the server-side, you could often skip the template file loading and parsing stages, parallelize processing, and do live, streaming generation. There was also a dynamic plugin extension system and a push/pull event model. The benchmarks were so much better than what was out there. Plus, it was embeddable in both server and client app code.

    Would have been great if it had been open-sourced, but they paid for all the development and owned the codebase. They wanted to use it to dynamically generate content for every page for every unique device and client that hit their server. They had the infrastructure to do that for millions of users. The processing could be done on the server for plain web browsers or embedded inside a client binary app, so live rendering to native could be done on-device.

    Back then, it was trivial to generate XML on-the-fly from a SQL-based database, then send that back, or render it to XHTML or any other custom presentation format via XSLT. Through XSD schema, the format was self-documenting and could be validated. XSLT also helped push the standardizing on XHTML and harness the chaos of mis-matched HTML versions in each browser. It was also a great way to inject semantic web tags into the output.
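
    The database-to-XHTML rendering step described above really was that small. A sketch of the idea, with invented element names (rows/row/name/email are assumptions, not from the original project):

    ```xml
    <xsl:stylesheet version="1.0"
                    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
      <!-- Assume the database layer emits:
           <rows><row><name>...</name><email>...</email></row>...</rows> -->
      <xsl:template match="/rows">
        <table>
          <xsl:for-each select="row">
            <tr>
              <td><xsl:value-of select="name"/></td>
              <td><xsl:value-of select="email"/></td>
            </tr>
          </xsl:for-each>
        </table>
      </xsl:template>
    </xsl:stylesheet>
    ```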

    But I always thought it got dragged down with the overloaded weight of SOAP. Once REST and Node showed up, everyone headed for new pastures. Then JS in browsers begat SPAs, so rendering could be done in the front-end. Schema validation moved to ad-hoc tools like Swagger/OpenAPI. Sadly, we don't really have a semantic web alternative now and have to rely on best guesses via LLMs.

    For a brief moment, it looked like the dream of a hyper-linked, interconnected, end-to-end structured, realtime semantic web might be realizable. Aaand, then it all went poof.

    TL;DR: The XML/XSLT stack nailed a lot of the requirements. It just got too heavy and lost out to lighter-weight options.

  • thro1 3 days ago ago

    For me, it happened the moment XMLHttpRequest became the only working common denominator across a few "new" browsers: since an iframe sat on top of everything, you couldn't just load content into a target like in Netscape, and you had to use JS afterwards anyway to move it out of the foreground.

    Because I wasted my time finding working ways to get results through scripting, it left me no time to actually think about it in any other way (like trying out on it the next few things I saw coming to the client side soon after, which I already knew from earlier thanks to eXist-db). It took me some time, much later, to learn about a few incredible things that, had they worked, would have made my job so... basic; if only, again, a few things described as bugs had been fixed at the time.

    Without that, what happened is just this: if you wanted the results, you coded them yourself, with or without regard for the few bugs that turned simple things into hard corner cases with interoperability problems that couldn't be solved.

    Since then, I understand that with JavaScript it's just easier to keep fixing things ad hoc, not worrying too much about standards or implementations

    .

    - than to keep asking for a few things or key bugs to be fixed, for more than 20 years, and never see it happen.

    .

    The legacy is that we can no longer get to a place where simple things just interoperate (is that old school now?), and a generation later, unaware of why, has such an imperative micromanagement mindset that they cannot even imagine not re-implementing something repetitively, just because in some other world it was long ago abstracted once, yet never implemented once to work consistently, as intended, across browsers.

    From that point of view it's quite easy not to worry about standards, or to abolish them: you can't do much about implementations elsewhere or their bugs, but you can do whatever you want with your own code (as long as no one reminds you; will it last when other things change?).

    That's sad, actually, as I see it: JavaScript document programmers keep repeating, and will keep repeating, the same work, unaware of the reason for it: a few bugs here and there, not fixed once in 20 years, nor fixed in the same common way.

    But how "random" were all the things leading to this point, where with JavaScript everything is possible and everything else is redundant? (Can only a hammer work?) Then look at an example (https://news.ycombinator.com/item?id=45183624): what there looks like the simplest abstract form, and what looks redundant?

    P.S. RIP WWW

    (?) (JS is not a W3 standard)

  • szundi 3 days ago ago

    [dead]

  • lmm 3 days ago ago

    Meh. RSS was great. XSLT was always awful. Javascript does everything XSLT did, so much better. Let it die.

    • silon42 3 days ago ago

      JS should die too. XSLT was better for some things.

      • jeltz 3 days ago ago

        XSLT was always a bad idea. JS is a mixed bag of good and bad ideas.

      • selectnull 3 days ago ago

        The difference being that JS is used a lot and XSLT not so much.

    • tedk-42 3 days ago ago

      Wow, you got negged so hard, likely by people who have never really written XSLT code.

      I have and I've always hated it. I still to this day will never touch an IBM DataPower appliance, though I'm more than capable because of XSLT.

      They (IBM) even tried to make it more appealing by allowing Javascript to run on DataPower instead of XSLT to process XML documents.

      It's a crap language designed for XML (which is too verbose) and there are way better alternatives.

      Javascript and JSON won because of their simplicity. The Javascript ecosystem, however (nodejs, npm, yarn, etc.), is what takes away from an otherwise excellent programming language.

  • tolerance 3 days ago ago

    This is propaganda.

  • hollowturtle 3 days ago ago

    Please kill it, and then let's sit down at a table like adults and decide what else should be killed. Maybe specify a minimum subset of modern features a browser must support. Please, let's do it; it could reignite browser competition. Projects like the Ladybird browser should not have to implement obscure backwards-compatibility layout specs. What about the non-modern websites? The browser will ask to download an extra WASM module for opening something like https://www.spacejam.com/1996/

    • hollowturtle 3 days ago ago

      Instead of just downvoting why don't you reply with your argument?