2 comments

  • wryoak 12 hours ago

    I don’t think rendering malformed HTML is the difficult part of a browser’s rendering engine. Rendering well-formed XHTML is little different: the engine is going to construct a tree either way, and for erroneous HTML the common recovery strategies include treating stray closing tags as text or ignoring them, treating unquoted attribute values as whitespace-terminated, and implicitly closing certain elements left open (a toy sketch of this follows below). I would guess JS is a harder beast, but the rapid proliferation over the past decade of web APIs for graphics, audio, storage, GPS, Bluetooth, cameras, game controllers and so on is probably a bigger impediment.

    But the reason nothing has really risen to stop the monopoly is that getting word out about a new browser, including why the same tool with a new rendering engine should appeal to the average person, is a difficult sell, so expending resources on building one is hard to justify. IE didn’t dominate for so long because of its features; it dominated because of its ubiquity. Chrome rose to prominence because of the ubiquity of Google’s tools and auth, which “just happen” to work better in Chrome, and mobile Safari because of the preeminence of the iPhone. Monopoly is its own momentum. Get something into everybody’s hands and they have no real reason to switch, and if there’s no reason for customers to switch, there’s very little incentive to compete with the dominant paradigm.
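
    A minimal sketch of that kind of tree-building error recovery, built on Python’s lenient html.parser tokenizer. The ToyTreeBuilder class and its recovery rules are invented for illustration and are not the real HTML5 tree-construction algorithm:

      # Toy DOM builder on top of Python's lenient html.parser tokenizer.
      # Recovery rules here are illustrative, not the HTML5 spec: stray end
      # tags are dropped, and an end tag implicitly closes whatever was
      # left open above its matching start tag.
      from html.parser import HTMLParser

      class Node:
          def __init__(self, tag, attrs=None):
              self.tag, self.attrs, self.children = tag, dict(attrs or []), []

      class ToyTreeBuilder(HTMLParser):
          def __init__(self):
              super().__init__()
              self.root = Node("#document")
              self.stack = [self.root]

          def handle_starttag(self, tag, attrs):
              node = Node(tag, attrs)
              self.stack[-1].children.append(node)
              self.stack.append(node)

          def handle_endtag(self, tag):
              # Walk up the open-element stack; close back to the nearest
              # matching start tag, or ignore the end tag if none matches.
              for i in range(len(self.stack) - 1, 0, -1):
                  if self.stack[i].tag == tag:
                      del self.stack[i:]
                      return

          def handle_data(self, data):
              if data.strip():
                  self.stack[-1].children.append(data)

      builder = ToyTreeBuilder()
      builder.feed("<ul><li>one<li>two</ul> trailing </p> text")

    The html.parser tokenizer already treats unquoted attribute values as whitespace-terminated, so the sketch only has to handle the tag-matching recovery.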

  • zzo38computer 2 hours ago

    Making the pieces (HTML parsing, HTML rendering, HTTP, etc.) independent would be helpful, because individual pieces could then be reused in a different implementation (possibly with modifications, if their author wants them modified), and an implementation could use some of its own pieces and some from other implementations.
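
    A rough sketch of what that kind of split could look like, assuming Python protocols as the interfaces; the names (HttpClient, HtmlParser, Renderer, Browser, StdlibHttpClient) are invented for the example rather than taken from any existing engine:

      # Each piece sits behind a small interface, so a browser can combine
      # components written by different projects, or swap one out later.
      from typing import Protocol
      from urllib.request import urlopen

      class HttpClient(Protocol):
          def fetch(self, url: str) -> bytes: ...

      class HtmlParser(Protocol):
          def parse(self, markup: bytes) -> object: ...   # returns a DOM-like tree

      class Renderer(Protocol):
          def render(self, tree: object) -> str: ...      # text, pixels, etc.

      class StdlibHttpClient:
          # One possible HTTP piece, built on the standard library.
          def fetch(self, url: str) -> bytes:
              with urlopen(url) as resp:
                  return resp.read()

      class Browser:
          # Composes independently written pieces behind the interfaces above.
          def __init__(self, http: HttpClient, parser: HtmlParser, renderer: Renderer):
              self.http, self.parser, self.renderer = http, parser, renderer

          def visit(self, url: str) -> str:
              return self.renderer.render(self.parser.parse(self.http.fetch(url)))

    A different project could then supply only its own HtmlParser, say, and reuse the HTTP and rendering pieces unchanged.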