HTML as an Accessible Format for Papers

(info.arxiv.org)

156 points | by el3ctron 6 hours ago ago

88 comments

  • ComputerGuru 2 hours ago ago

    If the Unicode consortium would spend less time and effort on emoji and more on making the most common/important mathematical symbols and notations available/renderable in plain text, maybe we could move past the (La)TeX/PDF marriage. OpenType and TrueType have supported (edit: for well over a decade now, actually) the conditional rendering required to get sequences of Unicode code points to display the way you need (theoretically, anyway). And with missing-glyph-only font family substitution available pretty much everywhere, letting you seamlessly display symbols absent from your primary font using a fallback asset (something like Noto, which by design covers every Unicode symbol, or a math-specific font like Cambria Math or TeX Gyre, etc), there are no technical restrictions.

    I’ve actually dug into this in the past, and it was never a lack of technical ability that prevented them from adding even just proper superscript/subscript support, but rather their opinion that this didn’t belong in the symbolic layer. But since emoji abuse/rely on ZWJ and modifiers left and right to display in one of a myriad of variations, there’s really no good reason not to allow the same here, because 2 and the superscript ² are not semantically the same (so it’s not just a design choice).

    An interesting (complete) tangent is that Gemini 3 Pro is the only model I’ve tested (I do a lot of math-related stuff with LLMs) that absolutely will not, under any circumstances, respect (system/user) prompt requests to avoid inline math mode (aka LaTeX) in the output, regardless of whether I asked for a blanket ban on TeX/MathJax/etc or insisted that it use extended Unicode code points to substitute all math formula rendering (I primarily use LLMs via the TUI where I don’t have MathJax support, and as familiar as I once was with raw TeX mathematical notation and symbols, it’s still quite easy to misread unrendered raw output by missing something if you’re not careful). I shared my experiment and results here – Gemini 3 Pro would insist on rendering even single-letter constants or variables as $k$ instead of just k (or k in markdown italics, etc) no matter how hard I asked it not to (which makes me think it may have been overfit on raw LaTeX papers, and is also an interesting argument in favor of the “VL LLMs are the more natural construct”): https://x.com/NeoSmart/status/1995582721327071367?s=20

    • crazygringo 29 minutes ago ago

      I don't understand. No matter what fancy things you do with superscripts and subscripts, you're not going to be able to do even basic things you need for equations like use a fraction bar, or parentheses that grow in height to match the content inside them.

      At a fundamental level, Unicode is for characters, not layout. Unicode may abuse the ZWJ for emoji, but it still ultimately results in a single emoji character, not a layout of characters. So I don't really understand what you're asking for.

    • hannahnowxyz an hour ago ago

      Have you tried a two-pass approach? For example, where prompt #1 is "Which elliptic curves have rational parameterizations?", and then prompt #2 (perhaps to a smaller/faster model like Gemma) is "In the following text, replace all LaTeX-escaped notation with Markdown code blocks and unicode characters. For example, $F_n = F_{n - 1} + F_{n - 2}$ should be replaced with `Fₙ = Fₙ₋₁ + Fₙ₋₂`. <Response from prompt #1>". Although it's not clear how you would want more complex things to be converted.
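
      That second pass can even be approximated with plain string substitution for the simple cases. A rough Python sketch (the regex patterns and mapping table are illustrative and deliberately incomplete; real LaTeX needs a proper parser):

```python
import re

# Partial map from ASCII to Unicode subscript code points (illustrative, not complete)
SUBSCRIPTS = str.maketrans("0123456789+-n", "₀₁₂₃₄₅₆₇₈₉₊₋ₙ")

def desubscript(text: str) -> str:
    """Rewrite simple $..._{...}$ subscripts with Unicode subscript characters."""
    def repl(m):
        base, sub = m.group(1), m.group(2).replace(" ", "")
        return base + sub.translate(SUBSCRIPTS)
    text = re.sub(r"([A-Za-z])_\{([^{}]*)\}", repl, text)   # F_{n - 1} -> Fₙ₋₁
    text = re.sub(r"([A-Za-z])_([0-9A-Za-z])", repl, text)  # F_n -> Fₙ
    return text.replace("$", "")  # drop the inline-math delimiters

print(desubscript("$F_n = F_{n - 1} + F_{n - 2}$"))  # Fₙ = Fₙ₋₁ + Fₙ₋₂
```

      Anything beyond subscripts/superscripts (fractions, big operators, growing delimiters) is where this falls apart, which is presumably why you'd still hand the hard cases to a model.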

      • baby 37 minutes ago ago

        I've done LaTeX -> MathML -> Markdown and it works quite well.

      • yannis 34 minutes ago ago

        It is actually quicker to ask using LaTeX markup!

  • DominikPeters 2 hours ago ago

    As an arXiv author who likes using complicated TeX constructions, the introduction of HTML conversion has increased my workload a lot: I now have to write fallback macros that render okay after conversion. The conversion is super slow and there is no way to faithfully simulate it locally. Still, I think it's a great thing to do.

  • ForceBru 5 hours ago ago

    Is this new or somehow updated? HTML versions of papers have been available for several years now.

    EDIT: indeed, it was introduced in 2023: https://blog.arxiv.org/2023/12/21/accessibility-update-arxiv...

    • Tagbert 5 hours ago ago

      From the article:

      Why "experimental" HTML?

      Did you know that 90% of submissions to arXiv are in TeX format, mostly LaTeX? That poses a unique accessibility challenge: to accurately convert from TeX—a very extensible language used in myriad unique ways by authors—to HTML, a language that is much more accessible to screen readers and text-to-speech software, screen magnifiers, and mobile devices. In addition to the technical challenges, the conversion must be both rapid and automated in order to maintain arXiv’s core service of free and fast dissemination.

      • ForceBru 5 hours ago ago

        No, I mean _arXiv_ has had experimental support for generating HTML versions of papers for years now. If you visit arXiv, you'll see a lot of papers have generated HTML alongside the usual PDF, so I'm trying to understand whether the article discusses any new developments. It seems like it's not new at all.

      • daemonologist 2 hours ago ago

        There are pretty often problems with figure size and with sections being too narrow or wide (for comfortable reading). The PDF versions are more consistently well-laid-out.

  • ekjhgkejhgk 3 hours ago ago

    I wish epub was more common for papers. I have no idea if there's any real difficulties with that, or just not enough demand.

    • mmooss 3 hours ago ago

      epub is html, under the hood

      Is there an epub reader that can format text approximately as usably and beautifully as pdf? What I've seen makes it noticeably harder to read longer texts, though I haven't looked around much.

      epub also lacks annotation, or at least annotation that will be readable across platforms and time.

    • hombre_fatal 2 hours ago ago

      Because what makes epub a format on top of html is just that someone QA'ed it and wrote the html/css with it in mind. Especially considering things like diagrams and tables.

      Not really what you want researchers to waste their time doing.

      But you can use any of the numerous html->epub packagers yourself.

    • pspeter3 3 hours ago ago

      Why epub? Isn’t it just HTML under the hood?

      • ekjhgkejhgk 2 hours ago ago

        Because I can open it on my ereader.

  • el3ctron 6 hours ago ago

    Accessibility barriers in research are not new, but they are urgent. The message we have heard from our community is that arXiv can have the most impact in the shortest time by offering HTML papers alongside the existing PDF.

    • lalithaar 5 hours ago ago

      Hello, I was going through the HTML versions of my preprints on arXiv. Thank you for all that you guys do! Please do let me know if the community could contribute through any means.

  • percentcer an hour ago ago

    Dumb question but what stops browsers from rendering TeX directly (aside from the work to implement it)? I assume it's more than just the rendering

    • bo1024 an hour ago ago

      You mean a display engine that works like an HTML renderer, except starting from TeX source instead of HTML source? I think you could get something that mostly works, but it would be a pain and at the end you wouldn't have CSS or javascript, so I don't think browser makers are interested.

    • pwdisswordfishy an hour ago ago

      For starters, TeX is Turing-complete, and the tokenizer is arbitrarily reprogrammable at runtime.

      • gbear605 5 minutes ago ago

        Browsers already support JavaScript anyway, so why not add another Turing-complete language into the mix? (Not even accounting for CSS technically being Turing-complete, or WASM, or …)

      • ErroneousBosh 43 minutes ago ago

        Okay then, what would stop you rendering TeX to SVG and embedding that?

        Edit: Genuine question, not rhetorical - I don't know how well it would work but it sounds like it should.

        • fooofw 12 minutes ago ago

          That would (mostly if not always) work in the sense of reproducing the layout of the pages, but it would defeat the purpose of preserving the semantic information present in the TeX file (what is a heading, what is a reference and to what, what is a specific math environment, etc.), which AFAIK is already mostly dropped by the latex compiler on conversion to PDF anyway.

  • leobg 3 hours ago ago

    It must have been around 1998. I was editor of our school’s newspaper. We were using Corel Draw. At some point, I proposed that we start using HTML instead. In the end, we decided against it, and the reasons were the same that you can read here in the comments now.

  • Barbing 5 hours ago ago

    >Did you know that 90% of submissions to arXiv are in TeX format, mostly LaTeX? That poses a unique accessibility challenge: to accurately convert from TeX—a very extensible language used in myriad unique ways by authors—to HTML, a language that is much more accessible to screen readers and text-to-speech software, screen magnifiers, and mobile devices.

    Challenging. Good work!

  • sega_sai 5 hours ago ago

    Unfortunately I didn't see any recommendation there on what can be done for old papers. I checked, and only my papers from after 2022 have an HTML version. I wish they'd add some kind of 'try html' button for those.

    • sundarurfriend 5 hours ago ago

      Do the older papers work via [Ar5iv](https://ar5iv.labs.arxiv.org/) ?

      > View any arXiv article URL [in HTML] by changing the X to a 5

      The line

      > Sources upto the end of November 2025.

      sounds to me like this is indeed intended for older articles.

  • jas39 5 hours ago ago

    Pandoc can convert to SVG, which can then be inlined in HTML. Looks just like LaTeX, though copy/paste isn't very useful.

    • stephenlf 3 hours ago ago

      That doesn’t solve the accessibility issue, though. You need semantic tags.

  • sundarurfriend 5 hours ago ago

    [Sept 2023] as per the wayback machine.

  • billconan 4 hours ago ago

    I don't think HTML is the right approach. HTML is better than PDF, but it is still a format for displaying/rendering.

    the actual paper content format should be separated from its rendering.

    i.e. it should contain abstract, sections, equations, figures, citations etc. but it shouldn't have font sizes, layout etc.

    the viewer platforms then should be able to style the content differently.

    • cluckindan 3 hours ago ago

      HTML alone is in fact not a format for displaying/rendering. Done properly, it is a structural representation of the content. (This is often called ”semantic HTML”.)

      They are converting to HTML to make the content more accessible. Accessibility in this context means a11y, in effect ”more accessible” equates to ”more compatible with screen readers”.

      While PDF documents can be made accessible, it is way easier to do it in HTML, where browsers build an actual AOM (accessibility object model) tree and expose it to screen readers.

      >it should contain abstract, sections, equations, figures, citations etc.

      So <article>, <section>, <math>, <figure>, <cite>, etc.

      • benatkin 3 hours ago ago

        Much of it is a structural representation of how to display the content.

      • Theodores 36 minutes ago ago

        I like Arxiv and what they are doing, however, do the auto-generated HTML files contain nothing more than a sea of divs dressed with a billion classes?

        I would be delighted if they could do better than that, with figcaptions as well as figures, and sections 'scoped' with just one <h2-6> heading per section. They could specify how it really should be done, the HTML way, with a well defined way of doing the abstract and getting the cited sources to be in semantic markup yet not in some massive footer at the back.

        There should also be a print stylesheet so that the paper prints out elegantly on A4 paper. Yes, I know you can 'print to PDF' but you can get all the typesetting needed in modern CSS stylesheets.

        Furthermore, they need to write a whole new HTML editor that discards WYSIWYG in favour of semantic markup. WYSIWYG has held us back by decades as it is useless for creating a semantic document. We haven't moved on from typewriters and the conventions needed to get those antiques to work, with word processors just emulating what people were used to at the time. What we really need is a means to evolve the written word, so that our thinking is 'semantic' when we come to put together documents, with a 'document structure first' approach.

        LaTeX is great, however, the last time I used it was many decades ago, when the tools were 'vi' (so not even vim) and GhostScript, running on a Sun workstation with a mono screen. Since then I have done a few different jobs and never have I had the need to do anything in LaTeX or even open a LaTeX file. In the wild, LaTeX is rarer than hen's teeth. Yet we all read scientific papers from time to time, and arXiv was founded on the availability of TeX files.

        The lack of widespread adoption of semantic markup has been a huge bonus to Google and other gatekeepers that have the money to develop their own heuristics to make sense of 'seas of divs'. As it happens, Google have also been somewhat helpful with Chrome and advancing the web, even if it is for their gatekeeping purposes.

        The whole world of gatekeeping is also atrocious in academia. Knowledge wants to be free, but it is also big business to the likes of Springer, who are already losing badly to open publishing.

        As you say, in this instance, accessibility means screen readers, however, I hope that we can do better than that, to get back to the OG Tim Berners Lee vision of what the web should be like, as far as structuring information is concerned.

    • m-schuetz 3 hours ago ago

      That's a purist stance that's never going to work out in practice. Authors will always want to adjust the presentation of content, and HTML might be even better suited for that than LaTeX, which is bad at both.

    • dimal 4 hours ago ago

      Perfect is the enemy of good. HTML is good enough. Let’s get this done.

      And as another commenter has pointed out, HTML does exactly what you ask for. If it’s done correctly, it doesn’t contain font sizes or layout. Users can style HTML differently with custom CSS.

      • billconan 4 hours ago ago

        mixing rendering definitions with content (PDF) is something from the printer era that is unsuitable for the digital era.

        HTML was a digital format, but it wanted to be a generic format for all document types, not just papers, so it contains a lot of extras that a paper format doesn't need.

        for research papers, since they share the same structure, we can further separate content from rendering.

        for example, if you later want to connect a paper to an AI, do you want to send <div class="abstract"> ...?

        or do some nasty heuristic to extract the abstract, like document.getElementsByClassName("abstract")[0]?
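
        That kind of heuristic really is as brittle as it looks. A minimal Python sketch of the same extraction outside the browser (standard library only; the class="abstract" markup is the hypothetical example from above, not anything arXiv actually emits):

```python
from html.parser import HTMLParser

class AbstractExtractor(HTMLParser):
    """Collect the text inside the first element carrying class="abstract"."""
    def __init__(self):
        super().__init__()
        self.depth = 0     # nesting depth while inside the abstract element
        self.chunks = []
        self.done = False

    def handle_starttag(self, tag, attrs):
        classes = (dict(attrs).get("class") or "").split()
        if self.depth:
            self.depth += 1          # a tag nested inside the abstract
        elif "abstract" in classes and not self.done:
            self.depth = 1           # found the abstract wrapper

    def handle_endtag(self, tag):
        if self.depth:
            self.depth -= 1
            if self.depth == 0:
                self.done = True     # left the abstract; stop collecting

    def handle_data(self, data):
        if self.depth:
            self.chunks.append(data)

html = '<div class="abstract-container"><div class="abstract">We prove X.</div></div>'
p = AbstractExtractor()
p.feed(html)
print("".join(p.chunks).strip())  # We prove X.
```

        A dedicated element or field for the abstract would reduce all of this to a single direct lookup.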

        • simonw 4 hours ago ago

          All of the interesting LLMs can handle a full paper these days without any trouble at all. I don't think it's worth spending much time optimizing for that use-case any more - that was much more important two years ago when most models topped out at 4,000 or 8,000 tokens.

    • bob1029 4 hours ago ago

      > HTML is better than PDF

      I disagree. PDF is the most desirable format for printed media and its analogues. Any time I plan to seriously entertain a paper from Arxiv, I print it out first. I prefer to have the author's original intent in hand. Arbitrary page breaks and layout shifts that are a result of my specific hardware/software configuration are not desirable to me in this context of use.

      • ACCount37 4 hours ago ago

        I agree that PDF is best for things that are meant to be printed, no questions. But I wonder how common actually printing those papers is?

        In research and in embedded hardware both, I've met some people who had entire stacks of papers printed out - research papers or datasheets or application notes - but also people who had 3 monitors and 64GB of RAM and all the papers open as browser tabs.

        I'm far closer to the latter myself. Is this a "generational split" thing?

        • pfortuny 3 hours ago ago

          Possibly, but then again: when I need to study a paper, I print it; when I just need to skim it and use a result from it, I'm more likely to read it on a screen (tablet/monitor). That's the difference for me.

      • s0rce 4 hours ago ago

        I used to print papers, probably stopped about 10 years ago. I now read everything in Zotero where I can highlight and save my annotations and sync my library between devices. You can also seamlessly archive html and pdfs. I don't see people printing papers in my workplace that often unless you need to read them in a wet lab where the computer is not convenient.

    • afavour 4 hours ago ago

      Wouldn’t that be CSS?

      • billconan 4 hours ago ago

        no

            <div class="abstract-container">
              <div class="abstract">
                <pre><code>abstract text ...</code></pre>
              </div>
              <div class="author-list">
                <ol>
                  <li>author one</li>
                  <li>author two</li>
                </ol>
              </div>
            </div>
        
        should be just:

            [abstract]
            abstract text

            [authors]
            author one | email | affiliation
            author two | email | affiliation

        • afavour 4 hours ago ago

          Sounds like XML and XSL would be a great fit here. Shame it’s being deprecated.

          But you could still use HTML. Elements with a dash in are reserved for custom elements (that is, a new standardised element will never take that name) so you could do:

              <paper-author-list>
                <paper-author />
              </paper-author-list>
          
          And it would be valid HTML. Then you’d style it with CSS, with

              paper-author {
                display: list-item;
              }
          
          And so on.
          • bawolff 4 hours ago ago

            Nothing is stopping you from using server-side XSL. I personally don't think it's a great fit, but people need to stop acting like XSL has been wiped from the face of the earth.

            • afavour 4 hours ago ago

              Yes but we’re specifically talking about a display format here. Something requiring a server side transform before being viewable by a user is a clear step backwards.

              • bawolff 2 hours ago ago

                How so? I can't think of any advantage to having client side xsl over outputting two files, in this context.

                • afavour 2 hours ago ago

                  The discussion is about the form in which you share papers. With HTML you just share the HTML file, it opens instantly on basically any device.

                  If you distribute the paper as XML with an XSLT transform you need to run something that’ll perform that transform before you can read the paper. No matter whether that transform happens on the server or on the client it’s still an extra complication in the flow of sharing information.

          • xworld21 2 hours ago ago

            Indeed, LaTeXML (the software used by arXiv) converts LaTeX to a semantic XML document which is turned to HTML using primarily XSLT!

        • panzi 4 hours ago ago

          There are <article>, <section>, <figure>, <legend>, but yes, <abstract> and <authors> are missing as such. There are meta tags for such things, though. Then there is RDF and schema.org's Thing. Not quite the same, I know, but it's not completely useless.

          • kevindamm 4 hours ago ago

            and you could shim these gaps with custom components, hypothetically

  • nateroling 5 hours ago ago

    Seeing the Gemini 3 capabilities, I can imagine a near future where file formats are effectively irrelevant.

    • qart 3 hours ago ago

      I have family members with health conditions that require periodic monitoring. For some tests, a phlebotomist comes home. For some tests, we go to a hospital. For some other tests, we go to a specialized testing center. They all give us PDFs in their own formats. I manually enter the data to my spreadsheet, for easy tracking. I use LLMs for some extraction, but they still miss a lot. At least for the foreseeable future, no LLM will ever guarantee that all the data has been extracted correctly. By "guarantee", I mean someone's life may depend on it. For now, doctors take up the responsibility of ensuring the data is correct and complete. But not having to deal with PDFs would make at least a part of their job (and our shared responsibilities) easier.

    • s0rce 3 hours ago ago

      Can you elaborate? Are you never reading papers directly but only using Gemini to reformat or combine/summarize?

      • nateroling 3 hours ago ago

        I mean that when a computer can visually understand a document and reformat and reinterpret it in any imaginable way, who cares how it’s stored? When a png or a pdf or a markdown doc can all be read and reinterpreted into a database or an audiobook or an interactive infographic, the original format won’t matter.

    • DANmode 5 hours ago ago

      Files.

      Truth in general, if we aren't careful.

    • sansseriff 4 hours ago ago

      Seriously. More people need to wake up to this. Older generations can keep arguing over display formats if they want. Meanwhile younger undergrad and grad students are getting more and more accustomed to LLMs forming the front end for any knowledge they consume. Why would research papers be any different?

      • JadeNB 3 hours ago ago

        > Meanwhile younger undergrad and grad students are getting more and more accustomed to LLMs forming the front end for any knowledge they consume.

        Well, that's terrifying. I mean, I knew it about undergrads, but I sure hoped people going into grad school would be aware of the dangers of making your main contact with research, where subtle details are important, through a known-distorting filter.

        (I mean, I'd still be kinda terrified if you said that grad students first encounter papers through LLMs. But if it is the front end for all knowledge they consume? Absolutely dystopian.)

        • sansseriff 2 hours ago ago

          I admit it has dystopian elements. It’s worth deciding what specifically is scary though. The potential fallibility or mistakes of the models? Check back in a few months. The fact they’re run by giant corps which will steal and train on your data? Then run local models. Their potential to incorporate bias or persuade via misalignment with the reader’s goals? Trickier to resolve, but various labs and nonprofits are working on it.

          In some ways I’m scared too. But that’s the way things are going because younger people far prefer the interface of chat and question answering to flipping through a textbook.

          Even if AI makes more mistakes or is more misaligned with the reader’s intentions than a random human reviewer (which is debatable in certain fields since the latest models came out), the behavior of young people requires us to improve the reputability of these systems (make sure they use citations, make sure they don’t hallucinate, etc). I think the technology is so much more user friendly that fixing the engineering bugs will be easier than forcing new generations to use the older systems.

  • ashleyn 5 hours ago ago

    Can't help but wonder if this was motivated in part by people feeding papers into LLMs for summary, search, or review. PDF is awful for LLMs. You're effectively pigeonholed into using (PAYING for) Adobe's proprietary app and models which barely hold a candle to Gemini or Claude. There are PDF-to-text converters, but they often munge up the formatting.

    • jrk 5 hours ago ago

      Not sure when you last tried, but Gemini, Claude, and ChatGPT have all supported pretty effective PDF input for quite a while.

  • _dain_ 2 hours ago ago

    Wasn't the World Wide Web invented at CERN specifically for sharing scientific papers? Why are we still using PDFs at all?

    • fsh 2 hours ago ago

      No, it wasn't. Scientists at CERN used DVI and later PDF like everyone else. HTML has no provisions for typesetting equations and is therefore not suitable for physics papers (without much newer hacks such as MathML).

  • teddy-smith 4 hours ago ago

    It's extremely easy to convert HTML/CSS to a PDF with the print to PDF feature of the browser.

    All papers should be in HTML/CSS or Tex then just simply converted to PDF.

    Why are we even talking about this?

    • crazygringo 35 minutes ago ago

      Have you ever written a paper for publication?

      HTML doesn't support the necessary features. Citations in various formats, footnotes, references to automatically numbered figures and tables, I could go on and on.

      HTML could certainly be extended to support those, but it hasn't been. That's why we're talking about this.

    • tefkah 4 hours ago ago

      What are you talking about? No one’s writing their paper in HTML.

      The problem is having the submissions be in TeX and converting that to HTML, when the only output has been PDF for so long.

      The problem isn’t converting HTML to PDF, it’s making available a giant portion of TeX/pdf only papers in HTML.

      If you’re arguing that maybe TeX then shouldn’t be the source format for papers, then I agree, but other than Typst (which also isn’t perfect at HTML output yet) there aren’t that many widely accepted/used authoring formats for physics/math papers, which is what arXiv primarily hosts.

    • ekjhgkejhgk 3 hours ago ago

      LOL what. You're either trolling, or you've never written a paper in your life.

    • nkrisc 4 hours ago ago

      So, uh, where do the HTML versions of the papers come from?

    • benatkin 3 hours ago ago

      It's easy to convert PDF to HTML/CSS, with similar results.

      Either way it gets shoehorned.

    • carlosjobim 3 hours ago ago

      Except you can't have page breaks, three links in a row, anchor links.

  • cubefox 4 hours ago ago

    This is not new, the title should say (2023). They have shipped the HTML feature with "experimental" flag for two years now, but I don't know whether there is even any plan to move out of the experimental phase.

    It's not much of an "experiment" if you don't plan to use some experimental data to improve things somehow.

  • lalithaar 5 hours ago ago

    I was reading through this article too, glad to have found it on here

  • rootnod3 5 hours ago ago

    Maybe unpopular, but papers should be in a markdown flavor to be determined. Just to have them more machine readable.

    • xigoi 5 hours ago ago

      Compared to HTML, Markdown is very bad at being machine-readable.

  • vatsachak 4 hours ago ago

    Why do we like HTML more than PDFs?

    HTML rendering requires you to be connected to the internet, or to set up the images and MathJax locally. A PDF just works.

    HTML obviously supports dynamic embedding, such as programs, much better, but people usually just post a github.io page with the paper.

    • devnull3 4 hours ago ago

      > HTML rendering requires you to be connected to the internet

      Not really. One can always generate a self-contained HTML file. Both CSS and JS (if needed) can be inlined.
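
      The inlining step is mechanical. A toy Python sketch (naive regex matching and hypothetical file names; a real tool would use a proper HTML parser and also handle <script src> and <img>):

```python
import re

def inline_styles(html: str, resolve) -> str:
    """Replace <link rel="stylesheet" href="..."> tags with inline <style> blocks.

    `resolve` maps an href to the stylesheet's text (a file read, a dict lookup, ...).
    """
    pattern = r'<link\s+rel="stylesheet"\s+href="([^"]+)"\s*/?>'
    return re.sub(pattern,
                  lambda m: "<style>" + resolve(m.group(1)) + "</style>",
                  html)

# Toy example: the stylesheet comes from an in-memory dict instead of disk
css = {"main.css": "body { font: 16px/1.5 serif; }"}
page = '<head><link rel="stylesheet" href="main.css"></head>'
print(inline_styles(page, css.__getitem__))
# <head><style>body { font: 16px/1.5 serif; }</style></head>
```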

      • vatsachak 2 hours ago ago

        True, but the webdev idiom is injecting things such as MathJax from a CDN. I guess one can pre-render the page and save that, but that's kind of like a PDF already.

    • mmooss 3 hours ago ago

      epub 'just works' locally, and it's html under the hood.

    • nine_k 3 hours ago ago

      Try opening a PDF on a phone screen.

      • vatsachak 3 hours ago ago

        I do it all the time to read papers. It's easy

    • recursive 4 hours ago ago

      Why would html rendering require a network connection? It doesn't seem to on my machine.

      • vatsachak 2 hours ago ago

        Things like LaTeX equation rendering are hosted on a CDN.

        • krapp 2 hours ago ago

          They can be, but don't need to be. Any JavaScript can be served locally, just like the HTML and CSS.

          • vatsachak 2 hours ago ago

            That's fair, but imagine trying to get the average reader up to speed with something like npm.