I think there is a problem of incentive here. When we made our websites Search Engine Optimized, the incentive was for google to understand our content, and bring traffic our way. When you make your content optimized for LLM, it only improves their product, and you get nothing in return.
I do dev work for a marketing dept of a large company and there is a lot of talk about optimizing for LLMs/AI. Chatgpt can drive sales in the same way a blog post indexed by Google can.
If a customer asks the AI what product can solve their problem and it replies with our product that is a huge win.
If your business is SEO spam with online ads, chatgpt might eat it. But if your business is selling some product, chatgpt might help you sell it.
And what that means is the usefulness of LLMs in recommending products is about to jump off a cliff.
This is what everybody should have expected.
I think it’s going to be even worse - companies are going to go to ChatGPT with lawyers and say you are making false/unfair claims about our product. We should be able to give it this copy with correct information to consume.
Neat up until the "customer ask" is "What, in X space, is the worst product you can purchase?" Something you have no ability to manipulate.
Why would a customer ask that? If I'm looking for something, why would I waste time with the worst version of it? I'd just go straight for the best.
That is at most temporary. I expect that within the next 5 years, "partner products" and "LLM-optimized content" will take the place of SEO.
The economic dynamics did not change and the methods will adapt.
Why wouldn't Google sell advertisers a prominent spot in the AI summary? That's their whole deal. Why wouldn't OpenAI do the same with (free) users?
Because that’s not how LLMs work.
They have many ways to manipulate the LLM's results, for example they can use a lot of the same mechanisms that are used to block or filter out inappropriate material.
Given that there are entire forums devoted to successfully doing just that (easily) my point stands.
>Something you have no ability to manipulate.
What makes you think this?
Because I built an LLM, I know how they work.
just add an arbiter layer on top for the possibility of advertising and modifying the output. not rocket science
But software documentation is a prime example of when the incentives don't have any problems. I want my docs to be more accessible to LLMs, so more people use my software, so my software gets more mindshare, so I get more paying customers on my enterprise support plan.
Oh hey, I work at Mintlify! We shipped this as a default feature for all of our customers.
This isn't true. ChatGPT and Gemini link to sites in a similar way to how search engines have always done it. You can see the traffic show up in ahrefs or semrush.
Yes, they show a tiny link behind a collapsed menu that very few people bother clicking. For example, my blog used to prominently take the first spot on Google for some queries. Now, with AI overviews, there's been a sharp drop in traffic. However, it still showed higher impressions than ever. This means I'm still appearing in search, even in AI overviews; it's just that very few people click.
As of last week, impressions have also dropped. Maybe that's the result of people no longer clicking on my links?
Maybe it's about adding knowledge to LLMs, and not how many people read your website? I would be very happy if I had a simple way to get my insights, knowledge and best practices into the next version of an LLM so I have a way to improve it.
I recently had a call with a new user of a SaaS product that I sell. During the call he mentioned that he found it by typing what he was looking for into Gemini, and it recommended my app. I don't do anything special for LLMs, and the public-facing part of the website has been neglected for longer than I like to admit, so I was delighted. I had never considered that AI could send new users to me rather than pull them away. It felt like I'd hacked the system somehow, skipped all the SEO best practices of yesteryear, and had this benevolent bullshit machine bestow a new user on me at no cost.
Exactly! It can actually be a positive thing, might as well make it easy for LLMs to read.
> benevolent bullshit machine bestow a new user on me at the cost of nothing
that's awesome. i love this line.
How many users do actually visit these links?
I usually do, as a data point.
I've found it's extremely important to, because you will get some results that are already AI slop optimized to show in LLM research searches.
and like Google - but much, much worse - they bring back enough content to keep users in the chat interface; they never visit your site.
If you are selling advertising, then I agree. However, if you are selling a product to consumers then no. Ask an LLM "What is the best refrigerator on the market." You will get various answers like:
> The best refrigerator on the market varies based on individual needs, but top brands like LG and Samsung are highly recommended for their innovative features, reliability, and energy efficiency. For specific models, consider LG's Smart Standard-Depth MAX™ French Door Refrigerator or Samsung's smart refrigerators with internal cameras.
Optimizing your site for LLM means that you can direct their gestalt thinking towards your brand.
And neither of those two ultimately help the humans who are actually looking for something. You have a finite amount of time to spend on optimising for humans, or for search engines (and now LLMs), and unfortunately many chose the latter, and it's just led to plenty of spam in the search results.
Yes, SEO can bring traffic to your site, but if your visitors see nothing of value, they'll quickly leave.
You get to live in a world where other people are slightly more productive.
Really cool idea
Humans get HTML, bots get markdown. Two tiny tweaks I’d make...
Send Vary: Accept so caches don’t mix Markdown and HTML.
Expose it with a Link: …; rel="alternate"; type="text/markdown" so it’s easy to discover.
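Both tweaks are just response headers. A minimal sketch of what that might look like (the `/post.md` path and function name are hypothetical placeholders, not from the original setup):

```javascript
// Build response headers for a page that also exists as Markdown.
// The alternate path (/post.md) is a made-up example.
function buildHeaders(servingMarkdown) {
  return {
    "Content-Type": servingMarkdown
      ? "text/markdown; charset=utf-8"
      : "text/html; charset=utf-8",
    // Tell caches the body depends on the Accept request header,
    // so Markdown and HTML responses are stored separately.
    "Vary": "Accept",
    // Advertise the Markdown alternate so clients can discover it.
    "Link": '</post.md>; rel="alternate"; type="text/markdown"',
  };
}
```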
Would be nice for humans to get the markdown version too. Once it's rendered you get a clean page.
I’ve been asking for browser-native markdown support for years now. A clean web is not that far, if browsers support more than just HTML.
Markdown is not standardized, so every browser would render the page differently and you’d get the same problems as with pre-standard HTML.
Browsers can take a standards position on CommonMark extensions and decide on a baseline that goes into the W3C spec? It will just converge on the lowest common denominator, and that's good enough for the vast majority of content-reading use cases.
> I’ve been asking for browser-native markdown support for years now. A clean web is not that far, if browsers support more than just HTML.
You can always do the markdown -> DOM conversion on the client. Sure, there's a bit of latency there, but it means easier deployment (no build step involving pandoc or similar).
Browser-native markdown support would be better though; you'd get ability to do proper contenteditable divs with bold, italic, etc done via markdown
To get broad support from the server side, you’ll need to showcase high browser support. We need Wordpress and Wikipedia and Ghost to support this, and that won’t happen without native browser support.
> We need Wordpress and Wikipedia and Ghost to support this, and that won’t happen without native browser support.
It can. Unlikely, but possible. A good first step would be a well-written web component used like this: `<markdown>...</markdown>`, requiring no build step at all. The .js file implementing it should be includable directly in the `<head>`.
If that gets traction (unlikely, but possible) then the standards would sooner or later introduce a tag native to the browser that does the same thing.
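A toy version of that component, handling only a sliver of Markdown (ATX headings and paragraphs) to show the shape; a real one would wrap a full CommonMark parser. Note that custom element names must contain a hyphen, so a userland version has to be something like `<mark-down>` rather than the bare `<markdown>` tag, which only a native implementation could claim:

```javascript
// Convert a tiny subset of Markdown (ATX headings, paragraphs) to HTML.
// A real component would delegate to a full CommonMark parser instead.
function miniMarkdownToHtml(md) {
  return md.trim().split(/\n{2,}/).map(block => {
    const h = block.match(/^(#{1,6})\s+(.*)$/);
    if (h) return `<h${h[1].length}>${h[2]}</h${h[1].length}>`;
    return `<p>${block.replace(/\n/g, " ")}</p>`;
  }).join("\n");
}

// Register the element only when running in a browser.
if (typeof customElements !== "undefined") {
  customElements.define("mark-down", class extends HTMLElement {
    connectedCallback() {
      this.innerHTML = miniMarkdownToHtml(this.textContent);
    }
  });
}
```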
This person hypermedias
There was a lot of conversation about this on X over the last couple days and the `Accept` request header including "text/markdown, text/plain" has emerged as kind of a new standard for AI agents requesting content such that they don't burn unnecessary inference compute processing HTML attributes and CSS.
- https://x.com/bunjavascript/status/1971934734940098971
- https://x.com/thdxr/status/1972421466953273392
- https://x.com/mintlify/status/1972315377599447390
keep us posted on how this change impacts your GEO!
The concept is called content negotiation. We used to do this when we wanted to serve our content as XHTML to clients preferring that over HTML. It's nice to see it return as I always thought it was quite cool.
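At its simplest, that negotiation is just a check on the Accept request header. A deliberately simplified sketch (the function name is made up, and a full implementation would honor q-values per RFC 9110):

```javascript
// Decide whether to serve Markdown based on the request's Accept header.
// Simplified: ignores q-values and wildcards; uses listed order as preference.
function prefersMarkdown(acceptHeader) {
  if (!acceptHeader) return false;
  // Strip parameters like ";q=0.9" and keep the bare media types.
  const types = acceptHeader.split(",").map(t => t.trim().split(";")[0]);
  const md = types.indexOf("text/markdown");
  const html = types.indexOf("text/html");
  // Serve Markdown only if the client lists it before (or without) text/html.
  return md !== -1 && (html === -1 || md < html);
}
```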
Agreed! I love that such a tried and true web standard is making a comeback because of AI.
Content negotiation is also good for choosing human languages; unfortunately, the browser interfaces for it are terrible.
I don’t understand why the agents requesting HTML can’t extract text from HTML themselves. You don’t have to feed the entire HTML document to your LLM. If that’s wasteful, why not have a little bit of glue that does some conversion?
Converting HTML into Markdown isn't particularly hard. Two methods I use:
1. The Jina reader API - https://jina.ai/reader/ - add r.jina.ai to any URL to run it through their hosted conversion proxy, eg https://r.jina.ai/www.skeptrune.com/posts/use-the-accept-hea...
2. Applying Readability.js and Turndown via Playwright. Here's a shell script that does that using my https://shot-scraper.datasette.io tool: https://gist.github.com/simonw/82e9c5da3f288a8cf83fb53b39bb4...
I learned that the golang CLI[1] is the best through my work simplifying Firecrawl[2]. However, in this case I used one available through npmjs such that it would work with `npx` for the CF worker builds.
[1]: https://github.com/JohannesKaufmann/html-to-markdown
[2]: https://github.com/devflowinc/firecrawl-simple
A lightweight alternative to Playwright, which starts a browser instance, is using an HTML parser and DOM implementation like linkedom.
This is much cheaper to run on a server. For example: https://github.com/ozanmakes/scrapedown
It's always better for the agent to have fewer tools and this approach means you get to avoid adding a "convert HTML to markdown" one which improves efficiency.
Also, I doubt most large-scale scrapers are running in agent loops with tool calls, so this is probably necessary for those at a minimum.
This does not make any sense to me. Can you elaborate on this?
It seems “obvious” to me that if you have a tool which can request a web page, you can make it so that this tool extracts the main content from the page’s HTML. Maybe there is something I’m missing here that makes this more difficult for LLMs, because before we had LLMs, this was considered an easy problem. It is surprising to me that the addition of LLMs has made this previously easy, efficient solution somehow unviable or inefficient.
I think we should also assume here that the web site is designed to be scraped this way—if you don’t, then “Accept: text/markdown” won’t work.
If you have a website and you're optimizing it for GEO, you can't assume that the agents are going to have the glue. So as the person maintaining the website you implement as much of the glue as possible.
That sounds completely backwards. It seems, again, obvious to me that it would be easier to add HTML-to-Markdown converters to agents, given that there are orders of magnitude more websites out there than agents.
If your agent sucks so bad that it isn’t capable of consuming HTML without tokenizing the whole damn thing, wouldn’t you just use an agent that isn’t such a mess?
This whole thing kinda sounds crazy inefficient to me.
I don't think it's about including this as a tool, just as general preprocessing before the agent even gets the text.
Well that's what I implemented. There are markdown docs for every HTML file and the proxy decides to serve either markdown or HTML based on the Accept header.
I think GP meant on the client, i.e. agent side. As in, you could deploy this kind of proxy in a forward/non-reverse way inside the agent system, so the LLM always gets markdown, regardless of what the site supports.
There is no real reason to pass HTML with tags and all to the LLM - you can just strip the tags beforehand.
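A crude version of that stripping, assuming only plain text extraction is needed (real scrapers usually reach for a DOM parser plus something like Readability instead of regexes):

```javascript
// Strip tags, scripts, and styles from HTML to get plain text for an LLM.
// Regex-based and deliberately naive: fine as a preprocessing sketch, but
// a DOM parser is the robust choice for real pages.
function htmlToText(html) {
  return html
    .replace(/<(script|style)[\s\S]*?<\/\1>/gi, "") // drop non-content blocks
    .replace(/<[^>]+>/g, " ")                       // remove remaining tags
    .replace(/&lt;/g, "<")                          // decode a few common
    .replace(/&gt;/g, ">")                          // entities (&amp; last,
    .replace(/&amp;/g, "&")                         // to avoid double-decode)
    .replace(/\s+/g, " ")                           // collapse whitespace
    .trim();
}
```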
We’re doing this on https://rivet.dev now. I did not realize how much context bloat we had since we were using Tailwind.
It is crazy how badly Tailwind bloats HTML. Tradeoffs!
Or one can just use semantic HTML; it's easy enough to convert semantic HTML into markdown with a tool like pandoc. That would also help screen readers, browser "reader modes", text-based web browsers, etc.
The OpenAI cookbook says LLMs understand XML better than Markdown, so maybe serve that instead? It would be more specified and structured than Markdown, while still not being full HTML.
OpenAI cookbook says LLMs understand XML better than Markdown text.
Yes, for prompts. Given how little XML is out on the public internet, it'd be surprising if that also applied to data ingested by web-scraping functions. It'd be odd if Markdown worked better than HTML, to be honest, but maybe Markdown also changes the content being served, e.g. no menu, header, or footer sent with the body content.
Maybe adopt the existing Gemini Protocol instead? It's already a nice very simple markdown-like.
https://toffelblog.xyz/blog/gemini-overview/ https://news.ycombinator.com/item?id=23730408
https://gemini.circumlunar.space/ https://news.ycombinator.com/item?id=23042424
FYI, both the toffelblog link and circumlunar.space are broken with SSL errors.