Install.md: A standard for LLM-executable installation

(mintlify.com)

75 points | by npmipg 13 hours ago

49 comments

  • petekoomen 10 hours ago

    I'm seeing a lot of negativity in the comments. Here's why I think this is actually a Good Idea. Many command line tools rely on something like this for installation:

      $ curl -fsSL https://bun.com/install | bash
    
    This install script is hundreds of lines long and difficult for a human to audit. You can ask a coding agent to do that for you, but you still need to trust that the authors haven't hidden some nefarious instructions for an LLM in the middle of it.

    On the other hand, an equivalent install.md file might read something like this:

    Install bun for me.

    Detect my OS and CPU architecture, then download the appropriate bun binary zip from GitHub releases (oven-sh/bun). Use the baseline build if my CPU doesn't support AVX2. For Linux, use the musl build if I'm on Alpine. If I'm on an Intel Mac running under Rosetta, get the ARM version instead.

    Extract the zip to ~/.bun/bin, make the binary executable, and clean up the temp files.

    Update my shell config (.zshrc, .bashrc, .bash_profile, or fish config.fish, depending on my shell) to export BUN_INSTALL=~/.bun and add the bin directory to my PATH. Use the correct syntax for my shell.

    Try to install shell completions. Tell me what to run to reload my shell config.
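    A rough shell sketch of the target-selection logic the prose describes (illustrative only: the asset names follow the oven-sh/bun GitHub release naming, and the Rosetta and Alpine/musl checks from the prose are omitted for brevity):

```shell
# Map OS/arch/AVX2 support to a bun release asset name (sketch).
# avx2 is "1" if the CPU supports AVX2, "0" otherwise.
bun_target() {
  os="$1" arch="$2" avx2="$3"
  case "$os-$arch" in
    Darwin-arm64)  echo "bun-darwin-aarch64" ;;
    Darwin-x86_64) echo "bun-darwin-x64" ;;
    Linux-aarch64) echo "bun-linux-aarch64" ;;
    Linux-x86_64)
      # Baseline build for CPUs without AVX2, as the prose instructs.
      if [ "$avx2" = "1" ]; then echo "bun-linux-x64"
      else echo "bun-linux-x64-baseline"; fi ;;
    *) echo "unsupported: $os-$arch" >&2; return 1 ;;
  esac
}

# Typical call site: bun_target "$(uname -s)" "$(uname -m)" "$avx2"
```

    Everything else in the prose (downloading the zip, extracting to ~/.bun/bin, PATH setup) would hang off the value this returns.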

    It's much shorter and written in English, and as a user I know at a glance what the author is trying to do. In contrast with install.sh, install.md makes it easy for the user to audit the intentions of the programmer.

    The obvious rebuttal to this is that if you don't trust the programmer, you shouldn't be installing their software in the first place. That is, of course, true, but I think it misses the point: coding agents can act as a sort of runtime for prose, and for a user the loss in determinism and efficiency this implies is more than made up for by the gain in transparency.

    • cuu508 5 hours ago

      IMO it's completely the other way around.

      Shell scripts can be audited. The average user may not do it due to laziness and/or ignorance, but it is perfectly doable.

      On the other hand, how do you make sure your LLM, a non-deterministic black box, will not misinterpret the instructions in some freak accident?

      • nobodywillobsrv 4 hours ago

        How about the best of both worlds?

        Instead of asking the agent to execute it for you, you ask the agent to write an install.sh based on the install.md?

        Then you can audit whatever you want before deciding whether to run it.
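        A sketch of that flow (the generation step is a hypothetical invocation of an agent CLI; the gate below is just one minimal example of "audit before running"):

```shell
# Hypothetical generation step (any agent CLI would do; `claude -p` here is
# only illustrative):
#   curl -fsSL https://example.com/install.md \
#     | claude -p "Write a POSIX sh script implementing these instructions" \
#     > install.sh
#
# However install.sh was produced, gate execution on a basic sanity check
# before (or alongside) a human read-through:
check_script() {
  sh -n "$1" || return 1              # must at least parse as POSIX sh
  if grep -q 'rm -rf /' "$1"; then    # crude red-flag scan; extend as needed
    return 1
  fi
  return 0
}
```

        The point is that whatever the LLM emits becomes an ordinary artifact you can lint, diff, and review with existing tools.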

        • chme 2 hours ago

          So... What you are saying is that we don't need 'install.md', because a developer can just use an LLM to generate an 'install.sh', validate it, and put it into the repo?

          Good idea. That seems sensible.

          Bonus: the LLM is only used once, not every time anyone wants to install some software. With some risk of having to regenerate if the output is nonsensical.

        • franga2000 an hour ago

          And since LLM tokens are expensive and generation is slow, how about we cache that generated code on the server side, so people can just download the pre-generated install.sh? And since not everyone can be bothered to audit LLM code, the publisher can audit and correct it before publishing, so we're effectively caching and deduplicating the auditing work too.

        • catlifeonmars 3 hours ago

          This is much better. Plus you get reproducibility and can leverage the AI for more repeat performances without expending more tokens.

    • jedwhite 9 hours ago

      Thanks for posting the original ideas that led to all this. "Runtime for prose" is the new "literate programming" - early days but a pointer to some pretty cool future things, I think.

      It's already made a bunch of tasks that used to be time-consuming to automate much easier for me. I'm still learning where it does and doesn't work well. But it's early days.

      You can tell something is a genuinely interesting new idea when someone posts about it on X and then:

      1. There are multiple launches on HN based on the idea within a week, including this one.

      2. It inspires a lot of discussion on X, here and elsewhere - including many polarized and negative takes.

      Hats off for starting a (small but pretty interesting) movement.

    • catlifeonmars 4 hours ago

      This seems less auditable though, because now there is more variability in the way something is installed. Now there are two layers to audit:

      - What the agent is told to do in prose

      - How the agent interprets those instructions with the particular weights/contexts/temperature at the moment.

      I’m all for the prose idea, but wouldn’t want to trade determinism for it. Shell scripts can be statically analyzed. And also reviewed. Wouldn’t a better interaction be to use an LLM to audit the shell script, then hash the content?
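      A minimal sketch of that audit-then-pin idea (assumes GNU coreutils `sha256sum`; on macOS you'd substitute `shasum -a 256`):

```shell
# Run a downloaded install script only if it matches a hash that you (or an
# LLM audit pass you trust) approved earlier. The pinned hash would live in
# your dotfiles, a lock file, etc.
install_pinned() {
  script="$1" expected="$2"
  actual=$(sha256sum "$script" | awk '{print $1}')
  if [ "$actual" = "$expected" ]; then
    sh "$script"
  else
    echo "hash mismatch for $script: refusing to run" >&2
    return 1
  fi
}
```

      The audit happens once per script version; every later run is deterministic and free.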

      • petekoomen 3 hours ago

        Yes, this approach (substituting a markdown prompt for a shell script) introduces an interesting trade-off between "do I trust the programmer?" and "do I trust the LLM?" I wouldn't be surprised to see prompt-sharing become the norm as LLMs get better at following instructions and people get more comfortable using them.

        • catlifeonmars 3 hours ago

          The tradeoff is kind of like asking what flavor of bubblegum you would rather be chewing when you get hit by a bus.

          I hear you, and I can see the pragmatism of your approach. I’m just not convinced that it’s better.

    • smaudet 10 hours ago

      > This install script is hundreds of lines long

      Any script can be shortened by hiding commands in other commands.

      LLMs run parameters in the billions.

      Lines of code, as usual, is an incredibly poor metric to go by here.

      • petekoomen 10 hours ago

        My point is not that LLMs are inherently trustworthy. It is that a prompt can make the intentions of the programmer clear in a way that is difficult to do with code because code is hard to read, especially in large volumes.

        • catlifeonmars 4 hours ago

          I'm not sure I agree with you that code is hard to read. I usually go straight to the source code, as it communicates precisely how something will behave. Well-written code, like well-written prose, can also communicate intent effectively.

          • chme 2 hours ago

            TBH, I've never read prose that couldn't be in some way misinterpreted or misunderstood, because much of it is context-sensitive.

            That is why we have programming languages, they, coupled with a specific interpreter/compiler, are pretty clear on what they do. If someone misunderstands some specific code segment, they can just test their assumptions easily.

            You cannot do that with just written prose; you would need to ask the writer of that prose to clarify.

            And with programming languages, the context is contained, and clearly stated, otherwise it couldn't be executed. Even undefined behavior is part of that, if you use the same interpreter/compiler.

            Also humans often just read something wrong, or skip important parts. That is why we have computers.

            Now, I wouldn't trust an LLM to execute prose any better than I trust a random human to read some how-to guide and follow it.

            The whole idea that we now add more documentation to our source code projects so that dumb AI can make sense of it is interesting... Maybe generally useful for humans as well... But I would instead target humans, not LLMs. If an LLM finds it useful as well, great. But I wouldn't try to 'optimize' my instructions so that every LLM doesn't just fall flat on its face. That seems like a futile effort.

    • PunchyHamster 4 hours ago

      you assume 2 things: that the instructions will be followed correctly, and that the way they are followed won't change when the agent changes

      Neither of those things is actually true

      People who got their home dir removed by an AI agent did not ask for their home dir to be removed by AI

    • blast 9 hours ago

      Why the specific application to install scripts? Doesn't your argument apply to software in general?

      (I have my own answer to this but I'd like to hear yours first!)

      • petekoomen 9 hours ago

        It does, and possibly this launch is a little window into the future!

        Install scripts are a simple example that current generation LLMs are more than capable of executing correctly with a reasonably descriptive prompt.

        More generally, though, there's something fascinating about the idea that the way you describe a program can _be_ the program, which tbh I haven't fully wrapped my head around, but it's not crazy to think that in time more and more software will be exchanged by passing prompts around rather than compiled code.

        • chme an hour ago

          TBH, I doubt that this will happen...

          It is much easier to use LLMs to generate code, validate that code as a developer, fix it if necessary, and check it into the repo, than to have every user send prompts to LLMs in order to get code they can actually execute.

          While hoping it doesn't break their system and does what they wanted from it.

          Also... that just doesn't scale. How much power would we need if everyday computing starts with a BIOS sending prompts to LLMs in order to generate an operating system it can use?

          Even if it is just about installing stuff... We have CI runners, that constantly install software often on every build. How would they scale if they need LLMs to generate install instructions every time?

        • 4b11b4 8 hours ago

          > "the way you describe a program _can_ be the program"

          One follow-up thought I had was... It may actually be... more difficult(?) to go from a program to a great description

          • dang 7 hours ago

            That's a chance to plump for Peter Naur's classic "Programming as Theory Building"!

            https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...

            https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...

            What Naur meant by "theory" was the mental model of the original programmers who understood why they wrote it that way. He argued the real program is the theory, not the code. The translation of the theory into code is lossy: you can't reconstruct the former from the latter. Naur said that this explains why software teams don't do as well when they lose access to the original programmers, because they were the only ones with the theory.

            If we take "a great description" to mean a writeup of the thinking behind the program, i.e. the theory, then your comment is in keeping with Naur: you can go one way (theory to code) but not the other (code to theory).

            The big question is whether/how LLMs might change this equation.

            • chme 2 hours ago

              Even getting the "theory" down on paper as prose will be lossy.

              And natural languages are open to interpretation, and a lot of context will remain unmentioned, while programming languages, together with their tested environment, contain the whole context.

              Instrumenting LLMs will also mean doing a lot of prompt engineering, which on one hand might make the instructions clearer (for the human reader as well), but on the other will likely not transfer as much of the theory behind why each decision was made. Instead, it will likely focus on copy&paste guides that don't require much understanding of why something is done.

        • blast 9 hours ago

          That's basically what I was thinking too: installation is a constrained domain with tons of previous examples to train on, so current agents should be pretty good at it.

    • Szpadel 5 hours ago

      imagine such a support ticket:

      I used minimax M2 (context: it's very unreliable) for installation and it didn't work and my document folder is missing, help

      how do you even debug this? imagine some path or behaviour changes in a new OS release and the model thinks it knows better? if anything goes wrong, who is responsible?

      • chme 2 hours ago

        Maybe that is a reason for this approach: it shifts the responsibility for errors from the person writing the code to the one executing it.

        Pretty brilliant in a way.

    • jen20 4 hours ago

      This seems like an incredibly long-winded, risky, and inefficient way to install bun.

      I've never actually (knowingly) run Bun before, but decided to give it a try - below is my terminal session to get it running (on macOS):

          $ nix-shell -p bun
          
          [nix-shell:~]$ bun
          Bun is a fast JavaScript runtime, package manager, bundler, and test
          runner. (1.3.5+1e86cebd7)
          
          Usage: bun <command> [...flags] [...args]
          
          Commands:
            run       ./my-script.ts       Execute a file with Bun
                      lint                 Run a package.json script
          ... (rest of output trimmed)...
      
      
      (Edited to wrap a long preformatted line)
  • boesboes 3 hours ago

    This seems like a very, very bad idea. If we don’t like curling into bash, then this is infinitely worse imo. Just use package management and/or some proper dependency management system

  • jedwhite 10 hours ago

    I shared a repo on HN last week that lets you use remote execution with these kinds of script files autonomously - if you want to. It had some interesting negative and positive discussion.

    The post mentioned Pete Koomen's install.md idea as an example use case. So now with this launch you can try it with a real installation script!

    I think it's a really interesting idea worth experimentation and exploration. So it's a positive thing to see Mintlify launch this, and that it's already on Firecrawl.dev's docs!

    We can all learn from it.

    Show HN discussion of executable markdown here:

    https://news.ycombinator.com/item?id=46549444

    The claude-run tool lets you execute files like this autonomously if you want to experiment with it.

        curl -fsSL https://docs.firecrawl.dev/install.md | claude-run --permission-mode bypassPermissions
    
    Github repo:

    https://github.com/andisearch/claude-switcher

    This is still a very early-stage idea, but I'm really stoked to see this today. For anyone interested in experimenting with it, it's a good idea to try in a sandboxed environment.

  • andai 10 hours ago

    I'm thinking: isn't that what a readme is? But I guess these days, due to GitHub, the readme is the entire project homepage, and the install instructions are either hidden somewhere in there (hopefully near the top!) or in a separate installation.md file.

  • dddrh 7 hours ago

    Hey, I had a similar idea around skipping the “brew/bun install” copy+paste on a site and instead just giving a short prompt to have the LLM do the work.

    I like the notion of having install.md be the thing that is referenced in Prompt to Install on the web.

    Edit: forgot my link https://dontoisme.github.io/ai/developer-tools/ux/2025/12/27...

  • oftenwrong 12 hours ago

    What is the benefit of having this be a standard? Can't an agent follow a guide just as easily in a document with similar content in a different structure?

    • skeptrune 12 hours ago

      Primarily, it's a predictable location for agents. The AI not having to fetch the sitemap or llms.txt and then make a bunch of subsequent queries saves a lot of time and tokens. There's an advantages section[1] within the proposal docs.

      [1]: https://www.installmd.org/#advantages

  • bigbuppo 12 hours ago

    I feel like I should create a project called 'Verify Node.js v20.17.0+' that is totally not malware.

  • utopiah 3 hours ago

    Yes... yes let's make tasks we rely on LESS predictable.

    Sorry but what the heck?

    We should NOT standardize irresponsible behavior, in particular for repeatable tasks. This is particularly maddening when solutions like dependency resolution, containers, and distribution of self-contained binaries DO exist.

    I understand that the hype machine must feed on yet another idea to keep its momentum but this is just ridiculous.

  • 0o_MrPatrick_o0 12 hours ago

    Author should explore Ansible/Puppet/Chef.

    I’m not sure this solution is needed with frontier models.

    • skeptrune 12 hours ago

      Can you explain more? I see how those relate to a very limited extent, but I'm not getting your entire vision.

      • verdverm 4 hours ago

        Installing software should be deterministic and auditable. We have many decades of tool building in devops to facilitate this. It's bonkers to throw that all out for Markdown and LLMs.

        Instead, have your LLMs write inputs to those tools. It's an easier task for them anyway, and they only have to do it once; then you just run it

  • ollien 10 hours ago

    I don't love the concept, but I do wonder if it could be improved by using a skill that packages an install script plus context for troubleshooting. That way you have the benefits of using an install script, and at least a way to provide pointers for those unfamiliar with the underlying tooling.

  • _pdp_ 24 minutes ago

    I mean, this is what? Feeding a prompt to claude. It could be any other file.

    llms.txt makes sense as a standard but this is unnecessary.

  • JoshPurtell 12 hours ago

    At some point in the future (if not already), claude will install malware less often on average. Just like waymos crash less frequently.

    Once you accept that installation will be automated, standardized formats make a lot of sense. Big q is whether this particular format, which seems solid, gets adopted - probably mostly a timing question

  • rarisma 11 hours ago

    Great, I can now combine the potential maliciousness of a script with the potential vulnerabilities of an AI Agent!

    Jokes aside, this seems like a really weird thing to leave to agents; I'm sure it's definitely useful, but how exactly is this more secure? A bad actor could just prompt-inject claude (an issue I'm not sure can ever be fixed with our current model of LLMs).

    And surely this is significantly slower than a script; claude can take 10-20 seconds to check the node version, if not longer with human approval for each command, while a script could do that in milliseconds.

    Sure it could help it work on more environments, but stuff is pretty well standardised and we have containers.

    I think this part in the FAQ wraps it up neatly:

    """ What about security? Isn't this just curl | bash with extra steps? This is a fair concern. A few things make install.md different:

        Human-readable by design. Users can review the instructions before execution. Unlike obfuscated scripts, the intent is clear.
    
        Step-by-step approval. LLMs in agentic contexts can be configured to request approval before running commands. Users see each action and can reject it.
    
        No hidden behavior. install.md describes outcomes in natural language. Malicious intent is harder to hide than in a shell script.
    
    Install.md doesn't eliminate trust requirements. Users should only use install.md files from sources they trust—same as any installation method. """

    So it is just curl with extra steps; scripts aren't obfuscated, you can read them. If they are obfuscated, then they aren't going to use an install.md, and you (the user) should really think thrice before installing.

    Step-by-step approval also sorta betrays the initial bit about leaving installing stuff to AI instead of wasting time reading instructions.

    Malicious intent is harder to hide, but really, if you have any doubt in your mind about an author's potential malfeasance, you shouldn't be running their code; wrapping claude around this doesn't make it any safer when possible exploits and malware are likely baked into the software you are trying to install, not the installer.

    tldr; why not just have @grok is this script safe?

    Ten more glorious years to installer.sh

    • skeptrune 9 hours ago

      This is some really fantastic feedback, thank you!

      I personally think that prose is significantly easier to read than complex bash and there are at least some benefits to it. They may not outweigh the cons, but it's interesting to at least consider.

      That said, this is a proposal and something we plan to iterate on. Generating install.sh scripts instead of markdown is something we're at least thinking about.

  • arianvanp 10 hours ago
  • imiric 12 hours ago

    Here's a proposal: app.md. A structured text file with everything you want your app to do.

    That way we can have entire projects with nothing but Markdown files. And we can run apps with just `claude run app.md`. Who needs silly code anyway?

    • forgotpwd16 10 minutes ago

      Some are already doing it with scripts: https://github.com/andisearch/claude-switcher. Install.md is just specialization of this.

    • chme an hour ago

      Well... Maybe just have a BIOS on your system that fetches a markdown file and pushes it to an LLM to generate a new and exciting operating system for you on every boot.

      Wouldn't that be nice?

    • lcnmrn 5 hours ago

      It will produce a different app every single time. :)

      • gitaarik 4 hours ago

        Sounds like fun!

    • Eisenstein 3 hours ago

      Why bother with the app at all? Just ask for the end result.

  • dang 7 hours ago

    [stub for offtopicness]

    Since the article has been changed to tone down its provocative opener, which clearly had a kicking-the-anthill effect, I'm moving those original reactions to this subthread.