The Heart of Unix (2018)

(ericnormand.me)

113 points | by kugurerdem 12 hours ago ago

55 comments

  • chubot 3 hours ago ago

    I generally agree with this article in that PROGRAMMABILITY is the core of Unix, and it is why I've been working on https://www.oilshell.org/ for many years

    However I think the counterpoint is maybe a programming analog of Doctorow's "The Coming Civil War over General Purpose Computing"

    I believe the idea there was that we would all have iPads and iPhones, with content delivered to us, but we would not have the power to create our own content, or do arbitrary things with computers

    I think some of that has come to pass, at least for some fairly large portions of the population

    (though people are infinitely creative -- I found this story of people writing novels on their phones with Google Docs, and selling them via WhatsApp, interesting and cool - https://theweek.com/culture-life/books/the-rise-of-the-whats... )

    ---

    The Unix/shell version of that is that valuable and non-trivial logic/knowledge will be hidden in cloud services, often behind a YAML interface.

    And your job is now to LLM the YAML that approximates what you want to do

    Not actually doing any programming, the kind of work that can lead to adjacent thoughts that the cloud/YAML owners didn't think of

    In some cases there is no such YAML, or it's been trained out of the LLM, so you can't think that thought

    ---

    There's an economic sense to this, in some ways, but personally I don't want to live in that world :)

    • syndicatedjelly 2 hours ago ago

      I see your concern, but don't think it's anything to be worried about. Is an electrician's job at risk because homeowners can purchase wiring and outlets from a big box store and tap a new outlet in their home? Are mechanics worried about people who do oil changes at home?

      There will always be a demand for skilled labor, but the definition of "skilled" is going to continue changing over time. That's a good sign, it means that the field is healthy and growing.

      • pjmlp 2 hours ago ago

        In many countries, insurance won't pay out if something goes wrong with work that wasn't done by a professional electrician or mechanic.

  • coliveira 9 hours ago ago

    The biggest disadvantage of the shell is that, by exchanging data as text, you lose opportunities to check for errors in the output. If you call a function in a programming language and it produces an erroneous output, you get a crash or an exception. In a shell, you'll get empty lines or, worse, incorrect lines that will propagate to the rest of the script. This makes it impractical to write large scripts, and debugging them gets more and more complicated. The shell works well for a few lines of script; any more than that and it becomes a frustrating experience.
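
    For example, a minimal bash sketch of that failure mode (the log path is hypothetical, and deliberately misspelled):

        #!/bin/bash
        # grep fails on the missing file, but instead of a crash or an
        # exception we just get empty output that propagates downstream
        count=$(grep -c ERROR /var/log/app.lgo 2>/dev/null)
        echo "found $count errors"   # prints "found  errors" and exits 0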

    • throw10920 4 hours ago ago

      > The biggest disadvantage of the shell is that, by exchanging data using text, you lose opportunities to check for errors in the output.

      That's pretty bad, but isn't the complete lack of support for structured data an even bigger one? After all, if you can't even represent your data, then throwing errors is kind of moot.

      • chubot 4 hours ago ago

        Oils/YSH has structured data and JSON! This was finished earlier this year

        Garbage Collection Makes YSH Different (than POSIX shell, awk, cmake, make, ...) - https://www.oilshell.org/blog/2024/09/gc.html

        You need GC for arbitrary recursive data structures, and traditionally Unix didn't have languages with GC.

        Lisp was the first GC language and pre-dated Unix; Java later made GC popular, but Java was not integrated with Unix (it wanted to be its own OS)

        ----

        So now you can do

            # create some JSON
            ysh-0.23.0$ echo '{"foo":[1,2,3]}' > x.json
        
            # read it into the variable x -- you will get a syntax error if it's malformed
            ysh-0.23.0$ json read (&x) < x.json
        
        
            # pretty print the resulting data structure, = comes from Lua
            ysh-0.23.0$ = x
            (Dict)  {foo: [1, 2, 3]}
        
            # use it in some computation
            ysh-0.23.0$ var y = x.foo[1]
            ysh-0.23.0$ = y
            (Int)   2
        • throw10920 3 hours ago ago

          Structured shells are neat and I love them, but the Unix philosophy is explicitly built around plain text - the "structured" part of structured shells isn't Unixy.

          • chubot 3 hours ago ago

            It's not either-or -- I'd think of it as LAYERED

            - JSON denotes a data structure, but it is also text - you can use grep and sed on it, or jq

            - TSV denotes a data structure [1], but it is also text - you can use grep on it, or xsv or recutils or ...

            (on the other hand, protobuf or Apache Arrow are not text, and you can't use grep on them directly. But that doesn't mean they're bad or not useful, just not interoperable in a Unix style. The way you use them with Unix is to "project" them onto text)

            etc.
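
            For example, a small sketch of working on the same JSON file at both layers (assuming jq is installed):

                # the text layer: JSON is still just bytes and newlines, so grep works
                $ echo '{"user": "alice", "ids": [1, 2, 3]}' > data.json
                $ grep -c alice data.json
                1

                # the structured layer: jq parses the same bytes into real data
                $ jq '.ids[1]' data.json
                2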

            That is the layered philosophy of Oils, as shown in this diagram - https://www.oilshell.org/blog/2022/02/diagrams.html#bytes-fl...

            IMO this is significantly different and better than say PowerShell, which is all about objects inside a VM

            what I call "interior vs. exterior"

            processes and files/JSON/TSV are "exterior", while cmdlets and objects inside a .NET VM are "interior"

            Oils Is Exterior-First (Code, Text, and Structured Data) - https://www.oilshell.org/blog/2023/06/ysh-design.html

            ---

            [1] Oils fixes some flaws in the common text formats with "J8 Notation", an optional and compatible upgrade. Both JSON and TSV have some "text-y" quirks, like the UTF-16 legacy and the inability to represent tabs

            So J8 Notation cleans up those rough edges, and makes them more like "real" data structures with clean / composable semantics

      • enriquto an hour ago ago

        > if you can't even represent your data

        any data can be represented as text

    • chasil 7 hours ago ago

      At the same time, the POSIX shell can be implemented in a tiny binary (dash compiles to 80k on i386).

      Shells that implement advanced objects and error handling cannot sink this low, and thus the embedded realm is not accessible to them.

      • pjmlp 2 hours ago ago

        Sure they can; Smalltalk and Lisp environments didn't have the luxury of 80k when they were invented.

        • amszmidt an hour ago ago

          No, they had the luxury of having much more.

      • kragen 5 hours ago ago

        that's dramatically larger than any pdp-11 executable, including the original bourne shell, and also, for example, xlisp, which was an object-oriented lisp for cp/m

        advanced objects and error handling do not require tens of kilobytes of machine code. a lot of why the bourne shell is so error-prone is just design errors, many of them corrected in es and rc

        • ronjakoi 4 hours ago ago

          What are es and rc? Can you give some links?

          • chubot 3 hours ago ago

            Search for "rc shell" and "es shell"

            https://en.wikipedia.org/wiki/Rc_(Unix_shell)

            https://wryun.github.io/es-shell/

            They are alternative shells, both from the 90's I believe. POSIX was good in some ways, but bad in that it froze a defective shell design

            It has been acknowledged as defective for >30 years

            https://www.oilshell.org/blog/2019/01/18.html#slogans-to-exp...

            ---

            es shell is heavily influenced by Lisp. And actually, I just wrote a comment saying that my project YSH has garbage collection, but the es shell paper already has a nice section on garbage collection (which is required for Lisp-y data structures)

            And I took some influence from it

            Trivia: one of the authors of es shell, Paul Haahr, went on to be a key engineer in the creation of Google

    • userbinator 8 hours ago ago

      The && and || operators let you branch on errors.
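
      For example (the file names here are made up):

          # run the next step only if the previous one succeeded
          tar -xzf release.tgz && ./configure

          # take a fallback branch when a command fails
          grep -q '^root:' /etc/passwd || echo "no root entry?"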

    • theamk 9 hours ago ago

      that's why rule #1 of robust shell scripts is "set -e", exit on any error. It's not perfect, but it helps with most errors.

      • mananaysiempre 8 hours ago ago

        set -euo pipefail, if you're OK with the Bashism.

        • thristian 6 hours ago ago

          Since POSIX 2024, `set -o pipefail` is no longer a bashism!
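
          So a robust-script preamble can now look like this sketch (older shells such as dash will still reject the pipefail line):

              #!/bin/sh
              # -e: exit on error, -u: error on unset variables
              set -eu
              # fail a pipeline if any stage fails (POSIX 2024, bash, zsh)
              set -o pipefail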

    • hggigg 9 hours ago ago

      It's even worse than that. Most non-trivial scripts and one-liners, and even some trivial ones, rely on naive parsing (regex/cut etc.) because that's the only tool in the toolbox. This has resulted in some horrific problems over the years.

      I take a somewhat hard line that scripts and terminals are for executing sequential commands naively only. Call it "glue". If you're writing a program, use a higher level programming language and parse things properly.

      This problem of course does tend to turn up in higher level languages but at least you can pull a proper parser in off the shelf there if you need to.

      Notably if I see anyone parsing CSVs with cut again I'm going to die inside. Try unpicking a problem where someone put in the name field "Smith, Bob"...

      • Bluecobra 5 hours ago ago

        > Notably if I see anyone parsing CSVs with cut again I'm going to die inside. Try unpicking a problem where someone put in the name field "Smith, Bob"...

        How do you tackle this? Would you count the number of commas in each line and then manually fix the lines that contain extra fields?

        • ReleaseCandidat 3 hours ago ago

          Yes, as in "check the number of parsed fields for each line", and don't forget about empty fields. Throw an error and stop the program if the number of columns isn't consistent. That doesn't mean you can't parse the whole file and output all of the errors at once (which is the preferred way, we don't live in the 90s any more ;), just don't process the wrong result. And with usable error messages, not just "invalid line N".
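
          A rough sketch of that check for tab-separated data (the expected count of 5 and the file name are just examples; real CSV with quoted commas still needs a proper parser):

              # report every line whose field count is off, then refuse to proceed
              awk -F'\t' 'NF != 5 {
                  printf "line %d: expected 5 fields, got %d\n", NR, NF
                  bad = 1
              }
              END { exit bad }' data.tsv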

        • harry8 4 hours ago ago

          http://www.catb.org/~esr/writings/taoup/html/ch05s02.html

          paragraph titled DSV style. (Yeah esr, not a fan, whatever...)

          CSV sucks no matter what; there is no single CSV spec. Even if you assume the file is "MS Excel style CSV", you can't validate that it conforms. The libraries do a bunch of things to cope with at least some of this that you will not replicate with cut or an awk one-liner.

      • aadhavans 4 hours ago ago

        What if you had constraints on the CSV files? Suppose you knew that they don't contain spaces, for example. In that case, I don't see the problem with using UNIX tools.

        • throw10920 4 hours ago ago

          Then you're not actually processing the CSV format, you're processing a subset of it. You'll also likely bake that assumption into your system and forget about it, and then potentially violate it later.

          Well-defined structured data formats, formal grammars, and parsers exist for a reason. Unix explicitly eschews that in favor of the fiction of "plain text", which is not a format for structured data by definition.

      • chasil 6 hours ago ago

        You might have enjoyed DCL under VMS.

        It did not immediately succumb to envy of the Korn shell.

  • gavinhoward 4 hours ago ago

    > I hope to see more "sugar" in languages to take advantage of calling out to other programs for help.

    How about [1] and [2]?

    My language has those because its first program was its own build script, which requires calling out to a C compiler. It had that before printing to stdout.

    Turns out, that made it far more powerful than I imagined without a standard library. Calling out to separate programs is far better than a standard library.

    [1]: https://git.yzena.com/Yzena/Yc/src/commit/95904ef79701024857...

    [2]: https://git.yzena.com/Yzena/Yc/src/commit/95904ef79701024857...

  • mmcgaha 10 hours ago ago

    I always tell people the best way to learn how to use linux is to read The Unix Programming Environment.

    • anthk 9 hours ago ago

      Perl superseded it for almost all of the chapters, except for the C ones (although for small programs, it surely superseded those too).

      Perl used to ship with an AWK-to-Perl converter (a2p) because most of the language could be mapped 1:1 to Perl.

      UPE would be fine under 9front save for sh (rc) and make (mk).

      • buescher 8 hours ago ago

        I liked awk, and perl was even better where more structured (I know, I know) constructs were comfy or I needed Perl DBI (which was awesome; what do people use now?), but that was a while ago. Sort of nuts that awk is much faster on really big columnar (CSV etc.) data, though.

        • cafard 6 hours ago ago

          >> what do people use now?

          Well, sometimes Perl DBI. But the young seem to learn Python about the time they get their drivers' licenses, and some unfortunate among them will inherit my code, so these days I use more psycopg or cx_Oracle (the latter now superseded, yes).

  • emmelaich 4 hours ago ago

    Nice article.

    The criticism of the file system as overly simple or archaic has often been made, ever since the 70s. However, the fact is that it IS usable as a base for ACID-capable software. Plenty of real-world evidence attests to that.

    I remember in Rochkind's book[0] there is a quote criticising Unix as inferior to IBM's MVS because it didn't have locking. As Rochkind retorts, MVS didn't either! Not as a kernel feature; it was done via user-space software, which is eminently doable in Unix too.

    [0] https://www.oreilly.com/library/view/advanced-unix-programmi...

  • niobe 7 hours ago ago

    Great article. I was only just thinking this week, "are there really still only 3 channels?".

    But short of a massive overhaul and in spite of the shortcomings the current system still _works_ better than any other platform.

    I would like to see Unix stay relevant for the long term, however. It's possible these shortcomings mean that one day the trade-off against newer systems is no longer worth making, or that Unix ends up simply incompatible with them.

  • buescher 8 hours ago ago

    With image-capable terminals and funky enhanced cli utilities we are sort of slouching towards something like a CLIM listener or a notebook interface at the shell. What would something in that vein that was really, really nice look like?

  • metadat 7 hours ago ago

    > We see that languages like Perl and Python have huge numbers of libraries for doing all sorts of tasks. Those libraries are only accessible through the programming language they were developed for. This is a missed opportunity for the languages to interoperate synergistically with the rest of the Unix ecosystem.

    What would this interoperability look like, in practical terms?

    For example, how would you invoke a program in language A from language B, other than with the typical existing `system.exec(...)'?

    • paulddraper an hour ago ago

      The author is saying these libraries expose Perl/Python/etc functions that can only be invoked by Perl/Python/etc code.

      Whereas Unix functions (programs) can be invoked by any programming language.

      ---

      The C ABI would come in second place, as many languages can interact with it.
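
      A contrived sketch of the difference: the Python one-liner below is reused as just another Unix "function", via exec and pipes rather than a language binding:

          # sort (C) feeds a Python one-liner; neither side needs an FFI
          $ printf '3\n1\n2\n' | sort -n | python3 -c 'import sys; print(sum(int(x) for x in sys.stdin))'
          6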

    • jeremyjh 6 hours ago ago

      It's nonsense. They interoperate just as well as any other programs in UNIX. You can pipe stdin to them, pipe their output to other programs, or invoke the shell or other programs. The fact that they have libraries that don't require integration through text streams doesn't take anything away from the text-processing interfaces and programs. Shell scripts have their place, and UNIX is beautiful, but that doesn't mean everything has to work this way.

  • whartung 8 hours ago ago

    I'm on board with this.

    Unix is my favorite OS.

    I like that its fundamental unit of work is the process, and that, as users, we have ready access to those. Processes are cheap and easy.

    I can stack them together with a | character. I can shove them in the background with a & (or ^Z and bg, or whatever). Cron is simple. at(1) and batch(1) are simple.

    On the early machines I worked on, processes were a preallocated thing set up at boot. They weren't some disposable piece of work. You could do a lot with them, but it's not the same.

    Even when I was working on VMS, I "never" started new processes. Not like you do in Unix. Not ad hoc, "just for a second". No, I just worked directly with what I had. I could not compose new workflows readily out of processes.

    Processes give a lot of isolation and safety. If a process goes mad, it's (usually) easily killed with little impact to the overall system. Thus it's cheap and forgiving to mess up with processes.

    inetd was a great idea. Tie stdin/stdout to a socket. Anyone and their brother Frank could write a service managed by inetd -- in anything. CGI-BIN is the same way. The http server does the routing, the process manages the rest. Can you imagine shared hosting without processes? I shudder at the thought.
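
    A sketch of how little that takes (the service name and script are hypothetical; the service would also need an entry in /etc/services):

        # /etc/inetd.conf: inetd listens, accepts, forks the program,
        # and ties the socket to its stdin/stdout
        myecho  stream  tcp  nowait  nobody  /usr/local/bin/myecho.sh  myecho.sh

        # /usr/local/bin/myecho.sh: an entire network "service" in plain sh
        #!/bin/sh
        read line
        echo "you said: $line"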

    Binary processes are cheap too, with shared code segments making easy forks, fast startup, low system impact. The interpreters, of course, wrecked that whole thing. And, arguably, the systems were "fast enough" to make that impact low.

    But inetd, running binary processes? That is not a slow server. It can be faster (pre-forking, threads, dedicated daemons), but that combo is not necessarily slow. I think the sqlite folks basically do this with Fossil on their server.

    Note, I'm not harping on "one process, one thing", that's different. Turns out when processes are cheap and nimble, then that concept kind of glitters at the bottom of the pan. But that's policy, not capability.

    But the Unix system is just crazy malleable and powerful. People talk about a post-holocaust system, how they want something like CP/M cuz it's simple. But, really? What a horrific system! Yes, a "Unix-like system" is an order of magnitude more complex than something like CP/M. But it's far more than an order of magnitude more capable. It's worth the expense.

    Even something weak, like Coherent on a 286. Yea, it had its limitations, but the fundamentals were there. At the end of the world, just give me a small kernel, sh, vi, cc, and ld -- I can write the rest of the userland -- poorly :).

  • mustache_kimono an hour ago ago

    > Compare that to Clojure, where you constantly define and redefine functions at the REPL.

    It's an interactive shell FFS, does it get more REPL than that?!

    `set -x` is what you want brother.
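
    For example, in an interactive bash session (the final count is made up):

        $ set -x
        $ ls /tmp | wc -l
        + ls /tmp
        + wc -l
        42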

  • golly_ned 6 hours ago ago

    What does this article add to the countless others espousing the Unix model for exactly the same thing?

  • paulddraper an hour ago ago

    > Unix is homoiconic

    Wild, very cool

  • zzo38computer 2 hours ago ago

    Being a programmable environment is one of the good benefits of UNIX, and piping programs together is also a good benefit of UNIX.

    "Write programs that do one thing and do it well" and "Write programs to work together" are good ideas, too (unfortunately many programs don't).

    I think that using a text stream for everything is not the best idea though. In many cases binary formats will do better. I think XML and JSON are not that good either.

    I think "cache your compiler output to disk so you wouldn't have to do a costly compile step each time you ran a program" is a good idea, although this should not be required; REPL and other stuff they mention there is also very helpful.

    They say the file system is also old. My idea is a transactional hypertext file system. It doesn't have metadata (or even file names), but a file can contain multiple numbered forks and you can store extra data in there.

    (Transactional file system is something that I think is useful and that UNIX doesn't do.)

    They are also right that the terminal is old, although some of the newer things that some people have tried have different sets of problems.

    They also say another unfortunate thing is layering, and I agree that this layering is excessive.

    Interoperating without needing FFI is also helpful (and see below what I mention about typed initial messages, too).

    About the stuff listed in "Text streams, evolved", my idea of the operating system design, involves the "Common Data Format" (which is a binary format, somewhat like ASN.1 BER but different), and most data, including the command shell and most files, would use it; this also allows for common operations.

    I agree with "a program which displays all of the thumbnails of the files listed on stdin would be much more useful to me than a mouse-oriented file browser", and I do not have a GUI file browser anyways. I do use command-line programs for most things, even though I have X Windows to run some GUI programs and to be able to have multiple xterms at once (I often have many xterms at once). However, it could be improved as I describe above, too.

    They mention the shell. I agree that it could be greatly improved, and I think that it would go with the other improvements above. My operating system design effectively requires "programs as pure functions over streams of data" (although it is functions over "capabilities", and not necessarily "streams of data") due to the way the capability-based security works, and the way linking and capability passing work also allows things like higher-order functions and transformations and all of that stuff. My idea even involves message passing (all I/O is done by passing messages between capabilities), too.

    I had also considered programs that require types. One of the forks (like I mentioned above) of an executable file can specify the expected type of the initial message, and the command shell can use this to effectively make programs like functions that have types.

    Something they don't mention is security. That can also be improved; the capability-based security that I mention above, if you have proxy capabilities too, will improve it. There is also the possibility for users to use the command shell and write other programs to make up their own proxy capabilities, which allows programs to be used to do things that they were not necessarily designed to do, in addition to improving security. Instead of merely a user account, it might e.g. allow writing to only one file, or allow connecting to only one remote computer (without the program knowing which one it is, and perhaps even with data compression that the application program is unaware of), etc.

    I still think that, even if you have powerful computers, you should still program it efficiently anyways.

    The new one won't be UNIX; it will be something else.

    • Tor3 41 minutes ago ago

      Using text streams between piped-together processes is not a requirement, though. I use binary streams for some of the stuff I do, since I write simulators for some hardware (and other things) whose output gets processed by something else through a pipe or two (and may end up being parsed into text at or near the final point).
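
      A trivial sketch of the same idea with standard tools:

          # raw bytes flow through the pipe untouched; only the last stage
          # turns them into text for a human to read
          head -c 16 /dev/urandom | gzip -c | xxd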

  • anthk 11 hours ago ago

    Today the Unix philosophy is better realized in 9front than in the Unix clones themselves.

    >Functional + universal data structure + homoiconic = power

    If everything used TSV or tabular data, yes. But that is not the case. With Lisp you can always be sure.

    >I edit my entries in Emacs.

    Emacs can do dired (ls + vidir), eshell, rsync (maybe to S3 with an Emacs package + rclone), Markdown to HTML (and more via Org Mode), and tons more with Elisp. With Org you can basically define your blog, and with a little Elisp you could upload it when you finish a post.

    >21st Century Terminal

    Eshell, or Emacs itself.

    >. What if we take the idea of Unix programs as pure functions over streams of data a little further? What about higher-order functions? Or function transformations? Combinators?

    Hello Elisp. On combinators, maybe that shell from Dave from CCA. MPSH? https://www.cca.org/mpsh/

    • 9dev 9 hours ago ago

      Every time people praise Emacs like this, I wonder if I just don’t get it or they have an Emacs-shaped hammer and only see Emacs-shaped nails. Lots of braced nails, naturally.

      • skydhash 7 hours ago ago

        The nice thing about Emacs is the customization. Unix utilities like ls, grep, … are opaque blobs with switches. With Emacs, you have direct access to the functions and variables. Instead of praying for a switch or a configuration option, you just write or alter the code and integrate the pieces together.

        And while you have libraries and code access, Elisp is easier than the Unix way (writing c/go/rust/… programs or bash/perl/awk/python/… scripts), except for a few cases.

      • lotharcable 6 hours ago ago

        Emacs is unique because it is self-editable. That is, you can edit and modify the program from within the program in real time. There is a C-based core that can't be updated on the fly, but by and large Emacs is a self-mutable Lisp virtual machine that comes with a built-in editor and REPL.

        Depending on how you want to look at it, it is possible to say that the Emacs editor you use when you first install it is just the default application for the Elisp machine. This is why people talk about things like Org-Mode as if it is this separate thing. It kinda really is. Sure it is included with Emacs nowadays, but it really is just another Elisp application. And, yes, it is an editor first and the machine is based around concepts like buffers, but it is still a full-fledged programming environment.

        Which also means that if you don't like Emacs as an editor you can write your own. Which people have done. It makes a great Vi/Vim editor with Evil, which is far more compatible with Vim than most people imagine. I use "Meow-mode", which is another modal editor that adopts some more modern approaches from things like Helix and puts a lot of focus on improving the efficiency of Emacs keyboard macros.

        So saying that Emacs users just have a "Emacs-shaped hammer" makes as much sense as saying that all Java authors have is a big Java hammer or that Linux users can only see problems as Linux nails, or whatever.

        There is a downside to all of this, of course.

        Emacs' where-everything-is-changeable-and-accessible-all-the-time design doesn't lend itself to multi-threading, so if you have a lot of stuff going on in the "background" it can cause performance problems. The newer "native compilation" that became standard in the past few years does help a lot, but there is still a single thread deep down.

        Also, if you want to get very productive in Emacs there is a learning curve. If you are a sysadmin type who has been using Vi for decades, then going to Emacs is going to be very painful. The best bet for becoming an advanced user very quickly is to learn just enough Emacs to do basic editing and navigate info files... and then just put the effort into learning Elisp. You don't have to do this, lots of people use it for years without learning any real Elisp, but it does limit you. Of course, thanks to things like Doom Emacs you don't lose much compared to other editors/IDEs.

        Also, things like Eshell and GNU Calc are criminally underrated and misunderstood. (Hint: Eshell is not a terminal emulator and doesn't use an external shell program, so don't confuse it with things like ETerm.)

        And, hey, I can now have conversations with my editor with the help of ollama. So there is that.

    • amy-petrik-214 4 hours ago ago

      >Functional + universal data structure + homoiconic = power

      >If everything used TSV or tabular data, yes. But that is not the case. With Lisp you can always be sure.

      basic Unix kit is built around newline-separated records whose fields are separated by a delimiter, and you even get to choose your own separators rather than being locked into tab. You can use this kit, which is a common one, or a different kit entirely. But with this kit, yes, everything is indeed a table
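
      For example, /etc/passwd is just such a table with ':' as the separator (output will vary by system):

          # project the "username" and "shell" columns
          $ awk -F: '{ print $1, $7 }' /etc/passwd | head -n 2
          root /bin/sh
          daemon /usr/sbin/nologin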

      Re: emacs https://www.youtube.com/watch?v=urcL86UpqZc&t=253s

    • coliveira 9 hours ago ago

      Elisp is dependent on Emacs. It is useful to have a language that you can run without loading Emacs.

      • anthk 7 hours ago ago

        Guile supports Elisp, albeit far slower. Also, you can run

              emacs -q --script "foo.el"
        
        foo.el being

          (/ 2.0 3.0)
          (princ "Hello")
          (terpri)

  • anthk 11 hours ago ago

    On shells for Unix, this can be really useful to cut script regex matching in half:

    https://www.cca.org/mpsh/docs-08.html

  • gregw2 10 hours ago ago

    The author of the article seems unaware of awk, jq, or perl one-liners for handling JSON and other forms of data from the UNIX command line.

    • taejavu 10 hours ago ago

      The contents of the article indicate you're mistaken:

      > You really can use the best tool for the job. I've got Bash scripts, awk scripts, Python scripts, some Perl scripts. What I program in at the moment depends on my mood and practical considerations.

    • anthk 9 hours ago ago

      You often have to do dances with JSON, XML, TSV... converters before parsing the actual data.

      If you use something like Emacs, you just handle s-exps.

      • cutler 6 hours ago ago

        What about jq and family?