Tiny Core Linux: a 23 MB Linux distro with graphical desktop

(tinycorelinux.net)

282 points | by LorenDB 7 hours ago ago

126 comments

  • ifh-hn 5 hours ago ago

    I've used many of these small Linux distros. I used to have Tiny Core in a VM for different things.

    I also like SliTaz: http://slitaz.org/en, and Slax too: https://www.slax.org/

    Oh and puppy Linux, which I could never get into but was good for live CDs: https://puppylinux-woof-ce.github.io/

    And there's also Alpine too.

    • forinti 4 minutes ago ago

      I tried a handful of small distros in order to give new life to an old laptop with an AMD C-50 and 2GB of RAM.

      The most responsive one, unexpectedly, was Raspberry Pi OS.

    • LorenDB 4 hours ago ago

      Puppy was the first Linux distro I ever tried since it was such a small download (250ish MB) and I had limited bandwidth. Good memories.

    • t_mahmood 3 hours ago ago

      Wondering if it would be a good idea to set up a VM with this: set up a remote connection and IntelliJ, then have a script to clone it for a new project and connect from anywhere using a remote app.

      It would increase the size of the VM, but the template would still be smaller than a full-blown OS.

      Aside from dev containers, what are the other options? Running IntelliJ locally on my laptop is not an option.

      I SSH into my computer and work in Nvim, which is fine, but I really miss the full capability of IntelliJ.

      • Aurornis an hour ago ago

        I've experimented with several small distros for this when doing cross-platform development.

        In my experience, by the time you’re compiling and running code and installing dev dependencies on the remote machine, the size of the base OS isn’t a concern. I gained nothing from using smaller distros but lost a lot of time dealing with little issues and incompatibilities.

        This won’t win me any hacker points, but now if I need a remote graphical Linux VM I go straight for the latest Ubuntu and call it a day. Then I can get to work on my code instead of chasing my tail with all of the little quirks that appear from using less popular distros.

        The small distros have their place for specific use cases, especially automation, testing, or other things that need to scale. For one-offs where you’re already going to be installing a lot of other things and doing resource intensive work, it’s a safer bet to go with a popular full-size distro so you can focus on what matters.

        • dotancohen 17 minutes ago ago

          To really hammer this home: Alpine uses musl instead of glibc for the C standard library. This has caused me all types of trouble in unexpected places.

          I'm all for suggestions for a better base OS in small Docker containers, mostly to run nginx, php, postgres, mysql, redis, and python.

      • ornornor 2 hours ago ago

        Isn’t this what GitHub remote envs are (or whatever they call it)?

        Never really got what it’s for.

        • rovr138 2 hours ago ago

          JetBrains has Gateway which allows connecting to a remote instance and work on it.

          • t_mahmood 2 hours ago ago

            Yes, but it requires JetBrains software running on the client too.

      • silasb 2 hours ago ago

        moonlight / sunshine might work if you can't run it locally.

        It'd be best with hardwired network though.

    • hdb2 4 hours ago ago

      > I also like SliTaz

      thank you for this reminder! I had completely forgotten about SliTaz, looks like I need to check it out again!

    • sundarurfriend 3 hours ago ago

      > puppy Linux, which I could never get into

      In what way? Do you mean you didn't get the chance to use it much, or something about it you couldn't abide?

    • samtheprogram 4 hours ago ago

      Wow, Slax is still around and supports Debian now too? Thanks for sharing.

      • projektfu 4 hours ago ago

        I used to use it during the netbook era, was great for that.

    • dayeye2006 4 hours ago ago

      wondering what's your typical usage for those small distros?

      • marttt a few seconds ago ago

        I like using old hardware, and Tiny Core was my daily driver for 5+ years on a Thinkpad T42 (died recently) and Dell Mini 9 (still working). I tried other distros on those machines, but eventually always came back to TC. RAM-booting makes the system fast and quiet on that 15+ years old iron, and I loved how easy it was to hand-tailor the OS - e.g. the packages loaded during boot are simply listed in a single flat file (onboot.lst).
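        For a flavour of that flat file: onboot.lst is just one extension name per line, loaded in order at boot. A minimal desktop setup might look something like this (typical extension names for illustration, not a canonical list):

```
Xvesa.tcz
Xlibs.tcz
Xprogs.tcz
flwm_topside.tcz
wbar.tcz
aterm.tcz
```

        Hand-editing that one file is essentially the whole "package selection" story, which is a big part of why the system is so easy to hold in your head.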

        I used both the FLTK desktop (including my all-time favorite web browser, Dillo, which was fine for most sites up to about 2018 or so) and the text-only mode. TC repos are not bad at all, but building your own TC/squashfs packages will probably become second nature over time.

        I can also confirm that a handful of lengthy, long-form radio programs (a somewhat "landmark" show) for my Tiny Country's public broadcasting are produced -- and, in some cases, even recorded -- on either a Dell Mini 9 or a Thinkpad T42 and Tiny Core Linux, using the (now obsolete?) Non DAW or Reaper via Wine. It was always fun to think about this: here I am, producing/recording audio for Public Broadcasting on a 13+ year old T42 or a 10 year old Dell Mini netbook bought for 20€ and 5€ (!) respectively, whereas other folks accomplish the exact same thing with a 2000€ MacBook Pro.

        It's a nice distro for weirdos and fringe "because I can" people, I guess. Well thought out. Not very far from "a Linux that fits inside a single person's head". Full respect to the devs for their quiet consistency - no "revolutionary" updates or paradigm shifts, just keeping the system working, year after year. (FLTK in 2025? Why not? It does have its charm!) This looks to be quite similar to the maintenance philosophy of the BSDs. And, next to TC, even NetBSD feels "bloated" :) -- even though it would obviously be nice to have BSD Handbook level documentation for TC; then again, the scope/goal of the two projects is maybe too different, so no big deal. The Corebook [1] is still a good overview of the system -- no idea how up-to-date it is, though.

        All in all, an interesting distro that may "grow on you".

        1: http://www.tinycorelinux.net/book.html

      • nopakos 4 hours ago ago

        I use one of them to make an old EEE laptop a dedicated Pico-8 machine for my kids. [https://www.lexaloffle.com/pico-8.php]

      • hamdingers 2 hours ago ago

        In college I used a Slax (version 6 IIRC) SD card for schoolwork. I did my work across various junk laptops, a gaming PC, and lab computers, so it gave me consistency across all of those.

        Booting a dedicated, tiny OS with no distractions helped me focus. Plus since the home directory was a FAT32 partition, I could access all my files on any machine without having to boot. A feature I used a lot when printing assignments at the library.

      • jacquesm 2 hours ago ago

        I used DSL for the control of a homebrew 8' x 4' CNC plasmacutter.

        • ja27 an hour ago ago

          I was just thinking today how I miss my DSL (Damn Small Linux) setup. A Pentium 2 Dell laptop, booted from mini-CD, usb drive for persistence. It ran a decent "dumb" terminal, X3270, and stripped down browser (dillo I believe). Was fine for a good chunk of my work day.

          • jacquesm an hour ago ago

            I ran it on a Via single board computer, a tiny board that sipped power and was still more than beefy enough to do real time control of 3-axis stepper motors and maintain a connection to the outside world. I cheated a bit by disabling interrupts during time-critical sections; re-enabling the devices afterwards took some figuring out, but overall the system was extremely reliable. I used it to cut up to 1/4" steel sheet for the windmill (it would cut up to 1" but then the kerf would be quite ugly), as well as much thinner sheet for the laminations. The latter was quite problematic because it tended to warp up towards the cutter nozzle while cutting, and that would short out the arc. In the end we measured the voltage across the arc and then automatically had the nozzle back off in case of warping, which worked quite well; the resulting inaccuracies were very minor.

            https://jacquesmattheij.com/dscn3995.jpg
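            The arc-voltage back-off described above is essentially a small feedback loop: arc voltage falls as the warping sheet closes the gap, so the controller raises the torch. A hypothetical sketch (all names, gains and thresholds invented for illustration, not taken from the actual machine):

```python
# Torch height control via arc voltage, as described in the comment above.
# In plasma cutting, arc voltage rises roughly with torch-to-work distance,
# so a sagging voltage means the sheet has warped up toward the nozzle.
# Setpoint, gain and step limits here are made-up illustrative numbers.

def torch_height_step(arc_voltage, z, setpoint=120.0, gain=0.01, max_step=0.5):
    """Return a new Z position nudging arc voltage toward the setpoint.

    arc_voltage: measured arc voltage (V); low voltage = nozzle too close.
    z: current nozzle height (mm); larger = further from the sheet.
    """
    error = setpoint - arc_voltage          # positive -> too close -> raise
    step = max(-max_step, min(max_step, gain * error))
    return z + step

# Warped sheet close to the nozzle: voltage sags, controller raises the torch.
z = 2.0
z = torch_height_step(arc_voltage=100.0, z=z)   # error +20V -> raise by 0.2mm
```

            A real controller would filter the voltage signal and gate the loop while piercing, but the back-off idea is just this proportional nudge run at the control rate.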

      • jbstack 4 hours ago ago

        They can be nice for running low footprint VMs (e.g. in LXD / Incus) where you don't want to use a container. Alpine in particular is popular for this. The downside is there are sometimes compatibility issues where packages expect certain dependencies that Alpine doesn't provide.

  • trollbridge 6 hours ago ago

    Not to disrespect this, but it used to be entirely normal to have a GUI environment on a machine with 2MB of RAM and a 40MB disk.

    Or 128K of RAM and a 400KB disk, for that matter.

    • maccard 6 hours ago ago

      A single 1920x1080 framebuffer (which is a low resolution monitor in 2025 IMO) is 2MB. Add any compositing into the mix for multi window displays and it literally doesn’t fit in memory.
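      Back-of-the-envelope numbers for that comparison (the 2MB figure corresponds to one byte per pixel; a modern 32-bit surface is 4x that):

```python
# Framebuffer size arithmetic for the memory comparison above.
def fb_bytes(width, height, bits_per_pixel):
    return width * height * bits_per_pixel // 8

# 1920x1080 at 8 bpp (one byte per pixel) is already ~2 MB...
print(fb_bytes(1920, 1080, 8))    # 2073600 bytes
# ...and a 32 bpp surface is ~8 MB, before any compositing copies.
print(fb_bytes(1920, 1080, 32))   # 8294400 bytes
# Versus a 640x480 16-color VGA screen: 150 KB.
print(fb_bytes(640, 480, 4))      # 153600 bytes
```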

      • beagle3 27 minutes ago ago

        The Amiga 500 had high res graphics (or high color graphics … but not on the same scanline), multitasking, 15 bit sound (with a lot of work - the hardware had 4 channels of 8 bit DACs but a 6-bit volume, so …)

        In 1985, and with 512K of RAM. It was very usable for work.

        • mrits 17 minutes ago ago

          a 320x200 6bit color depth wasn't exactly a pleasure to use. I think the games could double the res in certain mode (was it called 13h?)

      • snek_case 4 hours ago ago

        I had a 386 PC with 4MB of RAM when I was a kid, and it ran Windows 3.1 with a GUI, but that also had a VGA display at 640x480, and only 16 colors (4 bits per pixel). So 153,600 bytes for the frame buffer.

        • Dwedit 4 hours ago ago

          640 * 480 / 2 = 150KB for a classic 16-color VGA screen.

      • bobmcnamara 4 hours ago ago

        It's so much fun working with systems with more pixels than ram though. Manually interleaving interrupts. What joy.

      • echoangle 5 hours ago ago

        Do you really need the framebuffer in RAM? Wouldn't that be entirely in the GPU RAM?

        • jerrythegerbil 5 hours ago ago

          To put it in GPU RAM, you need GPU drivers.

          For example, NVIDIA GPU drivers are typically around 800M-1.5G.

          That math actually goes wildly in the opposite direction for an optimization argument.

          • jsheard 5 hours ago ago

            Doesn't the UEFI firmware map a GPU framebuffer into the main address space "for free" so you can easily poke raw pixels over the bus? Then again the UEFI FB is only single-buffered, so if you rely on that in lieu of full-fat GPU drivers then you'd probably want to layer some CPU framebuffers on top anyway.

            • throwaway173738 4 hours ago ago

              Yes if you have UEFI.

            • the8472 4 hours ago ago

              well, if you poke framebuffer pixels directly you might as well do scanline racing.

              • jsheard 4 hours ago ago

                Alas, I don't think UEFI exposes vblank/hblank interrupts so you'd just have to YOLO the timing.

          • Rohansi 5 hours ago ago

            > NVIDIA GPU drivers are typically around 800M-1.5G.

            They also pack in a lot of game-specific optimizations for whatever reason. Could likely be a lot smaller without those.

            • monocasa 5 hours ago ago

              Even the open source drivers without those hacks are massive. Each type of card has its own almost 100MB of firmware that runs on the card on Nvidia.

              • jsheard 4 hours ago ago

                That's 100MB of RISC-V code, believe it or not, despite Nvidia's ARM fixation.

          • hinkley an hour ago ago

            Someone last winter was asking for help with large docker images and it came about that it was for AI pipelines. The vast majority of the image was Nvidia binaries. That was wild. Horrifying, really. WTF is going on over there?

        • maccard 3 hours ago ago

          You’re assuming a discrete GPU with separate VRAM, and only supporting hardware accelerated rendering. If you have that you almost certainly have more than 2MB of ram

        • znpy 5 hours ago ago

          Aren’t you cheating by having additional ram dedicated for gpu use exclusively? :)

        • sigwinch 5 hours ago ago

          The VGA standard supports up to 256KB of video memory.

        • ErroneousBosh 3 hours ago ago

          Computers didn't used to have GPUs back then when 150kB was a significant amount of graphics memory.

    • forinti 5 hours ago ago

      The Acorn Archimedes had the whole OS on a 512KB ROM.

      That said, OSs came with a lot less stuff then.

      • xyzzy3000 3 hours ago ago

        That's only RISC OS 2 though. RISC OS 3 was 2MB, and even 3.7 didn't have everything in ROM as Acorn had introduced the !Boot directory for softloading a large amount of 'stuff' at boot time.

      • psychoslave 5 hours ago ago

        If what's missing is stuff not needed for the specific use case, that's still a big plus.

        • pastage 5 hours ago ago

          Those were GUIs defined manually by pixel coordinates; having more flexible GUIs that could autoscale and do other snazzy things made everything really "slow" back then.

          Sure, we could go back... Maybe we should. But there is a lot of stuff we take for granted today that was not available back then.

          • xyzzy3000 an hour ago ago

            RISC OS has the concept of "OS units" which don't map directly onto pixels 1:1, and it was possible to fiddle with the ratio on the RiscPC from 1994 onwards, giving reasonably-scaled windows and icons in high-resolution modes such as 1080p.

            It's hinted at in this tutorial, but you'd have to go through the Programmer's Reference Manual for the full details: https://www.stevefryatt.org.uk/risc-os/wimp-prog/window-theo...

            RISC OS 3.5 (1994) was still 2MB in size, supplied on ROM.

          • masfuerte 3 hours ago ago

            The OS did ship with bezier vector font support. AFAIK it was the first GUI to do so.

            P.S. I should probably mention that there wasn't room in the ROM for the vector fonts; these needed to be loaded from some other medium.

    • Perz1val 6 hours ago ago

      Yea, but those platforms were not 64bit

      • monocasa 5 hours ago ago

        64-bit generally adds about 20% to the size of executables and programs compared to 32-bit on x86, so it's not that big of a change.

    • taylodl 5 hours ago ago

      When I first started using QNX back in 1987/88 it was distributed on a couple of 1.4MB floppy diskettes! And you could install a graphical desktop that was a 40KB distribution!

    • 1vuio0pswjnm7 5 hours ago ago

      I would like to have this again

      I prefer to use additional RAM and disk for data, not code.

      • oso2k 2 hours ago ago

        There’s an installation option to run apps off disk. It’s called “The Mount Mode of Operation: TCE/Install”.

      • beng-nl 4 hours ago ago

        To think that the entire distro would fit in a reasonable LLC (last level cache)..

        • bobmcnamara 4 hours ago ago

          I've been wondering if I could pull the DIMM from a running machine if everything was cached.

          Probably not due to DMA buffers. Maybe a headless machine.

          But would be funny to see.

        • veqq 2 hours ago ago

          Like the k language!

    • croes 4 hours ago ago

      With 320x240 pixels and 256 colors

    • nilamo 4 hours ago ago

      "640k ought to be enough for everyone!"

    • embedding-shape 5 hours ago ago

      > Or 128K of ram and 400 kb disk for that matter.

      Or 32K of RAM and 64KB disk for that matter.

      What's your point? That the industry and what's commonly available gets bigger?

  • gardnr 5 hours ago ago

    I love lightweight distros. QNX had a "free as in beer" distro that fit on a floppy, with Xwindows and modem drivers. After years of wrangling with Slackware CDs, it was pretty wild to boot into a fully functional system from a floppy.

    • Someone 4 hours ago ago

      > QNX had a "free as in beer" distro that fit on a floppy, with Xwindows and modem drivers.

      I don’t think that had the X Windows system. https://web.archive.org/web/19991128112050/http://www.qnx.co... and https://marc.info/?l=freebsd-chat&m=103030933111004 confirm that. It ran the Photon microGUI Windowing System (https://www.qnx.com/developers/docs/6.5.0SP1.update/com.qnx....)

    • ddalex 4 hours ago ago

      I never understood why that QNX desktop didn't catch on instantly, it was amazing!

      • Joel_Mckay 3 hours ago ago

        Licensing, and QNX missed a consumer launch window by around 17 years.

        Some businesses stick with markets they know, as non-retail customer revenue is less volatile. If you enter the consumer markets, there are always 30k irrational competitors (likely with 1000X the capital) that will go bankrupt trying to undercut the market.

        It is a decision all CEOs must make eventually. Best of luck =3

        "The Rules for Rulers: How All Leaders Stay in Power"

        https://www.youtube.com/watch?v=rStL7niR7gs

        • api 2 hours ago ago

          This also underscores my explanation for the “worse is better” phenomenon: worse is free.

          Stuff that is better designed and implemented usually costs money and comes with more restrictive licenses. It’s written by serious professionals later in their careers working full time on the project, and these are people who need to earn a living. Their employers also have to win them in a competitive market for talent. So the result is not and cannot be free (as in beer).

          But free stuff spreads faster. It’s low friction. People adopt it because of license concerns, cost, avoiding lock in, etc., and so it wins long term.

          Yes I’m kinda dissing the whole free Unix thing here. Unix is actually a minimal lowest common denominator OS with a lot of serious warts that we barely even see anymore because it’s so ubiquitous. We’ve stopped even imagining anything else. There were whole directions in systems research that were abandoned, though aspects live on usually in languages and runtimes like Java, Go, WASM, and the CLR.

          Also note that the inverse is not true. I’m not saying that paid is always better. What I’m saying is that worse is free and better was usually paid, but some crap was also paid. Very little of the better stuff was free.

          • rzerowan an hour ago ago

            There is also the option, with well-written professional software, where the strategy is to grab as much market share as possible by allowing proliferation of the product to lock up market/mindshare, and relegate the $ enforcement for later - successfully used by MS Windows for the longest time, and by Photoshop.

            Conversely, I remember Maya or Autodesk used to have a bounty program for whoever would turn in people using unlicensed/cracked versions of their product. Meanwhile Blender (from a commercial past) kept its free nature and has consistently grown in popularity and quality without any such overtures.

            Of course nowadays with SaaS everything gets segmented into weird verticals and revenue upsells are across the board, with the first hit usually also being free.

          • Joel_Mckay 7 minutes ago ago

            The only reason FOSS sometimes works is that the replication cost is almost $0.

            In *nix, most users had a rational self-interest to improve the platform. "All software is terrible, but some of it is useful." =3

      • knowitnone3 an hour ago ago

        because it's not free and their aim was at developers and the embedded space. How many people have even heard of QNX?

    • anyfoo 4 hours ago ago

      That famous QNX boot disk was the first thing I thought of when reading the title as well.

      • taylodl 4 hours ago ago

        Me too! And the GUI was only a 40KB distribution and was waaaaaay better than Windows 3.0!

        • jacquesm 2 hours ago ago

          And incredibly responsive compared to the operating systems of even today. Imagine that: 30 years of progress to end up behind where we were. Human input should always run at the highest priority in the system, not the lowest.

    • knowitnone3 an hour ago ago

      yeah but what can you do with free QNX? With tinycore, you can install many packages. What packages exist for QNX?

  • shiftpgdn 6 hours ago ago

    This is cool. My first intro to a practical application of Linux in the early 2000s was using Damn Small Linux to recover files off of cooked Windows machines. I looked up the project the other day while reminiscing and thought it would be interesting if someone took a real shot at reviving the spirit of the project.

  • noufalibrahim 4 hours ago ago

    In around 2002, I got my hands on an old 386 which I was planning to use for teaching myself things. I was able to breathe life into it using MicroLinux. Two superformatted 1.44MB floppy disks and the thing booted. Basic kernel, 16-colour X display, C compiler and editor.

    I don't know if there are any other options for older machines other than stripped down Linux distros.

    • dpflug 4 hours ago ago
    • Romario77 4 hours ago ago

      I mean - DOS or its equivalents still exist, and for older computers you will probably be able to find drivers.

  • veganjay 6 hours ago ago

    I have an older laptop with a 32-bit processor and found that TinyCoreLinux runs well on it. It has its own package manager that was easy to learn. This distro can be handy in these niche situations.

    • bdbdbdb 5 hours ago ago

      Similar situation here: I have some old 32-bit machines that I'm turning into writer decks. Most Linux distros have left 32-bit behind, so you can't just use Debian or Ubuntu, and a lot of distros that aim to run on lower-end hardware are Ubuntu derivatives.

      • Narishma 4 hours ago ago

        Same situation but I'm using NetBSD instead. I'm betting it'll still be supporting 32-bit x86 long after the linux kernel drops it.

        • jacquesm 2 hours ago ago

          Personally, I think that dropping 32 bit support for Linux is a mistake. There is a vast number of people in developing countries on 32 bit platforms as well as many low cost embedded platforms and this move feels more than a little insensitive.

  • supportengineer 2 hours ago ago
  • devsda 3 hours ago ago

    I've used it around early 2010s as a live cd to fix partitions etc. Definitely recommend as a lightweight distro.

    Was a little tricky to install on disk and even on disk it behaved mostly like a live cd and file changes had to be committed to disk IIRC.

    Hope they improved the experience now.

  • hypeatei 6 hours ago ago

    The site doesn't have HTTPS and there doesn't seem to be any mention of signatures on the downloads page. Any way to check it hasn't been MITM'd?

    • Y_Y 6 hours ago ago
    • lysace 6 hours ago ago

      Ideas to decrease risk of MITM:

      Download from at least one more location (like some AWS/GCP instance) and checksum.

      Download from the Internet Archive and checksum:

      https://web.archive.org/web/20250000000000*/http://www.tinyc...
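      The cross-check could be as simple as comparing digests of the two downloads. A sketch, with local placeholder files standing in for the two fetches (the real commands would be curl/wget against the two mirrors):

```shell
# Compare checksums of the same ISO fetched from two independent locations.
# A MITM would have to compromise both paths to go unnoticed.
set -eu

printf 'fake-iso-contents' > download_a.iso   # stand-in: e.g. tinycorelinux.net
printf 'fake-iso-contents' > download_b.iso   # stand-in: e.g. Internet Archive

a=$(sha256sum download_a.iso | cut -d' ' -f1)
b=$(sha256sum download_b.iso | cut -d' ' -f1)

if [ "$a" = "$b" ]; then
    echo "checksums match: $a"
else
    echo "MISMATCH: possible tampering or corruption" >&2
    exit 1
fi
```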

    • firesteelrain 6 hours ago ago

      Not foolproof, but you could compute an MD5 or SHA256 after downloading.

      • hypeatei 6 hours ago ago

        And compare it against what?

        EDIT: nevermind, I see that it has the md5 in a text file here: http://www.tinycorelinux.net/16.x/x86/release/

        • maccard 6 hours ago ago

          Which is served from the same insecure domain. If the download is compromised you should assume the hash from here is too.

          • hypeatei 5 hours ago ago

            An integrity check is better than nothing, but yes it says nothing about its authenticity.

            • firesteelrain 5 hours ago ago

              You can use this site

              https://distro.ibiblio.org/tinycorelinux/downloads.html

              And all the files are here

              https://distro.ibiblio.org/tinycorelinux/16.x/x86/release/

              Under a HTTPS connection. I am not at a terminal to check the cert with OpenSSL.

              I don’t see any way to check the hash OOB

              Also this same thing came up a few years ago

              https://www.linuxquestions.org/questions/linux-newbie-8/reli...

              • maccard 5 hours ago ago

                Is that actually tiny core? It’s _likely_ it is, but that’s not good enough.

                > this same thing came up a few years ago

                Honestly, that makes this inexcusable. There are numerous SSL providers available for free, and if that’s antithetical to them, they can use a self signed certificate and provide an alternative method of verification (e.g. via mailing list). The fact they don’t take this seriously means there is 0 chance I would install it!

                Honestly, this is a great use for a blockchain…

                • firesteelrain 4 hours ago ago

                  I usually only install on like a Raspberry Pi or VM for these toy distros

                  Are any distros using blockchain for this?

                  I am used to using code signing with HSMs

                  • maccard 3 hours ago ago

                    I’d install it as a VM, maybe.

                    > are any distros using blockchain

                    I don’t think so, but it’s always struck me as a good idea - it’s actual decentralised verification of a value that can be confirmed by multiple people independently, without trusting anything other than that the signing key is secure.

                    > I am used to code signing with HSMs

                    Me too, but that requires distributing the public key securely which… is exactly where we started this!

            • embedding-shape 5 hours ago ago

              An integrity check where both the thing you're checking and the hash you're checking it against come from the same source is literally no better than nothing if you're trying to prevent downloading compromised software. It'd flag corrupted downloads at least, so that's cool, but for security purposes the hash for an artifact has to be served OOB.

              • uecker 5 hours ago ago

                It is better than nothing if you note it down. You can compare it later, if somebody (or you) was compromised, to see whether you got the same download as everyone else.
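                A sketch of that note-it-down idea: keep an append-only log of digests at download time, then re-verify later or compare notes with other users (filenames illustrative):

```shell
# Record the digest of what you downloaded, so you can audit after the fact
# if the source later turns out to have been compromised.
set -eu

printf 'tinycore-release' > TinyCore-current.iso     # stand-in download

# Note it down at download time...
sha256sum TinyCore-current.iso >> download-hashes.log

# ...and re-verify later against your own log.
if sha256sum -c download-hashes.log; then
    echo "still matches what I originally downloaded"
fi
```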

                • maccard 4 hours ago ago

                  Sorry but this is nonsense. It’s better than nothing if you proactively log the hashes before you need them, but it’s actively harmful for anyone who downloads it after it’s compromised.

                  • uecker 2 hours ago ago

                    "It is better than nothing" is literally what I said. But thinking about it more, I actually think it is quite useful. Any kind of signature or out-of-band hash is also only good if the source is not compromised, but knowing after the fact whether you are affected or not is extremely valuable.

            • maccard 5 hours ago ago

              It’s not better than nothing - it’s arguably worse.

          • firesteelrain 5 hours ago ago

            There is a secure domain to download from as a mirror. For extra high security, the hash should be delivered OOB like on a mailing list but it isn’t

            • maccard 3 hours ago ago

                Where is that mirror linked from? If it's linked from the HTTP site, that's no better than downloading it from the website in the first place.

              > for extra high security,

              No, sending the hash on a mailing list and delivering downloads over https is the _bare minimum_ of security in this day and age.

    • throwaway984393 5 hours ago ago

      Because there's big demand to mitm users of an extremely small and limited distribution from 2008?

  • oso2k 3 hours ago ago

    I love Tiny Core Linux for use cases where I need fast boot times or have few resources. Testing old PCs, Pi Zero and Pi Zero 2W are great use cases.

    • jacquesm 2 hours ago ago

      Thank you for that comment, I did not realize Pi Zero and Pi Zero 2W worked with TCL. I am brewing an application for that environment right now so this may just save the day and make my life a lot easier. Have you tried video support for the Pi specific cams under TCL?

  • haunter 4 hours ago ago

    Another small one is the xwoaf (X Windows On A Floppy) rebuild project 4.0 https://web.archive.org/web/20240901115514/https://pupngo.dk...

    Showcase video https://www.youtube.com/watch?v=8or3ehc5YDo

    iso https://web.archive.org/web/20240901115514/https://pupngo.dk...

    2.1mb, 2.2.26 kernel

    >The forth version of xwoaf-rebuild is containing a lot of applications contained in only two binaries: busybox and mcb_xawplus. You get xcalc, xcalendar, xfilemanager, xminesweep, chimera, xed, xsetroot, xcmd, xinit, menu, jwm, desklaunch, rxvt, xtet42, torsmo, djpeg, xban2, text2pdf, Xvesa, xsnap, xmessage, xvl, xtmix, pupslock, xautolock and minimp3 via mcb_xawplus. And you get ash, basename, bunzip2, busybox, bzcat, cat, chgrp, chmod, chown, chroot, clear, cp, cut, date, dd, df, dirname, dmesg, du, echo, env, extlinux, false, fdisk, fgrep, find, free, getty, grep, gunzip, gzip, halt, head, hostname, id, ifconfig, init, insmod, kill, killall, klogd, ln, loadkmap, logger, login, losetup, ls, lsmod, lzmacat, mesg, mkdir, mke2fs, mkfs.ext2, mkfs.ext3, mknod, mkswap, mount, mv, nslookup, openvt, passwd, ping, poweroff, pr, ps, pwd, readlink, reboot, reset, rm, rmdir, rmmod, route, sed, sh, sleep, sort, swapoff, swapon, sync, syslogd, tail, tar, test, top, touch, tr, true, tty, udhcpc, umount, uname, uncompress, unlzma, unzip, uptime, wc, which, whoami, yes, zcat via busybox. On top you get extensive help system, install scripts, mount scripts, configure scripts etc.

  • roscas 2 hours ago ago

    It is so tiny that it is http only because https was too big...

  • rcarmo 5 hours ago ago

    This would be perfect if it had an old Mac OS 7 Platinum-like look and window shading.

  • slim an hour ago ago

    Tiny Core also runs from ramdisk, uses a packaging system based on tarballs mounted in a fusefs, and can be installed on a DOS-formatted USB key. It also has a subdistro named dCore[1] which uses Debian packages (which it unpacks and mounts in the fusefs), so you get access to the ~70K packages of Debian.
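    For the curious, extension loading amounts to loop-mounting an extension image under /tmp/tcloop/<name> and symlinking its files into the live filesystem. A rough sketch of that install-by-symlink idea, faking the mounted image with a plain directory since loop mounts need root (package name and paths illustrative):

```shell
# Simulate Tiny Core style extension loading: a read-only image mounted at
# /tmp/tcloop/<name> gets its files symlinked into the system tree, so
# "installing" touches no package database and "uninstalling" is unmounting.
set -eu
root=$(mktemp -d)

# Pretend this directory is the mounted nano.tcz image:
mkdir -p "$root/tmp/tcloop/nano/usr/local/bin"
echo '#!/bin/sh' > "$root/tmp/tcloop/nano/usr/local/bin/nano"
chmod +x "$root/tmp/tcloop/nano/usr/local/bin/nano"

# "Install" = symlink the extension's files into the system tree:
mkdir -p "$root/usr/local/bin"
ln -s "$root/tmp/tcloop/nano/usr/local/bin/nano" "$root/usr/local/bin/nano"

ls -l "$root/usr/local/bin/nano"
```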

    Its documentation is a free book: http://www.tinycorelinux.net/book.html

    [1] https://wiki.tinycorelinux.net/doku.php?id=dcore:welcome

  • snvzz 2 hours ago ago

    For unknown reasons, tinycorelinux's website is geoblocked in Japan.

  • mannycalavera42 2 hours ago ago

    for a moment I thought about a Corel Linux revamp :)

  • bflesch 6 hours ago ago

    Looks really nice, I like the idea.

    But can they please empower a user interface designer to simply improve the margins and paddings of their interface? With a bunch of small improvements it would look significantly better. Just fix the spacing between buttons and borders and other UI elements.

    • wild_egg 6 hours ago ago

      Modern UX trends are a scourge of excessive whitespace and low information density that get in the way of actually accomplishing tasks.

      Any project that rejects those trends gets bonus points in my book.

      • linguae 5 hours ago ago

        I sympathize, but I feel compelled to point out that the parent didn’t say that the interface had to look like a contemporary desktop.

        In my opinion, the Tiny Core Linux GUI could use some more refinement. It seems inspired by 90s interfaces, but when compared to the interfaces of the classic Mac OS, Windows 95, OS/2 Warp, and BeOS, there’s more work to be done regarding the fit-and-finish of the UI, judging by the screenshots.

        To be fair, I assume this is a hobbyist open source project where the contributors spend time as they see fit. I don’t want to be too harsh. Fit-and-finish is challenging; not even Steve Jobs-era Apple with all of its resources got Aqua right the first time when it unveiled the Mac OS X Public Beta in 2000. Massive changes were made between the beta and Mac OS X 10.0, and Aqua kept getting refined with each successive version, with the most refined version, in my opinion, being Mac OS X 10.4 Tiger, nearly five years after the public beta.

        • oso2k 2 hours ago ago

          With CorePlus, you have the choice of some 10 GUI environments. I prefer openbox or jwm.

      • bflesch 5 hours ago ago

        If you look at the screenshots it immediately jumps out that it is unpolished: the spacings are all over the place, the window maximize/minimize/close buttons have different widths and weird margins.

        I thought that would be immediately clear to the HN crowd but I might have overestimated your aesthetic senses.

      • Perz1val 5 hours ago ago

        Look at screenshots -> wallpaper window. The spacing between elements is all over the place and it simply looks like shit. Seeing this I'm having doubts if the team who did this is competent at all

        • bflesch 5 hours ago ago

          Exactly.

          I know that not everybody spent 10 years fiddling with CSS, so I can understand why a project might have a skill gap with regard to aesthetics. I'm not trying to judge their overall competence; I just wanted to say that there are so many quick wins in the design that it hurts me a bit to see it. And due to the nature of open source projects, I was talking about "empowering" a designer to improve it, because oftentimes you submit a PR for aesthetic improvements and then notice that the project leaders don't care about these things, which is sad.

      • delfinom 5 hours ago ago

        There is a balance.

        Too much information density is also disorienting, if not stressing. The biggest problem is finding that balance between multiple kinds of users and even individuals.

    • pbhjpbhj 4 hours ago ago

      This just looks like a standard _old_ *nix project. I've used Tiny, a couple of decades ago IIRC, from a magazine cover CD.

      Note the sign-off date of 2008, the lack of very simple to apply mobile CSS, and no HTTPS to secure the downloads (if it had it, it would probably be SSL).

      This speaks to me of a project that's 'good enough', or abandoned, for/by those who made it. Left out to pasture as 'community dev submissions accepted'.

      I've not bothered to look, but wouldn't surprise me if the UI is hardcoded in assembly and a complete ballache to try and change.

    • grim_io 6 hours ago ago

      One could argue that visible borders are a feature, not a bug.

      If you are trying to maximize for accessibility, that is.

      • bflesch 5 hours ago ago

        It's not about the damn borders it is about the spacing between the buttons and other UI elements as you can see in the screenshot. I don't want them to introduce some shitty modern design, just fix the spacing so it doesn't immediately jump out as odd and unpolished.

      • egormakarov 6 hours ago ago

        Pretty sure it was not about the presence of visible borders, but about the missing spacing between borders and buttons. That's the case in some screenshots, but not others. It's not like this UI has some high-density philosophy; it's just very inconsistent.

  • theanonymousone 4 hours ago ago

    Does it run docker?

    • oso2k 2 hours ago ago

      With some modifications, yes. Boot2docker and boot2podman were based on tinycorelinux.

  • anthk 3 hours ago ago

    https://luxferre.top http://t3x.org

    All of the minilanguages listed there will run on TC even with 32MB of RAM.

    On TC, set IceWM as the default WM with opaque moving/resizing disabled, and get rid of that horrible dock.

  • nine_k 3 hours ago ago

    /* On the website, body { font-size: 70%; } — why? To drive home the idea that it's tiny? The default font size is normally set to the value comfortable for the user, would be great to respect it. */