Static sites enable a good time travel experience

(hamatti.org)

158 points | by speckx 14 hours ago

38 comments

  • crazygringo 10 hours ago

    I dunno -- generally speaking, the Wayback Machine is a much better time travel experience than trying to recover a website from an old git commit.

    Especially since it's not limited to only sites I've created...

    And in this particular case, all the creator was looking for was old badge images, and they'd generally be in an images directory somewhere no matter whether the site was static or dynamic.

  • Amorymeltzer 12 hours ago

    Not strictly the topic, but don't miss or sleep on the blog (self-)gamification links[1][2], excellent whimsy.

    1: https://hamatti.org/posts/i-gamified-my-own-blog/

    2: https://varunbarad.com/blog/blogging-achievements

  • dang 8 hours ago

    Can I mention HN's custom time travel experience?

    https://news.ycombinator.com/front?day=2025-08-31

    (available via 'past' in the topbar)

    • coldfoundry 5 hours ago

      Wow, to be honest I never knew that was there - just got back from the 2017s. Such a cool feature to support on-site without going through third parties; more sites should have official archival support like this!

      Thanks for the share.

  • codazoda 9 hours ago

    I have a bit of an internal struggle here. I use a site generator too, but I keep coming back to the question: should I? I recently wrote about why I’m writing pure HTML and CSS in 2025.

    https://joeldare.com/why-im-writing-pure-html-and-css-in-202...

    • jszymborski 8 hours ago

      I'm not sure how using a static site generator would run counter to any of those points. You can simply generate the same website that you've written by hand.

      EDIT: Well perhaps the "build steps" one, but building my Hugo site just involves me running "hugo" in my directory.
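
      For anyone curious, that whole "build step" is roughly this, assuming a stock Hugo project layout:

          $ hugo            # render the site into ./public
          $ hugo server     # optional: local preview with live reload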

    • EGreg 8 hours ago

      The answer is no. You can do this instead: https://community.qbix.com/t/qbix-websites-loading-quickly/2...

      See the “Static Sites” section. And realize that a CDN caching your pages essentially makes your site “static”.

  • 3036e4 13 hours ago

    Plain text files and version control win again.

    • DeepYogurt 12 hours ago

      KISS

      • marcosdumay 12 hours ago

        Version control isn't really "simple". That said, neither is plain text nowadays.

        It may make sense to swap the "S" there for "standard".

        • jszymborski 11 hours ago

          Well, it's "relatively simple", as the alternatives either demand a superset of what static sites require or are replacements that are more complex.

        • naniwaduni 7 hours ago

          Keep it simple, standard.

        • sneak 4 hours ago

          Once you have the main mental model, you realize that git (the main/core features, not fancy stuff like submodules or worktrees etc) is basically the simplest thing that is fit for purpose.

        • ars 5 hours ago

          Version control can be very simple. Not everything requires the full power of git.

          Use ci from RCS, and that's about it. It keeps a single history file in the same directory as the file; no need to check out files, or keep track of, well, anything. Just check in new versions as you make them.

          It's not the right answer in many cases (especially if you need multiple files), but for a single file? The simplicity can't be beat.
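
          A minimal sketch, assuming the classic RCS tools are installed:

              $ ci -l index.html        # check in a revision; -l keeps the file editable
              $ co -p -r1.2 index.html  # print an old revision to stdout, leaving the file alone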

  • cosmicgadget 13 hours ago

    From the title I thought this was about taking trips down memory lane or seeing historical posts by others. But it seems to be more about seeing design (rather than content) from one's own site in years past. I hope I'm not the only one who would prefer not to see my embarrassing old designs and would rather see my archived content rendered in the current (least cringe) template.

  • Liftyee 12 hours ago

    My initial thought was that the title was referring to web archive services like the Wayback Machine or archive.is, but the actual topic was equally relevant. I think time travel should work as long as all content is archived / checked in: no reliance on external services (is this the definition of "static site"?)

    • 01HNNWZ0MV43FF 12 hours ago

      Static site also means no backend. Each request just serves a file unmodified from disk.
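
      In other words, the entire "backend" can be a plain file server, e.g.:

          $ python3 -m http.server 8000   # serve the current directory verbatim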

  • jesprenj 4 hours ago

    If your website is one static file you can use vim undo history to go back years in the past.
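
    A sketch of the setup that makes this work (undo history is lost on exit unless you persist it; the paths are just a convention):

        $ mkdir -p ~/.vim/undo
        $ printf 'set undofile\nset undodir=~/.vim/undo\n' >> ~/.vimrc
        $ vim index.html    # then :earlier 365d walks back roughly a year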

  • sharps_xp 12 hours ago

    is there a decentralized org to ensure that all of the JS/CSS we use today remains backward compatible decades from now? or are we just at the whim of these browser vendors?

    • mananaysiempre 12 hours ago

      In part, the W3C is supposed to serve this role, so to the extent that WHATWG controls the web platform, yes, yes we are. Part of the problem is that it’s not clear who exactly is supposed to participate in that hypothetical “decentralized” organization—browser vendors do consult website operators, but on occasion[1] it becomes clear that they only care about the largest ones, whose interests are naturally quite different from those of the long tail (to the extent that it still exists these days). And this situation is in part natural, economically speaking: of course the largest operators have the most resources to participate in that kind of thing, so money will inevitably end up being the largest determinant of influence.

      [1] https://github.com/w3c/webauthn/issues/1255

      • rafram 12 hours ago

        That's an unfair characterization. WHATWG doesn't version the spec like W3C did, but it's no less backwards compatible. See their FAQ [1], or just load the 1996 Space Jam site [2] in your modern browser.

        [1]: https://whatwg.org/faq#change-at-any-time

        [2]: https://www.spacejam.com/1996/

        • mananaysiempre 12 hours ago

          Thus far, WHATWG has mostly behaved benevolently, true. But the fact that they have stayed benevolent so far doesn’t mean we’re going to be any less at their mercy the moment they decide not to be. As the recent XSLT discussion aptly demonstrates, both browser vendors and unaffiliated people are quite willing to do the “pay up or shut up” thing for old features, which is of course completely antithetical to backwards compatibility.

    • AgentME 9 hours ago

      The browsers and standards groups do prioritize backwards compatibility and have done a very good job at it. The only real web compatibility breakages I know of have to do with pre-standardized features or third-party plugins like Flash.

    • hypeatei 12 hours ago

      The engines are open source, no? I don't think we should break websites on purpose but keeping everything backwards compatible does seem untenable for decades to come.

    • ars 5 hours ago

      LibreOffice can open AppleWorks files from 1984.

      And if it couldn't, you could run these old programs in a VM, and I expect that to remain possible essentially forever, so I see no future problem viewing these files in an old browser.

  • sedatk 11 hours ago

    Why do you need such a granular capability, especially when the Internet Archive exists? What purpose does it serve?

    • plorkyeran 11 hours ago

      > I mentioned this to Varun who asked if I had any screenshots of what it looked like on my website. My initial answer was “no”, then I looked at Wayback Machine but there were not pictures of the badges.

    • zoul 11 hours ago

      A safe rollback on a Friday afternoon is a nice thing for sure :)

  • curtisblaine 8 hours ago

    If it builds. The author mentions they use Eleventy, so there's always a possibility that current Node/npm versions won't work with some ancient dependencies or with old-style peer dependencies. Then it's a long bisection with nvm until you get the right version combo.
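
    Something like this usually gets an old Eleventy build going again, assuming a package-lock.json was checked in (the version numbers are placeholders):

        $ nvm install 12 && nvm use 12   # try the Node that was current at the commit date
        $ npm ci                         # install the exact versions from package-lock.json
        $ npx @11ty/eleventy             # rebuild the site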

  • luxuryballs 12 hours ago

    interesting idea: a browser plugin that caches and uploads the final HTML/CSS of a page, with some logic to avoid duplicates and “extras”. It could be a client-side distributed archival system that captures the historical web as always-static content

  • algo_lover 12 hours ago

    I don't get this. I can check out an old commit of my dynamic, server-rendered blog written in Go and do the same thing.

    Sure I won't have the actual content, but I can see the pages and designs with dummy data. But then I can also load up one of several backups of the sqlite file and most likely everything will still work.
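
    A sketch of that rollback, with hypothetical commit and backup names:

        $ git checkout design-2019                   # hypothetical tag for the old design
        $ cp backups/blog-2019.sqlite blog.sqlite    # restore a matching database snapshot
        $ go run .                                   # serve the old site locally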

    • laurentlb 11 hours ago

      Building old code and getting the same result is not always trivial to do.

      Potential issues:

      - If you have content in a database, are you able to restore it to any point in time?

      - If your code has dependencies, were they all checked into the repository? If not, can you still find the same versions you were using?

      - What about your tools, compilers, etc.? Sure, some of them, like Go, are pretty good with backward compatibility, but not all of them. Maybe you used a beta version of a tool? You might need to find the same versions of the tools you were using. By the way, did you keep track of those versions, or do you need to guess? (A low-tech sketch follows below.)

      Even with static websites, you can get into trouble if you referenced e.g. a JS file stored somewhere else. But the point is: going back in time is often much easier with static websites.

      (Related topic: reproducible builds.)
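
      One low-tech hedge against the tool-version questions above is to record the versions at commit time (the file name here is just a convention):

          $ go version > .tool-versions.txt        # snapshot the toolchain
          $ node --version >> .tool-versions.txt
          $ git add .tool-versions.txt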

    • inetknght 12 hours ago

      > Sure I won't have the actual content, but I can see the pages and designs with dummy data. But then I can also load up one of several backups of the sqlite file and...

      ... so it's useless to anyone except you, then?

      • plouffy 10 hours ago

        Does it really need an /s