Paper Computing (great name!) is something I've been thinking about a lot as a way to help my kids benefit from tech without exposing them to the brain-melting addiction of screens. I sacrificed a few crazy nights of sleep to build a Paper Computer Agent prototype for a recent Gemini hackathon (only to hit submission issues, disappointingly, right before the deadline). My kids loved it and keep asking me to set it up permanently for them.
It's essentially a poor man's hacked-up DynamicLand - projector, camera, live agent. There are so many things you could do if you had a strong working baseline for this. My kids used it to create stories, learn how to draw various things, and watch safe videos they could hold in their hand.
There's something weirdly compelling and delightfully physical about holding a piece of paper that shows a live rocket launch, with the flames streaming down the page. It could also project targeted pieces of text, such as inline homework advice, or graphs next to data. It doesn't take long to imagine any number of other fun use cases, and it feels a lot more freeing and inspiring than keeping everything bound to a screen.
Github - https://github.com/Pugio/Orly (hacky minimal prototype that did the thing)
Video Pitch - https://youtu.be/-9l1x7GnmxU (filmed an hour before the deadline on an old phone with no sleep)
R.I.P. to the Amazon Glow video calling device, killed before AI went mainstream. I'd love to hear how to get root on one: it's exactly the hardware your project could use most effectively, and an amazing interface for playing games remotely with the grandparents.
https://www.theverge.com/2022/10/20/23415167/amazon-glow-sup...
This is really cool; I'd love to use something like this for my kids too. Maybe I'll try your project when I have some more free time. I'd love to contribute, but I'm not very skilled in Python.
If you don't mind me asking, what hardware did you use? Especially for the projector: I'm guessing it needs quite a strong bulb to be visible in broad daylight?
I love how creatively AI is integrated here. Amazing.
The Folk Computer people have some incredible work they've been doing too; it's definitely worth a look for anyone interested. Their integration of a novel display technology is really sweet as well, allowing for good visibility in a variety of conditions, which I love.
https://folkcomputer.substack.com/
https://folk.computer/
https://news.ycombinator.com/item?id=39241472 (165 points, 2 years ago, 53 comments)
I was pretty excited when I saw the premise behind what Apple was doing with VisionPro because I figured they were steering towards this, but it seems they’ve looked away and don’t really care about going deeper into this direction.
I asked at some point if I could theoretically develop an application that could literally be controlled by a Fisher-Price toy, like a little plastic car console or something. Or even potentially have a real keyboard that isn't connected to anything, but the VisionPro can just see my keypresses and apply them as if I were actually pressing something. The former is possible, though surprisingly difficult; the latter isn't really there yet (it requires too much precision, and latency is worse than just using a Bluetooth keyboard).
Either way, the idea of a computing environment that meshes with and directly interacts with the real, physical objects around you is an interesting premise I’d like to see taken further with “Spatial Computing”/AR. Scanning and recording things I’m writing on a whiteboard or in a notebook by recognizing that I’ve picked up a pen and am writing something down would just be getting started.
Of course, if we’re ambiently recording everything you’re doing there will need to be some kind of regular process/interface to “sift” everything at the end of the day. This is the core of the Getting Things Done methodology. Everything goes into a big “intake list” and then you do periodic check-ins throughout the day where you review the list and decide whether to move those to a series of sub-lists to “do this now,” “do this soon,” or “do this someday.”
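A minimal sketch of that sift, using the bucket names from the GTD description above (the `decide()` hook and the intake items are made up; in practice a human makes that call at each check-in):

```python
# Minimal sketch of the GTD "sift": everything lands in one intake
# list, and a periodic review empties it into now/soon/someday
# buckets. decide() is a stand-in for the human judgment call.

def sift(intake, decide):
    buckets = {"now": [], "soon": [], "someday": []}
    while intake:
        item = intake.pop(0)
        buckets[decide(item)].append(item)
    return buckets

# Hypothetical captured items from the day's ambient recording
intake = ["reply to Sam", "fix the gate", "learn Forth"]
buckets = sift(intake, lambda item: "now" if "reply" in item else "someday")
print(buckets["now"])  # ['reply to Sam']
```

The point of the loop is that the intake list always ends the review empty, which is the property GTD relies on.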
> Now that we have actually good AI, I have this vision of a form of computing that doesn’t involve me using a computer so much. Imagine you had the day’s emails to go through. It would be nice if the ones that required a simple decision could be dispatched with a few pen-strokes: I could write down a date that would work for that meeting; check a box to accept that invitation; etc.
This reminds me of those predictions from 1900 about the year 2000, when they thought we'd all live in enormous skyscrapers and get around by flying cars. Instead we moved out to suburbs because improved logistics systems meant we could buy things from suburban shopping centres rather than having to go into city centres. Revolution, not evolution.
Surely the real advantage of an 'actually good AI' would be getting the AI to do the work itself, rather than just allowing the work to be done in a format with which the human is more comfortable. The underlying problem is that there are too many things vying for our attention.
Don’t think of it as work, but as what a human would want to spend time doing. In https://news.ycombinator.com/item?id=47788736, a commenter describes how his kids love using the “paper computer” prototype he built. They are not working; they are playing and learning and experimenting and creating. Things that humans like to do.
To some degree, that's what one had w/ Apple's Newton Intelligence on the MessagePad --- it was "just" fancy pattern-matching, but mostly it worked, and the UI and implementation were quite good, and it kept me organized all through college.
Mentioning the Newton may be anathema to the discussion (it seems to bring up the usual jokes, etc.) but I was thinking too that the Macintosh (or the Xerox Alto if you like, or the Mother of All Demos) tried to move us in that direction by "skeuomorphising" the computer interface—make it look like the more familiar "real world". The Newton pushed further. It seems to have been on the mind of at least a few people at Apple.
It sounds like the author is on the same track, has the same mindset. And I like it.
I am also reminded of the Young Lady's Illustrated Primer in Neal Stephenson's Diamond Age. It is not exactly what the author describes but, if the book had a computer backend, it also divorces the user from the computer interface we have come to know. Perhaps for me some future (better) local LLM within such a book is what I want. A kind of companion I ask questions of…
(I mean I suppose I should just do what was posted a day or two ago to the Ask HN and put a local LLM behind a messaging app, and I could just converse with it wherever I am. Tangent: I am kind of fascinated by the idea of a personal LLM that has context stretching back to my earliest days—were I to have started conversing with this synthetic companion at a young age. Imagine the lifetime of context where the LLM knows my habits, how I've changed over the years. I suppose this is nightmare fuel for a number of you.)
Other copies of the Primer do have a computer backend.
There are basically three versions of the book:
1) The ones developed for a few rich kids. These are partially automated, but backed by gig workers. They get what we might call (if you'll pardon the term) "Actually Indians" AI (augmented by the regular type).
2) The one our protagonist gets. This is one of the books from #1, but the distinctive feature here is that an early gig worker the protagonist draws (the book calls these "'ractors" when they're doing this kind of work) takes a special interest in her and intentionally keeps drawing jobs for her over a period of several years. This continuity and personal care from a single real person is what sets it apart and makes her experience so excellent.
3) The mass-market version that's entirely computerized, no human touch. This version brainwashes a fuckload of kids into becoming the "mouse army", and that's really all we see as far as what it can do: something really bad (if convenient for our protagonist).
The message of the book is 100% the opposite of "automated learning-books are amazing". It's "tech for learning sucks ass and/or is outright dangerous if you rely only on it, and a real human tutor who cares about a kid is the best thing around even in a crazy high-tech future-world".
Charles de Lint had an intelligent book in his fantasy novel _Jack the Giant Killer_ (or maybe its sequel) --- I've tried doing the conversing/chatting thing w/ an LLM a couple of times, but always got annoyed more than amused.
What's the point? LLMs tend towards the mean/average --- I want better in my life and interactions --- it's useful when I need an example DXF or similar rote task, but my current project is a woodworking joint which has no precedent.
Yes, the skeuomorphism angle is an interesting one, and one which is surprisingly absent in the _ur_ description of a stylus-equipped computing device, the slates/tablets from Larry Niven and Jerry Pournelle's _The Mote in God's Eye_ --- this sort of thing seems to be coming back around --- a recent Kindle Scribe firmware update added shape recognition. I'd be _very_ pleased if my new Kindle Scribe Colorsoft could fully become a replacement for my Newton....
I think you're right that the use case for an LLM is still rather niche. It's perhaps still worth exploring though as they may well improve over time.
Regardless, I have still found them useful. Diagnosing the problems with a car is maybe an esoteric example but is still useful.
For many months now I have been working through learning about and implementing a hobbyist analog computer with LLM as engineer-confidant. I already knew the basics of op-amps and analog computing but was surprised at a lot of the new things I discovered only by way of the LLM saying (for example), "Hey, here's a nice way to get your reference voltages…" and the project benefited from it (and I learned about a new chip/device/technique).
Yes, they do work well as a stand-in for the "competent technician with skill in the pertinent art and fully aware of all prior art" (to use wording akin to the patent-application standard).
But it's only going to let you avail yourself of prior art and techniques.
Because it was a profit making venture for car companies. Suburbs are horrifically inefficient, they survive by the twisted "communism" of cannibalizing the dense urban tax bases to support the sprawling, expensive to service and maintain, isolating flatlands.
Not so fast: I would say that the move to suburbs was initially driven by a thirst for homeownership with luxurious lawns, coupled with electric streetcars and other rail-based transport.
It was only later that the almighty combustion engine and tire companies forcibly replaced streetcars with buses and trucks, that cars began their hegemonic domination of suburbia. The National Highway System decrees didn't hurt, either, but highways were built in the USA with an ulterior motive of national defense.
It also happened during a period when cities were polluted, noisy, and middle-class housing was largely cramped tenements. Basically all of this has been or is being mitigated these days. City-center housing now looks more like luxury loft living than tenements (though this gives us a big problem with ‘missing middle’ housing: there's very little housing suitable for families, since everything is either decrepit slums or luxury 1- and 2-bedroom condos). Pollution has been largely mitigated with catalytic converters and, now, EVs. And electrification helps deal with noise pollution as well by getting rid of engine noise (especially for motorized appliances like leaf blowers).
Meanwhile, traffic and the stigma around drunk driving (which wasn’t nearly as strong or strictly enforced before the 90s), have quickly taken much of the bloom off the rose of car-dependent lifestyles. I predict the growth of micromobility options will continue to make cities even more attractive as well by improving coverage for areas where transit can’t go and generally improve the throughput of city streets and reduce the space needed for parking cars for people who live within “not-quite walking but feels silly to drive” distance.
The big gap in the US at least is simply a lack of cities! Everything is still concentrated in a handful of legacy urban centers that survived the waves of “urban renewal” and it’s simply too expensive to house all the people who want to live there without turning them into Hong Kong sized megalopolises, which starts to introduce new problems from overwhelming density. “Urban” development patterns need to expand out to more of the country to take demand pressure off the 5 or 6 American cities with decent mass transit.
The author is basically advocating that they want to be an executive with a secretary, but they want the secretary to be AI. I don't use secretary in a pejorative sense, just meaning that the author seems to want someone/something that does simple tasks but lets them make decisions, as opposed to an executive assistant that has a little more self-agency to do things on their own.
They just want OpenClaw with printing and scanning privileges. Every morning OpenClaw prints out a task list or items that need action, the author writes notes/responses, and places it on the scanner. This is basically how my program director worked at my last job. Every morning the secretary would have his schedule printed out, he'd go to meetings and write notes, and would pass by his secretary and stick a note or two on her desk saying "set up a meeting with XYZ org/team within the next few days on ABC topic." The secretary would also print documents/presentations and he'd mark them up throughout the day with changes he wanted made, and he'd drop the documents off when he was done going through them, and the secretary would distribute the documents to their respective POCs to make the changes.
Basically the only thing the author hasn't mentioned that the secretary did is that the secretary also acted as a gatekeeper for access to the program director, either in real-time ("no, you can't go in, they are meeting with a higher level director") or would take a request for a meeting and have enough personal context on whether the director would want the meeting themself or want to see it go through a division chief first. Not sure if OpenClaw can do that, but just about everything else is totally do-able. Not sure if I really want to see someone wasting this much paper just to "feel analog" but I suppose it probably isn't a big deal since most people won't do it this way, and will stick to digital forms of communication with their OpenClaw secretary.
Jesse Genet has been posting some cool use cases of OpenClaw for homeschooling that are somewhat along these lines. Using the assistant to inventory the physical manipulables, the curriculum pages, and how they intersect. Printing pages automatically for certain lessons. Updating e-ink screens with other lessons.
I actually quite like this idea, especially if you could have an automated ingest system. It could be a good way to let isolated places have a voice online, even if it isn't necessarily very high speed. It's almost like http-over-post or something. You could even have a comments section, and post the comments to the website author
> At least then you could mimic in software that thing you get from physical objects—which is that they are usually built to do one, and only one, thing well. My alarm clock, for instance, is just an alarm clock; and that's what I like about it!
UNIX principle, anyone? Do one thing and do it well. It seems like, in this 'age of AI', the industry is rediscovering decades-old best practices, by detour, all over again.
But otherwise, having 'interfaces' printed out to you and a multi-modal LLM layer working from your notes on them sounds really interesting and less stressful than modern 'computing'.
The Office's Michael Scott would be proud - Paper may just be the future of Digital after all!
Doing the sort of things the author wants to do simply wouldn't work for me. All I end up with is a pile of screwed up paper and nothing to show for it. Drafting and rewriting is so much better when you don't have to worry about making a mess.
If projects like this and DynamicLand interest you, it's worth checking out https://folk.computer/ - they've been working on this much more recently than DynamicLand and share their code as open source.
Receive email, render page with the email and a reply section and a unique ID, print it out physically
Human picks up all the sheets out of the printer, writes out replies with pen
Human puts the stack of answered email sheets in a multi-page scanner
Scanner physically scans them, agent transcribes them and matches them back to the incoming emails via the unique ID on each sheet, sends replies
You could adjust this flow for anything where human input is just one part of a larger sequence: just add print -> write -> scan into your flow where you'd normally have a human type. It's kind of a rebirth of faxing
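A minimal sketch of that loop, with hypothetical stand-ins for the rendering and OCR steps; the load-bearing mechanism is matching each scanned reply back to its email via the unique ID printed on the sheet:

```python
import uuid

# Sketch of the print -> write -> scan email loop. Rendering and OCR
# are hypothetical stand-ins here; what matters is the round-trip
# through the unique sheet ID.

outbox = {}  # sheet ID -> original email

def print_for_reply(email):
    sheet_id = uuid.uuid4().hex[:8]
    outbox[sheet_id] = email
    # A real system would render the email plus a reply box and the
    # ID (e.g. as a QR code), then send the page to the printer.
    return f"[{sheet_id}] {email['subject']}\n\n(write reply below)\n"

def ingest_scan(sheet_id, handwritten_text):
    """Match a transcribed scan back to its email and build the reply."""
    original = outbox.pop(sheet_id)
    return {"to": original["from"], "body": handwritten_text}

page = print_for_reply({"from": "sam@example.com", "subject": "Meeting?"})
sheet_id = page.split("]")[0].lstrip("[")  # OCR would recover the ID
reply = ingest_scan(sheet_id, "Tuesday at 3 works.")
print(reply["to"])  # sam@example.com
```

Popping the ID out of `outbox` also gives you a free audit trail: anything left in it at end of day is a sheet that never came back through the scanner.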
On one of my first qualified jobs, my manager (a lovely older lady) did exactly this. All incoming emails were printed and put into a binder. Then she would go home, write an answer with a pen on the back of every single one, and the next day write a new email to the recipient. 10-15% of all the emails she sent this way would bounce because she had written the address incorrectly.
When I showed her the reply button in Eudora (this was in 2001), she was so happy that she bought me a cake.
She struggled with IT but was tack sharp otherwise. So far she's the only boss I've ever really liked.
I will say scanners are somewhat unergonomic, but if you had a high enough definition camera, you could photograph the document in its "natural environment". Granted, it's harder to get an evenly lit picture that way, but I think it's a nicer interface.
All of my document "scanning" for the last—god, maybe 15 years?—has been with a phone camera.
Before everyone just started using Docusign anyway, I'd bought houses with a phone "scanner". LOL.
I don't think I started with it, but for a very long time I've had an app called TinyScanner that's good-enough at edge detection, can de-noise or make a document entirely black & white, and can glue multiple pages together into a PDF. The results look better than plenty of flatbed scanner results I've seen, if not as good as the best of those.
Scanners with automatic feeders are ergonomic when you have to scan more than a page or two. Just place your stack of paper in the feeder and press start. I had a job where I used to do that routinely, and no way a camera would have been more convenient.
Fair enough. I've actually been thinking about this topic lately, since I have to generate, print, fill out, and sign a lot of paper vouchers in my job. I'd prefer having a dedicated scanner to just throw them into in a stack, with a server/cron job/bash script always watching for new incoming documents, rather than a more complex camera setup. But yeah, something like a camera over your shoulder at your desk could pick up documents too.
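That watcher could be as small as a polling loop over the scanner's drop folder (the inbox path and the `process()` hook here are placeholders; inotify or a cron job would serve just as well):

```python
import time
from pathlib import Path

# Sketch of a scanner drop-folder watcher: hand each new PDF the
# scanner writes into the inbox directory to a processing step.

def poll_once(inbox, seen, process):
    """Hand each not-yet-seen PDF in the inbox to process()."""
    for pdf in sorted(Path(inbox).glob("*.pdf")):
        if pdf.name not in seen:
            seen.add(pdf.name)
            process(pdf)

def watch(inbox, process, interval=5.0):
    """Poll the inbox forever, e.g. from a systemd service."""
    seen = set()
    while True:
        poll_once(inbox, seen, process)
        time.sleep(interval)
```

Tracking seen filenames in memory is the simplification here; a real setup would move processed files to an archive folder instead, so the watcher survives restarts.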
I always wished I could throw my Pocketmod[0] in the scanner at the end of the day and have a nice new one with any notes I wanted to carry over to the next day freshly printed and waiting in the morning.
There is something incredibly valuable about forcing yourself to trace execution logic on physical paper. It builds a mental model of state changes and memory that you just don't fully develop when a modern IDE's debugger is doing all the heavy lifting for you.
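For a feel of what that hand-tracing recovers, here is a small made-up loop instrumented to print the same trace table of state changes you would rule out on paper:

```python
# The pen-and-paper habit in code: print one trace-table row per
# state change, with the same columns you would draw by hand.

def traced_sum(values):
    total = 0
    print(f"{'step':>4} {'value':>6} {'total':>6}")
    for step, v in enumerate(values, 1):
        total += v
        print(f"{step:>4} {v:>6} {total:>6}")
    return total

result = traced_sum([3, 1, 4])
print(result)  # 8
```

Writing those rows out by hand, rather than reading them off a debugger, is what forces the mental model of the state to form.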
Forth and Lisp make this easier (with S9 more so than Guile, whose tons of modules overlap with SRFI, the standards every Scheme follows). You can almost trace Lisp functions by hand.
Ditto with Forth: dumping memory, creating literal structures for numbers and whatnot. There's also the 'see' command, alongside dumping literal memory bytes.
Both being REPLs helps a lot, but Forth gets to a lower level than S9 itself.
I've been following someone on X building a "Screenless Phone" that can scan to get inputs and print on receipt paper to provide output - very interested in how these types of experiments evolve!
There’s a strong argument for paper computer, in the sense that we have evolved to think in space and with our body (Barbara Tversky’s work springs to mind). The cognitive load of parsing our thoughts, collaborating on ideas through digital interfaces is not insignificant, and changes the nature of the kind of combinatorial thinking required to externalise and socialise ideas, organise thoughts and structure work. I think AI created a huge opportunity for this kind of ambient association with computational power that over time can make the interface recede into the analogue rather than require us to engage with the digital.
I question the idea of pastoralism though; I would argue this is another kind of construct. Laurel Thatcher Ulrich's 'The Age of Homespun' talks about this in detail, and how handcraft revivals were an expression of fear or anxiety about the radical changes brought about by industrialisation, and became a sort of myth-making device for the rejection of technological overlords.
In any case, Paper Computer charts a neat reformulation of the personal computer into something more interesting. If all individual computing tasks are distributed back into real spaces, objects, and physically manipulable media, it becomes more of an interpersonal computer, and distributed computing power can be pushed to things that don't ordinarily engage with computational tasks, such as wind or plants or anything within the shared working environment.
I've been thinking along these lines too! My idea here is to use a receipt printer + scanner. In the morning the system prints a receipt with various widgets like weather, calendar, etc. The scanner takes in the marked up receipt at EOD to update the digital data and prepare for tomorrow's receipt.
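The rendering half of that can be sketched as below, assuming a typical 32-column receipt printer; the widget data is made up, and fetching real weather/calendar data plus the actual printer output (e.g. over ESC/POS) are omitted:

```python
import datetime

# Sketch of the morning receipt: assemble the day's widgets into the
# narrow fixed-width column a receipt printer expects. WIDTH assumes
# typical 58mm receipt paper.

WIDTH = 32

def render_receipt(widgets, today):
    lines = [today.strftime("%A %d %b").center(WIDTH), "-" * WIDTH]
    for title, body in widgets:
        lines.append(title.upper()[:WIDTH])  # widget header
        lines.append(body[:WIDTH])           # clipped to paper width
        lines.append("")
    return "\n".join(lines)

receipt = render_receipt(
    [("Weather", "Sunny, high 21C"),
     ("Calendar", "10:00 standup, 15:00 dentist")],
    datetime.date(2025, 3, 3),
)
print(receipt)
```

The EOD scan step would then run the marked-up receipt through OCR and diff it against what was printed, which is why keeping the layout rigidly fixed-width helps.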
Since I have a laptop, I threw away all paper support, focusing on the keyboard as primary information interface.
Using paper and space to organize ideas is nice, but that's a niche use case. And in any case, you'll have to digitize it afterwards anyway, so better to start on the digital version immediately and get good at it. Every time I start a new project, I'm tempted to take pencil and paper, but then I refrain and use draw.io or the like, because I know it will win in the long run.
For the rest, you can easily customize your phone / browser / anything to be less distracting.
As for using AI just for convenience, this looks very expensive in terms of resources.
I can jot out a system diagram on paper way better and faster than I can on a computer. Ditto UI design mockups. Having something that can translate those into a better computerized representation than a png is awesome. Paper -> graphviz/Mermaid/whatever, LOL.
This holds even with really nice drawing interfaces like Procreate on a 13" iPad. Paper's still better for some things. Outside of work, the way I make maps (of just about any zoom level) for RPGs I run is to sketch them on paper, take a photo of that and import it to Procreate, trace the lines there (in a new layer), and add color/texture. I get way better results faster, and am way less frustrated, than if I start with a blank "sheet" on the iPad. The paper sitting fully flat on my table, being able to easily and precisely turn it this way and that, erasing or smudging out or just X-ing elements I mess up, plus just messing up way less to begin with, all that adds up to real paper being a way better UI for an initial draft-sketch, for me.
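That paper -> graphviz/Mermaid translation splits into two steps: a multimodal model extracting an edge list from the photo (assumed and omitted here), and then a trivial rendering of that list into diagram text:

```python
# Rendering a sketched system diagram as Mermaid. The edge list is
# assumed to come from a multimodal model reading the photo; that
# call is omitted, and these example edges are made up.

def edges_to_mermaid(edges):
    lines = ["graph TD"]
    for src, dst in edges:
        lines.append(f"    {src} --> {dst}")
    return "\n".join(lines)

diagram = edges_to_mermaid([("Browser", "API"), ("API", "DB")])
print(diagram)
```

Keeping the model's job down to "name the boxes and arrows" and doing the formatting deterministically makes the output much easier to correct when the OCR misreads a label.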
Maybe you should interrogate that temptation to reach for physical interfaces? It sounds like you're ignoring your own psychology and shaping yourself to the machines around you instead of thinking of how the machines could be shaped to you.
Not that I haven't done exactly the same thing as you, I never keep paper around and my handwriting has gotten terrible. I'm saying this to myself and others as well.
When dealing with humans irl, I try to stick to paper interfaces (notebooks etc). I feel super distracted/anti-social when I'm taking notes on my computer or phone.
This is why I was glad to purchase a Newton MessagePad (and before that an NCR-3125 running Go Corp. PenPoint), and all my devices since have had styluses (even my MacBook has a Wacom One display).
Just the other day, I noticed my thinking was so hijacked by distractions while building something (with AI help) that I started writing in a notebook to stay on track. The last time I'd written in the notebook was 3 years ago; in this case writing stuff down in it really helped to get me unstuck.
I'm excited to imagine workflows that could make computing a more physical activity. Thanks for writing and sharing this.
The idea of writing a draft on paper, or cutting out squares to prototype layouts on a table, sounds like a nightmare to me. But I never did like pen and paper much and have lived and breathed computers since I was young. My ideal method of writing is a full-screen monospaced terminal.
That said, I do much prefer reading on paper, or at least on e-ink, for many of the same reasons outlined in the post. Computers and phones are just too distracting, and too dynamic.
And I'd love some way to write down shopping lists or appointments, and have them available wherever, without having to pull out the phone. Our current method is a whiteboard + a photo whenever we need it, which doesn't quite cut it.
The best way to predict the future is to look at the past. Humans have been living and working in the 3-D world since the dawn of time, we’ve worked with paper for thousands of years, we’ve only been working at screens for about 40 years. Technology to remove technology, such as this, is brilliant.
Unfortunately, I don’t think this will work until we have robot secretaries that can automate updating paper wall calendars and documents and books scattered around a room.
The only compromise would be a limited area like a physical desktop that had affordances like an overhead camera and some form of paper output.
We are doing something related: taking the TipToi tech and combining it with our own pen to turn paper into interfaces that can control remote systems. See https://papiro.press (the pages are still being redesigned, but we needed some placeholders to be able to talk to Chinese factories).
This article tries so very hard to avoid confronting reality - going back to analog proves its inherent advantage over AI. There’s boatloads of research proving mind-hand-writing tool engagement is superior to voice recording or typing notes. I’m going to cite this in the future as a testament against AI, because that’s exactly what it is when seen through an academic lens.
This was my gut reaction as well as an eInk enthusiast, but I think the author is looking for something quite different. As much as the rM is a calmer, slower-paced device by design, it's still a device with a screen that doesn't have the same physical affordances and spatial flexibility as pieces of paper.
I agree, absolutely love my reMarkable for those reasons, as well as not needing to manage the _annoying_ physical properties of paper (storage, organization) any more
> they have the problem that they make it difficult to just use your calendar, todo list, or map—or even just respond to a friend's message—without encountering something else along the way, like a social network, short-form video, Slack, the news, or some other notification.
I see this seemingly everywhere. People are looking for these extreme solutions to solve the problem of getting distracted by an app like Instagram or TikTok on their phone. Wouldn’t uninstalling the app, and going a step further, deleting the account, be the more pragmatic solution here? We control what is installed on our devices, what accounts we have, and which notifications we receive. If someone has enough agency to move to a pen and paper, surely they can uninstall some apps?
While I like the idea of having a magic paper notebook that would somehow interact with computer systems, that idea seems like mostly science fiction without having significant levels of technology all around you (cameras, projectors, etc) which would kind of defeat the purpose imo.
I watched the first video on Dynamic Land and I think I’d feel very uncomfortable in a room like that. Look the wrong way and catch a projector’s light in the eye, and once big tech gets into the game, who knows what happens with all the data from the cameras. I’ve grown rather paranoid.
A phone with just utilities installed, no social media, or going a step further to something like an e-ink tablet (something like Remarkable), seems like it would get most of the way there and actually work today. The biggest concern then becomes the web browser, but the big tech companies do most of the work for us by making sites insufferable to use while logged out and without an app.
Something might be able to get rigged up with RocketBook as well, for an actual pen on paper experience, but having to take a picture of the pages is kind of a pain. I have one and the novelty wore off very quickly; it has sat in a drawer for years now.
I’ve struggled with this idea a bit myself, as I sometimes romanticize the idea of using analog tools, but when they exist alone on an island, that seems to come with some considerable downsides in the modern world.
Apple Notes can be good for some of this too. Instead of using ChatGPT, Apple Notes can use the phone camera to do live OCR on text and add it into a note. I’ve used it a couple times and it’s pretty handy, when I remember it.
Emacs, and technologies built on it, such as org-mode, come somewhat close to ideas expressed here by having plain text in a buffer be the unifying data format. You can organize stuff by just moving snippets of text around.
I think it's difficult in practice to design data manipulation interfaces based on real-world objects because atoms are heavy and bits are not. Data is just much more malleable and transformable than real world objects, at least at the pre-Diamond Age tech level we're at. But maybe ML will help make this easier by allowing computers to track and scan the objects more easily.
Yeah, and it's really worth checking out https://dynamicland.org/, because Bret Victor is actually doing this -- slash pointing the way to what such a world could look like. It just seems like now might be a good time for specific smaller parts of that vision to be carved off and developed further. I say that largely because of the advances in multimodal AI, which maybe haven't been fully applied yet in this area.
And a shout-out to https://folk.computer/ as well! They're not as far along in terms of feature parity, but they are open source, and exploring the space in other directions.
If you have any ins with this project, would you mind asking them to add a line or two describing what it's about, or even a linked text in the start.txt file?
Just a simple:
> Folk Computer is a research & art project centered around designing new physical computing interfaces.
From ./notes/tableshots.txt with a link towards the top would imo be quite helpful.
(Sorry, this is just one of my pet peeves: needing to know what a project is about before being able to read about it is just terrible UX, although extremely common as we as humans tend to forget that we know things others don't)
I would say that simply expanding the first word of "hello -" into:
> Hello, Folk Computer is a research & art project centered around designing new physical computing interfaces. [read more](./notes/tableshots.txt)
Is more than sufficient, most of the website is for people who already know about the project. I'm just asking for a small part at the beginning for us who are new :)
The problem with screens is you can't get good at them, even after 18 years of them. Not like you could a sewing machine, a stick shift car, or a loom.
Paper Computing (great name!) is something I've been thinking about a lot to help my kids benefit from tech without exposing them to the brain-melting addiction of screens. I sacrificed a few crazy nights of sleep to try to build a Paper Computer Agent prototype for a recent Gemini hackathon (only, disappointingly, to hit submission issues right before the deadline), which my kids loved and keep asking me to set up permanently for them.
It's essentially a poor man's hacked-up DynamicLand: projector, camera, live agent. There are so many things you could do if you had a strong working baseline for this. My kids used it to create stories, learn how to draw various things, and watch safe videos they could hold in their hand.
There's something weirdly compelling and delightfully physical about holding a piece of paper that shows a live rocket launch, with the flames streaming down the page. It could also project targeted pieces of text, such as inline homework advice, or graphs next to data. It doesn't take long to imagine any number of other fun use cases, and it feels a lot more freeing and inspiring than keeping everything bound to a screen.
Github - https://github.com/Pugio/Orly (hacky minimal prototype that did the thing)
Video Pitch - https://youtu.be/-9l1x7GnmxU (filmed an hour before the deadline on an old phone with no sleep)
R.I.P. to the Amazon Glow video calling device, killed before AI went mainstream. I'd love to hear how to get root on one... it's exactly the hardware your project could use most effectively, and an amazing interface for playing games remotely with the grandparents.
https://www.theverge.com/2022/10/20/23415167/amazon-glow-sup...
This is really cool; I'd love to use something like this for my kids too. Maybe I'll try your project when I have some more free time. I'd love to contribute, but I'm not very skilled in Python.
If you don't mind me asking, what hardware did you use? Especially the projector: I'm guessing it needs quite a strong bulb to be seen in broad daylight?
this is beautiful
I love how creatively AI is integrated in here. Amazing.
The Folk Computer people have some incredible work they've been doing too, that's definitely worth looking at for anyone interested. Their integration of a novel display technology is really sweet too, allowing for good visibility in a variety of conditions, which I love. https://folkcomputer.substack.com/ https://folk.computer/ https://news.ycombinator.com/item?id=39241472 (165 points, 2 years ago, 53 comments)
this is such a great idea! well done
This is lovely.
I was pretty excited when I saw the premise behind what Apple was doing with VisionPro because I figured they were steering towards this, but it seems they’ve looked away and don’t really care about going deeper into this direction.
I asked at some point if I could theoretically develop an application that could literally be controlled by a Fisher-Price toy, like a little plastic car console or something. Or even potentially have a real keyboard that isn't connected to anything, where the VisionPro just sees my keypresses and applies them as if I were actually pressing something. The former is possible but surprisingly difficult; the latter isn't really there yet (it requires too much precision, and latency is worse than just using a Bluetooth keyboard).
Either way, the idea of a computing environment that meshes with and directly interacts with the real, physical objects around you is an interesting premise I’d like to see taken further with “Spatial Computing”/AR. Scanning and recording things I’m writing on a whiteboard or in a notebook by recognizing that I’ve picked up a pen and am writing something down would just be getting started.
Of course, if we’re ambiently recording everything you’re doing there will need to be some kind of regular process/interface to “sift” everything at the end of the day. This is the core of the Getting Things Done methodology. Everything goes into a big “intake list” and then you do periodic check-ins throughout the day where you review the list and decide whether to move those to a series of sub-lists to “do this now,” “do this soon,” or “do this someday.”
> Now that we have actually good AI, I have this vision of a form of computing that doesn’t involve me using a computer so much. Imagine you had the day’s emails to go through. It would be nice if the ones that required a simple decision could be dispatched with a few pen-strokes: I could write down a date that would work for that meeting; check a box to accept that invitation; etc.
This reminds me of those predictions from 1900 about the year 2000, when they thought we'd all live in enormous skyscrapers and get around by flying cars. Instead we moved out to suburbs because improved logistics systems meant we could buy things from suburban shopping centres rather than having to go into city centres. Revolution, not evolution.
Surely the real advantage of an 'actually good AI' would be getting the AI to do the work itself, rather than just allowing the work to be done in a format with which the human is more comfortable. The underlying problem is that there are too many things vying for our attention.
Don’t think of it as work, but as what a human would want to spend time doing. In https://news.ycombinator.com/item?id=47788736, a commenter describes how his kids love using the “paper computer” prototype he built. They are not working; they are playing and learning and experimenting and creating. Things that humans like to do.
To some degree, that's what one had w/ Apple's Newton Intelligence on the MessagePad --- it was "just" fancy pattern-matching, but mostly it worked, and the UI and implementation were quite good, and it kept me organized all through college.
Mentioning the Newton may be anathema to the discussion (it seems to bring up the usual jokes, etc.) but I was thinking too that the Macintosh (or the Xerox Alto if you like, or the Mother of All Demos) tried to move us in that direction by "skeuomorphising" the computer interface—make it look like the more familiar "real world". The Newton pushed further. It seems to have been on the mind of at least a few people at Apple.
It sounds like the author is on the same track, has the same mindset. And I like.
I am also reminded of the Young Lady's Illustrated Primer from Neal Stephenson's The Diamond Age. It is not exactly what the author describes, but, if the book had a computer backend, it also divorces the user from the computer interface we have come to know. Perhaps for me some future (better) local LLM within such a book is what I want. A kind of companion I ask questions of…
(I mean I suppose I should just do what was posted a day or two ago to the Ask HN: and put a local LLM behind a messaging app, and I could just converse with it wherever I am. Tangent: I am kind of fascinated by the idea of a personal LLM that has context stretching back to my earliest days—were I to have started conversing with this synthetic companion at a young age. Imagine the lifetime of context where the LLM knows my habits, how I've changed over the years. I suppose this is nightmare fuel for a number of you.)
Other copies of the Primer do have a computer backend.
There are basically three versions of the book:
1) The ones developed for a few rich kids. These are partially automated, but backed by gig workers. They get what we might call (if you'll pardon the term) "Actually Indians" AI (augmented by the regular type).
2) The one our protagonist gets. This is one of the books from #1, but the distinctive feature here is that an early gig worker (the book calls them "'ractors" when they're doing this kind of work) whom the protagonist happens to draw takes a special interest in her and intentionally keeps taking her jobs over a period of several years. This continuity and personal care from a single real person is what sets it apart and makes her experience so excellent.
3) The mass-market version that's entirely computerized, no human touch. This version brainwashes a fuckload of kids into becoming the "mouse army", and that's really all we see as far as what it can do: something really bad (if convenient for our protagonist).
The message of the book is 100% the opposite of "automated learning-books are amazing". It's "tech for learning sucks ass and/or is outright dangerous if you rely only on it, and a real human tutor who cares about a kid is the best thing around even in a crazy high-tech future-world".
Charles de Lint had an intelligent book in his fantasy novel _Jack the Giant Killer_ (or maybe its sequel) --- I've tried doing the conversing/chatting thing w/ an LLM a couple of times, but always got annoyed more than amused.
What's the point? LLMs tend towards the mean/average --- I want better in my life and interactions --- it's useful when I need an example DXF or similar rote task, but my current project is a woodworking joint which has no precedent.
Yes, the skeuomorphism angle is an interesting one, and one which is surprisingly absent in the _ur_ description of a stylus-equipped computing device, the slates/tablets from Larry Niven and Jerry Pournelle's _The Mote in God's Eye_ --- this sort of thing seems to be coming back around --- a recent Kindle Scribe firmware update added shape recognition. I'd be _very_ pleased if my new Kindle Scribe Colorsoft could fully become a replacement for my Newton....
I think you're right that the use case for an LLM is still rather niche. It's perhaps still worth exploring though as they may well improve over time.
Regardless, I have still found them useful. Diagnosing the problems with a car is maybe an esoteric example but is still useful.
For many months now I have been working through learning about and implementing a hobbyist analog computer with LLM as engineer-confidant. I already knew the basics of op-amps and analog computing but was surprised at a lot of the new things I discovered only by way of the LLM saying (for example), "Hey, here's a nice way to get your reference voltages…" and the project benefited from it (and I learned about a new chip/device/technique).
Yes, they do work well as a stand-in for the "competent technician with skill in the pertinent art and fully aware of all prior art" (to use wording akin to the patent-application standard).
But it's only going to let you avail yourself of prior art and techniques.
> we moved out to suburbs because
Because it was a profit making venture for car companies. Suburbs are horrifically inefficient, they survive by the twisted "communism" of cannibalizing the dense urban tax bases to support the sprawling, expensive to service and maintain, isolating flatlands.
Not so fast: I would say that the move to suburbs was initially driven by a thirst for homeownership with luxurious lawns, coupled with electric streetcars and other rail-based transport.
It was only later, when the almighty combustion engine and the tire companies forcibly replaced streetcars with buses and trucks, that cars began their hegemonic domination of suburbia. The National Highway System decrees didn't hurt, either, but highways were built in the USA with an ulterior motive of national defense.
It also happened during a period when cities were polluted and noisy, and middle-class housing was largely cramped tenements. Basically all of this has been, or is being, mitigated these days. City-center housing now looks more like luxury loft living than tenements (though this gives us a big problem with ‘missing middle’ housing: there's very little housing suitable for families, with everything either decrepit slums or luxury 1- and 2-bedroom condos). Pollution has been largely mitigated with catalytic converters and, now, EVs. And electrification helps with noise pollution as well by getting rid of engine noise (especially for motorized appliances like leaf blowers).
Meanwhile, traffic and the stigma around drunk driving (which wasn’t nearly as strong or strictly enforced before the 90s), have quickly taken much of the bloom off the rose of car-dependent lifestyles. I predict the growth of micromobility options will continue to make cities even more attractive as well by improving coverage for areas where transit can’t go and generally improve the throughput of city streets and reduce the space needed for parking cars for people who live within “not-quite walking but feels silly to drive” distance.
The big gap in the US at least is simply a lack of cities! Everything is still concentrated in a handful of legacy urban centers that survived the waves of “urban renewal” and it’s simply too expensive to house all the people who want to live there without turning them into Hong Kong sized megalopolises, which starts to introduce new problems from overwhelming density. “Urban” development patterns need to expand out to more of the country to take demand pressure off the 5 or 6 American cities with decent mass transit.
The author is basically advocating that they want to be an executive with a secretary, but they want the secretary to be AI. I don't use secretary in a pejorative sense, just meaning that the author seems to want someone/something that does simple tasks but lets them make decisions, as opposed to an executive assistant that has a little more self-agency to do things on their own.
They just want OpenClaw with printing and scanning privileges. Every morning OpenClaw prints out a task list or items that need action, the author writes notes/responses, and places it on the scanner. This is basically how my program director worked at my last job. Every morning the secretary would have his schedule printed out, he'd go to meetings and write notes, and would pass by his secretary and stick a note or two on her desk saying "set up a meeting with XYZ org/team within the next few days on ABC topic." The secretary would also print documents/presentations and he'd mark them up throughout the day with changes he wanted made, and he'd drop the documents off when he was done going through them, and the secretary would distribute the documents to their respective POCs to make the changes.
Basically the only thing the author hasn't mentioned that the secretary did is that the secretary also acted as a gatekeeper for access to the program director, either in real-time ("no, you can't go in, they are meeting with a higher level director") or would take a request for a meeting and have enough personal context on whether the director would want the meeting themself or want to see it go through a division chief first. Not sure if OpenClaw can do that, but just about everything else is totally do-able. Not sure if I really want to see someone wasting this much paper just to "feel analog" but I suppose it probably isn't a big deal since most people won't do it this way, and will stick to digital forms of communication with their OpenClaw secretary.
Jesse Genet has been posting some cool use cases of OpenClaw for homeschooling that are somewhat along these lines. Using the assistant to inventory the physical manipulables, the curriculum pages, and how they intersect. Printing pages automatically for certain lessons. Updating e-ink screens with other lessons.
Mandatory RealTalk/Dynamicland mention [0] [1]
[0] https://www.youtube.com/watch?v=7wa3nm0qcfM [1] https://dynamicland.org/
Reminds me of Paper Website from the Tiny Projects series, discussed back in 2021.
https://daily.tinyprojects.dev/paper_website
https://news.ycombinator.com/item?id=29550812
I actually quite like this idea, especially if you could have an automated ingest system. It could be a good way to let isolated places have a voice online, even if it isn't necessarily very high speed. It's almost like http-over-post or something. You could even have a comments section, and post the comments to the website author
> At least then you could mimic in software that thing you get from physical objects—which is that they are usually built to do one, and only one, thing well. My alarm clock, for instance, is just an alarm clock; and that's what I like about it!
The Unix philosophy, anyone? Do one thing, and do it well. It seems like in this 'age of AI' the industry is rediscovering, by detour, decades-old best practices all over again.
But otherwise, having 'interfaces' printed out to you, and a multi-modal LLM later working from your notes on them, sounds really interesting and less stressful than modern 'computing'.
The Office's Michael Scott would be proud - Paper may just be the future of Digital after all!
Doing the sort of things the author wants to do simply wouldn't work for me. All I end up with is a pile of screwed up paper and nothing to show for it. Drafting and rewriting is so much better when you don't have to worry about making a mess.
If projects like this and DynamicLand interest you, it's worth checking out https://folk.computer/ - they've been working on this much more recently than DynamicLand and share their code as open source.
Receive email, render page with the email and a reply section and a unique ID, print it out physically
Human picks up all the sheets out of the printer, writes out replies with pen
Human puts the stack of answered email sheets in a multi-page scanner
Scanner physically scans them, agent transcribes them and matches them back to the incoming emails via the unique ID on each sheet, sends replies
You could adjust this flow for anything where human input is just one part of a larger sequence: just add print -> write -> scan into your flow where you'd normally have a human type. It's kind of a rebirth of faxing
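Mechanically, that round trip is mostly plumbing. Here's a minimal Python sketch of the unique-ID matching step (the page layout, the `render_for_print`/`match_scan` names, and the 8-character IDs are all my own invention; the handwriting-transcription step is assumed to happen elsewhere):

```python
import re
import uuid

def render_for_print(subject: str, body: str) -> tuple[str, str]:
    """Lay out one email as a printable page with a unique ID footer."""
    page_id = uuid.uuid4().hex[:8]
    page = (
        f"Subject: {subject}\n\n{body}\n\n"
        "--- write your reply below ---\n\n\n"
        f"[page-id: {page_id}]"
    )
    return page_id, page

def match_scan(transcribed: str, id_to_email: dict) -> tuple:
    """Match a transcribed scan back to its source email via the page ID."""
    m = re.search(r"\[page-id: ([0-9a-f]{8})\]", transcribed)
    if not m:
        return None, None  # ID smudged or cropped: route to a manual pile
    # The reply is whatever sits between the marker and the ID footer.
    reply = transcribed.split("--- write your reply below ---")[-1]
    reply = reply.split("[page-id:")[0].strip()
    return id_to_email.get(m.group(1)), reply
```

On the print side you'd record `page_id -> message_id` as each page goes out; the scan side then only has to OCR well enough to recover the footer and the handwriting.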
On one of my first qualified jobs, my manager (a lovely older lady) did exactly this. All incoming emails were printed and put into a binder. Then she would go home, write an answer with a pen on the back side of every single one, and the next day write a new email to the recipient. 10-15% of all emails she sent this way would bounce because she had written the address incorrectly.
When I showed her the reply button in Eudora (this was in 2001), she was so happy that she bought me a cake.
She struggled with IT but was tack sharp otherwise. So far she's the only boss I've ever really liked.
I will say scanners are somewhat unergonomic, but if you had a high enough definition camera, you could photograph the document in its "natural environment". Granted, it's harder to get an evenly lit picture that way, but I think it's a nicer interface.
All of my document "scanning" for the last—god, maybe 15 years?—has been with a phone camera.
Before everyone just started using Docusign anyway, I'd bought houses with a phone "scanner". LOL.
I don't think I started with it, but for a very long time I've had an app called TinyScanner that's good-enough at edge detection, can de-noise or make a document entirely black & white, and can glue multiple pages together into a PDF. The results look better than plenty of flatbed scanner results I've seen, if not as good as the best of those.
I've been using Genius Scan for ~15 years and it also lets you send faxes (via credits you buy). My phone works for 99% of my use cases.
Scanners with automatic feeders are ergonomic when you have to scan more than a page or two. Just place your stack of paper in the feeder and press start. I had a job where I used to do that routinely, and no way a camera would have been more convenient.
Fair enough. I've actually been thinking about this topic lately, since I have to generate, print, fill out, and sign a lot of paper vouchers in my job. I'd prefer a dedicated scanner to just throw them into in a stack, with a server/cron job/bash script always watching for new incoming documents, rather than a more complex camera setup. But yeah, something like a camera over your shoulder at your desk could pick up documents too.
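The watching side of that workflow needs very little code. A rough stdlib-only sketch of the polling loop (the file extensions, the `find_new_scans`/`watch` names, and the five-second interval are arbitrary assumptions, and the actual processing handler is left hypothetical):

```python
import time
from pathlib import Path

def find_new_scans(inbox: Path, seen: set) -> list:
    """One polling pass: return scanner output files not seen before."""
    current = {p for p in inbox.iterdir()
               if p.suffix.lower() in {".pdf", ".png", ".jpg"}}
    new = sorted(current - seen)
    seen |= current  # remember everything we've already reported
    return new

def watch(inbox: Path, handle, poll_seconds: float = 5.0) -> None:
    """Always-on variant; a cron job calling find_new_scans also works."""
    seen = set()
    while True:
        for doc in find_new_scans(inbox, seen):
            handle(doc)  # hand off to OCR / the agent here
        time.sleep(poll_seconds)
```

Tools like inotify would avoid the polling, but for a stack of vouchers a few times a day this is plenty.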
This is fixed by using anoto paper and a supporting pen!
I always wished I could throw my Pocketmod[0] in the scanner at the end of the day and have a nice new one with any notes I wanted to carry over to the next day freshly printed and waiting in the morning.
[0] https://pocketmod.com/
There is something incredibly valuable about forcing yourself to trace execution logic on physical paper. It builds a mental model of state changes and memory that you just don't fully develop when a modern IDE's debugger is doing all the heavy lifting for you.
Forth and Lisp. It's easier with S9 than with Guile and its tons of modules, where some Guile libs overlap with SRFI (the standards every Scheme is meant to follow). You can almost trace Lisp functions by hand.
Ditto with Forth: dumping the memory, creating literal structures for numbers and whatnot. There's also the 'see' command, alongside dumping literal memory bytes.
Both being REPLs helps a lot. But Forth gets to a lower level than S9 itself.
I've been following someone on X building a "Screenless Phone" that can scan to get inputs and print on receipt paper to provide output - very interested in how these types of experiments evolve!
https://x.com/daviddorg/status/2037050583274954882
https://x.com/daviddorg/status/2033937383012635065
https://yearunplugged.com/newsletter
There’s a strong argument for paper computer, in the sense that we have evolved to think in space and with our body (Barbara Tversky’s work springs to mind). The cognitive load of parsing our thoughts, collaborating on ideas through digital interfaces is not insignificant, and changes the nature of the kind of combinatorial thinking required to externalise and socialise ideas, organise thoughts and structure work. I think AI created a huge opportunity for this kind of ambient association with computational power that over time can make the interface recede into the analogue rather than require us to engage with the digital.
I question the idea of pastoralism, though; I would argue this is another kind of construct. Laurel Thatcher Ulrich’s ‘The Age of Homespun’ talks about this in detail, and how handcraft revivals were an expression of fear or anxiety about the radical changes brought about by industrialisation, and became a sort of myth-making device for the rejection of technological overlords.
In any case, Paper Computer charts a neat reformulation of the personal computer into something more interesting. If all individual computing tasks become distributed back into real spaces, objects, and physically manipulable media, it becomes more of an interpersonal computer, and distributed computing power can be pushed to things that don’t ordinarily engage with computational tasks, such as wind or plants or anything within the shared working environment.
I've been thinking along these lines too! My idea here is to use a receipt printer + scanner. In the morning the system prints a receipt with various widgets like weather, calendar, etc. The scanner takes in the marked up receipt at EOD to update the digital data and prepare for tomorrow's receipt.
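The print side of that idea is mostly string formatting. A small sketch, where the 32-column width, the checkbox markup, and the widget-dict shape are all assumptions on my part:

```python
import datetime

WIDTH = 32  # typical 58 mm receipt printer column count (assumed)

def compose_receipt(widgets: dict) -> str:
    """Compose the morning receipt: one narrow section per widget."""
    lines = [datetime.date.today().isoformat().center(WIDTH), "=" * WIDTH]
    for title, items in widgets.items():
        lines.append(title.upper().center(WIDTH))
        # Checkboxes give the pen something unambiguous to mark up.
        lines += [f"[ ] {item}"[:WIDTH] for item in items]
        lines.append("-" * WIDTH)
    return "\n".join(lines)
```

The EOD scan pass would then just look for which `[ ]` boxes came back filled in, and update the digital side accordingly.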
Since I have a laptop, I threw away all paper support, focusing on the keyboard as primary information interface.
Using paper and space to organize ideas is nice, but that's a niche use case. And in any case, you'll have to digitize it afterwards anyway, so better to start on the digital version immediately and get good at it. Every time I start a new project, I'm tempted to reach for pencil and paper, but then I refrain and use draw.io or the like, because I know it will win out in the long run.
For the rest, you can easily customize your phone / browser / anything to be less distracting.
As for using AI just for convenience, this looks very expensive in terms of resources.
I can jot out a system diagram on paper way better and faster than I can on a computer. Ditto UI design mockups. Having something that can translate those into a better computerized representation than a png is awesome. Paper -> graphviz/Mermaid/whatever, LOL.
This holds even with really nice drawing interfaces like Procreate on a 13" iPad. Paper's still better for some things. Outside of work, the way I make maps (of just about any zoom level) for RPGs I run is to sketch them on paper, take a photo of that and import it into Procreate, trace the lines there (in a new layer), and add color/texture. I get way better results faster, and am way less frustrated, than if I start with a blank "sheet" on the iPad. The paper sitting fully flat on my table, being able to easily and precisely turn it this way and that, erasing or smudging out or just X-ing elements I mess up, plus just messing up way less to begin with: all that adds up to real paper being a way better UI for an initial draft sketch, for me.
Maybe you should interrogate that temptation to reach for physical interfaces? It sounds like you're ignoring your own psychology and shaping yourself to the machines around you instead of thinking of how the machines could be shaped to you.
Not that I haven't done exactly the same thing as you, I never keep paper around and my handwriting has gotten terrible. I'm saying this to myself and others as well.
When dealing with humans IRL, I try to stick to paper interfaces (notebooks, etc.). I feel super distracted/antisocial when I'm taking notes on my computer or phone.
This is why I was glad to purchase a Newton MessagePad (and before that an NCR-3125 running Go Corp. PenPoint), and all my devices since have had styluses (even my MacBook has a Wacom One display).
I love the idea.
Just the other day, I noticed my thinking was so hijacked by distractions while building something (with AI help) that I started writing in a notebook to stay on track. The last time I'd written in the notebook was 3 years ago; in this case writing stuff down in it really helped to get me unstuck.
I'm excited to imagine workflows that could make computing a more physical activity. Thanks for writing and sharing this.
Omg I love this, I wrote a very similar blog post last week! I would love to connect and chat @jsomers. Where can I message you?
(My blog post btw if you’re curious https://bhave.sh/make-humans-analog-again/)
My favorite paper computer https://pocketmod.com
Along those lines: Hipster PDA
http://www.43folders.com/2004/09/03/introducing-the-hipster-...
The idea of writing a draft on paper, or cutting out squares to prototype layouts on a table, sounds like a nightmare to me. But I never did like pen and paper much and have lived and breathed computers since I was young. My ideal method of writing is a full-screen monospaced terminal.
That said, I do much prefer reading on paper, or at least on e-ink, for many of the same reasons outlined in the post. Computers and phones are just too distracting, and too dynamic.
And I'd love some way to write down shopping lists or appointments, and have them available wherever, without having to pull out the phone. Our current method is a whiteboard + a photo whenever we need it, which doesn't quite cut it.
How does a 30” e-ink screen sound?
Throw in touch for using a stylus and I’d buy it
The best way to predict the future is to look at the past. Humans have been living and working in the 3-D world since the dawn of time, we’ve worked with paper for thousands of years, we’ve only been working at screens for about 40 years. Technology to remove technology, such as this, is brilliant.
Unfortunately, I don’t think this will work until we have robot secretaries that can automate updating the paper wall calendars, documents, and books scattered around a room.
The only compromise would be a limited area like a physical desktop that had affordances like an overhead camera and some form of paper output.
I think a bigger blocker is that this is a read-only environment for the computer when we need readwrite.
It’s fantastic that computers can be so effective at this read-only work but so much of what I do needs write feedback from the machine.
We are doing something related: taking the TipToi tech and pairing it with our own pen to turn paper into interfaces that can control remote systems. See https://papiro.press (the pages are still being redesigned, but we needed some placeholders to be able to talk to Chinese factories).
This article tries so very hard to avoid confronting reality - going back to analog proves its inherent advantage over AI. There’s boatloads of research proving mind-hand-writing tool engagement is superior to voice recording or typing notes. I’m going to cite this in the future as a testament against AI, because that’s exactly what it is when seen through an academic lens.
See also -- The Screenless Office: http://screenl.es/
(On HN 2017, 138 comments: https://news.ycombinator.com/item?id=15960056)
How about hacking a reMarkable e-ink tablet as an easy prototype? The reMarkable is basically "better paper" already.
This was my gut reaction as well as an eInk enthusiast, but I think the author is looking for something quite different. As much as the rM is a calmer, slower-paced device by design, it's still a device with a screen that doesn't have the same physical affordances and spatial flexibility as pieces of paper.
reMarkable is it, though. Where it wins is that I can select an arbitrary region, copy and paste, and resize. Can't do that with pen and paper.
I agree, absolutely love my reMarkable for those reasons, as well as not needing to manage the _annoying_ physical properties of paper (storage, organization) any more
> they have the problem that they make it difficult to just use your calendar, todo list, or map—or even just respond to a friend's message—without encountering something else along the way, like a social network, short-form video, Slack, the news, or some other notification.
I see this seemingly everywhere. People are looking for these extreme solutions to solve the problem of getting distracted by an app like Instagram or TikTok on their phone. Wouldn’t uninstalling the app, and going a step further, deleting the account, be the more pragmatic solution here? We control what is installed on our devices, what accounts we have, and which notifications we receive. If someone has enough agency to move to a pen and paper, surely they can uninstall some apps?
While I like the idea of having a magic paper notebook that would somehow interact with computer systems, that idea seems like mostly science fiction without having significant levels of technology all around you (cameras, projectors, etc) which would kind of defeat the purpose imo.
I watched the first video on Dynamic Land and I think I’d feel very uncomfortable in a room like that. Look the wrong way and catch a projector’s light in the eye, and once big tech gets into the game, who knows what happens with all the data from the cameras. I’ve grown rather paranoid.
A phone with just utilities installed, no social media, or going a step further to something like an e-ink tablet (something like Remarkable), seems like it would get most of the way there and actually work today. The biggest concern then becomes the web browser, but the big tech companies do most of the work for us by making sites insufferable to use while logged out and without an app.
Something might be able to get rigged up with RocketBook as well, for an actual pen on paper experience, but having to take a picture of the pages is kind of a pain. I have one and the novelty wore off very quickly; it has sat in a drawer for years now.
I’ve struggled with this idea a bit myself, as I sometimes romanticize the idea of using analog tools, but when they exist alone on an island, that seems to come with some considerable downsides in the modern world.
Apple Notes can be good for some of this too. Instead of using ChatGPT, Apple Notes can use the phone camera to do live OCR on text and add it into a note. I’ve used it a couple times and it’s pretty handy, when I remember it.
This is similar to how movie and TV show productions are timed out over days of production.
cool idea
Doesn't it bother you at all that the paper is only an interface, and that you're not depending any less on a computer?
Thought this was gonna be about CARDIAC, lol.
Emacs, and technologies built on it such as org-mode, come somewhat close to the ideas expressed here by having plain text in a buffer be the unifying data format. You can organize stuff by just moving snippets of text around.
I think it's difficult in practice to design data manipulation interfaces based on real-world objects because atoms are heavy and bits are not. Data is just much more malleable and transformable than real world objects, at least at the pre-Diamond Age tech level we're at. But maybe ML will help make this easier by allowing computers to track and scan the objects more easily.
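For a concrete picture of what "computers tracking the objects" looks like, here's a minimal, purely illustrative sketch of the Dynamicland/Folk-style interaction model: each physical object (a tagged piece of paper) carries a small program, and on every camera frame the system runs whichever programs are currently visible on the table. All names here (`PaperProgram`, `Table`, `frame`) are hypothetical; a real system would detect fiducial markers with computer vision rather than being handed IDs directly.

```python
# Hypothetical sketch: paper pages as programs, dispatched by what the
# camera sees. Marker detection itself is simulated via plain IDs.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class PaperProgram:
    marker_id: int
    run: Callable[[], str]  # what to project while the page is visible

@dataclass
class Table:
    programs: dict = field(default_factory=dict)

    def register(self, prog: PaperProgram) -> None:
        self.programs[prog.marker_id] = prog

    def frame(self, visible_ids: set) -> list:
        # One camera frame: run every program whose marker is visible,
        # in marker-id order for deterministic output.
        return [p.run() for mid, p in sorted(self.programs.items())
                if mid in visible_ids]

table = Table()
table.register(PaperProgram(7, lambda: "draw clock on page 7"))
table.register(PaperProgram(12, lambda: "play rocket video on page 12"))

print(table.frame({7}))      # only page 7 is on the table
print(table.frame({7, 12}))  # both pages visible
```

The interesting part is that removing a page from the table really does "uninstall" its behavior, which is part of what makes the physical metaphor feel honest rather than decorative.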
https://www.instructables.com/CARDIAC-CARDboard-Illustrative...
Although the cardboard implementation is kind of the point, I think it's cool that someone made an FPGA version (dead link though, RIP drdobbs.com).
If I understand this correctly, you're talking about using paper as a computing interface? That's such a neat idea!
Yeah, and it's really worth checking out https://dynamicland.org/, because Bret Victor is actually doing this, or at least pointing the way to what such a world could look like. It just seems like now might be a good time for specific smaller parts of that vision to be carved off and developed further. I say that largely because of the advances in multimodal AI, which maybe haven't been fully applied yet in this area.
And a shout-out to https://folk.computer/ as well! They're not as far along in terms of feature parity, but they are open source, and exploring the space in other directions.
If you have any ins with this project, would you mind asking them to add a line or two describing what it is about, or even a linked text in the start.txt file?
Just a simple:
> Folk Computer is a research & art project centered around designing new physical computing interfaces.
taken from ./notes/tableshots.txt, with a link towards the top, would imo be quite helpful.
(Sorry, this is just one of my pet peeves: needing to already know what a project is about before you can read about it is just terrible UX, although extremely common, as we humans tend to forget that we know things others don't.)
Thanks! I can pass it on. Would you say then that having an overview on the main page, with links for more details would be a better approach?
I would say that simply expanding the first word of "hello -" into:
> Hello, Folk Computer is a research & art project centered around designing new physical computing interfaces. [read more](./notes/tableshots.txt)
is more than sufficient; most of the website is for people who already know about the project. I'm just asking for a small part at the beginning for those of us who are new :)
https://wiki.xxiivv.com/site/paper_computer.html
Also check out the spirograph, along with the slide rule and the abacus.
The problem with screens is you can't get good at them, even after 18 years of them. Not like you could a sewing machine, a stick shift car, or a loom.