The slab and the permacomputer
source link: https://society.robinsloan.com/archive/slab/
To: The Media Lab
Sent: October 2021
Obsidian revetment slab fragment, 1st century A.D., Roman

This is a note intended to lay out something that’s lately clicked for me. Here are three glimpses of the future of computing that all seem to “rhyme”:
1. Cloud functions
I wrote about my experience with Google Cloud Functions back in the spring; for me, these represent the “perfection” of the AWS/GCP model. Their utility snuck up on me! I now have about a dozen cloud functions running — or, I should say, waiting to run. My little platoon of terra cotta warriors, dozing in their data centers until called upon.
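A minimal sketch, in Python, of that dozing-until-called shape. The names here are illustrative, and the `Request` class is a stand-in for the Flask request object that Google’s Python runtime actually hands to an HTTP function, so the sketch is self-contained rather than deployable:

```python
# Illustrative sketch of a cloud function's shape -- not a deployable
# artifact. Request stands in for the Flask request object that
# Google Cloud Functions passes to a Python HTTP handler.

class Request:
    """Minimal stand-in for an incoming HTTP request."""
    def __init__(self, args=None):
        self.args = args or {}

def greet(request):
    """Entry point: it does nothing until invoked, then answers and dozes again."""
    name = request.args.get("name", "world")
    return f"Hello, {name}!"
```

Deployed, a function like this costs nothing while idle; the platform wakes it per request, which is exactly the terra cotta warrior arrangement.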
2. Colab notebooks
I’d heard about these forever, but it’s only in the past year that I’ve used them, and they have since become indispensable. The fusion of “document” with “program” AND “environment” is frankly dizzying; using Colab feels more futuristic than just about anything else I do in a browser. (I should add that I’m terrible at Python, but part of the appeal is that you can be terrible at Python and still get a lot done in these notebooks.)
You might say, “surely, Robin, you’re just saying that you admire IPython and Jupyter notebooks generally” — but I’m not, really.
For me, the magic is in the specific combination of Jupyter’s affordances with Google’s largesse: you open a new tab, and poof, it’s a document with a powerful computer attached.
3. World computers
All of the blockchains, particularly those that depend on costly proof-of-work algorithms, strike me as deeply aesthetically ugly; these are systems with thrashing waste at their core, by design. Clever, maybe, but not elegant.
Even so, I can’t deny that Ethereum’s “world computer” is interesting and, even more than that, evocative. The Ethereum blockchain is one entity, shared globally, agreed upon by all its participants: that’s what makes it useful as a ledger. The Ethereum Virtual Machine, a kind of computer — simultaneously sophisticated and primitive — is likewise one logical entity, even if it’s distributed in space and time.
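The “one entity, agreed upon by all” property falls out of the chain structure itself: each block commits to the hash of the one before it, so two participants who hold the same latest hash necessarily hold the same history. A toy sketch in Python (illustrative only; Ethereum’s real data structures are far more elaborate than this):

```python
# Toy hash-chained ledger -- the skeleton of the "shared entity" idea,
# nothing like Ethereum's actual block format.
import hashlib

def add_block(chain, data):
    """Append a block that commits to the previous block's hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    block = {
        "data": data,
        "prev": prev_hash,
        "hash": hashlib.sha256((prev_hash + data).encode()).hexdigest(),
    }
    chain.append(block)
    return chain
```

Because every block’s hash folds in its predecessor’s, altering any old entry changes every hash after it; agreement on the tip is agreement on the whole ledger.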
As with a lot of things in crypto, the feeling is as much mystical as it is technical. I understand why people get excited when they deploy an Ethereum contract: it feels like you are programming not just a computer, but THE computer. That feeling is technically wrong; it is definitely just a computer; but since when did the technical incorrectness of feelings prevent them from being motivating?
I think these are glimpses of an accelerating reformulation of “computers” — the individual machines like my laptop, or your phone, or the server whirring in the corner of my office — into “compute”, a seamless slab of digital capability.
I like “slab” better than “cloud”, both for its sense of a smooth, opaque surface and its suggestion of real mass and weight. That’s the twist, of course: cloud functions and Colab notebooks and Ethereum contracts DO run on “computers”, vast armadas of individual machines taking up real physical space, venting real hot air. A responsible user of these systems ought to remember that, but … only sometimes. Power outlets also conceal gnarly infrastructural realities, real mass and weight, and a person ought to be aware of those, too — but not, perhaps, every time they plug in the vacuum.
The idea that “computers” might melt into “compute”, a utility as unremarkable as electricity or water, isn’t new. But I do feel like it’s suddenly melting faster!
For me, a more useful analogy than electricity is textile manufacturing, which was, a couple centuries ago, THE high-tech industry; innovations in mechanical weaving were close to the core of the industrial revolution. Today, aside from the weird technical fabrics that are like, bullet-proof and opaque to cosmic rays, textile manufacturing isn’t considered high-tech: it’s just … industry, I suppose. Textiles are produced with extreme efficiency in huge, matter-of-fact facilities. Move along! Nothing to see here.
I recently read David Macaulay’s book Mill, illustrating the construction and growth of a textile mill in Rhode Island in the early 1800s, and, I’ve got to tell you: Macaulay’s mill looks and feels like a data center.
They put data centers near rivers, too!
For me, this raises the analogical question:
Textiles in 1800 : textiles in 2020 :: computers in 2020 : ???
I mean, I am betting the ??? is a slab — but I don’t know exactly what kind, nor do I know how it will be built or operated or accessed.
The dutifully critical part of me wants to shout: you shouldn’t trust these slabs! Their operators, G — and A — and M — and the rest, will surely betray you. The very signature of the corporate internet is the way it slips from your grasp. The leviathans swim off in pursuit of new markets, and what do they leave you with? Deprecation notices.
There are other endings, too: even now, the slabs occasionally flicker offline, and it’s not difficult to imagine a seriously hard crash, one that lasts a long time, caused by either an accident or an attack. So much for my terra cotta warriors.
Then again … internet trunk lines run alongside railroad tracks. Won’t the slab operators and their infrastructure still be with us in a hundred years, in SOME form, just as the railroads are today? I would guess yes, probably.
I think maybe we — that’s the “we” of people interested in the futures, near and far, of computers — ought to go in two directions at once.
First, if somebody offers you a seamless slab of compute and says, here, take a bite: sure, go for it. See what you can make. Solve problems for yourself and for others. Explore, invent, play.
At the same time, think further and more pointedly ahead. There’s an idea simmering out there, still fringe, coaxed forward by a network of artists and hobbyists: it’s called “permacomputing” and it asks the question: what would computers look like if they were really engineered to last, on serious time scales?
You already know the answers! They’d use less power; they’d be hardy against the elements; they’d be repairable — that’s crucial — and they’d be comprehensible. The whole stack, from the hardware to the boot loader to the OS (if there is one) to the application, would be something that a person could hold in their head.
Basically every computer used to be like that, up until the 1980s or so; but permacomputing doesn’t mean we have to go backwards. The permacomputers of the future could be totally sophisticated, super fast; they could use all the tricks that engineers and programmers have learned in the decades since the Altair 8800. They would just deploy them toward different ends.
As a concrete-ish example, I think this project from Alexander Mordvintsev is lovely, and totally permacomputing:
Alexander is the discoverer, in 2015, of the “DeepDream” technique, an early — now iconic — fountain of AI-generated art. You have surely seen examples: images that boil with strange details; whorls of eyeballs where eyeballs should not whorl.
Earlier this year, Alexander released a stripped-down implementation of DeepDream written in a vintage dialect of C, his code carefully commented. This version runs on a CPU, not a GPU. It will do so very slowly. Who cares? It will do its thing eventually, even on the humblest hardware. You could run Alexander’s deepdream.c on a Raspberry Pi. You could probably run it on a smart refrigerator.
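In the same dependency-free spirit (and emphatically not Alexander’s actual code), here is the operation at the heart of any convolutional network, written as plain CPU-bound Python loops:

```python
# A 2D convolution (cross-correlation, as deep-learning frameworks
# conventionally compute it) over nested lists. Illustrative sketch:
# no libraries, no GPU -- slow, but it runs anywhere Python runs.

def conv2d(image, kernel):
    """Valid-mode convolution of a 2D image with a 2D kernel."""
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for y in range(ih - kh + 1):
        row = []
        for x in range(iw - kw + 1):
            acc = 0.0
            for ky in range(kh):
                for kx in range(kw):
                    acc += image[y + ky][x + kx] * kernel[ky][kx]
            row.append(acc)
        out.append(row)
    return out
```

It will crawl on a large image, but, like deepdream.c, it asks nothing of its host beyond a working interpreter.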
Alexander’s implementation does depend on a single pre-trained model file, produced at (then-)great expense by many computers with very fast GPUs. I find this totally evocative: it’s easy to imagine future permacomputers that rely, for some of their functions, on artifacts from a time before permacomputing. It would be impossible, or at least forbiddingly difficult, to produce new model files, so the old ones would be ferried around like precious grimoires …
(For the record, I already feel this way about some ML model files: whenever I find one that’s interesting or useful, I diligently save my own copy.)
This is one of those ideas where, even if it turns out you’ll never need a permacomputer, you’ll be glad you thought about one. Powerful forces are pushing computing toward vast, brittle systems that devour energy and are incomprehensible even to their own makers; I should know, because I am a small constituent part of these forces. Given such pressure, even a faint countervailing wind is precious.
The sailing/computing duo called Hundred Rabbits are pilgrim-poets of permacomputing. Their Uxn project is a simple 8-bit computer design that can be built or emulated in a variety of ways, including on old, recycled hardware.
Of Uxn, they write:
With only 64kb of memory, it will never run Chrome, TensorFlow or a blockchain. It sucks at doing most modern computing, but it’s also sort of the point. It’s more about finding what new things could be made in such a small system.
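To give a flavor of how small such a system can be, here is a toy stack machine in Python with 64kb of memory and a handful of opcodes. This is a sketch of the genre, not Uxn’s actual design or instruction set:

```python
# Toy stack machine in the spirit of tiny-computer projects like Uxn
# (illustrative; Uxn's real opcodes and memory model differ).
MEM_SIZE = 64 * 1024  # 64kb of memory, the whole machine's world

def run(program):
    """Execute a list of (op, arg) pairs; return final memory and stack."""
    memory = bytearray(MEM_SIZE)
    stack = []
    pc = 0
    while pc < len(program):
        op, arg = program[pc]
        if op == "push":               # push an 8-bit constant
            stack.append(arg & 0xFF)
        elif op == "add":              # pop two, push their 8-bit sum
            b, a = stack.pop(), stack.pop()
            stack.append((a + b) & 0xFF)
        elif op == "store":            # pop value, store at address arg
            memory[arg] = stack.pop()
        elif op == "load":             # push the byte at address arg
            stack.append(memory[arg])
        elif op == "halt":
            break
        pc += 1
    return memory, stack
```

The whole machine fits in your head, which is the point: hardware to application, one person can hold the entire stack.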
Where does this leave us? I’m perfectly comfortable in the both/and. I accept the invitation of the seamless slab, and I benefit daily from the ability to solve problems with scraps of code pinned in abstract space. I am, at the same time, certain those scraps will be blown away before the decade is over; maybe just by the leviathan’s restlessness, or maybe by something more dire.
I’d like a permacomputer of my own.
Sent to the Media Lab committee in October 2021
This is a newsletter established by Robin Sloan, a novelist and media inventor.