
Beyond The Web: Reassembly of the Internet


2022-04-19 — Written by Dominic Tarr

We believe that human dignity and happiness require autonomy, and the more the better. Autonomy is the ability to solve the problems that affect you. Overcoming a problem is emotionally rewarding in the first place, but in making an improvement you have also increased your ability to overcome future problems, so the rewards compound. If you can directly solve a problem, you have autonomy. If you can negotiate, bargain, campaign, or persuade someone else into solving your problem, that also counts as autonomy. If you can only complain and be ignored, or worse, if the person who could fix the problem doesn't even know it exists, you don't have autonomy. You are frustrated. Frustration is the opposite of autonomy.

Modern technology has achieved incredible things, but it has, for the most part, decreased autonomy. Because modern technology depends on systematic processes and supply chains of incredible scale, when problems arise they may originate a very long distance from you.

Some thinkers agree that autonomy is important and that modern technology decreases it, and conclude that the only solution is to abolish modern technology and return to hunting and gathering, or some variation on that. We do not agree with those thinkers.

Autonomy Enhancing Technology

We do not agree, because we recognise that there are cases where modern technology increases autonomy. These cases are the exception rather than the rule, but they offer a ray of hope. One of them is the way the internet has dramatically increased our ability to spread skills and knowledge. Acquiring skills and knowledge always increases autonomy, and the ability to find out how to do something just by reading a few articles or watching a YouTube video is amazing.

Another ray of hope is open source. Once you have programming skills (acquirable via the many open resources on the internet), you not only get huge amounts of given-away code that you don't have to write, but if you have a problem with it, you can probably improve it, and when you are really lucky you can make a difference just by suggesting an improvement. Amazingly, open source has become the dominant way that the best software development tools are created. Software developers know this is the best way to run things; it has already won. The consumer (read: autonomy-starved) market has not all caught on yet, but software developers already just do it. This is remarkable because open source is the core of modern infrastructure, and also because it is economic dark matter that cannot be explained if you only measure value in dollars. Open source also has the very valuable quality of being an autonomy multiplier: because all this free and open code exists, you have more resources to learn from and more tools with which to solve problems.

These two examples are not enough to save us from being herded around by the rest of modern technology, but they are enough of a foundation to argue that autonomy-increasing technology is possible.

Therefore, we must research, experiment, explore, and design more autonomy-increasing technology. This includes creating alternatives to, or rethinking, existing autonomy-robbing technology.

Autonomy Robbing Technology

The web browser is one example of an autonomy-robbing technology, although it increases autonomy in some ways. It enabled increased autonomy via distributed learning, and it created a set of tools with which a developer can build websites and web applications. However, in the 2000s certain web applications grew to such a degree that the web's inherent autonomy-robbing features became apparent. It is not just one web application but many, known collectively as "web 2.0". In web 1.0, people made websites directly. In web 2.0, platforms provided slick tools to make it easy for "users" to "create content"; web 2.0 is driven by "user-created content". This is mildly autonomy-growing, because these users are able to share their thoughts and grow their audience, but they don't really have control over the platform they use. The platform uses manipulative designs to distract viewers into spending more time on the platform, which is monetized by showing them ads. These manipulative designs are known as "dark patterns".

For example, instead of simply showing users the content they have explicitly asked for, the platform inserts "suggestions" into side panels, between articles, or even before they have scrolled down to the actual content! Users may regain some of their autonomy using ad blockers, but the dark patterns usually remain.

This is enabled by a browser feature called the "Same Origin Policy", which means that a website can only make requests back to the server it came from. As a result, the platform has total control over how it is presented. It is not possible for a third party to create a new interface, so users have to take it or leave it. This is disappointing not just because of dark patterns: even very successful platforms often have clunky interfaces, and you cannot simply choose one that suits you better. Not to mention that this blocks third parties from creating innovative new features that enhance autonomy. If you have any kind of problem with a platform's website, you are limited to complaining about it, and probably being ignored. You cannot create another interface, or choose an alternative that someone else has made.
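
To make the restriction concrete, here is a minimal sketch of what a third-party interface would try to do, and why the Same Origin Policy stops it. The domains "alt-ui.example" and "platform.example" and the API path are hypothetical, not any real service.

```typescript
// Hypothetical alternative client for "platform.example", served from
// "alt-ui.example". Under the Same Origin Policy the browser blocks this
// cross-origin request unless platform.example explicitly opts in via CORS,
// so a third-party interface cannot fetch the user's own data.
async function loadTimeline(): Promise<unknown> {
  const response = await fetch("https://platform.example/api/timeline", {
    credentials: "include", // send the user's session cookie
  });
  if (!response.ok) {
    throw new Error(`request failed: ${response.status}`);
  }
  return response.json();
}

loadTimeline().catch((err) => {
  // In practice this rejects ("Failed to fetch") because the platform does
  // not send Access-Control-Allow-Origin for alt-ui.example.
  console.error(err);
});
```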

Compared to the web, mobile app platforms rob autonomy from developers. Although the basic technology and capabilities are arguably better than the web's, apps must be distributed through an app store, which is controlled by the platform. Some app stores have an approval process, and app developers have to jump through hoops and then wait for approval. App stores also prevent developers from creating competing app stores.

Only a small niche of people would wish to create better app stores, or better interfaces to web 2.0 platforms, but the proportion of people who would benefit from having a choice of interface is significant.

Mobile platforms are also autonomy-robbing simply because they are not cross-platform. You have to develop software twice simply because people use different brands of phone, and again if someone wants to use it on a regular computer. That's just a silly waste of time.

Platforms are greedy for control, but they would be more successful if they were more generous. I believe a key reason the web was so successful is that it was so generous. It did not prevent anyone from publishing a website, nor did it prevent others from implementing a browser. If the web had been created by a corporation, they would never have allowed that. They would have kept as much control as possible, would not have permitted other implementations, and would likely have required approval for new websites. This would have been justified by claiming to protect users, children, and so on, but it would have stunted the growth of the web itself.

Not every reduction in autonomy happens because a selfish actor has constructed the situation so that they have more control and you have less. Autonomy can also be reduced through accidental disorganization. A messy workshop, where it is hard to find the tools you need, decreases autonomy because it's harder to solve your problems when you are wasting time looking for tools. The situation with the modern web is like that: there are too many different ways of doing essentially the same thing - sending programs to other computers and having them run. Not only are there two major mobile vendors, but if you want to run code on a desktop it's a completely different process. On a server, it's different again. On the web, different again. Yet these are all just computers. It shouldn't be so hard, and if it weren't, the barrier to creating applications would be lower. Making tools more accessible is autonomy-increasing.

In 2014 I began working on secure scuttlebutt (ssb), and I was soon joined by Paul Frazee. I had seen that Paul had a project named grimwire, which was about sandboxing user code so that a web page could be built from potentially many user-contributed portions. I thought that this was exactly what the p2p internet and ssb needed: I wanted to be able to distribute ssb apps over ssb. Paul joined the project, but unfortunately we never quite got around to building the sandboxing system.

Experiences Building p2p

When I started ssb, I wanted to demonstrate that you could make a viable modern application in a purely decentralized way. I had noticed that most applications were built around social features, which was fortunate, because those turned out to be quite straightforward to decentralize.

However, what the experience has shown me is that the missing technology is a good distribution mechanism. The web is good but not really suitable for p2p, because the origin has too much power, and browsers can make requests but can't receive them. App stores are better in that regard, but the origin still has too much power, and so does the app store itself. Downloading and running an application ends up being the most decentralized technique, but it's a huge hassle to distribute an app for Mac, Linux, and especially Windows - and that's before even thinking about mobile.

And it's not just about distributing applications for people to install and run, or deploying a server. There are many reasons you might wish to send programs around. A database query, for example, can be considered a program that is sent to the database. Many programs have plugin systems for extending them, and this makes those programs much more useful. Historically, security has often been a problem in these plugin systems, but they have still been very useful as long as it's reasonable to trust the developers. Usually a plugin system is implemented by exposing some scripting language, such as Python, JavaScript, Lua, or Lisp. Using an interpreted language means that the host process's memory is safely separated from the plugin, but if that language provides disk or network access there are still many nefarious things a plugin could do. To create effective security, it is necessary that subprograms can be restricted in arbitrary ways. For example, if a program doesn't have network access, it doesn't matter what files it can read; but if it can write to files, then maybe it can write to a file that gets uploaded by something else.
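
Here is a minimal sketch of that idea: instead of a plugin getting ambient disk and network access, it receives only the capabilities the host chooses to pass to it. Every name here (PluginCapabilities, wordCountPlugin, and so on) is hypothetical, invented for illustration rather than taken from any real plugin API.

```typescript
// Capability-style plugin loading: the plugin can only do what the objects
// handed to it allow - here, read one file and write to a log.

interface ReadOnlyFile {
  read(): Promise<string>;
}

interface PluginCapabilities {
  // Deliberately narrow: one readable file, no network, no writes.
  config: ReadOnlyFile;
  log(message: string): void;
}

type Plugin = (caps: PluginCapabilities) => Promise<void>;

// Hypothetical plugin: it never sees file handles or sockets directly.
const wordCountPlugin: Plugin = async (caps) => {
  const text = await caps.config.read();
  caps.log(`config contains ${text.split(/\s+/).length} words`);
};

async function runPlugin(plugin: Plugin): Promise<void> {
  const caps: PluginCapabilities = {
    config: { read: async () => "example config contents" },
    log: (message) => console.log(`[plugin] ${message}`),
  };
  await plugin(caps); // the plugin receives exactly these capabilities, no more
}

runPlugin(wordCountPlugin);
```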

Serverless and Serverfull

Recently, "serverless" programming has become quite popular. This doesn't really mean that there are no servers anymore, but at means that developers don't deploy entire operating systems and then manage long running programs that receive an manage databases and network connections. Instead they send short programs to a serverless platform that then respond to specific events (such as an incoming request) and do not store local state in-between requests. This is a significantly restricted set of capabilities, but these restrictions can push a system into a space where it becomes much easier to define what a program actually does, and thus to test it and make it work well. The capabilities needed here are quite application-specific. Shaders, which run on a GPU are analogous to this. They run very fast graphics code without network or file access, so they are very application specific and not a risk to your personal data.

Restricted Child Programs

Apps should be able to have their own child apps, which they may restrict to some subset of their own abilities. This would make it fully possible to create a browser-like or app-store-like system. Being able to run restricted programs is essential to give users the confidence to run arbitrary programs. The web depends on this, and so do mobile app platforms, but unfortunately neither exposes these abilities to the apps themselves, which means developers cannot innovate on permission management or create their own targeted subplatforms.

I think the correct approach is to have a mechanism for communication between processes/VMs (an application being composed of one or more VMs) and to use that same mechanism for communication with the network, the file system, child process creation, and so on. Other systems, such as Unix, have special built-in functions for accessing files, the network, and so on, called "system calls", but then expect processes to communicate with each other by writing streams of bytes. Every process gets access to essentially the same system calls, possibly restricted by somewhat inflexible file permissions. This has led to incredibly heavy forms of sandboxing, like emulating an entire operating system. If the preferred way for programs to communicate is via streams, then they should talk to the file system and network that way too. After all, the "file system" is actually just another program.
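
To make this concrete, here is a minimal sketch of a single message-passing interface used for everything: files, network, and child processes are all just services on the other end of a channel. The Channel and Message shapes and the service names are assumptions invented for illustration, not part of any existing system.

```typescript
// One uniform interface: every request names a service and a method, and the
// host routes it. Reading a file is not a special system call, just a request
// to the "fs" service over the ordinary channel.

interface Message {
  service: string; // e.g. "fs", "net", "spawn"
  method: string;  // e.g. "read", "connect"
  args: unknown[];
}

interface Channel {
  request(msg: Message): Promise<unknown>;
}

async function readFile(host: Channel, path: string): Promise<string> {
  return (await host.request({
    service: "fs",
    method: "read",
    args: [path],
  })) as string;
}
```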

This means that child VMs that you want to restrict can, if necessary, be proxied through another VM - the child doesn't need to know whether it's talking to the true system host or merely to a more authoritative VM. The child doesn't know and can't know. The only things "system calls" are then needed for are writing to and reading from streams.
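
Continuing the sketch above (and still using the hypothetical Channel and Message types), a parent can hand its child a channel that has the same shape as the real one but silently filters requests; the child cannot tell the difference.

```typescript
// A proxy channel that only permits read-only file system access. A child
// spawned with this channel can read files but cannot write, open sockets,
// or spawn further processes - and it has no way to detect the proxy.
function restrictToReadOnlyFs(host: Channel): Channel {
  return {
    async request(msg: Message): Promise<unknown> {
      const allowed = msg.service === "fs" && msg.method === "read";
      if (!allowed) {
        throw new Error(`denied: ${msg.service}.${msg.method}`);
      }
      return host.request(msg); // forward permitted requests unchanged
    },
  };
}
```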

A Realistic Plan

It is easy to describe the ideal system. But once something becomes as successful as the web, people do not choose to use it because it's the best technology; they use it because everyone else is using it. They may be quite aware that the technology is less than ideal, but many other considerations go into choosing a technology to build with. The dominant system is well understood by many people, so it's easy to hire developers, and users know what they are getting into. There is also likely to be an ecosystem of tools available to work with.

This means that someone seeking to create a totally new system must think very carefully. If the appeal is only that it does the same basic thing as the old one but with a cleaner design, it's unlikely to overcome the incumbent. It needs not only appealing new capabilities but also a graceful upgrade path. For example, the web came to dominate nearly everything we do with computers, but it started as just a program you installed on a standard operating system - the way software was distributed at the time.

There are many ways that software is distributed today. Dev-tools are installed via a package manager (or downloaded directly). Mobile apps are installed via app stores. Applications are pushed to servers, or deployed as serverless functions. And of course, opened as web pages. What is the lowest common denominator?

The Dominant Platform

The answer... is the web, the most dominant platform. Applications running on a desktop or mobile device can embed a WebView - a feature available on every operating system, mobile or desktop. Dev tools installed as executables can run anything, and that includes web tech. And of course, if you open it in a browser, that is the web. But if I earlier convinced you that the web is actually bad, why am I saying this now? We need to replace the web with something that improves autonomy - but we need to do it in a way that can supersede the web: something that installs via the web but can eventually reduce the web to irrelevance, like the web did to desktop operating systems in the 90s.

Luckily, there is a part of the web that is ideal for our goals - WebAssembly. WebAssembly is a relatively low-level VM that enables good security and good performance. The web itself has a rather messy permissions model, but WebAssembly has a clean one: a wasm instance knows nothing about the external world by default. There are no web APIs exposed to wasm except those explicitly passed to it from the host. Wasm doesn't even know what the time is unless you give it a clock. (Knowing the time might sound unimportant, but it enables a class of attacks known as "side channel" attacks.)
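
A small sketch of what that looks like from the host side: a wasm instance only ever sees what the host passes in through the import object. WebAssembly.instantiate is the standard API; the import names ("env", "log") and the exported "start" function are assumptions about a hypothetical module, not a fixed convention.

```typescript
// The guest module gets exactly one capability: a logging function. Anything
// not in `imports` simply does not exist for it - no fetch, no file system,
// not even a clock unless the host chooses to pass one.
async function runSandboxed(wasmBytes: BufferSource): Promise<void> {
  const imports = {
    env: {
      log: (value: number) => console.log("guest says:", value),
    },
  };
  const { instance } = await WebAssembly.instantiate(wasmBytes, imports);

  // Assumed export of the hypothetical module.
  const start = instance.exports.start as () => void;
  start();
}
```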

Because WebAssembly already runs on the dominant platform, it's already being targeted by a wide range of programming languages. So unlike the web, which must be coded in JavaScript, wasm programs can be written in almost anything.

Unfortunately, some aspects of embedding WebAssembly may be less than ideal. For example, at the current time it appears that messaging raw data into a WebView has unnecessary overhead compared to running directly - a problem for applications that are limited by performance. These could be applications that must run as fast as possible to be useful, but also applications that need to be as light as possible, such as those running on the smallest devices.

However, that problem is small enough that there is still a lot to be gained from a WebAssembly platform. WebAssembly does not have to run inside a WebView. There are many WebAssembly engines in development, and a WebAssembly-only platform could be implemented without using the web at all. You could then open an app via a web page, but you could also download a WebAssembly hosting application - just as you'd download a new web browser, except that this browser includes nothing but WebAssembly.

Because WebAssembly is both low-level and high-performance, it would even be possible to ship a conventional web browser as a WebAssembly application.

WebAssembly will eat its own tail. We need a platform that is simple and uniform across every kind of hardware, but sometimes we will need to emulate the old platforms. There will always be some programs that, for whatever reason, haven't been updated but are still in use. With WebAssembly, that is okay - instead of keeping the bloated old platforms around forever, we'll just emulate them within WebAssembly.

Thus, we can use this ability to step past the web and other legacy platforms, such as Linux, Windows, and macOS. We can create an elegant new system that builds upon the current dominant platform but can ultimately supersede it, without dragging it along.

Summary

We believe that autonomy is essential, and we seek to create autonomy-enhancing technology. In particular, we seek a uniform system to deploy software to any sort of device: desktop, mobile, or server. This system must enable access to the fundamental resources of the file system and the network, and the ability to run other processes. The abilities of this system must be generous: platforms must be generous to be successful. The web was relatively generous, but the Same Origin Policy gave websites too much power over users. Mobile platforms didn't displace the web because they were not generous enough; the platform retained too much control. We believe it's better to create a truly generous platform than to try to control a less successful one.

It's much harder to deploy software than it should be. We need a uniform way to deploy software to any sort of device!

It's essential that programs be given the ability to run other programs with restricted capabilities. Being able to restrict the capabilities of another program makes it possible to run untrusted programs, and this makes for a far more flexible system. To enable this, I propose a "service-oriented" pattern, where even access to fundamental system capabilities (file system, network, child processes) looks like communication between services.

It is not enough to describe the ideal system; we need a realistic plan for a transition from the currently dominant platform to a new one. The currently dominant platform is the web - but WebAssembly is part of the web and provides what is needed to create the system we describe.

To learn more, join our Discord and use the Operator Framework.

