
BEAM in the Browser with Lumen, Part 1: Motivations & Constraints

source link: https://tylerscript.dev/bringing-the-beam-to-webassembly-with-lumen

This article is the first in a short series aiming to - unofficially - transcribe and interpret Paul Schoenfelder's excellent talk introducing Lumen - an alternative BEAM implementation written in Rust.

My hope is that writing these might assist me and others with the goal of eventually contributing to the project.

Lumen is a new compiler and runtime for Erlang/Elixir being developed thanks to support from DockYard . It’s primarily built around Erlang, but supports Elixir and any other languages which compile to BEAM bytecode.

The central goal of the project is to bring these languages to the browser with all the functionality of the BEAM and the OTP standard library - at least the parts of it worth bringing - by targeting WebAssembly.

Lumen is not:

  • A new Elixir-like syntax on top of JavaScript.
  • An Elixir to JavaScript transpiler.
  • An effort to cross-compile the existing BEAM implementation to Wasm.
  • An effort to replace the existing BEAM.

Steady API, Cross-Pollination ⛰🐝

A common criticism of the client-side web ecosystem is that it is fractured, and constantly in flux. Much of this could be a symptom of its enormous reach and popularity; new patterns and features are being brought into JavaScript, the Browser, and DOM APIs all the time. Because of this, you could argue that you’re left with shaky ground to build applications on.

Looked at another way, if you’re using Elixir or Erlang on the back-end but JavaScript on the front-end, it’s pretty likely the only reason you’re using JavaScript is that it’s the de-facto language of the web. Now that Wasm has become so widely supported, that begins to change. Given the choice, many would choose to invest in just one language ecosystem if it could effectively support both environments. Organizational benefits like code re-use and ease of end-to-end knowledge sharing are difficult to ignore.

The Actor Model of the BEAM actually meshes very well with patterns we see in component-based user-interface libraries like React. Components in a thoughtfully designed React application are essentially tightly focused state machines in a tree, each responsible for just one part of our application. Except that, without significant effort, when just one of them encounters an error, it often crashes the rest.

What if we took the same application but implemented it as an Elixir/Erlang based front-end? Now our components could exist in a supervision tree, each operating concurrently and able to crash or fail, then recover - all without affecting the rest of the application.

If we can get the BEAM running in the browser, we also get the patterns and OTP tooling that come with it. Imagine implementing a client application using Erlang Term Storage - a miniature database key to many common BEAM patterns - or spinning up Observer to analyze the memory consumption and performance of your browser application, all alongside a rich standard library.

A Few Things to Know About WebAssembly

Before getting into any technical details on Lumen, there are a few things you should know about Wasm. Skip to the next section if you’re already familiar.

What is WebAssembly?

WebAssembly (abbreviated Wasm) is a binary instruction format for a stack-based virtual machine. Wasm is designed as a portable target for compilation of high-level languages like C/C++/Rust, enabling deployment on the web for client and server applications.

You can think of it like assembly language, but slightly higher-level. This is what it looks like:

;; Simple add/2 function, named and exported so the host can call it
(module
  (func $add (param $lhs i32) (param $rhs i32) (result i32)
    local.get $lhs
    local.get $rhs
    i32.add)
  (export "add" (func $add)))
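
As a rough sketch of how this module could be used from JavaScript once assembled to a binary (the file name add.wasm here is an assumption for illustration):

// Fetch, compile, and instantiate the module above, then call its exported add function.
// The file name add.wasm is assumed for this example.
WebAssembly.instantiateStreaming(fetch('add.wasm'))
  .then(({ instance }) => {
    console.log(instance.exports.add(2, 3)); // logs 5
  });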

A Harvard Architecture 🎓


WebAssembly is built upon a Harvard architecture. In an x86 machine, based on the more common Von Neumann architecture, code and data live in the same memory address space, which means it’s simple to take the address of some code, jump to it, and start executing. Under a Harvard architecture, code and data address spaces are separate, so you can’t take the address of some code and just call it. The equivalent of a function pointer is actually an index into a jump table of functions that the Wasm runtime knows about. To call a function, you pass its index and arguments through this table so the runtime can check whether it exists - doing nothing if it’s not found rather than jumping into invalid memory.
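
From JavaScript, that table is visible through the WebAssembly.Table API. Here’s a minimal sketch, assuming a module that exports its function table as table; the wasmInstance variable and the index used are purely illustrative:

// Hypothetical sketch: look up a function by its index in an exported table.
const table = wasmInstance.exports.table; // a WebAssembly.Table of function references
const fn = table.get(0);                  // the element at index 0, or null if the slot is empty
if (fn !== null) {
  fn();                                   // only call it if a function is actually present there
}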

Structured Control Flow 🚰

goto explanation;
explanation:

Wasm operates under something called structured control flow instead of using control flow graphs. The difference is that control flow graphs allow multiple entry points into the same piece of code, whereas structured control flow dictates that everything has exactly one entry point. This means that, by design, Wasm has no arbitrary jumps like goto; instead it provides structured control flow constructs like if/else.

On one hand, this is a very good thing - irreducible loops cannot be expressed in Wasm, and attacks that rely on arbitrary jumps in assembly are impossible. On the other, it adds complexity to our implementation when it comes to expressing any BEAM behaviors that rely on goto-style jumps.

Talking to WebAssembly 📞

let memory = new WebAssembly.Memory({initial:10, maximum:100});

Passing data between Wasm and JavaScript modules is not exactly simple: Currently, we can only pass integer values across this barrier. It sounds incredibly difficult to do anything useful under a constraint like this. Essentially, the only way to pass objects in and out is to convert them by sharing a pointer into the linear memory of the Wasm module, and parsing what’s in that memory. This limitation has a large impact on FFI and interoperability with the JavaScript & DOM APIs.
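
For instance, passing a string into a module typically means encoding it to bytes in JavaScript, copying those bytes into the module’s linear memory, and handing over only a pointer and a length. A rough sketch, where the alloc export is hypothetical:

// Hypothetical sketch: pass a string to a Wasm module through its linear memory.
// Assumes the module exports its memory as "memory" and an allocator as "alloc".
function passString(instance, text) {
  const bytes = new TextEncoder().encode(text);      // UTF-8 encode on the JavaScript side
  const ptr = instance.exports.alloc(bytes.length);  // ask the module for space (hypothetical export)
  new Uint8Array(instance.exports.memory.buffer).set(bytes, ptr); // copy into linear memory
  return { ptr, len: bytes.length };                 // only integers cross the boundary
}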

How Do We Bring Elixir/Erlang to This Environment?

To compile to Wasm, we must account for a few major constraints:

Code Size & Load Time ⏳

When you’re building an API that lives on a server, this is easy to overlook. But consider that we have to deliver a virtual machine, plus the code to make it useful, to every client loading a Lumen application. Those clients also have to compile all that code before it can even run. If we can achieve a small code size, we can also deliver the fast load times users expect in the web environment.

Wasm Concurrency Model 🐙

let wasmWorker = new Worker('worker.js');

Wasm achieves concurrency by using Web Workers. When you spawn a web worker, it runs the code in a named JavaScript file separately from the calling window process. This makes it very different from a traditional server environment: Web Workers behave like separate processes in an operating system rather than threads within a process. As a result, actor-to-actor message passing is currently non-trivial because workers don’t share memory. There are some efforts underway to work around this, but the threading model in Wasm is still evolving.
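
As a sketch of what that looks like today, data exchanged between the window and a worker is copied via postMessage rather than shared:

// main.js - spawn a worker and exchange messages with it.
const worker = new Worker('worker.js');
worker.onmessage = (event) => console.log('from worker:', event.data);
worker.postMessage({ hello: 'world' }); // the data is structured-cloned, not shared

// worker.js - receive a message and echo it back to the window.
onmessage = (event) => postMessage(event.data);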

JavaScript/DOM Interop

WebAssembly.instantiateStreaming(fetch('lumen.wasm'), importObject)
.then(results => { ... });

We also have to appropriately accommodate JavaScript in the runtime.
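
The importObject in the snippet above is the hook for this: it’s where JavaScript functions are handed to the module as imports. A minimal sketch, where the namespace, function names, and exported start function are illustrative only:

// Hypothetical sketch: JavaScript functions the Wasm module can import and call.
// The "env" namespace, js_log, and the start export are illustrative, not Lumen's actual API.
const importObject = {
  env: {
    js_log: (value) => console.log('from wasm:', value),
  },
};

WebAssembly.instantiateStreaming(fetch('lumen.wasm'), importObject)
  .then(({ instance }) => instance.exports.start());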

Async Functions

In Lumen’s scheduler, which is very much like the BEAM scheduler, we have to represent JavaScript async functions as separate from Erlang/Elixir closures. This is because they’re garbage collected by the JavaScript runtime, not by ours, and so they must be managed separately.

Events 📩

Events we receive from JavaScript and the DOM are surfaced as messages to processes in Erlang/Elixir. So rather than registering a callback to be fired when an event comes in, you’d receive a message just as you would in a typical Erlang/Elixir application. This is central to making a usually server-bound language feel at home in the browser environment, and to ensuring consistent behavior whether you deliver to the client or the server.
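
On the JavaScript side, the glue could be as simple as forwarding DOM events into the module so the runtime can deliver them to the right process’s mailbox; the notify_event export and its integer arguments here are purely illustrative:

// Hypothetical sketch: forward DOM events into the Wasm runtime, which turns them
// into messages for the Erlang/Elixir process that registered interest in them.
document.addEventListener('click', (event) => {
  wasmInstance.exports.notify_event(1 /* hypothetical event code */, event.clientX, event.clientY);
});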

Because of the "all integers, all the time" limitation on how we pass data back and forth, we need to handle translation between Erlang terms and JavaScript values every time we cross that barrier. This is different from FFI in a traditional server environment, where we can pass things directly via the erl_nif API. So we have to have translation routines on both the JavaScript side and the Wasm side to make this work.

So Why Not Use the BEAM?

Why don’t we make the existing BEAM implementation work for the web, instead of building something new from the ground up?

Runtime APIs 🏃‍♀️

Most of the APIs the BEAM expects to be available just aren’t present in a Wasm environment. Virtually everything in the runtime depends on system APIs - even memory allocation.

An Incompatible Scheduler

You could argue that those APIs could be shimmed in or worked around, but here’s a bigger problem: because we have to do things like treat JS async functions as a separate resource from Erlang closures, the current BEAM scheduler would need to be almost completely rewritten to accommodate this anyway.

Shipping BEAM Bytecode Is Expensive 💸

The BEAM is bulky. The full dependency tree of your average Elixir/Erlang application gets to be in the tens of megabytes - this is completely non-viable on the web. This is because every module in the dependency tree has to be included in the final build to accommodate two major things:

Hot Code Reloading

OTP allows hot code reloading, meaning that at any point we might need to call code in the dependency tree which isn’t currently being called in the source.

apply/3

The apply/3 function allows you to call any function completely dynamically at runtime.

These constraints ultimately mean dead code elimination has very little room to reduce our code size.

Performance Considerations

Finally, there’s the issue of running a virtual machine on top of another virtual machine. The browser is actually pretty clever about generating native code from Wasm and JavaScript, but if you’re executing BEAM bytecode on top of another virtual machine, the browser can’t effectively reason about it beyond seeing the central core loop that’s executing. This limits our options when applying optimizations to our code. We really don’t want our implementation to come with that kind of computational performance ceiling in place from the start.

All things considered, there’s a pretty compelling case for pursuing an alternative implementation better suited to the requirements of the web.

