
VMware Fusion Tech Preview 20H1: Introducing Project Nautilus


It’s Tech Preview time, and this year we’re doing things a bit differently. Let’s dive in!

New Decade, New Approach to “Beta”

Here on the Fusion team, we want to get features into the hands of customers faster than ever before, and we want to iterate and refine them with the guidance of our users, as transparently and out in the open as possible.

In that vein, for the Fusion Pro Tech Preview 2020 we’re doing things a bit differently than we have in previous years.

This year, on an ongoing basis, we'll be releasing multiple updates to our Tech Preview branches, similar to how we update things in the main generally available branch. The first release is available now, and we're calling it '20H1'.

What this means is that if you have Tech Preview 20H1 (TP20H1, as we lovingly call it…) installed, it will get updates throughout the year as we improve the quality of the release.

We're also moving our documentation and other resources over to GitHub. We'll continue to add more to the org and repos there, maintain and curate them, and host code and code examples that we're able to open source.

Having our docs and other resources on GitHub lets users provide feedback and file issues against both the docs and the products themselves. We will continue to post updates and encourage discussion in the community forum, while GitHub becomes more of a place where we can refer to the 'latest source of truth', and where folks can file (and even track) more 'official' bugs.

We encourage folks to file issues on GitHub, as well as fork and make changes to the repos there if you believe there’s a better way or if we’re missing something.

Same as always, the Tech Preview builds are free for use and do not require a purchased license, but they come with no guarantees of support and things might behave unexpectedly. But hey, that’s where the fun is, right?

Okay, let’s talk about features…

First, we did some cool USB work! We've opted into using Apple's native USB stack, enabling us to remove one of our root-level kernel extensions. Try out your devices and let us know if they have any trouble by filing an issue in this GitHub repo: Fusion GitHub usb-support

In Fusion Tech Preview 20H1, however, our main focus is the initial release of an internal project we've been calling 'Project Nautilus'. We've been working on this for almost 2 years, so I'm extremely pleased to say that it's finally available to the public to use, for free, as part of TP20H1.

What is Project Nautilus?

Project Nautilus enables Fusion to run OCI-compliant containers on the Mac in a different way than folks might be used to. Our initial release can run containers, but as we grow we're working towards being able to declare full Kubernetes clusters on the desktop.

By leveraging innovations we're making in Project Pacific, and a bevy of incredible open source projects such as runc, containerd, CRI-O, Kubernetes and more, we're aiming to make containers first-class citizens in both Fusion and Workstation, right beside virtual machines.

The user experience is currently command-line oriented: we've introduced a new tool, vctl, for controlling containers and the necessary system services in VMware Fusion and Workstation.

Containers on the desktop today

Today, when you have, say, Docker for Mac installed, its services start, it creates a special Linux virtual machine (in one of several ways, including using Fusion), and it essentially maps all of the 'docker' commands back to the kernel running in that Linux VM. (Remember that the docker CLI is just a front end to the Docker daemon, which hands work off to containerd, which in turn drives runc, which uses the Linux kernel's 'cgroups' feature to isolate processes, i.e. the 'container' part of the container.)

So that bulky VM sits there running, waiting for your docker commands, and runs all your containers within it.

Each running container becomes part of the Docker private network, and you forward ports to localhost to expose your service.
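For comparison, here's a minimal sketch of that workflow with the Docker CLI; the image name and port numbers are just illustrative:

# Publish container port 80 on localhost:8080, then reach the service
# through the forwarded port on the host.
docker run -d --name demo-web -p 8080:80 nginx
curl http://localhost:8080/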

In Fusion with Project Nautilus, we’ve taken a different approach.

Nautilus is different

The vision for Nautilus: A single development platform on the desktop that can bring together VMs, Containers and Kubernetes clusters, for building and testing modern applications.

With Nautilus, leveraging what we built for vSphere and Project Pacific, we’ve created a very special, ultra-lightweight virtual machine-like process for isolating the container host kernel from the Host system. We call that process a PodVM or a ‘Native Pod’.

Each container gets its own Pod, and each Pod gets its own IP address from a custom VMnet, which can be easily seen when inspecting the container's details after it launches.

This means we can easily consume running services without having to deal with port forwarding back to localhost.

It also means that while today we deploy the container image in a pod on a custom vmnet, we could conceivably change that to a bridged network. You could start a container, the pod would get an IP from the LAN, and you could then immediately share that IP with anyone else on the LAN to consume that service, without port forwarding.

Of course with custom vmnets we can configure port forwarding, and we’ll also be exposing more functionality there as we grow the Nautilus toolkit.

One of our goals is to bring to bear a new model for designing much more complex deployments. We see a future where we can define, within a single file, a multi-container + VM + Kubernetes cluster deployment, allowing users to accelerate their application modernization.

Nautilus Today

Today Nautilus is controlled by ‘vctl’, and that binary is added to your $PATH when Fusion TP 20H1 is installed.

Let’s look at the vctl default output:

mike@OctoBook >_ vctl

vctl - A CLI tool for Project Nautilus Container Engine powered by VMware Fusion

Feature Highlights:
  • Native container runtime on macOS.
  • Pull and push container images between remote registries & local macOS storage.
  • Run containers within purpose-built linux-based virtual machines (CRX VM).
  • 1-step shell access into virtual machine debug environment. See 'vctl sh'.
  • Guide for quick access to & execution in container-hosting virtual machine available in 'vctl describe'.

USAGE:
  vctl COMMAND [options]

COMMANDS:
  delete     Delete images or containers.
  describe   Show details of containers.
  exec       Execute a command within containers or virtual machines.
  get        List images or containers.
  help       Help about any command
  pull       Pull images from remote location.
  push       Push images to remote location.
  run        Run containers from images.
  sh         Shell into container-hosting virtual machines.
  start      Start containers.
  stop       Stop containers.
  system     Manage Nautilus Container Engine.
  tag        Create tag images that refer to the source ones.
  version    Prints the version of vctl

Run 'vctl COMMAND --help' for more information on a command.

OPTIONS:
  -h, --help   help for vctl

You can see we're off to a good start; there's a lot we can do already. We also have many aliases in place. Most commonly you'll use 'ls' for 'get', 'i' for 'image' and 'c' for 'container'.
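As a quick illustration of those aliases (the 'get' forms below are inferred from the 'ls i' and 'ls c' examples later in this post, so treat them as an assumption), these pairs should list the same things once the engine is running:

# 'ls' is an alias for 'get'; 'i' and 'c' are shorthand for images and containers.
vctl get i
vctl ls i
vctl get c
vctl ls c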

As a quick example, to run our first container we first need to start the services.

mike@OctoBook >_ vctl system start
Preparing storage...
Container storage has been prepared successfully under /Users/mike/.nautilus/storage
Preparing container network, you may be prompted to input password for administrative operations...
Password:
Container network has been prepared successfully using vmnet: vmnet12
Launching container runtime...
Container runtime has been started.
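(For later: when you're finished experimenting, the same 'system' subcommand manages shutting the engine back down. Only 'start' is shown in this post, so the 'stop' counterpart below is an assumption; check 'vctl system --help' on your build.)

# Assumed counterpart to 'vctl system start': stop the Nautilus container
# engine and its supporting services when you're done.
vctl system stop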

Once the system is prepared and started, we can pull an image:

Note that we're providing a full URL to the image hosted on Docker Hub, but we could just as easily point that at a private Harbor instance or some other OCI-compliant registry. In these examples I'm referring to the full path as the image name, but you could 'tag' it and just refer to the tag for simplicity's sake (see the sketch after the image listing below).

mike@OctoBook >_ vctl pull image docker.io/mikeroysoft/mrs-hugo:dev
─── ────── ────────
REF STATUS PROGRESS
─── ────── ────────
manifest-sha256:83cd5b529a63b746018d33384b1289f724b10bb624279f444c23a00fd36e3565 Done 100% (951/951)
layer-sha256:c94289816e8009241879a23ec168af2d9189260423f846695538c320c8b99ea7 Done 100% (17575762/17575762)
layer-sha256:9d48c3bd43c520dc2784e868a780e976b207cbf493eaff8c6596eb871cbd9609 Done 100% (2789669/2789669)
layer-sha256:b6dac14ba0a98b1118a92bc36f67413ba09adb2f1bb79a9030ed25329f428c1f Done 100% (5876538/5876538)
config-sha256:cb657649e42335e58df4c02d7753f5c53b6e92837b0486e9ec14f6e8feb69b61 Done 100% (7396/7396)
INFO Unpacking docker.io/mikeroysoft/mrs-hugo:dev...
INFO done

Now we have the image in our local inventory:

mike@OctoBook >_ vctl ls i
──── ───────────── ────
NAME CREATION TIME SIZE
──── ───────────── ────
docker.io/mikeroysoft/mrs-hugo:dev 2020-01-19T17:46:09-08:00 25.0 MiB

Cool, there’s my image (you can see it live at https://mikeroysoft.com!).
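Since the image is now in local storage, this is where the 'tag' command from the help output could come in. The argument order below is an assumption based on how image tagging usually works, so verify it with 'vctl tag --help':

# Assumed usage: create a shorter local tag that refers to the full image path,
# so later commands can use 'mrs-hugo:dev' instead of the full registry URL.
vctl tag docker.io/mikeroysoft/mrs-hugo:dev mrs-hugo:dev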

Let’s start it up!

mike@OctoBook >_ vctl run container my-www --image=docker.io/mikeroysoft/mrs-hugo:dev -d
INFO container my-www started and detached from current session

mike@OctoBook >_ vctl ls c
──── ───── ─────── ── ───── ────── ─────────────
NAME IMAGE COMMAND IP PORTS STATUS CREATION TIME
──── ───── ─────── ── ───── ────── ─────────────
my-www docker.io/mikeroysoft/mrs-hugo:dev nginx -g daemon off; 172.16.223.128 running 2020-01-19T17:58:33-08:00

You can see that the container 'my-www' is running, based on the mrs-hugo:dev image in its fully-pathed form.

You can see the command being run, and most interestingly you have an IP address.
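If you want more detail than the table gives you, the help output above points at 'vctl describe'. The argument form below mirrors the 'run container NAME' syntax used earlier and is an assumption, so check 'vctl describe --help':

# Assumed syntax, following the 'vctl run container NAME' pattern above:
# show the container's details, including the IP its Pod got from the custom vmnet.
vctl describe container my-www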

Opening that IP up in a browser yields whatever is running in the container. In my case it's nginx serving up some static content on port 80. No port mapping necessary.
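In other words, you can hit the pod's IP directly from the Mac; the address below is the one from my 'vctl ls c' output above, so substitute whatever yours shows:

# Fetch the nginx-served content straight from the container's IP,
# with no port forwarding back to localhost.
curl http://172.16.223.128/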

I won’t go into much further detail in this post, but in the coming days and weeks we will be doing a series of posts and additions to the GitHub repository to explore using all of the capabilities we’ve been able to deliver as part of Nautilus.

Nautilus Tomorrow: Let’s get there together

This is only the first iteration, and we're making a great effort to ensure that we can iterate quickly. This means not only listening better and hearing more from our users, but also tracking issues more transparently and holding ourselves accountable for delivering fixes and improvements in a timely manner.

We see a not-so-distant future where we can define complex multi-VM + container + Kubernetes cluster setups locally on the desktop using a standard markup, and share that quickly and easily with others, even if they're using Windows.

So there you have it… time to go get started!

Direct Download

VMware Fusion on GitHub

