
Bridging the edge computing gap


Full tool reusability

Edge computing projects usually end up with IT organizations. By default, IT organizations take a bottom-up approach to infrastructure: they start by looking at what hardware is appropriate for the edge, then move on to the infrastructure layer, including the appropriate hypervisor and operating system. The final step is to consider a solution for managing the applications at scale across many different locations.

Generally, enterprises are handing more decision power to application teams, but there is a gap between the infrastructure and platform builders on one side and the application teams' tools and processes on the other. Any new platform needs to be built with a focus on supporting application teams, their existing tools, and their ways of working. And application teams have largely built their tools and processes guided by experience from the central clouds. Avassa's fundamental task and vision is therefore to bridge between the edge infrastructure and the tools and processes already in place in application teams.

Application teams live, dream, and think about their applications all day, every day, but they don't necessarily think about the platform or infrastructure. At Avassa, we wanted to take an application-centric worldview: an edge system must delight application teams, giving them the automation they love from the cloud, with the same ergonomics and feel.

Previous edge systems required a completely separate operational stack. In the worst case, enterprises were stuck with two application organizations: one managing the central applications and another managing the distributed applications. Of course, that is not likely to last. Some applications will run centrally, some will run in the distributed domain, and application teams will have a set of core operations tools that must work with both.

We took a first-principles approach and asked: how do we build a platform that offers application teams full reusability of the tools they already have? Some teams are invested in tools like GitLab or GitHub for deployment, and Grafana or Splunk for application observability. Others are fully invested in tooling provided by the cloud giants. Any mix of such solutions must be able to integrate easily with the edge environment, provide full reuse, and not require separate organizations or stacks. By bridging the tooling gap between the edge and application teams, those teams can continue to do the same stellar work, just at the edge.

Many applications have two components: one runs centrally, while the other runs in multiple locations, yet the two should be treated as a single application. Teams should be able to deploy, upgrade, patch, and monitor both from the same tool they already have in place. The extended ability to manage applications should be seamless and low-effort, without compromising security.

Components for the edge

There are two components to the Avassa platform. The first is a centralized control plane called the Control Tower. Most users consume it as a service, starting with a free trial instance available through the Avassa website (avassa.io). The Control Tower hosts all user interfaces, including a web UI, a REST API, and a command-line interface.

Users then install the second piece of software on their edge hosts: an agent called the Edge Enforcer. The Edge Enforcer is a containerized application that installs on top of any Linux distribution and container runtime.

A lot of effort has gone into making installation and the component lifecycle streamlined and lightweight. Each Edge Enforcer calls home to a Control Tower instance, where it authenticates and is authorized. Once that process is complete, users can start deploying applications onto the host on which the Edge Enforcer is running. To build an edge cloud, users install Edge Enforcers on all the hosts they want to deploy on, and those hosts turn up in the Control Tower ready to receive scheduling and configuration operations.
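To make this concrete, here is a minimal sketch of what driving a Control Tower-style REST API programmatically could look like. The endpoint paths, payload fields, and token handling are hypothetical assumptions for this sketch, not Avassa's documented API.

```python
# Illustrative only: endpoint paths, payload fields, and token handling
# below are hypothetical assumptions, not Avassa's documented API.
import requests

CONTROL_TOWER = "https://controltower.example.com"  # hypothetical URL
HEADERS = {"Authorization": "Bearer example-api-token"}  # hypothetical token

# List edge hosts that have called home and been authorized.
resp = requests.get(f"{CONTROL_TOWER}/v1/hosts", headers=HEADERS, timeout=10)
resp.raise_for_status()
for host in resp.json():  # assuming the response is a JSON list of hosts
    print(host["name"], host["state"])

# Deploy a containerized application to every site matching a label.
deployment = {
    "name": "sensor-collector",
    "image": "registry.example.com/sensor-collector:1.4.2",
    "placement": {"match-site-labels": "region=eu-north"},
}
resp = requests.post(f"{CONTROL_TOWER}/v1/deployments",
                     json=deployment, headers=HEADERS, timeout=10)
resp.raise_for_status()
```

The same operations are available through the web UI and the command-line interface; the API matters because it is what lets existing CI/CD tooling drive the edge directly.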

To meet application teams’ expectations, there’s more to the system than just starting and stopping containerized applications. There are three knockout requirements:

The first is to provide some means of event logging. You might be used to having Kafka or Pulsar available in cloud environments, but event log streaming is different at the edge. Most teams do not want to pull a heavyweight logging framework into each of these locations; they expect event logging to be part of the infrastructure. The Edge Enforcer therefore provides a publish-subscribe, topic-based event log streaming API locally on each site.

If there is an upstream outage, applications can keep logging locally until the host runs out of disk space or the connection back to the Control Tower is restored.
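The sketch below illustrates the pattern: a site-local, topic-based event log whose records are appended to local disk, so they survive upstream outages and can be replayed later. Paths and function names are hypothetical; this is not the Edge Enforcer's actual API.

```python
# A minimal sketch of a site-local, topic-based event log. Records are
# appended to local disk so they survive upstream outages and can be
# replayed later. Paths and names are hypothetical.
import json
import time
from pathlib import Path

LOG_DIR = Path("./edge-logs")  # hypothetical site-local log directory

def publish(topic: str, payload: dict) -> None:
    """Append one event to the per-topic, append-only local log."""
    LOG_DIR.mkdir(parents=True, exist_ok=True)
    record = {"ts": time.time(), "payload": payload}
    with open(LOG_DIR / f"{topic}.log", "a") as f:
        f.write(json.dumps(record) + "\n")

def subscribe(topic: str, from_offset: int = 0):
    """Replay events from a topic, starting at a stored byte offset."""
    with open(LOG_DIR / f"{topic}.log") as f:
        f.seek(from_offset)
        for line in f:
            yield json.loads(line)

publish("pos-transactions", {"store": "042", "amount": 19.90})
for event in subscribe("pos-transactions"):
    print(event["ts"], event["payload"])
```

Because subscribers track their own offsets, anything buffered during an outage can be drained upstream once connectivity returns.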

Second, there is a growing expectation for infrastructure to provide secrets management: the ability to manage the lifecycle of sensitive material (application credentials, certificates, and so on) outside the application. Examples of secrets management solutions from the centralized cloud include HashiCorp Vault and CyberArk. The Edge Enforcer also provides secrets management, locally in each site.

In the centralized world, rotating certificates is simple. It usually involves updating two locations: the main site and, in some cases, a disaster recovery site. Now, imagine doing that in hundreds of places. You need a centralized way to handle the rotations. Rotating a certificate across all these locations with Avassa is a single operation in the Control Tower. The rotated material is then either dynamically injected into the container environment, exposed as a mounted volume, or exposed to the application through an API.

All of this is served locally, so even during an outage, local applications can reach the most recent secrets, even if they restart or move from one host to another.
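On the consumption side, the pattern might look like the sketch below: the application reads the secret fresh on each use, from either a mounted file or an injected environment variable. The file path and variable name are hypothetical.

```python
# A sketch of the consumption side, assuming the platform injects rotated
# material as a mounted file or an environment variable. The file path and
# variable name are hypothetical.
import os
from pathlib import Path

SECRET_FILE = Path("/run/secrets/db-password")  # hypothetical mounted volume
SECRET_ENV = "DB_PASSWORD"                      # hypothetical injected env var

def current_secret() -> str:
    """Fetch the freshest copy of the secret on every use."""
    # A mounted file reflects rotations without a restart, so prefer it;
    # an environment variable only changes when the container restarts.
    if SECRET_FILE.exists():
        return SECRET_FILE.read_text().strip()
    if SECRET_ENV in os.environ:
        return os.environ[SECRET_ENV]
    raise RuntimeError("no secret injected by the platform")

password = current_secret()
```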

The third requirement is infrastructure-specific. When a new container is scheduled, the process starts with a pull operation from a registry to download the appropriate container image and start it. This implies that if users want a full restart, or want to move a container application from one host to another, they need to be able to reach the registries. That is a problem when there is an upstream outage from the edge site, rendering any central registries unreachable. Images corresponding to the containers running on each site must therefore always be available from a local registry, even in the case of a full upstream connectivity failure.
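One way to picture that requirement is the mirroring sketch below, which assumes a Docker-compatible CLI on the host and a hypothetical site-local registry at localhost:5000: each image is pulled once from upstream, then tagged and pushed into the local registry so restarts and rescheduling survive the loss of upstream connectivity.

```python
# A sketch of the image-mirroring pattern, assuming a Docker-compatible CLI
# on the host and a hypothetical site-local registry at localhost:5000.
import subprocess

LOCAL_REGISTRY = "localhost:5000"  # hypothetical site-local registry

def mirror_locally(image: str) -> str:
    """Copy an upstream image into the site-local registry."""
    local_ref = f"{LOCAL_REGISTRY}/{image.split('/')[-1]}"
    subprocess.run(["docker", "pull", image], check=True)
    subprocess.run(["docker", "tag", image, local_ref], check=True)
    subprocess.run(["docker", "push", local_ref], check=True)
    return local_ref

mirror_locally("registry.example.com/sensor-collector:1.4.2")
```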

Those are the three application and infrastructure services, provided on top of the full lifecycle management, monitoring, and observability of the edge platform. Together they contribute to the operational survivability of applications running in edge locations.

Carl Moberg

Carl has spent many years solving for automation and orchestration. He started building customer service platforms for ISPs back when people used dial-up for their online activities. He then moved on to focus on making multi-vendor networks programmable through model-driven architectures. Now CTO and co-founder at Avassa, he spends his days obsessing over how to deliver a distributed edge control plane that developers and devops love.

