Do We Need a New Generation of Abstractions for Virtualization?

Ancient Romans didn’t have computers or networks, but I’d bet that if they did, we’d see “In abstractione simplicitas” graven on their monuments; “In abstraction, simplicity”. We’ve been taking advantage of that principle, though often implicitly rather than explicitly, since the dawn of computing, and going forward it may well be that our ability to frame the right abstractions will determine how far and fast we advance.

The earliest commercial computers were often programmed in “machine language”, which required considerable skill, took a lot of time, and introduced many errors that cropped up in unexpected ways. Programming languages like FORTRAN and COBOL came along and abstracted the computer; you spoke one of these “high-level” languages to a “compiler” and that translated your instructions into machine language for the computer. We’d have a major software problem these days without this abstraction.

The other problem early computers quickly faced was that of complex but repetitive tasks. You have to read and write files or “I/O devices,” and this involves complicated things like locating the file, sensing end-of-data or error conditions, and so on. If these tasks were left to the programmers writing applications, hundreds of programmers would be doing the same things in different ways, wasting programming time. Not only that, every condition (like “mount this drive”) that required human coordination would be handled a different way, and operations would be a mess. What fixed this was the concept of the operating system, and without that abstraction we’d have a programming and operations resource problem.

If you look at these examples of early abstractions, you can see that there’s a common element. What we’re trying to do is to isolate software, software development, and software operations from resources. The same thing happens in networking. The venerable OSI model with its seven layers was designed so that, for any given layer, the layer below abstracts everything beneath it into a single set of interfaces. We abstract networks and network operations, separating them from resources. Which, of course, is also what virtualization, containers, and cloud computing do for IT overall.

There’s a challenge here, though. Two, in fact. First, if abstraction is so important in computing and networking, how do we know we have a good one? If we pick a bad strategy for abstraction, we can expect it to fail to deliver on its promise of efficiency. Second, if there’s enough abstraction going on, we may end up with nested abstraction. Stay with Latin a moment and you find “Quis custodiet ipsos custodes”, which means “Who will watch the guards themselves”. Who will abstract the abstractions, and what do they turn into?

Network abstraction and compute abstraction have each evolved, largely independently but joined loosely by the common mission of information technology. We are now, in the world of distributed software components and resource pools, entering a phase where recognizing interdependence is really essential to both these critical technology areas. An application today isn’t a monolith loaded into a compute-monolith, it’s a collection of connected components hosted across a distributed set of systems and connected within and without by a network. What is the abstraction for that?

The cloud is a compute-centric abstraction. You deploy something on virtual resources, and that works in large part because you have a set of abstractions that make those resources efficient, ranging from operating systems and middleware to containers, Kubernetes, and operations tools. The network is almost taken for granted in this model; we assume that connectivity is provided using the IP model of subnetworking, and that assumption is baked into the container/Kubernetes deployment mechanism.

The questions I’ve raised about service lifecycle automation, network infrastructure security, and service modeling all relate in some way to an abstraction model, but we really don’t define exactly what that model is. That we still have open questions about how to operationalize complex services, how to integrate hosted elements with physical devices, and how to build features into a network via hosting, connect them over the same network, or both, is proof that we lack an explicit model of abstraction. We’re seeking it, so far without success.

What are we seeking? I’ve worked with service and application standards initiatives for about thirty years now, and I’ve seen attempts at answering that question in virtually every one. I’ve also seen almost-universal failure, and the reason for the failures is that all approaches tend to be either top-down or bottom-up. Both have had fatal problems.

In a top-down approach, you start with the goals and missions, and from them you dive into how they can be achieved, through what’s essentially the process of analysis, which is a bit like hierarchical decomposition. There are two problems with the top-down approach. First, there’s a risk that it will require wholesale changes in existing technologies because those technologies don’t guide and constrain the early work—it’s above them. Second, the entire process will have to be completed before anything can be delivered, since the bottom-zone where real stuff exists isn’t addressed until the very end.

The bottom-up approach resolves these problems but creates others. Starting at the bottom means that you presume the use of at least some specific current technologies, which is good because they do exist and so can be immediately exploited. It’s bad because we’re presuming that these existing technologies are at least suitable, and hopefully optimal, for addressing goals we haven’t even gotten to yet (because they’re at the top and we started at the bottom). It’s also bad because exploiting current technology as we climb up from the bottom means we may well be picking the low-benefit apples first, making further climbing difficult to justify, and we may cement in intermediary practices to support that exploitation, practices we’ll have to undo if we continue upward.

Are we then doomed? Do we have to figure out some way to start in the middle, or at both ends? Well, maybe any or all of these things, but whatever we do, it will likely exploit the modern notion of the intent model.

Back in those early days of programming, as the concepts of “modular programming” and “components” emerged, it was common to design programs by creating “shells” for each component that defined the interfaces and (through the name and comments) the basic functionality. This was eventually formalized in most modern programming languages as a “definition” or “class” module that is then “implemented” by a second module. Any module that implemented a given class was equivalent to any other, so at the class/definition level, the implementation was opaque.
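A minimal Java sketch of that pattern might look like the following (FileStore and LocalFileStore are invented names for illustration, not drawn from any real library):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// The "definition" module: callers see only this contract.
interface FileStore {
    byte[] read(String name) throws IOException;
    void write(String name, byte[] data) throws IOException;
}

// One "implementation" module. Any class that implements FileStore
// is equivalent from the caller's side, so the implementation is
// opaque at the definition level.
class LocalFileStore implements FileStore {
    public byte[] read(String name) throws IOException {
        return Files.readAllBytes(Path.of(name));
    }
    public void write(String name, byte[] data) throws IOException {
        Files.write(Path.of(name), data);
    }
}
```

Swapping in a different implementation (a networked store, say) changes nothing for the code that depends only on the definition.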

An intent model works much like this. You specify the interfaces, meaning the inputs and outputs, but the contents of the model are invisible; it’s a “black box”. If you use intent model principles in a top-down design, you can decompose a service into pieces and specify only the stuff that’s visible from the outside. You can then decompose that further, to the point where you “implement” it.

In service modeling, you could imagine that all services could be considered to have three pieces. First, the Service_Access piece that defines how users connect to the service. Second, the Service_Connectivity piece that defines how internal connections are made, and finally the Service_Features piece that defines what functional elements make up the service experience. Each of them could be decomposed; Service_Access might decompose to “Access_Ethernet” and “Access_Consumer_Broadband”. If this process is continued, then an “empty” intent model could be defined as a hierarchy, and only at the bottom would it be necessary to actually send commands or deploy elements.
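To make that concrete, here’s a minimal Java sketch of such a hierarchy (the IntentModel class and its deploy/bindResources methods are illustrative inventions, not any standard API):

```java
import java.util.List;

// An intent model is a black box: a name, externally visible
// properties, and a decomposition into child models whose
// internals are hidden.
class IntentModel {
    final String name;
    final List<IntentModel> children;

    IntentModel(String name, IntentModel... children) {
        this.name = name;
        this.children = List.of(children);
    }

    // Interior nodes just delegate downward; only the leaves,
    // which have no children, ever touch real resources.
    void deploy() {
        if (children.isEmpty()) {
            bindResources();
        } else {
            for (IntentModel child : children) {
                child.deploy();
            }
        }
    }

    // The "resource binding" step, reached only at the bottom of the
    // hierarchy; a real system would issue commands or deploy
    // elements here.
    void bindResources() {
        System.out.println("binding resources for " + name);
    }
}
```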

Done right, this approach maintains the separation between services and resources, and also requires that you define the way that choices between implementations and features are made, without mandating that you actually do anything to solidify the process. The “resource binding” task comes only at the end, and so you’re highly flexible.

The obvious challenge in applying this strategy to service and application lifecycle management is the lack of a toolkit, but in some sense we might actually have one already, called a “programming language”. Since programming languages and techniques have applied an intent-model-like approach for decades, why not build services (for example) like we build applications? I actually did that in the first ExperiaSphere project, which used Java to create service models. It might not be the ideal approach, particularly for highly dynamic services with regular changes, but it could at least serve as a testbed to prove out the concept.
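As a rough illustration of that idea (this is not ExperiaSphere’s actual code, just the IntentModel sketch above reused), a service model built in plain Java might be composed and deployed like this:

```java
class ServiceModelDemo {
    public static void main(String[] args) {
        // The three-piece decomposition described earlier;
        // Service_Connectivity and Service_Features are left as
        // leaves here only for brevity.
        IntentModel service = new IntentModel("Service",
            new IntentModel("Service_Access",
                new IntentModel("Access_Ethernet"),
                new IntentModel("Access_Consumer_Broadband")),
            new IntentModel("Service_Connectivity"),
            new IntentModel("Service_Features"));

        // Deployment walks the hierarchy; resource binding happens
        // only at the leaves, so everything above stays abstract.
        service.deploy();
    }
}
```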

Intent modeling has been around for a long time, first informally and then in a formalized way, but it’s greatly under-utilized in network services and IT. We could go a long way toward easing our way into the future if we took it more seriously.

