Source: https://diginomica.com/time-cloud-alike-approach-architecture-nebulon-pitches-deep-infrastructure-strategy
Time for a 'cloud-alike' approach to architecture? Nebulon pitches a deep infrastructure strategy

By Martin Banks

June 3, 2022


The now-famous paper produced last year by analysts Andreessen Horowitz, The Cost of Cloud, a Trillion Dollar Paradox, observed that a lot of businesses were effectively wasting money with the hyperscale cloud service providers, spending more on their services than it would cost to provide equivalent capabilities themselves, in-house.

There are a variety of reasons for this. Many tasks ported to the cloud are regular, predictable and unchanging, so the flexibility of cloud services is in practice irrelevant to them. Others come down to straightforwardly poor cost management of the cloud services being used, often typified by forgetting to shut down cloud services once they are no longer in use – while they continue to cost money.
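As a back-of-the-envelope sketch in Python – with invented prices and hours, since the article quotes none – the arithmetic of a forgotten instance looks like this:

```python
# Illustrative only: the hourly rate and utilisation are assumptions,
# not quotes from any cloud provider.
HOURLY_RATE = 0.50           # assumed on-demand price per instance-hour (USD)
HOURS_PER_MONTH = 730        # average hours in a month

# An instance genuinely needed 8 hours a day, 5 days a week...
needed_hours = 8 * 5 * 4.33             # ~173 hours/month
useful_cost = needed_hours * HOURLY_RATE

# ...but left running 24/7 because nobody shut it down.
actual_cost = HOURS_PER_MONTH * HOURLY_RATE

print(f"Cost if stopped when idle: ${useful_cost:,.2f}/month")
print(f"Cost left running 24/7:    ${actual_cost:,.2f}/month")
print(f"Waste: {100 * (1 - useful_cost / actual_cost):.0f}% of the bill")
```

On these assumed numbers, roughly three-quarters of the spend buys nothing at all, which is the kind of figure that gets workloads re-evaluated.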

This has prompted a large number of users to re-evaluate their mix of tasks and services, and to realise that a reasonable percentage of them could probably be performed more cost-efficiently on-premises than out in the cloud. That trend demands an obvious corollary: tools and processes that make repatriation a far less daunting task than re-engineering on-premises IT resources from the ground up in a one-off exercise.

US-based startup Nebulon spotted this opportunity and has now reached the point of offering tools that fit the on-premises, private, cloud-alike environment such businesses are seeking. As one of its early steps down that path, the company engaged the ESG Research Group to help identify the market and its potential.

The resulting report showed that some 50% of enterprise CIOs had made at least early attempts at repatriating some workloads back onto their own environments, using cost savings as the justification. That argument, however, was often countered by the loss of the operational flexibility that came with running those processes in the cloud. What users required was both: the cost savings and the cloud experience, back on premises.

Dedicated

Speaking at a recent Technology Live virtual seminar in London, Siamak Nasari, Nebulon's CEO, pointed to IDC research showing that users would like to match as much of that experience as possible. The research suggested that the need for what it calls Dedicated Infrastructure as a Service is set to grow at a five-year CAGR of around 160% between 2021 and 2025.
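Taking that headline figure at face value, a quick bit of Python shows how dramatic the compounding is – the 160% CAGR is the only input taken from the research; everything else is arithmetic:

```python
# Implied market growth from a 160% CAGR between 2021 and 2025.
cagr = 1.60                  # 160% compound annual growth rate (the cited IDC figure)
years = 2025 - 2021          # four compounding periods

multiple = (1 + cagr) ** years
print(f"A {cagr:.0%} CAGR over {years} years implies a {multiple:.0f}x market size")
# (1 + 1.60) ** 4 ≈ 45.7, i.e. a market roughly 46 times its 2021 size
```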

The potential foreseen for the widespread application of edge computing is also seen by Nebulon as a key reason more enterprises will see benefit in bringing core legacy business management applications back in-house, albeit in a cloud-style environment. Nasari said:

There are places where users simply can't take the workload to the cloud, with manufacturing and financial services being two prime examples. Yet there is a growing need for enterprises to adopt the cloud model in their own operations so they can get the efficiencies they need to be able to compete.

Just going back to traditional legacy architectures, such as one application per server, is certainly not going to cut it for such companies. Some real mimicry of cloud architecture and management systems is what they will need, and this is where Nebulon has looked for inspiration in developing its solution to the problem. In particular, it has turned its attention to public cloud hyperscale giant AWS and one of its key developments, the Nitro Cards.

There are two flavours of card: one handles networking, and one manages storage and all the enterprise data services. The latter creates a level of isolation and fault-domain security, and allows the entire system to work much more efficiently, because much of the workload is moved down to the card – which in turn links to a cloud management service – rather than running on the server CPUs. These cards sit in every server in the on-premises environment.

The Nebulon implementation of this comes in three parts: the networking and data plane component, called ON; the PCIe cards on which it is based, the Services Processing Units (SPUs); and an operating environment that moves much of the workload onto the cards themselves, offloading data services and bypassing the server CPUs.
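As a conceptual sketch only – the class names and fields below are illustrative, not Nebulon's actual product API – the division of labour might be modelled like this:

```python
from dataclasses import dataclass, field

# Conceptual sketch of the architecture described above; names are
# illustrative, not Nebulon's actual API or product structure.

@dataclass
class ServicesProcessingUnit:
    """PCIe card in each server: runs data services off the host CPU."""
    server_id: str
    offloaded_services: list[str] = field(
        default_factory=lambda: ["storage", "encryption", "snapshots"]
    )

@dataclass
class CloudControlPlane:
    """Cloud-side management (the 'ON' role): one pane for every card."""
    spus: list[ServicesProcessingUnit] = field(default_factory=list)

    def enroll(self, spu: ServicesProcessingUnit) -> None:
        self.spus.append(spu)

control_plane = CloudControlPlane()
control_plane.enroll(ServicesProcessingUnit(server_id="rack1-node01"))
print(f"Managing {len(control_plane.spus)} SPU(s) from the cloud")
```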

But where AWS is able to run its own homogeneous environment, the typical enterprise doesn't often get that chance and has to buy from existing server vendors. So Nebulon has struck arrangements with the leading vendors so that customers can go to their favourite supplier, specify the Nebulon card and have it incorporated at delivery.

The other aspect of AWS architecture that caught the company’s eye was the use of Amazon Machine Images to deploy consistent workload environments, dynamically selectable from a pick list. This provides a server-agnostic instantiation of the selected environment, so that each deployment is operationally consistent regardless of the server. Nebulon has added technology that preserves the immutability of the workload instantiation over time, as systems get maintained and updated. Day-to-day management would otherwise risk changing the system environment and its security, so every time a change is made the system is restarted. At that point, Nebulon takes users back to a specifiable known state.

This can then be used with any kind of deployment tool, such as Ansible. The idea is to provide an on-premises representation of a cloud infrastructure that can then be managed from the cloud, behind an API.
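A minimal sketch of that pattern – assuming a hypothetical REST endpoint, payload shape and token, none of which are Nebulon's real interface – might look like this:

```python
import json
import urllib.request

# Hypothetical sketch of cloud-side management of an on-premises image
# template. Endpoint, payload and token are assumptions, not a real API.
API = "https://cloud.example.com/v1"
TOKEN = "example-token"

def apply_image_template(server_id: str, template: str) -> dict:
    """Ask the cloud control plane to (re)instantiate a known-good image."""
    payload = json.dumps({"server": server_id, "template": template}).encode()
    req = urllib.request.Request(
        f"{API}/servers/{server_id}/image",
        data=payload,
        headers={"Authorization": f"Bearer {TOKEN}",
                 "Content-Type": "application/json"},
        method="PUT",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Usage (against a real endpoint this would return the new deployment state):
# apply_image_template("rack1-node01", "ubuntu-22.04-golden")
```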

Managing on-premises infrastructure from the cloud has required the creation of a zero-trust SaaS delivery model, since enterprise environments often work to a different set of security requirements than the cloud, especially if they still run physical infrastructure on the premises.

One important stumbling block when returning applications to an on-premises environment – even one that is cloud-alike in operation – is failing to pay attention to what Nebulon calls ‘deep infrastructure’. Its counterpart, ‘shallow infrastructure’, is the level at which most of us mentally interact with computers: the basics of the specification – a CPU, an amount of memory, a type of network and connectivity, the operating system and the applications it runs. Nasari explained:

So what is deep infrastructure? It turns out the server is really made up of a lot more than just the CPU and the OS running on it. Think about your peripherals – your network card, your storage card, the BIOS or UEFI, the BMC; all of those have firmware running on them. While there are a lot of commonalities, there is a plethora of different pieces of hardware running out there. If you're talking about a network card, there are loads of different network cards with different firmware. So it turns out that managing deep infrastructure in a heterogeneous environment is quite difficult.

The hyperscale cloud providers have their homogeneous environments, which makes life a good bit easier for them. But in the enterprise, where servers can come from different vendors with different types of peripherals, firmware can differ significantly from rack to rack. Nebulon’s goal, therefore, is to establish some homogeneous immutability out of what is, in reality, a volatile, heterogeneous mutability.
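To make that concrete, here is a small sketch with made-up inventory data showing how quickly firmware versions diverge across even a tiny fleet:

```python
from collections import Counter

# Made-up inventory illustrating firmware heterogeneity across a small fleet.
fleet = {
    "rack1-node01": {"bios": "U30 v2.54", "nic": "fw 14.27", "bmc": "iLO 2.65"},
    "rack1-node02": {"bios": "U30 v2.54", "nic": "fw 14.31", "bmc": "iLO 2.65"},
    "rack2-node01": {"bios": "R740 2.10", "nic": "fw 20.5",  "bmc": "iDRAC 5.0"},
}

# Collect the distinct (component, version) pairs seen in the fleet...
versions = Counter(
    (component, version)
    for server in fleet.values()
    for component, version in server.items()
)
# ...then count how many distinct versions exist per component class.
distinct = Counter(component for component, _ in versions)
print(dict(distinct))   # e.g. {'bios': 2, 'nic': 3, 'bmc': 2} - no two racks alike
```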

Its solution has been, and continues to be, to develop a series of new implementations of the zero-trust security model. One, for example, ensures that every instantiation of an application is the same every time.

Users don't have to manage them individually, which avoids the risk of configuration drift and aims to guarantee that applications are delivered securely, reliably and repeatably. Another implementation covers potential drift in the computer hardware itself. Nasari said:

If you're restarting your server, and every time you restart it comes back with an immutable instance of the OS, then there is no drift happening in the system over time. This gives an unmatched level of availability. If your servers are impacted by malware or some system administrator installing a patch, restart the system and it goes back into a known configuration that the system architect has decided is the standard.
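The reboot-to-known-state idea reduces to comparing the running image against a golden reference at boot. A minimal sketch follows, in which the path, hash value and restore step are all assumptions for illustration:

```python
import hashlib
from pathlib import Path

# Sketch of restart-to-known-state: compare the image on disk against a
# golden hash and roll back on any mismatch. Paths, the hash value and the
# restore step are illustrative assumptions, not Nebulon's implementation.
GOLDEN_SHA256 = "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"

def image_hash(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

def boot_check(image: Path) -> None:
    if image.exists() and image_hash(image) == GOLDEN_SHA256:
        print("No drift: booting the verified image")
    else:
        print("Drift detected: restoring the golden image before boot")
        # restore_golden_image(image)  # supplied by the card/control plane

boot_check(Path("/boot/os-image"))
```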

This configuration data becomes integral to the Nebulon card, reducing the collection of drivers and agents that would otherwise accumulate, and helping to isolate fault domains. It also reduces the actual load on the CPU: because the services that would ordinarily run on the server are running on the card, CPU cores are freed up to run applications.

The final part of this is then the use of encryption, soup-to-nuts, as a default state for all volumes, including boot volumes. Nasari said:

This really needs to happen. If it's difficult to implement, people just won't implement it. It has to be built into the system. Then the ransomware recovery model works, because the card is running all the storage services and can keep track of what has been changing. You can go to the cloud and say, take my system back to the day before yesterday. Because the card is controlling all the state, including the boot state, it can take the entire system back to a known good state in time.
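Because the card tracks every change, point-in-time recovery reduces to choosing the newest snapshot taken before the suspected compromise. A sketch with invented snapshot records:

```python
from datetime import datetime, timedelta

# Illustrative sketch of "take my system back to the day before yesterday":
# pick the newest snapshot older than the cutoff. The snapshot records and
# the one-day policy are invented for the example.
snapshots = [
    {"id": "snap-001", "taken": datetime(2022, 5, 30, 2, 0)},
    {"id": "snap-002", "taken": datetime(2022, 5, 31, 2, 0)},
    {"id": "snap-003", "taken": datetime(2022, 6, 1, 2, 0)},
]

infected_at = datetime(2022, 6, 1, 14, 30)
cutoff = infected_at - timedelta(days=1)   # roll back a full day before impact

candidates = [s for s in snapshots if s["taken"] <= cutoff]
restore_point = max(candidates, key=lambda s: s["taken"])
print(f"Rolling every volume (boot included) back to {restore_point['id']}")
```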

My take

Using the cloud to deliver a fake cloud system to a real on-premises environment may seem a bit of overkill, but this is an area where a new compromise is required. There are many applications – even new ones – where operating closer to home than a public cloud service allows is the right answer. But that doesn’t necessarily require the full-on management of a co-location service. The time has come for some granularity between the two – a time to slice a bit of ‘public’ off and find a way to make it ‘private’.

That is where Nebulon is heading and as one of the first contenders for this new opportunity, only time will tell how close it is to whatever ends up as the sweet spot of the market. For every CIO, however, the thinking behind this development, and the questions it sets out to answer, should now be filling at least one corner of their mind. This is a problem they are all likely to face, possibly sooner than expected.

