
Why the Shift to Cloud-Native Doesn’t Mean Losing Your Legacy Infrastructure Investments


This question will inevitably come up when an organization opts to make the leap to cloud environments: what will become of their investments in legacy data center infrastructure? 

This concern is well warranted. Enterprises in traditional industries have often made significant investments in data centers and server equipment. Beyond the equipment itself, organizations such as insurance firms and banks have often spent years maintaining existing monolithic applications written in legacy languages such as COBOL. Many organizations' entire business operations rely on what may be millions of lines of code they wish to preserve as they begin their digital transformation.

Furthermore, a profitable company may have little incentive to move off its legacy infrastructure: the systems are mission-critical, work at least reasonably well, and rewriting the code carries real risk.

In all of these cases, they fear losing investments that may have run to tens of millions of dollars over several years. However, organizations, even profitable ones, cannot afford to forgo digital transformation.

At the same time, cloud environments offer obvious benefits, and upstarts that are solely cloud-native have disrupted leaders in many industries. Microservices architectures also let many organizations deploy applications at rapid cadences that can keep increasing as the cloud's power grows. More importantly, they can leverage cloud resources to improve the customer experience in ways that were impossible with a traditional monolithic data center model.

Getting to ‘Yes’

Understandably, organizations continue to struggle with the shift to hybrid and multi-cloud environments. Part of the struggle is determining how to preserve their investments in legacy systems. All too often, it may seem reasonable to "lift and shift," moving existing apps to the cloud wholesale without redesigning them. Organizations might come to believe that matching the customer experiences successful cloud-native startups offer inevitably means making their legacy systems redundant, and losing those investments in the process.

However, digital transformation does not hinge on cloud resources alone. While the cloud is a major component in the process, organizations cannot ignore the fact that they still have legacy infrastructure and, for the most part, it will still be there for years to come. 

Moving to the cloud doesn't mean an enterprise must shut down its legacy data centers and mainframe servers. Instead, the cloud lets organizations reap new benefits while still integrating legacy infrastructure.

The insurance industry serves as a prime example. Often wrongly perceived as lagging behind the financial services industry in innovation, insurance firms are increasingly seeking to offer new, ultra-fast digital services, including policies processed within 90 seconds of the customer completing the application. Some digital insurers can even pay claims less than three minutes after they are filed online, or use IoT sensor data to deliver hyper-personalized services. These are examples of what is possible.

An established insurance company that has been profitable for decades may feel threatened by such a solely cloud-native and seemingly more agile startup. The narrative around that startup might even involve how it began operations with the founder firing up a Kubernetes cluster with Elastic Kubernetes Service (EKS) on Amazon Web Services (AWS) from a laptop.

However, the good news is that the traditional insurance company could potentially offer its customer base digital services superior to those of a cloud-based startup. This can be achieved by combining cloud resources with its tried-and-trusted legacy data center infrastructure. With the right technologies in place, legacy infrastructure plus cloud resources can deliver a competitive edge that a cloud-only company can't replicate.

Key technologies that enable this shift include in-memory computing, caching, cloud bursting, and future-proofing with a modern Operational Data Store (ODS), which we describe below. These span both on-premises and cloud infrastructure and support Change Data Capture (CDC) as well as real-time processing of data and analytics queries.

The idea is to do more than just avoid losing your investments in legacy infrastructure. There are ways to combine existing on-premises servers and infrastructure with cloud services for significant performance gains while lowering costs and, most importantly, improving the customer experience.

Cloud Bursting 


Figure 1: Cloud bursting can allow for seamless addition of more server or memory capacity in a cloud environment in conjunction with on-premises needs.

A main selling point of cloud environments has been the elasticity they offer: computing resources can be added and reduced on an as-needed basis. However, this capability typically requires manual intervention. A more fine-tuned way for organizations to benefit from both their existing on-premises infrastructure and cloud services is cloud bursting.

Cloud bursting allows for the seamless addition of server or memory capacity in a cloud environment as on-premises needs dictate. This way, capacity scales automatically, with server and computing resources accessed only when needed.

At the same time, your organization should be able to rely on cloud bursting to scale from on-premises to cloud environments. This capability, among other things, extends the capacity of your on-premises infrastructure to handle peak computing and data loads. 
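To make the mechanism concrete, here is a minimal Python sketch of a threshold-based bursting policy. Everything in it, from the CapacitySnapshot fields to the CloudProvisioner adapter, is a hypothetical illustration rather than any particular vendor's API; a production platform would make these decisions automatically and integrate with real provisioning services.

from dataclasses import dataclass

@dataclass
class CapacitySnapshot:
    # Hypothetical utilization readings across the on-premises cluster (0.0 to 1.0).
    cpu_utilization: float
    memory_utilization: float

class CloudProvisioner:
    # Hypothetical adapter around a cloud vendor's provisioning API (AWS, GCP, Azure, ...).
    def add_nodes(self, count: int) -> None:
        print(f"Provisioning {count} cloud node(s)")
    def remove_nodes(self, count: int) -> None:
        print(f"Releasing {count} cloud node(s)")

def burst_policy(snapshot, provisioner, burst_threshold=0.85, release_threshold=0.50):
    # Burst to the cloud when on-prem load crosses the high-water mark;
    # release cloud capacity again when load falls below the low-water mark.
    load = max(snapshot.cpu_utilization, snapshot.memory_utilization)
    if load > burst_threshold:
        provisioner.add_nodes(2)
    elif load < release_threshold:
        provisioner.remove_nodes(2)

burst_policy(CapacitySnapshot(cpu_utilization=0.92, memory_utilization=0.70), CloudProvisioner())

The hysteresis between the two thresholds is deliberate: it keeps the system from thrashing between bursting and releasing when load hovers near a single cutoff.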

The underlying platform that facilitates cloud bursting should also rely on in-memory storage and processing capabilities that maintain computing performance across the network where needed, boosting scaling speeds and keeping data transfers at low latency.

Under this model, hybrid availability with cloud bursting should also, of course, extend to computing resources from different cloud vendors. Neither Amazon Web Services (AWS), Google Cloud, Azure, nor anyone else has a monopoly on the best resources. A viable cloud bursting option should thus allow DevOps teams to pick and choose among vendors to meet their needs.

Future-Proofing

The last thing your organization wants is to adopt an all-or-nothing platform — one that extends its legacy infrastructure to cloud environments but limits options in the future as new tools and technologies become available. Decoupling is one way to avoid cloud or software provider lock-in that tethers a legacy system to a particular software platform or cloud environment.

A cloud application or data platform must be configured so that it can be replaced at will. In fact, it should be a warning sign if DevOps cannot replace the application, now or in the future, without changing a single line of code. Gradually modernizing applications running across your on-premises and cloud environments should depend only on the best option available for your organization's needs at any given time, with no compatibility worries now or in the future.
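As an illustration of this kind of decoupling, the following Python sketch hides two interchangeable backends behind one interface. The DataStore interface and both stores are hypothetical stand-ins, not any specific product; the point is only that application code written against the abstraction can have its backend swapped without changing a line.

from abc import ABC, abstractmethod

class DataStore(ABC):
    # The contract the application depends on; no vendor specifics leak through it.
    @abstractmethod
    def get(self, key: str): ...
    @abstractmethod
    def put(self, key: str, value) -> None: ...

class MainframeStore(DataStore):
    # Hypothetical adapter over a legacy system of record.
    def __init__(self): self._records = {}
    def get(self, key): return self._records.get(key)
    def put(self, key, value): self._records[key] = value

class CloudStore(DataStore):
    # Hypothetical adapter over a cloud data service.
    def __init__(self): self._records = {}
    def get(self, key): return self._records.get(key)
    def put(self, key, value): self._records[key] = value

def process_claim(store: DataStore, claim_id: str) -> None:
    # Application logic sees only the interface; the backend is chosen
    # at deployment time and can be replaced without touching this code.
    store.put(claim_id, {"status": "approved"})

process_claim(MainframeStore(), "claim-17")
process_claim(CloudStore(), "claim-18")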

ODS’ Place


Figure 2: The ODS configuration removes data silos by combining data from different sources for a single view.

Like "digital transformation," a "single pane of glass" arguably falls into the buzzword category. Still, both terms describe something necessary to successfully deploy resources in cloud environments, and an ODS fits the "single pane of glass" description: it removes data silos by combining data from different sources into a single view (hence the single pane of glass). These sources might include data from legacy mainframe servers, cloud environments, and, of course, microservices-connected data from highly distributed on-premises and multi-cloud environments.
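To illustrate the idea, here is a minimal Python sketch of an ODS-style unified view that merges partial customer records from several sources. The sources and the merge rule are hypothetical; a real ODS applies far richer reconciliation, consistency, and indexing logic.

def unified_customer_view(customer_id, sources):
    # Combine partial records from every source into a single dictionary.
    # Later sources win on conflicting fields; a production ODS would
    # apply explicit reconciliation rules instead of simple overwrites.
    view = {"customer_id": customer_id}
    for source in sources:
        view.update(source.get(customer_id, {}))
    return view

# Hypothetical source data: a legacy mainframe record and a cloud application record.
mainframe = {"c-100": {"policy": "AUTO-774", "joined": "2003-05-12"}}
cloud_app = {"c-100": {"email": "pat@example.com", "last_login": "2022-01-30"}}
print(unified_customer_view("c-100", [mainframe, cloud_app]))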

Other benefits of an ODS include added security. Access to a centralized system of record, for example, remains limited, unlike data pools where several users might have API access, a common gateway for security breaches.

The ODS is not a fresh concept, but the underlying technology for using it with legacy infrastructure to support digital transformation is relatively new. With emerging technologies such as the Smart Operational Data Store, ODS systems can serve as Digital Integration Hubs suited to today's hybrid environments that include legacy as well as cloud-native infrastructure. This represents a departure from the not-so-distant past, when ODS systems of record suffered from high latency, unacceptable lags in updating data, and other connectivity issues that rendered them unfeasible for distributed environments spanning multiple infrastructures.

Real-Time Data Access 

The ability to manage an organization's entire range of data largely depends on maintaining connections between all data sources, including on-premises legacy servers and multi-cloud environments whose microservices are often connected through APIs. This connectivity should allow real-time access to data regardless of its location. In our insurance example, a user seeking approval for a claim, or a banking customer elsewhere seeking an online loan, requires immediate access to transactional data. If the required data remains siloed in a system of record on a legacy mainframe, an all-cloud upstart can seem years ahead in its offerings.

Change Data Capture (CDC) is a technology that ensures high-speed data connectivity between legacy on-premises servers and cloud environments. It ensures any change made to a source database — regardless of its location in a network — is automatically updated in a target data store. 
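As a rough illustration of that idea, the Python sketch below applies a stream of change events to a target store. The event format and the stores are hypothetical; actual CDC tooling reads database transaction logs and handles ordering, failures, and schema changes.

def apply_change(event, target):
    # Mirror a single insert/update/delete from the source into the target store.
    op, key, row = event["op"], event["key"], event.get("row")
    if op in ("insert", "update"):
        target[key] = row
    elif op == "delete":
        target.pop(key, None)

# A hypothetical feed of changes captured from a legacy source database.
target_store = {}
change_feed = [
    {"op": "insert", "key": "policy-9", "row": {"premium": 120}},
    {"op": "update", "key": "policy-9", "row": {"premium": 135}},
    {"op": "delete", "key": "policy-9"},
]
for event in change_feed:
    apply_change(event, target_store)
print(target_store)  # empty again: the delete was replicated too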

A distributed in-memory data storage and processing layer also relies on Smart Caching of data from both legacy and cloud applications. A smart cache that maintains data consistency ensures the data remains identical across on-premises and multi-cloud environments while being accessed through an in-memory cache at very low latency.
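A minimal sketch of the pattern, assuming a hypothetical backing store: reads are served from memory when possible, and a CDC-style notification invalidates stale entries so the cache stays consistent with the system of record.

class ReadThroughCache:
    def __init__(self, backing_store):
        self._store = backing_store  # e.g., the slower legacy system of record
        self._cache = {}

    def get(self, key):
        # Serve from memory when possible; fall back to the backing store.
        if key not in self._cache:
            self._cache[key] = self._store.get(key)
        return self._cache[key]

    def invalidate(self, key):
        # Called when a CDC event reports that the source row changed.
        self._cache.pop(key, None)

# Hypothetical usage: a source update followed by a CDC-driven invalidation.
record_store = {"policy-9": {"premium": 120}}
cache = ReadThroughCache(record_store)
print(cache.get("policy-9"))          # first read hits the backing store
record_store["policy-9"] = {"premium": 135}
cache.invalidate("policy-9")          # keeps the cache consistent
print(cache.get("policy-9"))          # reread picks up the new value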

Data Jurisdictions

Every update to a mainframe database or to a cloud data store — such as when data is input to a data center server or a customer completes an online transaction — should be replicated between all ODS implementations automatically. These data transfers should be encrypted and configured to meet local jurisdictional data mandates and compliance. 

However, in some cases, such as large global organizations that require multiple data routes in multi-cloud environments, data stores need to be delineated, or tenanted, according to geographic zone and jurisdiction. Global organizations thus cannot rely on a single cloud vendor. Instead, in order to comply with localized data storage laws and mandates, their architectures must support multiple cloud environments.

A customer in China, for example, might insist that all data associated with your service remains on servers physically located in China in order to comply with regulations that benefit local vendors. In the European Union (EU), data access and storage requirements are subject to certain restrictions under the General Data Protection Regulation (GDPR) and other mandates. An organization in a heavily regulated industry, such as insurance or banking, will certainly be subject to data localization in different jurisdictions worldwide.
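As a simple illustration, the following Python sketch routes each record to a store in its customer's jurisdiction. The region codes and in-memory stores are hypothetical placeholders for real regional deployments on separate cloud or on-premises infrastructure.

# One compliant store per jurisdiction; stand-ins for real regional deployments.
REGIONAL_STORES = {"CN": {}, "EU": {}, "US": {}}

def write_record(customer_region, key, value):
    # Keep the record inside its legal jurisdiction; reject unknown regions
    # rather than silently defaulting to some other location.
    store = REGIONAL_STORES.get(customer_region)
    if store is None:
        raise ValueError(f"No compliant store configured for region {customer_region}")
    store[key] = value

write_record("EU", "customer-42", {"name": "Anna"})  # stays on EU infrastructure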

Meeting data access compliance requirements serves as another example of how technologies such as the Smart Operational Data Store, CDC, and intelligent data replication come into play. It is now possible, again with the right systems in place, for an organization with on-premises legacy infrastructure and a rapidly growing presence across multi-cloud environments to meet both the performance and compliance demands of serving online customers across multiple geographic zones and jurisdictions.

Leverage Your Legacy

The decision to benefit from cloud environments still lets organizations with traditional mainframe and other legacy servers take advantage of their investments. Through cloud bursting, modern ODS platforms, Smart Caching, and the other technologies described above, enterprises in long-established sectors such as insurance can leverage both on-premises and cloud resources. Thanks to these resources, legacy players can offer end customers services that remain out of reach for even the most agile cloud-only startups. In other words, your legacy infrastructure does not have to be a liability; in fact, it should be an asset.
