
Kubernetes automation with Relay

by Peter DeTender | 25 May 2021
See more posts about: Products & Services

Why Use Kubernetes for CloudOps?

Kubernetes — a popular open source container orchestration system — enables you to easily deploy, monitor, and scale cloud-native application workloads in both private and public cloud environments. In other words, Kubernetes does the hard work of managing containerized applications, giving you more time to spend building them. Adopting Kubernetes can seem like a daunting task at first, but many teams that have taken the leap find that, after the initial learning curve, Kubernetes transforms the way they approach applications from development to multi-cloud operations.

But why is Kubernetes useful for CloudOps? And how can Relay by Puppet, with its automation-workflow-as-a-service integrations, ease several typical Kubernetes tasks? Let's find out.

Kubernetes Cloud Services and CloudOps

You can build and deploy a full Kubernetes topology on-premises, since it is all based on traditional virtual machine compute, storage, and networking resources. Solutions like Red Hat OpenShift, Apache CloudStack, OpenStack, and others are popular. You could also build your own Kubernetes environment using Docker running on Linux — either on virtual machines or on bare metal.

Google Cloud Platform (GCP), Amazon Web Services (AWS), and Microsoft Azure each offer Kubernetes as a service hosted on their respective cloud environments:

  • Google Kubernetes Engine (GKE) on Google Cloud
  • Elastic Kubernetes Service (EKS) on AWS
  • Azure Kubernetes Service (AKS) on Azure

In addition to the public cloud model, Google, Amazon, and Microsoft now also offer similar Kubernetes-as-a-service deployments for on-premises architectures (Google Anthos, Azure Stack HCI, and Amazon EKS Anywhere).

The biggest difference between running your own self-hosted topology and using one of the main public cloud offerings falls under the concept of CloudOps: the level of operations and maintenance you still need to manage as a cloud consumer.

In a self-hosted setup, you are responsible for the day-to-day management of the entire stack — from hardware to VMs to Kubernetes itself — on top of your applications. Your management tasks include monitoring, patching, disaster recovery, and maintaining both hardware and software availability.

Kubernetes-as-a-service offerings eliminate most of that complexity, enabling you to focus on the services and applications running within the Kubernetes environment instead of managing all the underlying infrastructure.

Kubernetes Automation with Relay

Whether an organization chooses one of the cloud Kubernetes services or runs in an on-premises environment, another benefit of Kubernetes is that the requirements and operations are always the same. A developer can run a miniature Kubernetes stack on a local workstation for development and then deploy the same application code to a private cloud environment, or to a public cloud service like AKS — the application services and underlying foundation (the Kubernetes cluster) remain the same.
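
To make that portability concrete, here is a minimal sketch using the official Kubernetes Python client: the same Deployment object is applied first to a local cluster and then to a managed cloud service such as AKS, with only the kubeconfig context changing. The context names ("docker-desktop", "my-aks-cluster"), the application name, and the nginx image are placeholders for illustration, not anything prescribed by Relay or Puppet.

    from kubernetes import client, config

    # One Deployment definition, reused unchanged across clusters.
    deployment = client.V1Deployment(
        metadata=client.V1ObjectMeta(name="hello-web"),
        spec=client.V1DeploymentSpec(
            replicas=2,
            selector=client.V1LabelSelector(match_labels={"app": "hello-web"}),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels={"app": "hello-web"}),
                spec=client.V1PodSpec(
                    containers=[client.V1Container(name="web", image="nginx:1.25")]
                ),
            ),
        ),
    )

    # Only the kubeconfig context changes: a local cluster first, then AKS.
    for context in ("docker-desktop", "my-aks-cluster"):
        config.load_kube_config(context=context)
        apps = client.AppsV1Api()
        apps.create_namespaced_deployment(namespace="default", body=deployment)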

Kubernetes CloudOps means you can scale your containerized application investment from exploratory efforts to full multi-cloud engagement with the same skills and people. But out of the box, Kubernetes still needs manual, imperative oversight and management by your CloudOps team.
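
Without an automation layer, that oversight usually means hand-run scripts against the cluster API. The sketch below, again using the Kubernetes Python client with a placeholder namespace, shows the kind of imperative check a CloudOps engineer ends up performing by hand: poll every Deployment and flag the ones with fewer available replicas than desired.

    from kubernetes import client, config

    # Imperative health check: compare desired vs. available replicas for
    # every Deployment in a namespace and flag anything that needs attention.
    config.load_kube_config()
    apps = client.AppsV1Api()

    for dep in apps.list_namespaced_deployment(namespace="default").items:
        desired = dep.spec.replicas or 0
        available = dep.status.available_replicas or 0
        if available < desired:
            print(f"{dep.metadata.name}: {available}/{desired} replicas available")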

Relay by Puppet provides a workflow automation engine as a service that helps organizations manage their environments, whether they run on-premises, in public clouds, or in hybrid cloud setups. Its workflows bring declarative-style functionality to otherwise manual operations, enabling things like automatic installs and updates via a Helm integration. Relay can also automate incident remediation with Kubernetes rollbacks, as in this workflow.
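
The rollback itself is an ordinary Kubernetes operation that such a workflow can drive. As a rough illustration of what that remediation boils down to (this is not the Relay workflow definition itself), the sketch below uses the Kubernetes Python client to patch a Deployment's pod template back to its previous ReplicaSet revision; the Deployment name and namespace are placeholders, and a real workflow would add error handling and dry-run safeguards.

    from kubernetes import client, config

    config.load_kube_config()
    apps = client.AppsV1Api()
    name, namespace = "hello-web", "default"

    dep = apps.read_namespaced_deployment(name, namespace)

    # ReplicaSets owned by the Deployment carry a revision annotation; sort by it.
    selector = ",".join(f"{k}={v}" for k, v in dep.spec.selector.match_labels.items())
    replica_sets = apps.list_namespaced_replica_set(namespace, label_selector=selector).items
    replica_sets.sort(
        key=lambda rs: int(rs.metadata.annotations["deployment.kubernetes.io/revision"])
    )

    if len(replica_sets) >= 2:
        previous = replica_sets[-2]  # second-newest revision
        template = previous.spec.template
        # Drop the controller-managed label before patching the old template back.
        template.metadata.labels.pop("pod-template-hash", None)
        apps.patch_namespaced_deployment(name, namespace, {"spec": {"template": template}})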

By providing the necessary connectors to integrate with common datacenter technologies — like AWS, Azure, GCP, Jira, Datadog, GitHub, Twilio, Docker, and many more — Relay lets you automate interactions across many different applications and environments. This is useful in several scenarios where different applications need to work together; for example, Relay can help optimize cloud costs by identifying and removing unused cloud resources.

Relay provides a connector for Kubernetes. Using Relay workflow automation, you can automate tasks you would otherwise need to run against your Kubernetes cluster manually. To learn more, see Relay’s documentation and sample workflow.

Peter DeTender was an Azure MVP for 5 years, has been a Microsoft Certified Trainer (MCT) for 12+ years, and is still actively involved in the community as a public speaker, technical writer, book author, and publisher.

Learn more

  • Find out more about Relay.
  • Learn how to auto-remediate your cloud-native environment, roll back Kubernetes deployments, and update Datadog incidents automatically with this workflow.
  • Learn what K3s (pronounced “keys”) is and how Relay uses it for product DevOps orchestration in this blog post.
  • Learn how to install and use Knative with no coding experience (AKA no YAML).