
Modernizing Python Apps and Data on Azure - Part 4: Azure Kubernetes Service


Marcelo Ricardo de Oliveira

2 May 2022 · CPOL · 9 min read
How to containerize a legacy Python app, create an AKS cluster, and deploy it using VS Code
This is Part 4 of a 6-part series that demonstrates how to take a monolithic Python application and gradually modernize both the application and its data using Azure tools and services. This article shows how to containerize your legacy Python app, create an AKS cluster, and deploy the app to AKS using Visual Studio Code.

Azure offers a variety of cloud-based services to fit your needs. For example, Azure App Service may be the best choice for your single monolithic app or n-tier app with a few backend services. However, if you have multiple back-end services or microservices, Azure Kubernetes Service (AKS) is a better fit.

AKS is more complex than Azure App Service. Still, it’s a good choice when an organization moves its applications to the cloud and wants to run all of its legacy and modern apps in a unified container runtime environment.

In this fourth article of the six-part Modernizing Python Apps and Data on Azure series, we’ll demonstrate how to containerize your legacy Python app, create an AKS cluster, and deploy the app to AKS using Visual Studio Code. To catch up, review the previous three articles in this series.

Follow the steps in this tutorial to get your application running. Or, download and open this GitHub repository folder to get the final project. You should know Python and Django to follow along.

Introducing Azure Kubernetes Service (AKS)

Our Azure Kubernetes Service (AKS) implementation includes these participants:

  • Azure Load Balancer: When we configure Azure Kubernetes Service, it creates a new Load Balancer. Azure Load Balancer helps scale our applications and create highly-available services.

  • Azure CLI: The Azure command-line interface (Azure CLI) lets us create and manage Azure resources. It’s available across Azure services to get you working quickly, emphasizing automation. We’ll use Azure CLI to log in to an Azure account, manage credentials, create a role, and connect to the container registry, among other tasks.

  • Azure Container Registry: We'll use Azure Container Registry to store and manage container images. Once we build our Conduit image locally using Visual Studio Code, we’ll push the image to our Azure Container Registry to create a container with our Conduit app running inside.

  • Azure Kubernetes Cluster: We can create an AKS cluster using the Azure command-line interface (CLI), Azure portal, or PowerShell.

  • Azure Active Directory and RBAC: We can configure Azure Kubernetes Service to use Azure Active Directory (AD) for user authentication. We sign into an AKS cluster using an Azure AD authentication token in this configuration. Once authenticated, we’ll use the built-in Kubernetes role-based access control (Kubernetes RBAC) to manage our new cluster based on our user identity.

  • Azure Database for PostgreSQL: Our containerized app will interact with the existing PostgreSQL database we created in the second article of this series.

Containerizing a Django App

Azure App Service lets us host web apps, mobile back ends, and REST APIs in our favorite language. It enables apps to run on Windows and Linux, offers auto-scaling and high availability, and enables automated deployments. That said, you may be wondering why we would move to a container-based approach.

Organizations often choose this approach because containers offer many possibilities. They’re isolated, portable environments where developers can package an application together with the libraries and dependencies it needs. The result is greater efficiency and simpler deployment.

Containers have been around for a long time but took a while to gain traction. Docker popularized containers with its wide range of support and ease of learning, and in a short time, it became the industry standard.

To get Docker running on your machine, install Docker Engine. It’s available natively on Linux distributions, on macOS and Windows through Docker Desktop, and as a static binary installation.

Installing Docker and Kubernetes for VS Code Extensions

Let’s install the Docker extension for Visual Studio Code. It makes it easy to build, manage, and deploy containerized applications from VS Code and provides many convenient commands for managing your containers.

Then install Kubernetes for Visual Studio Code. This extension lets developers build applications to run in Kubernetes clusters and helps DevOps staff troubleshoot Kubernetes applications.

Containerizing a Legacy Python App

Now let’s add Docker files to our workspace by opening the Command Palette (Ctrl+Shift+P) and choosing the Docker: Add Docker Files to Workspace command.

This command generates Dockerfile and .dockerignore files and adds them to your workspace. VS Code will also ask if you want to add Docker Compose files, which is optional.

The extension can scaffold Docker files for popular development languages, including Python. Select Python: Django from the list to customize the generated Docker files accordingly.

Then choose manage.py as your app’s entry point.

And add 8000 as the port your app will listen on.

Then choose Yes to include optional Docker Compose files.

When we create these files, we also make the necessary artifacts to provide debugging support for Python.

A traditional web server can’t run Python applications on its own; it needs additional modules. The WSGI specification provides a standard interface that lets a web server execute Python code correctly.

So, let’s install Gunicorn, a Python WSGI HTTP server compatible with various web frameworks. Open the application’s requirements.txt file and add this line:

gunicorn==20.1.0

Open the Dockerfile file and add this code block before the requirements.txt installation:

RUN apt-get update \
    && apt-get -y install libpq-dev gcc \
    && pip install psycopg2

At the end of the Dockerfile, modify this line so Gunicorn knows how to serve your Conduit WSGI application:

CMD ["gunicorn", "--bind", "0.0.0.0:8000", "conduit.wsgi:application"]
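For reference, the assembled Dockerfile ends up looking roughly like the sketch below. This is a hypothetical reconstruction based on the VS Code Python/Django scaffold, so the base image tag and helper steps on your machine may differ:

```dockerfile
# Sketch of the assembled Dockerfile; the scaffold VS Code generates
# may use a different base image tag or extra helper steps.
FROM python:3.10-slim

EXPOSE 8000
ENV PYTHONDONTWRITEBYTECODE=1
ENV PYTHONUNBUFFERED=1

# System packages needed to build psycopg2 against the PostgreSQL headers
RUN apt-get update \
    && apt-get -y install libpq-dev gcc \
    && pip install psycopg2

COPY requirements.txt .
RUN python -m pip install -r requirements.txt

WORKDIR /app
COPY . /app

# Serve the Conduit WSGI application with Gunicorn
CMD ["gunicorn", "--bind", "0.0.0.0:8000", "conduit.wsgi:application"]
```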

Now open the docker-compose.yml file and replace both the service name and the image name with “conduit.”

version: '3.4'
services:
  conduit:
    image: conduit
    build:
      context: .
      dockerfile: ./Dockerfile
    ports:
      - 8000:8000

Then open the docker-compose.debug.yml file and replace both the service name and the image name with “conduit.”

services:
  conduit:
    image: conduit

Now, modify the \conduit\settings.py file to include a new allowed host we'll configure later on AKS via an environment variable:

ALLOWED_HOSTS = ['127.0.0.1', 'localhost', 'conduitwebapp.azurewebsites.net', os.environ.get("AKS_HOST")]
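Note that when AKS_HOST is not set locally, os.environ.get returns None, leaving a None entry in the list. A minimal variant (a sketch, not the article’s code) filters out unset entries:

```python
import os

# Sketch: same hosts as above, but unset (None) entries are dropped.
ALLOWED_HOSTS = [host for host in (
    '127.0.0.1',
    'localhost',
    'conduitwebapp.azurewebsites.net',
    os.environ.get("AKS_HOST"),  # None when the variable is not set
) if host]
```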

Finally, right-click the Dockerfile and click Build Image to build a containerized image of your Conduit application.

Then tag the image as “conduit:v1”.

Creating a Resource Group

To host a Kubernetes-based application on Azure, you need to create resources such as virtual machines (VMs), storage accounts, virtual networks, web apps, databases, and database servers. As you host multiple applications on Azure, managing the growing number of resources can become overwhelming. Working with resource groups helps: an Azure resource group is a container that holds the related resources of an Azure solution.

Using the Microsoft Azure Portal, let's create a resource group, conduit-rg. First, click Create a resource and search for “resource group”:

Next, click Create.

Then, name it “conduit-rg” and click Review + create.
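If you prefer the command line, the same resource group can be created with the Azure CLI. This is a sketch; the eastus location is an assumption, so substitute your own region:

```text
> az group create --name conduit-rg --location eastus
```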

Creating an AKS Cluster

Go to the Azure Portal home and click the plus sign to create a new AKS cluster. Then, search for “Kubernetes Service”:

Then click Create.

Define the Azure Subscription next. The Azure subscription grants you access to Azure services. Azure also provides resource use reports and bills services according to your subscription.

Specify the Resource group. This collection of resources shares the same lifecycle, permissions, and policies.

Also, choose the Cluster preset configuration. Since this is a demo, we’re aiming for the least cost: the Dev/Test ($) preset.

The example in the following screenshot creates a cluster named “conduit-cluster” with one node. This creation will take several minutes to complete.
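The same cluster can also be created from the Azure CLI. In this sketch, the node count matches the article, but the remaining flags are assumptions you may want to adjust:

```text
> az aks create --resource-group conduit-rg --name conduit-cluster --node-count 1 --generate-ssh-keys
```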

Creating a Container Registry

You can use the Azure Container Registry to build, store, and manage container images and artifacts in a private registry, no matter your container deployment type. Later, this Container Registry will store the same image we built locally.

Choose the Resource group “conduit-rg” and name the Registry “conduitacr”. Also, choose the “Basic” SKU, then click Review + create.
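Alternatively, the registry can be created from the Azure CLI. This sketch uses the same names and SKU chosen above:

```text
> az acr create --resource-group conduit-rg --name conduitacr --sku Basic
```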

Preparing AKS to Publish the App

Let’s use the command line with the Azure CLI and kubectl to prepare AKS to publish our app.

If you're using a local install, log in with Azure CLI using the az login command. Follow the steps your terminal displays to finish authentication.

> az login

This command will open your web browser at https://login.microsoftonline.com/organizations/oauth2/v2.0/authorize and prompt you to sign in. Once you have signed in, your local installation will remember your account.

Installing kubectl

kubectl is a command-line tool for managing Kubernetes clusters. Set up Kubernetes tools on your computer by installing kubectl.

Use the az aks get-credentials command to configure kubectl to connect to your Kubernetes cluster. This command downloads credentials and configures the Kubernetes CLI to use those credentials.

> az aks get-credentials --resource-group conduit-rg --name conduit-cluster

The command results in the following information:

Merged "conduit-cluster" as current context in C:\Users\moliveira\.kube\config

Now, let’s use the following command to create a service principal and assign the AcrPull role to it:

> az ad sp create-for-rbac --role AcrPull

Creating 'AcrPull' role assignment under scope '/subscriptions/3acc8650-3ea0-42db-b1dd-694439b0aa06'.

Before pushing and pulling container images, we must log in to the Azure Container Registry we have already created. Use the az acr login command specifying only the "conduitacr" registry name when logging in with the Azure CLI.

> az acr login --name conduitacr

Login succeeded

Next, run the following command to attach the conduitacr ACR account to the "conduit-cluster" AKS cluster within the conduit-rg resource group:

> az aks update -n conduit-cluster -g conduit-rg --attach-acr conduitacr

AAD role propagation done[############################################] 100.0000%

Now create a tag conduitacr.azurecr.io/conduit:v1 that refers to the source image conduit:v1:

> docker tag conduit:v1 conduitacr.azurecr.io/conduit:v1

Then, run the following command to push the conduitacr.azurecr.io/conduit:v1 image to the conduitacr repository:

> docker push conduitacr.azurecr.io/conduit:v1
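To confirm the push succeeded, you can list the tags stored for the repository; after pushing, the output should include "v1":

```text
> az acr repository show-tags --name conduitacr --repository conduit
```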

A Kubernetes manifest file defines the desired cluster state, such as what container images to run. Let’s create a manifest file named conduit-app.yml containing the following YAML definition:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: conduit
spec:
  replicas: 1
  selector:
    matchLabels:
      app: conduit
  template:
    metadata:
      labels:
        app: conduit
    spec:
      containers:
      - name: conduit
        # image: conduit:v1
        image: conduitacr.azurecr.io/conduit:v1
        resources:
          limits:
            memory: "256Mi"
            cpu: "500m"
        ports:
        - containerPort: 8000
        env:
        - name: DATABASE_NAME
          value: "conduit_db"
        - name: DATABASE_USER
          value: "myadmin@mydemoserver-20220116"
        - name: DATABASE_PASSWORD
          value: "123!@#qweQWE"
        - name: DATABASE_HOST
          value: "mydemoserver-20220116.postgres.database.azure.com"
        - name: AKS_HOST
          value: "my-aks-cluster-external-ip"
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - key: "app"
                    operator: In
                    values:
                    - conduit
              topologyKey: "kubernetes.io/hostname"
---
apiVersion: v1
kind: Service
metadata:
  name: conduit
spec:
  type: LoadBalancer
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8000
  selector:
    app: conduit

Finally, deploy the application using the kubectl apply command and specify your YAML manifest’s name:

> kubectl apply -f conduit-app.yml
deployment.apps/conduit created
service/conduit created

Testing the Application

A Kubernetes service exposes the running application’s front end to the Internet. Provisioning the external IP address can take a few minutes.

Use the kubectl get service command with the --watch argument to monitor progress.

> kubectl get service conduit --watch

NAME      TYPE           CLUSTER-IP   EXTERNAL-IP     PORT(S)        AGE
conduit   LoadBalancer   10.0.81.75   52.147.222.26   80:32599/TCP   15s
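If the external IP stays pending or the app doesn’t respond, inspecting the pods and their logs usually reveals why. A sketch using standard kubectl commands:

```text
> kubectl get pods -l app=conduit
> kubectl logs deployment/conduit
```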

Now open the conduit-cluster resource:

Open the Workloads tab and find the conduit deployment:

Then click the YAML tab to edit the configuration. Look for the AKS_HOST environment setting:

Then, change the AKS_HOST environment variable’s value to the external IP reported earlier by the kubectl get service command.

Finally, save the changes to the YAML configuration and go to http://<<EXTERNAL-IP>>/api/articles.
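As an alternative to editing the YAML in the portal, the same change can be made with kubectl. This is a sketch; replace the IP with the EXTERNAL-IP value from your own kubectl get service output:

```text
> kubectl set env deployment/conduit AKS_HOST=52.147.222.26
```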

Next Steps

In this article, we further modernized our Python app by moving it to Azure Kubernetes Service (AKS). We used Docker and Visual Studio Code to generate a container image from the Conduit Django app we’ve been working with since the beginning of this series.

We then created Azure resources related to AKS, including a resource group, deployment, service, container registry, and Kubernetes cluster. Next, we pushed our local containerized app image to the container registry and started the AKS service to test our containerized app online.

Moving your Conduit app to Azure Kubernetes Service offers your organization many benefits, including simplifying the deployment and management of microservices-based architecture, streamlining horizontal scaling, and enabling self-healing, load balancing, and secret management.

Although we previously shifted our data into the cloud-hosted Azure Database for PostgreSQL, we can go further into data modernization. Continue to Part 5 of this series to migrate your data and app to a new Cosmos DB database using Djongo Mapper and Cosmos DB’s MongoDB API.

To learn more about how to build, deliver, and scale container-based applications faster with Kubernetes, and how to deploy and manage containers at scale with Kubernetes on Azure, check out Get up and running with Kubernetes.

