Eventually - Taking Kubernetes for a Spin
source link: https://devops.datenkollektiv.de/eventually-taking-kubernetes-for-a-spin.html
More than two years have passed since the announcement Kubernetes is Now Available In Docker Desktop Stable Channel, so it's high time to take Kubernetes for a spin.
Kubernetes (K8s) - Production-Grade Container Orchestration - Automated container deployment, scaling, and management
It takes two major components for this experiment:
Prerequisites:
Docker Desktop - The fastest way to containerize applications on your desktop
Please follow the installation instructions for your operating system.
Check the installation with `docker --version`.
The `kubectl` command-line tool lets you control Kubernetes clusters. In case you are using a Mac: `brew install kubectl`, otherwise please check Install and Set Up kubectl.

Check the installation with `kubectl version --client`.
With `cluster-info` we get the first insights:

```shell
$ kubectl cluster-info
Kubernetes master is running at https://kubernetes.docker.internal:6443
KubeDNS is running at https://kubernetes.docker.internal:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
```
Looks good, the master is running...
Tune into the playground cluster
Usually, you want access to additional clusters, e.g. development, canary, … you name it.
Inspired by Configure Access to Multiple Clusters we tune into a K8s playground (running unsecured on `localhost:8080`) alongside our local K8s cluster.
Tune into the playground cluster with the configuration option `set-cluster`...

```shell
kubectl config --kubeconfig=config-playground set-cluster playground --server=http://localhost:8080 --insecure-skip-tls-verify
```
...finishing touches to the sandbox context with `set-context`:

```shell
kubectl config --kubeconfig=config-playground set-context sandbox --cluster=playground --namespace=default --user=developer
```
View the result with `kubectl config --kubeconfig=config-playground view`.
Switch the context with `kubectl config --kubeconfig=config-playground use-context sandbox`, check your current context with `kubectl config current-context`, and you are ready to go!
Note: You can use the environment variable `KUBECONFIG` to avoid the cumbersome `--kubeconfig` parameter.
You should see something similar to:
```shell
$ export KUBECONFIG=config-playground
$ kubectl config view --minify
apiVersion: v1
clusters:
- cluster:
    insecure-skip-tls-verify: true
    server: http://localhost:8080
  name: playground
contexts:
- context:
    cluster: playground
    namespace: default
    user: developer
  name: sandbox
current-context: sandbox
kind: Config
preferences: {}
users:
- name: developer
  user: {}
```
The first pod - Dashboard
Let's continue with two more tools in the K8s ecosystem: The Web UI / Dashboard and Helm.
Deployment with kubectl
We'll deploy the Web UI (Dashboard) via `kubectl` first:

```shell
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.4/aio/deploy/recommended.yaml
```
Note: As you might have noticed, we used a deployment snippet from the internet.
You might want to check the latest version (and the content) directly at the GitHub project kubernetes/dashboard...
Get the list of all pods in the namespace `kubernetes-dashboard`:

```shell
$ kubectl get pods --namespace kubernetes-dashboard
NAME                                         READY   STATUS    RESTARTS   AGE
dashboard-metrics-scraper-6b4884c9d5-v5qlf   1/1     Running   0          111m
kubernetes-dashboard-7d8574ffd9-v6crt        1/1     Running   0          111m
```
No ports are publicly accessible by default. Run `kubectl proxy` to expose the dashboard.
Add a `ServiceAccount` and a `ClusterRoleBinding` with the shell script `create-dashboard-user.sh`.
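Under the hood, such a script essentially applies two manifests. A minimal sketch along the lines of the kubernetes/dashboard access-control documentation (the `admin-user` name matches the cleanup commands later in this article):

```yaml
# ServiceAccount the dashboard login token belongs to
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
# Bind the account to the built-in cluster-admin role
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
```

Note that `cluster-admin` grants full access, which is fine for a local playground but not for anything shared.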
Grab the token with `describe secret`...for more details check the script mentioned above.
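Since the token secret's name carries a random suffix, it helps to extract it programmatically. A self-contained sketch on canned output (the secret names here are made up for illustration):

```shell
# Hypothetical sample of: kubectl -n kubernetes-dashboard get secrets
secrets='NAME                     TYPE                                  DATA   AGE
admin-user-token-x7k2p   kubernetes.io/service-account-token   3      2m
default-token-9fqb4      kubernetes.io/service-account-token   3      10m'

# Pick the admin-user token secret by its name prefix (column 1)
name=$(printf '%s\n' "$secrets" | awk '$1 ~ /^admin-user-token/ {print $1}')
echo "$name"   # admin-user-token-x7k2p

# ...which would then feed into:
#   kubectl -n kubernetes-dashboard describe secret "$name"
```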
Visit the proxy URL http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/ and enter the token...that's it!
Deployment with Helm
Helm - The package manager for Kubernetes
Instead of using `kubectl` with YAML, you can use Helm with a bunch of Kubernetes packages.
On a Mac, use `brew install helm` to install Helm, otherwise check Installing Helm. Verify the installation with `helm version`.
Artifact HUB - Find, install and publish Kubernetes packages
Installing the dashboard (using a so-called chart) with `helm install` would look something like this (assuming the chart repository has been added first, e.g. with `helm repo add kubernetes-dashboard https://kubernetes.github.io/dashboard/`):

```shell
$ helm install local-dashboard kubernetes-dashboard/kubernetes-dashboard
...
Get the Kubernetes Dashboard URL by running:
  export POD_NAME=$(kubectl get pods -n default -l "app.kubernetes.io/name=kubernetes-dashboard,app.kubernetes.io/instance=local-dashboard" -o jsonpath="{.items[0].metadata.name}")
  echo https://127.0.0.1:8443/
  kubectl -n default port-forward $POD_NAME 8443:8443
```
Note: At the time of writing, I had certificate issues when accessing the dashboard locally.
Please check https://artifacthub.io/packages/helm/k8s-dashboard/kubernetes-dashboard for more details about this chart.
A simple `helm list` generates what you would expect … a list of deployments.
To round up this section run:

```shell
$ helm delete local-dashboard
release "local-dashboard" uninstalled
```
Cleanup
Running `kubectl delete pod <NAME>` will only result in new pods being spawned, because the deployment recreates them...
Extract the `NODE` the Kubernetes dashboard pods run on with `kubectl` and `awk`...

```shell
$ kubectl get pods --namespace kubernetes-dashboard -o wide | awk '{print $7,$1}'
NODE NAME
docker-desktop dashboard-metrics-scraper-6b4884c9d5-v5qlf
docker-desktop kubernetes-dashboard-7d8574ffd9-9db86
```
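The same `awk` idea works without the header line by skipping record 1. A self-contained sketch on canned `-o wide` output (the IP addresses are made up for illustration):

```shell
# Canned sample of: kubectl get pods --namespace kubernetes-dashboard -o wide
pods='NAME                                        READY  STATUS   RESTARTS  AGE   IP         NODE            NOMINATED NODE  READINESS GATES
dashboard-metrics-scraper-6b4884c9d5-v5qlf  1/1    Running  0         111m  10.1.0.10  docker-desktop  <none>          <none>
kubernetes-dashboard-7d8574ffd9-9db86       1/1    Running  0         111m  10.1.0.11  docker-desktop  <none>          <none>'

# NR>1 skips the header; print NODE (column 7) and NAME (column 1)
printf '%s\n' "$pods" | awk 'NR>1 {print $7, $1}'
# docker-desktop dashboard-metrics-scraper-6b4884c9d5-v5qlf
# docker-desktop kubernetes-dashboard-7d8574ffd9-9db86
```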
Run `delete deployments` to properly delete the dashboard:

```shell
$ kubectl delete deployments --namespace kubernetes-dashboard kubernetes-dashboard
deployment.apps "kubernetes-dashboard" deleted
```
Tidy up the playground with the deletion of the service account and the role binding...

```shell
kubectl -n kubernetes-dashboard delete serviceaccount admin-user
kubectl -n kubernetes-dashboard delete clusterrolebinding admin-user
```
The cluster is as good as new...have fun on your K8s journey! 🎉
Bonus - Troubleshooting with Octant
Octant - Visualize your Kubernetes workloads
Run the command `brew install octant` in case you are using a Mac; otherwise check the releases from the GitHub project vmware-tanzu/octant.
Visit the Octant Overview with the browser of your choice to gain insights and start troubleshooting your K8s cluster(s).
k9s - Kubernetes Manager for Console Power Users
k9s - Kubernetes CLI To Manage Your Clusters In Style!
The documentation starts with Who Let The Pods Out? A proof that technical documentation can be fun to read…
Bonus - Katacoda
Visit the Katacoda Kubernetes Playground for additional hands-on experiments...