Gitpod+RKE2 Deployment Notes · GitHub
source link: https://gist.github.com/kevinraymond/bd8c6513555e41d84d108aaae0acaee3
Gitpod Self-hosted + RKE2
My journey so far making Gitpod play nice with RKE2, and other notes. This is what I've got working in the very small amount of time I have to work on this. Hopefully this helps as a starting point for a more robust and supported Gitpod+RKE2 solution.
This only documents some of the specific configurations, not everything needed to create the cluster itself (e.g., VPC, DNS).
I will update as time permits along the way!
Reference: gitpod-io/gitpod#5410
Current Cluster Summary
- Rancher RKE2 Kubernetes distribution v1.21.4+rke2r2
- Deployed on AWS EC2 instances using the latest Ubuntu 20.04 AMIs
- Components (e.g., containerd, runc) and versions are listed on the release page
- Gitpod Self-hosted v0.10.0
- cert-manager v1.5.3
- Harbor registry v2.3.2
Configuration Details
RKE2
The following configurations are applied during node creation via user data.
First server node configuration:
/etc/NetworkManager/conf.d/rke2-canal.conf
[keyfile]
unmanaged-devices=interface-name:cali*;interface-name:flannel*
The Canal config above was applied before I changed the CNI to Multus/Calico; I just left the file in place. It's documented here to reflect the current state.
/etc/rancher/rke2/config.yaml
write-kubeconfig-mode: 644
cloud-provider-name: aws
node-name: $(hostname -f)
selinux: true
cni: multus,calico
tls-san:
  - <TLD_NAME>
  - api.<TLD_NAME>
  - apps.<TLD_NAME>
token: <SECRET_TOKEN>
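Since the configs are applied via user data, the first server node's provisioning can be sketched roughly as the script below. The heredoc contents mirror the files above; the install and enable commands follow the standard RKE2 install docs, and `INSTALL_RKE2_VERSION` is an assumption matching the cluster summary. Note the unquoted heredoc so `$(hostname -f)` expands at boot.

```shell
#!/usr/bin/env bash
# Sketch of EC2 user data for the first server node (assumptions noted above).
set -euo pipefail

mkdir -p /etc/NetworkManager/conf.d /etc/rancher/rke2

# Keep NetworkManager's hands off the CNI-managed interfaces.
cat > /etc/NetworkManager/conf.d/rke2-canal.conf <<'EOF'
[keyfile]
unmanaged-devices=interface-name:cali*;interface-name:flannel*
EOF

# Unquoted heredoc: $(hostname -f) expands when the instance boots.
cat > /etc/rancher/rke2/config.yaml <<EOF
write-kubeconfig-mode: 644
cloud-provider-name: aws
node-name: $(hostname -f)
selinux: true
cni: multus,calico
tls-san:
  - <TLD_NAME>
  - api.<TLD_NAME>
  - apps.<TLD_NAME>
token: <SECRET_TOKEN>
EOF

# Official RKE2 installer; pin the version used by this cluster.
curl -sfL https://get.rke2.io | INSTALL_RKE2_VERSION="v1.21.4+rke2r2" sh -
systemctl enable --now rke2-server.service
```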
Additional server nodes configuration:
Same as above, except append one line to /etc/rancher/rke2/config.yaml:
server: https://api.<TLD_NAME>:9345
Agent nodes configuration:
/etc/NetworkManager/conf.d/rke2-canal.conf
[keyfile]
unmanaged-devices=interface-name:cali*;interface-name:flannel*
/etc/rancher/rke2/config.yaml
cloud-provider-name: aws
node-name: $(hostname -f)
selinux: true
token: <SECRET_TOKEN>
server: https://api.<TLD_NAME>:9345
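Agent provisioning is the same pattern: write the two files above, then run the installer in agent mode. The `INSTALL_RKE2_TYPE=agent` variable and the `rke2-agent` service name come from the RKE2 install docs; the pinned version is an assumption.

```shell
#!/usr/bin/env bash
# Sketch of agent-node user data: lay down the configs above, then install
# and start RKE2 in agent mode.
curl -sfL https://get.rke2.io | \
  INSTALL_RKE2_TYPE="agent" INSTALL_RKE2_VERSION="v1.21.4+rke2r2" sh -
systemctl enable --now rke2-agent.service
```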
Gitpod
Helm custom values:
components:
  imageBuilder:
    registryCerts: []
    registry:
      secretName: image-builder-registry-secret
      name: harbor.<DOMAIN>/gitpod
      path: secrets/registry-auth.json
  proxy:
    serviceType: ClusterIP
  server:
    sessionSecret: <SOME_SECRET>
    defaultBaseImageRegistryWhitelist:
      - harbor.<DOMAIN>
  workspace:
    defaultImage:
      imagePrefix: harbor.<DOMAIN>/docker-cache/gitpod/
    pullSecret:
      secretName: image-builder-registry-secret
    template:
      default:
        spec:
          dnsPolicy: ClusterFirst
  wsDaemon:
    containerRuntime:
      containerd:
        socket: /run/k3s/containerd/containerd.sock
      nodeRoots:
        - /var/lib
        - /run/k3s/containerd/io.containerd.runtime.v2.task/k8s.io
    userNamespaces:
      fsShift: fuse

docker-registry:
  enabled: false

hostname: gitpod.<DOMAIN>

minio:
  accessKey: <ACCESS_KEY>
  secretKey: <SECRET_KEY>

rabbitmq:
  auth:
    username: rabbitmq
    password: <PASSWORD>
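For context, the values above reference an `image-builder-registry-secret` holding the Harbor robot credentials. A rough sketch of creating that secret and installing the chart follows; the robot account name and token are placeholders, and the release name is an assumption (the chart repo is the public https://charts.gitpod.io).

```shell
# Sketch: create the Harbor robot-account secret referenced by the chart
# values, then install the pinned chart version with those values.
kubectl create namespace gitpod
kubectl -n gitpod create secret docker-registry image-builder-registry-secret \
  --docker-server=harbor.<DOMAIN> \
  --docker-username='robot$gitpod' \
  --docker-password=<ROBOT_TOKEN>

helm repo add gitpod https://charts.gitpod.io
helm repo update
helm upgrade --install gitpod gitpod/gitpod \
  --namespace gitpod \
  --version 0.10.0 \
  -f values.yaml
```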
In addition to the above custom chart values, I have cert-manager installed with various ClusterIssuer resources for HTTP/DNS challenge validation. I created an Ingress to automatically apply certificates to the Gitpod endpoints instead of the currently documented manual configuration.
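A sketch of what the `letsencrypt-prod-dns` ClusterIssuer might look like is below. The Route53 DNS-01 solver is an assumption (the cluster runs on AWS, and wildcard hosts require DNS-01); the email and region are placeholders, and any other cert-manager solver would work the same way.

```yaml
# Sketch of a DNS-01 ClusterIssuer matching the name used in the Ingress
# annotation; solver choice and placeholders are assumptions.
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod-dns
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: <EMAIL>
    privateKeySecretRef:
      name: letsencrypt-prod-dns-account-key
    solvers:
      - dns01:
          route53:
            region: <AWS_REGION>
```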
gitpod-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: gitpod-proxy-ingress
  namespace: gitpod
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod-dns
    kubernetes.io/tls-acme: "true"
    kubernetes.io/ingress.class: nginx
    # Annotation values must be strings, so "0" needs quoting.
    nginx.ingress.kubernetes.io/proxy-body-size: "0"
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
spec:
  rules:
    - host: gitpod.<DOMAIN>
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: proxy
                port:
                  number: 443
    - host: "*.gitpod.<DOMAIN>"
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: proxy
                port:
                  number: 443
    - host: "*.ws.gitpod.<DOMAIN>"
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: proxy
                port:
                  number: 443
  tls:
    - hosts:
        - gitpod.<DOMAIN>
        - "*.gitpod.<DOMAIN>"
        - "*.ws.gitpod.<DOMAIN>"
      secretName: https-certificates
Harbor
Nothing special here. I've got docker-cache/gitpod, which is a Docker Hub cache, and gitpod for the workspace-images repository. Robot creds are in the image-builder-registry-secret mentioned above.
Notes
Random stuff, thoughts, spam ...
All other Gitpod components are defaults at this point, with no other external "production" modifications.
One of the issues I'd had the entire time is that, when creating a workspace with 0.10.0, my setup doesn't want to actually build any image within the workspace-images repository. I've had to manually push a properly tagged workspace-full image into that repository for it to do anything.
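The manual workaround amounts to mirroring the public image into Harbor by hand, roughly as below. The tag is a placeholder (use whatever tag your Gitpod version expects), and the target repo path assumes workspace-images lives under the gitpod project described in the Harbor section.

```shell
# Sketch of the manual workaround: mirror a properly tagged workspace-full
# image into the Harbor workspace-images repository. <TAG> is a placeholder.
docker pull gitpod/workspace-full:latest
docker tag gitpod/workspace-full:latest \
  harbor.<DOMAIN>/gitpod/workspace-images:<TAG>
docker push harbor.<DOMAIN>/gitpod/workspace-images:<TAG>
```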
As far as I understand it, Gitpod should pull workspace-full (assuming the default) from the public repository and then push it to the gitpod repository in Harbor. I don't see that happening at all.
Interestingly, I just tried 0.10.0-nightly within the last 15 minutes while writing this, and that does open the workspace and start to build an image, which is subsequently pushed to the workspace-images repository! I've got no idea why it doesn't work with 0.10.0 (no time to investigate). Looks like this has to do with the additional image-builder-mk3 pod ... I'll have to search for and read about that. Can't get a workspace to actually open though - not a big concern with nightly.
Another new issue is that gp is no longer working or found in any workspace I create - this used to work! I haven't changed the process used to deploy or test Gitpod between all the install/remove cycles, nor am I trying to use a custom image (strictly default for this testing). I'm wondering if the workspace-images image I pushed got messed up.