From 9ce68bc1033be1458dbad3440b0c04d1fc575c7c Mon Sep 17 00:00:00 2001
From: Stefan Reimer
Date: Wed, 13 Apr 2022 15:08:26 +0200
Subject: [PATCH] chore: test markdown

---
 releases/v1.22/README.md | 33 +++++++++++++++++++++++----------
 1 file changed, 23 insertions(+), 10 deletions(-)

diff --git a/releases/v1.22/README.md b/releases/v1.22/README.md
index 601b120..45bc064 100644
--- a/releases/v1.22/README.md
+++ b/releases/v1.22/README.md
@@ -1,31 +1,44 @@
+---
+title: KubeZero 1.22
+---
+
+# Release notes
+
+## Custom AMIs
+Starting with 1.22.X, all KubeZero nodes will boot from custom pre-baked AMIs. These AMIs will be provided and shared by the Zero Down Time AWS account.
+This change eliminates *ALL* dependencies at boot time other than container registries. Gone are the days when Ubuntu, SuSE or GitHub decided to ruin your morning coffee.
+
+While we are at it, KubeZero also moves from Ubuntu 20.04 LTS to Alpine 3.15 as its base OS.
+
+## Misc
+- new node label `node.kubernetes.io/instance-type` on all nodes, containing the EC2 instance type
+- container runtime migrated from runc to crun, reducing the memory overhead per pod from 16M to 4M; more info: https://www.redhat.com/sysadmin/introduction-crun
+
+
+## Upgrade
+
 *Ensure your Kube context points to the correct cluster !!!*
 
 1. Trigger the cluster upgrade:
-`./upgrade_121.sh`
+`./upgrade_122.sh`
 
 2. Upgrade CFN stacks for the control plane and all worker groups
-Change Kubernetes version in controller config from `1.20.X` to `1.21.9`
+Change Kubernetes version in controller config from `1.21.9` to `1.22.8`
 
 3. Reboot controller(s) one by one
 Wait each time for controller to join and all pods running.
 Might take a while ...
 
-4. Patch current deployments, blocking ArgoCD otherwise
-`./kubezero_121.sh`
-
-5. Migrate ArgoCD config for the cluster
+4. Migrate ArgoCD config for the cluster
 `./migrate_argo.sh `
 Adjust as needed, eg. ensure eck-operator is enabled if needed.
 git add / commit / push
 Watch ArgoCD do its work.
 
-6. Replace worker nodes
+5. Replace worker nodes
 Eg. by doubling `desired` for each worker ASG,
 once all new workers joined, drain old workers one by one,
 finally reset `desired` for each worker ASG which will terminate the old workers.
 
 ## Known issues
 
-On old/current workers, until workers get replaced:
-If pods seem stuck, eg. fluent-bit shows NotReady *after* control nodes have been upgraded
- -> restart `kube-proxy` on the affected workers
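A quick way to see the new `node.kubernetes.io/instance-type` label from the *Misc* section above, assuming plain `kubectl` access to the upgraded cluster; `-L` merely adds the label value as an extra column:

```bash
# List all nodes together with their EC2 instance type,
# taken from the new node.kubernetes.io/instance-type label.
kubectl get nodes -L node.kubernetes.io/instance-type
```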
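After steps 2 and 3, the version bump from `1.21.9` to `1.22.8` on the control plane can be confirmed with a minimal check; nothing KubeZero-specific is assumed here:

```bash
# The VERSION column should show v1.22.8 on every rebooted controller;
# workers keep reporting the old kubelet version until they are replaced in step 5.
kubectl get nodes -o wide
```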
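The "wait each time for controller to join and all pods running" part of step 3 can be watched with standard `kubectl`; a sketch only:

```bash
# Watch the rebooted controller return to Ready ...
kubectl get nodes -w

# ... then list any pods that are not yet Running or Completed.
kubectl get pods -A --field-selector=status.phase!=Running,status.phase!=Succeeded
```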
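Step 5 (replacing the worker nodes) could look roughly like the following with the AWS CLI; the ASG name `kubezero-workers`, the node name and both capacity values are placeholders and have to be adapted to the actual worker groups:

```bash
# 1. Double `desired` so new workers (booting the new AMI) come up next to the old ones.
aws autoscaling set-desired-capacity \
  --auto-scaling-group-name kubezero-workers \
  --desired-capacity 6

# 2. Once the new workers have joined, drain the old ones one by one.
kubectl drain <old-worker-node> --ignore-daemonsets --delete-emptydir-data

# 3. Finally reset `desired`; the ASG scales back in and terminates the old workers.
aws autoscaling set-desired-capacity \
  --auto-scaling-group-name kubezero-workers \
  --desired-capacity 3
```

Which instances the final scale-in removes depends on the ASG termination policy; the default favours instances with the oldest launch template, but it is worth verifying before relying on it.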