doc: refine README

Stefan Reimer 2022-02-02 17:16:14 +01:00
parent 8274406aee
commit ad3af99c10
1 changed file with 19 additions and 13 deletions


*Ensure your Kube context points to the correct cluster !!!*
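
A quick way to verify this with plain kubectl (nothing KubeZero-specific) before touching anything:

```
kubectl config current-context   # must name the cluster you intend to upgrade
kubectl get nodes -o wide        # node names/versions should match the target cluster
```
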
1. Trigger the cluster upgrade:
`./upgrade_121.sh`
2. Upgrade CFN stacks for the control plane and all worker groups
Change Kubernetes version in controller config from `1.20.X` to `1.21.9`
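
How this looks depends on your CFN tooling; a rough sketch with the plain AWS CLI, where the stack name and parameter key are placeholders for whatever your templates actually use:

```
# hypothetical stack name / parameter key - adjust to your CFN templates
aws cloudformation update-stack \
  --stack-name <cluster>-controller \
  --use-previous-template \
  --parameters ParameterKey=KubernetesVersion,ParameterValue=1.21.9 \
  --capabilities CAPABILITY_IAM

aws cloudformation wait stack-update-complete --stack-name <cluster>-controller
```
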
3. Reboot controller(s) one by one
Wait each time for the controller to rejoin and for all pods to be running.
Might take a while ...
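
Plain kubectl is enough to watch this:

```
kubectl get nodes -o wide                            # controller should be Ready again
kubectl get pods -A | grep -vE 'Running|Completed'   # only the header line should remain
```
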
4. Patch current deployments that would otherwise block ArgoCD:
`./kubezero_121.sh`
5. Migrate ArgoCD config for the cluster
`./migrate_argo.sh <cluster/env/kubezero/application.yaml>`
Adjust as needed, e.g. ensure the eck-operator is enabled if required.
git add / commit / push
Watch ArgoCD do its work.
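
The git part is the usual flow; the resulting ArgoCD Application can then be watched with kubectl (this assumes ArgoCD runs in the `argocd` namespace):

```
git add <cluster/env/kubezero/application.yaml>
git commit -m "Migrate KubeZero config to 1.21"
git push

# assumes ArgoCD lives in the argocd namespace
kubectl get applications -n argocd -w
```
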
6. Replace worker nodes
E.g. by doubling `desired` for each worker ASG.
Once all new workers have joined, drain the old workers one by one.
Finally, reset `desired` for each worker ASG, which will terminate the old workers.
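
A sketch of one such iteration with the AWS CLI and plain kubectl; ASG and node names are placeholders:

```
# hypothetical ASG name - double the current desired capacity
aws autoscaling set-desired-capacity \
  --auto-scaling-group-name <cluster>-workers --desired-capacity 6

# once the new workers are Ready, drain old workers one by one
kubectl drain <old-worker> --ignore-daemonsets --delete-emptydir-data

# reset desired capacity; the drained old workers get terminated
aws autoscaling set-desired-capacity \
  --auto-scaling-group-name <cluster>-workers --desired-capacity 3
```
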
## Known issues
- On old/current workers, until they get replaced:
  If pods seem stuck, e.g. fluent-bit shows NotReady *after* the control nodes have been upgraded,
  -> restart `kube-proxy` on the affected workers
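
One way to do that with plain kubectl, assuming the kubeadm-style `k8s-app=kube-proxy` label on the DaemonSet pods:

```
# delete the kube-proxy pod on the affected node; the DaemonSet recreates it
kubectl -n kube-system delete pod \
  -l k8s-app=kube-proxy --field-selector spec.nodeName=<affected-worker>
```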