# KubeZero 1.24

## TODO

## What's new - Major themes

- Cilium is now the default CNI; Calico has been removed
- cluster-autoscaler is enabled by default on AWS
- worker nodes are now fully automatically updated to the latest AMI and config in a rolling fashion
- integrated Bitnami Sealed Secrets controller
- reduced avg. CPU load on controller nodes to well below the 20% threshold, preventing extra costs from CPU credits in most cases

## Version upgrades

- cilium
- metallb
- nvidia-device-plugin
- aws-node-termination-handler
- aws-ebs-csi-driver
- aws-efs-csi-driver
- istio 1.16
- argocd 2.5.5 + tweaks
- all things Prometheus, incl. automated muting of certain alarms, e.g. CPUOverCommit when the cluster-autoscaler is available

### FeatureGates

- PodAndContainerStatsFromCRI
- DelegateFSGroupToCSIDriver

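To check whether these gates are actually active on a node, the kubelet's live configuration can be inspected via the API server's `configz` proxy endpoint; a quick sketch (the node name is a placeholder):

```bash
# Dump the kubelet's running config and pull out the featureGates section
kubectl get --raw "/api/v1/nodes/<node-name>/proxy/configz" \
  | grep -o '"featureGates":{[^}]*}'
```
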
# Upgrade

`(No, really, you MUST read this before you upgrade)`

Ensure your Kube context points to the correct cluster!

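A quick sanity check with plain kubectl before touching anything:

```bash
# Show which cluster kubectl is currently talking to
kubectl config current-context

# Double-check the API server endpoint behind that context
kubectl cluster-info
```
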
1. Review the CFN config for controllers and workers; there are no mandatory changes during this release though.

2. Upgrade the CFN stacks for the control plane *ONLY*!
   Updating the worker CFN stacks would trigger rolling updates right away!

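If you drive CloudFormation from the CLI rather than the console, the control-plane update looks roughly like this; the stack name and template path are placeholders, not KubeZero conventions:

```bash
# Update ONLY the control-plane stack, do NOT touch the worker stacks yet
aws cloudformation update-stack \
  --stack-name <controller-stack> \
  --template-body file://<controller-template>.yaml \
  --capabilities CAPABILITY_IAM
```
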
3. Trigger the cluster upgrade:
   `./admin/upgrade_cluster.sh <path to the argocd app kubezero yaml for THIS cluster>`

4. Review the kubezero-config and, if all looks good, commit the ArgoApp resource for KubeZero via regular git:
   git add / commit / push `<cluster/env/kubezero/application.yaml>`

   *DO NOT re-enable ArgoCD until all pre-v1.24 workers have been replaced!!!*

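For example, reusing the placeholder path from above with an illustrative commit message:

```bash
# Stage, commit and push the updated ArgoCD Application resource
git add <cluster/env/kubezero/application.yaml>
git commit -m "Upgrade KubeZero to v1.24"
git push
```
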
5. Reboot the controller(s) one by one.
   Wait each time for the controller to rejoin and for all pods to be running.
   Might take a while ...

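To verify a controller has fully rejoined before rebooting the next one, plain kubectl is enough:

```bash
# The rebooted controller should report Ready again
kubectl get nodes -o wide

# No pods should be stuck in a non-Running, non-Succeeded phase
kubectl get pods -A --field-selector=status.phase!=Running,status.phase!=Succeeded
```
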
6. Upgrade the CFN stacks for the workers.
   This in turn triggers the automated worker update, evicting pods and launching new workers in a rolling fashion.
   Grab a coffee and keep an eye on the cluster to be safe ...
   Depending on your cluster size, it might take a while to roll over all workers!

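One way to follow the rollover; the VERSION column makes pre- and post-1.24 workers easy to tell apart:

```bash
# Watch workers being drained and replaced in a rolling fashion
kubectl get nodes --watch
```
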
7. Re-enable ArgoCD by hitting <return> in the still-waiting upgrade script.

8. Quickly head over to ArgoCD and sync the KubeZero main module as soon as possible, to reduce potential back and forth in case ArgoCD holds legacy state.

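If you prefer the CLI over the UI, the same sync can be triggered with the argocd CLI; the application name `kubezero` is an assumption here and may differ in your setup:

```bash
# Sync the KubeZero root application (name may differ per cluster)
argocd app sync kubezero
```
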
## Known issues

### Existing EFS volumes

If pods are getting stuck in `Pending` during the worker upgrade, check the status of any EFS PVCs.
If any PVC is in status `Lost`, edit the PVC and remove the following annotation:

```
pv.kubernetes.io/bind-completed: "yes"
```

This instantly rebinds the PVC to its PV and allows the pods to migrate.

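The same fix without opening an editor, using plain kubectl; a trailing `-` after the key removes the annotation:

```bash
# Find PVCs stuck in status Lost
kubectl get pvc -A | grep Lost

# Drop the stale bind annotation from an affected PVC
kubectl annotate pvc <pvc-name> -n <namespace> pv.kubernetes.io/bind-completed-
```
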
This is going to be fixed during the v1.25 cycle by a planned rework of the EFS storage module.