# KubeZero 1.24
## TODO
## What's new - Major themes
- Cilium is now the default CNI, Calico has been removed
- cluster-autoscaler is enabled by default on AWS
- worker nodes are now updated fully automatically to the latest AMI and config in a rolling fashion
- integrated Bitnami Sealed Secrets controller (see the sketch after this list)
- reduced average CPU load on controller nodes to well below the 20% threshold, preventing extra costs from CPU credits in most cases
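
A minimal sketch of the Sealed Secrets workflow, assuming the controller runs as `sealed-secrets` in `kube-system` (adjust the name and namespace to wherever KubeZero deploys it): a plain Secret is encrypted with `kubeseal` and only the resulting SealedSecret is committed to git.

```bash
# Render a regular Secret locally (never commit this file).
kubectl create secret generic my-secret \
  --from-literal=password=s3cr3t --dry-run=client -o yaml > my-secret.yaml

# Encrypt it against the in-cluster controller; name/namespace are assumptions.
kubeseal --controller-name sealed-secrets --controller-namespace kube-system \
  --format yaml < my-secret.yaml > my-sealedsecret.yaml

# The SealedSecret is safe to commit; the controller decrypts it in-cluster.
kubectl apply -f my-sealedsecret.yaml
```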
## Version upgrades
- cilium
- metallb
- nvidia-device-plugin
- aws-node-termination-handler
- aws-ebs-csi-driver
- aws-efs-csi-driver
- istio 1.16
- argocd 2.5.5 + tweaks
- all things Prometheus, incl. automated muting of certain alarms, e.g. CPUOverCommit when the cluster-autoscaler is available
### FeatureGates
- PodAndContainerStatsFromCRI
- DelegateFSGroupToCSIDriver
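
To confirm the gates are active on a node after the upgrade, one option (not part of the official procedure; requires `jq`) is to read the kubelet's live config through the API server proxy:

```bash
# Dump the kubelet feature gates of the first node via the /configz proxy endpoint.
NODE=$(kubectl get nodes -o jsonpath='{.items[0].metadata.name}')
kubectl get --raw "/api/v1/nodes/${NODE}/proxy/configz" | jq '.kubeletconfig.featureGates'
```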
# Upgrade
`(No, really, you MUST read this before you upgrade)`
Ensure your Kube context points to the correct cluster!
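
A quick sanity check before going any further:

```bash
# Make sure you are operating on the intended cluster.
kubectl config current-context
kubectl cluster-info
```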
1. Review the CFN config for controllers and workers; there are no mandatory changes in this release though
2. Upgrade the CFN stacks for the control plane *ONLY*!
Updating the worker CFN stacks would trigger rolling updates right away!
3. Trigger cluster upgrade:
`./admin/upgrade_cluster.sh <path to the argocd app kubezero yaml for THIS cluster>`
4. Review the kubezero-config and, if all looks good, commit the ArgoApp resource for KubeZero via regular git
git add / commit / push `<cluster/env/kubezero/application.yaml>`
*DO NOT re-enable ArgoCD until all pre-v1.24 workers have been replaced!!!*
5. Reboot controller(s) one by one
Wait each time for the controller to rejoin and for all pods to be running (see the monitoring snippet after the list).
This might take a while ...
6. Upgrade the CFN stacks for the workers.
This in turn will trigger automated worker updates by evicting pods and launching new workers in a rolling fashion.
Grab a coffee and keep an eye on the cluster to be safe ...
Depending on your cluster size it might take a while to roll over all workers!
7. Re-enable ArgoCD by hitting `<return>` in the still-waiting upgrade script
8. Head over to ArgoCD and sync the KubeZero main module as soon as possible to reduce potential back and forth in case ArgoCD still holds legacy state
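
For steps 5 and 6, a simple way to keep an eye on the roll-over is plain `kubectl`, just watching nodes and non-running pods:

```bash
# Watch nodes reboot / get replaced during the controller and worker roll-over.
kubectl get nodes -o wide -w

# In a second terminal, watch for pods that are not (yet) Running.
kubectl get pods -A --field-selector=status.phase!=Running -w
```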
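
For step 8, the sync can also be triggered from the `argocd` CLI instead of the UI; the application name `kubezero` is an assumption, use whatever your root app is called:

```bash
# Sync the KubeZero main module right after re-enabling ArgoCD.
# 'kubezero' is an assumed application name; adjust if yours differs.
argocd app sync kubezero
```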
## Known issues
### existing EFS volumes
If pods are getting stuck in `Pending` during the worker upgrade, check the status of any EFS PVC.
In case any PVC is in status `Lost`, edit the PVC and remove the following annotation:
```
pv.kubernetes.io/bind-completed: "yes"
```
This will instantly rebind the PVC to its PV and allow the pods to migrate.
This will be fixed during the v1.25 cycle by a planned rework of the EFS storage module.
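
A sketch of the fix with plain `kubectl`, using placeholder names; the trailing `-` on the annotation key removes it:

```bash
# Find any PVC stuck in status "Lost".
kubectl get pvc -A

# Drop the stale bind annotation from an affected PVC to let it rebind.
kubectl annotate pvc <pvc-name> -n <namespace> pv.kubernetes.io/bind-completed-
```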