# KubeZero 1.22
## What's new - Major themes
### Alpine - Custom AMIs
Starting with 1.22, all KubeZero nodes boot from custom pre-baked AMIs. These AMIs are provided and shared by Zero Down Time with all customers. All sources and the build pipeline remain freely [available](https://git.zero-downtime.net/ZeroDownTime/alpine-zdt-images) as usual.
This eliminates *ALL* dependencies at boot time other than container registries. Gone are the days when Ubuntu, SuSE or GitHub decided to ruin your morning coffee.
KubeZero also migrates from Ubuntu 20.04 LTS to [Alpine v3.15](https://www.alpinelinux.org/releases/) as its base OS, which reduces the root file system size from 8GB to 2GB.
Additionally, all AMIs are encrypted, which ensures encryption at rest even for every instance's root file system. This closes the last gap in achieving *full encryption at rest* for every volume within a default KubeZero deployment.
### DNS
The [external-dns](https://github.com/kubernetes-sigs/external-dns) controller has been integrated and is used to provide DNS-based load balancing for the apiserver itself. This enables highly available control planes on AWS as well as on bare metal, in combination with various DNS providers.
Further use of this controller to automate other DNS-related configuration, such as Ingress, is planned for upcoming releases.
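
For those upcoming releases, external-dns is typically driven by annotations. A sketch of how it derives a DNS record from a Service annotation (service name and hostname are hypothetical examples, not part of KubeZero):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: example                # hypothetical service
  annotations:
    # external-dns creates a record for this hostname pointing at the LB
    external-dns.alpha.kubernetes.io/hostname: api.example.com
spec:
  type: LoadBalancer
  ports:
    - port: 443
```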
### Container runtime
cri-o now uses crun rather than runc, which reduces the memory overhead *per pod* from 16M to 4M; details in the [crun intro](https://www.redhat.com/sysadmin/introduction-crun).
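
The switch corresponds to changing cri-o's default OCI runtime. A minimal `/etc/crio/crio.conf` fragment for illustration (the crun binary path may differ per distribution):

```toml
[crio.runtime]
default_runtime = "crun"

[crio.runtime.runtimes.crun]
runtime_path = "/usr/bin/crun"
runtime_type = "oci"
```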
## Version upgrades
- Istio to 1.13.2 using new upstream Helm charts
- aws-termination-handler to 1.16
- aws-iam-authenticator to 0.5.7, required for Kubernetes 1.22 and later, which also allows using the latest version on the client side again
## Misc
- new metrics and dashboards for openEBS LVM CSI drivers
- new node label `node.kubernetes.io/instance-type` for all nodes containing the EC2 instance type
- kubelet root moved to `/var/lib/containers` to ensure ephemeral storage is allocated from the configurable volume rather than the root fs of the worker
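
The new instance-type label can be used directly for scheduling. A hypothetical pod spec fragment (the instance type value is only an example):

```yaml
nodeSelector:
  node.kubernetes.io/instance-type: m5.large
```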
# Upgrade
`(No, really, you MUST read this before you upgrade)`
- Ensure your Kube context points to the correct cluster!
- Ensure any usage of Kiam has been migrated to OIDC providers, as any remaining Kiam components will be deleted as part of the upgrade
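
A quick pre-flight check along the lines of the two points above could look like this (purely illustrative; `kiam` is matched case-insensitively):

```shell
# Show which cluster the current kubectl context points to.
kubectl config current-context

# Look for leftover Kiam components; anything found here will be
# deleted by the upgrade and must be migrated to OIDC first.
kubectl get pods --all-namespaces | grep -i kiam \
  || echo "no Kiam pods found"
```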
1. Trigger the cluster upgrade:
`./release/v1.22/upgrade_cluster.sh`
2. Upgrade CFN stacks for the control plane and all worker groups.
Change the Kubernetes version in the controller config from `1.21.9` to `1.22.8`.
3. Reboot the controller(s) one by one.
Wait each time for the controller to rejoin and for all pods to be running.
This might take a while ...
4. Migrate the ArgoCD config for the cluster:
`./migrate_argo.sh <cluster/env/kubezero/application.yaml>`
Adjust as needed, e.g. ensure eck-operator is enabled if required.
git add / commit / push
Watch ArgoCD do its work.
5. Replace worker nodes.
E.g. double `desired` for each worker ASG;
once all new workers have joined, drain the old workers one by one;
finally reset `desired` for each worker ASG, which will terminate the old workers.
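
A sketch of step 5 using the AWS CLI, assuming a single hypothetical worker ASG name (adjust to your environment and repeat per worker group):

```shell
ASG="kubezero-workers"  # placeholder, use your worker ASG name

# Read the current desired capacity ...
current=$(aws autoscaling describe-auto-scaling-groups \
  --auto-scaling-group-names "$ASG" \
  --query 'AutoScalingGroups[0].DesiredCapacity' --output text)

# ... and double it so replacement nodes join on the new AMI first.
aws autoscaling set-desired-capacity \
  --auto-scaling-group-name "$ASG" --desired-capacity $((current * 2))

# Once the new workers are Ready, drain the old ones one by one:
#   kubectl drain <old-node> --ignore-daemonsets --delete-emptydir-data

# Finally reset desired, which terminates the drained old workers.
aws autoscaling set-desired-capacity \
  --auto-scaling-group-name "$ASG" --desired-capacity "$current"
```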
## Known issues