---
title: Release notes
author: Stefan Reimer
---

## Release notes

### Custom AMIs
Starting with 1.22, all KubeZero nodes will boot from custom pre-baked AMIs. These AMIs are provided and shared by Zero Down Time for all customers; all sources and the build pipeline are freely [available](https://git.zero-downtime.net/ZeroDownTime/alpine-zdt-images).
This eliminates *ALL* dependencies at boot time other than container registries. Gone are the days when Ubuntu, SuSE or GitHub decided to ruin your morning coffee.
KubeZero also migrates from Ubuntu 20.04 LTS to [Alpine v3.15](https://www.alpinelinux.org/releases/) as its base OS, which reduces the root file system size from 8GB to 2GB.
Additionally all AMIs are encrypted, which ensures encryption at rest even for every instance's root file system. This closes the last gap in achieving *full encryption at rest* for every volume within a default KubeZero deployment.
### DNS
The [external-dns](https://github.com/kubernetes-sigs/external-dns) controller has been integrated and is used to provide DNS-based load balancing for the apiserver itself. This allows highly available control planes on AWS as well as on bare-metal, in combination with various DNS providers.
Further usage of this controller to automate DNS-related configuration, like Ingress etc., is planned for upcoming releases.
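As a sketch of what such automation looks like in practice, external-dns can create a record for a Service via its well-known hostname annotation (the service name, hostname and port below are hypothetical examples, not KubeZero defaults):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app  # hypothetical service name
  annotations:
    # external-dns watches this annotation and creates the matching DNS record
    external-dns.alpha.kubernetes.io/hostname: app.example.com
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
    - port: 80
```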
### crun - container runtime
The container runtime was migrated from runc to crun, which reduces the memory overhead *per pod* from 16M to 4M; details at [crun intro](https://www.redhat.com/sysadmin/introduction-crun).
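For illustration, this is roughly what selecting crun looks like when containerd is the CRI (a minimal sketch, assuming a containerd-based setup and a crun binary at `/usr/bin/crun`; not necessarily the exact KubeZero configuration):

```toml
# /etc/containerd/config.toml (fragment) - crun is OCI-compatible,
# so it runs under the standard runc v2 shim
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.crun]
  runtime_type = "io.containerd.runc.v2"
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.crun.options]
    BinaryName = "/usr/bin/crun"
```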
### Version upgrades
- Istio to 1.13.2
- aws-termination-handler to 1.16
- aws-iam-authenticator to 0.5.7
### Misc
- new metrics and dashboards for openEBS LVM CSI drivers
- new node label `node.kubernetes.io/instance-type` for all nodes containing the EC2 instance type
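The new instance-type label can be used for scheduling; a minimal sketch (pod name, image and instance type are hypothetical placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pinned-pod  # hypothetical name
spec:
  nodeSelector:
    # schedule this pod only onto m5.large nodes
    node.kubernetes.io/instance-type: m5.large
  containers:
    - name: app
      image: nginx:1.21  # placeholder image
```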
## Upgrade
*Ensure your Kube context points to the correct cluster!*
Eg. by doubling `desired` for each worker ASG, once all new workers have joined, drain the old workers one by one, and finally reset `desired` for each worker ASG, which will terminate the old workers.
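A hedged sketch of that worker rotation; the ASG name, node name and capacity values below are placeholders for your own environment:

```shell
ASG=my-worker-asg  # placeholder ASG name

# 1. Double desired capacity to bring up the new workers
aws autoscaling set-desired-capacity \
  --auto-scaling-group-name "$ASG" --desired-capacity 6

# 2. Once the new workers have joined, drain the old ones one by one
kubectl drain ip-10-0-1-23.ec2.internal \
  --ignore-daemonsets --delete-emptydir-data

# 3. Reset desired capacity; the ASG terminates the drained old workers
aws autoscaling set-desired-capacity \
  --auto-scaling-group-name "$ASG" --desired-capacity 3
```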
## Known issues