Compare commits


36 Commits

SHA1 Message Date
7b3981c5ec docs: update release schedule 2025-05-05 13:48:57 +00:00
e64af1c659 fix: keycloak memlimit, docs 2025-05-05 12:57:55 +00:00
b85d846873 Merge pull request 'chore(deps): update helm release argo-cd to v7.9.0' (#71) from renovate/kubezero-argo-kubezero-argo-dependencies into main
Reviewed-on: #71
2025-05-05 12:25:48 +00:00
a26c9a5c1b chore(deps): update helm release argo-cd to v7.9.0 2025-04-29 03:06:19 +00:00
b4a5ee40c2 Merge pull request 'chore(deps): update keycloak docker tag to v24.6.1' (#34) from renovate/kubezero-auth-kubezero-auth-dependencies into main
Reviewed-on: #34
2025-04-25 11:34:36 +00:00
47765e0906 chore(deps): update keycloak docker tag to v24.6.1 2025-04-23 18:30:20 +00:00
2b4b7d343e feat: First working V1.32 control-plane 2025-04-23 18:26:04 +00:00
164d59f2f8 Merge pull request 'chore(deps): update kubezero-storage-dependencies' (#53) from renovate/kubezero-storage-kubezero-storage-dependencies into main
Reviewed-on: #53
2025-04-23 16:16:41 +00:00
b01adab4eb chore(deps): update kubezero-storage-dependencies 2025-04-23 16:16:41 +00:00
3e031b9190 Merge pull request 'chore(deps): update ghcr.io/k8snetworkplumbingwg/multus-cni docker tag to v4.2.0' (#69) from renovate/ghcr.io-k8snetworkplumbingwg-multus-cni-4.x into main
Reviewed-on: #69
2025-04-23 16:15:49 +00:00
3c18f18adf chore(deps): update ghcr.io/k8snetworkplumbingwg/multus-cni docker tag to v4.2.0 2025-04-23 16:15:49 +00:00
16b2e28fb1 Merge pull request 'chore(deps): update kubezero-network-dependencies' (#46) from renovate/kubezero-network-kubezero-network-dependencies into main
Reviewed-on: #46
2025-04-23 16:14:49 +00:00
91af021525 chore(deps): update kubezero-network-dependencies 2025-04-23 16:14:49 +00:00
1e18fd2471 Merge pull request 'chore(deps): update kubezero-addons-dependencies' (#51) from renovate/kubezero-addons-kubezero-addons-dependencies into main
Reviewed-on: #51
2025-04-23 15:13:43 +00:00
dd861eeed7 chore(deps): update kubezero-addons-dependencies 2025-04-23 15:13:43 +00:00
1ef25fce2b Merge pull request 'chore(deps): update natsio/prometheus-nats-exporter docker tag to v0.17.2' (#70) from renovate/natsio-prometheus-nats-exporter-0.x into main
Reviewed-on: #70
2025-04-22 20:19:47 +00:00
94a38201c3 chore(deps): update natsio/prometheus-nats-exporter docker tag to v0.17.2 2025-04-22 20:19:47 +00:00
c675e7aa1b Merge latest ci-tools-lib 2025-04-17 23:00:48 +00:00
00daef3b0b Squashed '.ci/' changes from a5cd89d7..9725c2ef
9725c2ef fix: ensure we dont remove rc builds

git-subtree-dir: .ci
git-subtree-split: 9725c2ef8842467951ec60adb1b45dfeca7618f5
2025-04-17 23:00:48 +00:00
9bb0e0e91a feat: reorg cluster upgrade scripts to allow support for KubeZero only clusters like GKE 2025-04-17 22:42:39 +00:00
e022db091c Squashed '.ci/' changes from 15e4d1f5..a5cd89d7
a5cd89d7 feat: improve tag parsing, ensure dirty is added if needed

git-subtree-dir: .ci
git-subtree-split: a5cd89d73157c829eaf12f91a68f73826fbb35e7
2025-04-17 22:37:10 +00:00
da2510c8df Merge latest ci-tools-lib 2025-04-17 22:37:10 +00:00
c4aab252e8 Squashed '.ci/' changes from a3928364..15e4d1f5
15e4d1f5 ci: make work with main branch
3feaf6fa chore: migrate to main branch

git-subtree-dir: .ci
git-subtree-split: 15e4d1f589c8e055944b2a4b58a9a50728e245b4
2025-04-17 22:00:32 +00:00
0a813c525c Merge latest ci-tools-lib 2025-04-17 22:00:32 +00:00
9d28705079 docs: typos 2025-04-17 12:08:06 +00:00
024a0fcfaf feat: ensure central secret keys exists 2025-04-17 13:06:28 +01:00
f88d6a2f0d docs: update 2025-04-17 10:35:40 +00:00
2c47a28e10 feat: MQ various version bumps 2025-04-17 10:35:13 +00:00
dfdf50f85f Merge pull request 'chore(deps): update kubezero-mq-dependencies' (#8) from renovate/kubezero-mq-kubezero-mq-dependencies into main
Reviewed-on: #8
2025-04-14 13:03:42 +00:00
d4ba1d1a01 chore(deps): update kubezero-mq-dependencies 2025-04-14 13:03:42 +00:00
fa06c13805 Merge pull request 'chore(deps): update nats docker tag to v2.11.1' (#9) from renovate/nats-2.x into main
Reviewed-on: #9
2025-04-14 13:03:18 +00:00
7f2208fea4 chore(deps): update nats docker tag to v2.11.1 2025-04-14 13:03:18 +00:00
c427e73f79 Merge pull request 'chore(deps): update kubezero-cache-dependencies' (#42) from renovate/kubezero-cache-kubezero-cache-dependencies into main
Reviewed-on: #42
2025-04-14 12:26:09 +00:00
2fd775624b chore(deps): update kubezero-cache-dependencies 2025-04-14 12:26:09 +00:00
ffaf037483 Merge pull request 'chore(deps): update helm release neo4j to v2025' (#62) from renovate/kubezero-graph-major-kubezero-graph-dependencies into main
Reviewed-on: #62
2025-04-14 11:40:03 +00:00
0664b2bed3 chore(deps): update helm release neo4j to v2025 2025-04-14 11:40:03 +00:00
69 changed files with 1720 additions and 381 deletions

View File

@ -14,7 +14,7 @@ include .ci/podman.mk
Add subtree to your project:
```
git subtree add --prefix .ci https://git.zero-downtime.net/ZeroDownTime/ci-tools-lib.git master --squash
git subtree add --prefix .ci https://git.zero-downtime.net/ZeroDownTime/ci-tools-lib.git main --squash
```

View File

@ -41,7 +41,8 @@ for image in sorted(images, key=lambda d: d['imagePushedAt'], reverse=True):
_delete = True
for tag in image["imageTags"]:
# Look for at least one tag NOT being a SemVer dev tag
if "-" not in tag:
# untagged dev builds get tagged as <tag>-g<commit>
if "-g" not in tag and "dirty" not in tag:
_delete = False
if _delete:
print("Deleting development image {}".format(image["imageTags"]))

View File

@ -8,8 +8,8 @@ SHELL := bash
.PHONY: all # All targets are accessible for user
.DEFAULT: help # Running Make will run the help target
# Parse version from latest git semver tag
GIT_TAG ?= $(shell git describe --tags --match v*.*.* 2>/dev/null || git rev-parse --short HEAD 2>/dev/null)
# Parse version from latest git semver tag, use short commit otherwise
GIT_TAG ?= $(shell git describe --tags --match v*.*.* --dirty 2>/dev/null || git describe --match="" --always --dirty 2>/dev/null)
GIT_BRANCH ?= $(shell git rev-parse --abbrev-ref HEAD 2>/dev/null)
TAG ::= $(GIT_TAG)
@ -85,7 +85,7 @@ rm-image:
## some useful tasks during development
ci-pull-upstream: ## pull latest shared .ci subtree
git subtree pull --prefix .ci ssh://git@git.zero-downtime.net/ZeroDownTime/ci-tools-lib.git master --squash -m "Merge latest ci-tools-lib"
git subtree pull --prefix .ci ssh://git@git.zero-downtime.net/ZeroDownTime/ci-tools-lib.git main --squash -m "Merge latest ci-tools-lib"
create-repo: ## create new AWS ECR public repository
aws ecr-public create-repository --repository-name $(IMAGE) --region $(REGION)
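For reference, a quick sketch of what the new `GIT_TAG` fallback chain yields (tag and commit values are hypothetical):
```
# With a reachable semver tag:
git describe --tags --match 'v*.*.*' --dirty
# -> v1.2.3, or v1.2.3-4-gabc1234 after 4 commits, plus -dirty on local changes

# Without any semver tag, fall back to the bare short commit:
git describe --match="" --always --dirty
# -> abc1234, or abc1234-dirty
```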

View File

@ -3,7 +3,7 @@ ARG ALPINE_VERSION=3.21
FROM docker.io/alpine:${ALPINE_VERSION}
ARG ALPINE_VERSION
ARG KUBE_VERSION=1.31
ARG KUBE_VERSION=1.32
ARG SOPS_VERSION="3.10.1"
ARG VALS_VERSION="0.40.1"

View File

@ -24,7 +24,7 @@ Any 1.31.X-Y release of KubeZero supports any Kubernetes cluster 1.31.X.
KubeZero is distributed as a collection of versioned Helm charts, allowing custom upgrade schedules and module versions as needed.
```mermaid
%%{init: {'theme':'dark'}}%%
%%{init: {'theme': 'dark', 'gantt': {'fontSize': '20','sectionFontSize':'20'}}}%%
gantt
title KubeZero Support Timeline
dateFormat YYYY-MM-DD
@ -33,10 +33,10 @@ gantt
release :after 130b, 2025-04-30
section 1.31
beta :131b, 2024-12-01, 2025-02-28
release :after 131b, 2025-06-30
release :after 131b, 2025-07-31
section 1.32
beta :132b, 2025-04-01, 2025-05-19
release :after 132b, 2025-09-30
beta :132b, 2025-05-01, 2025-06-01
release :after 132b, 2025-10-31
```
[Upstream release policy](https://kubernetes.io/releases/)

View File

@ -17,7 +17,7 @@ post_control_plane_upgrade_cluster() {
# delete previous root app controlled by kubezero module
kubectl delete application kubezero-git-sync -n argocd || true
# Patch appproject to keep SyncWindow in place
# only patch appproject to keep SyncWindow in place
kubectl patch appproject kubezero -n argocd --type json -p='[{"op": "remove", "path": "/metadata/labels"}]' || true
kubectl patch appproject kubezero -n argocd --type json -p='[{"op": "remove", "path": "/metadata/annotations"}]' || true
}

admin/hooks-1.32.sh Normal file
View File

@ -0,0 +1,28 @@
### v1.32
# All things BEFORE the first controller / control plane upgrade
pre_control_plane_upgrade_cluster() {
echo
}
# All things after the first controller / control plane upgrade
post_control_plane_upgrade_cluster() {
echo
}
# All things AFTER all controllers are on the new version
pre_cluster_upgrade_final() {
set +e
echo
set -e
}
# Last call
post_cluster_upgrade_final() {
echo
}
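As a hypothetical illustration, a release-specific task would later be dropped into one of these stubs, for example:
```
# Hypothetical v1.32 task: remove a deprecated resource before the first
# controller is upgraded (the webhook name here is made up).
pre_control_plane_upgrade_cluster() {
  kubectl delete mutatingwebhookconfiguration legacy-admission-hook --ignore-not-found
}
```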

View File

@ -57,6 +57,7 @@ render_kubeadm() {
local phase=$1
helm template $CHARTS/kubeadm --output-dir ${WORKDIR} \
--kube-version $KUBE_VERSION \
-f ${HOSTFS}/etc/kubernetes/kubeadm-values.yaml \
--set patches=/etc/kubernetes/patches
@ -111,35 +112,44 @@ post_kubeadm() {
}
# Control plane upgrade
control_plane_upgrade() {
CMD=$1
# Migrate KubeZero Config to current version
upgrade_kubezero_config() {
ARGOCD=$(argo_used)
# get current values, argo app over cm
get_kubezero_values $ARGOCD
# tumble new config through migrate.py
migrate_argo_values.py < "$WORKDIR"/kubezero-values.yaml > "$WORKDIR"/new-kubezero-values.yaml \
&& mv "$WORKDIR"/new-kubezero-values.yaml "$WORKDIR"/kubezero-values.yaml
update_kubezero_cm
if [ "$ARGOCD" == "true" ]; then
# update argo app
export kubezero_chart_version=$(yq .version $CHARTS/kubezero/Chart.yaml)
kubectl get application kubezero -n argocd -o yaml | \
yq ".spec.source.helm.valuesObject |= load(\"$WORKDIR/kubezero-values.yaml\") | .spec.source.targetRevision = strenv(kubezero_chart_version)" \
> $WORKDIR/new-argocd-app.yaml
kubectl replace -f $WORKDIR/new-argocd-app.yaml $(field_manager $ARGOCD)
fi
}
# Control plane upgrade
kubeadm_upgrade() {
ARGOCD=$(argo_used)
render_kubeadm upgrade
if [[ "$CMD" =~ ^(cluster)$ ]]; then
# Check if we already have all controllers on the current version
OLD_CONTROLLERS=$(kubectl get nodes -l "node-role.kubernetes.io/control-plane=" --no-headers=true | grep -cv $KUBE_VERSION || true)
# run control plane upgrade
if [ "$OLD_CONTROLLERS" != "0" ]; then
pre_control_plane_upgrade_cluster
# get current values, argo app over cm
get_kubezero_values $ARGOCD
# tumble new config through migrate.py
migrate_argo_values.py < "$WORKDIR"/kubezero-values.yaml > "$WORKDIR"/new-kubezero-values.yaml \
&& mv "$WORKDIR"/new-kubezero-values.yaml "$WORKDIR"/kubezero-values.yaml
update_kubezero_cm
if [ "$ARGOCD" == "true" ]; then
# update argo app
export kubezero_chart_version=$(yq .version $CHARTS/kubezero/Chart.yaml)
kubectl get application kubezero -n argocd -o yaml | \
yq ".spec.source.helm.valuesObject |= load(\"$WORKDIR/kubezero-values.yaml\") | .spec.source.targetRevision = strenv(kubezero_chart_version)" \
> $WORKDIR/new-argocd-app.yaml
kubectl replace -f $WORKDIR/new-argocd-app.yaml $(field_manager $ARGOCD)
fi
pre_kubeadm
_kubeadm init phase upload-config kubeadm
@ -155,12 +165,11 @@ control_plane_upgrade() {
echo "Successfully upgraded KubeZero control plane to $KUBE_VERSION using kubeadm."
elif [[ "$CMD" =~ ^(final)$ ]]; then
# All controllers already on current version
else
pre_cluster_upgrade_final
# Finally upgrade addons; with 1.32 we can ONLY call the addon phase
#_kubeadm upgrade apply phase addon all $KUBE_VERSION
_kubeadm upgrade apply $KUBE_VERSION
_kubeadm upgrade apply phase addon all $KUBE_VERSION
post_cluster_upgrade_final
@ -196,10 +205,6 @@ control_plane_node() {
# Put PKI in place
cp -r ${WORKDIR}/pki ${HOSTFS}/etc/kubernetes
### 1.31 only to clean up previous aws-iam-auth certs
rm -f ${HOSTFS}/etc/kubernetes/pki/aws-iam-authenticator.key ${HOSTFS}/etc/kubernetes/pki/aws-iam-authenticator.crt
###
# Always use kubeadm kubectl config to never run into chicken egg with custom auth hooks
cp ${WORKDIR}/super-admin.conf ${HOSTFS}/root/.kube/config
@ -333,9 +338,7 @@ apply_module() {
[ -f $CHARTS/kubezero/hooks.d/pre-install.sh ] && . $CHARTS/kubezero/hooks.d/pre-install.sh
kubectl replace -f $WORKDIR/kubezero/templates $(field_manager $ARGOCD)
else
#_helm apply $t
# During 1.31 we change the ArgoCD tracking so replace
_helm replace $t
_helm apply $t
fi
done
@ -349,7 +352,9 @@ delete_module() {
get_kubezero_values $ARGOCD
# Always use embedded kubezero chart
helm template $CHARTS/kubezero -f $WORKDIR/kubezero-values.yaml --version ~$KUBE_VERSION --devel --output-dir $WORKDIR
helm template $CHARTS/kubezero -f $WORKDIR/kubezero-values.yaml \
--kube-version $KUBE_VERSION \
--version ~$KUBE_VERSION --devel --output-dir $WORKDIR
for t in $MODULES; do
_helm delete $t
@ -411,12 +416,8 @@ for t in $@; do
bootstrap) control_plane_node bootstrap;;
join) control_plane_node join;;
restore) control_plane_node restore;;
kubeadm_upgrade)
control_plane_upgrade cluster
;;
finalize_cluster_upgrade)
control_plane_upgrade final
;;
upgrade_control_plane) kubeadm_upgrade;;
upgrade_kubezero) upgrade_kubezero_config;;
apply_*)
ARGOCD=$(argo_used)
apply_module "${t##apply_}";;
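With this dispatch, the former `control_plane_upgrade cluster|final` flow maps onto separate entrypoints; roughly (task names per the case statement above, invoked via the admin container's `kubezero.sh`):
```
kubezero.sh upgrade_control_plane        # kubeadm-driven control plane upgrade
kubezero.sh upgrade_kubezero             # migrate kubezero-values / ArgoCD app
kubezero.sh apply_network apply_addons   # any apply_* falls through to apply_module
```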

View File

@ -80,6 +80,19 @@ function get_kubezero_secret() {
get_secret_val kubezero kubezero-secrets "$1"
}
function ensure_kubezero_secret_key() {
local secret="$(kubectl get secret -n kubezero kubezero-secrets -o yaml)"
local key=""
local val=""
for key in $@; do
val=$(echo "$secret" | yq ".data.\"$key\"")
if [ "$val" == "null" ]; then
kubectl patch secret -n kubezero kubezero-secrets --patch="{\"data\": { \"$key\": \"\" }}"
fi
done
}
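Usage mirrors the argo-cd bootstrap hook further below: pre-create empty placeholder keys so downstream secret lookups don't fail.
```
ensure_kubezero_secret_key argo-cd.kubezero.username argo-cd.kubezero.password
```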
function set_kubezero_secret() {
local key="$1"
@ -340,17 +353,17 @@ EOF
}
function control_plane_upgrade() {
function admin_job() {
TASKS="$1"
[ -z "$KUBE_VERSION" ] && KUBE_VERSION="latest"
ADMIN_TAG=${ADMIN_TAG:-$KUBE_VERSION}
echo "Deploy cluster admin task: $TASKS"
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
name: kubezero-upgrade
name: kubezero-admin-job
namespace: kube-system
labels:
app: kubezero-upgrade
@ -360,7 +373,7 @@ spec:
hostPID: true
containers:
- name: kubezero-admin
image: public.ecr.aws/zero-downtime/kubezero-admin:${KUBE_VERSION}
image: public.ecr.aws/zero-downtime/kubezero-admin:${ADMIN_TAG}
imagePullPolicy: Always
command: ["kubezero.sh"]
args: [$TASKS]
@ -395,10 +408,10 @@ spec:
restartPolicy: Never
EOF
kubectl wait pod kubezero-upgrade -n kube-system --timeout 120s --for=condition=initialized 2>/dev/null
kubectl wait pod kubezero-admin-job -n kube-system --timeout 120s --for=condition=initialized 2>/dev/null
while true; do
kubectl logs kubezero-upgrade -n kube-system -f 2>/dev/null && break
kubectl logs kubezero-admin-job -n kube-system -f 2>/dev/null && break
sleep 3
done
kubectl delete pod kubezero-upgrade -n kube-system
kubectl delete pod kubezero-admin-job -n kube-system
}
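Since `ADMIN_TAG` defaults to `KUBE_VERSION`, the admin image can now be pinned independently of the cluster version, e.g. (tag value is hypothetical):
```
KUBE_VERSION=v1.32 ADMIN_TAG=v1.32.3-rc1 admin_job "upgrade_control_plane, upgrade_kubezero"
```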

View File

@ -15,37 +15,28 @@ SCRIPT_DIR=$( cd -- "$( dirname -- "${BASH_SOURCE[0]}" )" &> /dev/null && pwd )
ARGOCD=$(argo_used)
echo "Checking that all pods in kube-system are running ..."
#waitSystemPodsRunning
waitSystemPodsRunning
[ "$ARGOCD" == "true" ] && disable_argo
# Check if we already have all controllers on the current version
#OLD_CONTROLLERS=$(kubectl get nodes -l "node-role.kubernetes.io/control-plane=" --no-headers=true | grep -cv $KUBE_VERSION || true)
if [ "$OLD_CONTROLLERS" == "0" ]; then
# All controllers already on current version
control_plane_upgrade finalize_cluster_upgrade
else
# Otherwise run control plane upgrade
control_plane_upgrade kubeadm_upgrade
fi
echo "<Return> to continue"
read -r
admin_job "upgrade_control_plane, upgrade_kubezero"
#echo "Adjust kubezero values as needed:"
# shellcheck disable=SC2015
#[ "$ARGOCD" == "true" ] && kubectl edit app kubezero -n argocd || kubectl edit cm kubezero-values -n kubezero
#echo "<Return> to continue"
#read -r
# upgrade modules
control_plane_upgrade "apply_kubezero, apply_network, apply_addons, apply_storage, apply_operators"
admin_job "apply_kubezero, apply_network, apply_addons, apply_storage, apply_operators"
echo "Checking that all pods in kube-system are running ..."
waitSystemPodsRunning
echo "Applying remaining KubeZero modules..."
control_plane_upgrade "apply_cert-manager, apply_istio, apply_istio-ingress, apply_istio-private-ingress, apply_logging, apply_metrics, apply_telemetry, apply_argo"
admin_job "apply_cert-manager, apply_istio, apply_istio-ingress, apply_istio-private-ingress, apply_logging, apply_metrics, apply_telemetry, apply_argo"
# we replace the project during v1.31 so disable again
[ "$ARGOCD" == "true" ] && disable_argo
@ -60,6 +51,12 @@ while true; do
sleep 1
done
echo "Once all controller nodes are running on $KUBE_VERSION, <return> to continue"
read -r
# Final control plane upgrades
admin_job "upgrade_control_plane"
echo "Please commit $ARGO_APP as the updated kubezero/application.yaml for your cluster."
echo "Then head over to ArgoCD for this cluster and sync all KubeZero modules to apply remaining upgrades."

View File

@ -2,7 +2,7 @@ apiVersion: v2
name: kubeadm
description: KubeZero Kubeadm cluster config
type: application
version: 1.31.6
version: 1.32.3
home: https://kubezero.com
icon: https://cdn.zero-downtime.net/assets/kubezero/logo-small-64.png
keywords:
@ -11,4 +11,4 @@ keywords:
maintainers:
- name: Stefan Reimer
email: stefan@zero-downtime.net
kubeVersion: ">= 1.31.0-0"
kubeVersion: ">= 1.32.0-0"

View File

@ -1,6 +1,6 @@
# kubeadm
![Version: 1.25.8](https://img.shields.io/badge/Version-1.25.8-informational?style=flat-square) ![Type: application](https://img.shields.io/badge/Type-application-informational?style=flat-square)
![Version: 1.32.3](https://img.shields.io/badge/Version-1.32.3-informational?style=flat-square) ![Type: application](https://img.shields.io/badge/Type-application-informational?style=flat-square)
KubeZero Kubeadm cluster config
@ -14,19 +14,18 @@ KubeZero Kubeadm cluster config
## Requirements
Kubernetes: `>= 1.25.0`
Kubernetes: `>= 1.32.0-0`
## Values
| Key | Type | Default | Description |
|-----|------|---------|-------------|
| api.apiAudiences | string | `"istio-ca"` | |
| api.awsIamAuth.enabled | bool | `false` | |
| api.awsIamAuth.kubeAdminRole | string | `"arn:aws:iam::000000000000:role/KubernetesNode"` | |
| api.awsIamAuth.workerNodeRole | string | `"arn:aws:iam::000000000000:role/KubernetesNode"` | |
| api.awsIamAuth | bool | `false` | |
| api.endpoint | string | `"kube-api.changeme.org:6443"` | |
| api.etcdServers | string | `"https://etcd:2379"` | |
| api.extraArgs | object | `{}` | |
| api.falco.enabled | bool | `false` | |
| api.listenPort | int | `6443` | |
| api.oidcEndpoint | string | `""` | s3://${CFN[ConfigBucket]}/k8s/$CLUSTERNAME |
| api.serviceAccountIssuer | string | `""` | https://s3.${REGION}.amazonaws.com/${CFN[ConfigBucket]}/k8s/$CLUSTERNAME |

View File

@ -4,6 +4,7 @@ kubernetesVersion: {{ .Chart.Version }}
clusterName: {{ .Values.global.clusterName }}
featureGates:
ControlPlaneKubeletLocalMode: true
NodeLocalCRISocket: true
controlPlaneEndpoint: {{ .Values.api.endpoint }}
networking:
podSubnet: 10.244.0.0/16
@ -119,6 +120,8 @@ apiServer:
value: {{ include "kubeadm.featuregates" ( dict "return" "csv" ) | trimSuffix "," | quote }}
- name: authorization-config
value: /etc/kubernetes/apiserver/authz-config.yaml
- name: authentication-config
value: /etc/kubernetes/apiserver/authn-config.yaml
- name: enable-admission-plugins
value: DenyServiceExternalIPs,NodeRestriction,EventRateLimit,ExtendedResourceToleration
{{- if .Values.global.highAvailable }}
@ -127,6 +130,11 @@ apiServer:
{{- end }}
- name: logging-format
value: json
# Required for MutatingAdmissionPolicy
# Required for VolumeAttributesClass
# Required for CoordinatedLeaderElection - coordination.k8s.io/v1alpha1=true
- name: runtime-config
value: admissionregistration.k8s.io/v1alpha1=true,storage.k8s.io/v1beta1=true
{{- with .Values.api.extraArgs }}
{{- toYaml . | nindent 4 }}
{{- end }}

View File

@ -1,9 +1,9 @@
{{- /* Feature gates for all control plane components */ -}}
{{- /* Issues: MemoryQoS */ -}}
{{- /* v1.28: PodAndContainerStatsFromCRI still not working */ -}}
{{- /* v1.28: UnknownVersionInteroperabilityProxy requires StorageVersionAPI which is still alpha in 1.30 */ -}}
{{- /* v1.32: not required? working? "DisableNodeKubeProxyVersion" "CoordinatedLeaderElection" */ -}}
{{- define "kubeadm.featuregates" }}
{{- $gates := list "CustomCPUCFSQuotaPeriod" "AuthorizeWithSelectors" "AuthorizeNodeWithSelectors" "ConsistentListFromCache" "VolumeAttributesClass" "WatchList" }}
{{- $gates := list "CustomCPUCFSQuotaPeriod" "VolumeAttributesClass" "MutatingAdmissionPolicy" }}
{{- if eq .return "csv" }}
{{- range $key := $gates }}
{{- $key }}=true,
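A quick way to eyeball the rendered gate list (local chart path and `--kube-version` flag as used elsewhere in this repo; expected output reconstructed from the list above):
```
helm template charts/kubeadm --kube-version 1.32 2>/dev/null | grep -A1 feature-gates
# csv form after the caller's trimSuffix ",":
# CustomCPUCFSQuotaPeriod=true,VolumeAttributesClass=true,MutatingAdmissionPolicy=true
```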

View File

@ -1,7 +1,5 @@
apiVersion: apiserver.config.k8s.io/v1
kind: AdmissionConfiguration
metadata:
name: kubezero-admissionconfiguration
plugins:
- name: EventRateLimit
path: /etc/kubernetes/apiserver/event-config.yaml

View File

@ -0,0 +1,10 @@
apiVersion: apiserver.config.k8s.io/v1beta1
kind: AuthenticationConfiguration
anonymous:
enabled: true
conditions:
- path: /livez
- path: /readyz
- path: /healthz
- path: /.well-known/openid-configuration
- path: /openid/v1/jwks
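With anonymous auth restricted to the probe and discovery paths above, behaviour looks roughly like this (endpoint taken from the chart's default values; exact status codes may vary by setup):
```
curl -k https://kube-api.changeme.org:6443/readyz        # 200 "ok"
curl -k https://kube-api.changeme.org:6443/api/v1/pods   # 401, anonymous not allowed here
```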

View File

@ -1,4 +1,4 @@
apiVersion: apiserver.config.k8s.io/v1beta1
apiVersion: apiserver.config.k8s.io/v1
kind: AuthorizationConfiguration
authorizers:
- type: Node

View File

@ -8,3 +8,6 @@ json:
- op: replace
path: /spec/containers/0/startupProbe/httpGet/host
value: {{ .Values.listenAddress }}
- op: replace
path: /spec/containers/0/readinessProbe/httpGet/host
value: {{ .Values.listenAddress }}

charts/kubeadm/update.sh Executable file
View File

@ -0,0 +1,9 @@
#!/bin/bash
set -ex
. ../../scripts/lib-update.sh
login_ecr_public
update_helm
update_docs

View File

@ -2,8 +2,8 @@ apiVersion: v2
name: kubezero-addons
description: KubeZero umbrella chart for various optional cluster addons
type: application
version: 0.8.13
appVersion: v1.30
version: 0.8.14
appVersion: v1.31
home: https://kubezero.com
icon: https://cdn.zero-downtime.net/assets/kubezero/logo-small-64.png
keywords:
@ -21,15 +21,15 @@ maintainers:
email: stefan@zero-downtime.net
dependencies:
- name: external-dns
version: 1.15.1
version: 1.16.1
repository: https://kubernetes-sigs.github.io/external-dns/
condition: external-dns.enabled
- name: cluster-autoscaler
version: 9.46.0
version: 9.46.6
repository: https://kubernetes.github.io/autoscaler
condition: cluster-autoscaler.enabled
- name: nvidia-device-plugin
version: 0.17.0
version: 0.17.1
# https://github.com/NVIDIA/k8s-device-plugin
repository: https://nvidia.github.io/k8s-device-plugin
condition: nvidia-device-plugin.enabled
@ -39,11 +39,11 @@ dependencies:
repository: oci://public.ecr.aws/neuron #/neuron-helm-chart
condition: neuron-helm-chart.enabled
- name: sealed-secrets
version: 2.17.1
version: 2.17.2
repository: https://bitnami-labs.github.io/sealed-secrets
condition: sealed-secrets.enabled
- name: aws-node-termination-handler
version: 0.26.0
version: 0.27.0
repository: "oci://public.ecr.aws/aws-ec2/helm"
condition: aws-node-termination-handler.enabled
- name: aws-eks-asg-rolling-update-handler
@ -51,7 +51,7 @@ dependencies:
repository: https://twin.github.io/helm-charts
condition: aws-eks-asg-rolling-update-handler.enabled
- name: py-kube-downscaler
version: 0.2.12
version: 0.3.2
repository: https://caas-team.github.io/helm-charts/
condition: py-kube-downscaler.enabled
kubeVersion: ">= 1.30.0-0"

View File

@ -1,6 +1,6 @@
# kubezero-addons
![Version: 0.8.13](https://img.shields.io/badge/Version-0.8.13-informational?style=flat-square) ![Type: application](https://img.shields.io/badge/Type-application-informational?style=flat-square) ![AppVersion: v1.30](https://img.shields.io/badge/AppVersion-v1.30-informational?style=flat-square)
![Version: 0.8.14](https://img.shields.io/badge/Version-0.8.14-informational?style=flat-square) ![Type: application](https://img.shields.io/badge/Type-application-informational?style=flat-square) ![AppVersion: v1.31](https://img.shields.io/badge/AppVersion-v1.31-informational?style=flat-square)
KubeZero umbrella chart for various optional cluster addons
@ -18,13 +18,13 @@ Kubernetes: `>= 1.30.0-0`
| Repository | Name | Version |
|------------|------|---------|
| https://bitnami-labs.github.io/sealed-secrets | sealed-secrets | 2.17.1 |
| https://caas-team.github.io/helm-charts/ | py-kube-downscaler | 0.2.12 |
| https://kubernetes-sigs.github.io/external-dns/ | external-dns | 1.15.1 |
| https://kubernetes.github.io/autoscaler | cluster-autoscaler | 9.46.0 |
| https://nvidia.github.io/k8s-device-plugin | nvidia-device-plugin | 0.17.0 |
| https://bitnami-labs.github.io/sealed-secrets | sealed-secrets | 2.17.2 |
| https://caas-team.github.io/helm-charts/ | py-kube-downscaler | 0.3.2 |
| https://kubernetes-sigs.github.io/external-dns/ | external-dns | 1.16.1 |
| https://kubernetes.github.io/autoscaler | cluster-autoscaler | 9.46.6 |
| https://nvidia.github.io/k8s-device-plugin | nvidia-device-plugin | 0.17.1 |
| https://twin.github.io/helm-charts | aws-eks-asg-rolling-update-handler | 1.5.0 |
| oci://public.ecr.aws/aws-ec2/helm | aws-node-termination-handler | 0.26.0 |
| oci://public.ecr.aws/aws-ec2/helm | aws-node-termination-handler | 0.27.0 |
| oci://public.ecr.aws/neuron | neuron-helm-chart | 1.1.1 |
# MetalLB
@ -109,7 +109,7 @@ Device plugin for [AWS Neuron](https://aws.amazon.com/machine-learning/neuron/)
| cluster-autoscaler.extraArgs.scan-interval | string | `"30s"` | |
| cluster-autoscaler.extraArgs.skip-nodes-with-local-storage | bool | `false` | |
| cluster-autoscaler.image.repository | string | `"registry.k8s.io/autoscaling/cluster-autoscaler"` | |
| cluster-autoscaler.image.tag | string | `"v1.31.1"` | |
| cluster-autoscaler.image.tag | string | `"v1.32.1"` | |
| cluster-autoscaler.nodeSelector."node-role.kubernetes.io/control-plane" | string | `""` | |
| cluster-autoscaler.podDisruptionBudget | bool | `false` | |
| cluster-autoscaler.prometheusRule.enabled | bool | `false` | |

View File

@ -1,5 +1,5 @@
apiVersion: v2
appVersion: 1.24.0
appVersion: 1.25.0
description: A Helm chart for the AWS Node Termination Handler.
home: https://github.com/aws/aws-node-termination-handler/
icon: https://raw.githubusercontent.com/aws/eks-charts/master/docs/logo/aws.png
@ -21,4 +21,4 @@ name: aws-node-termination-handler
sources:
- https://github.com/aws/aws-node-termination-handler/
type: application
version: 0.26.0
version: 0.27.0

View File

@ -95,6 +95,7 @@ The configuration in this table applies to all AWS Node Termination Handler mode
| `webhookTemplateConfigMapName` | Pass the webhook template file as a configmap. | "``" |
| `webhookTemplateConfigMapKey` | Name of the Configmap key storing the template file. | `""` |
| `enableSqsTerminationDraining` | If `true`, this turns on queue-processor mode which drains nodes when an SQS termination event is received. | `false` |
| `enableOutOfServiceTaint` | If `true`, adds the out-of-service taint to the node after the cordon/drain process, which forcefully evicts pods without matching tolerations and detaches persistent volumes. | `false` |
### Queue-Processor Mode Configuration
@ -120,6 +121,9 @@ The configuration in this table applies to AWS Node Termination Handler in queue
| `managedAsgTag` | [DEPRECATED](Use `managedTag` instead) The node tag to check if `checkASGTagBeforeDraining` is `true`.
| `useProviderId` | If `true`, fetch node name through Kubernetes node spec ProviderID instead of AWS event PrivateDnsHostname. | `false` |
| `topologySpreadConstraints` | [Topology Spread Constraints](https://kubernetes.io/docs/concepts/scheduling-eviction/topology-spread-constraints/) for pod scheduling. Useful with a highly available deployment to reduce the risk of running multiple replicas on the same Node | `[]` |
| `heartbeatInterval` | The time period in seconds between consecutive heartbeat signals. Valid range: 30-3600 seconds (30 seconds to 1 hour). | `-1` |
| `heartbeatUntil` | The duration in seconds over which heartbeat signals are sent. Valid range: 60-172800 seconds (1 minute to 48 hours). | `-1` |
### IMDS Mode Configuration
The configuration in this table applies to AWS Node Termination Handler in IMDS mode.

View File

@ -99,6 +99,8 @@ spec:
value: {{ .Values.cordonOnly | quote }}
- name: TAINT_NODE
value: {{ .Values.taintNode | quote }}
- name: ENABLE_OUT_OF_SERVICE_TAINT
value: {{ .Values.enableOutOfServiceTaint | quote }}
- name: EXCLUDE_FROM_LOAD_BALANCERS
value: {{ .Values.excludeFromLoadBalancers | quote }}
- name: DELETE_LOCAL_DATA

View File

@ -99,6 +99,8 @@ spec:
value: {{ .Values.cordonOnly | quote }}
- name: TAINT_NODE
value: {{ .Values.taintNode | quote }}
- name: ENABLE_OUT_OF_SERVICE_TAINT
value: {{ .Values.enableOutOfServiceTaint | quote }}
- name: EXCLUDE_FROM_LOAD_BALANCERS
value: {{ .Values.excludeFromLoadBalancers | quote }}
- name: DELETE_LOCAL_DATA

View File

@ -102,6 +102,8 @@ spec:
value: {{ .Values.cordonOnly | quote }}
- name: TAINT_NODE
value: {{ .Values.taintNode | quote }}
- name: ENABLE_OUT_OF_SERVICE_TAINT
value: {{ .Values.enableOutOfServiceTaint | quote }}
- name: EXCLUDE_FROM_LOAD_BALANCERS
value: {{ .Values.excludeFromLoadBalancers | quote }}
- name: DELETE_LOCAL_DATA

View File

@ -86,6 +86,9 @@ cordonOnly: false
# Taint node upon spot interruption termination notice.
taintNode: false
# Add the out-of-service taint to the node after the cordon/drain process, which forcefully evicts pods without matching tolerations and detaches persistent volumes.
enableOutOfServiceTaint: false
# Exclude node from load balancer before cordoning via the ServiceNodeExclusion feature gate.
excludeFromLoadBalancers: false
@ -285,6 +288,12 @@ enableRebalanceDraining: false
# deleteSqsMsgIfNodeNotFound If true, delete the SQS Message from the SQS Queue if the targeted node(s) are not found. Only used in Queue Processor mode.
deleteSqsMsgIfNodeNotFound: false
# The time period in seconds between consecutive heartbeat signals. Valid range: 30-3600 seconds (30 seconds to 1 hour).
heartbeatInterval: -1
# The duration in seconds over which heartbeat signals are sent. Valid range: 60-172800 seconds (1 minute to 48 hours).
heartbeatUntil: -1
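As an illustration, enabling lifecycle heartbeats for long drains could look like this (release name and values are assumptions; requires queue-processor mode):
```
helm upgrade --install aws-node-termination-handler ./aws-node-termination-handler \
  --set enableSqsTerminationDraining=true \
  --set heartbeatInterval=300 \
  --set heartbeatUntil=86400
```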
# ---------------------------------------------------------------------------------------------------------------------
# Testing
# ---------------------------------------------------------------------------------------------------------------------

View File

@ -219,7 +219,7 @@ cluster-autoscaler:
image:
repository: registry.k8s.io/autoscaling/cluster-autoscaler
tag: v1.31.1
tag: v1.32.1
autoDiscovery:
clusterName: ""

View File

@ -1,7 +1,7 @@
apiVersion: v2
description: KubeZero Argo - Events, Workflow, CD
name: kubezero-argo
version: 0.3.2
version: 0.3.3
home: https://kubezero.com
icon: https://cdn.zero-downtime.net/assets/kubezero/logo-small-64.png
keywords:
@ -22,7 +22,7 @@ dependencies:
repository: https://argoproj.github.io/argo-helm
condition: argo-events.enabled
- name: argo-cd
version: 7.8.23
version: 7.9.0
repository: https://argoproj.github.io/argo-helm
condition: argo-cd.enabled
- name: argocd-image-updater

View File

@ -1,6 +1,6 @@
# kubezero-argo
![Version: 0.3.2](https://img.shields.io/badge/Version-0.3.2-informational?style=flat-square)
![Version: 0.3.3](https://img.shields.io/badge/Version-0.3.3-informational?style=flat-square)
KubeZero Argo - Events, Workflow, CD
@ -18,7 +18,7 @@ Kubernetes: `>= 1.30.0-0`
| Repository | Name | Version |
|------------|------|---------|
| https://argoproj.github.io/argo-helm | argo-cd | 7.8.23 |
| https://argoproj.github.io/argo-helm | argo-cd | 7.9.0 |
| https://argoproj.github.io/argo-helm | argo-events | 2.4.15 |
| https://argoproj.github.io/argo-helm | argocd-image-updater | 0.12.1 |
| https://cdn.zero-downtime.net/charts/ | kubezero-lib | 0.2.1 |
@ -84,8 +84,8 @@ Kubernetes: `>= 1.30.0-0`
| argo-events.configs.jetstream.streamConfig.maxMsgs | int | `1000000` | Maximum number of messages before expiring oldest message |
| argo-events.configs.jetstream.streamConfig.replicas | int | `1` | Number of replicas, defaults to 3 and requires minimal 3 |
| argo-events.configs.jetstream.versions[0].configReloaderImage | string | `"natsio/nats-server-config-reloader:0.14.1"` | |
| argo-events.configs.jetstream.versions[0].metricsExporterImage | string | `"natsio/prometheus-nats-exporter:0.16.0"` | |
| argo-events.configs.jetstream.versions[0].natsImage | string | `"nats:2.10.11-scratch"` | |
| argo-events.configs.jetstream.versions[0].metricsExporterImage | string | `"natsio/prometheus-nats-exporter:0.17.2"` | |
| argo-events.configs.jetstream.versions[0].natsImage | string | `"nats:2.11.1-scratch"` | |
| argo-events.configs.jetstream.versions[0].startCommand | string | `"/nats-server"` | |
| argo-events.configs.jetstream.versions[0].version | string | `"2.10.11"` | |
| argo-events.enabled | bool | `false` | |

View File

@ -21,3 +21,6 @@ fi
# Redis secret
kubectl get secret argocd-redis -n argocd || kubectl create secret generic argocd-redis -n argocd \
--from-literal=auth=$(date +%s | sha256sum | base64 | head -c 16 ; echo)
# required keys in kubezero-secrets, as --ignore-missing-values in helm-secrets doesn't work with vals ;-(
ensure_kubezero_secret_key argo-cd.kubezero.username argo-cd.kubezero.password argo-cd.kubezero.sshPrivateKey

View File

@ -25,8 +25,8 @@ argo-events:
# do NOT use -alpine tag as the entrypoint differs
versions:
- version: 2.10.11
natsImage: nats:2.10.11-scratch
metricsExporterImage: natsio/prometheus-nats-exporter:0.16.0
natsImage: nats:2.11.1-scratch
metricsExporterImage: natsio/prometheus-nats-exporter:0.17.2
configReloaderImage: natsio/nats-server-config-reloader:0.14.1
startCommand: /nats-server

View File

@ -2,7 +2,7 @@ apiVersion: v2
name: kubezero-auth
description: KubeZero umbrella chart for all things Authentication and Identity management
type: application
version: 0.6.1
version: 0.6.2
appVersion: 26.0.5
home: https://kubezero.com
icon: https://cdn.zero-downtime.net/assets/kubezero/logo-small-64.png
@ -18,6 +18,6 @@ dependencies:
repository: https://cdn.zero-downtime.net/charts/
- name: keycloak
repository: "oci://registry-1.docker.io/bitnamicharts"
version: 24.2.1
version: 24.6.1
condition: keycloak.enabled
kubeVersion: ">= 1.26.0"
kubeVersion: ">= 1.30.0-0"

View File

@ -1,6 +1,6 @@
# kubezero-auth
![Version: 0.6.1](https://img.shields.io/badge/Version-0.6.1-informational?style=flat-square) ![Type: application](https://img.shields.io/badge/Type-application-informational?style=flat-square) ![AppVersion: 26.0.5](https://img.shields.io/badge/AppVersion-26.0.5-informational?style=flat-square)
![Version: 0.6.2](https://img.shields.io/badge/Version-0.6.2-informational?style=flat-square) ![Type: application](https://img.shields.io/badge/Type-application-informational?style=flat-square) ![AppVersion: 26.0.5](https://img.shields.io/badge/AppVersion-26.0.5-informational?style=flat-square)
KubeZero umbrella chart for all things Authentication and Identity management
@ -14,12 +14,12 @@ KubeZero umbrella chart for all things Authentication and Identity management
## Requirements
Kubernetes: `>= 1.26.0`
Kubernetes: `>= 1.30.0-0`
| Repository | Name | Version |
|------------|------|---------|
| https://cdn.zero-downtime.net/charts/ | kubezero-lib | >= 0.1.6 |
| oci://registry-1.docker.io/bitnamicharts | keycloak | 24.2.1 |
| https://cdn.zero-downtime.net/charts/ | kubezero-lib | 0.2.1 |
| oci://registry-1.docker.io/bitnamicharts | keycloak | 24.6.1 |
# Keycloak
@ -62,6 +62,6 @@ https://github.com/keycloak/keycloak-benchmark/tree/main/provision/minikube/keyc
| keycloak.production | bool | `true` | |
| keycloak.proxyHeaders | string | `"xforwarded"` | |
| keycloak.replicaCount | int | `1` | |
| keycloak.resources.limits.memory | string | `"768Mi"` | |
| keycloak.resources.limits.memory | string | `"1024Mi"` | |
| keycloak.resources.requests.cpu | string | `"100m"` | |
| keycloak.resources.requests.memory | string | `"512Mi"` | |

View File

@ -2,7 +2,7 @@ apiVersion: v2
name: kubezero-cache
description: KubeZero Cache module
type: application
version: 0.1.0
version: 0.1.1
home: https://kubezero.com
icon: https://cdn.zero-downtime.net/assets/kubezero/logo-small-64.png
keywords:
@ -17,11 +17,11 @@ dependencies:
version: 0.2.1
repository: https://cdn.zero-downtime.net/charts/
- name: redis
version: 20.0.3
version: 20.11.5
repository: https://charts.bitnami.com/bitnami
condition: redis.enabled
- name: redis-cluster
version: 11.0.2
version: 11.5.0
repository: https://charts.bitnami.com/bitnami
condition: redis-cluster.enabled

View File

@ -2,7 +2,7 @@ apiVersion: v2
name: kubezero-graph
description: KubeZero GraphQL and GraphDB
type: application
version: 0.1.0
version: 0.1.1
home: https://kubezero.com
icon: https://cdn.zero-downtime.net/assets/kubezero/logo-small-64.png
keywords:
@ -16,7 +16,7 @@ dependencies:
version: 0.2.1
repository: https://cdn.zero-downtime.net/charts/
- name: neo4j
version: 5.26.0
version: 2025.3.0
repository: https://helm.neo4j.com/neo4j
condition: neo4j.enabled

View File

@ -1,6 +1,6 @@
# kubezero-graph
![Version: 0.1.0](https://img.shields.io/badge/Version-0.1.0-informational?style=flat-square) ![Type: application](https://img.shields.io/badge/Type-application-informational?style=flat-square)
![Version: 0.1.1](https://img.shields.io/badge/Version-0.1.1-informational?style=flat-square) ![Type: application](https://img.shields.io/badge/Type-application-informational?style=flat-square)
KubeZero GraphQL and GraphDB
@ -18,8 +18,8 @@ Kubernetes: `>= 1.29.0-0`
| Repository | Name | Version |
|------------|------|---------|
| https://cdn.zero-downtime.net/charts/ | kubezero-lib | >= 0.2.1 |
| https://helm.neo4j.com/neo4j | neo4j | 5.26.0 |
| https://cdn.zero-downtime.net/charts/ | kubezero-lib | 0.2.1 |
| https://helm.neo4j.com/neo4j | neo4j | 2025.3.0 |
## Values
@ -28,6 +28,8 @@ Kubernetes: `>= 1.29.0-0`
| neo4j.disableLookups | bool | `true` | |
| neo4j.enabled | bool | `false` | |
| neo4j.neo4j.name | string | `"test-db"` | |
| neo4j.neo4j.password | string | `"secret"` | |
| neo4j.neo4j.passwordFromSecret | string | `"neo4j-admin"` | |
| neo4j.serviceMonitor.enabled | bool | `false` | |
| neo4j.services.neo4j.enabled | bool | `false` | |
| neo4j.volumes.data.mode | string | `"defaultStorageClass"` | |

View File

@ -2,7 +2,7 @@ apiVersion: v2
name: kubezero-mq
description: KubeZero umbrella chart for MQ systems like NATS, RabbitMQ
type: application
version: 0.3.10
version: 0.3.11
home: https://kubezero.com
icon: https://cdn.zero-downtime.net/assets/kubezero/logo-small-64.png
keywords:
@ -17,11 +17,11 @@ dependencies:
version: 0.2.1
repository: https://cdn.zero-downtime.net/charts/
- name: nats
version: 1.2.2
version: 1.3.3
repository: https://nats-io.github.io/k8s/helm/charts/
condition: nats.enabled
- name: rabbitmq
version: 14.6.6
version: 14.7.0
repository: https://charts.bitnami.com/bitnami
condition: rabbitmq.enabled
kubeVersion: ">= 1.26.0"

View File

@ -1,6 +1,6 @@
# kubezero-mq
![Version: 0.3.10](https://img.shields.io/badge/Version-0.3.10-informational?style=flat-square) ![Type: application](https://img.shields.io/badge/Type-application-informational?style=flat-square)
![Version: 0.3.11](https://img.shields.io/badge/Version-0.3.11-informational?style=flat-square) ![Type: application](https://img.shields.io/badge/Type-application-informational?style=flat-square)
KubeZero umbrella chart for MQ systems like NATS, RabbitMQ
@ -18,9 +18,9 @@ Kubernetes: `>= 1.26.0`
| Repository | Name | Version |
|------------|------|---------|
| https://cdn.zero-downtime.net/charts/ | kubezero-lib | >= 0.1.6 |
| https://charts.bitnami.com/bitnami | rabbitmq | 14.6.6 |
| https://nats-io.github.io/k8s/helm/charts/ | nats | 1.2.2 |
| https://cdn.zero-downtime.net/charts/ | kubezero-lib | 0.2.1 |
| https://charts.bitnami.com/bitnami | rabbitmq | 14.7.0 |
| https://nats-io.github.io/k8s/helm/charts/ | nats | 1.3.3 |
## Values
@ -34,13 +34,6 @@ Kubernetes: `>= 1.26.0`
| nats.natsBox.enabled | bool | `false` | |
| nats.promExporter.enabled | bool | `false` | |
| nats.promExporter.podMonitor.enabled | bool | `false` | |
| rabbitmq-cluster-operator.clusterOperator.metrics.enabled | bool | `false` | |
| rabbitmq-cluster-operator.clusterOperator.metrics.serviceMonitor.enabled | bool | `true` | |
| rabbitmq-cluster-operator.enabled | bool | `false` | |
| rabbitmq-cluster-operator.msgTopologyOperator.metrics.enabled | bool | `false` | |
| rabbitmq-cluster-operator.msgTopologyOperator.metrics.serviceMonitor.enabled | bool | `true` | |
| rabbitmq-cluster-operator.rabbitmqImage.tag | string | `"3.11.4-debian-11-r0"` | |
| rabbitmq-cluster-operator.useCertManager | bool | `true` | |
| rabbitmq.auth.existingErlangSecret | string | `"rabbitmq"` | |
| rabbitmq.auth.existingPasswordSecret | string | `"rabbitmq"` | |
| rabbitmq.auth.tls.enabled | bool | `false` | |

View File

@ -1,4 +1,4 @@
{{- if .Values.nats.exporter.serviceMonitor.enabled }}
{{- if .Values.nats.promExporter.podMonitor.enabled }}
apiVersion: v1
kind: ConfigMap
metadata:

View File

@ -6,6 +6,12 @@ nats:
jetstream:
enabled: true
podTemplate:
topologySpreadConstraints:
kubernetes.io/hostname:
maxSkew: 1
whenUnsatisfiable: DoNotSchedule
natsBox:
enabled: false

View File

@ -2,7 +2,7 @@ apiVersion: v2
name: kubezero-network
description: KubeZero umbrella chart for all things network
type: application
version: 0.5.7
version: 0.5.8
home: https://kubezero.com
icon: https://cdn.zero-downtime.net/assets/kubezero/logo-small-64.png
keywords:
@ -19,7 +19,7 @@ dependencies:
version: 0.2.1
repository: https://cdn.zero-downtime.net/charts/
- name: cilium
version: 1.16.6
version: 1.17.3
repository: https://helm.cilium.io/
condition: cilium.enabled
- name: metallb
@ -27,7 +27,7 @@ dependencies:
repository: https://metallb.github.io/metallb
condition: metallb.enabled
- name: haproxy
version: 1.23.0
version: 1.24.0
repository: https://haproxytech.github.io/helm-charts
condition: haproxy.enabled
kubeVersion: ">= 1.29.0-0"
kubeVersion: ">= 1.30.0-0"

View File

@ -1,6 +1,6 @@
# kubezero-network
![Version: 0.5.7](https://img.shields.io/badge/Version-0.5.7-informational?style=flat-square) ![Type: application](https://img.shields.io/badge/Type-application-informational?style=flat-square)
![Version: 0.5.8](https://img.shields.io/badge/Version-0.5.8-informational?style=flat-square) ![Type: application](https://img.shields.io/badge/Type-application-informational?style=flat-square)
KubeZero umbrella chart for all things network
@ -14,13 +14,13 @@ KubeZero umbrella chart for all things network
## Requirements
Kubernetes: `>= 1.29.0-0`
Kubernetes: `>= 1.30.0-0`
| Repository | Name | Version |
|------------|------|---------|
| https://cdn.zero-downtime.net/charts/ | kubezero-lib | >= 0.1.6 |
| https://haproxytech.github.io/helm-charts | haproxy | 1.23.0 |
| https://helm.cilium.io/ | cilium | 1.16.6 |
| https://cdn.zero-downtime.net/charts/ | kubezero-lib | 0.2.1 |
| https://haproxytech.github.io/helm-charts | haproxy | 1.24.0 |
| https://helm.cilium.io/ | cilium | 1.17.3 |
| https://metallb.github.io/metallb | metallb | 0.14.9 |
## Values
@ -116,5 +116,5 @@ Kubernetes: `>= 1.29.0-0`
| multus.defaultNetworks | list | `[]` | |
| multus.enabled | bool | `false` | |
| multus.image.repository | string | `"ghcr.io/k8snetworkplumbingwg/multus-cni"` | |
| multus.image.tag | string | `"v3.9.3"` | |
| multus.image.tag | string | `"v4.2.0"` | |
| multus.readinessindicatorfile | string | `"/etc/cni/net.d/05-cilium.conflist"` | |

File diff suppressed because one or more lines are too long

View File

@ -18,7 +18,7 @@ multus:
enabled: false
image:
repository: ghcr.io/k8snetworkplumbingwg/multus-cni
tag: v4.1.4
tag: v4.2.0
clusterNetwork: "cilium"
defaultNetworks: []

View File

@ -2,7 +2,7 @@ apiVersion: v2
name: kubezero-storage
description: KubeZero umbrella chart for all things storage incl. AWS EBS/EFS, openEBS-lvm, gemini
type: application
version: 0.8.10
version: 0.8.11
home: https://kubezero.com
icon: https://cdn.zero-downtime.net/assets/kubezero/logo-small-64.png
keywords:
@ -24,11 +24,11 @@ dependencies:
condition: lvm-localpv.enabled
repository: https://openebs.github.io/lvm-localpv
- name: aws-ebs-csi-driver
version: 2.39.3
version: 2.42.0
condition: aws-ebs-csi-driver.enabled
repository: https://kubernetes-sigs.github.io/aws-ebs-csi-driver
- name: aws-efs-csi-driver
version: 3.1.6
version: 2.5.7
condition: aws-efs-csi-driver.enabled
repository: https://kubernetes-sigs.github.io/aws-efs-csi-driver
- name: gemini
@ -36,7 +36,7 @@ dependencies:
condition: gemini.enabled
repository: https://charts.fairwinds.com/stable
- name: k8up
version: 4.8.3
version: 4.8.4
condition: k8up.enabled
repository: https://k8up-io.github.io/k8up
kubeVersion: ">= 1.26.0"
kubeVersion: ">= 1.30.0-0"

View File

@ -0,0 +1,32 @@
diff -rtuN charts/aws-efs-csi-driver.orig/templates/controller-deployment.yaml charts/aws-efs-csi-driver/templates/controller-deployment.yaml
--- charts/aws-efs-csi-driver.orig/templates/controller-deployment.yaml 2023-08-23 11:32:48.964952023 +0000
+++ charts/aws-efs-csi-driver/templates/controller-deployment.yaml 2023-08-23 11:32:48.968285371 +0000
@@ -76,9 +76,14 @@
- name: AWS_USE_FIPS_ENDPOINT
value: "true"
{{- end }}
+ {{- if .Values.controller.extraEnv }}
+ {{- toYaml .Values.controller.extraEnv | nindent 12 }}
+ {{- end }}
volumeMounts:
- name: socket-dir
mountPath: /var/lib/csi/sockets/pluginproxy/
+ - name: aws-token
+ mountPath: /var/run/secrets/sts.amazonaws.com/serviceaccount/
ports:
- name: healthz
containerPort: {{ .Values.controller.healthPort }}
@@ -137,6 +142,13 @@
volumes:
- name: socket-dir
emptyDir: {}
+ - name: aws-token
+ projected:
+ sources:
+ - serviceAccountToken:
+ path: token
+ expirationSeconds: 86400
+ audience: "sts.amazonaws.com"
{{- with .Values.controller.affinity }}
affinity: {{- toYaml . | nindent 8 }}
{{- end }}
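To verify the projected STS token actually lands in the controller (namespace and container name are assumptions):
```
kubectl exec -n kube-system deploy/efs-csi-controller -c efs-plugin -- \
  ls /var/run/secrets/sts.amazonaws.com/serviceaccount/
# expected output: token
```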

View File

@ -1,5 +1,62 @@
# Helm chart
## v2.42.0
### Feature
- Set internal traffic policy to local for node metric service ([#2432](https://github.com/kubernetes-sigs/aws-ebs-csi-driver/pull/2432), [@ElijahQuinones](https://github.com/ElijahQuinones))
## v2.41.0
### Feature
- Add `enabled` flag to schema for use in sub-charting ([#2361](https://github.com/kubernetes-sigs/aws-ebs-csi-driver/pull/2361), [@ConnorJC3](https://github.com/ConnorJC3))
- Add Prometheus Annotations to the Node Service ([#2363](https://github.com/kubernetes-sigs/aws-ebs-csi-driver/pull/2363), [@mdzraf](https://github.com/mdzraf))
### Bug or regression
- Prevent nil pointer deref in Helm chart when `node.enableWindows` and `node.otelTracing` are both set ([#2357](https://github.com/kubernetes-sigs/aws-ebs-csi-driver/pull/2357), [@ConnorJC3](https://github.com/ConnorJC3))
## v2.40.3
### Feature
- Upgrade csi-attacher to v4.8.1, csi-snapshotter to v8.2.1, csi-resizer to v1.13.2
### Bug or regression
- Fix incorrect schema entry for controller.podDisruptionBudget.unhealthyPodEvictionPolicy ([#2389](https://github.com/kubernetes-sigs/aws-ebs-csi-driver/pull/2389),[@jamesalford](https://github.com/jamesalford))
## v2.40.2
### Bug or Regression
- Add enabled flag to schema for sub-charting ([#2359](https://github.com/kubernetes-sigs/aws-ebs-csi-driver/pull/2359), [@ConnorJC3](https://github.com/ConnorJC3))
## v2.40.1
### Bug or Regression
- Prevent null deref when enableWindows and otelTracing enabled on node ([#2357](https://github.com/kubernetes-sigs/aws-ebs-csi-driver/pull/2357), [@ConnorJC3](https://github.com/ConnorJC3))
- Fix incorrect properties validation in Helm schema ([#2356](https://github.com/kubernetes-sigs/aws-ebs-csi-driver/pull/2356), [@ConnorJC3](https://github.com/ConnorJC3))
## v2.40.0
#### Default for enable windows changed
The default value for enableWindows has been changed from false to true. This change means the node daemonset is scheduled on Windows nodes by default. If you do not want the node daemonset scheduled on your Windows nodes, set enableWindows back to false.
### Feature
- Add values.schema.json to validate changes in values.yaml. ([#2286](https://github.com/kubernetes-sigs/aws-ebs-csi-driver/pull/2286), [@ElijahQuinones](https://github.com/ElijahQuinones))
### Bug or Regression
- Fix helm regression with values.schema.yaml. ([#2322](https://github.com/kubernetes-sigs/aws-ebs-csi-driver/pull/2322), [@ElijahQuinones](https://github.com/ElijahQuinones))
- `global` has been added to the values schema, allowing aws-ebs-csi-driver to be used in a Helm sub chart ([#2321](https://github.com/kubernetes-sigs/aws-ebs-csi-driver/pull/2321), [@kejne](https://github.com/kejne))
- Reconcile some differences between helm chart and values.schema.json ([#2335](https://github.com/kubernetes-sigs/aws-ebs-csi-driver/pull/2335), [@ElijahQuinones](https://github.com/ElijahQuinones))
- Fix helm regression with a1CompatibilityDaemonSet=true ([#2316](https://github.com/kubernetes-sigs/aws-ebs-csi-driver/pull/2316), [@AndrewSirenko](https://github.com/AndrewSirenko))
## v2.39.3
### Urgent Upgrade Notes

View File

@ -1,5 +1,5 @@
apiVersion: v2
appVersion: 1.39.0
appVersion: 1.42.0
description: A Helm chart for AWS EBS CSI Driver
home: https://github.com/kubernetes-sigs/aws-ebs-csi-driver
keywords:
@ -13,4 +13,4 @@ maintainers:
name: aws-ebs-csi-driver
sources:
- https://github.com/kubernetes-sigs/aws-ebs-csi-driver
version: 2.39.3
version: 2.42.0

View File

@ -2,6 +2,6 @@ To verify that aws-ebs-csi-driver has started, run:
kubectl get pod -n {{ .Release.Namespace }} -l "app.kubernetes.io/name={{ include "aws-ebs-csi-driver.name" . }},app.kubernetes.io/instance={{ .Release.Name }}"
[ACTION REQUIRED] Update to the EBS CSI Driver IAM Policy
[Deprecation announcement] AWS Snow Family device support for the EBS CSI Driver
Due to an upcoming change in handling of IAM policies for the CreateVolume API when creating a volume from an EBS snapshot, a change to your EBS CSI Driver policy may be needed. For more information and remediation steps, see GitHub issue #2190 (https://github.com/kubernetes-sigs/aws-ebs-csi-driver/issues/2190). This change affects all versions of the EBS CSI Driver and action may be required even on clusters where the driver is not upgraded.
Support for the EBS CSI Driver on [AWS Snow Family devices](https://aws.amazon.com/snowball/) is deprecated, effective immediately. No further Snow-specific bugfixes or feature requests will be merged. The existing functionality for Snow devices will be removed the 1.44 release of the EBS CSI Driver. This announcement does not affect the support of the EBS CSI Driver on other platforms, such as [Amazon EC2](https://aws.amazon.com/ec2/) or EC2 on [AWS Outposts](https://aws.amazon.com/outposts/). For any questions related to this announcement, please comment on this issue [#2365](https://github.com/kubernetes-sigs/aws-ebs-csi-driver/issues/2365) or open a new issue.

View File

@ -17,7 +17,7 @@ spec:
app: {{ .NodeName }}
{{- include "aws-ebs-csi-driver.selectorLabels" . | nindent 6 }}
updateStrategy:
{{ toYaml .Values.node.updateStrategy | nindent 4 }}
{{- toYaml .Values.node.updateStrategy | nindent 4 }}
template:
metadata:
labels:
@ -111,11 +111,11 @@ spec:
value: {{ .otelServiceName }}
- name: OTEL_EXPORTER_OTLP_ENDPOINT
value: {{ .otelExporterEndpoint }}
{{- end }}
{{- if .Values.fips }}
- name: AWS_USE_FIPS_ENDPOINT
value: "true"
{{- end }}
{{- end }}
{{- with .Values.node.env }}
{{- . | toYaml | nindent 12 }}
{{- end }}

View File

@ -429,6 +429,9 @@ spec:
{{- if not (regexMatch "(-timeout)" (join " " .Values.sidecars.resizer.additionalArgs)) }}
- --timeout=60s
{{- end }}
{{- if .Values.controller.extraCreateMetadata }}
- --extra-modify-metadata
{{- end}}
- --csi-address=$(ADDRESS)
- --v={{ .Values.sidecars.resizer.logLevel }}
- --handle-volume-inuse-error=false

View File

@ -47,6 +47,9 @@ kind: Service
metadata:
name: ebs-csi-node
namespace: {{ .Release.Namespace }}
annotations:
prometheus.io/scrape: "true"
prometheus.io/port: "3302"
labels:
app: ebs-csi-node
spec:
@ -56,5 +59,6 @@ spec:
- name: metrics
port: 3302
targetPort: 3302
internalTrafficPolicy: Local
type: ClusterIP
{{- end }}

File diff suppressed because it is too large

View File

@ -11,9 +11,9 @@ image:
customLabels: {}
# k8s-app: aws-ebs-csi-driver
# Instruct the AWS SDK to use AWS FIPS endpoints, and deploy container built with BoringCrypto (a FIPS-validated cryptographic library) instead of the Go default
# Instruct the AWS SDK to use AWS FIPS endpoints, and deploy container built with Boring Crypto (a FIPS-validated cryptographic library) instead of the Go default
#
# The EBS CSI Driver FIPS images have not undergone FIPS certification, and no official guarnatee is made about the compliance of these images under the FIPS standard
# The EBS CSI Driver FIPS images have not undergone FIPS certification, and no official guarantee is made about the compliance of these images under the FIPS standard
# Users relying on these images for FIPS compliance should perform their own independent evaluation
fips: false
sidecars:
@ -22,7 +22,7 @@ sidecars:
image:
pullPolicy: IfNotPresent
repository: public.ecr.aws/eks-distro/kubernetes-csi/external-provisioner
tag: "v5.1.0-eks-1-31-12"
tag: "v5.2.0-eks-1-33-1"
logLevel: 2
# Additional parameters provided by external-provisioner.
additionalArgs: []
@ -49,7 +49,7 @@ sidecars:
image:
pullPolicy: IfNotPresent
repository: public.ecr.aws/eks-distro/kubernetes-csi/external-attacher
tag: "v4.8.0-eks-1-31-12"
tag: "v4.8.1-eks-1-33-1"
# Tune leader lease election for csi-attacher.
# Leader election is on by default.
leaderElection:
@ -78,7 +78,7 @@ sidecars:
image:
pullPolicy: IfNotPresent
repository: public.ecr.aws/eks-distro/kubernetes-csi/external-snapshotter/csi-snapshotter
tag: "v8.2.0-eks-1-31-12"
tag: "v8.2.1-eks-1-33-1"
logLevel: 2
# Additional parameters provided by csi-snapshotter.
additionalArgs: []
@ -94,7 +94,7 @@ sidecars:
image:
pullPolicy: IfNotPresent
repository: public.ecr.aws/eks-distro/kubernetes-csi/livenessprobe
tag: "v2.14.0-eks-1-31-12"
tag: "v2.15.0-eks-1-33-1"
# Additional parameters provided by livenessprobe.
additionalArgs: []
resources: {}
@ -106,7 +106,7 @@ sidecars:
image:
pullPolicy: IfNotPresent
repository: public.ecr.aws/eks-distro/kubernetes-csi/external-resizer
tag: "v1.12.0-eks-1-31-11"
tag: "v1.13.2-eks-1-33-1"
# Tune leader lease election for csi-resizer.
# Leader election is on by default.
leaderElection:
@ -133,7 +133,7 @@ sidecars:
image:
pullPolicy: IfNotPresent
repository: public.ecr.aws/eks-distro/kubernetes-csi/node-driver-registrar
tag: "v2.13.0-eks-1-31-12"
tag: "v2.13.0-eks-1-33-1"
logLevel: 2
# Additional parameters provided by node-driver-registrar.
additionalArgs: []
@ -220,7 +220,7 @@ controller:
env: []
# Use envFrom to reference ConfigMaps and Secrets across all containers in the deployment
envFrom: []
# If set, add pv/pvc metadata to plugin create requests as parameters.
# If set, add pv/pvc metadata to plugin create and modify requests as parameters.
extraCreateMetadata: true
# Extra volume tags to attach to each dynamically provisioned volume.
# ---
@ -337,7 +337,7 @@ controller:
# Example:
#
# - name: wait
# image: busybox
# image: public.ecr.aws/amazonlinux/amazonlinux
# command: [ 'sh', '-c', "sleep 20" ]
# Enable opentelemetry tracing for the plugin running on the daemonset
otelTracing: {}
@ -405,7 +405,7 @@ node:
automountServiceAccountToken: true
# Enable the linux daemonset creation
enableLinux: true
enableWindows: false
enableWindows: true
# Warning: This option will be removed in a future release. It is a temporary workaround for users unable to immediately migrate off of older kernel versions.
# Formats XFS volumes with bigtime=0,inobtcount=0,reflink=0, for mounting onto nodes with linux kernel version <= 5.4.
# Note that XFS volumes formatted with this option will only have timestamp records until 2038.
@ -454,7 +454,7 @@ node:
# Example:
#
# - name: wait
# image: busybox
# image: public.ecr.aws/amazonlinux/amazonlinux
# command: [ 'sh', '-c', "sleep 20" ]
# Enable opentelemetry tracing for the plugin running on the daemonset
otelTracing: {}
@ -511,4 +511,4 @@ nodeComponentOnly: false
helmTester:
enabled: true
# Supply a custom image to the ebs-csi-driver-test pod in helm-tester.yaml
image: "us-central1-docker.pkg.dev/k8s-staging-test-infra/images/kubekins-e2e:v20241230-3006692a6f-master"
image: "us-central1-docker.pkg.dev/k8s-staging-test-infra/images/kubekins-e2e:v20250411-0688312353-master"

View File

@ -1,38 +1,4 @@
# Helm chart
# v3.1.6
* Bump app/driver version to `v2.1.5`
# v3.1.5
* Bump app/driver version to `v2.1.4`
# v3.1.4
* Bump app/driver version to `v2.1.3`
# v3.1.3
* Bump app/driver version to `v2.1.2`
# v3.1.2
* Bump app/driver version to `v2.1.1`
# v3.1.1
* Bump app/driver version to `v2.1.0`
# v3.1.0
* Bump app/driver version to `v2.0.9`
# v3.0.9
* Bump app/driver version to `v2.0.8`
# v3.0.8
* Bump app/driver version to `v2.0.7`
# v3.0.7
* Bump app/driver version to `v2.0.6`
# v3.0.6
* Bump app/driver version to `v2.0.5`
# v3.0.5
* Bump app/driver version to `v2.0.4`
# v3.0.4
* Bump app/driver version to `v2.0.3`
# v3.0.3
* Bump app/driver version to `v2.0.2`
# v3.0.2
* Update Helm to use the image from Public ECR rather than DockerHub
# v3.0.1
* Bump app/driver version to `v2.0.1`
# v3.0.0
* Bump app/driver version to `v2.0.0`
# v2.5.7
* Bump app/driver version to `v1.7.7`
# v2.5.6
@ -244,4 +210,4 @@ for Controller deployment and Node daemonset
* Fixing Controller deployment using `podAnnotations` and `tolerations` values from Node daemonset
* Let the user define the whole `tolerations` array, default to `- operator: Exists`
* Default `logLevel` lowered from `5` to `2`
* Default `imagePullPolicy` everywhere set to `IfNotPresent`
* Default `imagePullPolicy` everywhere set to `IfNotPresent`

View File

@ -1,5 +1,5 @@
apiVersion: v2
appVersion: 2.1.5
appVersion: 1.7.7
description: A Helm chart for AWS EFS CSI Driver
home: https://github.com/kubernetes-sigs/aws-efs-csi-driver
keywords:
@ -15,4 +15,4 @@ maintainers:
name: aws-efs-csi-driver
sources:
- https://github.com/kubernetes-sigs/aws-efs-csi-driver
version: 3.1.6
version: 2.5.7

View File

@ -3,18 +3,17 @@
kind: Deployment
apiVersion: apps/v1
metadata:
name: {{ .Values.controller.name }}
namespace: {{ .Release.Namespace }}
name: efs-csi-controller
labels:
app.kubernetes.io/name: {{ include "aws-efs-csi-driver.name" . }}
{{- with .Values.controller.additionalLabels }}
{{ toYaml . | nindent 4 }}
{{- end }}
spec:
replicas: {{ .Values.controller.replicaCount }}
replicas: {{ .Values.replicaCount }}
selector:
matchLabels:
app: {{ .Values.controller.name }}
app: efs-csi-controller
app.kubernetes.io/name: {{ include "aws-efs-csi-driver.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- with .Values.controller.updateStrategy }}
@ -24,7 +23,7 @@ spec:
template:
metadata:
labels:
app: {{ .Values.controller.name }}
app: efs-csi-controller
app.kubernetes.io/name: {{ include "aws-efs-csi-driver.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- with .Values.controller.podLabels }}
@ -94,17 +93,14 @@ spec:
- name: AWS_USE_FIPS_ENDPOINT
value: "true"
{{- end }}
- name: PORT_RANGE_UPPER_BOUND
value: "{{ .Values.portRangeUpperBound }}"
{{- with .Values.controller.env }}
{{- toYaml . | nindent 12 }}
{{- if .Values.controller.extraEnv }}
{{- toYaml .Values.controller.extraEnv | nindent 12 }}
{{- end }}
volumeMounts:
- name: socket-dir
mountPath: /var/lib/csi/sockets/pluginproxy/
{{- with .Values.controller.volumeMounts }}
{{- toYaml . | nindent 12 }}
{{- end }}
- name: aws-token
mountPath: /var/run/secrets/sts.amazonaws.com/serviceaccount/
ports:
- name: healthz
containerPort: {{ .Values.controller.healthPort }}
@ -137,16 +133,13 @@ spec:
{{- if hasKey .Values.controller "leaderElectionLeaseDuration" }}
- --leader-election-lease-duration={{ .Values.controller.leaderElectionLeaseDuration }}
{{- end }}
{{- range .Values.sidecars.csiProvisioner.additionalArgs }}
- {{ . }}
{{- end }}
env:
- name: ADDRESS
value: /var/lib/csi/sockets/pluginproxy/csi.sock
volumeMounts:
- name: socket-dir
mountPath: /var/lib/csi/sockets/pluginproxy/
{{- with default .Values.controller.resources .Values.sidecars.csiProvisioner.resources }}
{{- with .Values.sidecars.csiProvisioner.resources }}
resources: {{ toYaml . | nindent 12 }}
{{- end }}
{{- with .Values.sidecars.csiProvisioner.securityContext }}
@ -162,10 +155,7 @@ spec:
volumeMounts:
- name: socket-dir
mountPath: /csi
{{- with .Values.controller.volumeMounts }}
{{- toYaml . | nindent 12 }}
{{- end }}
{{- with default .Values.controller.resources .Values.sidecars.livenessProbe.resources }}
{{- with .Values.sidecars.livenessProbe.resources }}
resources: {{ toYaml . | nindent 12 }}
{{- end }}
{{- with .Values.sidecars.livenessProbe.securityContext }}
@ -175,19 +165,14 @@ spec:
volumes:
- name: socket-dir
emptyDir: {}
{{- with .Values.controller.volumes }}
{{- toYaml . | nindent 8 }}
{{- end }}
- name: aws-token
projected:
sources:
- serviceAccountToken:
path: token
expirationSeconds: 86400
audience: "sts.amazonaws.com"
{{- with .Values.controller.affinity }}
affinity: {{- toYaml . | nindent 8 }}
{{- end }}
{{- if .Values.controller.topologySpreadConstraints }}
{{- $tscLabelSelector := dict "labelSelector" ( dict "matchLabels" ( dict "app" "efs-csi-controller" ) ) }}
{{- $constraints := list }}
{{- range .Values.controller.topologySpreadConstraints }}
{{- $constraints = mustAppend $constraints (mergeOverwrite . $tscLabelSelector) }}
{{- end }}
topologySpreadConstraints:
{{- $constraints | toYaml | nindent 8 }}
{{- end }}
{{- end }}
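
The topologySpreadConstraints block above is the one piece of template logic in this hunk worth unpacking: each user-supplied constraint gets the fixed app: efs-csi-controller labelSelector merged in via mergeOverwrite. A sketch of input and rendered output under that logic:

# values.yaml (input) -- constraint without a labelSelector
controller:
  topologySpreadConstraints:
    - maxSkew: 1
      topologyKey: topology.kubernetes.io/zone
      whenUnsatisfiable: ScheduleAnyway

# rendered Deployment spec (output) -- selector injected by the template
topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: topology.kubernetes.io/zone
    whenUnsatisfiable: ScheduleAnyway
    labelSelector:
      matchLabels:
        app: efs-csi-controller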

View File

@ -1,24 +0,0 @@
{{- if .Values.controller.podDisruptionBudget.enabled -}}
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
name: {{ .Values.controller.name }}
namespace: {{ .Release.Namespace }}
labels:
{{- include "aws-efs-csi-driver.labels" . | nindent 4 }}
spec:
selector:
matchLabels:
app: {{ .Values.controller.name }}
app.kubernetes.io/name: {{ include "aws-efs-csi-driver.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- if .Values.controller.podDisruptionBudget.unhealthyPodEvictionPolicy }}
unhealthyPodEvictionPolicy: {{ .Values.controller.podDisruptionBudget.unhealthyPodEvictionPolicy }}
{{- end }}
{{- if .Values.controller.podDisruptionBudget.maxUnavailable }}
maxUnavailable: {{ .Values.controller.podDisruptionBudget.maxUnavailable }}
{{- end }}
{{- if .Values.controller.podDisruptionBudget.minAvailable }}
minAvailable: {{ .Values.controller.podDisruptionBudget.minAvailable }}
{{- end }}
{{- end -}}
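
This PodDisruptionBudget template (dropped entirely by the downgrade) is purely values-driven; enabling it would look like the following, using the keys the template references:

controller:
  podDisruptionBudget:
    enabled: true
    minAvailable: 1                              # or maxUnavailable: 1, not both
    unhealthyPodEvictionPolicy: IfHealthyBudget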

View File

@ -3,7 +3,6 @@ apiVersion: v1
kind: ServiceAccount
metadata:
name: {{ .Values.controller.serviceAccount.name }}
namespace: {{ .Release.Namespace }}
labels:
app.kubernetes.io/name: {{ include "aws-efs-csi-driver.name" . }}
{{- with .Values.controller.serviceAccount.annotations }}
@ -22,7 +21,7 @@ metadata:
rules:
- apiGroups: [""]
resources: ["persistentvolumes"]
verbs: ["get", "list", "watch", "create", "patch", "delete"]
verbs: ["get", "list", "watch", "create", "delete"]
- apiGroups: [""]
resources: ["persistentvolumeclaims"]
verbs: ["get", "list", "watch", "update"]
@ -75,7 +74,6 @@ kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: efs-csi-provisioner-binding-describe-secrets
namespace: {{ .Release.Namespace }}
labels:
app.kubernetes.io/name: {{ include "aws-efs-csi-driver.name" . }}
subjects:

View File

@ -3,10 +3,8 @@ kind: CSIDriver
metadata:
name: efs.csi.aws.com
annotations:
{{- if .Values.useHelmHooksForCSIDriver }}
"helm.sh/hook": pre-install, pre-upgrade
"helm.sh/hook-delete-policy": before-hook-creation
{{- end }}
"helm.sh/resource-policy": keep
spec:
attachRequired: false
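
The hook annotations removed here change the object's lifecycle: with useHelmHooksForCSIDriver enabled, the CSIDriver is (re)applied as a pre-install/pre-upgrade hook, while the older chart instead pins it with a keep resource policy. Opting out of hook management (e.g. when a GitOps tool handles Helm hooks differently) is a one-line values override:

useHelmHooksForCSIDriver: false   # CSIDriver is then managed as a regular chart resource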

View File

@ -3,12 +3,8 @@ kind: DaemonSet
apiVersion: apps/v1
metadata:
name: efs-csi-node
namespace: {{ .Release.Namespace }}
labels:
app.kubernetes.io/name: {{ include "aws-efs-csi-driver.name" . }}
{{- with .Values.node.additionalLabels }}
{{ toYaml . | nindent 4 }}
{{- end }}
spec:
selector:
matchLabels:
@ -25,9 +21,6 @@ spec:
app: efs-csi-node
app.kubernetes.io/name: {{ include "aws-efs-csi-driver.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- with .Values.node.podLabels }}
{{ toYaml . | nindent 8 }}
{{- end }}
{{- if .Values.node.podAnnotations }}
annotations: {{ toYaml .Values.node.podAnnotations | nindent 8 }}
{{- end }}
@ -60,7 +53,7 @@ spec:
dnsConfig: {{- toYaml . | nindent 8 }}
{{- end }}
serviceAccountName: {{ .Values.node.serviceAccount.name }}
priorityClassName: {{ .Values.node.priorityClassName}}
priorityClassName: system-node-critical
{{- with .Values.node.tolerations }}
tolerations: {{- toYaml . | nindent 8 }}
{{- end }}
@ -92,14 +85,9 @@ spec:
- name: AWS_USE_FIPS_ENDPOINT
value: "true"
{{- end }}
- name: PORT_RANGE_UPPER_BOUND
value: "{{ .Values.portRangeUpperBound }}"
{{- with .Values.node.env }}
{{- toYaml . | nindent 12 }}
{{- end }}
volumeMounts:
- name: kubelet-dir
mountPath: {{ .Values.node.kubeletPath }}
mountPath: /var/lib/kubelet
mountPropagation: "Bidirectional"
- name: plugin-dir
mountPath: /csi
@ -109,9 +97,6 @@ spec:
mountPath: /var/amazon/efs
- name: efs-utils-config-legacy
mountPath: /etc/amazon/efs-legacy
{{- with .Values.node.volumeMounts }}
{{- toYaml . | nindent 12 }}
{{- end }}
ports:
- name: healthz
containerPort: {{ .Values.node.healthPort }}
@ -138,7 +123,7 @@ spec:
- name: ADDRESS
value: /csi/csi.sock
- name: DRIVER_REG_SOCK_PATH
value: {{ printf "%s/plugins/efs.csi.aws.com/csi.sock" (trimSuffix "/" .Values.node.kubeletPath) }}
value: /var/lib/kubelet/plugins/efs.csi.aws.com/csi.sock
- name: KUBE_NODE_NAME
valueFrom:
fieldRef:
@ -175,15 +160,15 @@ spec:
volumes:
- name: kubelet-dir
hostPath:
path: {{ .Values.node.kubeletPath }}
path: /var/lib/kubelet
type: Directory
- name: plugin-dir
hostPath:
path: {{ printf "%s/plugins/efs.csi.aws.com/" (trimSuffix "/" .Values.node.kubeletPath) }}
path: /var/lib/kubelet/plugins/efs.csi.aws.com/
type: DirectoryOrCreate
- name: registration-dir
hostPath:
path: {{ printf "%s/plugins_registry/" (trimSuffix "/" .Values.node.kubeletPath) }}
path: /var/lib/kubelet/plugins_registry/
type: Directory
- name: efs-state-dir
hostPath:
@ -197,6 +182,3 @@ spec:
hostPath:
path: /etc/amazon/efs
type: DirectoryOrCreate
{{- with .Values.node.volumes }}
{{- toYaml . | nindent 8 }}
{{- end }}
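
The newer template derives every kubelet-relative path from node.kubeletPath instead of hardcoding /var/lib/kubelet, which is what makes the chart usable on distributions with a relocated kubelet root. A sketch (the path shown is illustrative, e.g. for a snap-based install):

node:
  kubeletPath: /var/snap/microk8s/common/var/lib/kubelet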

View File

@ -3,7 +3,6 @@ apiVersion: v1
kind: ServiceAccount
metadata:
name: {{ .Values.node.serviceAccount.name }}
namespace: {{ .Release.Namespace }}
labels:
app.kubernetes.io/name: {{ include "aws-efs-csi-driver.name" . }}
{{- with .Values.node.serviceAccount.annotations }}

View File

@ -5,20 +5,20 @@
nameOverride: ""
fullnameOverride: ""
replicaCount: 2
useFIPS: false
portRangeUpperBound: "21049"
image:
repository: public.ecr.aws/efs-csi-driver/amazon/aws-efs-csi-driver
tag: "v2.1.5"
repository: amazon/aws-efs-csi-driver
tag: "v1.7.7"
pullPolicy: IfNotPresent
sidecars:
livenessProbe:
image:
repository: public.ecr.aws/eks-distro/kubernetes-csi/livenessprobe
tag: v2.14.0-eks-1-31-5
tag: v2.11.0-eks-1-29-2
pullPolicy: IfNotPresent
resources: {}
securityContext:
@ -27,7 +27,7 @@ sidecars:
nodeDriverRegistrar:
image:
repository: public.ecr.aws/eks-distro/kubernetes-csi/node-driver-registrar
tag: v2.12.0-eks-1-31-5
tag: v2.9.3-eks-1-29-2
pullPolicy: IfNotPresent
resources: {}
securityContext:
@ -36,13 +36,12 @@ sidecars:
csiProvisioner:
image:
repository: public.ecr.aws/eks-distro/kubernetes-csi/external-provisioner
tag: v5.1.0-eks-1-31-5
tag: v3.6.3-eks-1-29-2
pullPolicy: IfNotPresent
resources: {}
securityContext:
readOnlyRootFilesystem: true
allowPrivilegeEscalation: false
additionalArgs: []
imagePullSecrets: []
@ -51,10 +50,6 @@ imagePullSecrets: []
controller:
# Specifies whether a deployment should be created
create: true
# Name of the CSI controller service
name: efs-csi-controller
# Number of replicas for the CSI controller service deployment
replicaCount: 2
# Number for the log level verbosity
logLevel: 2
# If set, add pv/pvc metadata to plugin create requests as parameters.
@ -68,7 +63,7 @@ controller:
# path on efs when deleting an access point
deleteAccessPointRootDir: false
podAnnotations: {}
podLabels: {}
podLabel: {}
hostNetwork: false
priorityClassName: system-cluster-critical
dnsPolicy: ClusterFirst
@ -94,9 +89,6 @@ controller:
- key: efs.csi.aws.com/agent-not-ready
operator: Exists
affinity: {}
env: []
volumes: []
volumeMounts: []
# Specifies whether a service account should be created
serviceAccount:
create: true
@ -106,12 +98,6 @@ controller:
# eks.amazonaws.com/role-arn: arn:aws:iam::111122223333:role/efs-csi-role
healthPort: 9909
regionalStsEndpoints: false
# Pod Disruption Budget
podDisruptionBudget:
enabled: false
# maxUnavailable: 1
minAvailable: 1
unhealthyPodEvictionPolicy: IfHealthyBudget
# securityContext on the controller pod
securityContext:
runAsNonRoot: false
@ -124,18 +110,7 @@ controller:
privileged: true
leaderElectionRenewDeadline: 10s
leaderElectionLeaseDuration: 15s
# TSCs without the label selector stanza
#
# Example:
#
# topologySpreadConstraints:
# - maxSkew: 1
# topologyKey: topology.kubernetes.io/zone
# whenUnsatisfiable: ScheduleAnyway
# - maxSkew: 1
# topologyKey: kubernetes.io/hostname
# whenUnsatisfiable: ScheduleAnyway
topologySpreadConstraints: []
## Node daemonset variables
@ -155,7 +130,6 @@ node:
# "fs-01234567":
# ip: 10.10.2.2
# region: us-east-2
priorityClassName: system-node-critical
dnsPolicy: ClusterFirst
dnsConfig:
{}
@ -164,9 +138,7 @@ node:
# dnsConfig:
# nameservers:
# - 169.254.169.253
podLabels: {}
podAnnotations: {}
additionalLabels: {}
resources:
{}
# limits:
@ -176,8 +148,7 @@ node:
# cpu: 100m
# memory: 128Mi
nodeSelector: {}
updateStrategy:
{}
updateStrategy: {}
# Override default strategy (RollingUpdate) to speed up deployment.
# This can be useful if helm timeouts are observed.
# type: OnDelete
@ -192,7 +163,6 @@ node:
operator: NotIn
values:
- fargate
- hybrid
# Specifies whether a service account should be created
serviceAccount:
create: true
@ -208,10 +178,6 @@ node:
runAsUser: 0
runAsGroup: 0
fsGroup: 0
env: []
volumes: []
volumeMounts: []
kubeletPath: /var/lib/kubelet
storageClasses: []
# Add StorageClass resources like:
@ -232,6 +198,3 @@ storageClasses: []
# ensureUniqueDirectory: true
# reclaimPolicy: Delete
# volumeBindingMode: Immediate
# Specifies whether to use helm hooks to apply the CSI driver
useHelmHooksForCSIDriver: true

View File

@ -18,7 +18,7 @@
"subdir": "contrib/mixin"
}
},
"version": "8c52b414f324d6369b77096af98d8f0416fe20cb",
"version": "8f933a5b5867d078c714fd6a9584aa47f450d8d0",
"sum": "XmXkOCriQIZmXwlIIFhqlJMa0e6qGWdxZD+ZDYaN0Po="
},
{
@ -78,8 +78,18 @@
"subdir": "grafana-builder"
}
},
"version": "393630ca7ba9b25258c098f1fd4c81962e3ca046",
"sum": "yxqWcq/N3E/a/XreeU6EuE6X7kYPnG0AspAQFKOjASo="
"version": "42da78cf7f2735c0cf57dee8f80cc52e9e7e57d8",
"sum": "G7B6E5sqWirDbMWRhifbLRfGgRFbIh9WCYa6X3kMh6g="
},
{
"source": {
"git": {
"remote": "https://github.com/grafana/jsonnet-libs.git",
"subdir": "mixin-utils"
}
},
"version": "42da78cf7f2735c0cf57dee8f80cc52e9e7e57d8",
"sum": "SRElwa/XrKAN8aZA9zvdRUx8iebl2It7KNQ7VFvMcBA="
},
{
"source": {
@ -98,8 +108,8 @@
"subdir": ""
}
},
"version": "1199b50e9d2ff53d4bb5fb2304ad1fb69d38e609",
"sum": "LfbgcJbilu4uBdKYZSvmkoOTPwEAzg10L3/VqKAIWtA="
"version": "4eee017d21cb63a303925d1dcd9fc5c496809b46",
"sum": "Kh0GbIycNmJPzk6IOMXn1BbtLNyaiiimclYk7+mvsns="
},
{
"source": {
@ -108,8 +118,8 @@
"subdir": ""
}
},
"version": "4ff562d5e8145940cf355f62cf2308895c4dca81",
"sum": "kiL19fTbXOtNglsmT62kOzIf/Xpu+YwoiMPAApDXhkE="
"version": "aad557d746a4e05d028a2ce542f61dde3b13c621",
"sum": "H+gpR450rmG2/USp9Y4vMfiz9FCUhKiG7xgqPNB1FJk="
},
{
"source": {
@ -118,7 +128,7 @@
"subdir": "jsonnet/kube-state-metrics"
}
},
"version": "2a95d4649b2fea55799032fb9c0b571c4ba7f776",
"version": "0b01e3abce1da521b5e620b8aaa76774bb0fda87",
"sum": "3bioG7CfTfY9zeu5xU4yon6Zt3kYvNkyl492nOhQxnM="
},
{
@ -128,7 +138,7 @@
"subdir": "jsonnet/kube-state-metrics-mixin"
}
},
"version": "2a95d4649b2fea55799032fb9c0b571c4ba7f776",
"version": "0b01e3abce1da521b5e620b8aaa76774bb0fda87",
"sum": "qclI7LwucTjBef3PkGBkKxF0mfZPbHnn4rlNWKGtR4c="
},
{
@ -138,8 +148,8 @@
"subdir": ""
}
},
"version": "d2dc72021d0247a5199007ed6e425d4615f9fa5c",
"sum": "rHh5ItS3fs1kwz8GKNEPiBBn58m4Bn5v9KAdBU+tf1U="
"version": "9abc7566be4b58233d7b2aa29665bf47425b30e6",
"sum": "lL17qG4Ejhae7giWBzD2y6HDSxaNgkg8kX7p0i4eUNA="
},
{
"source": {
@ -148,8 +158,8 @@
"subdir": "jsonnet/kube-prometheus"
}
},
"version": "1eea946a1532f1e8cccfceea98d907bf3a10b1d9",
"sum": "17LhiwefVfoNDsF3DcFZw/UL4PMU7YpNNUaOdaYd1gE="
"version": "696ce89f1f4d9107bd3a3b026178b320bac03b8e",
"sum": "NYKZ3k27E/3sk27DCNct1X7gqv8tmYxqACnOm96W7pc="
},
{
"source": {
@ -158,7 +168,7 @@
"subdir": "jsonnet/mixin"
}
},
"version": "7deab71d6d5921eeaf8c79e3ae8e31efe63783a9",
"version": "8ce76ccb32d054cb26898f498ec6bc947cd87d6c",
"sum": "gi+knjdxs2T715iIQIntrimbHRgHnpM8IFBJDD1gYfs=",
"name": "prometheus-operator-mixin"
},
@ -169,8 +179,8 @@
"subdir": "jsonnet/prometheus-operator"
}
},
"version": "7deab71d6d5921eeaf8c79e3ae8e31efe63783a9",
"sum": "LctDdofQostvviE5y8vpRKWGGO1ZKO3dgJe7P9xifW0="
"version": "8ce76ccb32d054cb26898f498ec6bc947cd87d6c",
"sum": "D8bNt3/sB6EO2AirgMZDt1M/5MwbLMpiQtKqCzfTrE4="
},
{
"source": {
@ -179,8 +189,8 @@
"subdir": "doc/alertmanager-mixin"
}
},
"version": "b5d1a64ad5bb0ff879705714d1e40cea82efbd5c",
"sum": "Mf4h1BYLle2nrgjf/HXrBbl0Zk8N+xaoEM017o0BC+k=",
"version": "79805945102a7ba3566f38a627ca3f1edd27756e",
"sum": "j5prvRrJdoCv7n45l5Uy2ghl1IDb9BBUqjwCDs4ZJoQ=",
"name": "alertmanager"
},
{
@ -190,8 +200,8 @@
"subdir": "docs/node-mixin"
}
},
"version": "11365f97bef6cb0e6259d536a7e21c49e3f5c065",
"sum": "xYj6VYFT/eafsbleNlC+Z2VfLy1CndyYrJs9BcTmnX8="
"version": "38d32a397720dfdaf547429ea1b40ab8cfa57e85",
"sum": "NcpQ0Hz0qciUqmOYoAR0X8GUK5pH/QiUXm1aDNgvua0="
},
{
"source": {
@ -200,7 +210,7 @@
"subdir": "documentation/prometheus-mixin"
}
},
"version": "a5ffa83be83be22e2ec9fd1d4765299d8d16119e",
"version": "9659e30dec7073703fb8548e7b0ad80dd0df48f0",
"sum": "2c+wttfee9TwuQJZIkNV7Tekem74Qgc7iZ842P28rNw=",
"name": "prometheus"
},
@ -222,7 +232,7 @@
"subdir": "mixin"
}
},
"version": "346d18bb0f8011c63d7106de494cf3b9253161a1",
"version": "7d7ea650b76cd201de8ee2c73f31497914026293",
"sum": "ieCD4eMgGbOlrI8GmckGPHBGQDcLasE1rULYq56W/bs=",
"name": "thanos-mixin"
}

View File

@ -1432,6 +1432,9 @@ spec:
type: object
type: array
type: object
clusterName:
description: ClusterName sets the kubernetes cluster name to send to pushgateway for grouping metrics
type: string
failedJobsHistoryLimit:
description: |-
FailedJobsHistoryLimit amount of failed jobs to keep for later analysis.
@ -1444,6 +1447,56 @@ spec:
Deprecated: Use FailedJobsHistoryLimit and SuccessfulJobsHistoryLimit respectively.
type: integer
labelSelectors:
description: |-
LabelSelectors is a list of selectors that we filter for.
When defined, only PVCs and PreBackupPods matching them are backed up.
items:
description: |-
A label selector is a label query over a set of resources. The result of matchLabels and
matchExpressions are ANDed. An empty label selector matches all objects. A null
label selector matches no objects.
properties:
matchExpressions:
description: matchExpressions is a list of label selector requirements. The requirements are ANDed.
items:
description: |-
A label selector requirement is a selector that contains values, a key, and an operator that
relates the key and values.
properties:
key:
description: key is the label key that the selector applies to.
type: string
operator:
description: |-
operator represents a key's relationship to a set of values.
Valid operators are In, NotIn, Exists and DoesNotExist.
type: string
values:
description: |-
values is an array of string values. If the operator is In or NotIn,
the values array must be non-empty. If the operator is Exists or DoesNotExist,
the values array must be empty. This array is replaced during a strategic
merge patch.
items:
type: string
type: array
required:
- key
- operator
type: object
type: array
matchLabels:
additionalProperties:
type: string
description: |-
matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels
map is equivalent to an element of matchExpressions, whose key field is "key", the
operator is "In", and the values array contains only "value". The requirements are ANDed.
type: object
type: object
x-kubernetes-map-type: atomic
type: array
podConfigRef:
description: |-
PodConfigRef describes the pod spec with which this action shall be executed.
@ -2346,6 +2399,9 @@ spec:
type: object
type: array
type: object
clusterName:
description: ClusterName sets the kubernetes cluster name to send to pushgateway for grouping metrics
type: string
failedJobsHistoryLimit:
description: |-
FailedJobsHistoryLimit amount of failed jobs to keep for later analysis.
@ -20718,6 +20774,9 @@ spec:
type: object
type: array
type: object
clusterName:
description: ClusterName sets the kubernetes cluster name to send to pushgateway for grouping metrics
type: string
concurrentRunsAllowed:
type: boolean
failedJobsHistoryLimit:
@ -20732,6 +20791,56 @@ spec:
Deprecated: Use FailedJobsHistoryLimit and SuccessfulJobsHistoryLimit respectively.
type: integer
labelSelectors:
description: |-
LabelSelectors is a list of selectors that we filter for.
When defined, only PVCs and PreBackupPods matching them are backed up.
items:
description: |-
A label selector is a label query over a set of resources. The result of matchLabels and
matchExpressions are ANDed. An empty label selector matches all objects. A null
label selector matches no objects.
properties:
matchExpressions:
description: matchExpressions is a list of label selector requirements. The requirements are ANDed.
items:
description: |-
A label selector requirement is a selector that contains values, a key, and an operator that
relates the key and values.
properties:
key:
description: key is the label key that the selector applies to.
type: string
operator:
description: |-
operator represents a key's relationship to a set of values.
Valid operators are In, NotIn, Exists and DoesNotExist.
type: string
values:
description: |-
values is an array of string values. If the operator is In or NotIn,
the values array must be non-empty. If the operator is Exists or DoesNotExist,
the values array must be empty. This array is replaced during a strategic
merge patch.
items:
type: string
type: array
required:
- key
- operator
type: object
type: array
matchLabels:
additionalProperties:
type: string
description: |-
matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels
map is equivalent to an element of matchExpressions, whose key field is "key", the
operator is "In", and the values array contains only "value". The requirements are ANDed.
type: object
type: object
x-kubernetes-map-type: atomic
type: array
podConfigRef:
description: |-
PodConfigRef describes the pod spec with which this action shall be executed.
@ -21504,6 +21613,9 @@ spec:
type: object
type: array
type: object
clusterName:
description: ClusterName sets the kubernetes cluster name to send to pushgateway for grouping metrics
type: string
concurrentRunsAllowed:
type: boolean
failedJobsHistoryLimit:
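
Both new CRD fields surface the same way on Schedule and Backup objects. A hedged manifest sketch, with field placement assumed from the property listing above (k8up.io/v1 API):

apiVersion: k8up.io/v1
kind: Schedule
metadata:
  name: nightly
spec:
  backup:
    schedule: '0 3 * * *'
    clusterName: zdt-trial-cluster   # forwarded to pushgateway for grouping metrics
    labelSelectors:                  # only matching PVCs / PreBackupPods are backed up
      - matchLabels:
          app: postgres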

View File

@ -2,7 +2,7 @@ apiVersion: v2
name: kubezero
description: KubeZero - Root App of Apps chart
type: application
version: 1.31.6
version: 1.32.3
home: https://kubezero.com
icon: https://cdn.zero-downtime.net/assets/kubezero/logo-small-64.png
keywords:
@ -15,4 +15,4 @@ dependencies:
- name: kubezero-lib
version: 0.2.1
repository: https://cdn.zero-downtime.net/charts
kubeVersion: ">= 1.31.0-0"
kubeVersion: ">= 1.32.0-0"

View File

@ -1,6 +1,6 @@
# kubezero
![Version: 1.31.6](https://img.shields.io/badge/Version-1.31.6-informational?style=flat-square) ![Type: application](https://img.shields.io/badge/Type-application-informational?style=flat-square)
![Version: 1.32.3](https://img.shields.io/badge/Version-1.32.3-informational?style=flat-square) ![Type: application](https://img.shields.io/badge/Type-application-informational?style=flat-square)
KubeZero - Root App of Apps chart
@ -14,7 +14,7 @@ KubeZero - Root App of Apps chart
## Requirements
Kubernetes: `>= 1.31.0-0`
Kubernetes: `>= 1.32.0-0`
| Repository | Name | Version |
|------------|------|---------|
@ -38,14 +38,15 @@ Kubernetes: `>= 1.31.0-0`
| argo.argocd-image-updater.enabled | bool | `false` | |
| argo.enabled | bool | `false` | |
| argo.namespace | string | `"argocd"` | |
| argo.targetRevision | string | `"0.3.1"` | |
| argo.targetRevision | string | `"0.3.2"` | |
| cert-manager.enabled | bool | `false` | |
| cert-manager.namespace | string | `"cert-manager"` | |
| cert-manager.targetRevision | string | `"0.9.12"` | |
| falco.enabled | bool | `false` | |
| falco.k8saudit.enabled | bool | `false` | |
| falco.targetRevision | string | `"0.1.2"` | |
| global.aws | object | `{}` | |
| global.aws.accountId | string | `"123456789012"` | |
| global.aws.region | string | `"the-moon"` | |
| global.clusterName | string | `"zdt-trial-cluster"` | |
| global.gcp | object | `{}` | |
| global.highAvailable | bool | `false` | |
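
With the empty global.aws object replaced by explicit placeholder defaults, a cluster values file is expected to override them, e.g.:

global:
  clusterName: zdt-trial-cluster
  aws:
    accountId: "123456789012"   # must be quoted as a string, not a number
    region: us-east-1           # illustrative; the chart default is the placeholder "the-moon"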

View File

@ -13,7 +13,7 @@ global:
addons:
enabled: true
targetRevision: 0.8.13
targetRevision: 0.8.14
external-dns:
enabled: false
forseti:
@ -32,7 +32,7 @@ addons:
network:
enabled: true
retain: true
targetRevision: 0.5.7
targetRevision: 0.5.8
cilium:
cluster: {}
@ -43,7 +43,7 @@ cert-manager:
storage:
enabled: false
targetRevision: 0.8.10
targetRevision: 0.8.11
lvm-localpv:
enabled: false
aws-ebs-csi-driver: