Compare commits


42 Commits

SHA1 Message Date
8eec28d76d chore(deps): update kubezero-storage-dependencies 2025-04-18 03:03:12 +00:00
c675e7aa1b Merge latest ci-tools-lib 2025-04-17 23:00:48 +00:00
00daef3b0b Squashed '.ci/' changes from a5cd89d7..9725c2ef
9725c2ef fix: ensure we dont remove rc builds

git-subtree-dir: .ci
git-subtree-split: 9725c2ef8842467951ec60adb1b45dfeca7618f5
2025-04-17 23:00:48 +00:00
9bb0e0e91a feat: reorg cluster upgrade scripts to allow support for KubeZero only clusters like GKE 2025-04-17 22:42:39 +00:00
e022db091c Squashed '.ci/' changes from 15e4d1f5..a5cd89d7
a5cd89d7 feat: improve tag parsing, ensure dirty is added if needed

git-subtree-dir: .ci
git-subtree-split: a5cd89d73157c829eaf12f91a68f73826fbb35e7
2025-04-17 22:37:10 +00:00
da2510c8df Merge latest ci-tools-lib 2025-04-17 22:37:10 +00:00
c4aab252e8 Squashed '.ci/' changes from a3928364..15e4d1f5
15e4d1f5 ci: make work with main branch
3feaf6fa chore: migrate to main branch

git-subtree-dir: .ci
git-subtree-split: 15e4d1f589c8e055944b2a4b58a9a50728e245b4
2025-04-17 22:00:32 +00:00
0a813c525c Merge latest ci-tools-lib 2025-04-17 22:00:32 +00:00
9d28705079 docs: typos 2025-04-17 12:08:06 +00:00
024a0fcfaf feat: ensure central secret keys exists 2025-04-17 13:06:28 +01:00
f88d6a2f0d docs: update 2025-04-17 10:35:40 +00:00
2c47a28e10 feat: MQ various version bumps 2025-04-17 10:35:13 +00:00
dfdf50f85f Merge pull request 'chore(deps): update kubezero-mq-dependencies' (#8) from renovate/kubezero-mq-kubezero-mq-dependencies into main
Reviewed-on: #8
2025-04-14 13:03:42 +00:00
d4ba1d1a01 chore(deps): update kubezero-mq-dependencies 2025-04-14 13:03:42 +00:00
fa06c13805 Merge pull request 'chore(deps): update nats docker tag to v2.11.1' (#9) from renovate/nats-2.x into main
Reviewed-on: #9
2025-04-14 13:03:18 +00:00
7f2208fea4 chore(deps): update nats docker tag to v2.11.1 2025-04-14 13:03:18 +00:00
c427e73f79 Merge pull request 'chore(deps): update kubezero-cache-dependencies' (#42) from renovate/kubezero-cache-kubezero-cache-dependencies into main
Reviewed-on: #42
2025-04-14 12:26:09 +00:00
2fd775624b chore(deps): update kubezero-cache-dependencies 2025-04-14 12:26:09 +00:00
ffaf037483 Merge pull request 'chore(deps): update helm release neo4j to v2025' (#62) from renovate/kubezero-graph-major-kubezero-graph-dependencies into main
Reviewed-on: #62
2025-04-14 11:40:03 +00:00
0664b2bed3 chore(deps): update helm release neo4j to v2025 2025-04-14 11:40:03 +00:00
6f69dfd8e9 docs: README 2025-04-11 15:22:00 +00:00
461d0a939e fix: typo 2025-04-11 15:17:15 +00:00
79074905e2 feat: latest ArgoCD incl. custom cmp 2025-04-11 15:08:14 +00:00
3391ed65d5 fix: ensure use right platform 2025-04-10 23:03:48 +00:00
88aa742dfd feat: introduce vals cmp plugin for argoCD 2025-04-10 22:50:08 +00:00
b48bef599c feat: more argoCD tuning for vals on AWS 2025-04-09 22:51:04 +00:00
3e3560afad Merge pull request 'chore(deps): update kubezero-argo-dependencies' (#68) from renovate/kubezero-argo-kubezero-argo-dependencies into main
Reviewed-on: #68
2025-04-09 22:27:19 +00:00
1d2af7e3d9 chore(deps): update kubezero-argo-dependencies 2025-04-09 22:27:19 +00:00
c8dd7fd2cc feat: tooling cleanup, first bootstrap draft, argo tweaks 2025-04-08 14:33:54 +00:00
daf70c9bfb fix: argocd bootstrap fix 2025-03-26 16:47:24 +00:00
eb059883c1 fix: ensure pre-install hook is run for kubezero 2025-03-25 11:17:30 +01:00
bca7f5fd45 fix: another argo migration fix 2025-03-24 22:10:38 +01:00
68997b535d fix: type in hook 2025-03-24 18:18:37 +00:00
ca69b55492 fix: allow multi-line secret val 2025-03-24 19:02:19 +01:00
01832f2e41 fix: improve argocd secret handling 2025-03-24 18:54:56 +01:00
94dd2f395e fix: kubezero root module fixes 2025-03-24 18:11:26 +01:00
6a7c0b6085 feat: more cluster bootstrap work 2025-03-24 16:44:11 +00:00
10de3a1047 Merge pull request 'chore(deps): update kubezero-argo-dependencies' (#63) from renovate/kubezero-argo-kubezero-argo-dependencies into main
Reviewed-on: #63
2025-03-21 13:51:46 +00:00
5a47b6be43 chore(deps): update kubezero-argo-dependencies 2025-03-21 13:51:46 +00:00
63eb787599 Merge pull request 'chore(deps): update public.ecr.aws/zero-downtime/zdt-argocd docker tag to v2.14.7' (#67) from renovate/public.ecr.aws-zero-downtime-zdt-argocd-2.x into main
Reviewed-on: #67
2025-03-21 13:51:29 +00:00
120072a34b chore(deps): update public.ecr.aws/zero-downtime/zdt-argocd docker tag to v2.14.7 2025-03-21 03:02:01 +00:00
63f96e58ba fix: ensure root app is re-created 2025-03-19 12:39:06 +01:00
47 changed files with 467 additions and 338 deletions

@@ -14,7 +14,7 @@ include .ci/podman.mk

 Add subtree to your project:
 ```
-git subtree add --prefix .ci https://git.zero-downtime.net/ZeroDownTime/ci-tools-lib.git master --squash
+git subtree add --prefix .ci https://git.zero-downtime.net/ZeroDownTime/ci-tools-lib.git main --squash
 ```

@@ -41,7 +41,8 @@ for image in sorted(images, key=lambda d: d['imagePushedAt'], reverse=True):
         _delete = True
         for tag in image["imageTags"]:
             # Look for at least one tag NOT beign a SemVer dev tag
-            if "-" not in tag:
+            # untagged dev builds get tagged as <tag>-g<commit>
+            if "-g" not in tag and "dirty" not in tag:
                 _delete = False

     if _delete:
         print("Deleting development image {}".format(image["imageTags"]))

@@ -8,8 +8,8 @@ SHELL := bash
 .PHONY: all # All targets are accessible for user
 .DEFAULT: help # Running Make will run the help target

-# Parse version from latest git semver tag
-GIT_TAG ?= $(shell git describe --tags --match v*.*.* 2>/dev/null || git rev-parse --short HEAD 2>/dev/null)
+# Parse version from latest git semver tag, use short commit otherwise
+GIT_TAG ?= $(shell git describe --tags --match v*.*.* --dirty 2>/dev/null || git describe --match="" --always --dirty 2>/dev/null)
 GIT_BRANCH ?= $(shell git rev-parse --abbrev-ref HEAD 2>/dev/null)

 TAG ::= $(GIT_TAG)
@@ -85,7 +85,7 @@ rm-image:
 ## some useful tasks during development
 ci-pull-upstream: ## pull latest shared .ci subtree
-	git subtree pull --prefix .ci ssh://git@git.zero-downtime.net/ZeroDownTime/ci-tools-lib.git master --squash -m "Merge latest ci-tools-lib"
+	git subtree pull --prefix .ci ssh://git@git.zero-downtime.net/ZeroDownTime/ci-tools-lib.git main --squash -m "Merge latest ci-tools-lib"

 create-repo: ## create new AWS ECR public repository
 	aws ecr-public create-repository --repository-name $(IMAGE) --region $(REGION)
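For reference, the two `git describe` forms behave roughly as follows (illustrative outputs, hashes invented); this is also where the `-g<commit>`/`-dirty` suffixes checked by the ECR cleanup above come from:

```bash
git describe --tags --match "v*.*.*" --dirty   # on a tagged, clean tree:  v1.2.3
git describe --tags --match "v*.*.*" --dirty   # 3 commits past the tag:   v1.2.3-3-g1a2b3c4
git describe --tags --match "v*.*.*" --dirty   # with local modifications: v1.2.3-dirty
git describe --match="" --always --dirty       # no semver tag at all:     1a2b3c4 (or 1a2b3c4-dirty)
```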

@@ -5,9 +5,9 @@ FROM docker.io/alpine:${ALPINE_VERSION}
 ARG ALPINE_VERSION
 ARG KUBE_VERSION=1.31

-ARG SOPS_VERSION="3.9.4"
-ARG VALS_VERSION="0.39.1"
-ARG HELM_SECRETS_VERSION="4.6.2"
+ARG SOPS_VERSION="3.10.1"
+ARG VALS_VERSION="0.40.1"
+ARG HELM_SECRETS_VERSION="4.6.3"

 RUN cd /etc/apk/keys && \
     wget "https://cdn.zero-downtime.net/alpine/stefan@zero-downtime.net-61bb6bfb.rsa.pub" && \
@@ -24,6 +24,7 @@ RUN cd /etc/apk/keys && \
     py3-yaml \
     restic \
     helm \
+    apache2-utils \
     ytt@testing \
     etcd-ctl@edge-community \
     cri-tools@kubezero \

@@ -19,7 +19,7 @@ KubeZero is a Kubernetes distribution providing an integrated container platform
 # Version / Support Matrix
 KubeZero releases track the same *minor* version of Kubernetes.
-Any 1.30.X-Y release of Kubezero supports any Kubernetes cluster 1.30.X.
+Any 1.31.X-Y release of Kubezero supports any Kubernetes cluster 1.31.X.

 KubeZero is distributed as a collection of versioned Helm charts, allowing custom upgrade schedules and module versions as needed.
@@ -28,15 +28,15 @@ KubeZero is distributed as a collection of versioned Helm charts, allowing custo
 gantt
     title KubeZero Support Timeline
     dateFormat YYYY-MM-DD
-    section 1.29
-    beta     :129b, 2024-07-01, 2024-07-31
-    release  :after 129b, 2024-11-30
     section 1.30
     beta     :130b, 2024-09-01, 2024-10-31
-    release  :after 130b, 2025-02-28
+    release  :after 130b, 2025-04-30
     section 1.31
-    beta     :131b, 2024-12-01, 2025-01-30
-    release  :after 131b, 2025-04-30
+    beta     :131b, 2024-12-01, 2025-02-28
+    release  :after 131b, 2025-06-30
+    section 1.32
+    beta     :132b, 2025-04-01, 2025-05-19
+    release  :after 132b, 2025-09-30
 ```
 [Upstream release policy](https://kubernetes.io/releases/)
@@ -44,7 +44,7 @@ gantt
 # Components

 ## OS
-- all compute nodes are running on Alpine V3.20
+- all compute nodes are running on Alpine V3.21
 - 1 or 2 GB encrypted root file system
 - no external dependencies at boot time, apart from container registries
 - focused on security and minimal footprint

admin/cluster_bootstrap.sh (new executable file, 44 lines)

@@ -0,0 +1,44 @@
+#!/bin/bash
+set -eEx
+set -o pipefail
+set -x
+
+VALUES=$1
+
+WORKDIR=$(mktemp -p /tmp -d kubezero.XXX)
+[ -z "$DEBUG" ] && trap 'rm -rf $WORKDIR' ERR EXIT
+
+SCRIPT_DIR=$( cd -- "$( dirname -- "${BASH_SOURCE[0]}" )" &> /dev/null && pwd )
+# shellcheck disable=SC1091
+. "$SCRIPT_DIR"/libhelm.sh
+CHARTS="$(dirname $SCRIPT_DIR)/charts"
+
+KUBE_VERSION="$(get_kube_version)"
+PLATFORM="$(get_kubezero_platform)"
+
+if [ -z "$KUBE_VERSION" ]; then
+  echo "Cannot contact cluster, cannot parse version!"
+  exit 1
+fi
+
+# Upload values into kubezero-values
+kubectl create ns kubezero || true
+kubectl create cm -n kubezero kubezero-values \
+  --from-file values.yaml=$VALUES || \
+  kubectl get cm -n kubezero kubezero-values -o=yaml | \
+    yq e ".data.\"values.yaml\" |= load_str($1)" | \
+    kubectl replace -f -
+
+### Main
+get_kubezero_values $ARGOCD
+
+# Always use embedded kubezero chart
+helm template $CHARTS/kubezero -f $WORKDIR/kubezero-values.yaml --kube-version $KUBE_VERSION --name-template kubezero --version ~$KUBE_VERSION --devel --output-dir $WORKDIR
+
+ARTIFACTS=(network addons cert-manager storage argo)
+
+for t in ${ARTIFACTS[@]}; do
+  _helm crds $t || true
+  _helm apply $t || true
+done
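Invocation appears to be a single values-file argument (file name hypothetical):

```bash
# bootstrap the core modules (network, addons, cert-manager, storage, argo) into a fresh cluster
./admin/cluster_bootstrap.sh my-cluster-values.yaml
```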

@@ -9,34 +9,23 @@ ARGOCD="${3:-true}"
 LOCAL_DEV=1

-#VERSION="latest"
-KUBE_VERSION="$(kubectl version -o json | jq -r .serverVersion.gitVersion)"
-
 WORKDIR=$(mktemp -p /tmp -d kubezero.XXX)
 [ -z "$DEBUG" ] && trap 'rm -rf $WORKDIR' ERR EXIT

 SCRIPT_DIR=$( cd -- "$( dirname -- "${BASH_SOURCE[0]}" )" &> /dev/null && pwd )
 # shellcheck disable=SC1091
 . "$SCRIPT_DIR"/libhelm.sh
 CHARTS="$(dirname $SCRIPT_DIR)/charts"

-# Guess platform from current context
-_auth_cmd=$(kubectl config view | yq .users[0].user.exec.command)
-if [ "$_auth_cmd" == "gke-gcloud-auth-plugin" ]; then
-  PLATFORM=gke
-elif [ "$_auth_cmd" == "aws-iam-authenticator" ]; then
-  PLATFORM=aws
-else
-  PLATFORM=nocloud
-fi
-
-parse_version() {
-  echo $([[ $1 =~ ^v[0-9]+\.[0-9]+\.[0-9]+ ]] && echo "${BASH_REMATCH[0]//v/}")
-}
-
-KUBE_VERSION=$(parse_version $KUBE_VERSION)
+KUBE_VERSION="$(get_kube_version)"
+PLATFORM="$(get_kubezero_platform)"
+
+if [ -z "$KUBE_VERSION" ]; then
+  echo "Cannot contact cluster, cannot parse version!"
+  exit 1
+fi

 ### Main
 get_kubezero_values $ARGOCD
@@ -45,6 +34,7 @@ helm template $CHARTS/kubezero -f $WORKDIR/kubezero-values.yaml --kube-version $
 # Root KubeZero apply directly and exit
 if [ ${ARTIFACTS[0]} == "kubezero" ]; then
+  [ -f $CHARTS/kubezero/hooks.d/pre-install.sh ] && . $CHARTS/kubezero/hooks.d/pre-install.sh
   kubectl replace -f $WORKDIR/kubezero/templates $(field_manager $ARGOCD)
   exit $?

@@ -14,7 +14,12 @@ pre_control_plane_upgrade_cluster() {

 # All things after the first controller / control plane upgrade
 post_control_plane_upgrade_cluster() {
-  echo
+  # delete previous root app controlled by kubezero module
+  kubectl delete application kubezero-git-sync -n argocd || true
+
+  # only patch appproject to keep SyncWindow in place
+  kubectl patch appproject kubezero -n argocd --type json -p='[{"op": "remove", "path": "/metadata/labels"}]' || true
+  kubectl patch appproject kubezero -n argocd --type json -p='[{"op": "remove", "path": "/metadata/annotations"}]' || true
 }

@@ -111,35 +111,42 @@ post_kubeadm() {
 }

-# Control plane upgrade
-control_plane_upgrade() {
-  CMD=$1
+# Migrate KubeZero Config to current version
+upgrade_kubezero_config() {
+  # get current values, argo app over cm
+  get_kubezero_values $ARGOCD
+
+  # tumble new config through migrate.py
+  migrate_argo_values.py < "$WORKDIR"/kubezero-values.yaml > "$WORKDIR"/new-kubezero-values.yaml \
+    && mv "$WORKDIR"/new-kubezero-values.yaml "$WORKDIR"/kubezero-values.yaml
+
+  update_kubezero_cm
+
+  if [ "$ARGOCD" == "true" ]; then
+    # update argo app
+    export kubezero_chart_version=$(yq .version $CHARTS/kubezero/Chart.yaml)
+    kubectl get application kubezero -n argocd -o yaml | \
+      yq ".spec.source.helm.valuesObject |= load(\"$WORKDIR/kubezero-values.yaml\") | .spec.source.targetRevision = strenv(kubezero_chart_version)" \
+      > $WORKDIR/new-argocd-app.yaml
+    kubectl replace -f $WORKDIR/new-argocd-app.yaml $(field_manager $ARGOCD)
+  fi
+}
+
+# Control plane upgrade
+kubeadm_upgrade() {
   ARGOCD=$(argo_used)

   render_kubeadm upgrade

-  if [[ "$CMD" =~ ^(cluster)$ ]]; then
+  # Check if we already have all controllers on the current version
+  OLD_CONTROLLERS=$(kubectl get nodes -l "node-role.kubernetes.io/control-plane=" --no-headers=true | grep -cv $KUBE_VERSION || true)
+
+  # run control plane upgrade
+  if [ "$OLD_CONTROLLERS" != "0" ]; then
     pre_control_plane_upgrade_cluster

-    # get current values, argo app over cm
-    get_kubezero_values $ARGOCD
-
-    # tumble new config through migrate.py
-    migrate_argo_values.py < "$WORKDIR"/kubezero-values.yaml > "$WORKDIR"/new-kubezero-values.yaml \
-      && mv "$WORKDIR"/new-kubezero-values.yaml "$WORKDIR"/kubezero-values.yaml
-
-    update_kubezero_cm
-
-    if [ "$ARGOCD" == "true" ]; then
-      # update argo app
-      export kubezero_chart_version=$(yq .version $CHARTS/kubezero/Chart.yaml)
-      kubectl get application kubezero -n argocd -o yaml | \
-        yq ".spec.source.helm.valuesObject |= load(\"$WORKDIR/kubezero-values.yaml\") | .spec.source.targetRevision = strenv(kubezero_chart_version)" \
-        > $WORKDIR/new-argocd-app.yaml
-      kubectl replace -f $WORKDIR/new-argocd-app.yaml $(field_manager $ARGOCD)
-    fi
-
     pre_kubeadm

     _kubeadm init phase upload-config kubeadm
@@ -155,7 +162,8 @@ control_plane_upgrade() {
     echo "Successfully upgraded KubeZero control plane to $KUBE_VERSION using kubeadm."

-  elif [[ "$CMD" =~ ^(final)$ ]]; then
+  # All controllers already on current version
+  else
     pre_cluster_upgrade_final

     # Finally upgrade addons last, with 1.32 we can ONLY call addon phase
@@ -320,7 +328,7 @@ apply_module() {
   get_kubezero_values $ARGOCD

   # Always use embedded kubezero chart
-  helm template $CHARTS/kubezero -f $WORKDIR/kubezero-values.yaml --version ~$KUBE_VERSION --devel --output-dir $WORKDIR
+  helm template $CHARTS/kubezero -f $WORKDIR/kubezero-values.yaml --kube-version $KUBE_VERSION --name-template kubezero --version ~$KUBE_VERSION --devel --output-dir $WORKDIR

   # CRDs first
   for t in $MODULES; do
@@ -330,6 +338,7 @@ apply_module() {
   for t in $MODULES; do
     # apply/replace app of apps directly
     if [ $t == "kubezero" ]; then
+      [ -f $CHARTS/kubezero/hooks.d/pre-install.sh ] && . $CHARTS/kubezero/hooks.d/pre-install.sh
       kubectl replace -f $WORKDIR/kubezero/templates $(field_manager $ARGOCD)
     else
       #_helm apply $t
@@ -410,12 +419,8 @@ for t in $@; do
     bootstrap) control_plane_node bootstrap;;
     join) control_plane_node join;;
     restore) control_plane_node restore;;
-    kubeadm_upgrade)
-      control_plane_upgrade cluster
-      ;;
-    finalize_cluster_upgrade)
-      control_plane_upgrade final
-      ;;
+    upgrade_control_plane) kubeadm_upgrade;;
+    upgrade_kubezero) upgrade_kubezero_config;;
     apply_*)
       ARGOCD=$(argo_used)
       apply_module "${t##apply_}";;
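With the renamed tasks, the dispatcher is presumably driven like this (script name assumed; task names taken from the case statement above):

```bash
./kubezero.sh upgrade_control_plane upgrade_kubezero   # kubeadm upgrade, then config migration
./kubezero.sh apply_network apply_addons               # apply_* is stripped to the module name
```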

@@ -44,10 +44,53 @@ function field_manager() {
 }

-function get_kubezero_secret() {
-  export _key="$1"
-
-  kubectl get secrets -n kubezero kubezero-secrets -o yaml | yq '.data.[env(_key)]' | base64 -d -w0
+function get_kube_version() {
+  local git_version="$(kubectl version -o json | jq -r .serverVersion.gitVersion)"
+  echo $([[ $git_version =~ ^v[0-9]+\.[0-9]+\.[0-9]+ ]] && echo "${BASH_REMATCH[0]//v/}")
+}
+
+function get_kubezero_platform() {
+  _auth_cmd=$(kubectl config view | yq .users[0].user.exec.command)
+  if [ "$_auth_cmd" == "gke-gcloud-auth-plugin" ]; then
+    PLATFORM=gke
+  elif [ "$_auth_cmd" == "aws-iam-authenticator" ]; then
+    PLATFORM=aws
+  else
+    PLATFORM=nocloud
+  fi
+
+  echo $PLATFORM
+}
+
+function get_secret_val() {
+  local ns=$1
+  local secret=$2
+  local val=$(kubectl get secret -n $ns $secret -o yaml | yq ".data.\"$3\"")
+
+  if [ "$val" != "null" ]; then
+    echo -n $val | base64 -d -w0
+  else
+    echo ""
+  fi
+}
+
+function get_kubezero_secret() {
+  get_secret_val kubezero kubezero-secrets "$1"
+}
+
+function ensure_kubezero_secret_key() {
+  local secret="$(kubectl get secret -n kubezero kubezero-secrets -o yaml)"
+
+  local key=""
+  local val=""
+  for key in $@; do
+    val=$(echo "$secret" | yq ".data.\"$key\"")
+    if [ "$val" == "null" ]; then
+      kubectl patch secret -n kubezero kubezero-secrets --patch="{\"data\": { \"$key\": \"\" }}"
+    fi
+  done
 }
@@ -55,7 +98,9 @@ function set_kubezero_secret() {
   local key="$1"
   local val="$2"

-  kubectl patch secret -n kubezero kubezero-secrets --patch="{\"data\": { \"$key\": \"$(echo -n $val |base64 -w0)\" }}"
+  if [ -n "$val" ]; then
+    kubectl patch secret -n kubezero kubezero-secrets --patch="{\"data\": { \"$key\": \"$(echo -n "$val" |base64 -w0)\" }}"
+  fi
 }
@@ -78,6 +123,7 @@ function update_kubezero_cm() {
     kubectl replace -f -
 }

+
 # sync kubezero-values CM from ArgoCD app
 function sync_kubezero_cm_from_argo() {
   get_kubezero_values true
@@ -140,7 +186,7 @@ function delete_ns() {
 # Extract crds via helm calls
 function crds() {
-  helm secrets --evaluate-templates template $(chart_location $chart) -n $namespace --name-template $module $targetRevision --include-crds -f $WORKDIR/values.yaml $API_VERSIONS --kube-version $KUBE_VERSION $@ | python3 -c '
+  helm template $(chart_location $chart) -n $namespace --name-template $module $targetRevision --include-crds -f $WORKDIR/values.yaml $API_VERSIONS --kube-version $KUBE_VERSION $@ | python3 -c '
 #!/usr/bin/python3
 import yaml
 import sys
@@ -212,7 +258,7 @@ function _helm() {
   if [ $action == "crds" ]; then
     # Pre-crd hook
-    [ -f $WORKDIR/$chart/hooks.d/pre-crds.sh ] && (cd $WORKDIR; bash ./$chart/hooks.d/pre-crds.sh)
+    [ -f $WORKDIR/$chart/hooks.d/pre-crds.sh ] && . $WORKDIR/$chart/hooks.d/pre-crds.sh

     crds
@@ -224,7 +270,7 @@ function _helm() {
     create_ns $namespace

     # Optional pre hook
-    [ -f $WORKDIR/$chart/hooks.d/pre-install.sh ] && (cd $WORKDIR; bash ./$chart/hooks.d/pre-install.sh)
+    [ -f $WORKDIR/$chart/hooks.d/pre-install.sh ] && . $WORKDIR/$chart/hooks.d/pre-install.sh

     render
     [ $action == "replace" ] && kubectl replace -f $WORKDIR/helm.yaml $(field_manager $ARGOCD) && rc=$? || rc=$?
@@ -233,7 +279,7 @@ function _helm() {
     [ $action == "apply" -o $rc -ne 0 ] && kubectl apply -f $WORKDIR/helm.yaml --server-side --force-conflicts $(field_manager $ARGOCD) && rc=$? || rc=$?

     # Optional post hook
-    [ -f $WORKDIR/$chart/hooks.d/post-install.sh ] && (cd $WORKDIR; bash ./$chart/hooks.d/post-install.sh)
+    [ -f $WORKDIR/$chart/hooks.d/post-install.sh ] && . $WORKDIR/$chart/hooks.d/post-install.sh

   elif [ $action == "delete" ]; then
     render
@@ -246,6 +292,7 @@ function _helm() {
   return 0
 }

+
 function all_nodes_upgrade() {
   CMD="$1"
@@ -306,7 +353,7 @@ EOF
 }

-function control_plane_upgrade() {
+function admin_job() {
   TASKS="$1"

   [ -z "$KUBE_VERSION" ] && KUBE_VERSION="latest"
@@ -316,7 +363,7 @@ function admin_job() {
 apiVersion: v1
 kind: Pod
 metadata:
-  name: kubezero-upgrade
+  name: kubezero-admin-job
   namespace: kube-system
   labels:
     app: kubezero-upgrade
@@ -361,10 +408,10 @@ spec:
   restartPolicy: Never
 EOF

-  kubectl wait pod kubezero-upgrade -n kube-system --timeout 120s --for=condition=initialized 2>/dev/null
+  kubectl wait pod kubezero-admin-job -n kube-system --timeout 120s --for=condition=initialized 2>/dev/null
   while true; do
-    kubectl logs kubezero-upgrade -n kube-system -f 2>/dev/null && break
+    kubectl logs kubezero-admin-job -n kube-system -f 2>/dev/null && break
     sleep 3
   done
-  kubectl delete pod kubezero-upgrade -n kube-system
+  kubectl delete pod kubezero-admin-job -n kube-system
 }

@@ -15,37 +15,31 @@ SCRIPT_DIR=$( cd -- "$( dirname -- "${BASH_SOURCE[0]}" )" &> /dev/null && pwd )
 ARGOCD=$(argo_used)

 echo "Checking that all pods in kube-system are running ..."
-#waitSystemPodsRunning
+waitSystemPodsRunning

 [ "$ARGOCD" == "true" ] && disable_argo

-# Check if we already have all controllers on the current version
-OLD_CONTROLLERS=$(kubectl get nodes -l "node-role.kubernetes.io/control-plane=" --no-headers=true | grep -cv $KUBE_VERSION || true)
-
-# All controllers already on current version
-if [ "$OLD_CONTROLLERS" == "0" ]; then
-  control_plane_upgrade finalize_cluster_upgrade
-# Otherwise run control plane upgrade
-else
-  control_plane_upgrade kubeadm_upgrade
-fi
-
-echo "<Return> to continue"
-read -r
+admin_job "upgrade_control_plane, upgrade_kubezero"

 #echo "Adjust kubezero values as needed:"
 # shellcheck disable=SC2015
 #[ "$ARGOCD" == "true" ] && kubectl edit app kubezero -n argocd || kubectl edit cm kubezero-values -n kubezero
+#echo "<Return> to continue"
+#read -r

 # upgrade modules
-control_plane_upgrade "apply_network, apply_addons, apply_storage, apply_operators"
+admin_job "apply_kubezero, apply_network, apply_addons, apply_storage, apply_operators"

 echo "Checking that all pods in kube-system are running ..."
 waitSystemPodsRunning

 echo "Applying remaining KubeZero modules..."

-control_plane_upgrade "apply_cert-manager, apply_istio, apply_istio-ingress, apply_istio-private-ingress, apply_logging, apply_metrics, apply_telemetry, apply_argo"
+admin_job "apply_cert-manager, apply_istio, apply_istio-ingress, apply_istio-private-ingress, apply_logging, apply_metrics, apply_telemetry, apply_argo"
+
+# we replace the project during v1.31 so disable again
+[ "$ARGOCD" == "true" ] && disable_argo

 # Final step is to commit the new argocd kubezero app
 kubectl get app kubezero -n argocd -o yaml | yq 'del(.status) | del(.metadata) | del(.operation) | .metadata.name="kubezero" | .metadata.namespace="argocd"' | yq 'sort_keys(..)' > $ARGO_APP
@@ -57,6 +51,12 @@ while true; do
   sleep 1
 done

+echo "Once all controller nodes are running on $KUBE_VERSION, <return> to continue"
+read -r
+
+# Final control plane upgrades
+admin_job "upgrade_control_plane"
+
 echo "Please commit $ARGO_APP as the updated kubezero/application.yaml for your cluster."
 echo "Then head over to ArgoCD for this cluster and sync all KubeZero modules to apply remaining upgrades."

@@ -3,7 +3,7 @@ name: kubezero-addons
 description: KubeZero umbrella chart for various optional cluster addons
 type: application
 version: 0.8.13
-appVersion: v1.30
+appVersion: v1.31
 home: https://kubezero.com
 icon: https://cdn.zero-downtime.net/assets/kubezero/logo-small-64.png
 keywords:

@@ -25,3 +25,4 @@
 README.md.gotmpl
 dashboards.yaml
 jsonnet
+update.sh

@@ -1,7 +1,7 @@
 apiVersion: v2
 description: KubeZero Argo - Events, Workflow, CD
 name: kubezero-argo
-version: 0.3.0
+version: 0.3.2
 home: https://kubezero.com
 icon: https://cdn.zero-downtime.net/assets/kubezero/logo-small-64.png
 keywords:
@@ -18,15 +18,15 @@ dependencies:
     version: 0.2.1
     repository: https://cdn.zero-downtime.net/charts/
   - name: argo-events
-    version: 2.4.13
+    version: 2.4.15
     repository: https://argoproj.github.io/argo-helm
     condition: argo-events.enabled
   - name: argo-cd
-    version: 7.8.9
+    version: 7.8.23
    repository: https://argoproj.github.io/argo-helm
    condition: argo-cd.enabled
   - name: argocd-image-updater
-    version: 0.12.0
+    version: 0.12.1
    repository: https://argoproj.github.io/argo-helm
    condition: argocd-image-updater.enabled
 kubeVersion: ">= 1.30.0-0"

@@ -1,6 +1,6 @@
 # kubezero-argo

-![Version: 0.3.0](https://img.shields.io/badge/Version-0.3.0-informational?style=flat-square)
+![Version: 0.3.2](https://img.shields.io/badge/Version-0.3.2-informational?style=flat-square)

 KubeZero Argo - Events, Workflow, CD
@@ -18,9 +18,9 @@ Kubernetes: `>= 1.30.0-0`

 | Repository | Name | Version |
 |------------|------|---------|
-| https://argoproj.github.io/argo-helm | argo-cd | 7.8.9 |
-| https://argoproj.github.io/argo-helm | argo-events | 2.4.13 |
-| https://argoproj.github.io/argo-helm | argocd-image-updater | 0.12.0 |
+| https://argoproj.github.io/argo-helm | argo-cd | 7.8.23 |
+| https://argoproj.github.io/argo-helm | argo-events | 2.4.15 |
+| https://argoproj.github.io/argo-helm | argocd-image-updater | 0.12.1 |
 | https://cdn.zero-downtime.net/charts/ | kubezero-lib | 0.2.1 |

 ## Values
@@ -42,6 +42,7 @@ Kubernetes: `>= 1.30.0-0`
 | argo-cd.configs.params."controller.sync.timeout.seconds" | int | `1800` | |
 | argo-cd.configs.params."server.enable.gzip" | bool | `true` | |
 | argo-cd.configs.params."server.insecure" | bool | `true` | |
+| argo-cd.configs.secret.argocdServerAdminPassword | string | `"secretref+k8s://v1/Secret/kubezero/kubezero-secrets/argo-cd.adminPassword"` | |
 | argo-cd.configs.secret.createSecret | bool | `false` | |
 | argo-cd.configs.ssh.extraHosts | string | `"git.zero-downtime.net ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQC7UgK7Z4dDcuIW1uMOsuwhrqdkJCvYG/ZjHtLM7WaKFxVRnzNnNkQJNncWIGNDUQ1xxrbsoSNRZDtk0NlOjNtx2aApSWl4iWghkpXELvsZtOZ7I9FSC/E6ImLC3KWfK7P0mhZaF6kHPfpu8Y6pjUyLBTpV1AaVwr0I8onyqGazJOVotTFaBFEi/sT0O2FUk7agwZYfj61w3JGOy3c+fmBcK3lXf/QM90tosOpJNuJ7n5Vk5FDDLkl9rO4XR/+mXHFvITiWb8F5C50YAwjYcy36yWSSryUAAHAuqpgotwh65vSG6fZvFhmEwO2BrCkOV5+k8iRfhy/yZODJzZ5V/5cbMbdZrY6lm/p5/S1wv8BEyPekBGdseqQjEO0IQiQHcMrfgTrrQ7ndbZzVZRByZI+wbGFkBCzNSJcNsoiHjs2EblxYyuW0qUvvrBxLnySvaxyPm4BOukSAZAOEaUrajpQlnHdnY1CGcgbwxw0LNv3euKQ3tDJSUlKO0Wd8d85PRv1THW4Ui9Lhsmv+BPA2vJZDOkx/n0oyPFAB0oyd5JNM38eFxLCmPC2OE63gDP+WmzVO61YCVTnvhpQjEOLawEWVFsk0y25R5z5BboDqJaOFnZF6i517O96cn17z3Ls4hxw3+0rlKczYRoyfUHs7KQENa4mY8YlJweNTBgld//RMUQ=="` | |
 | argo-cd.configs.styles | string | `".sidebar__logo img { content: url(https://cdn.zero-downtime.net/assets/kubezero/logo-small-64.png); }\n.sidebar__logo__text-logo { height: 0em; }\n.sidebar { background: linear-gradient(to bottom, #6A4D79, #493558, #2D1B30, #0D0711); }\n"` | |
@@ -53,30 +54,25 @@ Kubernetes: `>= 1.30.0-0`
 | argo-cd.dex.enabled | bool | `false` | |
 | argo-cd.enabled | bool | `false` | |
 | argo-cd.global.image.repository | string | `"public.ecr.aws/zero-downtime/zdt-argocd"` | |
-| argo-cd.global.image.tag | string | `"v2.14.5"` | |
+| argo-cd.global.image.tag | string | `"v2.14.9-1"` | |
 | argo-cd.global.logging.format | string | `"json"` | |
 | argo-cd.global.networkPolicy.create | bool | `true` | |
 | argo-cd.istio.enabled | bool | `false` | |
 | argo-cd.istio.gateway | string | `"istio-ingress/ingressgateway"` | |
 | argo-cd.istio.ipBlocks | list | `[]` | |
-| argo-cd.kubezero.bootstrap | bool | `false` | |
+| argo-cd.kubezero.bootstrap | bool | `false` | deploy the KubeZero Project and GitSync Root App |
+| argo-cd.kubezero.password | string | `"secretref+k8s://v1/Secret/kubezero/kubezero-secrets/argo-cd.kubezero.password"` | |
 | argo-cd.kubezero.path | string | `"/"` | |
-| argo-cd.kubezero.repoUrl | string | `"https://git.my.org/thiscluster"` | |
+| argo-cd.kubezero.repoUrl | string | `""` | |
+| argo-cd.kubezero.sshPrivateKey | string | `"secretref+k8s://v1/Secret/kubezero/kubezero-secrets/argo-cd.kubezero.sshPrivateKey"` | |
 | argo-cd.kubezero.targetRevision | string | `"HEAD"` | |
+| argo-cd.kubezero.username | string | `"secretref+k8s://v1/Secret/kubezero/kubezero-secrets/argo-cd.kubezero.username"` | |
 | argo-cd.notifications.enabled | bool | `false` | |
 | argo-cd.redisSecretInit.enabled | bool | `false` | |
-| argo-cd.repoServer.clusterRoleRules.enabled | bool | `true` | |
-| argo-cd.repoServer.clusterRoleRules.rules[0].apiGroups[0] | string | `""` | |
-| argo-cd.repoServer.clusterRoleRules.rules[0].resources[0] | string | `"secrets"` | |
-| argo-cd.repoServer.clusterRoleRules.rules[0].verbs[0] | string | `"get"` | |
-| argo-cd.repoServer.clusterRoleRules.rules[0].verbs[1] | string | `"watch"` | |
-| argo-cd.repoServer.clusterRoleRules.rules[0].verbs[2] | string | `"list"` | |
 | argo-cd.repoServer.metrics.enabled | bool | `false` | |
 | argo-cd.repoServer.metrics.serviceMonitor.enabled | bool | `true` | |
-| argo-cd.repoServer.volumeMounts[0].mountPath | string | `"/home/argocd/.kube"` | |
-| argo-cd.repoServer.volumeMounts[0].name | string | `"kubeconfigs"` | |
 | argo-cd.repoServer.volumes[0].emptyDir | object | `{}` | |
-| argo-cd.repoServer.volumes[0].name | string | `"kubeconfigs"` | |
+| argo-cd.repoServer.volumes[0].name | string | `"cmp-tmp"` | |
 | argo-cd.server.metrics.enabled | bool | `false` | |
 | argo-cd.server.metrics.serviceMonitor.enabled | bool | `true` | |
 | argo-cd.server.service.servicePortHttpsName | string | `"grpc"` | |

charts/kubezero-argo/hooks.d/pre-install.sh (28 lines, normal file → executable file)

@@ -1,6 +1,26 @@
-#!/bin/sh
-
-# Bootstrap kubezero-git-sync app if it doenst exist
-kubectl get application kubezero-git-sync -n argocd && rc=$? || rc=$?
-
-[ $rc != 0 ] && yq -i '.argo-cd.kubezero.bootstrap=true' values.yaml
+# Bootstrap kubezero-git-sync app only if it doesnt exist yet
+kubectl get application kubezero-git-sync -n argocd || \
+  yq -i '.argo-cd.kubezero.bootstrap=true' $WORKDIR/values.yaml
+
+# Ensure we have an adminPassword or migrate existing one
+PW=$(get_kubezero_secret argo-cd.adminPassword)
+if [ -z "$PW" ]; then
+  # Check for existing password in actual secret
+  NEW_PW=$(get_secret_val argocd argocd-secret "admin.password")
+
+  if [ -z "$NEW_PW" ];then
+    ARGO_PWD=$(date +%s | sha256sum | base64 | head -c 12 ; echo)
+    NEW_PW=$(htpasswd -nbBC 10 "" $ARGO_PWD | tr -d ':\n' | sed 's/$2y/$2a/')
+
+    set_kubezero_secret argo-cd.adminPasswordClear $ARGO_PWD
+  fi
+
+  set_kubezero_secret argo-cd.adminPassword "$NEW_PW"
+fi
+
+# Redis secret
+kubectl get secret argocd-redis -n argocd || kubectl create secret generic argocd-redis -n argocd \
+  --from-literal=auth=$(date +%s | sha256sum | base64 | head -c 16 ; echo)
+
+# required keys in kubezero-secrets, as --ignore-missing-values in helm-secrets doesnt work with vals ;-(
+ensure_kubezero_secret_key argo-cd.kubezero.username argo-cd.kubezero.password argo-cd.kubezero.sshPrivateKey
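The password bootstrap relies on the `htpasswd` binary from the apache2-utils package added to the admin image above; schematically (hash shortened):

```bash
ARGO_PWD=$(date +%s | sha256sum | base64 | head -c 12 ; echo)   # 12-char clear-text password
htpasswd -nbBC 10 "" "$ARGO_PWD"
# -> ":$2y$10$..."   (empty user name, hence the leading colon)
htpasswd -nbBC 10 "" "$ARGO_PWD" | tr -d ':\n' | sed 's/$2y/$2a/'
# -> "$2a$10$..."    the bcrypt variant Argo CD accepts for admin.password
```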

@@ -1,22 +0,0 @@
-# KubeZero secrets
-#
-test: supergeheim
-secrets:
-  - name: argocd-secret
-    optional: false
-    data:
-      admin.password: test
-      admin.passwordMtime: now
-      server.secretkey: boohoo
-  - name: zero-downtime-gitea
-    optional: true
-    data:
-      name: zero-downtime-gitea
-      type: git
-      url: ssh://git@git.zero-downtime.net/quark/kube-grandnagus.git
-      sshPrivateKey: |
-        boohooKey
-    metadata:
-      labels:
-        argocd.argoproj.io/secret-type: repository

@@ -0,0 +1,13 @@
+{{- if index .Values "argo-cd" "enabled" }}
+apiVersion: v1
+kind: Secret
+metadata:
+  name: argocd-secret
+  namespace: argocd
+  labels:
+    {{- include "kubezero-lib.labels" . | nindent 4 }}
+type: Opaque
+stringData:
+  admin.password: {{ index .Values "argo-cd" "configs" "secret" "argocdServerAdminPassword" }}
+  admin.passwordMtime: "2006-01-02T15:04:05Z"
+{{- end }}

@@ -4,6 +4,8 @@ kind: Application
 metadata:
   name: kubezero-git-sync
   namespace: argocd
+  labels:
+    {{- include "kubezero-lib.labels" . | nindent 4 }}
   annotations:
     argocd.argoproj.io/sync-wave: "-20"
 spec:
@@ -17,12 +19,15 @@ spec:
     targetRevision: {{ .targetRevision }}
     path: {{ .path }}
 {{- end }}
-    directory:
-      recurse: true
+    plugin:
+      name: kubezero-git-sync

   syncPolicy:
     automated:
       prune: true
     syncOptions:
       - ServerSideApply=true
       - ApplyOutOfSyncOnly=true
+  info:
+    - name: "Source:"
+      value: "https://git.zero-downtime.net/ZeroDownTime/KubeZero/src/branch/release/v1.31/"
 {{- end }}

@@ -0,0 +1,21 @@
+{{- if index .Values "argo-cd" "kubezero" "repoUrl" }}
+apiVersion: v1
+kind: Secret
+metadata:
+  name: kubezero-git-sync
+  namespace: argocd
+  labels:
+    argocd.argoproj.io/secret-type: repository
+    {{- include "kubezero-lib.labels" . | nindent 4 }}
+type: Opaque
+stringData:
+  name: kubezero-git-sync
+  type: git
+  url: {{ index .Values "argo-cd" "kubezero" "repoUrl" }}
+{{- if hasPrefix "https" (index .Values "argo-cd" "kubezero" "repoUrl") }}
+  username: {{ index .Values "argo-cd" "kubezero" "username" }}
+  password: {{ index .Values "argo-cd" "kubezero" "password" }}
+{{- else }}
+  sshPrivateKey: {{ index .Values "argo-cd" "kubezero" "sshPrivateKey" }}
+{{- end }}
+{{- end }}

@@ -4,6 +4,8 @@ kind: AppProject
 metadata:
   name: kubezero
   namespace: argocd
+  labels:
+    {{- include "kubezero-lib.labels" . | nindent 4 }}
 spec:
   clusterResourceWhitelist:
     - group: '*'
@@ -15,4 +17,10 @@ spec:
   sourceRepos:
     - https://cdn.zero-downtime.net/charts
     - {{ index .Values "argo-cd" "kubezero" "repoUrl" }}
+  syncWindows:
+    - kind: deny
+      schedule: '0 * * * *'
+      duration: 24h
+      namespaces:
+        - '*'
 {{- end }}

@@ -25,7 +25,7 @@ argo-events:
   # do NOT use -alpine tag as the entrypoint differs
   versions:
     - version: 2.10.11
-      natsImage: nats:2.10.11-scratch
+      natsImage: nats:2.11.1-scratch
       metricsExporterImage: natsio/prometheus-nats-exporter:0.16.0
       configReloaderImage: natsio/nats-server-config-reloader:0.14.1
       startCommand: /nats-server
@@ -38,7 +38,7 @@ argo-cd:
     format: json
   image:
     repository: public.ecr.aws/zero-downtime/zdt-argocd
-    tag: v2.14.5
+    tag: v2.14.9-1
   networkPolicy:
     create: true
@@ -81,10 +81,9 @@ argo-cd:
     secret:
       createSecret: false
       # `htpasswd -nbBC 10 "" $ARGO_PWD | tr -d ':\n' | sed 's/$2y/$2a/' | base64 -w0`
-      # argocdServerAdminPassword: "$2a$10$ivKzaXVxMqdeDSfS3nqi1Od3iDbnL7oXrixzDfZFRHlXHnAG6LydG"
-      # argocdServerAdminPassword: "ref+file://secrets.yaml#/test"
-      # argocdServerAdminPasswordMtime: "2020-04-24T15:33:09BST"
+      argocdServerAdminPassword: secretref+k8s://v1/Secret/kubezero/kubezero-secrets/argo-cd.adminPassword

     ssh:
       extraHosts: "git.zero-downtime.net ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQC7UgK7Z4dDcuIW1uMOsuwhrqdkJCvYG/ZjHtLM7WaKFxVRnzNnNkQJNncWIGNDUQ1xxrbsoSNRZDtk0NlOjNtx2aApSWl4iWghkpXELvsZtOZ7I9FSC/E6ImLC3KWfK7P0mhZaF6kHPfpu8Y6pjUyLBTpV1AaVwr0I8onyqGazJOVotTFaBFEi/sT0O2FUk7agwZYfj61w3JGOy3c+fmBcK3lXf/QM90tosOpJNuJ7n5Vk5FDDLkl9rO4XR/+mXHFvITiWb8F5C50YAwjYcy36yWSSryUAAHAuqpgotwh65vSG6fZvFhmEwO2BrCkOV5+k8iRfhy/yZODJzZ5V/5cbMbdZrY6lm/p5/S1wv8BEyPekBGdseqQjEO0IQiQHcMrfgTrrQ7ndbZzVZRByZI+wbGFkBCzNSJcNsoiHjs2EblxYyuW0qUvvrBxLnySvaxyPm4BOukSAZAOEaUrajpQlnHdnY1CGcgbwxw0LNv3euKQ3tDJSUlKO0Wd8d85PRv1THW4Ui9Lhsmv+BPA2vJZDOkx/n0oyPFAB0oyd5JNM38eFxLCmPC2OE63gDP+WmzVO61YCVTnvhpQjEOLawEWVFsk0y25R5z5BboDqJaOFnZF6i517O96cn17z3Ls4hxw3+0rlKczYRoyfUHs7KQENa4mY8YlJweNTBgld//RMUQ=="
@@ -117,14 +116,8 @@ argo-cd:
       serviceMonitor:
         enabled: true

-    volumes:
-      - name: kubeconfigs
-        emptyDir: {}
-    volumeMounts:
-      - mountPath: /home/argocd/.kube
-        name: kubeconfigs
-
     # Allow vals to read internal secrets across all namespaces
+    # @ignored
     clusterRoleRules:
       enabled: true
       rules:
@@ -132,6 +125,34 @@ argo-cd:
         resources: ["secrets"]
         verbs: ["get", "watch", "list"]

+    # cmp kubezero-git-sync plugin
+    # @ignored
+    extraContainers:
+      - name: cmp-kubezero-git-sync
+        image: '{{ default .Values.global.image.repository .Values.repoServer.image.repository }}:{{ default (include "argo-cd.defaultTag" .) .Values.repoServer.image.tag }}'
+        imagePullPolicy: '{{ default .Values.global.image.imagePullPolicy .Values.repoServer.image.imagePullPolicy }}'
+        command: ["/var/run/argocd/argocd-cmp-server"]
+        volumeMounts:
+          - mountPath: /var/run/argocd
+            name: var-files
+          - mountPath: /home/argocd/cmp-server/plugins
+            name: plugins
+          - mountPath: /tmp
+            name: cmp-tmp
+        securityContext:
+          runAsNonRoot: true
+          readOnlyRootFilesystem: true
+          runAsUser: 999
+          allowPrivilegeEscalation: false
+          seccompProfile:
+            type: RuntimeDefault
+          capabilities:
+            drop:
+              - ALL
+    volumes:
+      - name: cmp-tmp
+        emptyDir: {}
+
   server:
     # Rename former https port to grpc, works with istio + insecure
     service:
@@ -163,12 +184,16 @@ argo-cd:
     ipBlocks: []

   kubezero:
-    # only set this once initially to prevent the circular dependency
+    # -- deploy the KubeZero Project and GitSync Root App
     bootstrap: false
-    repoUrl: "https://git.my.org/thiscluster"
+
+    # valid git+ssh repository url
+    repoUrl: ""
     path: "/"
     targetRevision: HEAD
+    sshPrivateKey: secretref+k8s://v1/Secret/kubezero/kubezero-secrets/argo-cd.kubezero.sshPrivateKey
+    username: secretref+k8s://v1/Secret/kubezero/kubezero-secrets/argo-cd.kubezero.username
+    password: secretref+k8s://v1/Secret/kubezero/kubezero-secrets/argo-cd.kubezero.password

 argocd-image-updater:
   enabled: false
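Taken together, wiring a cluster repo in would now look roughly like this (URL and key path invented; the credentials themselves live in kubezero-secrets and are resolved via the secretref values above):

```bash
cat <<'EOF' >> my-values.yaml
argo-cd:
  enabled: true
  kubezero:
    repoUrl: "ssh://git@git.example.com/org/my-cluster.git"
    targetRevision: main
EOF
set_kubezero_secret argo-cd.kubezero.sshPrivateKey "$(cat ~/.ssh/kubezero_deploy_key)"
```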

@@ -19,7 +19,7 @@ keycloak:
   resources:
     limits:
       #cpu: 750m
-      memory: 768Mi
+      memory: 1024Mi
     requests:
       cpu: 100m
       memory: 512Mi

@@ -2,7 +2,7 @@ apiVersion: v2
 name: kubezero-cache
 description: KubeZero Cache module
 type: application
-version: 0.1.0
+version: 0.1.1
 home: https://kubezero.com
 icon: https://cdn.zero-downtime.net/assets/kubezero/logo-small-64.png
 keywords:
@@ -17,11 +17,11 @@ dependencies:
     version: 0.2.1
     repository: https://cdn.zero-downtime.net/charts/
   - name: redis
-    version: 20.0.3
+    version: 20.11.5
     repository: https://charts.bitnami.com/bitnami
     condition: redis.enabled
   - name: redis-cluster
-    version: 11.0.2
+    version: 11.5.0
     repository: https://charts.bitnami.com/bitnami
     condition: redis-cluster.enabled

@@ -2,7 +2,7 @@ apiVersion: v2
 name: kubezero-graph
 description: KubeZero GraphQL and GraphDB
 type: application
-version: 0.1.0
+version: 0.1.1
 home: https://kubezero.com
 icon: https://cdn.zero-downtime.net/assets/kubezero/logo-small-64.png
 keywords:
@@ -16,7 +16,7 @@ dependencies:
     version: 0.2.1
     repository: https://cdn.zero-downtime.net/charts/
   - name: neo4j
-    version: 5.26.0
+    version: 2025.3.0
     repository: https://helm.neo4j.com/neo4j
     condition: neo4j.enabled

@@ -1,6 +1,6 @@
 # kubezero-graph

-![Version: 0.1.0](https://img.shields.io/badge/Version-0.1.0-informational?style=flat-square) ![Type: application](https://img.shields.io/badge/Type-application-informational?style=flat-square)
+![Version: 0.1.1](https://img.shields.io/badge/Version-0.1.1-informational?style=flat-square) ![Type: application](https://img.shields.io/badge/Type-application-informational?style=flat-square)

 KubeZero GraphQL and GraphDB
@@ -18,8 +18,8 @@ Kubernetes: `>= 1.29.0-0`

 | Repository | Name | Version |
 |------------|------|---------|
-| https://cdn.zero-downtime.net/charts/ | kubezero-lib | >= 0.2.1 |
-| https://helm.neo4j.com/neo4j | neo4j | 5.26.0 |
+| https://cdn.zero-downtime.net/charts/ | kubezero-lib | 0.2.1 |
+| https://helm.neo4j.com/neo4j | neo4j | 2025.3.0 |

 ## Values
@@ -28,6 +28,8 @@ Kubernetes: `>= 1.29.0-0`
 | neo4j.disableLookups | bool | `true` | |
 | neo4j.enabled | bool | `false` | |
 | neo4j.neo4j.name | string | `"test-db"` | |
+| neo4j.neo4j.password | string | `"secret"` | |
+| neo4j.neo4j.passwordFromSecret | string | `"neo4j-admin"` | |
 | neo4j.serviceMonitor.enabled | bool | `false` | |
 | neo4j.services.neo4j.enabled | bool | `false` | |
 | neo4j.volumes.data.mode | string | `"defaultStorageClass"` | |

@@ -2,7 +2,7 @@ apiVersion: v2
 name: kubezero-mq
 description: KubeZero umbrella chart for MQ systems like NATS, RabbitMQ
 type: application
-version: 0.3.10
+version: 0.3.11
 home: https://kubezero.com
 icon: https://cdn.zero-downtime.net/assets/kubezero/logo-small-64.png
 keywords:
@@ -17,11 +17,11 @@ dependencies:
     version: 0.2.1
     repository: https://cdn.zero-downtime.net/charts/
   - name: nats
-    version: 1.2.2
+    version: 1.3.3
     repository: https://nats-io.github.io/k8s/helm/charts/
     condition: nats.enabled
   - name: rabbitmq
-    version: 14.6.6
+    version: 14.7.0
     repository: https://charts.bitnami.com/bitnami
     condition: rabbitmq.enabled
 kubeVersion: ">= 1.26.0"

@@ -1,6 +1,6 @@
 # kubezero-mq

-![Version: 0.3.10](https://img.shields.io/badge/Version-0.3.10-informational?style=flat-square) ![Type: application](https://img.shields.io/badge/Type-application-informational?style=flat-square)
+![Version: 0.3.11](https://img.shields.io/badge/Version-0.3.11-informational?style=flat-square) ![Type: application](https://img.shields.io/badge/Type-application-informational?style=flat-square)

 KubeZero umbrella chart for MQ systems like NATS, RabbitMQ
@@ -18,9 +18,9 @@ Kubernetes: `>= 1.26.0`

 | Repository | Name | Version |
 |------------|------|---------|
-| https://cdn.zero-downtime.net/charts/ | kubezero-lib | >= 0.1.6 |
-| https://charts.bitnami.com/bitnami | rabbitmq | 14.6.6 |
-| https://nats-io.github.io/k8s/helm/charts/ | nats | 1.2.2 |
+| https://cdn.zero-downtime.net/charts/ | kubezero-lib | 0.2.1 |
+| https://charts.bitnami.com/bitnami | rabbitmq | 14.7.0 |
+| https://nats-io.github.io/k8s/helm/charts/ | nats | 1.3.3 |

 ## Values
@@ -34,13 +34,6 @@ Kubernetes: `>= 1.26.0`
 | nats.natsBox.enabled | bool | `false` | |
 | nats.promExporter.enabled | bool | `false` | |
 | nats.promExporter.podMonitor.enabled | bool | `false` | |
-| rabbitmq-cluster-operator.clusterOperator.metrics.enabled | bool | `false` | |
-| rabbitmq-cluster-operator.clusterOperator.metrics.serviceMonitor.enabled | bool | `true` | |
-| rabbitmq-cluster-operator.enabled | bool | `false` | |
-| rabbitmq-cluster-operator.msgTopologyOperator.metrics.enabled | bool | `false` | |
-| rabbitmq-cluster-operator.msgTopologyOperator.metrics.serviceMonitor.enabled | bool | `true` | |
-| rabbitmq-cluster-operator.rabbitmqImage.tag | string | `"3.11.4-debian-11-r0"` | |
-| rabbitmq-cluster-operator.useCertManager | bool | `true` | |
 | rabbitmq.auth.existingErlangSecret | string | `"rabbitmq"` | |
 | rabbitmq.auth.existingPasswordSecret | string | `"rabbitmq"` | |
 | rabbitmq.auth.tls.enabled | bool | `false` | |

@@ -1,4 +1,4 @@
-{{- if .Values.nats.exporter.serviceMonitor.enabled }}
+{{- if .Values.nats.promExporter.podMonitor.enabled }}
 apiVersion: v1
 kind: ConfigMap
 metadata:

@@ -6,6 +6,12 @@ nats:
   jetstream:
     enabled: true

+  podTemplate:
+    topologySpreadConstraints:
+      kubernetes.io/hostname:
+        maxSkew: 1
+        whenUnsatisfiable: DoNotSchedule
+
   natsBox:
     enabled: false

@@ -24,7 +24,7 @@ dependencies:
     condition: lvm-localpv.enabled
     repository: https://openebs.github.io/lvm-localpv
   - name: aws-ebs-csi-driver
-    version: 2.41.0
+    version: 2.42.0
     condition: aws-ebs-csi-driver.enabled
     repository: https://kubernetes-sigs.github.io/aws-ebs-csi-driver
   - name: aws-efs-csi-driver

@@ -21,4 +21,8 @@
 .idea/
 *.tmproj
 .vscode/
+Chart.lock
+README.md.gotmpl
+dashboards.yaml
+jsonnet
+update.sh

@@ -35,11 +35,10 @@ Kubernetes: `>= 1.31.0-0`
 | addons.targetRevision | string | `"0.8.13"` | |
 | argo.argo-cd.enabled | bool | `false` | |
 | argo.argo-cd.istio.enabled | bool | `false` | |
-| argo.argocd-apps.enabled | bool | `false` | |
 | argo.argocd-image-updater.enabled | bool | `false` | |
 | argo.enabled | bool | `false` | |
 | argo.namespace | string | `"argocd"` | |
-| argo.targetRevision | string | `"0.2.9"` | |
+| argo.targetRevision | string | `"0.3.1"` | |
 | cert-manager.enabled | bool | `false` | |
 | cert-manager.namespace | string | `"cert-manager"` | |
 | cert-manager.targetRevision | string | `"0.9.12"` | |

@@ -1,41 +0,0 @@
-kind: ApplicationSet
-metadata:
-  name: kubezero
-  namespace: argocd
-  labels:
-    {{- include "kubezero-lib.labels" . | nindent 4 }}
-spec:
-  generators:
-    - git:
-        repoURL: {{ .Values.kubezero.applicationSet.repoURL }}
-        revision: {{ .Values.kubezero.applicationSet.revision }}
-        files:
-          {{- toYaml .Values.kubezero.applicationSet.files | nindent 6 }}
-  template:
-    metadata:
-      name: kubezero
-    spec:
-      project: kubezero
-      source:
-        repoURL: https://cdn.zero-downtime.net/charts
-        chart: kubezero
-        targetRevision: '{{ "{{" }} kubezero.version {{ "}}" }}'
-        helm:
-          parameters:
-            # We use this to detect if we are called from ArgoCD
-            - name: argocdAppName
-              value: $ARGOCD_APP_NAME
-            # This breaks the recursion, otherwise we install another kubezero project and app
-            # To be removed once we applicationSet is working and AppProject is moved back to ArgoCD chart
-            - name: installKubeZero
-              value: "false"
-          valueFiles:
-            - '{{ "{{" }} kubezero.valuesPath {{ "}}" }}/kubezero.yaml'
-            - '{{ "{{" }} kubezero.valuesPath {{ "}}" }}/values.yaml'
-      destination:
-        server: https://kubernetes.default.svc
-        namespace: argocd
-      syncPolicy:
-        automated:
-          prune: true

@@ -0,0 +1,6 @@
+# ensure we have a basic kubezero secret for cluster bootstrap and defaults
+kubectl get secret kubezero-secrets -n kubezero && rc=$? || rc=$?
+
+if [ $rc != 0 ]; then
+  kubectl create secret generic kubezero-secrets -n kubezero
+fi

@@ -1,7 +0,0 @@
-#!/bin/bash
-
-ns=$(kubectl get ns -l argocd.argoproj.io/instance | grep -v NAME | awk '{print $1}')
-
-for n in $ns; do
-  kubectl label --overwrite namespace $n 'argocd.argoproj.io/instance-'
-done

@@ -1,25 +0,0 @@
-#!/usr/bin/env bash
-
-# Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
-# or more contributor license agreements. Licensed under the Elastic License;
-# you may not use this file except in compliance with the Elastic License.
-
-# Script to migrate an existing ECK 1.2.1 installation to Helm.
-
-set -euo pipefail
-
-RELEASE_NAMESPACE=${RELEASE_NAMESPACE:-"elastic-system"}
-
-echo "Uninstalling ECK"
-kubectl delete -n "${RELEASE_NAMESPACE}" \
-  serviceaccount/elastic-operator \
-  secret/elastic-webhook-server-cert \
-  clusterrole.rbac.authorization.k8s.io/elastic-operator \
-  clusterrole.rbac.authorization.k8s.io/elastic-operator-view \
-  clusterrole.rbac.authorization.k8s.io/elastic-operator-edit \
-  clusterrolebinding.rbac.authorization.k8s.io/elastic-operator \
-  rolebinding.rbac.authorization.k8s.io/elastic-operator \
-  service/elastic-webhook-server \
-  statefulset.apps/elastic-operator \
-  validatingwebhookconfiguration.admissionregistration.k8s.io/elastic-webhook.k8s.elastic.co

View File

@@ -25,7 +25,8 @@ spec:
      repoURL: {{ default "https://cdn.zero-downtime.net/charts" (index .Values $name "repository") }}
      targetRevision: {{ default "HEAD" ( index .Values $name "targetRevision" ) | quote }}
      helm:
-        skipTests: true
+        # add with 1.32
+        #skipTests: true
        valuesObject:
          {{- include (print $name "-values") $ | nindent 8 }}
@@ -41,6 +42,9 @@ spec:
      - ServerSideApply=true
      - CreateNamespace=true
      - ApplyOutOfSyncOnly=true
+  info:
+    - name: "Source:"
+      value: "https://git.zero-downtime.net/ZeroDownTime/KubeZero/src/branch/release/v1.31/charts/kubezero-{{ $name }}"

{{- include (print $name "-argo") $ }}
{{- end }}

View File

@@ -0,0 +1,30 @@
{{- define "aws-iam-env" -}}
- name: AWS_ROLE_ARN
  value: "arn:aws:iam::{{ $.Values.global.aws.accountId }}:role/{{ $.Values.global.aws.region }}.{{ $.Values.global.clusterName }}.{{ .roleName }}"
- name: AWS_WEB_IDENTITY_TOKEN_FILE
  value: "/var/run/secrets/sts.amazonaws.com/serviceaccount/token"
- name: AWS_STS_REGIONAL_ENDPOINTS
  value: "regional"
- name: METADATA_TRIES
  value: "0"
- name: AWS_REGION
  value: {{ $.Values.global.aws.region }}
{{- end }}

{{- define "aws-iam-volumes" -}}
- name: aws-token
  projected:
    sources:
      - serviceAccountToken:
          path: token
          expirationSeconds: 86400
          audience: "sts.amazonaws.com"
{{- end }}

{{- define "aws-iam-volumemounts" -}}
- name: aws-token
  mountPath: "/var/run/secrets/sts.amazonaws.com/serviceaccount/"
  readOnly: true
{{- end }}
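For context, these named templates are consumed with `include` plus `nindent`, passing the IAM role name via a merged dict, as the hunks below show. A minimal usage sketch (the `myRole` role name is a placeholder, not part of the repo):

```yaml
env:
  {{- include "aws-iam-env" (merge (dict "roleName" "myRole") .) | nindent 2 }}
extraVolumes:
  {{- include "aws-iam-volumes" . | nindent 2 }}
extraVolumeMounts:
  {{- include "aws-iam-volumemounts" . | nindent 2 }}
```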

View File

@@ -1,6 +1,6 @@
{{- define "addons-values" }}
clusterBackup:
-  enabled: {{ ternary "true" "false" (or (hasKey .Values.global.aws "region") .Values.addons.clusterBackup.enabled) }}
+  enabled: {{ ternary "true" "false" (or (eq .Values.global.platform "aws") .Values.addons.clusterBackup.enabled) }}
  {{- with omit .Values.addons.clusterBackup "enabled" }}
  {{- toYaml . | nindent 2 }}
@@ -14,7 +14,7 @@ clusterBackup:
{{- end }}
forseti:
-  enabled: {{ ternary "true" "false" (or (hasKey .Values.global.aws "region") .Values.addons.forseti.enabled) }}
+  enabled: {{ ternary "true" "false" (or (eq .Values.global.platform "aws") .Values.addons.forseti.enabled) }}
  {{- with omit .Values.addons.forseti "enabled" }}
  {{- toYaml . | nindent 2 }}
@@ -28,7 +28,7 @@ forseti:
{{- end }}
external-dns:
-  enabled: {{ ternary "true" "false" (or (hasKey .Values.global.aws "region") (index .Values "addons" "external-dns" "enabled")) }}
+  enabled: {{ ternary "true" "false" (or (eq .Values.global.platform "aws") (index .Values "addons" "external-dns" "enabled")) }}
  {{- with omit (index .Values "addons" "external-dns") "enabled" }}
  {{- toYaml . | nindent 2 }}
@@ -42,32 +42,15 @@ external-dns:
    - "--aws-zone-type=public"
    - "--aws-zones-cache-duration=1h"
  env:
-    - name: AWS_REGION
-      value: {{ .Values.global.aws.region }}
-    - name: AWS_ROLE_ARN
-      value: "arn:aws:iam::{{ .Values.global.aws.accountId }}:role/{{ .Values.global.aws.region }}.{{ .Values.global.clusterName }}.externalDNS"
-    - name: AWS_WEB_IDENTITY_TOKEN_FILE
-      value: "/var/run/secrets/sts.amazonaws.com/serviceaccount/token"
-    - name: AWS_STS_REGIONAL_ENDPOINTS
-      value: "regional"
-    - name: METADATA_TRIES
-      value: "0"
+    {{- include "aws-iam-env" (merge (dict "roleName" "externalDNS") .) | nindent 4 }}
  extraVolumes:
-    - name: aws-token
-      projected:
-        sources:
-          - serviceAccountToken:
-              path: token
-              expirationSeconds: 86400
-              audience: "sts.amazonaws.com"
+    {{- include "aws-iam-volumes" . | nindent 4 }}
  extraVolumeMounts:
-    - name: aws-token
-      mountPath: "/var/run/secrets/sts.amazonaws.com/serviceaccount/"
-      readOnly: true
+    {{- include "aws-iam-volumemounts" . | nindent 4 }}
{{- end }}
cluster-autoscaler:
-  enabled: {{ ternary "true" "false" (or (hasKey .Values.global.aws "region") (index .Values "addons" "cluster-autoscaler" "enabled")) }}
+  enabled: {{ ternary "true" "false" (or (eq .Values.global.platform "aws") (index .Values "addons" "cluster-autoscaler" "enabled")) }}
  autoDiscovery:
    clusterName: {{ .Values.global.clusterName }}
@@ -98,17 +81,9 @@ cluster-autoscaler:
    AWS_WEB_IDENTITY_TOKEN_FILE: "/var/run/secrets/sts.amazonaws.com/serviceaccount/token"
    AWS_STS_REGIONAL_ENDPOINTS: "regional"
  extraVolumes:
-    - name: aws-token
-      projected:
-        sources:
-          - serviceAccountToken:
-              path: token
-              expirationSeconds: 86400
-              audience: "sts.amazonaws.com"
+    {{- include "aws-iam-volumes" . | nindent 4 }}
  extraVolumeMounts:
-    - name: aws-token
-      mountPath: "/var/run/secrets/sts.amazonaws.com/serviceaccount/"
-      readOnly: true
+    {{- include "aws-iam-volumemounts" . | nindent 4 }}
{{- end }}

{{- with .Values.addons.fuseDevicePlugin }}
@@ -155,14 +130,7 @@ aws-node-termination-handler:
  queueURL: "https://sqs.{{ .Values.global.aws.region }}.amazonaws.com/{{ .Values.global.aws.accountId }}/{{ .Values.global.clusterName }}_Nth"
  managedTag: "zdt:kubezero:nth:{{ .Values.global.clusterName }}"
  extraEnv:
-    - name: AWS_ROLE_ARN
-      value: "arn:aws:iam::{{ .Values.global.aws.accountId }}:role/{{ .Values.global.aws.region }}.{{ .Values.global.clusterName }}.awsNth"
-    - name: AWS_WEB_IDENTITY_TOKEN_FILE
-      value: "/var/run/secrets/sts.amazonaws.com/serviceaccount/token"
-    - name: AWS_STS_REGIONAL_ENDPOINTS
-      value: "regional"
-    - name: METADATA_TRIES
-      value: "0"
+    {{- include "aws-iam-env" (merge (dict "roleName" "awsNth") .) | nindent 4 }}

aws-eks-asg-rolling-update-handler:
  enabled: {{ default "true" (index .Values "addons" "aws-eks-asg-rolling-update-handler" "enabled") }}
@@ -172,10 +140,9 @@ aws-eks-asg-rolling-update-handler:
  {{- end }}
  environmentVars:
+    {{- include "aws-iam-env" (merge (dict "roleName" "awsRuh") .) | nindent 4 }}
    - name: CLUSTER_NAME
      value: {{ .Values.global.clusterName }}
-    - name: AWS_REGION
-      value: {{ .Values.global.aws.region }}
    - name: EXECUTION_INTERVAL
      value: "60"
    - name: METRICS
@@ -184,12 +151,6 @@ aws-eks-asg-rolling-update-handler:
      value: "true"
    - name: SLOW_MODE
      value: "true"
-    - name: AWS_ROLE_ARN
-      value: "arn:aws:iam::{{ .Values.global.aws.accountId }}:role/{{ .Values.global.aws.region }}.{{ .Values.global.clusterName }}.awsRuh"
-    - name: AWS_WEB_IDENTITY_TOKEN_FILE
-      value: "/var/run/secrets/sts.amazonaws.com/serviceaccount/token"
-    - name: AWS_STS_REGIONAL_ENDPOINTS
-      value: "regional"

{{- with (index .Values "addons" "neuron-helm-chart") }}
neuron-helm-chart:

View File

@@ -23,11 +23,51 @@ argo-cd:
  metrics:
    enabled: {{ .Values.metrics.enabled }}
  repoServer:
-    metrics:
-      enabled: {{ .Values.metrics.enabled }}
    {{- with index .Values "argo" "argo-cd" "repoServer" }}
    {{- toYaml . | nindent 4 }}
    {{- end }}
+    metrics:
+      enabled: {{ .Values.metrics.enabled }}
+    volumes:
+      - name: cmp-tmp
+        emptyDir: {}
+    {{- if eq .Values.global.platform "aws" }}
+      {{- include "aws-iam-volumes" . | nindent 6 }}
+    env:
+      {{- include "aws-iam-env" (merge (dict "roleName" "argocd-repo-server") .) | nindent 6 }}
+    volumeMounts:
+      {{- include "aws-iam-volumemounts" . | nindent 6 }}
+    extraContainers:
+      - name: cmp-kubezero-git-sync
+        image: '{{ "{{" }} default .Values.global.image.repository .Values.repoServer.image.repository {{ "}}" }}:{{ "{{" }} default (include "argo-cd.defaultTag" .) .Values.repoServer.image.tag {{ "}}" }}'
+        imagePullPolicy: '{{ "{{" }} default .Values.global.image.imagePullPolicy .Values.repoServer.image.imagePullPolicy {{ "}}" }}'
+        command: ["/var/run/argocd/argocd-cmp-server"]
+        env:
+          {{- include "aws-iam-env" (merge (dict "roleName" "argocd-repo-server") .) | nindent 10 }}
+        volumeMounts:
+          - mountPath: /var/run/argocd
+            name: var-files
+          - mountPath: /home/argocd/cmp-server/plugins
+            name: plugins
+          - mountPath: /tmp
+            name: cmp-tmp
+          {{- include "aws-iam-volumemounts" . | nindent 10 }}
+        securityContext:
+          runAsNonRoot: true
+          readOnlyRootFilesystem: true
+          runAsUser: 999
+          allowPrivilegeEscalation: false
+          seccompProfile:
+            type: RuntimeDefault
+          capabilities:
+            drop:
+              - ALL
+    {{- end }}
  server:
    metrics:
      enabled: {{ .Values.metrics.enabled }}
@@ -51,30 +91,13 @@ argocd-image-updater:
  {{- toYaml . | nindent 2 }}
  {{- end }}
-  {{- if .Values.global.aws }}
+  {{- if eq .Values.global.platform "aws" }}
  extraEnv:
-    - name: AWS_ROLE_ARN
-      value: "arn:aws:iam::{{ .Values.global.aws.accountId }}:role/{{ .Values.global.aws.region }}.{{ .Values.global.clusterName }}.argocd-image-updater"
-    - name: AWS_WEB_IDENTITY_TOKEN_FILE
-      value: "/var/run/secrets/sts.amazonaws.com/serviceaccount/token"
-    - name: AWS_STS_REGIONAL_ENDPOINTS
-      value: "regional"
-    - name: METADATA_TRIES
-      value: "0"
-    - name: AWS_REGION
-      value: {{ .Values.global.aws.region }}
+    {{- include "aws-iam-env" (merge (dict "roleName" "argocd-image-updater") .) | nindent 4 }}
  volumes:
-    - name: aws-token
-      projected:
-        sources:
-          - serviceAccountToken:
-              path: token
-              expirationSeconds: 86400
-              audience: "sts.amazonaws.com"
+    {{- include "aws-iam-volumes" . | nindent 4 }}
  volumeMounts:
-    - name: aws-token
-      mountPath: "/var/run/secrets/sts.amazonaws.com/serviceaccount/"
-      readOnly: true
+    {{- include "aws-iam-volumemounts" . | nindent 4 }}
  {{- end }}

  metrics:

View File

@@ -1,6 +1,6 @@
{{- define "_kube-prometheus-stack" }}
-{{- if .global.aws.region }}
+{{- if eq .global.platform "aws" }}
alertmanager:
  alertmanagerSpec:
    podMetadata:

View File

@@ -6,7 +6,9 @@ global:
  highAvailable: false

-  aws: {}
+  aws:
+    accountId: "123456789012"
+    region: the-moon
  gcp: {}

addons:
@@ -115,7 +117,7 @@ logging:
argo:
  enabled: false
  namespace: argocd
-  targetRevision: 0.3.0
+  targetRevision: 0.3.2
  argo-cd:
    enabled: false
    istio:

docs/hooks.md Normal file
View File

@@ -0,0 +1,11 @@
# KubeZero Helm hooks

## Abstract
Scripts within the `hooks.d` folder of each chart are executed at the respective times when the charts are applied via libhelm.

*These hooks do NOT work via ArgoCD*

## Flow
- hooks are executed as part of libhelm tasks like `apply`
- hooks run with the current kubectl context
- hooks are executed from the root working directory; e.g. to set a helm value, a script can edit the `./values.yaml` file
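A minimal hook sketch, assuming the `yq` CLI (mikefarah v4) is available in the environment; the `metrics.enabled` key is purely illustrative, not a KubeZero default:

```sh
#!/bin/bash
# hypothetical hooks.d/pre-install.sh
# Runs from the root working directory with the current kubectl context,
# so it can patch ./values.yaml before libhelm applies the chart.
set -e

# enable metrics during bootstrap (illustrative key)
yq -i '.metrics.enabled = true' ./values.yaml
```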

View File

@@ -3,6 +3,7 @@
## What's new - Major themes

- all KubeZero and support AMIs based on [Alpine 3.21](https://alpinelinux.org/posts/Alpine-3.21.0-released.html)
- network policies for ArgoCD
+- Nvidia worker nodes are labeled with detected GPU product code
- Prometheus upgraded to V3, reducing CPU and memory requirements, see [upstream blog](https://prometheus.io/blog/2024/11/14/prometheus-3-0/)

## Features and fixes
@@ -10,10 +11,10 @@
## Version upgrades
- cilium 1.16.6
-- istio 1.24.2
+- istio 1.24.3
-- ArgoCD 2.14.3 [custom ZDT image](https://git.zero-downtime.net/ZeroDownTime/zdt-argocd)
+- ArgoCD 2.14.5 [custom ZDT image](https://git.zero-downtime.net/ZeroDownTime/zdt-argocd)
- Prometheus 3.1.0 / Grafana 11.5.1
-- Nvidia container toolkit 1.17, drivers 565.57.01, Cuda 12.7
+- Nvidia container toolkit 1.17.4, drivers 570.86.15, Cuda 12.8

## Resources
- [Kubernetes v1.31 upstream release blog](https://kubernetes.io/blog/2024/08/13/kubernetes-v1-31-release/)