Compare commits


1 commit

94 changed files with 2761 additions and 625 deletions


@@ -14,12 +14,12 @@ KubeZero is a Kubernetes distribution providing an integrated container platform
 # Architecture
-![aws_architecture](docs/images/aws_architecture.png)
+![aws_architecture](docs/aws_architecture.png)

 # Version / Support Matrix
 KubeZero releases track the same *minor* version of Kubernetes.
-Any 1.30.X-Y release of Kubezero supports any Kubernetes cluster 1.30.X.
+Any 1.26.X-Y release of Kubezero supports any Kubernetes cluster 1.26.X.

 KubeZero is distributed as a collection of versioned Helm charts, allowing custom upgrade schedules and module versions as needed.

@@ -28,15 +28,15 @@ KubeZero is distributed as a collection of versioned Helm charts, allowing custo
 gantt
     title KubeZero Support Timeline
     dateFormat YYYY-MM-DD
+    section 1.27
+    beta      :127b, 2023-09-01, 2023-09-30
+    release   :after 127b, 2024-04-30
     section 1.28
     beta      :128b, 2024-03-01, 2024-04-30
     release   :after 128b, 2024-08-31
     section 1.29
-    beta      :129b, 2024-07-01, 2024-07-31
+    beta      :129b, 2024-07-01, 2024-08-30
     release   :after 129b, 2024-11-30
-    section 1.30
-    beta      :130b, 2024-09-01, 2024-10-31
-    release   :after 130b, 2025-02-28
 ```

 [Upstream release policy](https://kubernetes.io/releases/)

@@ -44,7 +44,7 @@ gantt
 # Components
 ## OS
-- all compute nodes are running on Alpine V3.20
+- all compute nodes are running on Alpine V3.19
 - 1 or 2 GB encrypted root file system
 - no external dependencies at boot time, apart from container registries
 - minimal attack surface
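For readers cross-checking chart availability while reviewing this diff, a minimal sketch (not part of the change itself; it assumes the umbrella chart is published as `kubezero` in the chart repository referenced by the Chart.yaml files below):

```
# Add the KubeZero chart repository and list released chart versions,
# to match a chart release against your cluster's Kubernetes minor version
helm repo add kubezero https://cdn.zero-downtime.net/charts/
helm repo update
helm search repo kubezero --versions
```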


@@ -19,22 +19,6 @@ SCRIPT_DIR=$( cd -- "$( dirname -- "${BASH_SOURCE[0]}" )" &> /dev/null && pwd )
 . "$SCRIPT_DIR"/libhelm.sh
 CHARTS="$(dirname $SCRIPT_DIR)/charts"

-# Guess platform from current context
-_auth_cmd=$(kubectl config view | yq .users[0].user.exec.command)
-if [ "$_auth_cmd" == "gke-gcloud-auth-plugin" ]; then
-  PLATFORM=gke
-elif [ "$_auth_cmd" == "aws-iam-authenticator" ]; then
-  PLATFORM=aws
-else
-  PLATFORM=nocloud
-fi
-
-parse_version() {
-  echo $([[ $1 =~ ^v[0-9]+\.[0-9]+\.[0-9]+ ]] && echo "${BASH_REMATCH[0]//v/}")
-}
-
-KUBE_VERSION=$(parse_version $KUBE_VERSION)

 ### Various hooks for modules
 ################

@@ -87,7 +71,7 @@ if [ ${ARTIFACTS[0]} == "all" ]; then
 fi

 # Delete in reverse order, continue even if errors
-if [ "$ACTION" == "delete" ]; then
+if [ $ACTION == "delete" ]; then
   set +e
   for (( idx=${#ARTIFACTS[@]}-1 ; idx>=0 ; idx-- )) ; do
    _helm delete ${ARTIFACTS[idx]} || true
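As an aside on the `parse_version` helper above, a hedged sketch of its behavior (same regex as in the diff, hypothetical inputs): it strips the leading `v` and anything after the semver triple.

```
parse_version() {
  echo $([[ $1 =~ ^v[0-9]+\.[0-9]+\.[0-9]+ ]] && echo "${BASH_REMATCH[0]//v/}")
}

parse_version v1.30.4-alpha1   # prints 1.30.4
parse_version v1.29.7          # prints 1.29.7
```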


@@ -66,7 +66,6 @@ render_kubeadm() {
 parse_kubezero() {
   export CLUSTERNAME=$(yq eval '.global.clusterName // .clusterName' ${HOSTFS}/etc/kubernetes/kubeadm-values.yaml)
-  export PLATFORM=$(yq eval '.global.platform // "nocloud"' ${HOSTFS}/etc/kubernetes/kubeadm-values.yaml)
   export HIGHAVAILABLE=$(yq eval '.global.highAvailable // .highAvailable // "false"' ${HOSTFS}/etc/kubernetes/kubeadm-values.yaml)
   export ETCD_NODENAME=$(yq eval '.etcd.nodeName' ${HOSTFS}/etc/kubernetes/kubeadm-values.yaml)
   export NODENAME=$(yq eval '.nodeName' ${HOSTFS}/etc/kubernetes/kubeadm-values.yaml)

@@ -149,8 +148,10 @@ kubeadm_upgrade() {
   post_kubeadm

-  # install re-certed kubectl config for root
-  cp ${HOSTFS}/etc/kubernetes/super-admin.conf ${HOSTFS}/root/.kube/config
+  # If we have a re-cert kubectl config install for root
+  if [ -f ${HOSTFS}/etc/kubernetes/super-admin.conf ]; then
+    cp ${HOSTFS}/etc/kubernetes/super-admin.conf ${HOSTFS}/root/.kube/config
+  fi

   # post upgrade hook
   [ -f /var/lib/kubezero/post-upgrade.sh ] && . /var/lib/kubezero/post-upgrade.sh

@@ -182,10 +183,6 @@ control_plane_node() {
   # restore latest backup
   retry 10 60 30 restic restore latest --no-lock -t / # --tag $KUBE_VERSION_MINOR

-  # get timestamp from latest snap for debug / message
-  # we need a way to surface this info to eg. Slack
-  #snapTime="$(restic snapshots latest --json | jq -r '.[].time')"
-
   # Make last etcd snapshot available
   cp ${WORKDIR}/etcd_snapshot ${HOSTFS}/etc/kubernetes

@@ -263,12 +260,7 @@ control_plane_node() {
     _kubeadm init phase kubelet-start

-    # Remove conditional with 1.30
-    if [ -f ${HOSTFS}/etc/kubernetes/super-admin.conf ]; then
-      cp ${HOSTFS}/etc/kubernetes/super-admin.conf ${HOSTFS}/root/.kube/config
-    else
-      cp ${HOSTFS}/etc/kubernetes/admin.conf ${HOSTFS}/root/.kube/config
-    fi
+    cp ${HOSTFS}/etc/kubernetes/super-admin.conf ${HOSTFS}/root/.kube/config

     # Wait for api to be online
     echo "Waiting for Kubernetes API to be online ..."

@@ -314,7 +306,7 @@ control_plane_node() {
   post_kubeadm

-  echo "${CMD}ed cluster $CLUSTERNAME successfully."
+  echo "${1} cluster $CLUSTERNAME successfull."
 }

@@ -372,9 +364,7 @@ backup() {
   # pki & cluster-admin access
   cp -r ${HOSTFS}/etc/kubernetes/pki ${WORKDIR}
   cp ${HOSTFS}/etc/kubernetes/admin.conf ${WORKDIR}
-
-  # Remove conditional with 1.30
-  [ -f ${HOSTFS}/etc/kubernetes/super-admin.conf ] && cp ${HOSTFS}/etc/kubernetes/super-admin.conf ${WORKDIR}
+  cp ${HOSTFS}/etc/kubernetes/super-admin.conf ${WORKDIR}

   # Backup via restic
   restic backup ${WORKDIR} -H $CLUSTERNAME --tag $CLUSTER_VERSION
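A hedged aside on the snapshot-timestamp idea in the removed comments above: the commented-out `snapTime` line already contains a working recipe, e.g.

```
# Surface the timestamp of the latest restic snapshot (assumes the same
# restic repository/environment as the surrounding script, and jq available)
snapTime="$(restic snapshots latest --json | jq -r '.[].time')"
echo "Restoring backup taken at ${snapTime} for cluster ${CLUSTERNAME}"
```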


@@ -34,11 +34,9 @@ function argo_used() {
 # get kubezero-values from ArgoCD if available or use in-cluster CM without Argo
 function get_kubezero_values() {
-  local _namespace="kube-system"
-  [ "$PLATFORM" == "gke" ] && _namespace=kubezero
-
   argo_used && \
     { kubectl get application kubezero -n argocd -o yaml | yq .spec.source.helm.values > ${WORKDIR}/kubezero-values.yaml; } || \
-    { kubectl get configmap -n $_namespace kubezero-values -o yaml | yq '.data."values.yaml"' > ${WORKDIR}/kubezero-values.yaml ;}
+    { kubectl get configmap -n kube-system kubezero-values -o yaml | yq '.data."values.yaml"' > ${WORKDIR}/kubezero-values.yaml ;}
 }

@@ -171,14 +169,14 @@ function _helm() {
     yq eval '.spec.source.helm.values' $WORKDIR/kubezero/templates/${module}.yaml > $WORKDIR/values.yaml

-    echo "using values to $action of module $module: "
-    cat $WORKDIR/values.yaml
-
     if [ $action == "crds" ]; then
       # Allow custom CRD handling
       declare -F ${module}-crds && ${module}-crds || _crds

     elif [ $action == "apply" ]; then
+      echo "using values to $action of module $module: "
+      cat $WORKDIR/values.yaml
+
       # namespace must exist prior to apply
       create_ns $namespace
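For context, a minimal sketch (hypothetical file and chart names) of the yq pattern `_helm` uses above to peel the embedded Helm values out of a rendered Argo CD Application template:

```
# Extract the inline Helm values from an Application manifest, then reuse them
yq eval '.spec.source.helm.values' kubezero/templates/metrics.yaml > values.yaml
helm template my-module/ -f values.yaml
```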


@@ -21,9 +21,6 @@ argo_used && disable_argo

 control_plane_upgrade kubeadm_upgrade

-echo "Control plane upgraded, <Return> to continue"
-read -r
-
 #echo "Adjust kubezero values as needed:"
 # shellcheck disable=SC2015
 #argo_used && kubectl edit app kubezero -n argocd || kubectl edit cm kubezero-values -n kube-system

@@ -41,9 +38,6 @@ echo "Applying remaining KubeZero modules..."

 control_plane_upgrade "apply_cert-manager, apply_istio, apply_istio-ingress, apply_istio-private-ingress, apply_logging, apply_metrics, apply_telemetry, apply_argo"

-# Final step is to commit the new argocd kubezero app
-kubectl get app kubezero -n argocd -o yaml | yq 'del(.status) | del(.metadata) | del(.operation) | .metadata.name="kubezero" | .metadata.namespace="argocd"' | yq 'sort_keys(..) | .spec.source.helm.values |= (from_yaml | to_yaml)' > $ARGO_APP
-
 # Trigger backup of upgraded cluster state
 kubectl create job --from=cronjob/kubezero-backup kubezero-backup-$KUBE_VERSION -n kube-system
 while true; do

@@ -51,6 +45,9 @@ while true; do
   sleep 1
 done

+# Final step is to commit the new argocd kubezero app
+kubectl get app kubezero -n argocd -o yaml | yq 'del(.status) | del(.metadata) | del(.operation) | .metadata.name="kubezero" | .metadata.namespace="argocd"' | yq 'sort_keys(..) | .spec.source.helm.values |= (from_yaml | to_yaml)' > $ARGO_APP
+
 echo "Please commit $ARGO_APP as the updated kubezero/application.yaml for your cluster."
 echo "Then head over to ArgoCD for this cluster and sync all KubeZero modules to apply remaining upgrades."


@@ -1,6 +1,6 @@
 # kubezero-addons
-![Version: 0.8.8](https://img.shields.io/badge/Version-0.8.8-informational?style=flat-square) ![Type: application](https://img.shields.io/badge/Type-application-informational?style=flat-square) ![AppVersion: v1.29](https://img.shields.io/badge/AppVersion-v1.29-informational?style=flat-square)
+![Version: 0.8.8](https://img.shields.io/badge/Version-0.8.8-informational?style=flat-square) ![Type: application](https://img.shields.io/badge/Type-application-informational?style=flat-square) ![AppVersion: v1.28](https://img.shields.io/badge/AppVersion-v1.28-informational?style=flat-square)

 KubeZero umbrella chart for various optional cluster addons

@@ -63,7 +63,7 @@ Device plugin for [AWS Neuron](https://aws.amazon.com/machine-learning/neuron/)
 | aws-eks-asg-rolling-update-handler.environmentVars[8].name | string | `"AWS_STS_REGIONAL_ENDPOINTS"` | |
 | aws-eks-asg-rolling-update-handler.environmentVars[8].value | string | `"regional"` | |
 | aws-eks-asg-rolling-update-handler.image.repository | string | `"twinproduction/aws-eks-asg-rolling-update-handler"` | |
-| aws-eks-asg-rolling-update-handler.image.tag | string | `"v1.8.4"` | |
+| aws-eks-asg-rolling-update-handler.image.tag | string | `"v1.8.3"` | |
 | aws-eks-asg-rolling-update-handler.nodeSelector."node-role.kubernetes.io/control-plane" | string | `""` | |
 | aws-eks-asg-rolling-update-handler.resources.limits.memory | string | `"128Mi"` | |
 | aws-eks-asg-rolling-update-handler.resources.requests.cpu | string | `"10m"` | |


@@ -33,4 +33,4 @@ dependencies:
     version: 0.11.0
     repository: https://argoproj.github.io/argo-helm
     condition: argocd-image-updater.enabled
-kubeVersion: ">= 1.26.0-0"
+kubeVersion: ">= 1.26.0"


@@ -14,7 +14,7 @@ KubeZero Argo - Events, Workflow, CD
 ## Requirements

-Kubernetes: `>= 1.26.0-0`
+Kubernetes: `>= 1.26.0`

 | Repository | Name | Version |
 |------------|------|---------|

@@ -65,7 +65,7 @@ Kubernetes: `>= 1.26.0-0`
 | argo-cd.repoServer.initContainers[0].command[0] | string | `"/usr/local/bin/sa2kubeconfig.sh"` | |
 | argo-cd.repoServer.initContainers[0].command[1] | string | `"/home/argocd/.kube/config"` | |
 | argo-cd.repoServer.initContainers[0].image | string | `"{{ default .Values.global.image.repository .Values.repoServer.image.repository }}:{{ default (include \"argo-cd.defaultTag\" .) .Values.repoServer.image.tag }}"` | |
-| argo-cd.repoServer.initContainers[0].imagePullPolicy | string | `"{{ default .Values.global.image.imagePullPolicy .Values.repoServer.image.imagePullPolicy }}"` | |
+| argo-cd.repoServer.initContainers[0].imagePullPolicy | string | `"IfNotPresent"` | |
 | argo-cd.repoServer.initContainers[0].name | string | `"create-kubeconfig"` | |
 | argo-cd.repoServer.initContainers[0].securityContext.allowPrivilegeEscalation | bool | `false` | |
 | argo-cd.repoServer.initContainers[0].securityContext.capabilities.drop[0] | string | `"ALL"` | |


@@ -26,7 +26,7 @@ argo-events:
       versions:
         - version: 2.10.11
           natsImage: nats:2.10.11-scratch
-          metricsExporterImage: natsio/prometheus-nats-exporter:0.14.0
+          metricsExporterImage: natsio/prometheus-nats-exporter:0.15.0
           configReloaderImage: natsio/nats-server-config-reloader:0.14.1
           startCommand: /nats-server

@@ -91,7 +91,7 @@ argo-cd:
   secret:
     createSecret: false
-    # `htpasswd -nbBC 10 "" $ARGO_PWD | tr -d ':\n' | sed 's/$2y/$2a/' | base64 -w0`
+    # `htpasswd -nbBC 10 "" $ARGO_PWD | tr -d ':\n' | sed 's/$2y/$2a/'`
     # argocdServerAdminPassword: "$2a$10$ivKzaXVxMqdeDSfS3nqi1Od3iDbnL7oXrixzDfZFRHlXHnAG6LydG"
     # argocdServerAdminPassword: "ref+file://secrets.yaml#/test"
     # argocdServerAdminPasswordMtime: "2020-04-24T15:33:09BST"
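For completeness, a hedged expansion of the one-liner quoted in the comment above (the `$2y`→`$2a` rewrite makes Apache's bcrypt prefix acceptable to Argo CD; `ARGO_PWD` is a placeholder):

```
# Generate a bcrypt hash suitable for argocdServerAdminPassword;
# htpasswd ships with apache2-utils
ARGO_PWD='example-admin-password'   # hypothetical value for illustration
htpasswd -nbBC 10 "" "$ARGO_PWD" | tr -d ':\n' | sed 's/$2y/$2a/'
```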


@@ -2,7 +2,7 @@ apiVersion: v2
 name: kubezero-auth
 description: KubeZero umbrella chart for all things Authentication and Identity management
 type: application
-version: 0.5.1
+version: 0.4.6
 appVersion: 22.0.5
 home: https://kubezero.com
 icon: https://cdn.zero-downtime.net/assets/kubezero/logo-small-64.png

@@ -16,8 +16,9 @@ dependencies:
   - name: kubezero-lib
     version: ">= 0.1.6"
     repository: https://cdn.zero-downtime.net/charts/
-  - name: keycloak
+  - #! renovate: datasource=docker
+    name: keycloak
     repository: "oci://registry-1.docker.io/bitnamicharts"
-    version: 22.2.1
+    version: 18.7.1
     condition: keycloak.enabled
 kubeVersion: ">= 1.26.0"


@@ -1,6 +1,6 @@
 # kubezero-auth
-![Version: 0.5.1](https://img.shields.io/badge/Version-0.5.1-informational?style=flat-square) ![Type: application](https://img.shields.io/badge/Type-application-informational?style=flat-square) ![AppVersion: 22.0.5](https://img.shields.io/badge/AppVersion-22.0.5-informational?style=flat-square)
+![Version: 0.4.6](https://img.shields.io/badge/Version-0.4.6-informational?style=flat-square) ![Type: application](https://img.shields.io/badge/Type-application-informational?style=flat-square) ![AppVersion: 22.0.5](https://img.shields.io/badge/AppVersion-22.0.5-informational?style=flat-square)

 KubeZero umbrella chart for all things Authentication and Identity management

@@ -19,7 +19,7 @@ Kubernetes: `>= 1.26.0`
 | Repository | Name | Version |
 |------------|------|---------|
 | https://cdn.zero-downtime.net/charts/ | kubezero-lib | >= 0.1.6 |
-| oci://registry-1.docker.io/bitnamicharts | keycloak | 22.2.1 |
+| oci://registry-1.docker.io/bitnamicharts | keycloak | 18.7.1 |

 # Keycloak

@@ -41,7 +41,6 @@ https://github.com/keycloak/keycloak-benchmark/tree/main/provision/minikube/keyc
 | keycloak.auth.existingSecret | string | `"kubezero-auth"` | |
 | keycloak.auth.passwordSecretKey | string | `"admin-password"` | |
 | keycloak.enabled | bool | `false` | |
-| keycloak.hostnameStrict | bool | `false` | |
 | keycloak.istio.admin.enabled | bool | `false` | |
 | keycloak.istio.admin.gateway | string | `"istio-ingress/private-ingressgateway"` | |
 | keycloak.istio.admin.url | string | `""` | |

@@ -56,13 +55,9 @@ https://github.com/keycloak/keycloak-benchmark/tree/main/provision/minikube/keyc
 | keycloak.postgresql.auth.existingSecret | string | `"kubezero-auth"` | |
 | keycloak.postgresql.auth.username | string | `"keycloak"` | |
 | keycloak.postgresql.primary.persistence.size | string | `"1Gi"` | |
-| keycloak.postgresql.primary.resources.limits.memory | string | `"128Mi"` | |
-| keycloak.postgresql.primary.resources.requests.cpu | string | `"100m"` | |
-| keycloak.postgresql.primary.resources.requests.memory | string | `"64Mi"` | |
 | keycloak.postgresql.readReplicas.replicaCount | int | `0` | |
 | keycloak.production | bool | `true` | |
-| keycloak.proxyHeaders | string | `"xforwarded"` | |
+| keycloak.proxy | string | `"edge"` | |
 | keycloak.replicaCount | int | `1` | |
-| keycloak.resources.limits.memory | string | `"768Mi"` | |
 | keycloak.resources.requests.cpu | string | `"100m"` | |
 | keycloak.resources.requests.memory | string | `"512Mi"` | |


@@ -2,11 +2,11 @@

 ## backup

-- shell into running postgres-auth pod
+- shell into running posgres-auth pod
 ```
-export PGPASSWORD="$POSTGRES_POSTGRES_PASSWORD"
-cd /bitnami/postgresql
-pg_dumpall -U postgres > /bitnami/postgresql/backup
+export PGPASSWORD="<postgres_password from secret>"
+cd /bitnami/posgres
+pg_dumpall > backup
 ```

 - store backup off-site

@@ -29,10 +29,8 @@ kubectl cp keycloak/kubezero-auth-postgresql-0:/bitnami/postgresql/backup postgr
 kubectl cp postgres-backup keycloak/kubezero-auth-postgresql-0:/bitnami/postgresql/backup
 ```

-- shell into running postgres-auth pod
+- log into psql as admin ( shell on running pod )
 ```
-export PGPASSWORD="$POSTGRES_POSTGRES_PASSWORD"
-cd /bitnami/postgresql
 psql -U postgres
 ```
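For orientation, a hedged end-to-end sketch of the round trip described on both sides of this hunk, reusing only the pod name, paths, and credentials already shown above:

```
# 1. Dump all databases inside the pod (PGPASSWORD comes from the pod's env)
kubectl exec -n keycloak kubezero-auth-postgresql-0 -- bash -c \
  'PGPASSWORD="$POSTGRES_POSTGRES_PASSWORD" pg_dumpall -U postgres > /bitnami/postgresql/backup'
# 2. Copy the dump off the pod for off-site storage
kubectl cp keycloak/kubezero-auth-postgresql-0:/bitnami/postgresql/backup postgres-backup
# 3. Restore later: copy the dump back and replay it with psql
kubectl cp postgres-backup keycloak/kubezero-auth-postgresql-0:/bitnami/postgresql/backup
kubectl exec -n keycloak kubezero-auth-postgresql-0 -- bash -c \
  'PGPASSWORD="$POSTGRES_POSTGRES_PASSWORD" psql -U postgres -f /bitnami/postgresql/backup'
```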


@@ -1,9 +1,8 @@
 keycloak:
   enabled: false
+  proxy: edge
   production: true
-  hostnameStrict: false
-  proxyHeaders: xforwarded

   auth:
     adminUser: admin

@@ -16,18 +15,14 @@ keycloak:
     create: false
     minAvailable: 1

-  resources:
-    limits:
-      #cpu: 750m
-      memory: 768Mi
-    requests:
-      cpu: 100m
-      memory: 512Mi
-
   metrics:
     enabled: false
     serviceMonitor:
       enabled: true

+  resources:
+    requests:
+      cpu: 100m
+      memory: 512Mi
+
   postgresql:
     auth:

@@ -39,14 +34,6 @@ keycloak:
     persistence:
       size: 1Gi

-    resources:
-      limits:
-        #cpu: 750m
-        memory: 128Mi
-      requests:
-        cpu: 100m
-        memory: 64Mi
-
     readReplicas:
       replicaCount: 0


@@ -2,7 +2,7 @@ apiVersion: v2
 name: kubezero-cert-manager
 description: KubeZero Umbrella Chart for cert-manager
 type: application
-version: 0.9.9
+version: 0.9.8
 home: https://kubezero.com
 icon: https://cdn.zero-downtime.net/assets/kubezero/logo-small-64.png
 keywords:

@@ -16,6 +16,6 @@ dependencies:
     version: ">= 0.1.6"
     repository: https://cdn.zero-downtime.net/charts/
   - name: cert-manager
-    version: v1.15.2
+    version: v1.15.1
     repository: https://charts.jetstack.io
-kubeVersion: ">= 1.26.0-0"
+kubeVersion: ">= 1.26.0"


@@ -1,6 +1,6 @@
 # kubezero-cert-manager
-![Version: 0.9.9](https://img.shields.io/badge/Version-0.9.9-informational?style=flat-square) ![Type: application](https://img.shields.io/badge/Type-application-informational?style=flat-square)
+![Version: 0.9.8](https://img.shields.io/badge/Version-0.9.8-informational?style=flat-square) ![Type: application](https://img.shields.io/badge/Type-application-informational?style=flat-square)

 KubeZero Umbrella Chart for cert-manager

@@ -14,12 +14,12 @@ KubeZero Umbrella Chart for cert-manager
 ## Requirements

-Kubernetes: `>= 1.26.0-0`
+Kubernetes: `>= 1.26.0`

 | Repository | Name | Version |
 |------------|------|---------|
 | https://cdn.zero-downtime.net/charts/ | kubezero-lib | >= 0.1.6 |
-| https://charts.jetstack.io | cert-manager | v1.15.2 |
+| https://charts.jetstack.io | cert-manager | v1.15.1 |

 ## AWS - OIDC IAM roles

@@ -34,6 +34,9 @@ If your resolvers need additional sercrets like CloudFlare API tokens etc. make
 |-----|------|---------|-------------|
 | cert-manager.cainjector.extraArgs[0] | string | `"--logging-format=json"` | |
 | cert-manager.cainjector.extraArgs[1] | string | `"--leader-elect=false"` | |
+| cert-manager.cainjector.nodeSelector."node-role.kubernetes.io/control-plane" | string | `""` | |
+| cert-manager.cainjector.tolerations[0].effect | string | `"NoSchedule"` | |
+| cert-manager.cainjector.tolerations[0].key | string | `"node-role.kubernetes.io/control-plane"` | |
 | cert-manager.crds.enabled | bool | `true` | |
 | cert-manager.enableCertificateOwnerRef | bool | `true` | |
 | cert-manager.enabled | bool | `true` | |

@@ -43,9 +46,15 @@ If your resolvers need additional sercrets like CloudFlare API tokens etc. make
 | cert-manager.global.leaderElection.namespace | string | `"cert-manager"` | |
 | cert-manager.ingressShim.defaultIssuerKind | string | `"ClusterIssuer"` | |
 | cert-manager.ingressShim.defaultIssuerName | string | `"letsencrypt-dns-prod"` | |
+| cert-manager.nodeSelector."node-role.kubernetes.io/control-plane" | string | `""` | |
 | cert-manager.prometheus.servicemonitor.enabled | bool | `false` | |
 | cert-manager.startupapicheck.enabled | bool | `false` | |
+| cert-manager.tolerations[0].effect | string | `"NoSchedule"` | |
+| cert-manager.tolerations[0].key | string | `"node-role.kubernetes.io/control-plane"` | |
 | cert-manager.webhook.extraArgs[0] | string | `"--logging-format=json"` | |
+| cert-manager.webhook.nodeSelector."node-role.kubernetes.io/control-plane" | string | `""` | |
+| cert-manager.webhook.tolerations[0].effect | string | `"NoSchedule"` | |
+| cert-manager.webhook.tolerations[0].key | string | `"node-role.kubernetes.io/control-plane"` | |
 | clusterIssuer | object | `{}` | |
 | localCA.enabled | bool | `false` | |
 | localCA.selfsigning | bool | `true` | |


@@ -18,7 +18,7 @@
       "subdir": "contrib/mixin"
     }
   },
-  "version": "df4e472a2d09813560ba44b21a29c0453dbec18c",
+  "version": "010d462c0ff03a70f5c5fd32efbb76ad4c1e7c81",
   "sum": "IXI3LQIT9NmTPJAk8WLUJd5+qZfcGpeNCyWIK7oEpws="
 },
 {

@@ -58,7 +58,7 @@
       "subdir": "gen/grafonnet-latest"
     }
   },
-  "version": "733beadbc8dab55c5fe1bcdcf0d8a2d215759a55",
+  "version": "5a66b0f6a0f4f7caec754dd39a0e263b56a0f90a",
   "sum": "eyuJ0jOXeA4MrobbNgU4/v5a7ASDHslHZ0eS6hDdWoI="
 },
 {

@@ -68,7 +68,7 @@
       "subdir": "gen/grafonnet-v10.0.0"
     }
   },
-  "version": "733beadbc8dab55c5fe1bcdcf0d8a2d215759a55",
+  "version": "5a66b0f6a0f4f7caec754dd39a0e263b56a0f90a",
   "sum": "xdcrJPJlpkq4+5LpGwN4tPAuheNNLXZjE6tDcyvFjr0="
 },
 {

@@ -78,8 +78,8 @@
       "subdir": "gen/grafonnet-v11.0.0"
     }
   },
-  "version": "733beadbc8dab55c5fe1bcdcf0d8a2d215759a55",
-  "sum": "0BvzR0i4bS4hc2O3xDv6i9m52z7mPrjvqxtcPrGhynA="
+  "version": "5a66b0f6a0f4f7caec754dd39a0e263b56a0f90a",
+  "sum": "Fuo+qTZZzF+sHDBWX/8fkPsUmwW6qhH8hRVz45HznfI="
 },
 {
   "source": {

@@ -88,8 +88,8 @@
       "subdir": "grafana-builder"
     }
   },
-  "version": "d9ba581fb27aa6689e911f288d4df06948eb8aad",
-  "sum": "yxqWcq/N3E/a/XreeU6EuE6X7kYPnG0AspAQFKOjASo="
+  "version": "1d877bb0651ef92176f651d0be473c06e372a8a0",
+  "sum": "udZaafkbKYMGodLqsFhEe+Oy/St2p0edrK7hiMPEey0="
 },
 {
   "source": {

@@ -128,8 +128,8 @@
       "subdir": ""
     }
   },
-  "version": "1b71e399caee334af8ba2d15d0dd615043a652d0",
-  "sum": "qcRxavmCpuWQuwCMqYaOZ+soA8jxwWLrK7LYqohN5NA="
+  "version": "3dfa72d1d1ab31a686b1f52ec28bbf77c972bd23",
+  "sum": "7ufhpvzoDqAYLrfAsGkTAIRmu2yWQkmHukTE//jOsJU="
 },
 {
   "source": {

@@ -138,8 +138,8 @@
       "subdir": "jsonnet/kube-state-metrics"
     }
   },
-  "version": "f8aa7d9bb9d8e29876e19f4859391a54a7e61d63",
-  "sum": "lO7jUSzAIy8Yk9pOWJIWgPRhubkWzVh56W6wtYfbVH4="
+  "version": "7104d579e93d672754c018a924d6c3f7ec23874e",
+  "sum": "pvInhJNQVDOcC3NGWRMKRIP954mAvLXCQpTlafIg7fA="
 },
 {
   "source": {

@@ -148,7 +148,7 @@
       "subdir": "jsonnet/kube-state-metrics-mixin"
     }
   },
-  "version": "f8aa7d9bb9d8e29876e19f4859391a54a7e61d63",
+  "version": "7104d579e93d672754c018a924d6c3f7ec23874e",
   "sum": "qclI7LwucTjBef3PkGBkKxF0mfZPbHnn4rlNWKGtR4c="
 },
 {

@@ -158,8 +158,8 @@
       "subdir": "jsonnet/kube-prometheus"
     }
   },
-  "version": "33c43a4067a174a99529e41d537eef290a7028ea",
-  "sum": "/jU8uXWR202aR7K/3zOefhc4JBUAUkTdHvE9rhfzI/g="
+  "version": "defa2bd1e242519c62a5c2b3b786b1caa6d906d4",
+  "sum": "INKeZ+QIIPImq+TrfHT8CpYdoRzzxRk0txG07XlOo/Q="
 },
 {
   "source": {

@@ -168,7 +168,7 @@
       "subdir": "jsonnet/mixin"
     }
   },
-  "version": "aa74b0d377d32648ca50f2531fe2253895629d9f",
+  "version": "609424db53853b992277b7a9a0e5cf59f4cc24f3",
   "sum": "gi+knjdxs2T715iIQIntrimbHRgHnpM8IFBJDD1gYfs=",
   "name": "prometheus-operator-mixin"
 },

@@ -179,8 +179,8 @@
       "subdir": "jsonnet/prometheus-operator"
     }
   },
-  "version": "aa74b0d377d32648ca50f2531fe2253895629d9f",
-  "sum": "EZR4sBAtmFRsUR7U4SybuBUhK9ncMCvEu9xHtu8B9KA="
+  "version": "609424db53853b992277b7a9a0e5cf59f4cc24f3",
+  "sum": "z2/5LjQpWC7snhT+n/mtQqoy5986uI95sTqcKQziwGU="
 },
 {
   "source": {

@@ -189,7 +189,7 @@
       "subdir": "doc/alertmanager-mixin"
     }
   },
-  "version": "27b6eb7ce02680c84b9a06503edbddc9213f586d",
+  "version": "eb8369ec510d76f63901379a8437c4b55885d6c5",
   "sum": "IpF46ZXsm+0wJJAPtAre8+yxTNZA57mBqGpBP/r7/kw=",
   "name": "alertmanager"
 },

@@ -210,7 +210,7 @@
       "subdir": "documentation/prometheus-mixin"
     }
   },
-  "version": "616038f2b64656b2c9c6053f02aee544c5b8bb17",
+  "version": "ac85bd47e1cfa0d63520e4c0b4e26900c42c326b",
   "sum": "dYLcLzGH4yF3qB7OGC/7z4nqeTNjv42L7Q3BENU8XJI=",
   "name": "prometheus"
 },

@@ -232,7 +232,7 @@
       "subdir": "mixin"
     }
   },
-  "version": "dcadaae80fcce1fb05452b37ca8d3b2809d7cef9",
+  "version": "35c0dbec856f97683a846e9c53f83156a3a44ff3",
   "sum": "HhSSbGGCNHCMy1ee5jElYDm0yS9Vesa7QB2/SHKdjsY=",
   "name": "thanos-mixin"
 }
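A hedged aside: pin files like this are normally regenerated with jsonnet-bundler rather than edited by hand, so a coordinated bump of every `version`/`sum` pair is the expected shape of such a change:

```
# Re-resolve all dependencies and rewrite jsonnetfile.lock.json
# (assumes jsonnet-bundler's jb binary is installed)
jb update
```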


@@ -61,15 +61,31 @@ cert-manager:
   #     mountPath: "/var/run/secrets/sts.amazonaws.com/serviceaccount/"
   #     readOnly: true

+  tolerations:
+    - key: node-role.kubernetes.io/control-plane
+      effect: NoSchedule
+  nodeSelector:
+    node-role.kubernetes.io/control-plane: ""
+
   ingressShim:
     defaultIssuerName: letsencrypt-dns-prod
     defaultIssuerKind: ClusterIssuer

   webhook:
+    tolerations:
+      - key: node-role.kubernetes.io/control-plane
+        effect: NoSchedule
+    nodeSelector:
+      node-role.kubernetes.io/control-plane: ""
     extraArgs:
       - "--logging-format=json"

   cainjector:
+    tolerations:
+      - key: node-role.kubernetes.io/control-plane
+        effect: NoSchedule
+    nodeSelector:
+      node-role.kubernetes.io/control-plane: ""
     extraArgs:
       - "--logging-format=json"
       - "--leader-elect=false"


@@ -2,7 +2,7 @@ apiVersion: v2
 name: kubezero-ci
 description: KubeZero umbrella chart for all things CI
 type: application
-version: 0.8.16
+version: 0.8.13
 home: https://kubezero.com
 icon: https://cdn.zero-downtime.net/assets/kubezero/logo-small-64.png
 keywords:

@@ -22,7 +22,7 @@ dependencies:
     repository: https://dl.gitea.io/charts/
     condition: gitea.enabled
   - name: jenkins
-    version: 5.5.8
+    version: 5.4.3
     repository: https://charts.jenkins.io
     condition: jenkins.enabled
   - name: trivy

@@ -30,7 +30,7 @@ dependencies:
     repository: https://aquasecurity.github.io/helm-charts/
     condition: trivy.enabled
   - name: renovate
-    version: 38.57.0
+    version: 37.438.2
     repository: https://docs.renovatebot.com/helm-charts
     condition: renovate.enabled
 kubeVersion: ">= 1.25.0"


@@ -1,6 +1,6 @@
 # kubezero-ci
-![Version: 0.8.16](https://img.shields.io/badge/Version-0.8.16-informational?style=flat-square) ![Type: application](https://img.shields.io/badge/Type-application-informational?style=flat-square)
+![Version: 0.8.13](https://img.shields.io/badge/Version-0.8.13-informational?style=flat-square) ![Type: application](https://img.shields.io/badge/Type-application-informational?style=flat-square)

 KubeZero umbrella chart for all things CI

@@ -20,9 +20,9 @@ Kubernetes: `>= 1.25.0`
 |------------|------|---------|
 | https://aquasecurity.github.io/helm-charts/ | trivy | 0.7.0 |
 | https://cdn.zero-downtime.net/charts/ | kubezero-lib | >= 0.1.6 |
-| https://charts.jenkins.io | jenkins | 5.5.8 |
+| https://charts.jenkins.io | jenkins | 5.4.3 |
 | https://dl.gitea.io/charts/ | gitea | 10.4.0 |
-| https://docs.renovatebot.com/helm-charts | renovate | 38.57.0 |
+| https://docs.renovatebot.com/helm-charts | renovate | 37.438.2 |

 # Jenkins
 - default build retention 10 builds, 32days

@@ -84,14 +84,13 @@ Kubernetes: `>= 1.25.0`
 | gitea.securityContext.capabilities.drop[0] | string | `"ALL"` | |
 | gitea.strategy.type | string | `"Recreate"` | |
 | gitea.test.enabled | bool | `false` | |
-| jenkins.agent.annotations."cluster-autoscaler.kubernetes.io/safe-to-evict" | string | `"false"` | |
 | jenkins.agent.annotations."container.apparmor.security.beta.kubernetes.io/jnlp" | string | `"unconfined"` | |
 | jenkins.agent.containerCap | int | `2` | |
 | jenkins.agent.customJenkinsLabels[0] | string | `"podman-aws-trivy"` | |
 | jenkins.agent.defaultsProviderTemplate | string | `"podman-aws"` | |
 | jenkins.agent.idleMinutes | int | `30` | |
 | jenkins.agent.image.repository | string | `"public.ecr.aws/zero-downtime/jenkins-podman"` | |
-| jenkins.agent.image.tag | string | `"v0.6.2"` | |
+| jenkins.agent.image.tag | string | `"v0.6.0"` | |
 | jenkins.agent.inheritYamlMergeStrategy | bool | `true` | |
 | jenkins.agent.podName | string | `"podman-aws"` | |
 | jenkins.agent.podRetention | string | `"Default"` | |

@@ -104,7 +103,7 @@ Kubernetes: `>= 1.25.0`
 | jenkins.agent.serviceAccount | string | `"jenkins-podman-aws"` | |
 | jenkins.agent.showRawYaml | bool | `false` | |
 | jenkins.agent.yamlMergeStrategy | string | `"merge"` | |
-| jenkins.agent.yamlTemplate | string | `"apiVersion: v1\nkind: Pod\nspec:\n  securityContext:\n    fsGroup: 1000\n  containers:\n    - name: jnlp\n      resources:\n        requests:\n          cpu: \"200m\"\n          memory: \"512Mi\"\n        limits:\n          cpu: \"4\"\n          memory: \"6144Mi\"\n          github.com/fuse: 1\n      volumeMounts:\n        - name: aws-token\n          mountPath: \"/var/run/secrets/sts.amazonaws.com/serviceaccount/\"\n          readOnly: true\n        - name: host-registries-conf\n          mountPath: \"/home/jenkins/.config/containers/registries.conf\"\n          readOnly: true\n  volumes:\n  - name: aws-token\n    projected:\n      sources:\n      - serviceAccountToken:\n          path: token\n          expirationSeconds: 86400\n          audience: \"sts.amazonaws.com\"\n  - name: host-registries-conf\n    hostPath:\n      path: /etc/containers/registries.conf\n      type: File"` | |
+| jenkins.agent.yamlTemplate | string | `"apiVersion: v1\nkind: Pod\nspec:\n  securityContext:\n    fsGroup: 1000\n  containers:\n    - name: jnlp\n      resources:\n        requests:\n          cpu: \"512m\"\n          memory: \"1024Mi\"\n        limits:\n          cpu: \"4\"\n          memory: \"6144Mi\"\n          github.com/fuse: 1\n      volumeMounts:\n        - name: aws-token\n          mountPath: \"/var/run/secrets/sts.amazonaws.com/serviceaccount/\"\n          readOnly: true\n        - name: host-registries-conf\n          mountPath: \"/home/jenkins/.config/containers/registries.conf\"\n          readOnly: true\n  volumes:\n  - name: aws-token\n    projected:\n      sources:\n      - serviceAccountToken:\n          path: token\n          expirationSeconds: 86400\n          audience: \"sts.amazonaws.com\"\n  - name: host-registries-conf\n    hostPath:\n      path: /etc/containers/registries.conf\n      type: File"` | |
 | jenkins.controller.JCasC.configScripts.zdt-settings | string | `"jenkins:\n  noUsageStatistics: true\n  disabledAdministrativeMonitors:\n  - \"jenkins.security.ResourceDomainRecommendation\"\nappearance:\n  themeManager:\n    disableUserThemes: true\n    theme: \"dark\"\nunclassified:\n  openTelemetry:\n    configurationProperties: |-\n      otel.exporter.otlp.protocol=grpc\n      otel.instrumentation.jenkins.web.enabled=false\n      ignoredSteps: \"dir,echo,isUnix,pwd,properties\"\n    #endpoint: \"telemetry-jaeger-collector.telemetry:4317\"\n    exportOtelConfigurationAsEnvironmentVariables: false\n    #observabilityBackends:\n    #  - jaeger:\n    #      jaegerBaseUrl: \"https://jaeger.example.com\"\n    #      name: \"KubeZero Jaeger\"\n    serviceName: \"Jenkins\"\n  buildDiscarders:\n    configuredBuildDiscarders:\n    - \"jobBuildDiscarder\"\n    - defaultBuildDiscarder:\n        discarder:\n          logRotator:\n            artifactDaysToKeepStr: \"32\"\n            artifactNumToKeepStr: \"10\"\n            daysToKeepStr: \"100\"\n            numToKeepStr: \"10\"\n"` | |
 | jenkins.controller.containerEnv[0].name | string | `"OTEL_LOGS_EXPORTER"` | |
 | jenkins.controller.containerEnv[0].value | string | `"none"` | |


@@ -12,48 +12,6 @@ Use the following links to reference issues, PRs, and commits prior to v2.6.0.
 The changelog until v1.5.7 was auto-generated based on git commits.
 Those entries include a reference to the git commit to be able to get more details.

-## 5.5.8
-
-Add `agent.garbageCollection` to support setting [kubernetes plugin garbage collection](https://plugins.jenkins.io/kubernetes/#plugin-content-garbage-collection-beta).
-
-## 5.5.7
-
-Update `kubernetes` to version `4285.v50ed5f624918`
-
-## 5.5.6
-
-Add `agent.useDefaultServiceAccount` to support omitting setting `serviceAccount` in the default pod template from `serviceAgentAccount.name`.
-Add `agent.serviceAccount` to support setting the default pod template value.
-
-## 5.5.5
-
-Update `jenkins/inbound-agent` to version `3261.v9c670a_4748a_9-1`
-
-## 5.5.4
-
-Update `jenkins/jenkins` to version `2.462.1-jdk17`
-
-## 5.5.3
-
-Update `git` to version `5.3.0`
-
-## 5.5.2
-
-Update `kubernetes` to version `4280.vd919fa_528c7e`
-
-## 5.5.1
-
-Update `kubernetes` to version `4265.v78b_d4a_1c864a_`
-
-## 5.5.0
-
-Introduce capability of set skipTlsVerify and usageRestricted flags in additionalClouds
-
-## 5.4.4
-
-Update CHANGELOG.md, README.md, and UPGRADING.md for linting
-
 ## 5.4.3

 Update `configuration-as-code` to version `1836.vccda_4a_122a_a_e`

@@ -81,6 +39,7 @@ Update `kubernetes` to version `4253.v7700d91739e5`
 ## 5.3.4

 Update `jenkins/jenkins` to version `2.452.3-jdk17`
+
 ## 5.3.3

 Update `jenkins/inbound-agent` to version `3256.v88a_f6e922152-1`

@@ -415,7 +374,7 @@ Changes in 4.7.0 were reverted.
 ## 4.7.0

-Runs `config-reload` as an init container, in addition to the sidecar container, to ensure that JCasC YAMLs are present before the main Jenkins container starts. This should fix some race conditions and crashes on startup.
+Runs `config-reload` as an init container, in addition to the sidecar container, to ensure that JCasC YAMLS are present before the main Jenkins container starts. This should fix some race conditions and crashes on startup.

 ## 4.6.7

@@ -581,7 +540,7 @@ Disable volume mount if disableSecretMount enabled
 ## 4.3.9

-Document `.Values.agent.directConnection` in readme.
+Document `.Values.agent.directConnection` in README.
 Add default value for `.Values.agent.directConnection` to `values.yaml`

 ## 4.3.8

@@ -773,7 +732,7 @@ Fix path of projected secrets from `additionalExistingSecrets`.
 ## 4.1.7

-Update readme with explanation on the required environmental variable `AWS_REGION` in case of using an S3 bucket.
+Update README with explanation on the required environmental variable `AWS_REGION` in case of using an S3 bucket.

 ## 4.1.6

@@ -781,7 +740,7 @@ project adminSecret, additionalSecrets and additionalExistingSecrets instead of
 ## 4.1.5

-Update readme to fix `JAVA_OPTS` name.
+Update README to fix `JAVA_OPTS` name.

 ## 4.1.4

 Update plugins

@@ -896,7 +855,7 @@ Update default plugin versions
 ## 3.9.4

-Add JAVA_OPTIONS to the readme so proxy settings get picked by jenkins-plugin-cli
+Add JAVA_OPTIONS to the README so proxy settings get picked by jenkins-plugin-cli

 ## 3.9.3

@@ -1189,7 +1148,7 @@ Update Jenkins image and appVersion to jenkins lts release version 2.263.4
 ## 3.1.12

-Added GitHub Action to automate the updating of LTS releases.
+Added GitHub action to automate the updating of LTS releases.

 ## 3.1.11

@@ -1393,7 +1352,7 @@ Added unit tests for most resources in the Helm chart.
 ## 2.12.1

-Helm chart readme update
+Helm chart README update

 ## 2.12.0

@@ -1455,7 +1414,7 @@ Fixes #19
 ## 2.6.0 First release in jenkinsci GitHub org

-Updated readme for new location
+Updated README for new location

 ## 2.5.2

@@ -1471,7 +1430,7 @@ Add an option to specify that Jenkins master should be initialized only once, du
 ## 2.4.1

-Reorder readme parameters into sections to facilitate chart usage and maintenance
+Reorder README parameters into sections to facilitate chart usage and maintenance

 ## 2.4.0 Update default agent image

@@ -1505,7 +1464,7 @@ Configure `REQ_RETRY_CONNECT` to `10` to give Jenkins more time to start up.
 Value can be configured via `master.sidecars.configAutoReload.reqRetryConnect`

-## 2.1.2 updated readme
+## 2.1.2 updated README

 ## 2.1.1 update credentials-binding plugin to 1.23

@@ -1519,7 +1478,7 @@ Only render authorizationStrategy and securityRealm when values are set.
 ## 2.0.0 Configuration as Code now default + container does not run as root anymore

-The readme contains more details for this update.
+The README contains more details for this update.
 Please note that the updated values contain breaking changes.

 ## 1.27.0 Update plugin versions & sidecar container

@@ -1684,7 +1643,7 @@ In recent version of configuration-as-code-plugin this is no longer necessary.
 ## 1.9.24

-Update JCasC auto-reload docs and remove stale SSH key references from version "1.8.0 JCasC auto reload works without SSH keys"
+Update JCasC auto-reload docs and remove stale ssh key references from version "1.8.0 JCasC auto reload works without ssh keys"

 ## 1.9.23 Support jenkinsUriPrefix when JCasC is enabled

@@ -1809,7 +1768,7 @@ Revert fix in `1.7.10` since direct connection is now disabled by default.
 Add `master.schedulerName` to allow setting a Kubernetes custom scheduler

-## 1.8.0 JCasC auto reload works without SSH keys
+## 1.8.0 JCasC auto reload works without ssh keys

 We make use of the fact that the Jenkins Configuration as Code Plugin can be triggered via http `POST` to `JENKINS_URL/configuration-as-code/reload`and a pre-shared key.
 The sidecar container responsible for reloading config changes is now `kiwigrid/k8s-sidecar:0.1.20` instead of it's fork `shadwell/k8s-sidecar`.

@@ -2337,7 +2296,7 @@ commit: 9de96faa0
 ## 0.32.7

-Fix Markdown syntax in readme (#11496)
+Fix Markdown syntax in README (#11496)
 commit: a32221a95

 ## 0.32.6

@@ -2567,7 +2526,7 @@ commit: e0a20b0b9
 ## 0.16.22

-avoid linting errors when adding Values.Ingress.Annotations (#7425)
+avoid lint errors when adding Values.Ingress.Annotations (#7425)
 commit: 99eacc854

 ## 0.16.21

@@ -2592,7 +2551,7 @@ commit: bf8180018
 ## 0.16.17

-Add Master.AdminPassword in readme (#6987)
+Add Master.AdminPassword in README (#6987)
 commit: 13e754ad7

 ## 0.16.16

@@ -2662,7 +2621,7 @@ commit: fc6100c38
 ## 0.16.1

-fix typo in jenkins readme (#5228)
+fix typo in jenkins README (#5228)
 commit: 3cd3f4b8b

 ## 0.16.0

@@ -2783,7 +2742,7 @@ commit: 9a230a6b1
 Double retry count for Jenkins test
 commit: 129c8e824

-Jenkins: Update readme | Master.ServiceAnnotations (#2757)
+Jenkins: Update README | Master.ServiceAnnotations (#2757)
 commit: 6571810bc

 ## 0.10.0

@@ -2855,7 +2814,7 @@ commit: 4af5810ff
 ## 0.8.4

-Add support for supplying JENKINS_OPTS and/or URI prefix (#1405)
+Add support for supplying JENKINS_OPTS and/or uri prefix (#1405)
 commit: 6a331901a

 ## 0.8.3

@@ -3065,7 +3024,7 @@ commit: 3cbd3ced6
 Remove 'Getting Started:' from various NOTES.txt. (#181)
 commit: 2f63fd524

-docs(\*): update readmes to reference chart repos (#119)
+docs(\*): update READMEs to reference chart repos (#119)
 commit: c7d1bff05

 ## 0.1.0


@@ -1,14 +1,14 @@
 annotations:
   artifacthub.io/category: integration-delivery
   artifacthub.io/changes: |
-    - Add `agent.garbageCollection` to support setting [kubernetes plugin garbage collection](https://plugins.jenkins.io/kubernetes/#plugin-content-garbage-collection-beta).
+    - Update `configuration-as-code` to version `1836.vccda_4a_122a_a_e`
   artifacthub.io/images: |
     - name: jenkins
-      image: docker.io/jenkins/jenkins:2.462.1-jdk17
+      image: docker.io/jenkins/jenkins:2.452.3-jdk17
     - name: k8s-sidecar
       image: docker.io/kiwigrid/k8s-sidecar:1.27.5
     - name: inbound-agent
-      image: jenkins/inbound-agent:3261.v9c670a_4748a_9-1
+      image: jenkins/inbound-agent:3256.v88a_f6e922152-1
   artifacthub.io/license: Apache-2.0
   artifacthub.io/links: |
     - name: Chart Source

@@ -18,7 +18,7 @@ annotations:
     - name: support
       url: https://github.com/jenkinsci/helm-charts/issues
 apiVersion: v2
-appVersion: 2.462.1
+appVersion: 2.452.3
 description: 'Jenkins - Build great things at any scale! As the leading open source
   automation server, Jenkins provides over 1800 plugins to support building, deploying
   and automating any project. '

@@ -46,4 +46,4 @@ sources:
   - https://github.com/maorfr/kube-tasks
   - https://github.com/jenkinsci/configuration-as-code-plugin
 type: application
-version: 5.5.8
+version: 5.4.3


@@ -122,7 +122,7 @@ So think of the list below more as a general guideline of what should be done.
 - Test drive those setting on a separate installation
 - Put Jenkins to Quiet Down mode so that it does not accept new jobs
   `<JENKINS_URL>/quietDown`
-- Change permissions of all files and folders to the new user and group ID:
+- Change permissions of all files and folders to the new user and group id:
   ```console
   kubectl exec -it <jenkins_pod> -c jenkins /bin/bash
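A hedged completion of that permissions step (the chown itself falls outside this hunk; 1000:1000 is the usual Jenkins UID/GID but should be verified against your image):

```
# Inside the pod shell opened above: re-own the Jenkins home
chown -R 1000:1000 /var/jenkins_home
```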


@@ -8,71 +8,64 @@ The following tables list the configurable parameters of the Jenkins chart and t

 | Key | Type | Description | Default |
 |:----|:-----|:---------|:------------|
-| [additionalAgents](./values.yaml#L1189) | object | Configure additional | `{}` |
+| [additionalAgents](./values.yaml#L1165) | object | Configure additional | `{}` |
-| [additionalClouds](./values.yaml#L1214) | object | | `{}` |
+| [additionalClouds](./values.yaml#L1190) | object | | `{}` |
-| [agent.TTYEnabled](./values.yaml#L1095) | bool | Allocate pseudo tty to the side container | `false` |
+| [agent.TTYEnabled](./values.yaml#L1083) | bool | Allocate pseudo tty to the side container | `false` |
-| [agent.additionalContainers](./values.yaml#L1142) | list | Add additional containers to the agents | `[]` |
+| [agent.additionalContainers](./values.yaml#L1118) | list | Add additional containers to the agents | `[]` |
-| [agent.alwaysPullImage](./values.yaml#L988) | bool | Always pull agent container image before build | `false` |
+| [agent.alwaysPullImage](./values.yaml#L976) | bool | Always pull agent container image before build | `false` |
-| [agent.annotations](./values.yaml#L1138) | object | Annotations to apply to the pod | `{}` |
+| [agent.annotations](./values.yaml#L1114) | object | Annotations to apply to the pod | `{}` |
-| [agent.args](./values.yaml#L1089) | string | Arguments passed to command to execute | `"${computer.jnlpmac} ${computer.name}"` |
+| [agent.args](./values.yaml#L1077) | string | Arguments passed to command to execute | `"${computer.jnlpmac} ${computer.name}"` |
-| [agent.command](./values.yaml#L1087) | string | Command to execute when side container starts | `nil` |
+| [agent.command](./values.yaml#L1075) | string | Command to execute when side container starts | `nil` |
-| [agent.componentName](./values.yaml#L956) | string | | `"jenkins-agent"` |
+| [agent.componentName](./values.yaml#L944) | string | | `"jenkins-agent"` |
-| [agent.connectTimeout](./values.yaml#L1136) | int | Timeout in seconds for an agent to be online | `100` |
+| [agent.connectTimeout](./values.yaml#L1112) | int | Timeout in seconds for an agent to be online | `100` |
-| [agent.containerCap](./values.yaml#L1097) | int | Max number of agents to launch | `10` |
+| [agent.containerCap](./values.yaml#L1085) | int | Max number of agents to launch | `10` |
-| [agent.customJenkinsLabels](./values.yaml#L953) | list | Append Jenkins labels to the agent | `[]` |
+| [agent.customJenkinsLabels](./values.yaml#L941) | list | Append Jenkins labels to the agent | `[]` |
 | [agent.defaultsProviderTemplate](./values.yaml#L907) | string | The name of the pod template to use for providing default values | `""` |
-| [agent.directConnection](./values.yaml#L959) | bool | | `false` |
+| [agent.directConnection](./values.yaml#L947) | bool | | `false` |
-| [agent.disableDefaultAgent](./values.yaml#L1160) | bool | Disable the default Jenkins Agent configuration | `false` |
+| [agent.disableDefaultAgent](./values.yaml#L1136) | bool | Disable the default Jenkins Agent configuration | `false` |
 | [agent.enabled](./values.yaml#L905) | bool | Enable Kubernetes plugin jnlp-agent podTemplate | `true` |
-| [agent.envVars](./values.yaml#L1070) | list | Environment variables for the agent Pod | `[]` |
+| [agent.envVars](./values.yaml#L1058) | list | Environment variables for the agent Pod | `[]` |
-| [agent.garbageCollection.enabled](./values.yaml#L1104) | bool | When enabled, Jenkins will periodically check for orphan pods that have not been touched for the given timeout period and delete them. | `false` |
-| [agent.garbageCollection.namespaces](./values.yaml#L1106) | string | Namespaces to look at for garbage collection, in addition to the default namespace defined for the cloud. One namespace per line. | `""` |
-| [agent.garbageCollection.timeout](./values.yaml#L1111) | int | Timeout value for orphaned pods | `300` |
-| [agent.hostNetworking](./values.yaml#L967) | bool | Enables the agent to use the host network | `false` |
+| [agent.hostNetworking](./values.yaml#L955) | bool | Enables the agent to use the host network | `false` |
-| [agent.idleMinutes](./values.yaml#L1114) | int | Allows the Pod to remain active for reuse until the configured number of minutes has passed since the last step was executed on it | `0` |
+| [agent.idleMinutes](./values.yaml#L1090) | int | Allows the Pod to remain active for reuse until the configured number of minutes has passed since the last step was executed on it | `0` |
-| [agent.image.repository](./values.yaml#L946) | string | Repository to pull the agent jnlp image from | `"jenkins/inbound-agent"` |
+| [agent.image.repository](./values.yaml#L934) | string | Repository to pull the agent jnlp image from | `"jenkins/inbound-agent"` |
-| [agent.image.tag](./values.yaml#L948) | string | Tag of the image to pull | `"3261.v9c670a_4748a_9-1"` |
+| [agent.image.tag](./values.yaml#L936) | string | Tag of the image to pull | `"3256.v88a_f6e922152-1"` |
-| [agent.imagePullSecretName](./values.yaml#L955) | string | Name of the secret to be used to pull the image | `nil` |
+| [agent.imagePullSecretName](./values.yaml#L943) | string | Name of the secret to be used to pull the image | `nil` |
-| [agent.inheritYamlMergeStrategy](./values.yaml#L1134) | bool | Controls whether the defined yaml merge strategy will be inherited if another defined pod template is configured to inherit from the current one | `false` |
+| [agent.inheritYamlMergeStrategy](./values.yaml#L1110) | bool | Controls whether the defined yaml merge strategy will be inherited if another defined pod template is configured to inherit from the current one | `false` |
-| [agent.jenkinsTunnel](./values.yaml#L923) | string | Overrides the Kubernetes Jenkins tunnel | `nil` |
+| [agent.jenkinsTunnel](./values.yaml#L915) | string | Overrides the Kubernetes Jenkins tunnel | `nil` |
-| [agent.jenkinsUrl](./values.yaml#L919) | string | Overrides the Kubernetes Jenkins URL | `nil` |
+| [agent.jenkinsUrl](./values.yaml#L911) | string | Overrides the Kubernetes Jenkins URL | `nil` |
-| [agent.jnlpregistry](./values.yaml#L943) | string | Custom registry used to pull the agent jnlp image from | `nil` |
+| [agent.jnlpregistry](./values.yaml#L931) | string | Custom registry used to pull the agent jnlp image from | `nil` |
-| [agent.kubernetesConnectTimeout](./values.yaml#L929) | int | The connection timeout in seconds for connections to Kubernetes API. The minimum value is 5 | `5` |
+| [agent.kubernetesConnectTimeout](./values.yaml#L917) | int | The connection timeout in seconds for connections to Kubernetes API. The minimum value is 5 | `5` |
+| [agent.kubernetesReadTimeout](./values.yaml#L919) | int | The read timeout in seconds for connections to Kubernetes API. The minimum value is 15 | `15` |
+| [agent.livenessProbe](./values.yaml#L966) | object | | `{}` |
+| [agent.maxRequestsPerHostStr](./values.yaml#L921) | string | The maximum concurrent connections to Kubernetes API | `"32"` |
| [agent.kubernetesReadTimeout](./values.yaml#L931) | int | The read timeout in seconds for connections to Kubernetes API. The minimum value is 15 | `15` | | [agent.namespace](./values.yaml#L927) | string | Namespace in which the Kubernetes agents should be launched | `nil` |
| [agent.livenessProbe](./values.yaml#L978) | object | | `{}` | | [agent.nodeSelector](./values.yaml#L1069) | object | Node labels for pod assignment | `{}` |
| [agent.maxRequestsPerHostStr](./values.yaml#L933) | string | The maximum concurrent connections to Kubernetes API | `"32"` | | [agent.nodeUsageMode](./values.yaml#L939) | string | | `"NORMAL"` |
| [agent.namespace](./values.yaml#L939) | string | Namespace in which the Kubernetes agents should be launched | `nil` | | [agent.podLabels](./values.yaml#L929) | object | Custom Pod labels (an object with `label-key: label-value` pairs) | `{}` |
| [agent.nodeSelector](./values.yaml#L1081) | object | Node labels for pod assignment | `{}` | | [agent.podName](./values.yaml#L1087) | string | Agent Pod base name | `"default"` |
| [agent.nodeUsageMode](./values.yaml#L951) | string | | `"NORMAL"` | | [agent.podRetention](./values.yaml#L985) | string | | `"Never"` |
| [agent.podLabels](./values.yaml#L941) | object | Custom Pod labels (an object with `label-key: label-value` pairs) | `{}` | | [agent.podTemplates](./values.yaml#L1146) | object | Configures extra pod templates for the default kubernetes cloud | `{}` |
| [agent.podName](./values.yaml#L1099) | string | Agent Pod base name | `"default"` | | [agent.privileged](./values.yaml#L949) | bool | Agent privileged container | `false` |
| [agent.podRetention](./values.yaml#L997) | string | | `"Never"` | | [agent.resources](./values.yaml#L957) | object | Resources allocation (Requests and Limits) | `{"limits":{"cpu":"512m","memory":"512Mi"},"requests":{"cpu":"512m","memory":"512Mi"}}` |
| [agent.podTemplates](./values.yaml#L1170) | object | Configures extra pod templates for the default kubernetes cloud | `{}` | | [agent.restrictedPssSecurityContext](./values.yaml#L982) | bool | Set a restricted securityContext on jnlp containers | `false` |
| [agent.privileged](./values.yaml#L961) | bool | Agent privileged container | `false` | | [agent.retentionTimeout](./values.yaml#L923) | int | Time in minutes after which the Kubernetes cloud plugin will clean up an idle worker that has not already terminated | `5` |
| [agent.resources](./values.yaml#L969) | object | Resources allocation (Requests and Limits) | `{"limits":{"cpu":"512m","memory":"512Mi"},"requests":{"cpu":"512m","memory":"512Mi"}}` | | [agent.runAsGroup](./values.yaml#L953) | string | Configure container group | `nil` |
| [agent.restrictedPssSecurityContext](./values.yaml#L994) | bool | Set a restricted securityContext on jnlp containers | `false` | | [agent.runAsUser](./values.yaml#L951) | string | Configure container user | `nil` |
| [agent.retentionTimeout](./values.yaml#L935) | int | Time in minutes after which the Kubernetes cloud plugin will clean up an idle worker that has not already terminated | `5` | | [agent.secretEnvVars](./values.yaml#L1062) | list | Mount a secret as environment variable | `[]` |
| [agent.runAsGroup](./values.yaml#L965) | string | Configure container group | `nil` | | [agent.showRawYaml](./values.yaml#L989) | bool | | `true` |
| [agent.runAsUser](./values.yaml#L963) | string | Configure container user | `nil` | | [agent.sideContainerName](./values.yaml#L1079) | string | Side container name | `"jnlp"` |
| [agent.secretEnvVars](./values.yaml#L1074) | list | Mount a secret as environment variable | `[]` | | [agent.volumes](./values.yaml#L996) | list | Additional volumes | `[]` |
| [agent.serviceAccount](./values.yaml#L915) | string | Override the default service account | `serviceAccountAgent.name` if `agent.useDefaultServiceAccount` is `true` | | [agent.waitForPodSec](./values.yaml#L925) | int | Seconds to wait for pod to be running | `600` |
| [agent.showRawYaml](./values.yaml#L1001) | bool | | `true` | | [agent.websocket](./values.yaml#L946) | bool | Enables agent communication via websockets | `false` |
| [agent.sideContainerName](./values.yaml#L1091) | string | Side container name | `"jnlp"` | | [agent.workingDir](./values.yaml#L938) | string | Configure working directory for default agent | `"/home/jenkins/agent"` |
| [agent.skipTlsVerify](./values.yaml#L925) | bool | Disables the verification of the controller certificate on remote connection. This flag correspond to the "Disable https certificate check" flag in kubernetes plugin UI | `false` | | [agent.workspaceVolume](./values.yaml#L1031) | object | Workspace volume (defaults to EmptyDir) | `{}` |
| [agent.usageRestricted](./values.yaml#L927) | bool | Enable the possibility to restrict the usage of this agent to specific folder. This flag correspond to the "Restrict pipeline support to authorized folders" flag in kubernetes plugin UI | `false` | | [agent.yamlMergeStrategy](./values.yaml#L1108) | string | Defines how the raw yaml field gets merged with yaml definitions from inherited pod templates. Possible values: "merge" or "override" | `"override"` |
| [agent.useDefaultServiceAccount](./values.yaml#L911) | bool | Use `serviceAccountAgent.name` as the default value for defaults template `serviceAccount` | `true` | | [agent.yamlTemplate](./values.yaml#L1097) | string | The raw yaml of a Pod API Object to merge into the agent spec | `""` |
| [agent.volumes](./values.yaml#L1008) | list | Additional volumes | `[]` | | [awsSecurityGroupPolicies.enabled](./values.yaml#L1316) | bool | | `false` |
| [agent.waitForPodSec](./values.yaml#L937) | int | Seconds to wait for pod to be running | `600` | | [awsSecurityGroupPolicies.policies[0].name](./values.yaml#L1318) | string | | `""` |
| [agent.websocket](./values.yaml#L958) | bool | Enables agent communication via websockets | `false` | | [awsSecurityGroupPolicies.policies[0].podSelector](./values.yaml#L1320) | object | | `{}` |
| [agent.workingDir](./values.yaml#L950) | string | Configure working directory for default agent | `"/home/jenkins/agent"` | | [awsSecurityGroupPolicies.policies[0].securityGroupIds](./values.yaml#L1319) | list | | `[]` |
| [agent.workspaceVolume](./values.yaml#L1043) | object | Workspace volume (defaults to EmptyDir) | `{}` | | [checkDeprecation](./values.yaml#L1313) | bool | Checks if any deprecated values are used | `true` |
| [agent.yamlMergeStrategy](./values.yaml#L1132) | string | Defines how the raw yaml field gets merged with yaml definitions from inherited pod templates. Possible values: "merge" or "override" | `"override"` |
| [agent.yamlTemplate](./values.yaml#L1121) | string | The raw yaml of a Pod API Object to merge into the agent spec | `""` |
| [awsSecurityGroupPolicies.enabled](./values.yaml#L1340) | bool | | `false` |
| [awsSecurityGroupPolicies.policies[0].name](./values.yaml#L1342) | string | | `""` |
| [awsSecurityGroupPolicies.policies[0].podSelector](./values.yaml#L1344) | object | | `{}` |
| [awsSecurityGroupPolicies.policies[0].securityGroupIds](./values.yaml#L1343) | list | | `[]` |
| [checkDeprecation](./values.yaml#L1337) | bool | Checks if any deprecated values are used | `true` |
| [clusterZone](./values.yaml#L21) | string | Override the cluster name for FQDN resolving | `"cluster.local"` | | [clusterZone](./values.yaml#L21) | string | Override the cluster name for FQDN resolving | `"cluster.local"` |
| [controller.JCasC.authorizationStrategy](./values.yaml#L533) | string | Jenkins Config as Code Authorization Strategy-section | `"loggedInUsersCanDoAnything:\n allowAnonymousRead: false"` | | [controller.JCasC.authorizationStrategy](./values.yaml#L533) | string | Jenkins Config as Code Authorization Strategy-section | `"loggedInUsersCanDoAnything:\n allowAnonymousRead: false"` |
| [controller.JCasC.configMapAnnotations](./values.yaml#L538) | object | Annotations for the JCasC ConfigMap | `{}` | | [controller.JCasC.configMapAnnotations](./values.yaml#L538) | object | Annotations for the JCasC ConfigMap | `{}` |
@ -164,7 +157,7 @@ The following tables list the configurable parameters of the Jenkins chart and t
| [controller.initializeOnce](./values.yaml#L414) | bool | Initialize only on first installation. Ensures plugins do not get updated inadvertently. Requires `persistence.enabled` to be set to `true` | `false` | | [controller.initializeOnce](./values.yaml#L414) | bool | Initialize only on first installation. Ensures plugins do not get updated inadvertently. Requires `persistence.enabled` to be set to `true` | `false` |
| [controller.installLatestPlugins](./values.yaml#L403) | bool | Download the minimum required version or latest version of all dependencies | `true` | | [controller.installLatestPlugins](./values.yaml#L403) | bool | Download the minimum required version or latest version of all dependencies | `true` |
| [controller.installLatestSpecifiedPlugins](./values.yaml#L406) | bool | Set to true to download the latest version of any plugin that is requested to have the latest version | `false` | | [controller.installLatestSpecifiedPlugins](./values.yaml#L406) | bool | Set to true to download the latest version of any plugin that is requested to have the latest version | `false` |
| [controller.installPlugins](./values.yaml#L395) | list | List of Jenkins plugins to install. If you don't want to install plugins, set it to `false` | `["kubernetes:4285.v50ed5f624918","workflow-aggregator:600.vb_57cdd26fdd7","git:5.3.0","configuration-as-code:1836.vccda_4a_122a_a_e"]` | | [controller.installPlugins](./values.yaml#L395) | list | List of Jenkins plugins to install. If you don't want to install plugins, set it to `false` | `["kubernetes:4253.v7700d91739e5","workflow-aggregator:600.vb_57cdd26fdd7","git:5.2.2","configuration-as-code:1836.vccda_4a_122a_a_e"]` |
| [controller.javaOpts](./values.yaml#L156) | string | Append to `JAVA_OPTS` env var | `nil` | | [controller.javaOpts](./values.yaml#L156) | string | Append to `JAVA_OPTS` env var | `nil` |
| [controller.jenkinsAdminEmail](./values.yaml#L96) | string | Email address for the administrator of the Jenkins instance | `nil` | | [controller.jenkinsAdminEmail](./values.yaml#L96) | string | Email address for the administrator of the Jenkins instance | `nil` |
| [controller.jenkinsHome](./values.yaml#L101) | string | Custom Jenkins home path | `"/var/jenkins_home"` | | [controller.jenkinsHome](./values.yaml#L101) | string | Custom Jenkins home path | `"/var/jenkins_home"` |
@ -277,40 +270,40 @@ The following tables list the configurable parameters of the Jenkins chart and t
| [controller.usePodSecurityContext](./values.yaml#L176) | bool | Enable pod security context (must be `true` if podSecurityContextOverride, runAsUser or fsGroup are set) | `true` | | [controller.usePodSecurityContext](./values.yaml#L176) | bool | Enable pod security context (must be `true` if podSecurityContextOverride, runAsUser or fsGroup are set) | `true` |
| [credentialsId](./values.yaml#L27) | string | The Jenkins credentials to access the Kubernetes API server. For the default cluster it is not needed. | `nil` | | [credentialsId](./values.yaml#L27) | string | The Jenkins credentials to access the Kubernetes API server. For the default cluster it is not needed. | `nil` |
| [fullnameOverride](./values.yaml#L13) | string | Override the full resource names | `jenkins-(release-name)` or `jenkins` if the release-name is `jenkins` | | [fullnameOverride](./values.yaml#L13) | string | Override the full resource names | `jenkins-(release-name)` or `jenkins` if the release-name is `jenkins` |
| [helmtest.bats.image.registry](./values.yaml#L1353) | string | Registry of the image used to test the framework | `"docker.io"` | | [helmtest.bats.image.registry](./values.yaml#L1329) | string | Registry of the image used to test the framework | `"docker.io"` |
| [helmtest.bats.image.repository](./values.yaml#L1355) | string | Repository of the image used to test the framework | `"bats/bats"` | | [helmtest.bats.image.repository](./values.yaml#L1331) | string | Repository of the image used to test the framework | `"bats/bats"` |
| [helmtest.bats.image.tag](./values.yaml#L1357) | string | Tag of the image to test the framework | `"1.11.0"` | | [helmtest.bats.image.tag](./values.yaml#L1333) | string | Tag of the image to test the framework | `"1.11.0"` |
| [kubernetesURL](./values.yaml#L24) | string | The URL of the Kubernetes API server | `"https://kubernetes.default"` | | [kubernetesURL](./values.yaml#L24) | string | The URL of the Kubernetes API server | `"https://kubernetes.default"` |
| [nameOverride](./values.yaml#L10) | string | Override the resource name prefix | `Chart.Name` | | [nameOverride](./values.yaml#L10) | string | Override the resource name prefix | `Chart.Name` |
| [namespaceOverride](./values.yaml#L16) | string | Override the deployment namespace | `Release.Namespace` | | [namespaceOverride](./values.yaml#L16) | string | Override the deployment namespace | `Release.Namespace` |
| [networkPolicy.apiVersion](./values.yaml#L1283) | string | NetworkPolicy ApiVersion | `"networking.k8s.io/v1"` | | [networkPolicy.apiVersion](./values.yaml#L1259) | string | NetworkPolicy ApiVersion | `"networking.k8s.io/v1"` |
| [networkPolicy.enabled](./values.yaml#L1278) | bool | Enable the creation of NetworkPolicy resources | `false` | | [networkPolicy.enabled](./values.yaml#L1254) | bool | Enable the creation of NetworkPolicy resources | `false` |
| [networkPolicy.externalAgents.except](./values.yaml#L1297) | list | A list of IP sub-ranges to be excluded from the allowlisted IP range | `[]` | | [networkPolicy.externalAgents.except](./values.yaml#L1273) | list | A list of IP sub-ranges to be excluded from the allowlisted IP range | `[]` |
| [networkPolicy.externalAgents.ipCIDR](./values.yaml#L1295) | string | The IP range from which external agents are allowed to connect to controller, i.e., 172.17.0.0/16 | `nil` | | [networkPolicy.externalAgents.ipCIDR](./values.yaml#L1271) | string | The IP range from which external agents are allowed to connect to controller, i.e., 172.17.0.0/16 | `nil` |
| [networkPolicy.internalAgents.allowed](./values.yaml#L1287) | bool | Allow internal agents (from the same cluster) to connect to controller. Agent pods will be filtered based on PodLabels | `true` | | [networkPolicy.internalAgents.allowed](./values.yaml#L1263) | bool | Allow internal agents (from the same cluster) to connect to controller. Agent pods will be filtered based on PodLabels | `true` |
| [networkPolicy.internalAgents.namespaceLabels](./values.yaml#L1291) | object | A map of labels (keys/values) that agents namespaces must have to be able to connect to controller | `{}` | | [networkPolicy.internalAgents.namespaceLabels](./values.yaml#L1267) | object | A map of labels (keys/values) that agents namespaces must have to be able to connect to controller | `{}` |
| [networkPolicy.internalAgents.podLabels](./values.yaml#L1289) | object | A map of labels (keys/values) that agent pods must have to be able to connect to controller | `{}` | | [networkPolicy.internalAgents.podLabels](./values.yaml#L1265) | object | A map of labels (keys/values) that agent pods must have to be able to connect to controller | `{}` |
| [persistence.accessMode](./values.yaml#L1253) | string | The PVC access mode | `"ReadWriteOnce"` | | [persistence.accessMode](./values.yaml#L1229) | string | The PVC access mode | `"ReadWriteOnce"` |
| [persistence.annotations](./values.yaml#L1249) | object | Annotations for the PVC | `{}` | | [persistence.annotations](./values.yaml#L1225) | object | Annotations for the PVC | `{}` |
| [persistence.dataSource](./values.yaml#L1259) | object | Existing data source to clone PVC from | `{}` | | [persistence.dataSource](./values.yaml#L1235) | object | Existing data source to clone PVC from | `{}` |
| [persistence.enabled](./values.yaml#L1233) | bool | Enable the use of a Jenkins PVC | `true` | | [persistence.enabled](./values.yaml#L1209) | bool | Enable the use of a Jenkins PVC | `true` |
| [persistence.existingClaim](./values.yaml#L1239) | string | Provide the name of a PVC | `nil` | | [persistence.existingClaim](./values.yaml#L1215) | string | Provide the name of a PVC | `nil` |
| [persistence.labels](./values.yaml#L1251) | object | Labels for the PVC | `{}` | | [persistence.labels](./values.yaml#L1227) | object | Labels for the PVC | `{}` |
| [persistence.mounts](./values.yaml#L1271) | list | Additional mounts | `[]` | | [persistence.mounts](./values.yaml#L1247) | list | Additional mounts | `[]` |
| [persistence.size](./values.yaml#L1255) | string | The size of the PVC | `"8Gi"` | | [persistence.size](./values.yaml#L1231) | string | The size of the PVC | `"8Gi"` |
| [persistence.storageClass](./values.yaml#L1247) | string | Storage class for the PVC | `nil` | | [persistence.storageClass](./values.yaml#L1223) | string | Storage class for the PVC | `nil` |
| [persistence.subPath](./values.yaml#L1264) | string | SubPath for jenkins-home mount | `nil` | | [persistence.subPath](./values.yaml#L1240) | string | SubPath for jenkins-home mount | `nil` |
| [persistence.volumes](./values.yaml#L1266) | list | Additional volumes | `[]` | | [persistence.volumes](./values.yaml#L1242) | list | Additional volumes | `[]` |
| [rbac.create](./values.yaml#L1303) | bool | Whether RBAC resources are created | `true` | | [rbac.create](./values.yaml#L1279) | bool | Whether RBAC resources are created | `true` |
| [rbac.readSecrets](./values.yaml#L1305) | bool | Whether the Jenkins service account should be able to read Kubernetes secrets | `false` | | [rbac.readSecrets](./values.yaml#L1281) | bool | Whether the Jenkins service account should be able to read Kubernetes secrets | `false` |
| [renderHelmLabels](./values.yaml#L30) | bool | Enables rendering of the helm.sh/chart label to the annotations | `true` | | [renderHelmLabels](./values.yaml#L30) | bool | Enables rendering of the helm.sh/chart label to the annotations | `true` |
| [serviceAccount.annotations](./values.yaml#L1315) | object | Configures annotations for the ServiceAccount | `{}` | | [serviceAccount.annotations](./values.yaml#L1291) | object | Configures annotations for the ServiceAccount | `{}` |
| [serviceAccount.create](./values.yaml#L1309) | bool | Configures if a ServiceAccount with this name should be created | `true` | | [serviceAccount.create](./values.yaml#L1285) | bool | Configures if a ServiceAccount with this name should be created | `true` |
| [serviceAccount.extraLabels](./values.yaml#L1317) | object | Configures extra labels for the ServiceAccount | `{}` | | [serviceAccount.extraLabels](./values.yaml#L1293) | object | Configures extra labels for the ServiceAccount | `{}` |
| [serviceAccount.imagePullSecretName](./values.yaml#L1319) | string | Controller ServiceAccount image pull secret | `nil` | | [serviceAccount.imagePullSecretName](./values.yaml#L1295) | string | Controller ServiceAccount image pull secret | `nil` |
| [serviceAccount.name](./values.yaml#L1313) | string | | `nil` | | [serviceAccount.name](./values.yaml#L1289) | string | | `nil` |
| [serviceAccountAgent.annotations](./values.yaml#L1330) | object | Configures annotations for the agent ServiceAccount | `{}` | | [serviceAccountAgent.annotations](./values.yaml#L1306) | object | Configures annotations for the agent ServiceAccount | `{}` |
| [serviceAccountAgent.create](./values.yaml#L1324) | bool | Configures if an agent ServiceAccount should be created | `false` | | [serviceAccountAgent.create](./values.yaml#L1300) | bool | Configures if an agent ServiceAccount should be created | `false` |
| [serviceAccountAgent.extraLabels](./values.yaml#L1332) | object | Configures extra labels for the agent ServiceAccount | `{}` | | [serviceAccountAgent.extraLabels](./values.yaml#L1308) | object | Configures extra labels for the agent ServiceAccount | `{}` |
| [serviceAccountAgent.imagePullSecretName](./values.yaml#L1334) | string | Agent ServiceAccount image pull secret | `nil` | | [serviceAccountAgent.imagePullSecretName](./values.yaml#L1310) | string | Agent ServiceAccount image pull secret | `nil` |
| [serviceAccountAgent.name](./values.yaml#L1328) | string | The name of the agent ServiceAccount to be used by access-controlled resources | `nil` | | [serviceAccountAgent.name](./values.yaml#L1304) | string | The name of the agent ServiceAccount to be used by access-controlled resources | `nil` |
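The `agent.garbageCollection.*` keys are new in this chart version. A minimal, illustrative values snippet for enabling them might look like this (the namespace name is a placeholder, not part of the chart):

```yaml
agent:
  garbageCollection:
    # Periodically delete orphaned agent pods (beta feature)
    enabled: true
    # Extra namespaces to scan, one per line, in addition to the cloud's namespace
    namespaces: |-
      jenkins-agents
    # Seconds an orphaned pod may remain untouched before deletion
    timeout: 300
```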

View File

@ -140,14 +140,6 @@ jenkins:
clouds: clouds:
- kubernetes: - kubernetes:
containerCapStr: "{{ .Values.agent.containerCap }}" containerCapStr: "{{ .Values.agent.containerCap }}"
{{- if .Values.agent.garbageCollection.enabled }}
garbageCollection:
{{- if .Values.agent.garbageCollection.namespaces }}
namespaces: |-
{{- .Values.agent.garbageCollection.namespaces | nindent 10 }}
{{- end }}
timeout: "{{ .Values.agent.garbageCollection.timeout }}"
{{- end }}
{{- if .Values.agent.jnlpregistry }} {{- if .Values.agent.jnlpregistry }}
jnlpregistry: "{{ .Values.agent.jnlpregistry }}" jnlpregistry: "{{ .Values.agent.jnlpregistry }}"
{{- end }} {{- end }}
@ -172,8 +164,6 @@ jenkins:
webSocket: true webSocket: true
{{- end }} {{- end }}
{{- end }} {{- end }}
skipTlsVerify: {{ .Values.agent.skipTlsVerify | default false}}
usageRestricted: {{ .Values.agent.usageRestricted | default false}}
maxRequestsPerHostStr: {{ .Values.agent.maxRequestsPerHostStr | quote }} maxRequestsPerHostStr: {{ .Values.agent.maxRequestsPerHostStr | quote }}
retentionTimeout: {{ .Values.agent.retentionTimeout | quote }} retentionTimeout: {{ .Values.agent.retentionTimeout | quote }}
waitForPodSec: {{ .Values.agent.waitForPodSec | quote }} waitForPodSec: {{ .Values.agent.waitForPodSec | quote }}
@ -258,8 +248,6 @@ jenkins:
webSocket: true webSocket: true
{{- end }} {{- end }}
{{- end }} {{- end }}
skipTlsVerify: {{ .Values.agent.skipTlsVerify | default false}}
usageRestricted: {{ .Values.agent.usageRestricted | default false}}
maxRequestsPerHostStr: {{ .Values.agent.maxRequestsPerHostStr | quote }} maxRequestsPerHostStr: {{ .Values.agent.maxRequestsPerHostStr | quote }}
retentionTimeout: {{ .Values.agent.retentionTimeout | quote }} retentionTimeout: {{ .Values.agent.retentionTimeout | quote }}
waitForPodSec: {{ .Values.agent.waitForPodSec | quote }} waitForPodSec: {{ .Values.agent.waitForPodSec | quote }}
@ -483,10 +471,7 @@ Returns kubernetes pod template configuration as code
nodeUsageMode: {{ quote .Values.agent.nodeUsageMode }} nodeUsageMode: {{ quote .Values.agent.nodeUsageMode }}
podRetention: {{ .Values.agent.podRetention }} podRetention: {{ .Values.agent.podRetention }}
showRawYaml: {{ .Values.agent.showRawYaml }} showRawYaml: {{ .Values.agent.showRawYaml }}
{{- $asaname := default (include "jenkins.serviceAccountAgentName" .) .Values.agent.serviceAccount -}} serviceAccount: "{{ include "jenkins.serviceAccountAgentName" . }}"
{{- if or (.Values.agent.useDefaultServiceAccount) (.Values.agent.serviceAccount) }}
serviceAccount: "{{ $asaname }}"
{{- end }}
slaveConnectTimeoutStr: "{{ .Values.agent.connectTimeout }}" slaveConnectTimeoutStr: "{{ .Values.agent.connectTimeout }}"
{{- if .Values.agent.volumes }} {{- if .Values.agent.volumes }}
volumes: volumes:

View File

@ -393,9 +393,9 @@ controller:
# Plugins will be installed during Jenkins controller start # Plugins will be installed during Jenkins controller start
# -- List of Jenkins plugins to install. If you don't want to install plugins, set it to `false` # -- List of Jenkins plugins to install. If you don't want to install plugins, set it to `false`
installPlugins: installPlugins:
- kubernetes:4285.v50ed5f624918 - kubernetes:4253.v7700d91739e5
- workflow-aggregator:600.vb_57cdd26fdd7 - workflow-aggregator:600.vb_57cdd26fdd7
- git:5.3.0 - git:5.2.2
- configuration-as-code:1836.vccda_4a_122a_a_e - configuration-as-code:1836.vccda_4a_122a_a_e
# If set to false, Jenkins will download the minimum required version of all dependencies. # If set to false, Jenkins will download the minimum required version of all dependencies.
@ -906,14 +906,6 @@ agent:
# -- The name of the pod template to use for providing default values # -- The name of the pod template to use for providing default values
defaultsProviderTemplate: "" defaultsProviderTemplate: ""
# Useful for not including a serviceAccount in the template if `false`
# -- Use `serviceAccountAgent.name` as the default value for defaults template `serviceAccount`
useDefaultServiceAccount: true
# -- Override the default service account
# @default -- `serviceAccountAgent.name` if `agent.useDefaultServiceAccount` is `true`
serviceAccount:
# For connecting to the Jenkins controller # For connecting to the Jenkins controller
# -- Overrides the Kubernetes Jenkins URL # -- Overrides the Kubernetes Jenkins URL
jenkinsUrl: jenkinsUrl:
@ -921,10 +913,6 @@ agent:
# connects to the specified host and port, instead of connecting directly to the Jenkins controller # connects to the specified host and port, instead of connecting directly to the Jenkins controller
# -- Overrides the Kubernetes Jenkins tunnel # -- Overrides the Kubernetes Jenkins tunnel
jenkinsTunnel: jenkinsTunnel:
# -- Disables the verification of the controller certificate on remote connection. This flag correspond to the "Disable https certificate check" flag in kubernetes plugin UI
skipTlsVerify: false
# -- Enable the possibility to restrict the usage of this agent to specific folder. This flag correspond to the "Restrict pipeline support to authorized folders" flag in kubernetes plugin UI
usageRestricted: false
# -- The connection timeout in seconds for connections to Kubernetes API. The minimum value is 5 # -- The connection timeout in seconds for connections to Kubernetes API. The minimum value is 5
kubernetesConnectTimeout: 5 kubernetesConnectTimeout: 5
# -- The read timeout in seconds for connections to Kubernetes API. The minimum value is 15 # -- The read timeout in seconds for connections to Kubernetes API. The minimum value is 15
@ -945,7 +933,7 @@ agent:
# -- Repository to pull the agent jnlp image from # -- Repository to pull the agent jnlp image from
repository: "jenkins/inbound-agent" repository: "jenkins/inbound-agent"
# -- Tag of the image to pull # -- Tag of the image to pull
tag: "3261.v9c670a_4748a_9-1" tag: "3256.v88a_f6e922152-1"
# -- Configure working directory for default agent # -- Configure working directory for default agent
workingDir: "/home/jenkins/agent" workingDir: "/home/jenkins/agent"
nodeUsageMode: "NORMAL" nodeUsageMode: "NORMAL"
@ -1098,18 +1086,6 @@ agent:
# -- Agent Pod base name # -- Agent Pod base name
podName: "default" podName: "default"
# Enables garbage collection of orphan pods for this Kubernetes cloud. (beta)
garbageCollection:
# -- When enabled, Jenkins will periodically check for orphan pods that have not been touched for the given timeout period and delete them.
enabled: false
# -- Namespaces to look at for garbage collection, in addition to the default namespace defined for the cloud. One namespace per line.
namespaces: ""
# namespaces: |-
# namespaceOne
# namespaceTwo
# -- Timeout value for orphaned pods
timeout: 300
# -- Allows the Pod to remain active for reuse until the configured number of minutes has passed since the last step was executed on it # -- Allows the Pod to remain active for reuse until the configured number of minutes has passed since the last step was executed on it
idleMinutes: 0 idleMinutes: 0

View File

@ -183,15 +183,14 @@ jenkins:
agent: agent:
image: image:
repository: public.ecr.aws/zero-downtime/jenkins-podman repository: public.ecr.aws/zero-downtime/jenkins-podman
tag: v0.6.2 tag: v0.6.0
#alwaysPullImage: true #alwaysPullImage: true
podRetention: "Default" podRetention: "Default"
showRawYaml: false showRawYaml: false
podName: "podman-aws" podName: "podman-aws"
defaultsProviderTemplate: "podman-aws" defaultsProviderTemplate: "podman-aws"
annotations: annotations:
container.apparmor.security.beta.kubernetes.io/jnlp: "unconfined" container.apparmor.security.beta.kubernetes.io/jnlp: unconfined
cluster-autoscaler.kubernetes.io/safe-to-evict: "false"
customJenkinsLabels: customJenkinsLabels:
- podman-aws-trivy - podman-aws-trivy
idleMinutes: 30 idleMinutes: 30
@ -225,8 +224,8 @@ jenkins:
- name: jnlp - name: jnlp
resources: resources:
requests: requests:
cpu: "200m" cpu: "512m"
memory: "512Mi" memory: "1024Mi"
limits: limits:
cpu: "4" cpu: "4"
memory: "6144Mi" memory: "6144Mi"

View File

@ -2,7 +2,7 @@ apiVersion: v2
name: kubezero-istio-gateway name: kubezero-istio-gateway
description: KubeZero Umbrella Chart for Istio gateways description: KubeZero Umbrella Chart for Istio gateways
type: application type: application
version: 0.22.3-1 version: 0.22.3
home: https://kubezero.com home: https://kubezero.com
icon: https://cdn.zero-downtime.net/assets/kubezero/logo-small-64.png icon: https://cdn.zero-downtime.net/assets/kubezero/logo-small-64.png
keywords: keywords:
@ -19,4 +19,4 @@ dependencies:
- name: gateway - name: gateway
version: 1.22.3 version: 1.22.3
repository: https://istio-release.storage.googleapis.com/charts repository: https://istio-release.storage.googleapis.com/charts
kubeVersion: ">= 1.26.0-0" kubeVersion: ">= 1.26.0"

View File

@ -1,6 +1,6 @@
# kubezero-istio-gateway # kubezero-istio-gateway
![Version: 0.22.3-1](https://img.shields.io/badge/Version-0.22.3--1-informational?style=flat-square) ![Type: application](https://img.shields.io/badge/Type-application-informational?style=flat-square) ![Version: 0.22.3](https://img.shields.io/badge/Version-0.22.3-informational?style=flat-square) ![Type: application](https://img.shields.io/badge/Type-application-informational?style=flat-square)
KubeZero Umbrella Chart for Istio gateways KubeZero Umbrella Chart for Istio gateways
@ -16,7 +16,7 @@ Installs Istio Ingress Gateways, requires kubezero-istio to be installed !
## Requirements ## Requirements
Kubernetes: `>= 1.26.0-0` Kubernetes: `>= 1.26.0`
| Repository | Name | Version | | Repository | Name | Version |
|------------|------|---------| |------------|------|---------|
@ -33,6 +33,7 @@ Kubernetes: `>= 1.26.0-0`
| gateway.autoscaling.minReplicas | int | `1` | | | gateway.autoscaling.minReplicas | int | `1` | |
| gateway.autoscaling.targetCPUUtilizationPercentage | int | `80` | | | gateway.autoscaling.targetCPUUtilizationPercentage | int | `80` | |
| gateway.podAnnotations."proxy.istio.io/config" | string | `"{ \"terminationDrainDuration\": \"20s\" }"` | | | gateway.podAnnotations."proxy.istio.io/config" | string | `"{ \"terminationDrainDuration\": \"20s\" }"` | |
| gateway.priorityClassName | string | `"system-cluster-critical"` | |
| gateway.replicaCount | int | `1` | | | gateway.replicaCount | int | `1` | |
| gateway.resources.limits.memory | string | `"512Mi"` | | | gateway.resources.limits.memory | string | `"512Mi"` | |
| gateway.resources.requests.cpu | string | `"50m"` | | | gateway.resources.requests.cpu | string | `"50m"` | |

View File

@ -19,5 +19,6 @@ Installs Istio Ingress Gateways, requires kubezero-istio to be installed !
## Resources ## Resources
- https://github.com/cilium/cilium/blob/main/operator/pkg/model/translation/envoy_listener.go#L134 - https://istio.io/latest/docs/reference/config/istio.operator.v1alpha1/#IstioOperatorSpec
- https://github.com/istio/istio/blob/master/manifests/profiles/default.yaml
- https://istio.io/latest/docs/setup/install/standalone-operator/

View File

@ -8,6 +8,7 @@ gateway:
replicaCount: 1 replicaCount: 1
terminationGracePeriodSeconds: 120 terminationGracePeriodSeconds: 120
priorityClassName: system-cluster-critical
resources: resources:
requests: requests:

View File

@ -2,7 +2,7 @@ apiVersion: v2
name: kubezero-istio name: kubezero-istio
description: KubeZero Umbrella Chart for Istio description: KubeZero Umbrella Chart for Istio
type: application type: application
version: 0.22.3-1 version: 0.22.3
home: https://kubezero.com home: https://kubezero.com
icon: https://cdn.zero-downtime.net/assets/kubezero/logo-small-64.png icon: https://cdn.zero-downtime.net/assets/kubezero/logo-small-64.png
keywords: keywords:
@ -22,7 +22,7 @@ dependencies:
version: 1.22.3 version: 1.22.3
repository: https://istio-release.storage.googleapis.com/charts repository: https://istio-release.storage.googleapis.com/charts
- name: kiali-server - name: kiali-server
version: "1.88.0" version: "1.87.0"
repository: https://kiali.org/helm-charts repository: https://kiali.org/helm-charts
condition: kiali-server.enabled condition: kiali-server.enabled
kubeVersion: ">= 1.26.0-0" kubeVersion: ">= 1.26.0"

View File

@ -1,6 +1,6 @@
# kubezero-istio # kubezero-istio
![Version: 0.22.3-1](https://img.shields.io/badge/Version-0.22.3--1-informational?style=flat-square) ![Type: application](https://img.shields.io/badge/Type-application-informational?style=flat-square) ![Version: 0.22.3](https://img.shields.io/badge/Version-0.22.3-informational?style=flat-square) ![Type: application](https://img.shields.io/badge/Type-application-informational?style=flat-square)
KubeZero Umbrella Chart for Istio KubeZero Umbrella Chart for Istio
@ -16,14 +16,14 @@ Installs the Istio control plane
## Requirements ## Requirements
Kubernetes: `>= 1.26.0-0` Kubernetes: `>= 1.26.0`
| Repository | Name | Version | | Repository | Name | Version |
|------------|------|---------| |------------|------|---------|
| https://cdn.zero-downtime.net/charts/ | kubezero-lib | >= 0.1.6 | | https://cdn.zero-downtime.net/charts/ | kubezero-lib | >= 0.1.6 |
| https://istio-release.storage.googleapis.com/charts | base | 1.22.3 | | https://istio-release.storage.googleapis.com/charts | base | 1.22.3 |
| https://istio-release.storage.googleapis.com/charts | istiod | 1.22.3 | | https://istio-release.storage.googleapis.com/charts | istiod | 1.22.3 |
| https://kiali.org/helm-charts | kiali-server | 1.88.0 | | https://kiali.org/helm-charts | kiali-server | 1.87.0 |
## Values ## Values
@ -31,15 +31,19 @@ Kubernetes: `>= 1.26.0-0`
|-----|------|---------|-------------| |-----|------|---------|-------------|
| global.defaultPodDisruptionBudget.enabled | bool | `false` | | | global.defaultPodDisruptionBudget.enabled | bool | `false` | |
| global.logAsJson | bool | `true` | | | global.logAsJson | bool | `true` | |
| global.priorityClassName | string | `"system-cluster-critical"` | |
| global.variant | string | `"distroless"` | | | global.variant | string | `"distroless"` | |
| istiod.meshConfig.accessLogEncoding | string | `"JSON"` | | | istiod.meshConfig.accessLogEncoding | string | `"JSON"` | |
| istiod.meshConfig.accessLogFile | string | `"/dev/stdout"` | | | istiod.meshConfig.accessLogFile | string | `"/dev/stdout"` | |
| istiod.meshConfig.tcpKeepalive.interval | string | `"60s"` | | | istiod.meshConfig.tcpKeepalive.interval | string | `"60s"` | |
| istiod.meshConfig.tcpKeepalive.time | string | `"120s"` | | | istiod.meshConfig.tcpKeepalive.time | string | `"120s"` | |
| istiod.pilot.autoscaleEnabled | bool | `false` | | | istiod.pilot.autoscaleEnabled | bool | `false` | |
| istiod.pilot.nodeSelector."node-role.kubernetes.io/control-plane" | string | `""` | |
| istiod.pilot.replicaCount | int | `1` | | | istiod.pilot.replicaCount | int | `1` | |
| istiod.pilot.resources.requests.cpu | string | `"100m"` | | | istiod.pilot.resources.requests.cpu | string | `"100m"` | |
| istiod.pilot.resources.requests.memory | string | `"128Mi"` | | | istiod.pilot.resources.requests.memory | string | `"128Mi"` | |
| istiod.pilot.tolerations[0].effect | string | `"NoSchedule"` | |
| istiod.pilot.tolerations[0].key | string | `"node-role.kubernetes.io/control-plane"` | |
| istiod.telemetry.enabled | bool | `false` | | | istiod.telemetry.enabled | bool | `false` | |
| kiali-server.auth.strategy | string | `"anonymous"` | | | kiali-server.auth.strategy | string | `"anonymous"` | |
| kiali-server.deployment.ingress_enabled | bool | `false` | | | kiali-server.deployment.ingress_enabled | bool | `false` | |

View File

@ -6,11 +6,19 @@ global:
defaultPodDisruptionBudget: defaultPodDisruptionBudget:
enabled: false enabled: false
priorityClassName: "system-cluster-critical"
istiod: istiod:
pilot: pilot:
autoscaleEnabled: false autoscaleEnabled: false
replicaCount: 1 replicaCount: 1
nodeSelector:
node-role.kubernetes.io/control-plane: ""
tolerations:
- key: node-role.kubernetes.io/control-plane
effect: NoSchedule
resources: resources:
requests: requests:
cpu: 100m cpu: 100m

View File

@ -1,15 +0,0 @@
{{- if or .Values.redis.metrics.enabled ( index .Values "redis-cluster" "metrics" "enabled") }}
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ printf "%s-%s" (include "kubezero-lib.fullname" $) "grafana-dashboards" | trunc 63 | trimSuffix "-" }}
namespace: {{ .Release.Namespace }}
labels:
grafana_dashboard: "1"
{{- include "kubezero-lib.labels" . | nindent 4 }}
binaryData:
redis.json.gz:
(base64-encoded gzipped Grafana dashboard payload omitted)
redis-cluster.json.gz:
(base64-encoded gzipped Grafana dashboard payload omitted)
{{- end }}

View File

@ -90,14 +90,7 @@ Kubernetes: `>= 1.26.0`
| fluent-bit.serviceMonitor.selector.release | string | `"metrics"` | | | fluent-bit.serviceMonitor.selector.release | string | `"metrics"` | |
| fluent-bit.testFramework.enabled | bool | `false` | | | fluent-bit.testFramework.enabled | bool | `false` | |
| fluent-bit.tolerations[0].effect | string | `"NoSchedule"` | | | fluent-bit.tolerations[0].effect | string | `"NoSchedule"` | |
| fluent-bit.tolerations[0].key | string | `"kubezero-workergroup"` | |
| fluent-bit.tolerations[0].operator | string | `"Exists"` | | | fluent-bit.tolerations[0].operator | string | `"Exists"` | |
| fluent-bit.tolerations[1].effect | string | `"NoSchedule"` | |
| fluent-bit.tolerations[1].key | string | `"nvidia.com/gpu"` | |
| fluent-bit.tolerations[1].operator | string | `"Exists"` | |
| fluent-bit.tolerations[2].effect | string | `"NoSchedule"` | |
| fluent-bit.tolerations[2].key | string | `"aws.amazon.com/neuron"` | |
| fluent-bit.tolerations[2].operator | string | `"Exists"` | |
| fluentd.configMapConfigs[0] | string | `"fluentd-prometheus-conf"` | | | fluentd.configMapConfigs[0] | string | `"fluentd-prometheus-conf"` | |
| fluentd.dashboards.enabled | bool | `false` | | | fluentd.dashboards.enabled | bool | `false` | |
| fluentd.enabled | bool | `false` | | | fluentd.enabled | bool | `false` | |

View File

@ -6,9 +6,10 @@ metadata:
labels: labels:
common.k8s.elastic.co/type: elasticsearch common.k8s.elastic.co/type: elasticsearch
elasticsearch.k8s.elastic.co/cluster-name: {{ template "kubezero-lib.fullname" $ }} elasticsearch.k8s.elastic.co/cluster-name: {{ template "kubezero-lib.fullname" $ }}
{{ include "kubezero-lib.labels" . | nindent 4 }}
name: {{ template "kubezero-lib.fullname" $ }}-es-elastic-user name: {{ template "kubezero-lib.fullname" $ }}-es-elastic-user
namespace: {{ .Release.Namespace }} namespace: {{ .Release.Namespace }}
labels:
{{ include "kubezero-lib.labels" . | indent 4 }}
data: data:
elastic: {{ .Values.elastic_password | b64enc | quote }} elastic: {{ .Values.elastic_password | b64enc | quote }}
--- ---
@ -19,9 +20,10 @@ metadata:
labels: labels:
common.k8s.elastic.co/type: elasticsearch common.k8s.elastic.co/type: elasticsearch
elasticsearch.k8s.elastic.co/cluster-name: {{ template "kubezero-lib.fullname" $ }} elasticsearch.k8s.elastic.co/cluster-name: {{ template "kubezero-lib.fullname" $ }}
{{ include "kubezero-lib.labels" . | nindent 4 }}
name: {{ template "kubezero-lib.fullname" $ }}-es-elastic-username name: {{ template "kubezero-lib.fullname" $ }}-es-elastic-username
namespace: {{ .Release.Namespace }} namespace: {{ .Release.Namespace }}
labels:
{{ include "kubezero-lib.labels" . | indent 4 }}
data: data:
username: {{ "elastic" | b64enc | quote }} username: {{ "elastic" | b64enc | quote }}
{{- end }} {{- end }}

View File

@ -240,8 +240,6 @@ fluent-bit:
#dnsPolicy: ClusterFirstWithHostNet #dnsPolicy: ClusterFirstWithHostNet
tolerations: tolerations:
- key: node-role.kubernetes.io/control-plane
effect: NoSchedule
- key: kubezero-workergroup - key: kubezero-workergroup
effect: NoSchedule effect: NoSchedule
operator: Exists operator: Exists

View File

@ -2,7 +2,7 @@ apiVersion: v2
name: kubezero-mq name: kubezero-mq
description: KubeZero umbrella chart for MQ systems like NATS, RabbitMQ description: KubeZero umbrella chart for MQ systems like NATS, RabbitMQ
type: application type: application
version: 0.3.10 version: 0.3.8
home: https://kubezero.com home: https://kubezero.com
icon: https://cdn.zero-downtime.net/assets/kubezero/logo-small-64.png icon: https://cdn.zero-downtime.net/assets/kubezero/logo-small-64.png
keywords: keywords:
@ -18,15 +18,15 @@ dependencies:
version: ">= 0.1.6" version: ">= 0.1.6"
repository: https://cdn.zero-downtime.net/charts/ repository: https://cdn.zero-downtime.net/charts/
- name: nats - name: nats
version: 1.2.2 version: 0.8.4
repository: https://nats-io.github.io/k8s/helm/charts/ #repository: https://nats-io.github.io/k8s/helm/charts/
condition: nats.enabled condition: nats.enabled
- name: rabbitmq - name: rabbitmq
version: 14.6.6 version: 12.5.7
repository: https://charts.bitnami.com/bitnami repository: https://charts.bitnami.com/bitnami
condition: rabbitmq.enabled condition: rabbitmq.enabled
- name: rabbitmq-cluster-operator - name: rabbitmq-cluster-operator
version: 4.3.19 version: 3.10.7
repository: https://charts.bitnami.com/bitnami repository: https://charts.bitnami.com/bitnami
condition: rabbitmq-cluster-operator.enabled condition: rabbitmq-cluster-operator.enabled
kubeVersion: ">= 1.26.0" kubeVersion: ">= 1.25.0"

View File

@ -1,6 +1,6 @@
# kubezero-mq # kubezero-mq
![Version: 0.3.10](https://img.shields.io/badge/Version-0.3.10-informational?style=flat-square) ![Type: application](https://img.shields.io/badge/Type-application-informational?style=flat-square) ![Version: 0.3.5](https://img.shields.io/badge/Version-0.3.5-informational?style=flat-square) ![Type: application](https://img.shields.io/badge/Type-application-informational?style=flat-square)
KubeZero umbrella chart for MQ systems like NATS, RabbitMQ KubeZero umbrella chart for MQ systems like NATS, RabbitMQ
@ -14,14 +14,14 @@ KubeZero umbrella chart for MQ systems like NATS, RabbitMQ
## Requirements ## Requirements
Kubernetes: `>= 1.25.0` Kubernetes: `>= 1.20.0`
| Repository | Name | Version | | Repository | Name | Version |
|------------|------|---------| |------------|------|---------|
| | nats | 0.8.4 |
| https://cdn.zero-downtime.net/charts/ | kubezero-lib | >= 0.1.6 | | https://cdn.zero-downtime.net/charts/ | kubezero-lib | >= 0.1.6 |
| https://charts.bitnami.com/bitnami | rabbitmq | 14.6.6 | | https://charts.bitnami.com/bitnami | rabbitmq | 11.3.2 |
| https://charts.bitnami.com/bitnami | rabbitmq-cluster-operator | 4.3.19 | | https://charts.bitnami.com/bitnami | rabbitmq-cluster-operator | 3.1.4 |
| https://nats-io.github.io/k8s/helm/charts/ | nats | 1.2.2 |
## Values ## Values
@ -64,7 +64,7 @@ Kubernetes: `>= 1.25.0`
| rabbitmq.podAntiAffinityPreset | string | `""` | | | rabbitmq.podAntiAffinityPreset | string | `""` | |
| rabbitmq.replicaCount | int | `1` | | | rabbitmq.replicaCount | int | `1` | |
| rabbitmq.resources.requests.cpu | string | `"100m"` | | | rabbitmq.resources.requests.cpu | string | `"100m"` | |
| rabbitmq.resources.requests.memory | string | `"512Mi"` | | | rabbitmq.resources.requests.memory | string | `"256Mi"` | |
| rabbitmq.topologySpreadConstraints | string | `"- maxSkew: 1\n topologyKey: topology.kubernetes.io/zone\n whenUnsatisfiable: DoNotSchedule\n labelSelector:\n matchLabels: {{- include \"common.labels.matchLabels\" . | nindent 6 }}\n- maxSkew: 1\n topologyKey: kubernetes.io/hostname\n whenUnsatisfiable: DoNotSchedule\n labelSelector:\n matchLabels: {{- include \"common.labels.matchLabels\" . | nindent 6 }}"` | | | rabbitmq.topologySpreadConstraints | string | `"- maxSkew: 1\n topologyKey: topology.kubernetes.io/zone\n whenUnsatisfiable: DoNotSchedule\n labelSelector:\n matchLabels: {{- include \"common.labels.matchLabels\" . | nindent 6 }}\n- maxSkew: 1\n topologyKey: kubernetes.io/hostname\n whenUnsatisfiable: DoNotSchedule\n labelSelector:\n matchLabels: {{- include \"common.labels.matchLabels\" . | nindent 6 }}"` | |
| rabbitmq.ulimitNofiles | string | `""` | | | rabbitmq.ulimitNofiles | string | `""` | |

View File

@ -0,0 +1,22 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*~
# Various IDEs
.project
.idea/
*.tmproj
.vscode/

View File

@ -0,0 +1,19 @@
apiVersion: v2
appVersion: 2.3.2
description: A Helm chart for the NATS.io High Speed Cloud Native Distributed Communications
Technology.
home: http://github.com/nats-io/k8s
icon: https://nats.io/img/nats-icon-color.png
keywords:
- nats
- messaging
- cncf
maintainers:
- email: wally@nats.io
name: Waldemar Quevedo
- email: colin@nats.io
name: Colin Sullivan
- email: jaime@nats.io
name: Jaime Piña
name: nats
version: 0.8.4

View File

@ -0,0 +1,596 @@
# NATS Server
[NATS](https://nats.io) is a simple, secure and performant communications system for digital systems, services and devices. NATS is part of the Cloud Native Computing Foundation ([CNCF](https://cncf.io)). NATS has over [30 client language implementations](https://nats.io/download/), and its server can run on-premise, in the cloud, at the edge, and even on a Raspberry Pi. NATS can secure and simplify design and operation of modern distributed systems.
## TL;DR;
```console
helm repo add nats https://nats-io.github.io/k8s/helm/charts/
helm install my-nats nats/nats
```
## Configuration
### Server Image
```yaml
nats:
image: nats:2.1.7-alpine3.11
pullPolicy: IfNotPresent
```
### Limits
```yaml
nats:
# The number of connect attempts against discovered routes.
connectRetries: 30
# How many seconds should pass before sending a PING
# to a client that has no activity.
pingInterval:
# Server settings.
limits:
maxConnections:
maxSubscriptions:
maxControlLine:
maxPayload:
writeDeadline:
maxPending:
maxPings:
lameDuckDuration:
# Number of seconds to wait for client connections to end after the pod termination is requested
terminationGracePeriodSeconds: 60
```
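These keys are rendered straight into `nats.conf` (as `max_connections`, `max_payload`, and so on). A filled-in sketch, with hypothetical values rather than chart defaults:
```yaml
nats:
  limits:
    maxConnections: 1000    # becomes max_connections in nats.conf
    maxSubscriptions: 1000  # becomes max_subscriptions
    maxPayload: 65536       # bytes; becomes max_payload
    writeDeadline: "10s"    # becomes write_deadline
    lameDuckDuration: "30s" # becomes lame_duck_duration
  terminationGracePeriodSeconds: 60
```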
### Logging
*Note*: It is not recommended to enable trace or debug in production since enabling it will significantly degrade performance.
```yaml
nats:
logging:
debug:
trace:
logtime:
connectErrorReports:
reconnectErrorReports:
```
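A filled-in sketch for a temporary debugging session (again, the values are illustrative):
```yaml
nats:
  logging:
    debug: true             # verbose; do not leave enabled in production
    trace: false
    logtime: true
    connectErrorReports: 3  # report connect errors every 3 attempts
    reconnectErrorReports: 3
```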
### TLS setup for client connections
You can find more on how to set up and troubleshoot TLS connections at:
https://docs.nats.io/nats-server/configuration/securing_nats/tls
```yaml
nats:
tls:
secret:
name: nats-client-tls
ca: "ca.crt"
cert: "tls.crt"
key: "tls.key"
```
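The chart only references an existing Secret named `nats-client-tls`; it does not create one. One way to provide it is a plain Secret carrying the PEM files (a sketch, assuming the certificates already exist; cert-manager or `kubectl create secret generic` work just as well):
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: nats-client-tls
type: Opaque
stringData:
  ca.crt: |
    -----BEGIN CERTIFICATE-----
    ...CA certificate PEM...
    -----END CERTIFICATE-----
  tls.crt: |
    -----BEGIN CERTIFICATE-----
    ...client/server certificate PEM...
    -----END CERTIFICATE-----
  tls.key: |
    -----BEGIN PRIVATE KEY-----
    ...private key PEM...
    -----END PRIVATE KEY-----
```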
## Clustering
If clustering is enabled, then a 3-node cluster will be set up. More info at:
https://docs.nats.io/nats-server/configuration/clustering#nats-server-clustering
```yaml
cluster:
enabled: true
replicas: 3
tls:
secret:
name: nats-server-tls
ca: "ca.crt"
cert: "tls.crt"
key: "tls.key"
```
Example:
```sh
$ helm install nats nats/nats --set cluster.enabled=true
```
## Leafnodes
Leafnode connections can be used to extend a cluster. More info at:
https://docs.nats.io/nats-server/configuration/leafnodes
```yaml
leafnodes:
enabled: true
remotes:
- url: "tls://connect.ngs.global:7422"
# credentials:
# secret:
# name: leafnode-creds
# key: TA.creds
# tls:
# secret:
# name: nats-leafnode-tls
# ca: "ca.crt"
# cert: "tls.crt"
# key: "tls.key"
#######################
# #
# TLS Configuration #
# #
#######################
#
# # You can find more on how to set up and troubleshoot TLS connections at:
#
# # https://docs.nats.io/nats-server/configuration/securing_nats/tls
#
tls:
secret:
name: nats-client-tls
ca: "ca.crt"
cert: "tls.crt"
key: "tls.key"
```
## Setting up External Access
### Using HostPorts
If both external access and advertisements are enabled, an
initializer container is used to gather the public IPs. This
container requires enough RBAC policy to look up the public IP
of the node where it is running.
For example, to setup external access for a cluster and advertise the public ip to clients:
```yaml
nats:
# Toggle whether to enable external access.
# This binds a host port for clients, gateways and leafnodes.
externalAccess: true
# Toggle to disable client advertisements (connect_urls),
# in case of running behind a load balancer (which is not recommended)
# it might be required to disable advertisements.
advertise: true
# In case both external access and advertise are enabled
# then a service account would be required to be able to
# gather the public ip from a node.
serviceAccount: "nats-server"
```
Here the service account named `nats-server` needs, for example, the following RBAC policy:
```yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: nats-server
namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: nats-server
rules:
- apiGroups: [""]
resources:
- nodes
verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: nats-server-binding
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: nats-server
subjects:
- kind: ServiceAccount
name: nats-server
namespace: default
```
The container image of the initializer can be customized via:
```yaml
bootconfig:
image: natsio/nats-boot-config:latest
pullPolicy: IfNotPresent
```
### Using LoadBalancers
When using a load balancer for external access, it is recommended to set
`noAdvertise: true` so that the internal IPs of the NATS servers are not
advertised to clients connecting through the load balancer.
```yaml
nats:
image: nats:alpine
cluster:
enabled: true
noAdvertise: true
leafnodes:
enabled: true
noAdvertise: true
natsbox:
enabled: true
```
You could then use an L4-enabled load balancer to connect to NATS, for example:
```yaml
apiVersion: v1
kind: Service
metadata:
name: nats-lb
spec:
type: LoadBalancer
selector:
app.kubernetes.io/name: nats
ports:
- protocol: TCP
port: 4222
targetPort: 4222
name: nats
- protocol: TCP
port: 7422
targetPort: 7422
name: leafnodes
- protocol: TCP
port: 7522
targetPort: 7522
name: gateways
```
## Gateways
A super cluster can be formed by pointing to remote gateways.
You can find more about gateways in the NATS documentation:
https://docs.nats.io/nats-server/configuration/gateways
```yaml
gateway:
enabled: false
name: 'default'
#############################
# #
# List of remote gateways #
# #
#############################
# gateways:
# - name: other
# url: nats://my-gateway-url:7522
#######################
# #
# TLS Configuration #
# #
#######################
#
# # You can find more on how to set up and troubleshoot TLS connections at:
#
# # https://docs.nats.io/nats-server/configuration/securing_nats/tls
#
# tls:
# secret:
# name: nats-client-tls
# ca: "ca.crt"
# cert: "tls.crt"
# key: "tls.key"
```
## Auth setup
### Auth with a Memory Resolver
```yaml
auth:
enabled: true
# Reference to the Operator JWT.
operatorjwt:
configMap:
name: operator-jwt
key: KO.jwt
# Public key of the System Account
systemAccount:
resolver:
############################
# #
# Memory resolver settings #
# #
##############################
type: memory
#
# Use a configmap reference which will be mounted
# into the container.
#
configMap:
name: nats-accounts
key: resolver.conf
```
### Auth using an Account Server Resolver
```yaml
auth:
enabled: true
# Reference to the Operator JWT.
operatorjwt:
configMap:
name: operator-jwt
key: KO.jwt
# Public key of the System Account
systemAccount:
resolver:
##########################
# #
# URL resolver settings #
# #
##########################
type: URL
url: "http://nats-account-server:9090/jwt/v1/accounts/"
```
## JetStream
### Setting up Memory and File Storage
```yaml
nats:
image: nats:alpine
jetstream:
enabled: true
memStorage:
enabled: true
size: 2Gi
fileStorage:
enabled: true
size: 1Gi
storageDirectory: /data/
storageClassName: default
```
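When `fileStorage` is enabled without an `existingClaim`, the StatefulSet declares a `volumeClaimTemplates` entry, so every pod gets its own claim. For a release named `my-nats`, the claim for pod 0 would come out roughly as follows (a sketch; the name follows the usual `<template-name>-<pod-name>` StatefulSet convention):
```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: my-nats-js-pvc-my-nats-0  # <fullname>-js-pvc suffixed with the pod name
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: "default"
  resources:
    requests:
      storage: 1Gi
```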
### Using with an existing PersistentVolumeClaim
For example, given the following `PersistentVolumeClaim`:
```yaml
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: nats-js-disk
annotations:
volume.beta.kubernetes.io/storage-class: "default"
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 3Gi
```
You can start JetStream so that one pod is bound to it:
```yaml
nats:
image: nats:alpine
jetstream:
enabled: true
fileStorage:
enabled: true
storageDirectory: /data/
existingClaim: nats-js-disk
claimStorageSize: 3Gi
```
### Clustering example
```yaml
nats:
image: nats:alpine
jetstream:
enabled: true
memStorage:
enabled: true
size: "2Gi"
fileStorage:
enabled: true
size: "1Gi"
storageDirectory: /data/
storageClassName: default
cluster:
enabled: true
# Cluster name is required, by default will be release name.
# name: "nats"
replicas: 3
```
## Misc
### NATS Box
A lightweight container with NATS and NATS Streaming utilities that is deployed alongside the cluster to confirm the setup.
You can find the image at: https://github.com/nats-io/nats-box
```yaml
natsbox:
enabled: true
image: nats:alpine
pullPolicy: IfNotPresent
# credentials:
# secret:
# name: nats-sys-creds
# key: sys.creds
```
### Configuration Reload sidecar
The NATS config reloader image to use:
```yaml
reloader:
enabled: true
image: natsio/nats-server-config-reloader:latest
pullPolicy: IfNotPresent
```
### Prometheus Exporter sidecar
You can toggle whether to start the sidecar that can be used to feed metrics to Prometheus:
```yaml
exporter:
enabled: true
image: natsio/prometheus-nats-exporter:latest
pullPolicy: IfNotPresent
```
### Prometheus operator ServiceMonitor support
You can enable the Prometheus Operator ServiceMonitor:
```yaml
exporter:
# You have to enable exporter first
enabled: true
serviceMonitor:
enabled: true
## Specify the namespace where Prometheus Operator is running
# namespace: monitoring
# ...
```
### Pod Customizations
#### Security Context
```yaml
# Toggle whether to set up a Pod Security Context
# ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/
securityContext:
fsGroup: 1000
runAsUser: 1000
runAsNonRoot: true
```
#### Affinity
<https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity>
`matchExpressions` must be configured according to your setup
```yaml
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: node.kubernetes.io/purpose
operator: In
values:
- nats
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: app
operator: In
values:
- nats
- stan
topologyKey: "kubernetes.io/hostname"
```
#### Service topology
[Service topology](https://kubernetes.io/docs/concepts/services-networking/service-topology/) is disabled by default, but can be enabled by setting `topologyKeys`. For example:
```yaml
topologyKeys:
- "kubernetes.io/hostname"
- "topology.kubernetes.io/zone"
- "topology.kubernetes.io/region"
```
#### CPU/Memory Resource Requests/Limits
Sets the pods' CPU/memory requests and limits:
```yaml
nats:
resources:
requests:
cpu: 2
memory: 4Gi
limits:
cpu: 4
memory: 6Gi
```
No resources are set by default.
#### Annotations
<https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations>
```yaml
podAnnotations:
  key1: "value1"
  key2: "value2"
```
### Name Overrides
You can change the name of the resources as needed with:
```yaml
nameOverride: "my-nats"
```
### Image Pull Secrets
```yaml
imagePullSecrets:
- name: myRegistry
```
Adds this to the StatefulSet:
```yaml
spec:
imagePullSecrets:
- name: myRegistry
```

View File

@ -0,0 +1,21 @@
// Operator "KO"
operator: eyJ0eXAiOiJqd3QiLCJhbGciOiJlZDI1NTE5In0.eyJqdGkiOiI0U09OUjZLT05FMzNFRFhRWE5IR1JUSEg2TEhPM0dFU0xXWlJYNlNENTQ2MjQyTE80QlVRIiwiaWF0IjoxNTgzNzg1MTMyLCJpc3MiOiJPQ0RHNk9ZUFdYRlNLTEdTSFBBUkdTUllDS0xKSVFJMklORUtVVkFGMzJNVzU2VlRMTDRGV0o0SSIsIm5hbWUiOiJLTyIsInN1YiI6Ik9DREc2T1lQV1hGU0tMR1NIUEFSR1NSWUNLTEpJUUkySU5FS1VWQUYzMk1XNTZWVExMNEZXSjRJIiwidHlwZSI6Im9wZXJhdG9yIiwibmF0cyI6e319.0039eTgLj-uyYFoWB3rivGP0WyIZkb_vrrE6tnqcNgIDM59o92nw_Rvb-hrvsK30QWqwm_W8BpVZHDMEY-CiBQ
system_account: ACLZ6OSWC7BXFT4VNVBDMWUFNBIVGHTUONOXI6TCBP3QHOD34JIDSRYW
resolver: MEMORY
resolver_preload: {
// Account "A"
AA3NXTHTXOHCTPIBKEDHNAYAHJ4CO7ERCOJFYCXOXVEOPZTMW55WX32Z: eyJ0eXAiOiJqd3QiLCJhbGciOiJlZDI1NTE5In0.eyJqdGkiOiJSM0QyWUM1UVlJWk4zS0hYR1FFRTZNQTRCRVU3WkFWQk5LSElJNTNOM0tLRVRTTVZEVVRRIiwiaWF0IjoxNTgzNzg1MTMyLCJpc3MiOiJPQ0RHNk9ZUFdYRlNLTEdTSFBBUkdTUllDS0xKSVFJMklORUtVVkFGMzJNVzU2VlRMTDRGV0o0SSIsIm5hbWUiOiJBIiwic3ViIjoiQUEzTlhUSFRYT0hDVFBJQktFREhOQVlBSEo0Q083RVJDT0pGWUNYT1hWRU9QWlRNVzU1V1gzMloiLCJ0eXBlIjoiYWNjb3VudCIsIm5hdHMiOnsiZXhwb3J0cyI6W3sibmFtZSI6InRlc3QiLCJzdWJqZWN0IjoidGVzdCIsInR5cGUiOiJzZXJ2aWNlIiwicmVzcG9uc2VfdHlwZSI6IlNpbmdsZXRvbiIsInNlcnZpY2VfbGF0ZW5jeSI6eyJzYW1wbGluZyI6MTAwLCJyZXN1bHRzIjoibGF0ZW5jeS5vbi50ZXN0In19XSwibGltaXRzIjp7InN1YnMiOi0xLCJjb25uIjotMSwibGVhZiI6LTEsImltcG9ydHMiOi0xLCJleHBvcnRzIjotMSwiZGF0YSI6LTEsInBheWxvYWQiOi0xLCJ3aWxkY2FyZHMiOnRydWV9fX0.W7oEjpQA986Hai3t8UOiJwCcVDYm2sj7L545oYZhQtYbydh_ragPn8pc0f1pA1krMz_ZDuBwKHLZRgXuNSysDQ
// Account "STAN"
AAYNFTMTKWXZEPPSEZLECMHE3VBULMIUO2QGVY3P4VCI7NNQC3TVX2PB: eyJ0eXAiOiJqd3QiLCJhbGciOiJlZDI1NTE5In0.eyJqdGkiOiJRSUozV0I0MjdSVU5RSlZFM1dRVEs3TlNaVlpaNkRQT01KWkdHMlhTMzQ2WFNQTVZERElBIiwiaWF0IjoxNTgzNzg1MTMyLCJpc3MiOiJPQ0RHNk9ZUFdYRlNLTEdTSFBBUkdTUllDS0xKSVFJMklORUtVVkFGMzJNVzU2VlRMTDRGV0o0SSIsIm5hbWUiOiJTVEFOIiwic3ViIjoiQUFZTkZUTVRLV1haRVBQU0VaTEVDTUhFM1ZCVUxNSVVPMlFHVlkzUDRWQ0k3Tk5RQzNUVlgyUEIiLCJ0eXBlIjoiYWNjb3VudCIsIm5hdHMiOnsibGltaXRzIjp7InN1YnMiOi0xLCJjb25uIjotMSwibGVhZiI6LTEsImltcG9ydHMiOi0xLCJleHBvcnRzIjotMSwiZGF0YSI6LTEsInBheWxvYWQiOi0xLCJ3aWxkY2FyZHMiOnRydWV9fX0.SPyQdAFmoON577s-eZP4K3-9QXYhTn9Xqy3aDGeHvHYRE9IVD47Eu7d38ZiySPlxgkdM_WXZn241_59d07axBA
// Account "SYS"
ACLZ6OSWC7BXFT4VNVBDMWUFNBIVGHTUONOXI6TCBP3QHOD34JIDSRYW: eyJ0eXAiOiJqd3QiLCJhbGciOiJlZDI1NTE5In0.eyJqdGkiOiJGSk1TSEROVlVGUEM0U0pSRlcyV0NZT1hRWUFDM1hNNUJaWTRKQUZWUTc1V0lEUkdDN0lBIiwiaWF0IjoxNTgzNzg1MTMyLCJpc3MiOiJPQ0RHNk9ZUFdYRlNLTEdTSFBBUkdTUllDS0xKSVFJMklORUtVVkFGMzJNVzU2VlRMTDRGV0o0SSIsIm5hbWUiOiJTWVMiLCJzdWIiOiJBQ0xaNk9TV0M3QlhGVDRWTlZCRE1XVUZOQklWR0hUVU9OT1hJNlRDQlAzUUhPRDM0SklEU1JZVyIsInR5cGUiOiJhY2NvdW50IiwibmF0cyI6eyJsaW1pdHMiOnsic3VicyI6LTEsImNvbm4iOi0xLCJsZWFmIjotMSwiaW1wb3J0cyI6LTEsImV4cG9ydHMiOi0xLCJkYXRhIjotMSwicGF5bG9hZCI6LTEsIndpbGRjYXJkcyI6dHJ1ZX19fQ.owW08dIa97STqgT0ux-5sD00Ad0I3HstJKTmh1CGVpsQwelaZdrBuia-4XgCgN88zuLokPMfWI_pkxXU_iB0BA
// Account "B"
ADOR7Q5KMWC2XIWRRRC4MZUDCPYG3UMAIWDRX6M2MFDY5SR6HQAAMHJA: eyJ0eXAiOiJqd3QiLCJhbGciOiJlZDI1NTE5In0.eyJqdGkiOiJRQjdIRFg3VUZYN01KUjZPS1E2S1dRSlVUUEpWWENTNkJCWjQ3SDVVTFdVVFNRUU1NQzJRIiwiaWF0IjoxNTgzNzg1MTMyLCJpc3MiOiJPQ0RHNk9ZUFdYRlNLTEdTSFBBUkdTUllDS0xKSVFJMklORUtVVkFGMzJNVzU2VlRMTDRGV0o0SSIsIm5hbWUiOiJCIiwic3ViIjoiQURPUjdRNUtNV0MyWElXUlJSQzRNWlVEQ1BZRzNVTUFJV0RSWDZNMk1GRFk1U1I2SFFBQU1ISkEiLCJ0eXBlIjoiYWNjb3VudCIsIm5hdHMiOnsiaW1wb3J0cyI6W3sibmFtZSI6InRlc3QiLCJzdWJqZWN0IjoidGVzdCIsImFjY291bnQiOiJBQTNOWFRIVFhPSENUUElCS0VESE5BWUFISjRDTzdFUkNPSkZZQ1hPWFZFT1BaVE1XNTVXWDMyWiIsInRvIjoidGVzdCIsInR5cGUiOiJzZXJ2aWNlIn1dLCJsaW1pdHMiOnsic3VicyI6LTEsImNvbm4iOi0xLCJsZWFmIjotMSwiaW1wb3J0cyI6LTEsImV4cG9ydHMiOi0xLCJkYXRhIjotMSwicGF5bG9hZCI6LTEsIndpbGRjYXJkcyI6dHJ1ZX19fQ.r5p_sGt_hmDfWWIJGrLodAM8VfXPeUzsbRtzrMTBGGkcLdi4jqAHXRu09CmFISEzX2VKeGuOonGuAMOFotvICg
}

View File

@ -0,0 +1,24 @@
# Setup memory preload config.
auth:
enabled: true
resolver:
type: memory
preload: |
// Operator "KO"
operator: eyJ0eXAiOiJqd3QiLCJhbGciOiJlZDI1NTE5In0.eyJqdGkiOiI0U09OUjZLT05FMzNFRFhRWE5IR1JUSEg2TEhPM0dFU0xXWlJYNlNENTQ2MjQyTE80QlVRIiwiaWF0IjoxNTgzNzg1MTMyLCJpc3MiOiJPQ0RHNk9ZUFdYRlNLTEdTSFBBUkdTUllDS0xKSVFJMklORUtVVkFGMzJNVzU2VlRMTDRGV0o0SSIsIm5hbWUiOiJLTyIsInN1YiI6Ik9DREc2T1lQV1hGU0tMR1NIUEFSR1NSWUNLTEpJUUkySU5FS1VWQUYzMk1XNTZWVExMNEZXSjRJIiwidHlwZSI6Im9wZXJhdG9yIiwibmF0cyI6e319.0039eTgLj-uyYFoWB3rivGP0WyIZkb_vrrE6tnqcNgIDM59o92nw_Rvb-hrvsK30QWqwm_W8BpVZHDMEY-CiBQ
system_account: ACLZ6OSWC7BXFT4VNVBDMWUFNBIVGHTUONOXI6TCBP3QHOD34JIDSRYW
resolver_preload: {
// Account "A"
AA3NXTHTXOHCTPIBKEDHNAYAHJ4CO7ERCOJFYCXOXVEOPZTMW55WX32Z: eyJ0eXAiOiJqd3QiLCJhbGciOiJlZDI1NTE5In0.eyJqdGkiOiJSM0QyWUM1UVlJWk4zS0hYR1FFRTZNQTRCRVU3WkFWQk5LSElJNTNOM0tLRVRTTVZEVVRRIiwiaWF0IjoxNTgzNzg1MTMyLCJpc3MiOiJPQ0RHNk9ZUFdYRlNLTEdTSFBBUkdTUllDS0xKSVFJMklORUtVVkFGMzJNVzU2VlRMTDRGV0o0SSIsIm5hbWUiOiJBIiwic3ViIjoiQUEzTlhUSFRYT0hDVFBJQktFREhOQVlBSEo0Q083RVJDT0pGWUNYT1hWRU9QWlRNVzU1V1gzMloiLCJ0eXBlIjoiYWNjb3VudCIsIm5hdHMiOnsiZXhwb3J0cyI6W3sibmFtZSI6InRlc3QiLCJzdWJqZWN0IjoidGVzdCIsInR5cGUiOiJzZXJ2aWNlIiwicmVzcG9uc2VfdHlwZSI6IlNpbmdsZXRvbiIsInNlcnZpY2VfbGF0ZW5jeSI6eyJzYW1wbGluZyI6MTAwLCJyZXN1bHRzIjoibGF0ZW5jeS5vbi50ZXN0In19XSwibGltaXRzIjp7InN1YnMiOi0xLCJjb25uIjotMSwibGVhZiI6LTEsImltcG9ydHMiOi0xLCJleHBvcnRzIjotMSwiZGF0YSI6LTEsInBheWxvYWQiOi0xLCJ3aWxkY2FyZHMiOnRydWV9fX0.W7oEjpQA986Hai3t8UOiJwCcVDYm2sj7L545oYZhQtYbydh_ragPn8pc0f1pA1krMz_ZDuBwKHLZRgXuNSysDQ
// Account "STAN"
AAYNFTMTKWXZEPPSEZLECMHE3VBULMIUO2QGVY3P4VCI7NNQC3TVX2PB: eyJ0eXAiOiJqd3QiLCJhbGciOiJlZDI1NTE5In0.eyJqdGkiOiJRSUozV0I0MjdSVU5RSlZFM1dRVEs3TlNaVlpaNkRQT01KWkdHMlhTMzQ2WFNQTVZERElBIiwiaWF0IjoxNTgzNzg1MTMyLCJpc3MiOiJPQ0RHNk9ZUFdYRlNLTEdTSFBBUkdTUllDS0xKSVFJMklORUtVVkFGMzJNVzU2VlRMTDRGV0o0SSIsIm5hbWUiOiJTVEFOIiwic3ViIjoiQUFZTkZUTVRLV1haRVBQU0VaTEVDTUhFM1ZCVUxNSVVPMlFHVlkzUDRWQ0k3Tk5RQzNUVlgyUEIiLCJ0eXBlIjoiYWNjb3VudCIsIm5hdHMiOnsibGltaXRzIjp7InN1YnMiOi0xLCJjb25uIjotMSwibGVhZiI6LTEsImltcG9ydHMiOi0xLCJleHBvcnRzIjotMSwiZGF0YSI6LTEsInBheWxvYWQiOi0xLCJ3aWxkY2FyZHMiOnRydWV9fX0.SPyQdAFmoON577s-eZP4K3-9QXYhTn9Xqy3aDGeHvHYRE9IVD47Eu7d38ZiySPlxgkdM_WXZn241_59d07axBA
// Account "SYS"
ACLZ6OSWC7BXFT4VNVBDMWUFNBIVGHTUONOXI6TCBP3QHOD34JIDSRYW: eyJ0eXAiOiJqd3QiLCJhbGciOiJlZDI1NTE5In0.eyJqdGkiOiJGSk1TSEROVlVGUEM0U0pSRlcyV0NZT1hRWUFDM1hNNUJaWTRKQUZWUTc1V0lEUkdDN0lBIiwiaWF0IjoxNTgzNzg1MTMyLCJpc3MiOiJPQ0RHNk9ZUFdYRlNLTEdTSFBBUkdTUllDS0xKSVFJMklORUtVVkFGMzJNVzU2VlRMTDRGV0o0SSIsIm5hbWUiOiJTWVMiLCJzdWIiOiJBQ0xaNk9TV0M3QlhGVDRWTlZCRE1XVUZOQklWR0hUVU9OT1hJNlRDQlAzUUhPRDM0SklEU1JZVyIsInR5cGUiOiJhY2NvdW50IiwibmF0cyI6eyJsaW1pdHMiOnsic3VicyI6LTEsImNvbm4iOi0xLCJsZWFmIjotMSwiaW1wb3J0cyI6LTEsImV4cG9ydHMiOi0xLCJkYXRhIjotMSwicGF5bG9hZCI6LTEsIndpbGRjYXJkcyI6dHJ1ZX19fQ.owW08dIa97STqgT0ux-5sD00Ad0I3HstJKTmh1CGVpsQwelaZdrBuia-4XgCgN88zuLokPMfWI_pkxXU_iB0BA
// Account "B"
ADOR7Q5KMWC2XIWRRRC4MZUDCPYG3UMAIWDRX6M2MFDY5SR6HQAAMHJA: eyJ0eXAiOiJqd3QiLCJhbGciOiJlZDI1NTE5In0.eyJqdGkiOiJRQjdIRFg3VUZYN01KUjZPS1E2S1dRSlVUUEpWWENTNkJCWjQ3SDVVTFdVVFNRUU1NQzJRIiwiaWF0IjoxNTgzNzg1MTMyLCJpc3MiOiJPQ0RHNk9ZUFdYRlNLTEdTSFBBUkdTUllDS0xKSVFJMklORUtVVkFGMzJNVzU2VlRMTDRGV0o0SSIsIm5hbWUiOiJCIiwic3ViIjoiQURPUjdRNUtNV0MyWElXUlJSQzRNWlVEQ1BZRzNVTUFJV0RSWDZNMk1GRFk1U1I2SFFBQU1ISkEiLCJ0eXBlIjoiYWNjb3VudCIsIm5hdHMiOnsiaW1wb3J0cyI6W3sibmFtZSI6InRlc3QiLCJzdWJqZWN0IjoidGVzdCIsImFjY291bnQiOiJBQTNOWFRIVFhPSENUUElCS0VESE5BWUFISjRDTzdFUkNPSkZZQ1hPWFZFT1BaVE1XNTVXWDMyWiIsInRvIjoidGVzdCIsInR5cGUiOiJzZXJ2aWNlIn1dLCJsaW1pdHMiOnsic3VicyI6LTEsImNvbm4iOi0xLCJsZWFmIjotMSwiaW1wb3J0cyI6LTEsImV4cG9ydHMiOi0xLCJkYXRhIjotMSwicGF5bG9hZCI6LTEsIndpbGRjYXJkcyI6dHJ1ZX19fQ.r5p_sGt_hmDfWWIJGrLodAM8VfXPeUzsbRtzrMTBGGkcLdi4jqAHXRu09CmFISEzX2VKeGuOonGuAMOFotvICg
}

View File

@ -0,0 +1,9 @@
# Setup memory preload config.
auth:
enabled: true
resolver:
type: memory
configMap:
name: nats-accounts
key: resolver.conf

View File

View File

@ -0,0 +1,9 @@
let accounts = ./accounts.conf as Text
in
''
port: 4222
${accounts}
''

View File

@ -0,0 +1,21 @@
// Operator "KO"
operator: eyJ0eXAiOiJqd3QiLCJhbGciOiJlZDI1NTE5In0.eyJqdGkiOiJKS0E2U0pKUUVOTFpYVDJEWTRWNE00UDZXUFRVUlhIQzNMU1pJWEZWRlFGV0I3U0tKVk9BIiwiaWF0IjoxNTgzODIyNjYwLCJpc3MiOiJPQkZCSEMzVTVXVVRFTVpKTzNYN0hZWTJCNjNQWUpQT0RYS0FWWUdHU0VNQTczTEtGTVg0TEYyQSIsIm5hbWUiOiJLTyIsInN1YiI6Ik9CRkJIQzNVNVdVVEVNWkpPM1g3SFlZMkI2M1BZSlBPRFhLQVZZR0dTRU1BNzNMS0ZNWDRMRjJBIiwidHlwZSI6Im9wZXJhdG9yIiwibmF0cyI6e319.60YToJe3Dz9OZES80jYXVgg7uCB1c3BsX6HglA8tsKKRe-Br3pMpn9yUPUujjB61MGqnA7Zmbx8qWnoj8CkuCw
system_account: ABL65FFQWUDHHTGMGRFVVSQDBAWHGEJ2CDRCMGBFV6SB4MLKFSUPN7GP
resolver: MEMORY
resolver_preload: {
// Account "B"
AAIJAGRSL2KCEPTRBP6DJCTAMSNOUXRILLZXIY6CTZ4GR27ISCZOP6QH: eyJ0eXAiOiJqd3QiLCJhbGciOiJlZDI1NTE5In0.eyJqdGkiOiJEVTdWV1BXQUtBSVdNNkhNUElDNE43TVRGTFEyV01JUFhFVU5aNVEzRE1YSTRKVkpLQU1BIiwiaWF0IjoxNTgzODIyNjYwLCJpc3MiOiJPQkZCSEMzVTVXVVRFTVpKTzNYN0hZWTJCNjNQWUpQT0RYS0FWWUdHU0VNQTczTEtGTVg0TEYyQSIsIm5hbWUiOiJCIiwic3ViIjoiQUFJSkFHUlNMMktDRVBUUkJQNkRKQ1RBTVNOT1VYUklMTFpYSVk2Q1RaNEdSMjdJU0NaT1A2UUgiLCJ0eXBlIjoiYWNjb3VudCIsIm5hdHMiOnsiaW1wb3J0cyI6W3sibmFtZSI6InRlc3QiLCJzdWJqZWN0IjoidGVzdCIsImFjY291bnQiOiJBQlhXNU9aV09LSzUzWDNWNUhSVkdPMlJXTlVUU1NQSU1HVDZORU9SMjNBQzRNTk1QTlFTUTZWTCIsInRvIjoidGVzdCIsInR5cGUiOiJzZXJ2aWNlIn1dLCJsaW1pdHMiOnsic3VicyI6LTEsImNvbm4iOi0xLCJsZWFmIjotMSwiaW1wb3J0cyI6LTEsImV4cG9ydHMiOi0xLCJkYXRhIjotMSwicGF5bG9hZCI6LTEsIndpbGRjYXJkcyI6dHJ1ZX19fQ.VLv3U7k8jJaIcGpDYXo0XQCYNVMNQd2PHVUOXGMvCU8ifiYpkaRJ4G0UXZHqlQl_0g3M_LEtJw0K-4HwgOeIAA
// Account "SYS"
ABL65FFQWUDHHTGMGRFVVSQDBAWHGEJ2CDRCMGBFV6SB4MLKFSUPN7GP: eyJ0eXAiOiJqd3QiLCJhbGciOiJlZDI1NTE5In0.eyJqdGkiOiJPSUpENkozTjdCVk0zSEY0M0NCTUhLMllUNlpXTlFCWkZBRzQ0VE5RSFA3SlVZT0hZR0dRIiwiaWF0IjoxNTgzODIyNjYwLCJpc3MiOiJPQkZCSEMzVTVXVVRFTVpKTzNYN0hZWTJCNjNQWUpQT0RYS0FWWUdHU0VNQTczTEtGTVg0TEYyQSIsIm5hbWUiOiJTWVMiLCJzdWIiOiJBQkw2NUZGUVdVREhIVEdNR1JGVlZTUURCQVdIR0VKMkNEUkNNR0JGVjZTQjRNTEtGU1VQTjdHUCIsInR5cGUiOiJhY2NvdW50IiwibmF0cyI6eyJsaW1pdHMiOnsic3VicyI6LTEsImNvbm4iOi0xLCJsZWFmIjotMSwiaW1wb3J0cyI6LTEsImV4cG9ydHMiOi0xLCJkYXRhIjotMSwicGF5bG9hZCI6LTEsIndpbGRjYXJkcyI6dHJ1ZX19fQ.Jei8psto5h35bFn4y1Unsk0Noh6MYJxkB8Hs-nnLuUBrkTppSwukEkM_ufNGA_lxsmPki3zBf8y6rsQ13Ec5AA
// Account "A"
ABXW5OZWOKK53X3V5HRVGO2RWNUTSSPIMGT6NEOR23AC4MNMPNQSQ6VL: eyJ0eXAiOiJqd3QiLCJhbGciOiJlZDI1NTE5In0.eyJqdGkiOiJSRFNRTE9GT0gzRUoyUUNLSkkyUEJXNkxLMllUVFJDUUdGV0tJRFJJRVRVMzZTT0RPT1FRIiwiaWF0IjoxNTgzODIyNjYwLCJpc3MiOiJPQkZCSEMzVTVXVVRFTVpKTzNYN0hZWTJCNjNQWUpQT0RYS0FWWUdHU0VNQTczTEtGTVg0TEYyQSIsIm5hbWUiOiJBIiwic3ViIjoiQUJYVzVPWldPS0s1M1gzVjVIUlZHTzJSV05VVFNTUElNR1Q2TkVPUjIzQUM0TU5NUE5RU1E2VkwiLCJ0eXBlIjoiYWNjb3VudCIsIm5hdHMiOnsiZXhwb3J0cyI6W3sibmFtZSI6InRlc3QiLCJzdWJqZWN0IjoidGVzdCIsInR5cGUiOiJzZXJ2aWNlIiwicmVzcG9uc2VfdHlwZSI6IlNpbmdsZXRvbiIsInNlcnZpY2VfbGF0ZW5jeSI6eyJzYW1wbGluZyI6MTAwLCJyZXN1bHRzIjoibGF0ZW5jeS5vbi50ZXN0In19XSwibGltaXRzIjp7InN1YnMiOi0xLCJjb25uIjotMSwibGVhZiI6LTEsImltcG9ydHMiOi0xLCJleHBvcnRzIjotMSwiZGF0YSI6LTEsInBheWxvYWQiOi0xLCJ3aWxkY2FyZHMiOnRydWV9fX0.lJfHHkbXeEf6DbHFju0zktCjWL0kgll17BdYJl6f2hcZxbUtiyf3H1mGfrzELgCuEO7p8X11UpRVy_eTQfnGAA
// Account "STAN"
ACLSVE2AZYTXOBIJXOV5XHAIIM7KLL777F7GAEWW5W5P4IAR2VZJSGID: eyJ0eXAiOiJqd3QiLCJhbGciOiJlZDI1NTE5In0.eyJqdGkiOiJJT1ZPSFBPV1hJRDI2U1JYVEJQTTVUQlVKWDJRU0FSSTJMQjJTM09aRFpMU0paS1BOVU9BIiwiaWF0IjoxNTgzODIyNjYwLCJpc3MiOiJPQkZCSEMzVTVXVVRFTVpKTzNYN0hZWTJCNjNQWUpQT0RYS0FWWUdHU0VNQTczTEtGTVg0TEYyQSIsIm5hbWUiOiJTVEFOIiwic3ViIjoiQUNMU1ZFMkFaWVRYT0JJSlhPVjVYSEFJSU03S0xMNzc3RjdHQUVXVzVXNVA0SUFSMlZaSlNHSUQiLCJ0eXBlIjoiYWNjb3VudCIsIm5hdHMiOnsibGltaXRzIjp7InN1YnMiOi0xLCJjb25uIjotMSwibGVhZiI6LTEsImltcG9ydHMiOi0xLCJleHBvcnRzIjotMSwiZGF0YSI6LTEsInBheWxvYWQiOi0xLCJ3aWxkY2FyZHMiOnRydWV9fX0.CE5_K9kAdAgxesJRiJYh3kK2f74_c7T3bNQhgfaXOMzI8X6VOWqn0_5gH9jOD0xzHXIYiUMwy7a4Ou63PizHCw
}

View File

@ -0,0 +1,26 @@
{{- if or .Values.nats.logging.debug .Values.nats.logging.trace }}
*WARNING*: Keep in mind that running the server with
debug and/or trace enabled significantly affects the
performance of the server!
{{- end }}
You can find more information about running NATS on Kubernetes
in the NATS documentation website:
https://docs.nats.io/nats-on-kubernetes/nats-kubernetes
{{- if .Values.natsbox.enabled }}
NATS Box has been deployed into your cluster, you can
now use the NATS tools within the container as follows:
kubectl exec -n {{ .Release.Namespace }} -it deployment/{{ template "nats.fullname" . }}-box -- /bin/sh -l
nats-box:~# nats-sub test &
nats-box:~# nats-pub test hi
nats-box:~# nc {{ template "nats.fullname" . }} 4222
{{- end }}
Thanks for using NATS!

View File

@ -0,0 +1,98 @@
{{/*
Expand the name of the chart.
*/}}
{{- define "nats.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- define "nats.fullname" -}}
{{- if .Values.fullnameOverride -}}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- if contains $name .Release.Name -}}
{{- .Release.Name | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- end -}}
{{- end -}}
{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "nats.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" }}
{{- end }}
{{/*
Common labels
*/}}
{{- define "nats.labels" -}}
helm.sh/chart: {{ include "nats.chart" . }}
{{- range $name, $value := .Values.commonLabels }}
{{ $name }}: {{ $value }}
{{- end }}
{{ include "nats.selectorLabels" . }}
{{- if .Chart.AppVersion }}
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
{{- end }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- end }}
{{/*
Selector labels
*/}}
{{- define "nats.selectorLabels" -}}
app.kubernetes.io/name: {{ include "nats.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end }}
{{/*
Return the cluster advertise address for the NATS pods.
*/}}
{{- define "nats.clusterAdvertise" -}}
{{- printf "$(POD_NAME).%s.$(POD_NAMESPACE).svc.%s." (include "nats.fullname" . ) $.Values.k8sClusterDomain }}
{{- end }}
{{/*
Return the NATS cluster routes.
*/}}
{{- define "nats.clusterRoutes" -}}
{{- $name := (include "nats.fullname" . ) -}}
{{- range $i, $e := until (.Values.cluster.replicas | int) -}}
{{- printf "nats://%s-%d.%s.%s.svc.%s.:6222," $name $i $name $.Release.Namespace $.Values.k8sClusterDomain -}}
{{- end -}}
{{- end }}
{{- define "nats.tlsConfig" -}}
tls {
{{- if .cert }}
cert_file: {{ .secretPath }}/{{ .secret.name }}/{{ .cert }}
{{- end }}
{{- if .key }}
key_file: {{ .secretPath }}/{{ .secret.name }}/{{ .key }}
{{- end }}
{{- if .ca }}
ca_file: {{ .secretPath }}/{{ .secret.name }}/{{ .ca }}
{{- end }}
{{- if .insecure }}
insecure: {{ .insecure }}
{{- end }}
{{- if .verify }}
verify: {{ .verify }}
{{- end }}
{{- if .verifyAndMap }}
verify_and_map: {{ .verifyAndMap }}
{{- end }}
{{- if .curvePreferences }}
curve_preferences: {{ .curvePreferences }}
{{- end }}
{{- if .timeout }}
timeout: {{ .timeout }}
{{- end }}
}
{{- end }}

View File

@ -0,0 +1,15 @@
{{- if .Values.auth.enabled }}
{{- if eq .Values.auth.resolver.type "memory" }}
---
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ template "nats.name" . }}-accounts
labels:
app: {{ template "nats.name" . }}
chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
data:
accounts.conf: |-
{{- .Files.Get "accounts.conf" | indent 6 }}
{{- end }}
{{- end }}

View File

@ -0,0 +1,398 @@
---
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ include "nats.fullname" . }}-config
namespace: {{ .Release.Namespace | quote }}
labels:
{{- include "nats.labels" . | nindent 4 }}
data:
nats.conf: |
# PID file shared with configuration reloader.
pid_file: "/var/run/nats/nats.pid"
###############
# #
# Monitoring #
# #
###############
http: 8222
server_name: $POD_NAME
{{- if .Values.nats.tls }}
#####################
# #
# TLS Configuration #
# #
#####################
{{- with .Values.nats.tls }}
{{- $nats_tls := merge (dict) . }}
{{- $_ := set $nats_tls "secretPath" "/etc/nats-certs/clients" }}
{{- include "nats.tlsConfig" $nats_tls | nindent 4}}
{{- end }}
{{- end }}
{{- if .Values.nats.jetstream.enabled }}
###################################
# #
# NATS JetStream #
# #
###################################
jetstream {
{{- if .Values.nats.jetstream.memStorage.enabled }}
max_mem: {{ .Values.nats.jetstream.memStorage.size }}
{{- end }}
{{- if .Values.nats.jetstream.fileStorage.enabled }}
store_dir: {{ .Values.nats.jetstream.fileStorage.storageDirectory }}
max_file:
{{- if .Values.nats.jetstream.fileStorage.existingClaim }}
{{- .Values.nats.jetstream.fileStorage.claimStorageSize }}
{{- else }}
{{- .Values.nats.jetstream.fileStorage.size }}
{{- end }}
{{- end }}
}
{{- end }}
{{- if .Values.mqtt.enabled }}
###################################
# #
# NATS MQTT #
# #
###################################
mqtt {
port: 1883
{{- with .Values.mqtt.tls }}
{{- $mqtt_tls := merge (dict) . }}
{{- $_ := set $mqtt_tls "secretPath" "/etc/nats-certs/mqtt" }}
{{- include "nats.tlsConfig" $mqtt_tls | nindent 6}}
{{- end }}
{{- if .Values.mqtt.noAuthUser }}
no_auth_user: {{ .Values.mqtt.noAuthUser | quote }}
{{- end }}
ack_wait: {{ .Values.mqtt.ackWait | quote }}
max_ack_pending: {{ .Values.mqtt.maxAckPending }}
}
{{- end }}
{{- if .Values.cluster.enabled }}
###################################
# #
# NATS Full Mesh Clustering Setup #
# #
###################################
cluster {
port: 6222
{{- if .Values.nats.jetstream.enabled }}
{{- if .Values.cluster.name }}
name: {{ .Values.cluster.name }}
{{- else }}
name: {{ template "nats.name" . }}
{{- end }}
{{- else }}
{{- with .Values.cluster.name }}
name: {{ . }}
{{- end }}
{{- end }}
{{- with .Values.cluster.tls }}
{{- $cluster_tls := merge (dict) . }}
{{- $_ := set $cluster_tls "secretPath" "/etc/nats-certs/cluster" }}
{{- include "nats.tlsConfig" $cluster_tls | nindent 6}}
{{- end }}
{{- if .Values.cluster.authorization }}
authorization {
{{- with .Values.cluster.authorization.user }}
user: {{ . }}
{{- end }}
{{- with .Values.cluster.authorization.password }}
password: {{ . }}
{{- end }}
{{- with .Values.cluster.authorization.timeout }}
timeout: {{ . }}
{{- end }}
}
{{- end }}
routes = [
{{ include "nats.clusterRoutes" . }}
]
cluster_advertise: $CLUSTER_ADVERTISE
{{- with .Values.cluster.noAdvertise }}
no_advertise: {{ . }}
{{- end }}
connect_retries: {{ .Values.nats.connectRetries }}
}
{{ end }}
{{- if and .Values.nats.advertise .Values.nats.externalAccess }}
include "advertise/client_advertise.conf"
{{- end }}
{{- if or .Values.leafnodes.enabled .Values.leafnodes.remotes }}
#################
# #
# NATS Leafnode #
# #
#################
leafnodes {
{{- if .Values.leafnodes.enabled }}
listen: "0.0.0.0:7422"
{{- end }}
{{ if and .Values.nats.advertise .Values.nats.externalAccess }}
include "advertise/gateway_advertise.conf"
{{ end }}
{{- with .Values.leafnodes.noAdvertise }}
no_advertise: {{ . }}
{{- end }}
{{- with .Values.leafnodes.tls }}
{{- $leafnode_tls := merge (dict) . }}
{{- $_ := set $leafnode_tls "secretPath" "/etc/nats-certs/leafnodes" }}
{{- include "nats.tlsConfig" $leafnode_tls | nindent 6}}
{{- end }}
remotes: [
{{- range .Values.leafnodes.remotes }}
{
{{- with .url }}
url: {{ . }}
{{- end }}
{{- with .credentials }}
credentials: "/etc/nats-creds/{{ .secret.name }}/{{ .secret.key }}"
{{- end }}
{{- with .tls }}
{{ $secretName := .secret.name }}
tls: {
{{- with .cert }}
cert_file: /etc/nats-certs/leafnodes/{{ $secretName }}/{{ . }}
{{- end }}
{{- with .key }}
key_file: /etc/nats-certs/leafnodes/{{ $secretName }}/{{ . }}
{{- end }}
{{- with .ca }}
ca_file: /etc/nats-certs/leafnodes/{{ $secretName }}/{{ . }}
{{- end }}
}
{{- end }}
}
{{- end }}
]
}
{{ end }}
{{- if .Values.gateway.enabled }}
#################
# #
# NATS Gateways #
# #
#################
gateway {
name: {{ .Values.gateway.name }}
port: 7522
{{ if and .Values.nats.advertise .Values.nats.externalAccess }}
include "advertise/gateway_advertise.conf"
{{ end }}
{{- with .Values.gateway.tls }}
{{- $gateway_tls := merge (dict) . }}
{{- $_ := set $gateway_tls "secretPath" "/etc/nats-certs/gateway" }}
{{- include "nats.tlsConfig" $gateway_tls | nindent 6}}
{{- end }}
# Gateways array here
gateways: [
{{- range .Values.gateway.gateways }}
{
{{- with .name }}
name: {{ . }}
{{- end }}
{{- with .url }}
url: {{ . | quote }}
{{- end }}
{{- with .urls }}
urls: [{{ join "," . }}]
{{- end }}
},
{{- end }}
]
}
{{ end }}
{{- with .Values.nats.logging.debug }}
debug: {{ . }}
{{- end }}
{{- with .Values.nats.logging.trace }}
trace: {{ . }}
{{- end }}
{{- with .Values.nats.logging.logtime }}
logtime: {{ . }}
{{- end }}
{{- with .Values.nats.logging.connectErrorReports }}
connect_error_reports: {{ . }}
{{- end }}
{{- with .Values.nats.logging.reconnectErrorReports }}
reconnect_error_reports: {{ . }}
{{- end }}
{{- with .Values.nats.limits.maxConnections }}
max_connections: {{ . }}
{{- end }}
{{- with .Values.nats.limits.maxSubscriptions }}
max_subscriptions: {{ . }}
{{- end }}
{{- with .Values.nats.limits.maxPending }}
max_pending: {{ . }}
{{- end }}
{{- with .Values.nats.limits.maxControlLine }}
max_control_line: {{ . }}
{{- end }}
{{- with .Values.nats.limits.maxPayload }}
max_payload: {{ . }}
{{- end }}
{{- with .Values.nats.pingInterval }}
ping_interval: {{ . }}
{{- end }}
{{- with .Values.nats.limits.maxPings }}
ping_max: {{ . }}
{{- end }}
{{- with .Values.nats.limits.writeDeadline }}
write_deadline: {{ . | quote }}
{{- end }}
{{- with .Values.nats.limits.lameDuckDuration }}
lame_duck_duration: {{ . | quote }}
{{- end }}
{{- if .Values.websocket.enabled }}
##################
# #
# Websocket #
# #
##################
websocket {
port: {{ .Values.websocket.port }}
{{- with .Values.websocket.tls }}
{{ $secretName := .secret.name }}
tls {
{{- with .cert }}
cert_file: /etc/nats-certs/ws/{{ $secretName }}/{{ . }}
{{- end }}
{{- with .key }}
key_file: /etc/nats-certs/ws/{{ $secretName }}/{{ . }}
{{- end }}
{{- with .ca }}
ca_file: /etc/nats-certs/ws/{{ $secretName }}/{{ . }}
{{- end }}
}
{{- else }}
no_tls: {{ .Values.websocket.noTLS }}
{{- end }}
}
{{- end }}
{{- if .Values.auth.enabled }}
##################
# #
# Authorization #
# #
##################
{{- if .Values.auth.resolver }}
{{- if eq .Values.auth.resolver.type "memory" }}
resolver: MEMORY
include "accounts/{{ .Values.auth.resolver.configMap.key }}"
{{- end }}
{{- if eq .Values.auth.resolver.type "full" }}
{{- if .Values.auth.resolver.configMap }}
include "accounts/{{ .Values.auth.resolver.configMap.key }}"
{{- else }}
{{- with .Values.auth.resolver }}
operator: {{ .operator }}
system_account: {{ .systemAccount }}
{{- end }}
resolver: {
type: full
{{- with .Values.auth.resolver }}
dir: {{ .store.dir | quote }}
allow_delete: {{ .allowDelete }}
interval: {{ .interval | quote }}
{{- end }}
}
{{- end }}
{{- end }}
{{- if .Values.auth.resolver.resolverPreload }}
resolver_preload: {{ toRawJson .Values.auth.resolver.resolverPreload }}
{{- end }}
{{- if eq .Values.auth.resolver.type "URL" }}
{{- with .Values.auth.resolver.url }}
resolver: URL({{ . }})
{{- end }}
operator: /etc/nats-config/operator/{{ .Values.auth.operatorjwt.configMap.key }}
{{- end }}
{{- end }}
{{- with .Values.auth.systemAccount }}
system_account: {{ . }}
{{- end }}
{{- with .Values.auth.basic }}
{{- with .noAuthUser }}
no_auth_user: {{ . }}
{{- end }}
{{- with .users }}
authorization {
users: [
{{- range . }}
{{- toRawJson . | nindent 4 }},
{{- end }}
]
}
{{- end }}
{{- if .token }}
authorization {
token: "{{ .token }}"
}
{{- end }}
{{- with .accounts }}
accounts: {{- toRawJson . }}
{{- end }}
{{- end }}
{{- end }}

View File

@ -0,0 +1,95 @@
{{- if .Values.natsbox.enabled }}
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ include "nats.fullname" . }}-box
namespace: {{ .Release.Namespace | quote }}
labels:
app: {{ include "nats.fullname" . }}-box
chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
spec:
replicas: 1
selector:
matchLabels:
app: {{ include "nats.fullname" . }}-box
template:
metadata:
labels:
app: {{ include "nats.fullname" . }}-box
{{- if .Values.natsbox.podAnnotations }}
annotations:
{{- range $key, $value := .Values.natsbox.podAnnotations }}
{{ $key }}: {{ $value | quote }}
{{- end }}
{{- end }}
spec:
{{- with .Values.natsbox.affinity }}
affinity:
{{- tpl (toYaml .) $ | nindent 8 }}
{{- end }}
volumes:
{{- if .Values.natsbox.credentials }}
- name: nats-sys-creds
secret:
secretName: {{ .Values.natsbox.credentials.secret.name }}
{{- end }}
{{- with .Values.nats.tls }}
{{ $secretName := .secret.name }}
- name: {{ $secretName }}-clients-volume
secret:
secretName: {{ $secretName }}
{{- end }}
{{- with .Values.imagePullSecrets }}
imagePullSecrets:
{{- toYaml . | nindent 8 }}
{{- end }}
containers:
- name: nats-box
image: {{ .Values.natsbox.image }}
imagePullPolicy: {{ .Values.natsbox.pullPolicy }}
resources:
{{- toYaml .Values.natsbox.resources | nindent 10 }}
env:
- name: NATS_URL
value: {{ template "nats.fullname" . }}
{{- if .Values.natsbox.credentials }}
- name: USER_CREDS
value: /etc/nats-config/creds/{{ .Values.natsbox.credentials.secret.key }}
- name: USER2_CREDS
value: /etc/nats-config/creds/{{ .Values.natsbox.credentials.secret.key }}
{{- end }}
{{- with .Values.nats.tls }}
{{ $secretName := .secret.name }}
lifecycle:
postStart:
exec:
command:
- /bin/sh
- -c
- cp /etc/nats-certs/clients/{{ $secretName }}/* /usr/local/share/ca-certificates && update-ca-certificates
{{- end }}
command:
- "tail"
- "-f"
- "/dev/null"
volumeMounts:
{{- if .Values.natsbox.credentials }}
- name: nats-sys-creds
mountPath: /etc/nats-config/creds
{{- end }}
{{- with .Values.nats.tls }}
#######################
# #
# TLS Volumes Mounts #
# #
#######################
{{ $secretName := .secret.name }}
- name: {{ $secretName }}-clients-volume
mountPath: /etc/nats-certs/clients/{{ $secretName }}
{{- end }}
{{- with .Values.securityContext }}
securityContext:
{{ toYaml . | indent 8 }}
{{- end }}
{{- end }}

View File

@ -0,0 +1,22 @@
{{- if .Values.podDisruptionBudget }}
---
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
name: {{ include "nats.fullname" . }}
namespace: {{ .Release.Namespace | quote }}
labels:
{{- include "nats.labels" . | nindent 4 }}
spec:
{{- if .Values.podDisruptionBudget.minAvailable }}
minAvailable: {{ .Values.podDisruptionBudget.minAvailable }}
{{- end }}
{{- if .Values.podDisruptionBudget.maxUnavailable }}
maxUnavailable: {{ .Values.podDisruptionBudget.maxUnavailable }}
{{- end }}
selector:
matchLabels:
{{- include "nats.selectorLabels" . | nindent 6 }}
{{- end }}

View File

@ -0,0 +1,31 @@
{{ if and .Values.nats.externalAccess .Values.nats.advertise }}
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: {{ .Values.nats.serviceAccount }}
namespace: {{ .Release.Namespace }}
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: {{ .Values.nats.serviceAccount }}
rules:
- apiGroups: [""]
resources:
- nodes
verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: {{ .Values.nats.serviceAccount }}-binding
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: {{ .Values.nats.serviceAccount }}
subjects:
- kind: ServiceAccount
name: {{ .Values.nats.serviceAccount }}
namespace: {{ .Release.Namespace }}
{{ end }}

View File

@ -0,0 +1,67 @@
---
apiVersion: v1
kind: Service
metadata:
name: {{ include "nats.fullname" . }}
namespace: {{ .Release.Namespace | quote }}
labels:
{{- include "nats.labels" . | nindent 4 }}
{{- if .Values.serviceAnnotations}}
annotations:
{{- range $key, $value := .Values.serviceAnnotations }}
{{ $key }}: {{ $value | quote }}
{{- end }}
{{- end }}
spec:
selector:
{{- include "nats.selectorLabels" . | nindent 4 }}
clusterIP: None
{{- if .Values.topologyKeys }}
topologyKeys:
{{- .Values.topologyKeys | toYaml | nindent 4 }}
{{- end }}
ports:
{{- if .Values.websocket.enabled }}
- name: websocket
port: {{ .Values.websocket.port }}
{{- if .Values.appProtocol.enabled }}
appProtocol: tcp
{{- end }}
{{- end }}
- name: client
port: 4222
{{- if .Values.appProtocol.enabled }}
appProtocol: tcp
{{- end }}
- name: cluster
port: 6222
{{- if .Values.appProtocol.enabled }}
appProtocol: tcp
{{- end }}
- name: monitor
port: 8222
{{- if .Values.appProtocol.enabled }}
appProtocol: http
{{- end }}
- name: metrics
port: 7777
{{- if .Values.appProtocol.enabled }}
appProtocol: http
{{- end }}
- name: leafnodes
port: 7422
{{- if .Values.appProtocol.enabled }}
appProtocol: tcp
{{- end }}
- name: gateways
port: 7522
{{- if .Values.appProtocol.enabled }}
appProtocol: tcp
{{- end }}
{{- if .Values.mqtt.enabled }}
- name: mqtt
port: 1883
{{- if .Values.appProtocol.enabled }}
appProtocol: tcp
{{- end }}
{{- end }}

View File

@ -0,0 +1,40 @@
{{ if and .Values.exporter.enabled .Values.exporter.serviceMonitor.enabled }}
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
name: {{ template "nats.fullname" . }}
{{- if .Values.exporter.serviceMonitor.namespace }}
namespace: {{ .Values.exporter.serviceMonitor.namespace }}
{{- else }}
namespace: {{ .Release.Namespace | quote }}
{{- end }}
{{- if .Values.exporter.serviceMonitor.labels }}
labels:
{{- range $key, $value := .Values.exporter.serviceMonitor.labels }}
{{ $key }}: {{ $value | quote }}
{{- end }}
{{- end }}
{{- if .Values.exporter.serviceMonitor.annotations }}
annotations:
{{- range $key, $value := .Values.exporter.serviceMonitor.annotations }}
{{ $key }}: {{ $value | quote }}
{{- end }}
{{- end }}
spec:
endpoints:
- port: metrics
{{- if .Values.exporter.serviceMonitor.path }}
path: {{ .Values.exporter.serviceMonitor.path }}
{{- end }}
{{- if .Values.exporter.serviceMonitor.interval }}
interval: {{ .Values.exporter.serviceMonitor.interval }}
{{- end }}
{{- if .Values.exporter.serviceMonitor.scrapeTimeout }}
scrapeTimeout: {{ .Values.exporter.serviceMonitor.scrapeTimeout }}
{{- end }}
namespaceSelector:
any: true
selector:
matchLabels:
{{- include "nats.selectorLabels" . | nindent 6 }}
{{- end }}

View File

@ -0,0 +1,477 @@
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: {{ include "nats.fullname" . }}
namespace: {{ .Release.Namespace | quote }}
labels:
{{- include "nats.labels" . | nindent 4 }}
{{- if .Values.statefulSetAnnotations}}
annotations:
{{- range $key, $value := .Values.statefulSetAnnotations }}
{{ $key }}: {{ $value | quote }}
{{- end }}
{{- end }}
spec:
selector:
matchLabels:
{{- include "nats.selectorLabels" . | nindent 6 }}
{{- if .Values.cluster.enabled }}
replicas: {{ .Values.cluster.replicas }}
{{- else }}
replicas: 1
{{- end }}
serviceName: {{ include "nats.fullname" . }}
template:
metadata:
{{- if or .Values.podAnnotations .Values.exporter.enabled }}
annotations:
{{- if .Values.exporter.enabled }}
prometheus.io/path: /metrics
prometheus.io/port: "7777"
prometheus.io/scrape: "true"
{{- end }}
{{- range $key, $value := .Values.podAnnotations }}
{{ $key }}: {{ $value | quote }}
{{- end }}
{{- end }}
labels:
{{- include "nats.selectorLabels" . | nindent 8 }}
spec:
{{- with .Values.imagePullSecrets }}
imagePullSecrets:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.securityContext }}
securityContext:
{{ toYaml . | indent 8 }}
{{- end }}
{{- with .Values.affinity }}
affinity:
{{- tpl (toYaml .) $ | nindent 8 }}
{{- end }}
{{- if .Values.topologySpreadConstraints }}
topologySpreadConstraints:
{{- range .Values.topologySpreadConstraints }}
{{- if and .maxSkew .topologyKey }}
- maxSkew: {{ .maxSkew }}
topologyKey: {{ .topologyKey }}
{{- if .whenUnsatisfiable }}
whenUnsatisfiable: {{ .whenUnsatisfiable }}
{{- end }}
labelSelector:
matchLabels:
{{- include "nats.selectorLabels" $ | nindent 12 }}
{{- end}}
{{- end }}
{{- end }}
{{- if .Values.priorityClassName }}
priorityClassName: {{ .Values.priorityClassName | quote }}
{{- end }}
# Common volumes for the containers.
volumes:
- name: config-volume
configMap:
name: {{ include "nats.fullname" . }}-config
# Local volume shared with the reloader.
- name: pid
emptyDir: {}
{{- if and .Values.auth.enabled .Values.auth.resolver }}
{{- if .Values.auth.resolver.configMap }}
- name: resolver-volume
configMap:
name: {{ .Values.auth.resolver.configMap.name }}
{{- end }}
{{- if eq .Values.auth.resolver.type "URL" }}
- name: operator-jwt-volume
configMap:
name: {{ .Values.auth.operatorjwt.configMap.name }}
{{- end }}
{{- end }}
{{- if and .Values.nats.externalAccess .Values.nats.advertise }}
# Local volume shared with the advertise config initializer.
- name: advertiseconfig
emptyDir: {}
{{- end }}
{{- if and .Values.nats.jetstream.fileStorage.enabled .Values.nats.jetstream.fileStorage.existingClaim }}
# Persistent volume for jetstream running with file storage option
- name: {{ include "nats.fullname" . }}-js-pvc
persistentVolumeClaim:
claimName: {{ .Values.nats.jetstream.fileStorage.existingClaim | quote }}
{{- end }}
#################
# #
# TLS Volumes #
# #
#################
{{- with .Values.nats.tls }}
{{ $secretName := .secret.name }}
- name: {{ $secretName }}-clients-volume
secret:
secretName: {{ $secretName }}
{{- end }}
{{- with .Values.mqtt.tls }}
{{ $secretName := .secret.name }}
- name: {{ $secretName }}-mqtt-volume
secret:
secretName: {{ $secretName }}
{{- end }}
{{- with .Values.cluster.tls }}
{{ $secretName := .secret.name }}
- name: {{ $secretName }}-cluster-volume
secret:
secretName: {{ $secretName }}
{{- end }}
{{- with .Values.leafnodes.tls }}
{{ $secretName := .secret.name }}
- name: {{ $secretName }}-leafnodes-volume
secret:
secretName: {{ $secretName }}
{{- end }}
{{- with .Values.gateway.tls }}
{{ $secretName := .secret.name }}
- name: {{ $secretName }}-gateways-volume
secret:
secretName: {{ $secretName }}
{{- end }}
{{- with .Values.websocket.tls }}
{{ $secretName := .secret.name }}
- name: {{ $secretName }}-ws-volume
secret:
secretName: {{ $secretName }}
{{- end }}
{{- if .Values.leafnodes.enabled }}
#
# Leafnode credential volumes
#
{{- range .Values.leafnodes.remotes }}
{{- with .credentials }}
- name: {{ .secret.name }}-volume
secret:
secretName: {{ .secret.name }}
{{- end }}
{{- end }}
{{- end }}
{{ if and .Values.nats.externalAccess .Values.nats.advertise }}
# The service account is only used when we need to look up the
# node's current external public IP from the server in order to
# be able to advertise correctly.
serviceAccountName: {{ .Values.nats.serviceAccount }}
{{ end }}
# Required to be able to HUP signal and apply config
# reload to the server without restarting the pod.
shareProcessNamespace: true
{{- if and .Values.nats.externalAccess .Values.nats.advertise }}
# Initializer container required to be able to lookup
# the external ip on which this node is running.
initContainers:
- name: bootconfig
command:
- nats-pod-bootconfig
- -f
- /etc/nats-config/advertise/client_advertise.conf
- -gf
- /etc/nats-config/advertise/gateway_advertise.conf
env:
- name: KUBERNETES_NODE_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: spec.nodeName
image: {{ .Values.bootconfig.image }}
imagePullPolicy: {{ .Values.bootconfig.pullPolicy }}
resources:
{{- toYaml .Values.bootconfig.resources | nindent 10 }}
volumeMounts:
- mountPath: /etc/nats-config/advertise
name: advertiseconfig
subPath: advertise
{{- end }}
#################
# #
# NATS Server #
# #
#################
terminationGracePeriodSeconds: {{ .Values.nats.terminationGracePeriodSeconds }}
containers:
- name: nats
image: {{ .Values.nats.image }}
imagePullPolicy: {{ .Values.nats.pullPolicy }}
resources:
{{- toYaml .Values.nats.resources | nindent 10 }}
ports:
- containerPort: 4222
name: client
{{- if .Values.nats.externalAccess }}
hostPort: 4222
{{- end }}
- containerPort: 7422
name: leafnodes
{{- if .Values.nats.externalAccess }}
hostPort: 7422
{{- end }}
- containerPort: 7522
name: gateways
{{- if .Values.nats.externalAccess }}
hostPort: 7522
{{- end }}
- containerPort: 6222
name: cluster
- containerPort: 8222
name: monitor
- containerPort: 7777
name: metrics
{{- if .Values.mqtt.enabled }}
- containerPort: 1883
name: mqtt
{{- if .Values.nats.externalAccess }}
hostPort: 1883
{{- end }}
{{- end }}
{{- if .Values.websocket.enabled }}
- containerPort: {{ .Values.websocket.port }}
name: websocket
{{- if .Values.nats.externalAccess }}
hostPort: {{ .Values.websocket.port }}
{{- end }}
{{- end }}
command:
- "nats-server"
- "--config"
- "/etc/nats-config/nats.conf"
# Required to be able to define an environment variable
# that refers to other environment variables. This env var
# is later used as part of the configuration file.
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: CLUSTER_ADVERTISE
value: {{ include "nats.clusterAdvertise" . }}
volumeMounts:
- name: config-volume
mountPath: /etc/nats-config
- name: pid
mountPath: /var/run/nats
{{- if and .Values.nats.externalAccess .Values.nats.advertise }}
- mountPath: /etc/nats-config/advertise
name: advertiseconfig
subPath: advertise
{{- end }}
{{- if and .Values.auth.enabled .Values.auth.resolver }}
{{- if eq .Values.auth.resolver.type "memory" }}
- name: resolver-volume
mountPath: /etc/nats-config/accounts
{{- end }}
{{- if eq .Values.auth.resolver.type "full" }}
{{- if .Values.auth.resolver.configMap }}
- name: resolver-volume
mountPath: /etc/nats-config/accounts
{{- end }}
{{- if and .Values.auth.resolver .Values.auth.resolver.store }}
- name: nats-jwt-pvc
mountPath: {{ .Values.auth.resolver.store.dir }}
{{- end }}
{{- end }}
{{- if eq .Values.auth.resolver.type "URL" }}
- name: operator-jwt-volume
mountPath: /etc/nats-config/operator
{{- end }}
{{- end }}
{{- if .Values.nats.jetstream.fileStorage.enabled }}
- name: {{ include "nats.fullname" . }}-js-pvc
mountPath: {{ .Values.nats.jetstream.fileStorage.storageDirectory }}
{{- end }}
{{- with .Values.nats.tls }}
#######################
# #
# TLS Volumes Mounts #
# #
#######################
{{ $secretName := .secret.name }}
- name: {{ $secretName }}-clients-volume
mountPath: /etc/nats-certs/clients/{{ $secretName }}
{{- end }}
{{- with .Values.mqtt.tls }}
{{ $secretName := .secret.name }}
- name: {{ $secretName }}-mqtt-volume
mountPath: /etc/nats-certs/mqtt/{{ $secretName }}
{{- end }}
{{- with .Values.cluster.tls }}
{{ $secretName := .secret.name }}
- name: {{ $secretName }}-cluster-volume
mountPath: /etc/nats-certs/cluster/{{ $secretName }}
{{- end }}
{{- with .Values.leafnodes.tls }}
{{ $secretName := .secret.name }}
- name: {{ $secretName }}-leafnodes-volume
mountPath: /etc/nats-certs/leafnodes/{{ $secretName }}
{{- end }}
{{- with .Values.gateway.tls }}
{{ $secretName := .secret.name }}
- name: {{ $secretName }}-gateways-volume
mountPath: /etc/nats-certs/gateways/{{ $secretName }}
{{- end }}
{{- with .Values.websocket.tls }}
{{ $secretName := .secret.name }}
- name: {{ $secretName }}-ws-volume
mountPath: /etc/nats-certs/ws/{{ $secretName }}
{{- end }}
{{- if .Values.leafnodes.enabled }}
#
# Leafnode credential volumes
#
{{- range .Values.leafnodes.remotes }}
{{- with .credentials }}
- name: {{ .secret.name }}-volume
mountPath: /etc/nats-creds/{{ .secret.name }}
{{- end }}
{{- end }}
{{- end }}
# Liveness/Readiness probes against the monitoring.
#
livenessProbe:
httpGet:
path: /
port: 8222
initialDelaySeconds: 10
timeoutSeconds: 5
readinessProbe:
httpGet:
path: /
port: 8222
initialDelaySeconds: 10
timeoutSeconds: 5
# Gracefully stop NATS Server on pod deletion or image upgrade.
#
lifecycle:
preStop:
exec:
# Using the alpine based NATS image, we add an extra sleep that is
# the same amount as the terminationGracePeriodSeconds to allow
# the NATS Server to gracefully terminate the client connections.
#
command:
- "/bin/sh"
- "-c"
- "nats-server -sl=ldm=/var/run/nats/nats.pid && /bin/sleep {{ .Values.nats.terminationGracePeriodSeconds }}"
#################################
# #
# NATS Configuration Reloader #
# #
#################################
{{ if .Values.reloader.enabled }}
- name: reloader
image: {{ .Values.reloader.image }}
imagePullPolicy: {{ .Values.reloader.pullPolicy }}
resources:
{{- toYaml .Values.reloader.resources | nindent 10 }}
command:
- "nats-server-config-reloader"
- "-pid"
- "/var/run/nats/nats.pid"
- "-config"
- "/etc/nats-config/nats.conf"
volumeMounts:
- name: config-volume
mountPath: /etc/nats-config
- name: pid
mountPath: /var/run/nats
{{ end }}
##############################
# #
# NATS Prometheus Exporter #
# #
##############################
{{ if .Values.exporter.enabled }}
- name: metrics
image: {{ .Values.exporter.image }}
imagePullPolicy: {{ .Values.exporter.pullPolicy }}
resources:
{{- toYaml .Values.exporter.resources | nindent 10 }}
args:
- -connz
- -routez
- -subz
- -varz
- -prefix=nats
- -use_internal_server_id
{{- if .Values.nats.jetstream.enabled }}
- -jsz=all
{{- end }}
- http://localhost:8222/
ports:
- containerPort: 7777
name: metrics
{{ end }}
volumeClaimTemplates:
{{- if eq .Values.auth.resolver.type "full" }}
{{- if and .Values.auth.resolver .Values.auth.resolver.store }}
#####################################
# #
# Account Server Embedded JWT #
# #
#####################################
- metadata:
name: nats-jwt-pvc
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: {{ .Values.auth.resolver.store.size }}
{{- end }}
{{- end }}
{{- if and .Values.nats.jetstream.fileStorage.enabled (not .Values.nats.jetstream.fileStorage.existingClaim) }}
#####################################
# #
# Jetstream New Persistent Volume #
# #
#####################################
- metadata:
name: {{ include "nats.fullname" . }}-js-pvc
{{- if .Values.nats.jetstream.fileStorage.annotations }}
annotations:
{{- range $key, $value := .Values.nats.jetstream.fileStorage.annotations }}
{{ $key }}: {{ $value | quote }}
{{- end }}
{{- end }}
spec:
accessModes:
{{- range .Values.nats.jetstream.fileStorage.accessModes }}
- {{ . | quote }}
{{- end }}
resources:
requests:
storage: {{ .Values.nats.jetstream.fileStorage.size }}
storageClassName: {{ .Values.nats.jetstream.fileStorage.storageClassName | quote }}
{{- end }}

View File

@ -0,0 +1,405 @@
###############################
# #
# NATS Server Configuration #
# #
###############################
nats:
image: nats:2.3.2-alpine
pullPolicy: IfNotPresent
# Toggle whether to enable external access.
# This binds a host port for clients, gateways and leafnodes.
externalAccess: false
# Toggle to disable client advertisements (connect_urls),
# in case of running behind a load balancer (which is not recommended)
# it might be required to disable advertisements.
advertise: true
# In case both external access and advertise are enabled
# then a service account would be required to be able to
# gather the public ip from a node.
serviceAccount: "nats-server"
# The number of connect attempts against discovered routes.
connectRetries: 30
# How many seconds should pass before sending a PING
# to a client that has no activity.
pingInterval:
resources: {}
# Server settings.
limits:
maxConnections:
maxSubscriptions:
maxControlLine:
maxPayload:
writeDeadline:
maxPending:
maxPings:
lameDuckDuration:
terminationGracePeriodSeconds: 60
logging:
debug:
trace:
logtime:
connectErrorReports:
reconnectErrorReports:
jetstream:
enabled: false
#############################
# #
# Jetstream Memory Storage #
# #
#############################
memStorage:
enabled: true
size: 1Gi
############################
# #
# Jetstream File Storage #
# #
############################
fileStorage:
enabled: false
storageDirectory: /data
# Set for use with existing PVC
# existingClaim: jetstream-pvc
# claimStorageSize: 1Gi
# Use below block to create new persistent volume
# only used if existingClaim is not specified
size: 1Gi
storageClassName: default
accessModes:
- ReadWriteOnce
annotations:
# key: "value"
#######################
# #
# TLS Configuration #
# #
#######################
#
# # You can find more on how to set up and troubleshoot TLS connections at:
#
# # https://docs.nats.io/nats-server/configuration/securing_nats/tls
#
# tls:
# secret:
# name: nats-client-tls
# ca: "ca.crt"
# cert: "tls.crt"
# key: "tls.key"
mqtt:
enabled: false
ackWait: 1m
maxAckPending: 100
#######################
# #
# TLS Configuration #
# #
#######################
#
# # You can find more on how to set up and troubleshoot TLS connections at:
#
# # https://docs.nats.io/nats-server/configuration/securing_nats/tls
#
#
# tls:
# secret:
# name: nats-mqtt-tls
# ca: "ca.crt"
# cert: "tls.crt"
# key: "tls.key"
nameOverride: ""
# An array of imagePullSecrets, and they have to be created manually in the same namespace
# ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
imagePullSecrets: []
# Toggle whether to set up a Pod Security Context
# ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/
securityContext: {}
# securityContext:
# fsGroup: 1000
# runAsUser: 1000
# runAsNonRoot: true
# Affinity for pod assignment
# ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
affinity: {}
## Pod priority class name
## ref: https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/#priorityclass
priorityClassName: null
# Service topology
# ref: https://kubernetes.io/docs/concepts/services-networking/service-topology/
topologyKeys: []
# Pod Topology Spread Constraints
# ref https://kubernetes.io/docs/concepts/workloads/pods/pod-topology-spread-constraints/
topologySpreadConstraints: []
# - maxSkew: 1
# topologyKey: zone
# whenUnsatisfiable: DoNotSchedule
# Annotations to add to the NATS pods
# ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/
podAnnotations: {}
# key: "value"
## Define a Pod Disruption Budget for the stateful set
## ref: https://kubernetes.io/docs/concepts/workloads/pods/disruptions/
podDisruptionBudget: null
# minAvailable: 1
# maxUnavailable: 1
# Annotations to add to the NATS StatefulSet
statefulSetAnnotations: {}
# Annotations to add to the NATS Service
serviceAnnotations: {}
cluster:
enabled: false
replicas: 3
noAdvertise: false
# authorization:
# user: foo
# password: pwd
# timeout: 0.5
# Leafnode connections to extend a cluster:
#
# https://docs.nats.io/nats-server/configuration/leafnodes
#
leafnodes:
enabled: false
noAdvertise: false
# remotes:
# - url: "tls://connect.ngs.global:7422"
#######################
# #
# TLS Configuration #
# #
#######################
#
# # You can find more on how to set up and troubleshoot TLS connections at:
#
# # https://docs.nats.io/nats-server/configuration/securing_nats/tls
#
#
# tls:
# secret:
# name: nats-client-tls
# ca: "ca.crt"
# cert: "tls.crt"
# key: "tls.key"
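# Illustrative sketch only: extending this cluster with a TLS-secured
# leafnode remote (URL and secret name below are hypothetical):
# leafnodes:
#   enabled: true
#   remotes:
#     - url: "tls://connect.example.com:7422"
#   tls:
#     secret:
#       name: nats-leafnode-tls
#     ca: "ca.crt"
#     cert: "tls.crt"
#     key: "tls.key"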
# Gateway connections to create a super cluster
#
# https://docs.nats.io/nats-server/configuration/gateways
#
gateway:
enabled: false
name: 'default'
#############################
# #
# List of remote gateways #
# #
#############################
# gateways:
# - name: other
# url: nats://my-gateway-url:7522
#######################
# #
# TLS Configuration #
# #
#######################
#
# # You can find more on how to set up and troubleshoot TLS connections at:
#
# # https://docs.nats.io/nats-server/configuration/securing_nats/tls
#
# tls:
# secret:
# name: nats-client-tls
# ca: "ca.crt"
# cert: "tls.crt"
# key: "tls.key"
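# Illustrative sketch only: forming a super cluster by pointing this
# gateway at a second, hypothetical cluster:
# gateway:
#   enabled: true
#   name: 'us-east'
#   gateways:
#     - name: eu-west
#       url: nats://nats-eu-west.example.com:7522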
# If both external access and advertisements are
# enabled, an initializer container is used to gather
# the public IPs.
bootconfig:
image: natsio/nats-boot-config:0.5.3
pullPolicy: IfNotPresent
# NATS Box
#
# https://github.com/nats-io/nats-box
#
natsbox:
enabled: true
image: natsio/nats-box:0.6.0
pullPolicy: IfNotPresent
# An array of imagePullSecrets; they have to be created manually in the same namespace
# ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
imagePullSecrets: []
# - name: dockerhub
# credentials:
# secret:
# name: nats-sys-creds
# key: sys.creds
# Annotations to add to the box pods
# ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/
podAnnotations: {}
# key: "value"
# Affinity for nats box pod assignment
# ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
affinity: {}
# The NATS config reloader image to use.
reloader:
enabled: true
image: natsio/nats-server-config-reloader:0.6.1
pullPolicy: IfNotPresent
# Prometheus NATS Exporter configuration.
exporter:
enabled: true
image: natsio/prometheus-nats-exporter:0.8.0
pullPolicy: IfNotPresent
resources: {}
# Prometheus operator ServiceMonitor support. Exporter has to be enabled
serviceMonitor:
enabled: false
## Specify the namespace where Prometheus Operator is running
##
# namespace: monitoring
labels: {}
annotations: {}
path: /metrics
# interval:
# scrapeTimeout:
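# Illustrative sketch only: scraping the exporter via the Prometheus
# Operator (the namespace and label below are assumptions about the
# local Prometheus setup):
# exporter:
#   enabled: true
#   serviceMonitor:
#     enabled: true
#     namespace: monitoring
#     labels:
#       release: metrics
#     path: /metrics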
# Authentication setup
auth:
enabled: false
# basic:
# noAuthUser:
# # List of users that can connect with basic auth,
# # that belong to the global account.
# users:
# # List of accounts with users that can connect
# # using basic auth.
# accounts:
# Reference to the Operator JWT.
# operatorjwt:
# configMap:
# name: operator-jwt
# key: KO.jwt
# Token authentication
# token:
# Public key of the System Account
# systemAccount:
resolver:
# Disables the resolver by default
type: none
##########################################
# #
# Embedded NATS Account Server Resolver #
# #
##########################################
# type: full
# If the resolver type is 'full' and delete is allowed, deleted JWTs are renamed rather than removed.
allowDelete: false
# Interval at which a nats-server with a NATS-based account resolver will compare
# its state with one random NATS-based account resolver in the cluster and, if needed,
# exchange JWTs to converge on the same set.
interval: 2m
# Operator JWT
operator:
# System Account Public NKEY
systemAccount:
# resolverPreload:
# <ACCOUNT>: <JWT>
# Directory in which the account JWTs will be stored.
store:
dir: "/accounts/jwt"
# Size of the account JWT storage.
size: 1Gi
##############################
# #
# Memory resolver settings #
# #
##############################
# type: memory
#
# Use a configmap reference which will be mounted
# into the container.
#
# configMap:
# name: nats-accounts
# key: resolver.conf
##########################
# #
# URL resolver settings #
# #
##########################
# type: URL
# url: "http://nats-account-server:9090/jwt/v1/accounts/"
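# Illustrative sketch only: a 'full' NATS-based account resolver. The
# operator JWT and system account NKEY below are placeholders, normally
# generated with nsc:
# resolver:
#   type: full
#   allowDelete: false
#   interval: 2m
#   operator: <operator JWT>
#   systemAccount: <system account public NKEY>
#   store:
#     dir: "/accounts/jwt"
#     size: 1Gi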
websocket:
enabled: false
port: 443
appProtocol:
enabled: false
# Cluster Domain configured on the kubelets
# https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/
k8sClusterDomain: cluster.local
# Add labels to all the deployed resources
commonLabels: {}
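# Illustrative sketch only: enabling the WebSocket listener, e.g. for
# browser clients behind a TLS-terminating ingress:
# websocket:
#   enabled: true
#   port: 443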

View File

@ -1,4 +1,4 @@
-{{- if .Values.nats.promExporter.podMonitor.enabled }}
+{{- if .Values.nats.exporter.serviceMonitor.enabled }}
 apiVersion: v1
 kind: ConfigMap
 metadata:

View File

@ -1,13 +1,13 @@
-##!/bin/bash
+#!/bin/bash
 set -ex
-. ../../scripts/lib-update.sh
-#login_ecr_public
-update_helm
+helm dep update
+
+## NATS
+NATS_VERSION=0.8.4
+rm -rf charts/nats && curl -L -s -o - https://github.com/nats-io/k8s/releases/download/v$NATS_VERSION/nats-$NATS_VERSION.tgz | tar xfz - -C charts
 # Fetch dashboards
 ../kubezero-metrics/sync_grafana_dashboards.py dashboards-nats.yaml templates/nats/grafana-dashboards.yaml
 ../kubezero-metrics/sync_grafana_dashboards.py dashboards-rabbitmq.yaml templates/rabbitmq/grafana-dashboards.yaml
-update_docs

View File

@ -2,16 +2,17 @@
 nats:
   enabled: false

-  config:
+  nats:
+    advertise: false
     jetstream:
       enabled: true

-  natsBox:
+  natsbox:
     enabled: false

-  promExporter:
-    enabled: false
-    podMonitor:
+  exporter:
+    serviceMonitor:
       enabled: false

 mqtt:

View File

@ -28,8 +28,6 @@ spec:
       containers:
       - name: kube-multus
         image: {{ .Values.multus.image.repository }}:{{ .Values.multus.image.tag }}
-        # Always used cached images
-        imagePullPolicy: Never
         command: ["/entrypoint.sh"]
         args:
         - "--multus-conf-file=/tmp/multus-conf/00-multus.conf"
@ -47,7 +45,6 @@ spec:
           privileged: true
           capabilities:
             add: ["SYS_ADMIN"]
-        terminationMessagePolicy: FallbackToLogsOnError
         volumeMounts:
         - name: run
           mountPath: /run

View File

@ -27,10 +27,9 @@ multus:
 cilium:
   enabled: false

-  # Always use cached images
+  # breaks preloaded images otherwise
   image:
     useDigest: false
-    pullPolicy: Never

   resources:
     requests:

View File

@ -1,8 +1,8 @@
 apiVersion: v2
-name: kubezero-keyvalue
-description: KubeZero KeyValue Module
+name: kubezero-redis
+description: KubeZero Umbrella Chart for Redis HA
 type: application
-version: 0.1.0
+version: 0.4.2
 home: https://kubezero.com
 icon: https://cdn.zero-downtime.net/assets/kubezero/logo-small-64.png
 keywords:
@ -17,12 +17,12 @@ dependencies:
     version: ">= 0.1.6"
     repository: https://cdn.zero-downtime.net/charts/
   - name: redis
-    version: 20.0.3
+    version: 16.13.2
     repository: https://charts.bitnami.com/bitnami
     condition: redis.enabled
   - name: redis-cluster
-    version: 11.0.2
+    version: 7.6.4
     repository: https://charts.bitnami.com/bitnami
     condition: redis-cluster.enabled
-kubeVersion: ">= 1.26.0"
+kubeVersion: ">= 1.25.0"

View File

@ -1,8 +1,8 @@
-# kubezero-keyvalue
+# kubezero-redis

-![Version: 0.1.0](https://img.shields.io/badge/Version-0.1.0-informational?style=flat-square) ![Type: application](https://img.shields.io/badge/Type-application-informational?style=flat-square)
+![Version: 0.4.1](https://img.shields.io/badge/Version-0.4.1-informational?style=flat-square) ![Type: application](https://img.shields.io/badge/Type-application-informational?style=flat-square)

-KubeZero KeyValue Module
+KubeZero Umbrella Chart for Redis HA

 **Homepage:** <https://kubezero.com>
@ -14,13 +14,13 @@ KubeZero KeyValue Module

 ## Requirements

-Kubernetes: `>= 1.26.0`
+Kubernetes: `>= 1.25.0`

 | Repository | Name | Version |
 |------------|------|---------|
 | https://cdn.zero-downtime.net/charts/ | kubezero-lib | >= 0.1.6 |
-| https://charts.bitnami.com/bitnami | redis | 20.0.3 |
-| https://charts.bitnami.com/bitnami | redis-cluster | 11.0.2 |
+| https://charts.bitnami.com/bitnami | redis | 16.10.1 |
+| https://charts.bitnami.com/bitnami | redis-cluster | 7.6.1 |

 ## Values

View File

@ -7,8 +7,3 @@ dashboards:
     url: https://grafana.com/api/dashboards/11835/revisions/1/download
     tags:
       - Redis
-  - name: redis-cluster
-    url: https://grafana.com/api/dashboards/14615/revisions/1/download
-    tags:
-      - Redis
-      - Redis-Cluster

View File

@ -0,0 +1,13 @@
{{- if or .Values.redis.metrics.enabled ( index .Values "redis-cluster" "metrics" "enabled") }}
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ printf "%s-%s" (include "kubezero-lib.fullname" $) "grafana-dashboards" | trunc 63 | trimSuffix "-" }}
namespace: {{ .Release.Namespace }}
labels:
grafana_dashboard: "1"
{{- include "kubezero-lib.labels" . | nindent 4 }}
binaryData:
redis.json.gz:
H4sIAAAAAAAC/+1daVPcOBr+Pr9C5Zndgq0OsQ0dmqmaDySBmdQAyQYyW7UD1aW2RbcWX2PJQIdifvvq8CHZ6jMNNMEfOKxX1vGejw5Ldz8AYPX7OEoySqyfwZ/sGYA78ZtRIhgilmq9P+1/+vzx+ODst4Mvp1anIAdwgAJO/5TGIaIjlJGK6CPipTihOI54lopAx4ko1IcUkjhLPVTRkiAb4uiDz+mJoVBJP8mbpVQrMtyz3xcd2aUU/ZXhFBk6VdQ/TOEljGBVOPaNyQUTfq0TrlFK8t692epuuXkjOubqEhgxZjUrS0bGqtRkpaLpdZhYiicyM2qy0VSls2Vv2Uv0jeBoGCBCIW1WeWqgNXtZihNGUczyMiqXp6zeCjChpXSrRjHKIMMB/cBLcjpVqsIcc6dZHhTBQcDpNM2Qkj7CviEVe3H0Lg7ilBeYDgdww+4A13HYr263A5xNteii6/tVX8A/wX6AUqo1oZIlGQ1imPpWTrsXfy9+yMVQN7DPyMcEvC/eApdxCqpOAkk+uE3ilKIUOFu3HYApuInTKwJuMB2BEQpCwETCGPA65blfjSBA+Qtb5+l5xH8+XIJxnIEQEy5fIDKCEIVxOgYZxQH+KvrWAUmAIEEgjH18OQbnVghvZbZzC1zDIEMAR/IfsjWGoVQfixVHayKwhhGiwiU4Tm+7K5O4gZzFcUBxwgi2SBRqF2VBIJ9Yq2HOHKfb23Z2uju2280LCHB0JXyDVDChwAZf4UFvhM5wiOKMKoVLGpf8W+hdDdM4i3jdlzAgSKf/wTtoJhFFeUGuP+4O05vujvyxt/Y0DZI5tneZZrl7HbBj8yw9Q54uozi7LsvSFaXsbhY6dFG551nWwPTLwyEUXLHLRIN8ZHqaCjPQ+8l0MIRUOIOq3CHMhqg0Y5HEdKNglGPbSndCHBUENZmM4pt6Zdx0Rszhj+LAP+KBiUzLcQzTKyREwLtRWFjVxhT7n2Kit3LEHneVwngTXOX5ttbKMX9uFC20dK96jJieMjOoaxcmJ+imzmVda3PWJQkzxDPpNRxTuq5oVYcUpyQNksaAoluq6BPISbzoMvG+M72wFEbDGYW5VWENzWTK8J4p56eYsYboGmFxFgnCcSwcMrOkKEIeRb6l5TnjNdc4GidlAKmkkcSEXuJbHaHkiYdxRE/xV1FP1/6HQk9R8x2RNvEVwZNjmEyRxSWzQf4eb7TONSp7Y5283q8R4vKFKQwlCVN2pjo1o7vEQaBHr20WuZiPZb96e9xxOD3Nt1zyepo2xUtWy5HFuNwH7W1rBeRma7Q44VVYMVlYh4swHSI6hW8sRImqmd5ssJ9+zIBEnzKfvSGCUz9L+APDuH2CmLr45A5HLMxFHvrl73Prp+Lh3Lr/8ycOhaU9Xmxu6ryufJkojqAUI6JnUWzZMlMOoUcFo1yNHKAhivzDsgb9ZeaZU+w105nGSaRc0wpCEQ+KTs+2p2hF6QpJjd+YCvdufRF8a+J2E64Tdq0q/66q/II6Xflj3mLrlzn1vnAjM1VfZCyGC16WpiiiZijbRvoVRfoojtALCfbuzGDPxw1cqT4yn5Q2xxASDDhuiwZaNNCigZWiARn7S4XoewFmvn9y7F862D9mSHeXiuchOsyVyQn19NMRvmxYRgkA3kmWLYkAei0CmI4ADLF7GQjQBBJPjQASxGqK6EOAgNoE3BNggJ0aBthZEQZwWgzQYoAWA6wUAzA9Af8C+TSAnHzuZ4SBgcGYoslQAIDXQHuHzyrMeGXzO8MPPbuz110FiDiWSwNfCByil4IkYIAheVfEfcW3DGDaCFJioecIRUM6Eq5NS0em7LOC+YJxGwsbdrSEX1Poc/Cn4QEeI/X+zB01e7Wo2ZsnavooOpWGU2+zaIc7IzIKA9KbBq+HBndVyLJJYYZvSsWRIXXSNEVMoclHypWvgtDoPnehxBDs0Q32haK40yDAPEFRCX76GulRWV6jVTmyg8MmzE14dVxrMv5uV09vSpAx3EcpEm76MogV85e+sgBLtY6xwOUhk7EwF+JdNWrhbi9B/pGMdDpt/pEkpMVUsheHIYx80k/S2EOEhxIh38kzyk54sYbzyPuLRY0dW3elgnPVOv0c8UQRoRJKahEjRJ/RMNfJ2gszBqq5WMDBLfIypuUsgBPkKeGmXClWhyDkMyIMXuRrxE17hkzozTEHiVNaG5wIW+4XMc3LwiyAFF8jq4lpqr0n6gaPW3iLa5Y4yLwrqZ5qn3mzc5PWJucrrF3LbR43lc7HYONjeDttfKAs6Y44J3T9yzcE6Y3ghHj4FhKk78goHWwju/SwjeQmWpw89Fi7djYMZNwUOgMOQ5MuivQjdF02WtsSsmbwQ5lLcJ8TJnHerACUOC0oeVJQojXt+8UkWAElV2gsmtAfYUpm4ZHuMngk3/1W05ZVAhXe9NWMcOfBKp0lmMs3vKHnyl7Z+AUZ/PZ5gsHfmCox/HcsugyYcwCnTwYFceTja+xnMHixULCmXUsBLLtFgd+MAuv4wfrx7aHz1rabivliJ6mcWZs5d5cAhLsPDwh5/DgIEzqeQPsvSuPV4Mj68ttawEieACABX3k32+mt5TZKLLA68tArHbwNa4MFF10HWj34mxfiMWteEMetB147404ETFgmauftVg/WhAY/JBJq4drzm7R7DnDMcVeAxxRmrdMM3Voiq3bVcCWrhhGi8gtrCR4eZA5pQbRwdwdEi8D9/aKQan7opHMgzujjsOCbZ8sYc2RjJ3NnzTHVCaL8w17w4fXHxcBUHTO1WOqJsVS7Bvpy4NTu085uOTtLwKnuEmiKC3KfnJlPWpiFtWrZF4FaKR6O6KnxJIclUZhx32oLwpYDYRq/VoLBSBYWu3/9gVjKmwg5NsFgDDb8wdPCDn/AIAf4vjZnyfmdDxSFcjXu/dt2Me5BMYn2AW4LSVpI8nwgSWOGZylM4my3UzwtulgvdPEKNLP3WUk4xdFwyntPB0eimIKigY++7PbcmGVm1PJbrNYDuh3kvQLXBJzE9FX5/DsTSIvi2t31LYxTikfX2BMqZf3Y27MPbVfVnxCF4nwBv4/5SKgvMjc/Hzq3nF13y9ndsrecn/ecrn1udf4XD1h6dYAnP6zCXEmKvADiULZh+/BNb3u73d41/wSY6ywBNnst1nwZWNPsCIU3kBuFctvTfFXhhdzpH+M/+uokw1cbyvqcgC/MGwmgNcfanJwtK1LXccd73qO12UdWZ3ju/h+L4avjq2z4M8az4uvQg1o32p1mLYZtpyLXGhz2nngqcil06Oy8ZHjotFORjzMVSePkaqPbAdh0VsV0eAHEERVPu+zphX5jq1WF1bTu9D0YBHmflt+5tgaHVYB3vCPLHFSxqj1a7Uxai0JeMgpZbNJpRVNMvWVAxJsWL0z47u+RoIL73X3vN+Vg5FWfi+xMCtPrEY/lxUglP4DXOPXYFJPbqNtG3WcSdctb1Jjpcavivd62pYJbxBuhEP5RXr3mOjKZjoP8FrL0SuZkbrISvLQaqyyaojDhM1rRcJ772arYdGc6otN4NZt6Umf93r+
apIvpZdUIceQFmY/2zefVTrnHkMsxY/ZveG367YhaTKo8Ekv+K0PpeMKlfJqYHC11iG5rgy2LXOHkSxqcjiPPdMpT81bAuj1r2hUUB2/XvIQirvsF79Pz0SWOcHFFneBzX7qX8nKeDuB8FJFQOxp4aSGeFMUtIsPI9NIs+S3Un0qs7reJ1RBimG0K2ZF/F02zdGqjDzzNnDlXGtlJhZARdCYLMqK7NdCku5LxHMOUDwzEdEAS+6vRrk+xD8QhwAsoF6u8H9XeWUq3FunhqvXNafVtXmkIYfC0QvBCQKbVu6X1sA77ZmhgUffDaeACff5ONbN2PatA4CUYKo/yj29eOQW2L07rZ3hQey3BDN2n1cs5w8q76VQAbnWVoYpjKw/b6oN6aHtX+d9RH7ZtlaKMQFzlfye/kPai6AMfCCq6NLMWteA3asFqLe6O+qAsCez6anuLtmjs+xqL8bE1SOMbNkLN4as+3pv3ntwN0624UputTN5zfPv+6OTz71/+81WmVtcY7/xw/38XppizYHsAAA==
{{- end }}

View File

@ -1,12 +1,4 @@
 #!/bin/bash
-set -ex
-. ../../scripts/lib-update.sh
-#login_ecr_public
-update_helm
-
 # Fetch dashboards from Grafana.com and update ZDT CM
 ../kubezero-metrics/sync_grafana_dashboards.py dashboards.yaml templates/grafana-dashboards.yaml
-update_docs

View File

@ -3,10 +3,6 @@ redis:
   architecture: standalone

-  # Stick to last OSS version for now
-  image:
-    tag: 7.2.5-debian-12-r4
-
   replica:
     replicaCount: 0

View File

@ -18,7 +18,7 @@
         "subdir": "contrib/mixin"
       }
     },
-    "version": "9f59ef8ead097f836271f125d0e3774ddae4e71d",
+    "version": "e7f572914d79f0705b3dc8ca28d9a14b0f854d49",
     "sum": "IXI3LQIT9NmTPJAk8WLUJd5+qZfcGpeNCyWIK7oEpws="
   },
   {
@ -58,7 +58,7 @@
         "subdir": "gen/grafonnet-latest"
       }
     },
-    "version": "5a66b0f6a0f4f7caec754dd39a0e263b56a0f90a",
+    "version": "119d65363dff84a1976bba609f2ac3a8f450e760",
     "sum": "eyuJ0jOXeA4MrobbNgU4/v5a7ASDHslHZ0eS6hDdWoI="
   },
   {
@ -68,7 +68,7 @@
         "subdir": "gen/grafonnet-v10.0.0"
       }
     },
-    "version": "5a66b0f6a0f4f7caec754dd39a0e263b56a0f90a",
+    "version": "119d65363dff84a1976bba609f2ac3a8f450e760",
     "sum": "xdcrJPJlpkq4+5LpGwN4tPAuheNNLXZjE6tDcyvFjr0="
   },
   {
@ -78,7 +78,7 @@
         "subdir": "gen/grafonnet-v11.0.0"
       }
     },
-    "version": "5a66b0f6a0f4f7caec754dd39a0e263b56a0f90a",
+    "version": "119d65363dff84a1976bba609f2ac3a8f450e760",
     "sum": "Fuo+qTZZzF+sHDBWX/8fkPsUmwW6qhH8hRVz45HznfI="
   },
   {
@ -88,8 +88,8 @@
         "subdir": "grafana-builder"
       }
     },
-    "version": "3f0a5b0eeb2f5dc381a420b35d27198bd9b72e8c",
-    "sum": "yxqWcq/N3E/a/XreeU6EuE6X7kYPnG0AspAQFKOjASo="
+    "version": "ea6f2601969aa12c02dbca761ce4316aff036af2",
+    "sum": "udZaafkbKYMGodLqsFhEe+Oy/St2p0edrK7hiMPEey0="
   },
   {
     "source": {
@ -118,8 +118,8 @@
         "subdir": ""
       }
     },
-    "version": "dd5c59ab4491159593ed370a344a553b57146a7d",
-    "sum": "2tFZyRtLw9nasUQdFn5LGGqJplJyAeJxd59u6mHU+mw="
+    "version": "3dfa72d1d1ab31a686b1f52ec28bbf77c972bd23",
+    "sum": "7ufhpvzoDqAYLrfAsGkTAIRmu2yWQkmHukTE//jOsJU="
   },
   {
     "source": {
@ -128,8 +128,8 @@
         "subdir": "jsonnet/kube-state-metrics"
       }
     },
-    "version": "e96dfc0a39d8e2ae759a954a98d8bc9b29bf1a3e",
-    "sum": "h6H5AsU7JsCAWttnPgevTNituobj2eIr2ebxdkaABQo="
+    "version": "7104d579e93d672754c018a924d6c3f7ec23874e",
+    "sum": "pvInhJNQVDOcC3NGWRMKRIP954mAvLXCQpTlafIg7fA="
   },
   {
     "source": {
@ -138,7 +138,7 @@
         "subdir": "jsonnet/kube-state-metrics-mixin"
       }
     },
-    "version": "e96dfc0a39d8e2ae759a954a98d8bc9b29bf1a3e",
+    "version": "7104d579e93d672754c018a924d6c3f7ec23874e",
     "sum": "qclI7LwucTjBef3PkGBkKxF0mfZPbHnn4rlNWKGtR4c="
   },
   {
@ -168,7 +168,7 @@
         "subdir": "jsonnet/mixin"
       }
     },
-    "version": "105b88afada91ecd4dab14b6d091b0933c749972",
+    "version": "609424db53853b992277b7a9a0e5cf59f4cc24f3",
     "sum": "gi+knjdxs2T715iIQIntrimbHRgHnpM8IFBJDD1gYfs=",
     "name": "prometheus-operator-mixin"
   },
@ -179,8 +179,8 @@
         "subdir": "jsonnet/prometheus-operator"
       }
     },
-    "version": "105b88afada91ecd4dab14b6d091b0933c749972",
-    "sum": "lTyttpFADJ40Zd7FuwgXcXswU+7grlQBeXms7gyabYc="
+    "version": "609424db53853b992277b7a9a0e5cf59f4cc24f3",
+    "sum": "z2/5LjQpWC7snhT+n/mtQqoy5986uI95sTqcKQziwGU="
   },
   {
     "source": {
@ -189,7 +189,7 @@
         "subdir": "doc/alertmanager-mixin"
       }
     },
-    "version": "cad5fa580108431e6ed209f2a23a373aa50c098f",
+    "version": "eb8369ec510d76f63901379a8437c4b55885d6c5",
     "sum": "IpF46ZXsm+0wJJAPtAre8+yxTNZA57mBqGpBP/r7/kw=",
     "name": "alertmanager"
   },
@ -210,7 +210,7 @@
         "subdir": "documentation/prometheus-mixin"
       }
     },
-    "version": "d4f098ae80fb276153efc757e373c813163da0e8",
+    "version": "e9dec5fc537b1709f3a0e4c959043fb159b5d413",
     "sum": "dYLcLzGH4yF3qB7OGC/7z4nqeTNjv42L7Q3BENU8XJI=",
     "name": "prometheus"
   },
@ -232,7 +232,7 @@
         "subdir": "mixin"
       }
     },
-    "version": "639bf8f216494ad9c375ebaac45f5d15715065ba",
+    "version": "35c0dbec856f97683a846e9c53f83156a3a44ff3",
     "sum": "HhSSbGGCNHCMy1ee5jElYDm0yS9Vesa7QB2/SHKdjsY=",
     "name": "thanos-mixin"
   }

View File

@ -155,10 +155,6 @@ aws-ebs-csi-driver:
   loggingFormat: json
   tolerateAllTaints: false
   priorityClassName: system-node-critical
-  # We have /var on additional volume: root + ENI + /var = 3
-  reservedVolumeAttachments: 3
   tolerations:
   - key: kubezero-workergroup
     effect: NoSchedule
@ -244,7 +240,7 @@ aws-efs-csi-driver:
       cpu: 20m
       memory: 96Mi
     limits:
-      memory: 256Mi
+      memory: 128Mi
   affinity:
     nodeAffinity:

View File

@ -29,43 +29,8 @@ Kubernetes: `>= 1.26.0`
 | Key | Type | Default | Description |
 |-----|------|---------|-------------|
-| data-prepper.config."log4j2-rolling.properties" | string | `"status = error\ndest = err\nname = PropertiesConfig\n\nappender.console.type = Console\nappender.console.name = STDOUT\nappender.console.layout.type = PatternLayout\nappender.console.layout.pattern = %d{ISO8601} [%t] %-5p %40C - %m%n\n\nrootLogger.level = warn\nrootLogger.appenderRef.stdout.ref = STDOUT\n\nlogger.pipeline.name = org.opensearch.dataprepper.pipeline\nlogger.pipeline.level = info\n\nlogger.parser.name = org.opensearch.dataprepper.parser\nlogger.parser.level = info\n\nlogger.plugins.name = org.opensearch.dataprepper.plugins\nlogger.plugins.level = info\n"` | |
-| data-prepper.enabled | bool | `false` | |
-| data-prepper.pipelineConfig.config.otel-service-map-pipeline.buffer.bounded_blocking | string | `nil` | |
-| data-prepper.pipelineConfig.config.otel-service-map-pipeline.delay | int | `3000` | |
-| data-prepper.pipelineConfig.config.otel-service-map-pipeline.processor[0].service_map.window_duration | int | `180` | |
-| data-prepper.pipelineConfig.config.otel-service-map-pipeline.sink[0].opensearch.bulk_size | int | `4` | |
-| data-prepper.pipelineConfig.config.otel-service-map-pipeline.sink[0].opensearch.hosts[0] | string | `"https://telemetry:9200"` | |
-| data-prepper.pipelineConfig.config.otel-service-map-pipeline.sink[0].opensearch.index_type | string | `"trace-analytics-service-map"` | |
-| data-prepper.pipelineConfig.config.otel-service-map-pipeline.sink[0].opensearch.insecure | bool | `true` | |
-| data-prepper.pipelineConfig.config.otel-service-map-pipeline.sink[0].opensearch.password | string | `"admin"` | |
-| data-prepper.pipelineConfig.config.otel-service-map-pipeline.sink[0].opensearch.username | string | `"admin"` | |
-| data-prepper.pipelineConfig.config.otel-service-map-pipeline.source.pipeline.name | string | `"otel-trace-pipeline"` | |
-| data-prepper.pipelineConfig.config.otel-service-map-pipeline.workers | int | `1` | |
-| data-prepper.pipelineConfig.config.otel-trace-pipeline.buffer.bounded_blocking | string | `nil` | |
-| data-prepper.pipelineConfig.config.otel-trace-pipeline.delay | string | `"100"` | |
-| data-prepper.pipelineConfig.config.otel-trace-pipeline.sink[0].pipeline.name | string | `"raw-traces-pipeline"` | |
-| data-prepper.pipelineConfig.config.otel-trace-pipeline.sink[1].pipeline.name | string | `"otel-service-map-pipeline"` | |
-| data-prepper.pipelineConfig.config.otel-trace-pipeline.source.otel_trace_source.ssl | bool | `false` | |
-| data-prepper.pipelineConfig.config.otel-trace-pipeline.workers | int | `1` | |
-| data-prepper.pipelineConfig.config.raw-traces-pipeline.buffer.bounded_blocking | string | `nil` | |
-| data-prepper.pipelineConfig.config.raw-traces-pipeline.delay | int | `3000` | |
-| data-prepper.pipelineConfig.config.raw-traces-pipeline.processor[0].otel_traces | string | `nil` | |
-| data-prepper.pipelineConfig.config.raw-traces-pipeline.processor[1].otel_trace_group.hosts[0] | string | `"https://telemetry:9200"` | |
-| data-prepper.pipelineConfig.config.raw-traces-pipeline.processor[1].otel_trace_group.insecure | bool | `true` | |
-| data-prepper.pipelineConfig.config.raw-traces-pipeline.processor[1].otel_trace_group.password | string | `"admin"` | |
-| data-prepper.pipelineConfig.config.raw-traces-pipeline.processor[1].otel_trace_group.username | string | `"admin"` | |
-| data-prepper.pipelineConfig.config.raw-traces-pipeline.sink[0].opensearch.hosts[0] | string | `"https://telemetry:9200"` | |
-| data-prepper.pipelineConfig.config.raw-traces-pipeline.sink[0].opensearch.index_type | string | `"trace-analytics-raw"` | |
-| data-prepper.pipelineConfig.config.raw-traces-pipeline.sink[0].opensearch.insecure | bool | `true` | |
-| data-prepper.pipelineConfig.config.raw-traces-pipeline.sink[0].opensearch.password | string | `"admin"` | |
-| data-prepper.pipelineConfig.config.raw-traces-pipeline.sink[0].opensearch.username | string | `"admin"` | |
-| data-prepper.pipelineConfig.config.raw-traces-pipeline.source.pipeline.name | string | `"otel-trace-pipeline"` | |
-| data-prepper.pipelineConfig.config.raw-traces-pipeline.workers | int | `1` | |
-| data-prepper.pipelineConfig.config.simple-sample-pipeline | string | `nil` | |
-| data-prepper.securityContext.capabilities.drop[0] | string | `"ALL"` | |
 | fluent-bit.config.customParsers | string | `"[PARSER]\n Name cri-log\n Format regex\n Regex ^(?<time>.+) (?<stream>stdout|stderr) (?<logtag>F|P) (?<log>.*)$\n Time_Key time\n Time_Format %Y-%m-%dT%H:%M:%S.%L%z\n"` | |
-| fluent-bit.config.filters | string | `"[FILTER]\n Name parser\n Match cri.*\n Parser cri-log\n Key_Name log\n\n[FILTER]\n Name kubernetes\n Match cri.*\n Merge_Log On\n Merge_Log_Key kube\n Kube_Tag_Prefix cri.var.log.containers.\n Keep_Log Off\n Annotations Off\n K8S-Logging.Parser Off\n K8S-Logging.Exclude Off\n Kube_Meta_Cache_TTL 3600s\n Buffer_Size 0\n #Use_Kubelet true\n\n{{- if index .Values \"config\" \"extraRecords\" }}\n\n[FILTER]\n Name record_modifier\n Match cri.*\n {{- range $k,$v := index .Values \"config\" \"extraRecords\" }}\n Record {{ $k }} {{ $v }}\n {{- end }}\n{{- end }}\n\n[FILTER]\n Name rewrite_tag\n Match cri.*\n Emitter_Name kube_tag_rewriter\n Rule $kubernetes['pod_id'] .* kube.$kubernetes['namespace_name'].$kubernetes['container_name'] false\n\n[FILTER]\n Name lua\n Match kube.*\n script /fluent-bit/scripts/kubezero.lua\n call nest_k8s_ns\n"` | |
+| fluent-bit.config.filters | string | `"[FILTER]\n Name parser\n Match cri.*\n Parser cri-log\n Key_Name log\n\n[FILTER]\n Name kubernetes\n Match cri.*\n Merge_Log On\n Merge_Log_Key kube\n Kube_Tag_Prefix cri.var.log.containers.\n Keep_Log Off\n K8S-Logging.Parser Off\n K8S-Logging.Exclude Off\n Kube_Meta_Cache_TTL 3600s\n Buffer_Size 0\n #Use_Kubelet true\n\n{{- if index .Values \"config\" \"extraRecords\" }}\n\n[FILTER]\n Name record_modifier\n Match cri.*\n {{- range $k,$v := index .Values \"config\" \"extraRecords\" }}\n Record {{ $k }} {{ $v }}\n {{- end }}\n{{- end }}\n\n[FILTER]\n Name rewrite_tag\n Match cri.*\n Emitter_Name kube_tag_rewriter\n Rule $kubernetes['pod_id'] .* kube.$kubernetes['namespace_name'].$kubernetes['container_name'] false\n\n[FILTER]\n Name lua\n Match kube.*\n script /fluent-bit/scripts/kubezero.lua\n call nest_k8s_ns\n"` | |
 | fluent-bit.config.flushInterval | int | `5` | |
 | fluent-bit.config.input.memBufLimit | string | `"16MB"` | |
 | fluent-bit.config.input.refreshInterval | int | `5` | |
@ -155,8 +120,6 @@ Kubernetes: `>= 1.26.0`
 | jaeger.provisionDataStore.elasticsearch | bool | `false` | |
 | jaeger.query.agentSidecar.enabled | bool | `false` | |
 | jaeger.query.serviceMonitor.enabled | bool | `false` | |
-| jaeger.storage.elasticsearch.cmdlineParams."es.num-replicas" | int | `1` | |
-| jaeger.storage.elasticsearch.cmdlineParams."es.num-shards" | int | `2` | |
 | jaeger.storage.elasticsearch.cmdlineParams."es.tls.enabled" | string | `""` | |
 | jaeger.storage.elasticsearch.cmdlineParams."es.tls.skip-host-verify" | string | `""` | |
 | jaeger.storage.elasticsearch.host | string | `"telemetry"` | |
@ -170,9 +133,7 @@ Kubernetes: `>= 1.26.0`
 | opensearch.dashboard.istio.url | string | `"telemetry-dashboard.example.com"` | |
 | opensearch.nodeSets | list | `[]` | |
 | opensearch.prometheus | bool | `false` | |
-| opensearch.version | string | `"2.16.0"` | |
+| opensearch.version | string | `"2.15.0"` | |
-| opentelemetry-collector.config.exporters.otlp/data-prepper.endpoint | string | `"telemetry-data-prepper:21890"` | |
-| opentelemetry-collector.config.exporters.otlp/data-prepper.tls.insecure | bool | `true` | |
 | opentelemetry-collector.config.exporters.otlp/jaeger.endpoint | string | `"telemetry-jaeger-collector:4317"` | |
 | opentelemetry-collector.config.exporters.otlp/jaeger.tls.insecure | bool | `true` | |
 | opentelemetry-collector.config.extensions.health_check.endpoint | string | `"${env:MY_POD_IP}:13133"` | |
@ -188,7 +149,6 @@ Kubernetes: `>= 1.26.0`
 | opentelemetry-collector.config.service.pipelines.logs | string | `nil` | |
 | opentelemetry-collector.config.service.pipelines.metrics | string | `nil` | |
 | opentelemetry-collector.config.service.pipelines.traces.exporters[0] | string | `"otlp/jaeger"` | |
-| opentelemetry-collector.config.service.pipelines.traces.exporters[1] | string | `"otlp/data-prepper"` | |
 | opentelemetry-collector.config.service.pipelines.traces.processors[0] | string | `"memory_limiter"` | |
 | opentelemetry-collector.config.service.pipelines.traces.processors[1] | string | `"batch"` | |
 | opentelemetry-collector.config.service.pipelines.traces.receivers[0] | string | `"otlp"` | |

View File

@ -35,5 +35,5 @@ spec:
   indexPatterns:
     - "logstash-*"
     - "jaeger-*"
     - "otel-v1-apm-span-*"
 {{- end }}

View File

@ -62,6 +62,9 @@ data-prepper:
           name: "otel-trace-pipeline"
       processor:
         - service_map:
+            # The window duration is the maximum length of time data-prepper stores the most recent trace data to evaluate service-map relationships.
+            # The default is 3 minutes; this means we can detect relationships between services from spans reported in the last 3 minutes.
+            # Set a higher value if your applications have higher latency.
             window_duration: 180
       buffer:
         bounded_blocking:
@ -228,7 +231,7 @@ jaeger:
     url: jaeger.example.com

 opensearch:
-  version: 2.16.0
+  version: 2.15.0
   prometheus: false

   # custom cluster settings
@ -574,7 +577,6 @@ fluent-bit:
         Merge_Log_Key kube
         Kube_Tag_Prefix cri.var.log.containers.
         Keep_Log Off
-        Annotations Off
         K8S-Logging.Parser Off
         K8S-Logging.Exclude Off
         Kube_Meta_Cache_TTL 3600s
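For reference, the `window_duration` documented above lives in the data-prepper service-map pipeline; a minimal sketch of that pipeline, with field names taken from the chart README table above (endpoint and index settings are the chart defaults, shown here purely for illustration):

```yaml
otel-service-map-pipeline:
  workers: 1
  delay: 3000
  source:
    pipeline:
      name: "otel-trace-pipeline"
  processor:
    - service_map:
        window_duration: 180   # seconds of recent trace data evaluated for service relationships
  sink:
    - opensearch:
        hosts: ["https://telemetry:9200"]
        index_type: trace-analytics-service-map
```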

View File

@ -2,7 +2,7 @@ apiVersion: v2
 name: kubezero
 description: KubeZero - Root App of Apps chart
 type: application
-version: 1.29.7-1
+version: 1.29.7
 home: https://kubezero.com
 icon: https://cdn.zero-downtime.net/assets/kubezero/logo-small-64.png
 keywords:
@ -15,4 +15,4 @@ dependencies:
   - name: kubezero-lib
     version: ">= 0.1.6"
     repository: https://cdn.zero-downtime.net/charts
-kubeVersion: ">= 1.26.0-0"
+kubeVersion: ">= 1.26.0"

View File

@ -181,7 +181,6 @@ aws-eks-asg-rolling-update-handler:
         - name: AWS_WEB_IDENTITY_TOKEN_FILE
           value: "/var/run/secrets/sts.amazonaws.com/serviceaccount/token"
         - name: AWS_STS_REGIONAL_ENDPOINTS
-          value: "regional"
 {{- end }}
 {{- end }}

View File

@ -13,9 +13,6 @@ argo-cd:
   repoServer:
     metrics:
       enabled: {{ .Values.metrics.enabled }}
-    {{- with index .Values "argo" "argo-cd" "repoServer" }}
-    {{- toYaml . | nindent 4 }}
-    {{- end }}
   server:
     metrics:
       enabled: {{ .Values.metrics.enabled }}

View File

@ -9,32 +9,11 @@ cert-manager:
       type: Recreate
   {{- end }}

-  {{- if eq .Values.global.platform "aws" }}
-  # map everything to the control-plane
-  nodeSelector:
-    node-role.kubernetes.io/control-plane: ""
-  tolerations:
-  - key: node-role.kubernetes.io/control-plane
-    effect: NoSchedule
-
-  webhook:
-    tolerations:
-    - key: node-role.kubernetes.io/control-plane
-      effect: NoSchedule
-    nodeSelector:
-      node-role.kubernetes.io/control-plane: ""
-
-  cainjector:
-    tolerations:
-    - key: node-role.kubernetes.io/control-plane
-      effect: NoSchedule
-    nodeSelector:
-      node-role.kubernetes.io/control-plane: ""
-
+  prometheus:
+    servicemonitor:
+      enabled: {{ $.Values.metrics.enabled }}
+
+  {{ with index .Values "cert-manager" "IamArn" }}
   extraEnv:
-  - name: AWS_REGION
-    value: {{ .Values.global.aws.region }}
-
-  {{ with index .Values "cert-manager" "IamArn" }}
   - name: AWS_ROLE_ARN
     value: "{{ . }}"
   - name: AWS_WEB_IDENTITY_TOKEN_FILE
@ -55,19 +34,7 @@
   - name: aws-token
     mountPath: "/var/run/secrets/sts.amazonaws.com/serviceaccount/"
     readOnly: true
   {{- end }}
-  {{- end }}
-
-  {{- if eq .Values.global.platform "gke" }}
-  serviceAccount:
-    annotations:
-      iam.gke.io/gcp-service-account: "dns01-solver@{{ .Values.global.gcp.projectId }}.iam.gserviceaccount.com"
-  {{- end }}
-
-  prometheus:
-    servicemonitor:
-      enabled: {{ $.Values.metrics.enabled }}

 {{- with index .Values "cert-manager" "clusterIssuer" }}
 clusterIssuer:
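The platform-specific branches removed above are driven by a few root values; a sketch of the inputs involved (the ARN, region, and project ID below are illustrative placeholders):

```yaml
global:
  platform: aws            # or "gke" / "nocloud"
  aws:
    region: eu-central-1   # exported as AWS_REGION for the dns01 solver
  gcp:
    projectId: my-project  # used for the GKE dns01-solver service account
cert-manager:
  IamArn: arn:aws:iam::123456789012:role/certManager
```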

View File

@ -3,10 +3,6 @@
 gateway:
   name: istio-ingressgateway

-  {{- if ne .Values.global.platform "gke" }}
-  priorityClassName: "system-cluster-critical"
-  {{- end }}
-
   {{- with index .Values "istio-ingress" "gateway" "replicaCount" }}
   replicaCount: {{ . }}
   {{- if gt (int .) 1 }}
@ -15,7 +11,7 @@ gateway:
   {{- end }}
   {{- end }}

-  {{- if eq .Values.global.platform "aws" }}
+  {{- if not (index .Values "istio-ingress" "gateway" "affinity") }}
   # Only nodes who are fronted with matching LB
   affinity:
     nodeAffinity:

View File

@ -3,10 +3,6 @@
 gateway:
   name: istio-private-ingressgateway

-  {{- if ne .Values.global.platform "gke" }}
-  priorityClassName: "system-cluster-critical"
-  {{- end }}
-
   {{- with index .Values "istio-private-ingress" "gateway" "replicaCount" }}
   replicaCount: {{ . }}
   {{- if gt (int .) 1 }}
@ -15,7 +11,7 @@ gateway:
   {{- end }}
   {{- end }}

-  {{- if eq .Values.global.platform "aws" }}
+  {{- if not (index .Values "istio-private-ingress" "gateway" "affinity") }}
   # Only nodes who are fronted with matching LB
   affinity:
     nodeAffinity:

View File

@ -1,37 +1,21 @@
 {{- define "istio-values" }}
-{{- if .Values.global.highAvailable }}
-global:
-  defaultPodDisruptionBudget:
-    enabled: true
-  {{- if ne .Values.global.platform "gke" }}
-  priorityClassName: "system-cluster-critical"
-  {{- end }}
-{{- end }}
-
 istiod:
   telemetry:
     enabled: {{ $.Values.metrics.enabled }}
   pilot:
-    {{- if eq .Values.global.platform "aws" }}
-    nodeSelector:
-      node-role.kubernetes.io/control-plane: ""
-    tolerations:
-    - key: node-role.kubernetes.io/control-plane
-      effect: NoSchedule
-    {{- end }}
     {{- if .Values.global.highAvailable }}
     replicaCount: 2
+
+global:
+  defaultPodDisruptionBudget:
+    enabled: true
     {{- else }}
     extraContainerArgs:
     - --leader-elect=false
     {{- end }}

 {{- with index .Values "istio" "kiali-server" }}
 kiali-server:
   {{- toYaml . | nindent 2 }}
 {{- end }}

 {{- with .Values.istio.rateLimiting }}
 rateLimiting:
   {{- toYaml . | nindent 2 }}
View File

@ -5,15 +5,9 @@ kubezero:
   gitSync: {}

 global:
-  clusterName: zdt-trial-cluster
-
-  # platform: aws (kubeadm, default), gke, or nocloud
-  platform: "aws"
   highAvailable: false
+  clusterName: zdt-trial-cluster

   aws: {}
-  gcp: {}

 addons:
   enabled: true
@ -43,7 +37,7 @@ network:
 cert-manager:
   enabled: false
   namespace: cert-manager
-  targetRevision: 0.9.9
+  targetRevision: 0.9.8

 storage:
   enabled: false
@ -64,13 +58,13 @@ storage:
 istio:
   enabled: false
   namespace: istio-system
-  targetRevision: 0.22.3-1
+  targetRevision: 0.22.3

 istio-ingress:
   enabled: false
   chart: kubezero-istio-gateway
   namespace: istio-ingress
-  targetRevision: 0.22.3-1
+  targetRevision: 0.22.3

   gateway:
     service: {}
@ -78,7 +72,7 @@ istio-private-ingress:
   enabled: false
   chart: kubezero-istio-gateway
   namespace: istio-ingress
-  targetRevision: 0.22.3-1
+  targetRevision: 0.22.3

   gateway:
     service: {}
@ -114,7 +108,7 @@ metrics:
 logging:
   enabled: false
   namespace: logging
-  targetRevision: 0.8.12
+  targetRevision: 0.8.11

 argo:
   enabled: false

View File

Binary image changed (Before: 166 KiB, After: 166 KiB)

View File

Binary file not shown (image removed, Before: 36 KiB)
View File

@ -1,22 +0,0 @@
# ![k8s-v1.29](images/k8s-v129.png) KubeZero 1.29
## What's new - Major themes
- all KubeZero and support AMIs are based on Alpine 3.20.1
- new (optional) integrated Telemetry module consisting of an OpenTelemetry Collector, Jaeger UI, and an OpenSearch + Dashboards backend
- custom KubeZero ArgoCD edition adding support for referring to external secrets via helm-secrets + vals
- updated Nvidia and AWS Neuron drivers to latest versions for AI/ML workloads
- Falco IDS now uses the modern eBPF event source (preview)
## Version upgrades
- cilium 1.15.7
- istio 1.22.3
- ArgoCD 2.11.5
- Prometheus 2.53 / Grafana 11.1 (fixing many of the previous warnings)
- ...
### FeatureGates
- CustomCPUCFSQuotaPeriod
- KubeProxyDrainingTerminatingNodes
- ImageMaximumGCAge
## Known issues