Compare commits


49 Commits

Author SHA1 Message Date
355cf21fe5 feat: add support for custom argo-cd repoServer options 2024-09-13 05:48:03 +00:00
38510e7a6c feat: keycloak version bump 2024-09-10 01:46:40 +00:00
595b3ba863 Merge pull request 'chore(deps): update keycloak docker tag to v22.2.1' (#364) from renovate/kubezero-auth-kubezero-auth-dependencies into main
Reviewed-on: #364
2024-09-10 01:45:32 +00:00
0fddeed052 feat: set network pullpolicy to Never, fix for cert-manager on AWS, doc updates 2024-08-29 12:49:31 +00:00
48e58c00ce Merge pull request 'chore(deps): update helm release renovate to v38' (#356) from renovate/kubezero-ci-major-kubezero-ci-dependencies into main
Reviewed-on: #356
2024-08-29 10:41:57 +00:00
e25514eb96 chore(deps): update helm release renovate to v38 2024-08-29 03:28:19 +00:00
cc6dfab616 chore(deps): update keycloak docker tag to v22.2.1 2024-08-28 03:34:44 +00:00
cd093fd2ac Merge pull request 'release/v1.29' (#368) from release/v1.29 into main
Reviewed-on: #368
2024-08-24 10:33:33 +00:00
6dc5e43c24 fix: set Istio attributes properly again 2024-08-24 10:19:39 +00:00
1058ec1a1d fix: switch default platform to aws for easy migrate 2024-08-24 09:53:25 +00:00
bda47405ac Add pause to upgrade script after control plane 2024-08-24 10:52:24 +01:00
5f01d4ec85 feat: Major version bump for all MQ components 2024-08-23 14:56:24 +00:00
9697c120bf fix: adjust keycloak postgresql resources to reasonable values 2024-08-23 14:56:06 +00:00
9f78ef35c2 Merge pull request 'chore(deps): update kubezero-mq-dependencies (major)' (#323) from renovate/kubezero-mq-major-kubezero-mq-dependencies into main
Reviewed-on: #323
2024-08-23 14:42:28 +00:00
26c65a3f99 chore(deps): update kubezero-mq-dependencies 2024-08-23 12:31:29 +00:00
a578f04249 Merge pull request 'chore(deps): update kubezero-mq-dependencies' (#319) from renovate/kubezero-mq-kubezero-mq-dependencies into main
Reviewed-on: #319
2024-08-23 12:14:12 +00:00
8d819c9d02 feat: latest CI Jenkins, some doc fixes 2024-08-22 13:08:13 +00:00
e834017bdb Merge pull request 'chore(deps): update helm release jenkins to v5.5.8' (#359) from renovate/kubezero-ci-kubezero-ci-dependencies into main
Reviewed-on: #359
2024-08-22 12:25:12 +00:00
25dedc7252 Merge pull request 'fix: bump memory limit for new EFS ds to prevent OOMs' (#366) from release/v1.29 into main
Reviewed-on: #366
2024-08-21 19:17:01 +00:00
852292102b fix: bump memory limit for new EFS ds to prevent OOMs 2024-08-21 19:10:48 +00:00
432da07f63 fix: pin Redis to latest OSS version for now 2024-08-21 19:08:37 +00:00
001b7ae0dc feat: rename kubezero-redis to kubezero-keyvalue, bump all redis versions 2024-08-21 12:52:55 +00:00
5583e79ba5 chore(deps): update helm release jenkins to v5.5.8 2024-08-20 03:16:57 +00:00
e3ab74e970 Merge pull request 'Docs updates - cherry pick' (#363) from release/v1.29 into main
Reviewed-on: #363
2024-08-19 14:51:23 +00:00
b4ad8ee3bc doc: update support timelines 2024-08-19 15:50:12 +01:00
8849e76a9c doc: update support timelines 2024-08-19 15:22:40 +01:00
0e3980cb73 chore: doc fixes 2024-08-19 15:11:32 +01:00
1f8329f631 feat: latest OpenSearch, typos 2024-08-19 14:10:28 +00:00
c5f82c0948 feat: major version bump for auth / keycloak 2024-08-16 13:06:32 +00:00
339085c928 Merge pull request 'chore(deps): update keycloak docker tag to v22' (#350) from renovate/kubezero-auth-major-kubezero-auth-dependencies into main
Reviewed-on: #350
2024-08-16 09:38:04 +00:00
bbc0984f16 chore(deps): update keycloak docker tag to v22 2024-08-14 03:37:17 +00:00
ac75adf604 Fix: use latest jenkins-podman image 2024-08-09 16:09:42 +00:00
c2d348a597 Merge pull request 'Adjust basic modules to support KubeZero on GKE' (#358) from gcp into main
Reviewed-on: #358
2024-08-09 11:15:52 +00:00
af0b7fea01 Merge branch 'main' into gcp 2024-08-09 11:15:30 +00:00
cb79383c3e ci: release first GKE charts 2024-08-09 10:57:06 +00:00
12bd7199f9 feat: CI tools version bump, new annotation to jenkins agents to prevent autoscaler evictions 2024-08-09 10:52:23 +00:00
9190961935 feat: first working version of KubeZero on GKE 2024-08-09 11:45:27 +01:00
e7f40804c6 feat: some prep work for gcp support 2024-08-09 10:41:24 +00:00
342883c4ae Merge pull request 'chore(deps): update kubezero-ci-dependencies' (#342) from renovate/kubezero-ci-kubezero-ci-dependencies into main
Reviewed-on: #342
2024-08-09 10:37:54 +00:00
41b9d2bd77 Merge pull request 'chore(deps): update helm release cert-manager to v1.15.2' (#355) from renovate/kubezero-cert-manager-kubezero-cert-manager-dependencies into main
Reviewed-on: #355
2024-08-08 20:49:33 +00:00
58ee697697 chore(deps): update kubezero-ci-dependencies 2024-08-08 03:07:02 +00:00
f2ea52da7d fix: make fluent-bit run on control-plane again, latest logging module 2024-07-31 19:05:43 +00:00
f2b38c3b6b docs: v129 changelog updates 2024-07-31 19:44:49 +01:00
299eef216f chore(deps): update helm release cert-manager to v1.15.2 2024-07-31 03:06:40 +00:00
8a382417c7 docs: first draft of V1.29 changelog 2024-07-29 15:37:55 +00:00
1c58a691e3 fix: set proper attachement limits for EBS CSI driver 2024-07-26 17:26:23 +00:00
d7feeb15b1 fix: remove double labels, make upgrade_cluster work reliably on clustered control plane 2024-07-26 13:46:58 +00:00
9b56ccd447 fix: various minor fixes 2024-07-25 14:39:16 +01:00
24d341ec43 chore(deps): update kubezero-mq-dependencies 2024-06-27 15:32:09 +00:00
94 changed files with 625 additions and 2761 deletions

View File

@ -14,12 +14,12 @@ KubeZero is a Kubernetes distribution providing an integrated container platform
# Architecture
![aws_architecture](docs/aws_architecture.png)
![aws_architecture](docs/images/aws_architecture.png)
# Version / Support Matrix
KubeZero releases track the same *minor* version of Kubernetes.
Any 1.26.X-Y release of Kubezero supports any Kubernetes cluster 1.26.X.
Any 1.30.X-Y release of Kubezero supports any Kubernetes cluster 1.30.X.
KubeZero is distributed as a collection of versioned Helm charts, allowing custom upgrade schedules and module versions as needed.
@ -28,15 +28,15 @@ KubeZero is distributed as a collection of versioned Helm charts, allowing custo
gantt
title KubeZero Support Timeline
dateFormat YYYY-MM-DD
section 1.27
beta :127b, 2023-09-01, 2023-09-30
release :after 127b, 2024-04-30
section 1.28
beta :128b, 2024-03-01, 2024-04-30
release :after 128b, 2024-08-31
section 1.29
beta :129b, 2024-07-01, 2024-08-30
beta :129b, 2024-07-01, 2024-07-31
release :after 129b, 2024-11-30
section 1.30
beta :130b, 2024-09-01, 2024-10-31
release :after 130b, 2025-02-28
```
[Upstream release policy](https://kubernetes.io/releases/)
@ -44,7 +44,7 @@ gantt
# Components
## OS
- all compute nodes are running on Alpine V3.19
- all compute nodes are running on Alpine V3.20
- 1 or 2 GB encrypted root file system
- no external dependencies at boot time, apart from container registries
- minimal attack surface

View File

@ -19,6 +19,22 @@ SCRIPT_DIR=$( cd -- "$( dirname -- "${BASH_SOURCE[0]}" )" &> /dev/null && pwd )
. "$SCRIPT_DIR"/libhelm.sh
CHARTS="$(dirname $SCRIPT_DIR)/charts"
# Guess platform from current context
_auth_cmd=$(kubectl config view | yq .users[0].user.exec.command)
if [ "$_auth_cmd" == "gke-gcloud-auth-plugin" ]; then
PLATFORM=gke
elif [ "$_auth_cmd" == "aws-iam-authenticator" ]; then
PLATFORM=aws
else
PLATFORM=nocloud
fi
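
The detection above hinges entirely on the exec-credential plugin of the first kubeconfig user. A minimal sketch of what it sees per platform (plugin names exactly as matched by the script):

```bash
kubectl config view | yq '.users[0].user.exec.command'
# "gke-gcloud-auth-plugin" -> PLATFORM=gke
# "aws-iam-authenticator"  -> PLATFORM=aws
# anything else (or null)  -> PLATFORM=nocloud
```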
parse_version() {
echo $([[ $1 =~ ^v[0-9]+\.[0-9]+\.[0-9]+ ]] && echo "${BASH_REMATCH[0]//v/}")
}
KUBE_VERSION=$(parse_version $KUBE_VERSION)
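
For reference, `parse_version` keeps only the leading `vX.Y.Z` triple and strips the `v`; illustrative inputs (not from the source):

```bash
parse_version v1.30.4        # -> 1.30.4
parse_version v1.30.4-rc.1   # -> 1.30.4  (anything after the triple is dropped)
parse_version 1.30.4         # -> ""      (regex requires the leading "v")
```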
### Various hooks for modules
################
@ -71,7 +87,7 @@ if [ ${ARTIFACTS[0]} == "all" ]; then
fi
# Delete in reverse order, continue even if errors
if [ $ACTION == "delete" ]; then
if [ "$ACTION" == "delete" ]; then
set +e
for (( idx=${#ARTIFACTS[@]}-1 ; idx>=0 ; idx-- )) ; do
_helm delete ${ARTIFACTS[idx]} || true

View File

@ -66,6 +66,7 @@ render_kubeadm() {
parse_kubezero() {
export CLUSTERNAME=$(yq eval '.global.clusterName // .clusterName' ${HOSTFS}/etc/kubernetes/kubeadm-values.yaml)
export PLATFORM=$(yq eval '.global.platform // "nocloud"' ${HOSTFS}/etc/kubernetes/kubeadm-values.yaml)
export HIGHAVAILABLE=$(yq eval '.global.highAvailable // .highAvailable // "false"' ${HOSTFS}/etc/kubernetes/kubeadm-values.yaml)
export ETCD_NODENAME=$(yq eval '.etcd.nodeName' ${HOSTFS}/etc/kubernetes/kubeadm-values.yaml)
export NODENAME=$(yq eval '.nodeName' ${HOSTFS}/etc/kubernetes/kubeadm-values.yaml)
@ -148,10 +149,8 @@ kubeadm_upgrade() {
post_kubeadm
# If we have a re-cert kubectl config install for root
if [ -f ${HOSTFS}/etc/kubernetes/super-admin.conf ]; then
cp ${HOSTFS}/etc/kubernetes/super-admin.conf ${HOSTFS}/root/.kube/config
fi
# install re-certed kubectl config for root
cp ${HOSTFS}/etc/kubernetes/super-admin.conf ${HOSTFS}/root/.kube/config
# post upgrade hook
[ -f /var/lib/kubezero/post-upgrade.sh ] && . /var/lib/kubezero/post-upgrade.sh
@ -183,6 +182,10 @@ control_plane_node() {
# restore latest backup
retry 10 60 30 restic restore latest --no-lock -t / # --tag $KUBE_VERSION_MINOR
# get timestamp from latest snap for debug / message
# we need a way to surface this info to eg. Slack
#snapTime="$(restic snapshots latest --json | jq -r '.[].time')"
# Make last etcd snapshot available
cp ${WORKDIR}/etcd_snapshot ${HOSTFS}/etc/kubernetes
@ -260,7 +263,12 @@ control_plane_node() {
_kubeadm init phase kubelet-start
cp ${HOSTFS}/etc/kubernetes/super-admin.conf ${HOSTFS}/root/.kube/config
# Remove conditional with 1.30
if [ -f ${HOSTFS}/etc/kubernetes/super-admin.conf ]; then
cp ${HOSTFS}/etc/kubernetes/super-admin.conf ${HOSTFS}/root/.kube/config
else
cp ${HOSTFS}/etc/kubernetes/admin.conf ${HOSTFS}/root/.kube/config
fi
# Wait for api to be online
echo "Waiting for Kubernetes API to be online ..."
@ -306,7 +314,7 @@ control_plane_node() {
post_kubeadm
echo "${1} cluster $CLUSTERNAME successfull."
echo "${CMD}ed cluster $CLUSTERNAME successfully."
}
@ -364,7 +372,9 @@ backup() {
# pki & cluster-admin access
cp -r ${HOSTFS}/etc/kubernetes/pki ${WORKDIR}
cp ${HOSTFS}/etc/kubernetes/admin.conf ${WORKDIR}
cp ${HOSTFS}/etc/kubernetes/super-admin.conf ${WORKDIR}
# Remove conditional with 1.30
[ -f ${HOSTFS}/etc/kubernetes/super-admin.conf ] && cp ${HOSTFS}/etc/kubernetes/super-admin.conf ${WORKDIR}
# Backup via restic
restic backup ${WORKDIR} -H $CLUSTERNAME --tag $CLUSTER_VERSION

View File

@ -34,9 +34,11 @@ function argo_used() {
# get kubezero-values from ArgoCD if available or use in-cluster CM without Argo
function get_kubezero_values() {
local _namespace="kube-system"
[ "$PLATFORM" == "gke" ] && _namespace=kubezero
argo_used && \
{ kubectl get application kubezero -n argocd -o yaml | yq .spec.source.helm.values > ${WORKDIR}/kubezero-values.yaml; } || \
{ kubectl get configmap -n kube-system kubezero-values -o yaml | yq '.data."values.yaml"' > ${WORKDIR}/kubezero-values.yaml ;}
{ kubectl get configmap -n $_namespace kubezero-values -o yaml | yq '.data."values.yaml"' > ${WORKDIR}/kubezero-values.yaml ;}
}
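
Either branch leaves the merged cluster values at the same path, so later steps don't care whether Argo is installed. A usage sketch (assuming `WORKDIR` is exported and the values carry `.global.platform`, as elsewhere in these scripts):

```bash
get_kubezero_values
yq '.global.platform // "nocloud"' "$WORKDIR/kubezero-values.yaml"
```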
@ -169,14 +171,14 @@ function _helm() {
yq eval '.spec.source.helm.values' $WORKDIR/kubezero/templates/${module}.yaml > $WORKDIR/values.yaml
echo "using values to $action of module $module: "
cat $WORKDIR/values.yaml
if [ $action == "crds" ]; then
# Allow custom CRD handling
declare -F ${module}-crds && ${module}-crds || _crds
elif [ $action == "apply" ]; then
echo "using values to $action of module $module: "
cat $WORKDIR/values.yaml
# namespace must exist prior to apply
create_ns $namespace

View File

@ -21,6 +21,9 @@ argo_used && disable_argo
control_plane_upgrade kubeadm_upgrade
echo "Control plane upgraded, <Return> to continue"
read -r
#echo "Adjust kubezero values as needed:"
# shellcheck disable=SC2015
#argo_used && kubectl edit app kubezero -n argocd || kubectl edit cm kubezero-values -n kube-system
@ -38,6 +41,9 @@ echo "Applying remaining KubeZero modules..."
control_plane_upgrade "apply_cert-manager, apply_istio, apply_istio-ingress, apply_istio-private-ingress, apply_logging, apply_metrics, apply_telemetry, apply_argo"
# Final step is to commit the new argocd kubezero app
kubectl get app kubezero -n argocd -o yaml | yq 'del(.status) | del(.metadata) | del(.operation) | .metadata.name="kubezero" | .metadata.namespace="argocd"' | yq 'sort_keys(..) | .spec.source.helm.values |= (from_yaml | to_yaml)' > $ARGO_APP
# Trigger backup of upgraded cluster state
kubectl create job --from=cronjob/kubezero-backup kubezero-backup-$KUBE_VERSION -n kube-system
while true; do
@ -45,9 +51,6 @@ while true; do
sleep 1
done
# Final step is to commit the new argocd kubezero app
kubectl get app kubezero -n argocd -o yaml | yq 'del(.status) | del(.metadata) | del(.operation) | .metadata.name="kubezero" | .metadata.namespace="argocd"' | yq 'sort_keys(..) | .spec.source.helm.values |= (from_yaml | to_yaml)' > $ARGO_APP
echo "Please commit $ARGO_APP as the updated kubezero/application.yaml for your cluster."
echo "Then head over to ArgoCD for this cluster and sync all KubeZero modules to apply remaining upgrades."

View File

@ -1,6 +1,6 @@
# kubezero-addons
![Version: 0.8.8](https://img.shields.io/badge/Version-0.8.8-informational?style=flat-square) ![Type: application](https://img.shields.io/badge/Type-application-informational?style=flat-square) ![AppVersion: v1.28](https://img.shields.io/badge/AppVersion-v1.28-informational?style=flat-square)
![Version: 0.8.8](https://img.shields.io/badge/Version-0.8.8-informational?style=flat-square) ![Type: application](https://img.shields.io/badge/Type-application-informational?style=flat-square) ![AppVersion: v1.29](https://img.shields.io/badge/AppVersion-v1.29-informational?style=flat-square)
KubeZero umbrella chart for various optional cluster addons
@ -63,7 +63,7 @@ Device plugin for [AWS Neuron](https://aws.amazon.com/machine-learning/neuron/)
| aws-eks-asg-rolling-update-handler.environmentVars[8].name | string | `"AWS_STS_REGIONAL_ENDPOINTS"` | |
| aws-eks-asg-rolling-update-handler.environmentVars[8].value | string | `"regional"` | |
| aws-eks-asg-rolling-update-handler.image.repository | string | `"twinproduction/aws-eks-asg-rolling-update-handler"` | |
| aws-eks-asg-rolling-update-handler.image.tag | string | `"v1.8.3"` | |
| aws-eks-asg-rolling-update-handler.image.tag | string | `"v1.8.4"` | |
| aws-eks-asg-rolling-update-handler.nodeSelector."node-role.kubernetes.io/control-plane" | string | `""` | |
| aws-eks-asg-rolling-update-handler.resources.limits.memory | string | `"128Mi"` | |
| aws-eks-asg-rolling-update-handler.resources.requests.cpu | string | `"10m"` | |

View File

@ -33,4 +33,4 @@ dependencies:
version: 0.11.0
repository: https://argoproj.github.io/argo-helm
condition: argocd-image-updater.enabled
kubeVersion: ">= 1.26.0"
kubeVersion: ">= 1.26.0-0"
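
The trailing `-0` matters on managed clouds: Helm evaluates `kubeVersion` with semver rules under which a constraint without a pre-release part never matches pre-release versions, and GKE/EKS report versions like `v1.27.3-gke.100`. A sketch (chart path assumed):

```bash
# with kubeVersion ">= 1.26.0"   -> chart is rejected as incompatible
# with kubeVersion ">= 1.26.0-0" -> renders fine
helm template charts/kubezero-argo --kube-version v1.27.3-gke.100
```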

View File

@ -14,7 +14,7 @@ KubeZero Argo - Events, Workflow, CD
## Requirements
Kubernetes: `>= 1.26.0`
Kubernetes: `>= 1.26.0-0`
| Repository | Name | Version |
|------------|------|---------|
@ -65,7 +65,7 @@ Kubernetes: `>= 1.26.0`
| argo-cd.repoServer.initContainers[0].command[0] | string | `"/usr/local/bin/sa2kubeconfig.sh"` | |
| argo-cd.repoServer.initContainers[0].command[1] | string | `"/home/argocd/.kube/config"` | |
| argo-cd.repoServer.initContainers[0].image | string | `"{{ default .Values.global.image.repository .Values.repoServer.image.repository }}:{{ default (include \"argo-cd.defaultTag\" .) .Values.repoServer.image.tag }}"` | |
| argo-cd.repoServer.initContainers[0].imagePullPolicy | string | `"IfNotPresent"` | |
| argo-cd.repoServer.initContainers[0].imagePullPolicy | string | `"{{ default .Values.global.image.imagePullPolicy .Values.repoServer.image.imagePullPolicy }}"` | |
| argo-cd.repoServer.initContainers[0].name | string | `"create-kubeconfig"` | |
| argo-cd.repoServer.initContainers[0].securityContext.allowPrivilegeEscalation | bool | `false` | |
| argo-cd.repoServer.initContainers[0].securityContext.capabilities.drop[0] | string | `"ALL"` | |

View File

@ -26,7 +26,7 @@ argo-events:
versions:
- version: 2.10.11
natsImage: nats:2.10.11-scratch
metricsExporterImage: natsio/prometheus-nats-exporter:0.15.0
metricsExporterImage: natsio/prometheus-nats-exporter:0.14.0
configReloaderImage: natsio/nats-server-config-reloader:0.14.1
startCommand: /nats-server
@ -91,7 +91,7 @@ argo-cd:
secret:
createSecret: false
# `htpasswd -nbBC 10 "" $ARGO_PWD | tr -d ':\n' | sed 's/$2y/$2a/'`
# `htpasswd -nbBC 10 "" $ARGO_PWD | tr -d ':\n' | sed 's/$2y/$2a/' | base64 -w0`
# argocdServerAdminPassword: "$2a$10$ivKzaXVxMqdeDSfS3nqi1Od3iDbnL7oXrixzDfZFRHlXHnAG6LydG"
# argocdServerAdminPassword: "ref+file://secrets.yaml#/test"
# argocdServerAdminPasswordMtime: "2020-04-24T15:33:09BST"
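
Spelled out, the updated recipe produces a Secret-ready value in one line (example password assumed):

```bash
ARGO_PWD='changeme'   # assumed example password
# htpasswd -B emits ":$2y$10$..."; strip the empty user and the newline, swap
# the bcrypt prefix to "$2a" as Argo CD expects, then base64-encode on one line.
htpasswd -nbBC 10 "" "$ARGO_PWD" | tr -d ':\n' | sed 's/$2y/$2a/' | base64 -w0
```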

View File

@ -2,7 +2,7 @@ apiVersion: v2
name: kubezero-auth
description: KubeZero umbrella chart for all things Authentication and Identity management
type: application
version: 0.4.6
version: 0.5.1
appVersion: 22.0.5
home: https://kubezero.com
icon: https://cdn.zero-downtime.net/assets/kubezero/logo-small-64.png
@ -16,9 +16,8 @@ dependencies:
- name: kubezero-lib
version: ">= 0.1.6"
repository: https://cdn.zero-downtime.net/charts/
- #! renovate: datasource=docker
name: keycloak
- name: keycloak
repository: "oci://registry-1.docker.io/bitnamicharts"
version: 18.7.1
version: 22.2.1
condition: keycloak.enabled
kubeVersion: ">= 1.26.0"

View File

@ -1,6 +1,6 @@
# kubezero-auth
![Version: 0.4.6](https://img.shields.io/badge/Version-0.4.6-informational?style=flat-square) ![Type: application](https://img.shields.io/badge/Type-application-informational?style=flat-square) ![AppVersion: 22.0.5](https://img.shields.io/badge/AppVersion-22.0.5-informational?style=flat-square)
![Version: 0.5.1](https://img.shields.io/badge/Version-0.5.1-informational?style=flat-square) ![Type: application](https://img.shields.io/badge/Type-application-informational?style=flat-square) ![AppVersion: 22.0.5](https://img.shields.io/badge/AppVersion-22.0.5-informational?style=flat-square)
KubeZero umbrella chart for all things Authentication and Identity management
@ -19,7 +19,7 @@ Kubernetes: `>= 1.26.0`
| Repository | Name | Version |
|------------|------|---------|
| https://cdn.zero-downtime.net/charts/ | kubezero-lib | >= 0.1.6 |
| oci://registry-1.docker.io/bitnamicharts | keycloak | 18.7.1 |
| oci://registry-1.docker.io/bitnamicharts | keycloak | 22.2.1 |
# Keycloak
@ -41,6 +41,7 @@ https://github.com/keycloak/keycloak-benchmark/tree/main/provision/minikube/keyc
| keycloak.auth.existingSecret | string | `"kubezero-auth"` | |
| keycloak.auth.passwordSecretKey | string | `"admin-password"` | |
| keycloak.enabled | bool | `false` | |
| keycloak.hostnameStrict | bool | `false` | |
| keycloak.istio.admin.enabled | bool | `false` | |
| keycloak.istio.admin.gateway | string | `"istio-ingress/private-ingressgateway"` | |
| keycloak.istio.admin.url | string | `""` | |
@ -55,9 +56,13 @@ https://github.com/keycloak/keycloak-benchmark/tree/main/provision/minikube/keyc
| keycloak.postgresql.auth.existingSecret | string | `"kubezero-auth"` | |
| keycloak.postgresql.auth.username | string | `"keycloak"` | |
| keycloak.postgresql.primary.persistence.size | string | `"1Gi"` | |
| keycloak.postgresql.primary.resources.limits.memory | string | `"128Mi"` | |
| keycloak.postgresql.primary.resources.requests.cpu | string | `"100m"` | |
| keycloak.postgresql.primary.resources.requests.memory | string | `"64Mi"` | |
| keycloak.postgresql.readReplicas.replicaCount | int | `0` | |
| keycloak.production | bool | `true` | |
| keycloak.proxy | string | `"edge"` | |
| keycloak.proxyHeaders | string | `"xforwarded"` | |
| keycloak.replicaCount | int | `1` | |
| keycloak.resources.limits.memory | string | `"768Mi"` | |
| keycloak.resources.requests.cpu | string | `"100m"` | |
| keycloak.resources.requests.memory | string | `"512Mi"` | |

View File

@ -2,11 +2,11 @@
## backup
- shell into running posgres-auth pod
- shell into running postgres-auth pod
```
export PGPASSWORD="<postgres_password from secret>"
cd /bitnami/posgres
pg_dumpall > backup
export PGPASSWORD="$POSTGRES_POSTGRES_PASSWORD"
cd /bitnami/postgresql
pg_dumpall -U postgres > /bitnami/postgresql/backup
```
- store backup off-site
@ -29,15 +29,17 @@ kubectl cp keycloak/kubezero-auth-postgresql-0:/bitnami/postgresql/backup postgr
kubectl cp postgres-backup keycloak/kubezero-auth-postgresql-0:/bitnami/postgresql/backup
```
- log into psql as admin ( shell on running pod )
- shell into running postgres-auth pod
```
export PGPASSWORD="$POSTGRES_POSTGRES_PASSWORD"
cd /bitnami/postgresql
psql -U postgres
```
- drop database `keycloak` in case the keycloak instances connected early
```
DROP database keycloak
```
```
- actual restore
```

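For reference, the whole round trip can also be driven from outside the pod; a hedged sketch reusing the pod and namespace from the `kubectl cp` steps above (assumes `POSTGRES_POSTGRES_PASSWORD` is set in the pod environment, as used earlier):

```bash
# dump everything to a local file
kubectl exec -n keycloak kubezero-auth-postgresql-0 -- \
  bash -c 'PGPASSWORD="$POSTGRES_POSTGRES_PASSWORD" pg_dumpall -U postgres' > postgres-backup
# later: copy it back and replay it (after dropping the keycloak DB as above)
kubectl cp postgres-backup keycloak/kubezero-auth-postgresql-0:/bitnami/postgresql/backup
kubectl exec -n keycloak kubezero-auth-postgresql-0 -- \
  bash -c 'PGPASSWORD="$POSTGRES_POSTGRES_PASSWORD" psql -U postgres -f /bitnami/postgresql/backup'
```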
View File

@ -1,8 +1,9 @@
keycloak:
enabled: false
proxy: edge
production: true
hostnameStrict: false
proxyHeaders: xforwarded
auth:
adminUser: admin
@ -15,14 +16,18 @@ keycloak:
create: false
minAvailable: 1
resources:
limits:
#cpu: 750m
memory: 768Mi
requests:
cpu: 100m
memory: 512Mi
metrics:
enabled: false
serviceMonitor:
enabled: true
resources:
requests:
cpu: 100m
memory: 512Mi
postgresql:
auth:
@ -34,6 +39,14 @@ keycloak:
persistence:
size: 1Gi
resources:
limits:
#cpu: 750m
memory: 128Mi
requests:
cpu: 100m
memory: 64Mi
readReplicas:
replicaCount: 0

View File

@ -2,7 +2,7 @@ apiVersion: v2
name: kubezero-cert-manager
description: KubeZero Umbrella Chart for cert-manager
type: application
version: 0.9.8
version: 0.9.9
home: https://kubezero.com
icon: https://cdn.zero-downtime.net/assets/kubezero/logo-small-64.png
keywords:
@ -16,6 +16,6 @@ dependencies:
version: ">= 0.1.6"
repository: https://cdn.zero-downtime.net/charts/
- name: cert-manager
version: v1.15.1
version: v1.15.2
repository: https://charts.jetstack.io
kubeVersion: ">= 1.26.0"
kubeVersion: ">= 1.26.0-0"

View File

@ -1,6 +1,6 @@
# kubezero-cert-manager
![Version: 0.9.8](https://img.shields.io/badge/Version-0.9.8-informational?style=flat-square) ![Type: application](https://img.shields.io/badge/Type-application-informational?style=flat-square)
![Version: 0.9.9](https://img.shields.io/badge/Version-0.9.9-informational?style=flat-square) ![Type: application](https://img.shields.io/badge/Type-application-informational?style=flat-square)
KubeZero Umbrella Chart for cert-manager
@ -14,12 +14,12 @@ KubeZero Umbrella Chart for cert-manager
## Requirements
Kubernetes: `>= 1.26.0`
Kubernetes: `>= 1.26.0-0`
| Repository | Name | Version |
|------------|------|---------|
| https://cdn.zero-downtime.net/charts/ | kubezero-lib | >= 0.1.6 |
| https://charts.jetstack.io | cert-manager | v1.15.1 |
| https://charts.jetstack.io | cert-manager | v1.15.2 |
## AWS - OIDC IAM roles
@ -34,9 +34,6 @@ If your resolvers need additional sercrets like CloudFlare API tokens etc. make
|-----|------|---------|-------------|
| cert-manager.cainjector.extraArgs[0] | string | `"--logging-format=json"` | |
| cert-manager.cainjector.extraArgs[1] | string | `"--leader-elect=false"` | |
| cert-manager.cainjector.nodeSelector."node-role.kubernetes.io/control-plane" | string | `""` | |
| cert-manager.cainjector.tolerations[0].effect | string | `"NoSchedule"` | |
| cert-manager.cainjector.tolerations[0].key | string | `"node-role.kubernetes.io/control-plane"` | |
| cert-manager.crds.enabled | bool | `true` | |
| cert-manager.enableCertificateOwnerRef | bool | `true` | |
| cert-manager.enabled | bool | `true` | |
@ -46,15 +43,9 @@ If your resolvers need additional sercrets like CloudFlare API tokens etc. make
| cert-manager.global.leaderElection.namespace | string | `"cert-manager"` | |
| cert-manager.ingressShim.defaultIssuerKind | string | `"ClusterIssuer"` | |
| cert-manager.ingressShim.defaultIssuerName | string | `"letsencrypt-dns-prod"` | |
| cert-manager.nodeSelector."node-role.kubernetes.io/control-plane" | string | `""` | |
| cert-manager.prometheus.servicemonitor.enabled | bool | `false` | |
| cert-manager.startupapicheck.enabled | bool | `false` | |
| cert-manager.tolerations[0].effect | string | `"NoSchedule"` | |
| cert-manager.tolerations[0].key | string | `"node-role.kubernetes.io/control-plane"` | |
| cert-manager.webhook.extraArgs[0] | string | `"--logging-format=json"` | |
| cert-manager.webhook.nodeSelector."node-role.kubernetes.io/control-plane" | string | `""` | |
| cert-manager.webhook.tolerations[0].effect | string | `"NoSchedule"` | |
| cert-manager.webhook.tolerations[0].key | string | `"node-role.kubernetes.io/control-plane"` | |
| clusterIssuer | object | `{}` | |
| localCA.enabled | bool | `false` | |
| localCA.selfsigning | bool | `true` | |

View File

@ -18,7 +18,7 @@
"subdir": "contrib/mixin"
}
},
"version": "010d462c0ff03a70f5c5fd32efbb76ad4c1e7c81",
"version": "df4e472a2d09813560ba44b21a29c0453dbec18c",
"sum": "IXI3LQIT9NmTPJAk8WLUJd5+qZfcGpeNCyWIK7oEpws="
},
{
@ -58,7 +58,7 @@
"subdir": "gen/grafonnet-latest"
}
},
"version": "5a66b0f6a0f4f7caec754dd39a0e263b56a0f90a",
"version": "733beadbc8dab55c5fe1bcdcf0d8a2d215759a55",
"sum": "eyuJ0jOXeA4MrobbNgU4/v5a7ASDHslHZ0eS6hDdWoI="
},
{
@ -68,7 +68,7 @@
"subdir": "gen/grafonnet-v10.0.0"
}
},
"version": "5a66b0f6a0f4f7caec754dd39a0e263b56a0f90a",
"version": "733beadbc8dab55c5fe1bcdcf0d8a2d215759a55",
"sum": "xdcrJPJlpkq4+5LpGwN4tPAuheNNLXZjE6tDcyvFjr0="
},
{
@ -78,8 +78,8 @@
"subdir": "gen/grafonnet-v11.0.0"
}
},
"version": "5a66b0f6a0f4f7caec754dd39a0e263b56a0f90a",
"sum": "Fuo+qTZZzF+sHDBWX/8fkPsUmwW6qhH8hRVz45HznfI="
"version": "733beadbc8dab55c5fe1bcdcf0d8a2d215759a55",
"sum": "0BvzR0i4bS4hc2O3xDv6i9m52z7mPrjvqxtcPrGhynA="
},
{
"source": {
@ -88,8 +88,8 @@
"subdir": "grafana-builder"
}
},
"version": "1d877bb0651ef92176f651d0be473c06e372a8a0",
"sum": "udZaafkbKYMGodLqsFhEe+Oy/St2p0edrK7hiMPEey0="
"version": "d9ba581fb27aa6689e911f288d4df06948eb8aad",
"sum": "yxqWcq/N3E/a/XreeU6EuE6X7kYPnG0AspAQFKOjASo="
},
{
"source": {
@ -128,8 +128,8 @@
"subdir": ""
}
},
"version": "3dfa72d1d1ab31a686b1f52ec28bbf77c972bd23",
"sum": "7ufhpvzoDqAYLrfAsGkTAIRmu2yWQkmHukTE//jOsJU="
"version": "1b71e399caee334af8ba2d15d0dd615043a652d0",
"sum": "qcRxavmCpuWQuwCMqYaOZ+soA8jxwWLrK7LYqohN5NA="
},
{
"source": {
@ -138,8 +138,8 @@
"subdir": "jsonnet/kube-state-metrics"
}
},
"version": "7104d579e93d672754c018a924d6c3f7ec23874e",
"sum": "pvInhJNQVDOcC3NGWRMKRIP954mAvLXCQpTlafIg7fA="
"version": "f8aa7d9bb9d8e29876e19f4859391a54a7e61d63",
"sum": "lO7jUSzAIy8Yk9pOWJIWgPRhubkWzVh56W6wtYfbVH4="
},
{
"source": {
@ -148,7 +148,7 @@
"subdir": "jsonnet/kube-state-metrics-mixin"
}
},
"version": "7104d579e93d672754c018a924d6c3f7ec23874e",
"version": "f8aa7d9bb9d8e29876e19f4859391a54a7e61d63",
"sum": "qclI7LwucTjBef3PkGBkKxF0mfZPbHnn4rlNWKGtR4c="
},
{
@ -158,8 +158,8 @@
"subdir": "jsonnet/kube-prometheus"
}
},
"version": "defa2bd1e242519c62a5c2b3b786b1caa6d906d4",
"sum": "INKeZ+QIIPImq+TrfHT8CpYdoRzzxRk0txG07XlOo/Q="
"version": "33c43a4067a174a99529e41d537eef290a7028ea",
"sum": "/jU8uXWR202aR7K/3zOefhc4JBUAUkTdHvE9rhfzI/g="
},
{
"source": {
@ -168,7 +168,7 @@
"subdir": "jsonnet/mixin"
}
},
"version": "609424db53853b992277b7a9a0e5cf59f4cc24f3",
"version": "aa74b0d377d32648ca50f2531fe2253895629d9f",
"sum": "gi+knjdxs2T715iIQIntrimbHRgHnpM8IFBJDD1gYfs=",
"name": "prometheus-operator-mixin"
},
@ -179,8 +179,8 @@
"subdir": "jsonnet/prometheus-operator"
}
},
"version": "609424db53853b992277b7a9a0e5cf59f4cc24f3",
"sum": "z2/5LjQpWC7snhT+n/mtQqoy5986uI95sTqcKQziwGU="
"version": "aa74b0d377d32648ca50f2531fe2253895629d9f",
"sum": "EZR4sBAtmFRsUR7U4SybuBUhK9ncMCvEu9xHtu8B9KA="
},
{
"source": {
@ -189,7 +189,7 @@
"subdir": "doc/alertmanager-mixin"
}
},
"version": "eb8369ec510d76f63901379a8437c4b55885d6c5",
"version": "27b6eb7ce02680c84b9a06503edbddc9213f586d",
"sum": "IpF46ZXsm+0wJJAPtAre8+yxTNZA57mBqGpBP/r7/kw=",
"name": "alertmanager"
},
@ -210,7 +210,7 @@
"subdir": "documentation/prometheus-mixin"
}
},
"version": "ac85bd47e1cfa0d63520e4c0b4e26900c42c326b",
"version": "616038f2b64656b2c9c6053f02aee544c5b8bb17",
"sum": "dYLcLzGH4yF3qB7OGC/7z4nqeTNjv42L7Q3BENU8XJI=",
"name": "prometheus"
},
@ -232,7 +232,7 @@
"subdir": "mixin"
}
},
"version": "35c0dbec856f97683a846e9c53f83156a3a44ff3",
"version": "dcadaae80fcce1fb05452b37ca8d3b2809d7cef9",
"sum": "HhSSbGGCNHCMy1ee5jElYDm0yS9Vesa7QB2/SHKdjsY=",
"name": "thanos-mixin"
}

View File

@ -61,31 +61,15 @@ cert-manager:
# mountPath: "/var/run/secrets/sts.amazonaws.com/serviceaccount/"
# readOnly: true
tolerations:
- key: node-role.kubernetes.io/control-plane
effect: NoSchedule
nodeSelector:
node-role.kubernetes.io/control-plane: ""
ingressShim:
defaultIssuerName: letsencrypt-dns-prod
defaultIssuerKind: ClusterIssuer
webhook:
tolerations:
- key: node-role.kubernetes.io/control-plane
effect: NoSchedule
nodeSelector:
node-role.kubernetes.io/control-plane: ""
extraArgs:
- "--logging-format=json"
cainjector:
tolerations:
- key: node-role.kubernetes.io/control-plane
effect: NoSchedule
nodeSelector:
node-role.kubernetes.io/control-plane: ""
extraArgs:
- "--logging-format=json"
- "--leader-elect=false"

View File

@ -2,7 +2,7 @@ apiVersion: v2
name: kubezero-ci
description: KubeZero umbrella chart for all things CI
type: application
version: 0.8.13
version: 0.8.16
home: https://kubezero.com
icon: https://cdn.zero-downtime.net/assets/kubezero/logo-small-64.png
keywords:
@ -22,7 +22,7 @@ dependencies:
repository: https://dl.gitea.io/charts/
condition: gitea.enabled
- name: jenkins
version: 5.4.3
version: 5.5.8
repository: https://charts.jenkins.io
condition: jenkins.enabled
- name: trivy
@ -30,7 +30,7 @@ dependencies:
repository: https://aquasecurity.github.io/helm-charts/
condition: trivy.enabled
- name: renovate
version: 37.438.2
version: 38.57.0
repository: https://docs.renovatebot.com/helm-charts
condition: renovate.enabled
kubeVersion: ">= 1.25.0"

View File

@ -1,6 +1,6 @@
# kubezero-ci
![Version: 0.8.13](https://img.shields.io/badge/Version-0.8.13-informational?style=flat-square) ![Type: application](https://img.shields.io/badge/Type-application-informational?style=flat-square)
![Version: 0.8.16](https://img.shields.io/badge/Version-0.8.16-informational?style=flat-square) ![Type: application](https://img.shields.io/badge/Type-application-informational?style=flat-square)
KubeZero umbrella chart for all things CI
@ -20,9 +20,9 @@ Kubernetes: `>= 1.25.0`
|------------|------|---------|
| https://aquasecurity.github.io/helm-charts/ | trivy | 0.7.0 |
| https://cdn.zero-downtime.net/charts/ | kubezero-lib | >= 0.1.6 |
| https://charts.jenkins.io | jenkins | 5.4.3 |
| https://charts.jenkins.io | jenkins | 5.5.8 |
| https://dl.gitea.io/charts/ | gitea | 10.4.0 |
| https://docs.renovatebot.com/helm-charts | renovate | 37.438.2 |
| https://docs.renovatebot.com/helm-charts | renovate | 38.57.0 |
# Jenkins
- default build retention 10 builds, 32days
@ -84,13 +84,14 @@ Kubernetes: `>= 1.25.0`
| gitea.securityContext.capabilities.drop[0] | string | `"ALL"` | |
| gitea.strategy.type | string | `"Recreate"` | |
| gitea.test.enabled | bool | `false` | |
| jenkins.agent.annotations."cluster-autoscaler.kubernetes.io/safe-to-evict" | string | `"false"` | |
| jenkins.agent.annotations."container.apparmor.security.beta.kubernetes.io/jnlp" | string | `"unconfined"` | |
| jenkins.agent.containerCap | int | `2` | |
| jenkins.agent.customJenkinsLabels[0] | string | `"podman-aws-trivy"` | |
| jenkins.agent.defaultsProviderTemplate | string | `"podman-aws"` | |
| jenkins.agent.idleMinutes | int | `30` | |
| jenkins.agent.image.repository | string | `"public.ecr.aws/zero-downtime/jenkins-podman"` | |
| jenkins.agent.image.tag | string | `"v0.6.0"` | |
| jenkins.agent.image.tag | string | `"v0.6.2"` | |
| jenkins.agent.inheritYamlMergeStrategy | bool | `true` | |
| jenkins.agent.podName | string | `"podman-aws"` | |
| jenkins.agent.podRetention | string | `"Default"` | |
@ -103,7 +104,7 @@ Kubernetes: `>= 1.25.0`
| jenkins.agent.serviceAccount | string | `"jenkins-podman-aws"` | |
| jenkins.agent.showRawYaml | bool | `false` | |
| jenkins.agent.yamlMergeStrategy | string | `"merge"` | |
| jenkins.agent.yamlTemplate | string | `"apiVersion: v1\nkind: Pod\nspec:\n securityContext:\n fsGroup: 1000\n containers:\n - name: jnlp\n resources:\n requests:\n cpu: \"512m\"\n memory: \"1024Mi\"\n limits:\n cpu: \"4\"\n memory: \"6144Mi\"\n github.com/fuse: 1\n volumeMounts:\n - name: aws-token\n mountPath: \"/var/run/secrets/sts.amazonaws.com/serviceaccount/\"\n readOnly: true\n - name: host-registries-conf\n mountPath: \"/home/jenkins/.config/containers/registries.conf\"\n readOnly: true\n volumes:\n - name: aws-token\n projected:\n sources:\n - serviceAccountToken:\n path: token\n expirationSeconds: 86400\n audience: \"sts.amazonaws.com\"\n - name: host-registries-conf\n hostPath:\n path: /etc/containers/registries.conf\n type: File"` | |
| jenkins.agent.yamlTemplate | string | `"apiVersion: v1\nkind: Pod\nspec:\n securityContext:\n fsGroup: 1000\n containers:\n - name: jnlp\n resources:\n requests:\n cpu: \"200m\"\n memory: \"512Mi\"\n limits:\n cpu: \"4\"\n memory: \"6144Mi\"\n github.com/fuse: 1\n volumeMounts:\n - name: aws-token\n mountPath: \"/var/run/secrets/sts.amazonaws.com/serviceaccount/\"\n readOnly: true\n - name: host-registries-conf\n mountPath: \"/home/jenkins/.config/containers/registries.conf\"\n readOnly: true\n volumes:\n - name: aws-token\n projected:\n sources:\n - serviceAccountToken:\n path: token\n expirationSeconds: 86400\n audience: \"sts.amazonaws.com\"\n - name: host-registries-conf\n hostPath:\n path: /etc/containers/registries.conf\n type: File"` | |
| jenkins.controller.JCasC.configScripts.zdt-settings | string | `"jenkins:\n noUsageStatistics: true\n disabledAdministrativeMonitors:\n - \"jenkins.security.ResourceDomainRecommendation\"\nappearance:\n themeManager:\n disableUserThemes: true\n theme: \"dark\"\nunclassified:\n openTelemetry:\n configurationProperties: |-\n otel.exporter.otlp.protocol=grpc\n otel.instrumentation.jenkins.web.enabled=false\n ignoredSteps: \"dir,echo,isUnix,pwd,properties\"\n #endpoint: \"telemetry-jaeger-collector.telemetry:4317\"\n exportOtelConfigurationAsEnvironmentVariables: false\n #observabilityBackends:\n # - jaeger:\n # jaegerBaseUrl: \"https://jaeger.example.com\"\n # name: \"KubeZero Jaeger\"\n serviceName: \"Jenkins\"\n buildDiscarders:\n configuredBuildDiscarders:\n - \"jobBuildDiscarder\"\n - defaultBuildDiscarder:\n discarder:\n logRotator:\n artifactDaysToKeepStr: \"32\"\n artifactNumToKeepStr: \"10\"\n daysToKeepStr: \"100\"\n numToKeepStr: \"10\"\n"` | |
| jenkins.controller.containerEnv[0].name | string | `"OTEL_LOGS_EXPORTER"` | |
| jenkins.controller.containerEnv[0].value | string | `"none"` | |

View File

@ -12,6 +12,48 @@ Use the following links to reference issues, PRs, and commits prior to v2.6.0.
The changelog until v1.5.7 was auto-generated based on git commits.
Those entries include a reference to the git commit to be able to get more details.
## 5.5.8
Add `agent.garbageCollection` to support setting [kubernetes plugin garbage collection](https://plugins.jenkins.io/kubernetes/#plugin-content-garbage-collection-beta).
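
The new toggle maps to the `agent.garbageCollection.*` values documented further down; a hypothetical way to enable it on an existing release (release and repo names assumed):

```bash
helm upgrade jenkins jenkins/jenkins --reuse-values \
  --set agent.garbageCollection.enabled=true \
  --set agent.garbageCollection.timeout=300
```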
## 5.5.7
Update `kubernetes` to version `4285.v50ed5f624918`
## 5.5.6
Add `agent.useDefaultServiceAccount` to support omitting setting `serviceAccount` in the default pod template from `serviceAgentAccount.name`.
Add `agent.serviceAccount` to support setting the default pod template value.
## 5.5.5
Update `jenkins/inbound-agent` to version `3261.v9c670a_4748a_9-1`
## 5.5.4
Update `jenkins/jenkins` to version `2.462.1-jdk17`
## 5.5.3
Update `git` to version `5.3.0`
## 5.5.2
Update `kubernetes` to version `4280.vd919fa_528c7e`
## 5.5.1
Update `kubernetes` to version `4265.v78b_d4a_1c864a_`
## 5.5.0
Introduce capability of set skipTlsVerify and usageRestricted flags in additionalClouds
## 5.4.4
Update CHANGELOG.md, README.md, and UPGRADING.md for linting
## 5.4.3
Update `configuration-as-code` to version `1836.vccda_4a_122a_a_e`
@ -39,7 +81,6 @@ Update `kubernetes` to version `4253.v7700d91739e5`
## 5.3.4
Update `jenkins/jenkins` to version `2.452.3-jdk17`
## 5.3.3
Update `jenkins/inbound-agent` to version `3256.v88a_f6e922152-1`
@ -374,7 +415,7 @@ Changes in 4.7.0 were reverted.
## 4.7.0
Runs `config-reload` as an init container, in addition to the sidecar container, to ensure that JCasC YAMLS are present before the main Jenkins container starts. This should fix some race conditions and crashes on startup.
Runs `config-reload` as an init container, in addition to the sidecar container, to ensure that JCasC YAMLs are present before the main Jenkins container starts. This should fix some race conditions and crashes on startup.
## 4.6.7
@ -540,7 +581,7 @@ Disable volume mount if disableSecretMount enabled
## 4.3.9
Document `.Values.agent.directConnection` in README.
Document `.Values.agent.directConnection` in readme.
Add default value for `.Values.agent.directConnection` to `values.yaml`
## 4.3.8
@ -732,7 +773,7 @@ Fix path of projected secrets from `additionalExistingSecrets`.
## 4.1.7
Update README with explanation on the required environmental variable `AWS_REGION` in case of using an S3 bucket.
Update readme with explanation on the required environmental variable `AWS_REGION` in case of using an S3 bucket.
## 4.1.6
@ -740,7 +781,7 @@ project adminSecret, additionalSecrets and additionalExistingSecrets instead of
## 4.1.5
Update README to fix `JAVA_OPTS` name.
Update readme to fix `JAVA_OPTS` name.
## 4.1.4
Update plugins
@ -855,7 +896,7 @@ Update default plugin versions
## 3.9.4
Add JAVA_OPTIONS to the README so proxy settings get picked by jenkins-plugin-cli
Add JAVA_OPTIONS to the readme so proxy settings get picked by jenkins-plugin-cli
## 3.9.3
@ -1148,7 +1189,7 @@ Update Jenkins image and appVersion to jenkins lts release version 2.263.4
## 3.1.12
Added GitHub action to automate the updating of LTS releases.
Added GitHub Action to automate the updating of LTS releases.
## 3.1.11
@ -1352,7 +1393,7 @@ Added unit tests for most resources in the Helm chart.
## 2.12.1
Helm chart README update
Helm chart readme update
## 2.12.0
@ -1414,7 +1455,7 @@ Fixes #19
## 2.6.0 First release in jenkinsci GitHub org
Updated README for new location
Updated readme for new location
## 2.5.2
@ -1430,7 +1471,7 @@ Add an option to specify that Jenkins master should be initialized only once, du
## 2.4.1
Reorder README parameters into sections to facilitate chart usage and maintenance
Reorder readme parameters into sections to facilitate chart usage and maintenance
## 2.4.0 Update default agent image
@ -1464,7 +1505,7 @@ Configure `REQ_RETRY_CONNECT` to `10` to give Jenkins more time to start up.
Value can be configured via `master.sidecars.configAutoReload.reqRetryConnect`
## 2.1.2 updated README
## 2.1.2 updated readme
## 2.1.1 update credentials-binding plugin to 1.23
@ -1478,7 +1519,7 @@ Only render authorizationStrategy and securityRealm when values are set.
## 2.0.0 Configuration as Code now default + container does not run as root anymore
The README contains more details for this update.
The readme contains more details for this update.
Please note that the updated values contain breaking changes.
## 1.27.0 Update plugin versions & sidecar container
@ -1643,7 +1684,7 @@ In recent version of configuration-as-code-plugin this is no longer necessary.
## 1.9.24
Update JCasC auto-reload docs and remove stale ssh key references from version "1.8.0 JCasC auto reload works without ssh keys"
Update JCasC auto-reload docs and remove stale SSH key references from version "1.8.0 JCasC auto reload works without SSH keys"
## 1.9.23 Support jenkinsUriPrefix when JCasC is enabled
@ -1768,7 +1809,7 @@ Revert fix in `1.7.10` since direct connection is now disabled by default.
Add `master.schedulerName` to allow setting a Kubernetes custom scheduler
## 1.8.0 JCasC auto reload works without ssh keys
## 1.8.0 JCasC auto reload works without SSH keys
We make use of the fact that the Jenkins Configuration as Code Plugin can be triggered via http `POST` to `JENKINS_URL/configuration-as-code/reload`and a pre-shared key.
The sidecar container responsible for reloading config changes is now `kiwigrid/k8s-sidecar:0.1.20` instead of it's fork `shadwell/k8s-sidecar`.
@ -2296,7 +2337,7 @@ commit: 9de96faa0
## 0.32.7
Fix Markdown syntax in README (#11496)
Fix Markdown syntax in readme (#11496)
commit: a32221a95
## 0.32.6
@ -2526,7 +2567,7 @@ commit: e0a20b0b9
## 0.16.22
avoid lint errors when adding Values.Ingress.Annotations (#7425)
avoid linting errors when adding Values.Ingress.Annotations (#7425)
commit: 99eacc854
## 0.16.21
@ -2551,7 +2592,7 @@ commit: bf8180018
## 0.16.17
Add Master.AdminPassword in README (#6987)
Add Master.AdminPassword in readme (#6987)
commit: 13e754ad7
## 0.16.16
@ -2621,7 +2662,7 @@ commit: fc6100c38
## 0.16.1
fix typo in jenkins README (#5228)
fix typo in jenkins readme (#5228)
commit: 3cd3f4b8b
## 0.16.0
@ -2742,7 +2783,7 @@ commit: 9a230a6b1
Double retry count for Jenkins test
commit: 129c8e824
Jenkins: Update README | Master.ServiceAnnotations (#2757)
Jenkins: Update readme | Master.ServiceAnnotations (#2757)
commit: 6571810bc
## 0.10.0
@ -2814,7 +2855,7 @@ commit: 4af5810ff
## 0.8.4
Add support for supplying JENKINS_OPTS and/or uri prefix (#1405)
Add support for supplying JENKINS_OPTS and/or URI prefix (#1405)
commit: 6a331901a
## 0.8.3
@ -3024,7 +3065,7 @@ commit: 3cbd3ced6
Remove 'Getting Started:' from various NOTES.txt. (#181)
commit: 2f63fd524
docs(\*): update READMEs to reference chart repos (#119)
docs(\*): update readmes to reference chart repos (#119)
commit: c7d1bff05
## 0.1.0

View File

@ -1,14 +1,14 @@
annotations:
artifacthub.io/category: integration-delivery
artifacthub.io/changes: |
- Update `configuration-as-code` to version `1836.vccda_4a_122a_a_e`
- Add `agent.garbageCollection` to support setting [kubernetes plugin garbage collection](https://plugins.jenkins.io/kubernetes/#plugin-content-garbage-collection-beta).
artifacthub.io/images: |
- name: jenkins
image: docker.io/jenkins/jenkins:2.452.3-jdk17
image: docker.io/jenkins/jenkins:2.462.1-jdk17
- name: k8s-sidecar
image: docker.io/kiwigrid/k8s-sidecar:1.27.5
- name: inbound-agent
image: jenkins/inbound-agent:3256.v88a_f6e922152-1
image: jenkins/inbound-agent:3261.v9c670a_4748a_9-1
artifacthub.io/license: Apache-2.0
artifacthub.io/links: |
- name: Chart Source
@ -18,7 +18,7 @@ annotations:
- name: support
url: https://github.com/jenkinsci/helm-charts/issues
apiVersion: v2
appVersion: 2.452.3
appVersion: 2.462.1
description: 'Jenkins - Build great things at any scale! As the leading open source
automation server, Jenkins provides over 1800 plugins to support building, deploying
and automating any project. '
@ -46,4 +46,4 @@ sources:
- https://github.com/maorfr/kube-tasks
- https://github.com/jenkinsci/configuration-as-code-plugin
type: application
version: 5.4.3
version: 5.5.8

View File

@ -122,7 +122,7 @@ So think of the list below more as a general guideline of what should be done.
- Test drive those setting on a separate installation
- Put Jenkins to Quiet Down mode so that it does not accept new jobs
`<JENKINS_URL>/quietDown`
- Change permissions of all files and folders to the new user and group id:
- Change permissions of all files and folders to the new user and group ID:
```console
kubectl exec -it <jenkins_pod> -c jenkins /bin/bash

View File

@ -8,64 +8,71 @@ The following tables list the configurable parameters of the Jenkins chart and t
| Key | Type | Description | Default |
|:----|:-----|:---------|:------------|
| [additionalAgents](./values.yaml#L1165) | object | Configure additional | `{}` |
| [additionalClouds](./values.yaml#L1190) | object | | `{}` |
| [agent.TTYEnabled](./values.yaml#L1083) | bool | Allocate pseudo tty to the side container | `false` |
| [agent.additionalContainers](./values.yaml#L1118) | list | Add additional containers to the agents | `[]` |
| [agent.alwaysPullImage](./values.yaml#L976) | bool | Always pull agent container image before build | `false` |
| [agent.annotations](./values.yaml#L1114) | object | Annotations to apply to the pod | `{}` |
| [agent.args](./values.yaml#L1077) | string | Arguments passed to command to execute | `"${computer.jnlpmac} ${computer.name}"` |
| [agent.command](./values.yaml#L1075) | string | Command to execute when side container starts | `nil` |
| [agent.componentName](./values.yaml#L944) | string | | `"jenkins-agent"` |
| [agent.connectTimeout](./values.yaml#L1112) | int | Timeout in seconds for an agent to be online | `100` |
| [agent.containerCap](./values.yaml#L1085) | int | Max number of agents to launch | `10` |
| [agent.customJenkinsLabels](./values.yaml#L941) | list | Append Jenkins labels to the agent | `[]` |
| [additionalAgents](./values.yaml#L1189) | object | Configure additional | `{}` |
| [additionalClouds](./values.yaml#L1214) | object | | `{}` |
| [agent.TTYEnabled](./values.yaml#L1095) | bool | Allocate pseudo tty to the side container | `false` |
| [agent.additionalContainers](./values.yaml#L1142) | list | Add additional containers to the agents | `[]` |
| [agent.alwaysPullImage](./values.yaml#L988) | bool | Always pull agent container image before build | `false` |
| [agent.annotations](./values.yaml#L1138) | object | Annotations to apply to the pod | `{}` |
| [agent.args](./values.yaml#L1089) | string | Arguments passed to command to execute | `"${computer.jnlpmac} ${computer.name}"` |
| [agent.command](./values.yaml#L1087) | string | Command to execute when side container starts | `nil` |
| [agent.componentName](./values.yaml#L956) | string | | `"jenkins-agent"` |
| [agent.connectTimeout](./values.yaml#L1136) | int | Timeout in seconds for an agent to be online | `100` |
| [agent.containerCap](./values.yaml#L1097) | int | Max number of agents to launch | `10` |
| [agent.customJenkinsLabels](./values.yaml#L953) | list | Append Jenkins labels to the agent | `[]` |
| [agent.defaultsProviderTemplate](./values.yaml#L907) | string | The name of the pod template to use for providing default values | `""` |
| [agent.directConnection](./values.yaml#L947) | bool | | `false` |
| [agent.disableDefaultAgent](./values.yaml#L1136) | bool | Disable the default Jenkins Agent configuration | `false` |
| [agent.directConnection](./values.yaml#L959) | bool | | `false` |
| [agent.disableDefaultAgent](./values.yaml#L1160) | bool | Disable the default Jenkins Agent configuration | `false` |
| [agent.enabled](./values.yaml#L905) | bool | Enable Kubernetes plugin jnlp-agent podTemplate | `true` |
| [agent.envVars](./values.yaml#L1058) | list | Environment variables for the agent Pod | `[]` |
| [agent.hostNetworking](./values.yaml#L955) | bool | Enables the agent to use the host network | `false` |
| [agent.idleMinutes](./values.yaml#L1090) | int | Allows the Pod to remain active for reuse until the configured number of minutes has passed since the last step was executed on it | `0` |
| [agent.image.repository](./values.yaml#L934) | string | Repository to pull the agent jnlp image from | `"jenkins/inbound-agent"` |
| [agent.image.tag](./values.yaml#L936) | string | Tag of the image to pull | `"3256.v88a_f6e922152-1"` |
| [agent.imagePullSecretName](./values.yaml#L943) | string | Name of the secret to be used to pull the image | `nil` |
| [agent.inheritYamlMergeStrategy](./values.yaml#L1110) | bool | Controls whether the defined yaml merge strategy will be inherited if another defined pod template is configured to inherit from the current one | `false` |
| [agent.jenkinsTunnel](./values.yaml#L915) | string | Overrides the Kubernetes Jenkins tunnel | `nil` |
| [agent.jenkinsUrl](./values.yaml#L911) | string | Overrides the Kubernetes Jenkins URL | `nil` |
| [agent.jnlpregistry](./values.yaml#L931) | string | Custom registry used to pull the agent jnlp image from | `nil` |
| [agent.kubernetesConnectTimeout](./values.yaml#L917) | int | The connection timeout in seconds for connections to Kubernetes API. The minimum value is 5 | `5` |
| [agent.kubernetesReadTimeout](./values.yaml#L919) | int | The read timeout in seconds for connections to Kubernetes API. The minimum value is 15 | `15` |
| [agent.livenessProbe](./values.yaml#L966) | object | | `{}` |
| [agent.maxRequestsPerHostStr](./values.yaml#L921) | string | The maximum concurrent connections to Kubernetes API | `"32"` |
| [agent.namespace](./values.yaml#L927) | string | Namespace in which the Kubernetes agents should be launched | `nil` |
| [agent.nodeSelector](./values.yaml#L1069) | object | Node labels for pod assignment | `{}` |
| [agent.nodeUsageMode](./values.yaml#L939) | string | | `"NORMAL"` |
| [agent.podLabels](./values.yaml#L929) | object | Custom Pod labels (an object with `label-key: label-value` pairs) | `{}` |
| [agent.podName](./values.yaml#L1087) | string | Agent Pod base name | `"default"` |
| [agent.podRetention](./values.yaml#L985) | string | | `"Never"` |
| [agent.podTemplates](./values.yaml#L1146) | object | Configures extra pod templates for the default kubernetes cloud | `{}` |
| [agent.privileged](./values.yaml#L949) | bool | Agent privileged container | `false` |
| [agent.resources](./values.yaml#L957) | object | Resources allocation (Requests and Limits) | `{"limits":{"cpu":"512m","memory":"512Mi"},"requests":{"cpu":"512m","memory":"512Mi"}}` |
| [agent.restrictedPssSecurityContext](./values.yaml#L982) | bool | Set a restricted securityContext on jnlp containers | `false` |
| [agent.retentionTimeout](./values.yaml#L923) | int | Time in minutes after which the Kubernetes cloud plugin will clean up an idle worker that has not already terminated | `5` |
| [agent.runAsGroup](./values.yaml#L953) | string | Configure container group | `nil` |
| [agent.runAsUser](./values.yaml#L951) | string | Configure container user | `nil` |
| [agent.secretEnvVars](./values.yaml#L1062) | list | Mount a secret as environment variable | `[]` |
| [agent.showRawYaml](./values.yaml#L989) | bool | | `true` |
| [agent.sideContainerName](./values.yaml#L1079) | string | Side container name | `"jnlp"` |
| [agent.volumes](./values.yaml#L996) | list | Additional volumes | `[]` |
| [agent.waitForPodSec](./values.yaml#L925) | int | Seconds to wait for pod to be running | `600` |
| [agent.websocket](./values.yaml#L946) | bool | Enables agent communication via websockets | `false` |
| [agent.workingDir](./values.yaml#L938) | string | Configure working directory for default agent | `"/home/jenkins/agent"` |
| [agent.workspaceVolume](./values.yaml#L1031) | object | Workspace volume (defaults to EmptyDir) | `{}` |
| [agent.yamlMergeStrategy](./values.yaml#L1108) | string | Defines how the raw yaml field gets merged with yaml definitions from inherited pod templates. Possible values: "merge" or "override" | `"override"` |
| [agent.yamlTemplate](./values.yaml#L1097) | string | The raw yaml of a Pod API Object to merge into the agent spec | `""` |
| [awsSecurityGroupPolicies.enabled](./values.yaml#L1316) | bool | | `false` |
| [awsSecurityGroupPolicies.policies[0].name](./values.yaml#L1318) | string | | `""` |
| [awsSecurityGroupPolicies.policies[0].podSelector](./values.yaml#L1320) | object | | `{}` |
| [awsSecurityGroupPolicies.policies[0].securityGroupIds](./values.yaml#L1319) | list | | `[]` |
| [checkDeprecation](./values.yaml#L1313) | bool | Checks if any deprecated values are used | `true` |
| [agent.envVars](./values.yaml#L1070) | list | Environment variables for the agent Pod | `[]` |
| [agent.garbageCollection.enabled](./values.yaml#L1104) | bool | When enabled, Jenkins will periodically check for orphan pods that have not been touched for the given timeout period and delete them. | `false` |
| [agent.garbageCollection.namespaces](./values.yaml#L1106) | string | Namespaces to look at for garbage collection, in addition to the default namespace defined for the cloud. One namespace per line. | `""` |
| [agent.garbageCollection.timeout](./values.yaml#L1111) | int | Timeout value for orphaned pods | `300` |
| [agent.hostNetworking](./values.yaml#L967) | bool | Enables the agent to use the host network | `false` |
| [agent.idleMinutes](./values.yaml#L1114) | int | Allows the Pod to remain active for reuse until the configured number of minutes has passed since the last step was executed on it | `0` |
| [agent.image.repository](./values.yaml#L946) | string | Repository to pull the agent jnlp image from | `"jenkins/inbound-agent"` |
| [agent.image.tag](./values.yaml#L948) | string | Tag of the image to pull | `"3261.v9c670a_4748a_9-1"` |
| [agent.imagePullSecretName](./values.yaml#L955) | string | Name of the secret to be used to pull the image | `nil` |
| [agent.inheritYamlMergeStrategy](./values.yaml#L1134) | bool | Controls whether the defined yaml merge strategy will be inherited if another defined pod template is configured to inherit from the current one | `false` |
| [agent.jenkinsTunnel](./values.yaml#L923) | string | Overrides the Kubernetes Jenkins tunnel | `nil` |
| [agent.jenkinsUrl](./values.yaml#L919) | string | Overrides the Kubernetes Jenkins URL | `nil` |
| [agent.jnlpregistry](./values.yaml#L943) | string | Custom registry used to pull the agent jnlp image from | `nil` |
| [agent.kubernetesConnectTimeout](./values.yaml#L929) | int | The connection timeout in seconds for connections to Kubernetes API. The minimum value is 5 | `5` |
| [agent.kubernetesReadTimeout](./values.yaml#L931) | int | The read timeout in seconds for connections to Kubernetes API. The minimum value is 15 | `15` |
| [agent.livenessProbe](./values.yaml#L978) | object | | `{}` |
| [agent.maxRequestsPerHostStr](./values.yaml#L933) | string | The maximum concurrent connections to Kubernetes API | `"32"` |
| [agent.namespace](./values.yaml#L939) | string | Namespace in which the Kubernetes agents should be launched | `nil` |
| [agent.nodeSelector](./values.yaml#L1081) | object | Node labels for pod assignment | `{}` |
| [agent.nodeUsageMode](./values.yaml#L951) | string | | `"NORMAL"` |
| [agent.podLabels](./values.yaml#L941) | object | Custom Pod labels (an object with `label-key: label-value` pairs) | `{}` |
| [agent.podName](./values.yaml#L1099) | string | Agent Pod base name | `"default"` |
| [agent.podRetention](./values.yaml#L997) | string | | `"Never"` |
| [agent.podTemplates](./values.yaml#L1170) | object | Configures extra pod templates for the default kubernetes cloud | `{}` |
| [agent.privileged](./values.yaml#L961) | bool | Agent privileged container | `false` |
| [agent.resources](./values.yaml#L969) | object | Resources allocation (Requests and Limits) | `{"limits":{"cpu":"512m","memory":"512Mi"},"requests":{"cpu":"512m","memory":"512Mi"}}` |
| [agent.restrictedPssSecurityContext](./values.yaml#L994) | bool | Set a restricted securityContext on jnlp containers | `false` |
| [agent.retentionTimeout](./values.yaml#L935) | int | Time in minutes after which the Kubernetes cloud plugin will clean up an idle worker that has not already terminated | `5` |
| [agent.runAsGroup](./values.yaml#L965) | string | Configure container group | `nil` |
| [agent.runAsUser](./values.yaml#L963) | string | Configure container user | `nil` |
| [agent.secretEnvVars](./values.yaml#L1074) | list | Mount a secret as environment variable | `[]` |
| [agent.serviceAccount](./values.yaml#L915) | string | Override the default service account | `serviceAccountAgent.name` if `agent.useDefaultServiceAccount` is `true` |
| [agent.showRawYaml](./values.yaml#L1001) | bool | | `true` |
| [agent.sideContainerName](./values.yaml#L1091) | string | Side container name | `"jnlp"` |
| [agent.skipTlsVerify](./values.yaml#L925) | bool | Disables the verification of the controller certificate on remote connection. This flag correspond to the "Disable https certificate check" flag in kubernetes plugin UI | `false` |
| [agent.usageRestricted](./values.yaml#L927) | bool | Enable the possibility to restrict the usage of this agent to specific folder. This flag correspond to the "Restrict pipeline support to authorized folders" flag in kubernetes plugin UI | `false` |
| [agent.useDefaultServiceAccount](./values.yaml#L911) | bool | Use `serviceAccountAgent.name` as the default value for defaults template `serviceAccount` | `true` |
| [agent.volumes](./values.yaml#L1008) | list | Additional volumes | `[]` |
| [agent.waitForPodSec](./values.yaml#L937) | int | Seconds to wait for pod to be running | `600` |
| [agent.websocket](./values.yaml#L958) | bool | Enables agent communication via websockets | `false` |
| [agent.workingDir](./values.yaml#L950) | string | Configure working directory for default agent | `"/home/jenkins/agent"` |
| [agent.workspaceVolume](./values.yaml#L1043) | object | Workspace volume (defaults to EmptyDir) | `{}` |
| [agent.yamlMergeStrategy](./values.yaml#L1132) | string | Defines how the raw yaml field gets merged with yaml definitions from inherited pod templates. Possible values: "merge" or "override" | `"override"` |
| [agent.yamlTemplate](./values.yaml#L1121) | string | The raw yaml of a Pod API Object to merge into the agent spec | `""` |
| [awsSecurityGroupPolicies.enabled](./values.yaml#L1340) | bool | | `false` |
| [awsSecurityGroupPolicies.policies[0].name](./values.yaml#L1342) | string | | `""` |
| [awsSecurityGroupPolicies.policies[0].podSelector](./values.yaml#L1344) | object | | `{}` |
| [awsSecurityGroupPolicies.policies[0].securityGroupIds](./values.yaml#L1343) | list | | `[]` |
| [checkDeprecation](./values.yaml#L1337) | bool | Checks if any deprecated values are used | `true` |
| [clusterZone](./values.yaml#L21) | string | Override the cluster name for FQDN resolving | `"cluster.local"` |
| [controller.JCasC.authorizationStrategy](./values.yaml#L533) | string | Jenkins Config as Code authorization-strategy section | `"loggedInUsersCanDoAnything:\n allowAnonymousRead: false"` |
| [controller.JCasC.configMapAnnotations](./values.yaml#L538) | object | Annotations for the JCasC ConfigMap | `{}` |
@@ -157,7 +164,7 @@ The following tables list the configurable parameters of the Jenkins chart and t
| [controller.initializeOnce](./values.yaml#L414) | bool | Initialize only on first installation. Ensures plugins do not get updated inadvertently. Requires `persistence.enabled` to be set to `true` | `false` |
| [controller.installLatestPlugins](./values.yaml#L403) | bool | Download the minimum required version or latest version of all dependencies | `true` |
| [controller.installLatestSpecifiedPlugins](./values.yaml#L406) | bool | Set to true to download the latest version of any plugin that is requested to have the latest version | `false` |
| [controller.installPlugins](./values.yaml#L395) | list | List of Jenkins plugins to install. If you don't want to install plugins, set it to `false` | `["kubernetes:4253.v7700d91739e5","workflow-aggregator:600.vb_57cdd26fdd7","git:5.2.2","configuration-as-code:1836.vccda_4a_122a_a_e"]` |
| [controller.installPlugins](./values.yaml#L395) | list | List of Jenkins plugins to install. If you don't want to install plugins, set it to `false` | `["kubernetes:4285.v50ed5f624918","workflow-aggregator:600.vb_57cdd26fdd7","git:5.3.0","configuration-as-code:1836.vccda_4a_122a_a_e"]` |
| [controller.javaOpts](./values.yaml#L156) | string | Append to `JAVA_OPTS` env var | `nil` |
| [controller.jenkinsAdminEmail](./values.yaml#L96) | string | Email address for the administrator of the Jenkins instance | `nil` |
| [controller.jenkinsHome](./values.yaml#L101) | string | Custom Jenkins home path | `"/var/jenkins_home"` |
@@ -270,40 +277,40 @@ The following tables list the configurable parameters of the Jenkins chart and t
| [controller.usePodSecurityContext](./values.yaml#L176) | bool | Enable pod security context (must be `true` if podSecurityContextOverride, runAsUser or fsGroup are set) | `true` |
| [credentialsId](./values.yaml#L27) | string | The Jenkins credentials to access the Kubernetes API server. For the default cluster it is not needed. | `nil` |
| [fullnameOverride](./values.yaml#L13) | string | Override the full resource names | `jenkins-(release-name)` or `jenkins` if the release-name is `jenkins` |
| [helmtest.bats.image.registry](./values.yaml#L1329) | string | Registry of the image used to test the framework | `"docker.io"` |
| [helmtest.bats.image.repository](./values.yaml#L1331) | string | Repository of the image used to test the framework | `"bats/bats"` |
| [helmtest.bats.image.tag](./values.yaml#L1333) | string | Tag of the image to test the framework | `"1.11.0"` |
| [helmtest.bats.image.registry](./values.yaml#L1353) | string | Registry of the image used to test the framework | `"docker.io"` |
| [helmtest.bats.image.repository](./values.yaml#L1355) | string | Repository of the image used to test the framework | `"bats/bats"` |
| [helmtest.bats.image.tag](./values.yaml#L1357) | string | Tag of the image to test the framework | `"1.11.0"` |
| [kubernetesURL](./values.yaml#L24) | string | The URL of the Kubernetes API server | `"https://kubernetes.default"` |
| [nameOverride](./values.yaml#L10) | string | Override the resource name prefix | `Chart.Name` |
| [namespaceOverride](./values.yaml#L16) | string | Override the deployment namespace | `Release.Namespace` |
| [networkPolicy.apiVersion](./values.yaml#L1259) | string | NetworkPolicy ApiVersion | `"networking.k8s.io/v1"` |
| [networkPolicy.enabled](./values.yaml#L1254) | bool | Enable the creation of NetworkPolicy resources | `false` |
| [networkPolicy.externalAgents.except](./values.yaml#L1273) | list | A list of IP sub-ranges to be excluded from the allowlisted IP range | `[]` |
| [networkPolicy.externalAgents.ipCIDR](./values.yaml#L1271) | string | The IP range from which external agents are allowed to connect to the controller, e.g., 172.17.0.0/16 | `nil` |
| [networkPolicy.internalAgents.allowed](./values.yaml#L1263) | bool | Allow internal agents (from the same cluster) to connect to the controller. Agent pods will be filtered based on PodLabels | `true` |
| [networkPolicy.internalAgents.namespaceLabels](./values.yaml#L1267) | object | A map of labels (keys/values) that agent namespaces must have to be able to connect to the controller | `{}` |
| [networkPolicy.internalAgents.podLabels](./values.yaml#L1265) | object | A map of labels (keys/values) that agent pods must have to be able to connect to the controller | `{}` |
| [persistence.accessMode](./values.yaml#L1229) | string | The PVC access mode | `"ReadWriteOnce"` |
| [persistence.annotations](./values.yaml#L1225) | object | Annotations for the PVC | `{}` |
| [persistence.dataSource](./values.yaml#L1235) | object | Existing data source to clone PVC from | `{}` |
| [persistence.enabled](./values.yaml#L1209) | bool | Enable the use of a Jenkins PVC | `true` |
| [persistence.existingClaim](./values.yaml#L1215) | string | Provide the name of a PVC | `nil` |
| [persistence.labels](./values.yaml#L1227) | object | Labels for the PVC | `{}` |
| [persistence.mounts](./values.yaml#L1247) | list | Additional mounts | `[]` |
| [persistence.size](./values.yaml#L1231) | string | The size of the PVC | `"8Gi"` |
| [persistence.storageClass](./values.yaml#L1223) | string | Storage class for the PVC | `nil` |
| [persistence.subPath](./values.yaml#L1240) | string | SubPath for jenkins-home mount | `nil` |
| [persistence.volumes](./values.yaml#L1242) | list | Additional volumes | `[]` |
| [rbac.create](./values.yaml#L1279) | bool | Whether RBAC resources are created | `true` |
| [rbac.readSecrets](./values.yaml#L1281) | bool | Whether the Jenkins service account should be able to read Kubernetes secrets | `false` |
| [networkPolicy.apiVersion](./values.yaml#L1283) | string | NetworkPolicy ApiVersion | `"networking.k8s.io/v1"` |
| [networkPolicy.enabled](./values.yaml#L1278) | bool | Enable the creation of NetworkPolicy resources | `false` |
| [networkPolicy.externalAgents.except](./values.yaml#L1297) | list | A list of IP sub-ranges to be excluded from the allowlisted IP range | `[]` |
| [networkPolicy.externalAgents.ipCIDR](./values.yaml#L1295) | string | The IP range from which external agents are allowed to connect to the controller, e.g., 172.17.0.0/16 | `nil` |
| [networkPolicy.internalAgents.allowed](./values.yaml#L1287) | bool | Allow internal agents (from the same cluster) to connect to the controller. Agent pods will be filtered based on PodLabels | `true` |
| [networkPolicy.internalAgents.namespaceLabels](./values.yaml#L1291) | object | A map of labels (keys/values) that agent namespaces must have to be able to connect to the controller | `{}` |
| [networkPolicy.internalAgents.podLabels](./values.yaml#L1289) | object | A map of labels (keys/values) that agent pods must have to be able to connect to the controller | `{}` |
| [persistence.accessMode](./values.yaml#L1253) | string | The PVC access mode | `"ReadWriteOnce"` |
| [persistence.annotations](./values.yaml#L1249) | object | Annotations for the PVC | `{}` |
| [persistence.dataSource](./values.yaml#L1259) | object | Existing data source to clone PVC from | `{}` |
| [persistence.enabled](./values.yaml#L1233) | bool | Enable the use of a Jenkins PVC | `true` |
| [persistence.existingClaim](./values.yaml#L1239) | string | Provide the name of a PVC | `nil` |
| [persistence.labels](./values.yaml#L1251) | object | Labels for the PVC | `{}` |
| [persistence.mounts](./values.yaml#L1271) | list | Additional mounts | `[]` |
| [persistence.size](./values.yaml#L1255) | string | The size of the PVC | `"8Gi"` |
| [persistence.storageClass](./values.yaml#L1247) | string | Storage class for the PVC | `nil` |
| [persistence.subPath](./values.yaml#L1264) | string | SubPath for jenkins-home mount | `nil` |
| [persistence.volumes](./values.yaml#L1266) | list | Additional volumes | `[]` |
| [rbac.create](./values.yaml#L1303) | bool | Whether RBAC resources are created | `true` |
| [rbac.readSecrets](./values.yaml#L1305) | bool | Whether the Jenkins service account should be able to read Kubernetes secrets | `false` |
| [renderHelmLabels](./values.yaml#L30) | bool | Enables rendering of the helm.sh/chart label to the annotations | `true` |
| [serviceAccount.annotations](./values.yaml#L1291) | object | Configures annotations for the ServiceAccount | `{}` |
| [serviceAccount.create](./values.yaml#L1285) | bool | Configures if a ServiceAccount with this name should be created | `true` |
| [serviceAccount.extraLabels](./values.yaml#L1293) | object | Configures extra labels for the ServiceAccount | `{}` |
| [serviceAccount.imagePullSecretName](./values.yaml#L1295) | string | Controller ServiceAccount image pull secret | `nil` |
| [serviceAccount.name](./values.yaml#L1289) | string | | `nil` |
| [serviceAccountAgent.annotations](./values.yaml#L1306) | object | Configures annotations for the agent ServiceAccount | `{}` |
| [serviceAccountAgent.create](./values.yaml#L1300) | bool | Configures if an agent ServiceAccount should be created | `false` |
| [serviceAccountAgent.extraLabels](./values.yaml#L1308) | object | Configures extra labels for the agent ServiceAccount | `{}` |
| [serviceAccountAgent.imagePullSecretName](./values.yaml#L1310) | string | Agent ServiceAccount image pull secret | `nil` |
| [serviceAccountAgent.name](./values.yaml#L1304) | string | The name of the agent ServiceAccount to be used by access-controlled resources | `nil` |
| [serviceAccount.annotations](./values.yaml#L1315) | object | Configures annotations for the ServiceAccount | `{}` |
| [serviceAccount.create](./values.yaml#L1309) | bool | Configures if a ServiceAccount with this name should be created | `true` |
| [serviceAccount.extraLabels](./values.yaml#L1317) | object | Configures extra labels for the ServiceAccount | `{}` |
| [serviceAccount.imagePullSecretName](./values.yaml#L1319) | string | Controller ServiceAccount image pull secret | `nil` |
| [serviceAccount.name](./values.yaml#L1313) | string | | `nil` |
| [serviceAccountAgent.annotations](./values.yaml#L1330) | object | Configures annotations for the agent ServiceAccount | `{}` |
| [serviceAccountAgent.create](./values.yaml#L1324) | bool | Configures if an agent ServiceAccount should be created | `false` |
| [serviceAccountAgent.extraLabels](./values.yaml#L1332) | object | Configures extra labels for the agent ServiceAccount | `{}` |
| [serviceAccountAgent.imagePullSecretName](./values.yaml#L1334) | string | Agent ServiceAccount image pull secret | `nil` |
| [serviceAccountAgent.name](./values.yaml#L1328) | string | The name of the agent ServiceAccount to be used by access-controlled resources | `nil` |
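
For orientation, here is what a small override file assembled from the table above could look like. This is only a sketch: the e-mail address and the 16Gi size are illustrative values, not chart defaults.

```yaml
# myvalues.yaml -- illustrative overrides built from keys documented above
controller:
  jenkinsAdminEmail: admin@example.com   # hypothetical address
persistence:
  enabled: true
  size: "16Gi"                           # chart default is "8Gi"
agent:
  websocket: true                        # agent communication via websockets
networkPolicy:
  enabled: true                          # chart default is false
```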

View File

@@ -140,6 +140,14 @@ jenkins:
clouds:
- kubernetes:
containerCapStr: "{{ .Values.agent.containerCap }}"
{{- if .Values.agent.garbageCollection.enabled }}
garbageCollection:
{{- if .Values.agent.garbageCollection.namespaces }}
namespaces: |-
{{- .Values.agent.garbageCollection.namespaces | nindent 10 }}
{{- end }}
timeout: "{{ .Values.agent.garbageCollection.timeout }}"
{{- end }}
{{- if .Values.agent.jnlpregistry }}
jnlpregistry: "{{ .Values.agent.jnlpregistry }}"
{{- end }}
@@ -164,6 +172,8 @@ jenkins:
webSocket: true
{{- end }}
{{- end }}
skipTlsVerify: {{ .Values.agent.skipTlsVerify | default false }}
usageRestricted: {{ .Values.agent.usageRestricted | default false }}
maxRequestsPerHostStr: {{ .Values.agent.maxRequestsPerHostStr | quote }}
retentionTimeout: {{ .Values.agent.retentionTimeout | quote }}
waitForPodSec: {{ .Values.agent.waitForPodSec | quote }}
@@ -248,6 +258,8 @@ jenkins:
webSocket: true
{{- end }}
{{- end }}
skipTlsVerify: {{ .Values.agent.skipTlsVerify | default false }}
usageRestricted: {{ .Values.agent.usageRestricted | default false }}
maxRequestsPerHostStr: {{ .Values.agent.maxRequestsPerHostStr | quote }}
retentionTimeout: {{ .Values.agent.retentionTimeout | quote }}
waitForPodSec: {{ .Values.agent.waitForPodSec | quote }}
@@ -471,7 +483,10 @@ Returns kubernetes pod template configuration as code
nodeUsageMode: {{ quote .Values.agent.nodeUsageMode }}
podRetention: {{ .Values.agent.podRetention }}
showRawYaml: {{ .Values.agent.showRawYaml }}
serviceAccount: "{{ include "jenkins.serviceAccountAgentName" . }}"
{{- $asaname := default (include "jenkins.serviceAccountAgentName" .) .Values.agent.serviceAccount -}}
{{- if or (.Values.agent.useDefaultServiceAccount) (.Values.agent.serviceAccount) }}
serviceAccount: "{{ $asaname }}"
{{- end }}
slaveConnectTimeoutStr: "{{ .Values.agent.connectTimeout }}"
{{- if .Values.agent.volumes }}
volumes:
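
The two new flags rendered above map one-to-one onto values, and the agent service account is now emitted conditionally. A minimal sketch of the corresponding overrides, using only keys documented in this chart (the ServiceAccount name is a placeholder):

```yaml
agent:
  skipTlsVerify: false          # keep certificate verification on; true only for self-signed test setups
  usageRestricted: true         # restrict this cloud to folders that explicitly authorize it
  useDefaultServiceAccount: false
  serviceAccount: jenkins-agent-sa   # placeholder name; leave both keys false/empty to render no serviceAccount at all
```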

View File

@@ -393,9 +393,9 @@ controller:
# Plugins will be installed during Jenkins controller start
# -- List of Jenkins plugins to install. If you don't want to install plugins, set it to `false`
installPlugins:
- kubernetes:4253.v7700d91739e5
- kubernetes:4285.v50ed5f624918
- workflow-aggregator:600.vb_57cdd26fdd7
- git:5.2.2
- git:5.3.0
- configuration-as-code:1836.vccda_4a_122a_a_e
# If set to false, Jenkins will download the minimum required version of all dependencies.
@@ -906,6 +906,14 @@ agent:
# -- The name of the pod template to use for providing default values
defaultsProviderTemplate: ""
# Useful for not including a serviceAccount in the template if `false`
# -- Use `serviceAccountAgent.name` as the default value for defaults template `serviceAccount`
useDefaultServiceAccount: true
# -- Override the default service account
# @default -- `serviceAccountAgent.name` if `agent.useDefaultServiceAccount` is `true`
serviceAccount:
# For connecting to the Jenkins controller
# -- Overrides the Kubernetes Jenkins URL
jenkinsUrl:
@@ -913,6 +921,10 @@ agent:
# connects to the specified host and port, instead of connecting directly to the Jenkins controller
# -- Overrides the Kubernetes Jenkins tunnel
jenkinsTunnel:
# -- Disables the verification of the controller certificate on remote connection. This flag corresponds to the "Disable https certificate check" flag in the Kubernetes plugin UI
skipTlsVerify: false
# -- Enable the possibility to restrict the usage of this agent to specific folders. This flag corresponds to the "Restrict pipeline support to authorized folders" flag in the Kubernetes plugin UI
usageRestricted: false
# -- The connection timeout in seconds for connections to Kubernetes API. The minimum value is 5
kubernetesConnectTimeout: 5
# -- The read timeout in seconds for connections to Kubernetes API. The minimum value is 15
@@ -933,7 +945,7 @@ agent:
# -- Repository to pull the agent jnlp image from
repository: "jenkins/inbound-agent"
# -- Tag of the image to pull
tag: "3256.v88a_f6e922152-1"
tag: "3261.v9c670a_4748a_9-1"
# -- Configure working directory for default agent
workingDir: "/home/jenkins/agent"
nodeUsageMode: "NORMAL"
@@ -1086,6 +1098,18 @@ agent:
# -- Agent Pod base name
podName: "default"
# Enables garbage collection of orphan pods for this Kubernetes cloud. (beta)
garbageCollection:
# -- When enabled, Jenkins will periodically check for orphan pods that have not been touched for the given timeout period and delete them.
enabled: false
# -- Namespaces to look at for garbage collection, in addition to the default namespace defined for the cloud. One namespace per line.
namespaces: ""
# namespaces: |-
# namespaceOne
# namespaceTwo
# -- Timeout value for orphaned pods
timeout: 300
# -- Allows the Pod to remain active for reuse until the configured number of minutes has passed since the last step was executed on it
idleMinutes: 0
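
Putting the new keys together, enabling the (beta) orphan-pod garbage collection for two extra namespaces could look like the sketch below; the namespace names are placeholders:

```yaml
agent:
  garbageCollection:
    enabled: true
    # one namespace per line, in addition to the cloud's default namespace
    namespaces: |-
      jenkins-agents-ci
      jenkins-agents-stage
    timeout: 600   # chart default is 300
```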

View File

@@ -183,14 +183,15 @@ jenkins:
agent:
image:
repository: public.ecr.aws/zero-downtime/jenkins-podman
tag: v0.6.0
tag: v0.6.2
#alwaysPullImage: true
podRetention: "Default"
showRawYaml: false
podName: "podman-aws"
defaultsProviderTemplate: "podman-aws"
annotations:
container.apparmor.security.beta.kubernetes.io/jnlp: unconfined
container.apparmor.security.beta.kubernetes.io/jnlp: "unconfined"
cluster-autoscaler.kubernetes.io/safe-to-evict: "false"
customJenkinsLabels:
- podman-aws-trivy
idleMinutes: 30
@@ -224,8 +225,8 @@ jenkins:
- name: jnlp
resources:
requests:
cpu: "512m"
memory: "1024Mi"
cpu: "200m"
memory: "512Mi"
limits:
cpu: "4"
memory: "6144Mi"

View File

@@ -2,7 +2,7 @@ apiVersion: v2
name: kubezero-istio-gateway
description: KubeZero Umbrella Chart for Istio gateways
type: application
version: 0.22.3
version: 0.22.3-1
home: https://kubezero.com
icon: https://cdn.zero-downtime.net/assets/kubezero/logo-small-64.png
keywords:
@@ -19,4 +19,4 @@ dependencies:
- name: gateway
version: 1.22.3
repository: https://istio-release.storage.googleapis.com/charts
kubeVersion: ">= 1.26.0"
kubeVersion: ">= 1.26.0-0"
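
The `-0` suffix is easy to miss but deliberate: Helm evaluates `kubeVersion` as a semver constraint, and a plain `>= 1.26.0` does not match server versions carrying a pre-release tag, which many managed distributions report (the `v1.26.6-eks-abc123` below is an illustrative example). Appending `-0` sets the lowest possible pre-release as the floor, so such versions satisfy the constraint:

```yaml
# kubeVersion: ">= 1.26.0"    # rejects v1.26.6-eks-abc123 (pre-release tags fail plain constraints)
kubeVersion: ">= 1.26.0-0"    # accepts v1.26.6-eks-abc123 as well as plain v1.26.6
```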

View File

@@ -1,6 +1,6 @@
# kubezero-istio-gateway
![Version: 0.22.3](https://img.shields.io/badge/Version-0.22.3-informational?style=flat-square) ![Type: application](https://img.shields.io/badge/Type-application-informational?style=flat-square)
![Version: 0.22.3-1](https://img.shields.io/badge/Version-0.22.3--1-informational?style=flat-square) ![Type: application](https://img.shields.io/badge/Type-application-informational?style=flat-square)
KubeZero Umbrella Chart for Istio gateways
@@ -16,7 +16,7 @@ Installs Istio Ingress Gateways, requires kubezero-istio to be installed!
## Requirements
Kubernetes: `>= 1.26.0`
Kubernetes: `>= 1.26.0-0`
| Repository | Name | Version |
|------------|------|---------|
@@ -33,7 +33,6 @@ Kubernetes: `>= 1.26.0`
| gateway.autoscaling.minReplicas | int | `1` | |
| gateway.autoscaling.targetCPUUtilizationPercentage | int | `80` | |
| gateway.podAnnotations."proxy.istio.io/config" | string | `"{ \"terminationDrainDuration\": \"20s\" }"` | |
| gateway.priorityClassName | string | `"system-cluster-critical"` | |
| gateway.replicaCount | int | `1` | |
| gateway.resources.limits.memory | string | `"512Mi"` | |
| gateway.resources.requests.cpu | string | `"50m"` | |

View File

@@ -19,6 +19,5 @@ Installs Istio Ingress Gateways, requires kubezero-istio to be installed!
## Resources
- https://istio.io/latest/docs/reference/config/istio.operator.v1alpha1/#IstioOperatorSpec
- https://github.com/istio/istio/blob/master/manifests/profiles/default.yaml
- https://istio.io/latest/docs/setup/install/standalone-operator/
- https://github.com/cilium/cilium/blob/main/operator/pkg/model/translation/envoy_listener.go#L134

View File

@@ -8,7 +8,6 @@ gateway:
replicaCount: 1
terminationGracePeriodSeconds: 120
priorityClassName: system-cluster-critical
resources:
requests:

View File

@@ -2,7 +2,7 @@ apiVersion: v2
name: kubezero-istio
description: KubeZero Umbrella Chart for Istio
type: application
version: 0.22.3
version: 0.22.3-1
home: https://kubezero.com
icon: https://cdn.zero-downtime.net/assets/kubezero/logo-small-64.png
keywords:
@@ -22,7 +22,7 @@ dependencies:
version: 1.22.3
repository: https://istio-release.storage.googleapis.com/charts
- name: kiali-server
version: "1.87.0"
version: "1.88.0"
repository: https://kiali.org/helm-charts
condition: kiali-server.enabled
kubeVersion: ">= 1.26.0"
kubeVersion: ">= 1.26.0-0"

View File

@@ -1,6 +1,6 @@
# kubezero-istio
![Version: 0.22.3](https://img.shields.io/badge/Version-0.22.3-informational?style=flat-square) ![Type: application](https://img.shields.io/badge/Type-application-informational?style=flat-square)
![Version: 0.22.3-1](https://img.shields.io/badge/Version-0.22.3--1-informational?style=flat-square) ![Type: application](https://img.shields.io/badge/Type-application-informational?style=flat-square)
KubeZero Umbrella Chart for Istio
@@ -16,14 +16,14 @@ Installs the Istio control plane
## Requirements
Kubernetes: `>= 1.26.0`
Kubernetes: `>= 1.26.0-0`
| Repository | Name | Version |
|------------|------|---------|
| https://cdn.zero-downtime.net/charts/ | kubezero-lib | >= 0.1.6 |
| https://istio-release.storage.googleapis.com/charts | base | 1.22.3 |
| https://istio-release.storage.googleapis.com/charts | istiod | 1.22.3 |
| https://kiali.org/helm-charts | kiali-server | 1.87.0 |
| https://kiali.org/helm-charts | kiali-server | 1.88.0 |
## Values
@@ -31,19 +31,15 @@ Kubernetes: `>= 1.26.0`
|-----|------|---------|-------------|
| global.defaultPodDisruptionBudget.enabled | bool | `false` | |
| global.logAsJson | bool | `true` | |
| global.priorityClassName | string | `"system-cluster-critical"` | |
| global.variant | string | `"distroless"` | |
| istiod.meshConfig.accessLogEncoding | string | `"JSON"` | |
| istiod.meshConfig.accessLogFile | string | `"/dev/stdout"` | |
| istiod.meshConfig.tcpKeepalive.interval | string | `"60s"` | |
| istiod.meshConfig.tcpKeepalive.time | string | `"120s"` | |
| istiod.pilot.autoscaleEnabled | bool | `false` | |
| istiod.pilot.nodeSelector."node-role.kubernetes.io/control-plane" | string | `""` | |
| istiod.pilot.replicaCount | int | `1` | |
| istiod.pilot.resources.requests.cpu | string | `"100m"` | |
| istiod.pilot.resources.requests.memory | string | `"128Mi"` | |
| istiod.pilot.tolerations[0].effect | string | `"NoSchedule"` | |
| istiod.pilot.tolerations[0].key | string | `"node-role.kubernetes.io/control-plane"` | |
| istiod.telemetry.enabled | bool | `false` | |
| kiali-server.auth.strategy | string | `"anonymous"` | |
| kiali-server.deployment.ingress_enabled | bool | `false` | |

View File

@@ -6,19 +6,11 @@ global:
defaultPodDisruptionBudget:
enabled: false
priorityClassName: "system-cluster-critical"
istiod:
pilot:
autoscaleEnabled: false
replicaCount: 1
nodeSelector:
node-role.kubernetes.io/control-plane: ""
tolerations:
- key: node-role.kubernetes.io/control-plane
effect: NoSchedule
resources:
requests:
cpu: 100m
@@ -57,7 +49,7 @@ kiali-server:
prometheus:
url: "http://metrics-kube-prometheus-st-prometheus.monitoring:9090"
istio:
enabled: false
gateway: istio-ingress/private-ingressgateway

View File

@@ -1,8 +1,8 @@
apiVersion: v2
name: kubezero-redis
description: KubeZero Umbrella Chart for Redis HA
name: kubezero-keyvalue
description: KubeZero KeyValue Module
type: application
version: 0.4.2
version: 0.1.0
home: https://kubezero.com
icon: https://cdn.zero-downtime.net/assets/kubezero/logo-small-64.png
keywords:
@@ -17,12 +17,12 @@ dependencies:
version: ">= 0.1.6"
repository: https://cdn.zero-downtime.net/charts/
- name: redis
version: 16.13.2
version: 20.0.3
repository: https://charts.bitnami.com/bitnami
condition: redis.enabled
- name: redis-cluster
version: 7.6.4
version: 11.0.2
repository: https://charts.bitnami.com/bitnami
condition: redis-cluster.enabled
kubeVersion: ">= 1.25.0"
kubeVersion: ">= 1.26.0"

View File

@@ -1,8 +1,8 @@
# kubezero-redis
# kubezero-keyvalue
![Version: 0.4.1](https://img.shields.io/badge/Version-0.4.1-informational?style=flat-square) ![Type: application](https://img.shields.io/badge/Type-application-informational?style=flat-square)
![Version: 0.1.0](https://img.shields.io/badge/Version-0.1.0-informational?style=flat-square) ![Type: application](https://img.shields.io/badge/Type-application-informational?style=flat-square)
KubeZero Umbrella Chart for Redis HA
KubeZero KeyValue Module
**Homepage:** <https://kubezero.com>
@@ -14,13 +14,13 @@ KubeZero Umbrella Chart for Redis HA
## Requirements
Kubernetes: `>= 1.25.0`
Kubernetes: `>= 1.26.0`
| Repository | Name | Version |
|------------|------|---------|
| https://cdn.zero-downtime.net/charts/ | kubezero-lib | >= 0.1.6 |
| https://charts.bitnami.com/bitnami | redis | 16.10.1 |
| https://charts.bitnami.com/bitnami | redis-cluster | 7.6.1 |
| https://charts.bitnami.com/bitnami | redis | 20.0.3 |
| https://charts.bitnami.com/bitnami | redis-cluster | 11.0.2 |
## Values

View File

@@ -7,3 +7,8 @@ dashboards:
url: https://grafana.com/api/dashboards/11835/revisions/1/download
tags:
- Redis
- name: redis-cluster
url: https://grafana.com/api/dashboards/14615/revisions/1/download
tags:
- Redis
- Redis-Cluster

View File

@@ -0,0 +1,15 @@
{{- if or .Values.redis.metrics.enabled (index .Values "redis-cluster" "metrics" "enabled") }}
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ printf "%s-%s" (include "kubezero-lib.fullname" $) "grafana-dashboards" | trunc 63 | trimSuffix "-" }}
namespace: {{ .Release.Namespace }}
labels:
grafana_dashboard: "1"
{{- include "kubezero-lib.labels" . | nindent 4 }}
binaryData:
redis.json.gz:
H4sIAAAAAAAC/+1daVPcOBr+Pr9C5Zndgq0OsQ0dmqmaDySBmdQAyQYyW7UD1aW2RbcWX2PJQIdifvvq8CHZ6jMNNMEfOKxX1vGejw5Ldz8AYPX7OEoySqyfwZ/sGYA78ZtRIhgilmq9P+1/+vzx+ODst4Mvp1anIAdwgAJO/5TGIaIjlJGK6CPipTihOI54lopAx4ko1IcUkjhLPVTRkiAb4uiDz+mJoVBJP8mbpVQrMtyz3xcd2aUU/ZXhFBk6VdQ/TOEljGBVOPaNyQUTfq0TrlFK8t692epuuXkjOubqEhgxZjUrS0bGqtRkpaLpdZhYiicyM2qy0VSls2Vv2Uv0jeBoGCBCIW1WeWqgNXtZihNGUczyMiqXp6zeCjChpXSrRjHKIMMB/cBLcjpVqsIcc6dZHhTBQcDpNM2Qkj7CviEVe3H0Lg7ilBeYDgdww+4A13HYr263A5xNteii6/tVX8A/wX6AUqo1oZIlGQ1imPpWTrsXfy9+yMVQN7DPyMcEvC/eApdxCqpOAkk+uE3ilKIUOFu3HYApuInTKwJuMB2BEQpCwETCGPA65blfjSBA+Qtb5+l5xH8+XIJxnIEQEy5fIDKCEIVxOgYZxQH+KvrWAUmAIEEgjH18OQbnVghvZbZzC1zDIEMAR/IfsjWGoVQfixVHayKwhhGiwiU4Tm+7K5O4gZzFcUBxwgi2SBRqF2VBIJ9Yq2HOHKfb23Z2uju2280LCHB0JXyDVDChwAZf4UFvhM5wiOKMKoVLGpf8W+hdDdM4i3jdlzAgSKf/wTtoJhFFeUGuP+4O05vujvyxt/Y0DZI5tneZZrl7HbBj8yw9Q54uozi7LsvSFaXsbhY6dFG551nWwPTLwyEUXLHLRIN8ZHqaCjPQ+8l0MIRUOIOq3CHMhqg0Y5HEdKNglGPbSndCHBUENZmM4pt6Zdx0Rszhj+LAP+KBiUzLcQzTKyREwLtRWFjVxhT7n2Kit3LEHneVwngTXOX5ttbKMX9uFC20dK96jJieMjOoaxcmJ+imzmVda3PWJQkzxDPpNRxTuq5oVYcUpyQNksaAoluq6BPISbzoMvG+M72wFEbDGYW5VWENzWTK8J4p56eYsYboGmFxFgnCcSwcMrOkKEIeRb6l5TnjNdc4GidlAKmkkcSEXuJbHaHkiYdxRE/xV1FP1/6HQk9R8x2RNvEVwZNjmEyRxSWzQf4eb7TONSp7Y5283q8R4vKFKQwlCVN2pjo1o7vEQaBHr20WuZiPZb96e9xxOD3Nt1zyepo2xUtWy5HFuNwH7W1rBeRma7Q44VVYMVlYh4swHSI6hW8sRImqmd5ssJ9+zIBEnzKfvSGCUz9L+APDuH2CmLr45A5HLMxFHvrl73Prp+Lh3Lr/8ycOhaU9Xmxu6ryufJkojqAUI6JnUWzZMlMOoUcFo1yNHKAhivzDsgb9ZeaZU+w105nGSaRc0wpCEQ+KTs+2p2hF6QpJjd+YCvdufRF8a+J2E64Tdq0q/66q/II6Xflj3mLrlzn1vnAjM1VfZCyGC16WpiiiZijbRvoVRfoojtALCfbuzGDPxw1cqT4yn5Q2xxASDDhuiwZaNNCigZWiARn7S4XoewFmvn9y7F862D9mSHeXiuchOsyVyQn19NMRvmxYRgkA3kmWLYkAei0CmI4ADLF7GQjQBBJPjQASxGqK6EOAgNoE3BNggJ0aBthZEQZwWgzQYoAWA6wUAzA9Af8C+TSAnHzuZ4SBgcGYoslQAIDXQHuHzyrMeGXzO8MPPbuz110FiDiWSwNfCByil4IkYIAheVfEfcW3DGDaCFJioecIRUM6Eq5NS0em7LOC+YJxGwsbdrSEX1Poc/Cn4QEeI/X+zB01e7Wo2ZsnavooOpWGU2+zaIc7IzIKA9KbBq+HBndVyLJJYYZvSsWRIXXSNEVMoclHypWvgtDoPnehxBDs0Q32haK40yDAPEFRCX76GulRWV6jVTmyg8MmzE14dVxrMv5uV09vSpAx3EcpEm76MogV85e+sgBLtY6xwOUhk7EwF+JdNWrhbi9B/pGMdDpt/pEkpMVUsheHIYx80k/S2EOEhxIh38kzyk54sYbzyPuLRY0dW3elgnPVOv0c8UQRoRJKahEjRJ/RMNfJ2gszBqq5WMDBLfIypuUsgBPkKeGmXClWhyDkMyIMXuRrxE17hkzozTEHiVNaG5wIW+4XMc3LwiyAFF8jq4lpqr0n6gaPW3iLa5Y4yLwrqZ5qn3mzc5PWJucrrF3LbR43lc7HYONjeDttfKAs6Y44J3T9yzcE6Y3ghHj4FhKk78goHWwju/SwjeQmWpw89Fi7djYMZNwUOgMOQ5MuivQjdF02WtsSsmbwQ5lLcJ8TJnHerACUOC0oeVJQojXt+8UkWAElV2gsmtAfYUpm4ZHuMngk3/1W05ZVAhXe9NWMcOfBKp0lmMs3vKHnyl7Z+AUZ/PZ5gsHfmCox/HcsugyYcwCnTwYFceTja+xnMHixULCmXUsBLLtFgd+MAuv4wfrx7aHz1rabivliJ6mcWZs5d5cAhLsPDwh5/DgIEzqeQPsvSuPV4Mj68ttawEieACABX3k32+mt5TZKLLA68tArHbwNa4MFF10HWj34mxfiMWteEMetB147404ETFgmauftVg/WhAY/JBJq4drzm7R7DnDMcVeAxxRmrdMM3Voiq3bVcCWrhhGi8gtrCR4eZA5pQbRwdwdEi8D9/aKQan7opHMgzujjsOCbZ8sYc2RjJ3NnzTHVCaL8w17w4fXHxcBUHTO1WOqJsVS7Bvpy4NTu085uOTtLwKnuEmiKC3KfnJlPWpiFtWrZF4FaKR6O6KnxJIclUZhx32oLwpYDYRq/VoLBSBYWu3/9gVjKmwg5NsFgDDb8wdPCDn/AIAf4vjZnyfmdDxSFcjXu/dt2Me5BMYn2AW4LSVpI8nwgSWOGZylM4my3UzwtulgvdPEKNLP3WUk4xdFwyntPB0eimIKigY++7PbcmGVm1PJbrNYDuh3kvQLXBJzE9FX5/DsTSIvi2t31LYxTikfX2BMqZf3Y27MPbVfVnxCF4nwBv4/5SKgvMjc/Hzq3nF13y9ndsrecn/ecrn1udf4XD1h6dYAnP6zCXEmKvADiULZh+/BNb3u73d41/wSY6ywBNnst1nwZWNPsCIU3kBuFctvTfFXhhdzpH+M/+uokw1cbyvqcgC/MGwmgNcfanJwtK1LXccd73qO12UdWZ3ju/h+L4avjq2z4M8az4uvQg1o32p1mLYZtpyLXGhz2nngqcil06Oy8ZHjotFORjzMVSePkaqPbAdh0VsV0eAHEERVPu+zphX5jq1WF1bTu9D0YBHmflt+5tgaHVYB3vCPLHFSxqj1a7Uxai0JeMgpZbNJpRVNMvWVAxJsWL0z47u+RoIL73X3vN+Vg5FWfi+xMCtPrEY/lxUglP4DXOPXYFJPbqNtG3WcSdctb1Jjpcavivd62pYJbxBuhEP5RXr3mOjKZjoP8FrL0SuZkbrISvLQaqyyaojDhM1rRcJ772arYdGc6otN4NZt6Umf93r+
apIvpZdUIceQFmY/2zefVTrnHkMsxY/ZveG367YhaTKo8Ekv+K0PpeMKlfJqYHC11iG5rgy2LXOHkSxqcjiPPdMpT81bAuj1r2hUUB2/XvIQirvsF79Pz0SWOcHFFneBzX7qX8nKeDuB8FJFQOxp4aSGeFMUtIsPI9NIs+S3Un0qs7reJ1RBimG0K2ZF/F02zdGqjDzzNnDlXGtlJhZARdCYLMqK7NdCku5LxHMOUDwzEdEAS+6vRrk+xD8QhwAsoF6u8H9XeWUq3FunhqvXNafVtXmkIYfC0QvBCQKbVu6X1sA77ZmhgUffDaeACff5ONbN2PatA4CUYKo/yj29eOQW2L07rZ3hQey3BDN2n1cs5w8q76VQAbnWVoYpjKw/b6oN6aHtX+d9RH7ZtlaKMQFzlfye/kPai6AMfCCq6NLMWteA3asFqLe6O+qAsCez6anuLtmjs+xqL8bE1SOMbNkLN4as+3pv3ntwN0624UputTN5zfPv+6OTz71/+81WmVtcY7/xw/38XppizYHsAAA==
redis-cluster.json.gz:
H4sIAAAAAAAC/+1d63PTOhb/3r/CY5idMpMW23mWmfuhLfTCLAUuLezsQjej2Goialu+stw2dLp/++rhh2Q7aRJKm4BhKLWOrMfRefx0JEs3W4ZhDocojBIamy+ML+zZMG7ET0YJQQBZqvnyZPjh4/vjV6evX306MVsZ2Qcj6HP6B4IDSCcwiQuiB2OXoIgiHPIsBYFOI1GoByiIcUJcWNAiPxmj8I3H6VFNoZL+Lm2WUq3IcMt+nrVklwj8O0EE1nQqq39MwDkIQVE48mqTMyb8WSZcQhKnvevvOrt22ohWfXURCBmzqpVFk9qq1GSlohXqCAGF/o6H2HBACndKefIhTum1tVq79q41v+q60UQzxzGsjmBdvazWu+qt7XJMAa1WdqKlVpmaSw8IQ8zyMioXH1ml6aOY5sJUNIRRRgny6Rtekt0qUhWG1HeU5YEhGPmcTkkClfQJ8mpSkYvDQ+xjwgsk4xHYtlqGY9vsR7fbMuxnatFZp/eLvhj/MPZ9SKjWhGL84skIA+KZKe1W/H+2lbK+rM9Fh4z8VeMcE+MjZKJmxJBwBouaTJZCSx01xyGkQs/tTs/uyiQu9acY+xRFjGCJRDGgYeL78olCAtIm2D2n0+kN7L2Okxbgo/BCKLwcRiEaNQbgzpExzxH0vUMcnqNxLgCpUTsHiU9jLZWlu0lMccBTb1tqegCiCIXjolEF3yfMOE2w75XL4m9hMfwmGMXYTxSdTOkxhVGsyGL256b0zBuWycuYQBiWCpJ6APwEpjwuUW+15zPlSe9lEiIq1M7cqslgYiYJhEm05MJWKQNrGfI+YJ0N5oQ99pUyrthzT3m+zgQkfZ7y50rRQnjsQf6MI12pCxYdpyyX3FAKFkKZUUMcasRvbNjR+TQjuzBk8qlmwASxtExiTZBQrJIJ9BIXvq9pFW8X8N3qKDOnG9N3mL5jw2UqFF26hADzl83n//3Mu/T0uTb2sp88wznwY1g7bBReU50tFf5Kd/y57AcLVw/IGFK9E1of4XUkpDMA19vs35CLypCiAG4TbkaGScQfGD4ZxpBZPy+++YZHf/zvq/nU9RnvIflq3n55ygEM+5018uzZM72fKGS+IKQVYypI8h0NnUgWwzEMvSNMAkCrVALPJUDZV8R9qzQKJm/3ERE2IbNeefrJBJ3TKoEKC2keyp4Zn0Tfq7hJOLdanzgC7sWY4CT0dDdhO4NW9o+58oJFTMbcCTxl1eCk0iCpFyCaM3xPn06Y7f8n5Npn4tE36NIXg85A51dugp4cHh7qJC5inPLu+VxeugTH8QQgknfryeDI6luHCtq806YzcYp8IFrK+cp8hcJbJoeAjTh77a8EkumJal0YkTJxeMv07nNqK1VZ+hFnoWs28NE4LI1CXofPPZ/0obrKlo3xhruc1hIVMBsxr/iBtbg/e1DH1bvTcU3QeMKkYULfh8c4iSFvSVnuhHPbKx4Lc6apsTRmJ5jQAyH8O4GiFFwLypBS/NWshI6rUim7fslU7gNm1XKSbRX6wlSPIJdP0gp9tcQfc2HPQUA4hssbn+7A0i3MuTTBJmfJbNujE3D+whybRPDVa8hHiGXuFn1n+nXFplIUhS49ZJa47HlEhrdiSGZTxIvxbDpn7RzyB0g4EpmdgRv82dTPGTSo0Pl7+9eonsZGLEYcyNR2e2EkkPl94eg1P18a2Nw7C4TA5hsIxuaCDj6jHAGXCgF15vr/m5sIe7elBkgpnwsPtHRuQQUatax5sIFJ5Qn6zofH6VRS04GznYVgRoErZ/A8myP+GwIS1+BDTpsKWp15rC3qGId0MqusQBIXLuxfEF7MKutK0BYu6iWYzirJ46SFC3rNQMaskiaCtjivUMg87UxmpdSFizuRCHlGcSl+XqZ1vo/iuWUGapY7sPAHAl2UGvubrQV4MZMTt4tg6QCeMp0p/I838PY8JXaXo+1ZKHt+1IzPeMX0ipk+ogSVmKs+LXVWoLUi6sQ7tYJf65dANea2xPxjQZeWs/IuryYy6pwre+4rgijc9/3P2gxSJ8/EyoJ6nOODzJPc12xmtRlLb9D9wRlLa/HK+rYzs7LBkXPo1NdnrVpbe1Zt/fbBUW+vvjZn1do6s2o72u91LKu+Nvvnz/302OUx4JAiNibgEhohDne+Q4INZsZC1gvoGbHPCGs+dbzfGcugNGOxndKURUv4wTmL3bvPyckDzTYeYyLRzBPumifkOjuUOnvXrGHxmN/NjcFAv1FG/QsG/xoU36D4BsU/HIqXHn2dUPwaAfQqAr8XfA58BGLRjFhb+GTAnVTnBXyR+C0Mx3QiooVaOqzLfhemq1lUlumECMbo5T000jpnSqVuCBAJfxLgIelxLQ2T6fxbOK5cQWnlFdF+DUjzPBieyGhZmUmiIU4JnOnNEGsU+/Fp/b4FcDkuFypYSQisWYPjQeSa7MxQ1KQSDpFOavdFcFhRTaWYCj9fLuhSVYsKfzjmrOgMT7xCnhBdZ15UnGuvCInnK8IZPjHnr0EDvi/jNFufmdW4SAI2MIaVobsL4Ua8VVz8El5zV0+vSgIbLw8Sgd/Ncx8rtlEGWt9r0l8QI+DCOi2PKZvMV2rhIdEIem/5fKpMWxwIMhO/naHBIGBzsXgYEezCOGawUEhBFRV+6QZnz9Y2oLy/VETZ6Vi6kxGcK/Y1zfDm6mLgl8XWrT/CcQZFz5ZZ0E6HxXh1Dd2ET7CfGwxvKN463/OjqEQQf4RiLbLGAQulB3z5r2oMuCfXLaHQ+GEGCdwkSHxA0WXNfgZla6C6/+4ayMmMtufLvZDiqa/W5gupWlSxgEil3PWmKzdRNY5lCq7hkvEt2+pZs2RdRz7ars7yMrTp4/EBiKG+zy0345Xs0o5XkpUurxRkYr2x17w3FWWbVgUoW+wvy7VIfwsv80Zr2/HWDIN5DCAHQOyxcxpgtkz4bLACMrMbZLYRyExr4e8LzJCCzC7gVDRhOGEz7vsFZele6ZIw/QS0tsPbfi97ABZCbK0VuBsgBno3lr+y9Uty+GAzMfFrJksMBh+LLhvMfBgnPxkROzMRMQo9dIm8BPi/CyLeGzgrQMjyisEqANK6Vyy8N2g3ULgeCpexjPnk4Mg+sKyqiDfhynUJV9qdFVBx/+FQMXeFr4KITmte4rT/QILrYPMvD6Z5ggFig++jaCKdPy3SKcBeAANMpsOERzdHU1q35m08emRzhzdvbbByvleAmdMh00bJwvvg2wOCY2ZElkTA64F0T7mBMo4Fy41PMVfpx4G5v2Hg1+7PxLnCdPxMfHi/SJd1pYG6zcr75gR47e4qEV6rCfE2i++bufgeQirPkZGodL1X3ndES5cFoosDTp0tOKGbwhfZ1M1Emu8gvcLkwnjz/P1yELNi0hqEuSgsc7rWoyHM+99a4HSbrQUNyFwcZA4eN17q2KtgzE6DMedhTLvBmD+AMfXvfe4DYlIcXWx3Wwaq2+g5A0wZX+ygchTQw8Ipww2qXw8VMVCtH0MX+H7amdUx6Rps8TQOeUdW2d7ZQLCVIVjbGfxCuzvbzl4DwRoItjAE6z9ynM9
xVsFg7QaDNXG+zcFgcRIYKfTyRmK7YRVzPTN2jGq+ISsCERSO6154PHwWYmpkLXvw5eiN4VI9h1bfkrkeWPVV2ivjMjbeYbqTPzMkEjdbMB8Ktnas3i8EWztWv4GtDWxdV9haiRyu8v1RtwGtDWj91UDraGpse6PHDRV6o3nnDG3k9y1y198bCgP5dcvLg0f63PvXRlbnyowmhGsLmRowVP0sBV4iV54+9GSwZx1Z6k5NNscKxNH83hBxFRqKzPlxFvJSAxcyW2b3mTHv71q79os9u2t9NVvSxhWX7vAz0OorIdD1AQpkG9pHvUG73Xwds8ZbCp29FUDboAFtDWhbCrTVm3Bhx+TKZWo1NCub2U9njo19jA2KDARuKwvHIqLF7KgIus3biSeBIfA88uyR429r9B1NmZmpU9oQZsrWbnDMUhyc9KrUjeYbmp+Bptc7Avk7wen8UkamqlzdeK/bVnqxYMwgcgAKN+n0ZDKd+umhsORC5mRethh4U9xRmLJNPuykt26ZeX0Md0dc1MPxIndAgvS8zaqUFnBJvRtwoeshGZxFIcpOLRcDO5TinV+P1jKYydUufkw/SVRnxChkBtnjZ4LW8L9eXkym5xTVZM+Oek1NvHbZXXFSsXLxlPk3PxZ9iQ4UA21rqWN4XXICZnyBok/EP5mGbp2Vq8YGmBjIY1H/yhpl6tRK63lafebU3MnuKQR+eqwsSL2oq3ydJpWHQKeClZ8ujq927Mx4ZgeIM4XTXosQM5+keDllWH4JnmrhzK4yy7It5aGtPthB8XtX+d1WH9qWSlFMvKP8bqcXiJ5lfeCOHVfOsJ5di1pwTy1YrcXpqA+KN+x7anuztmjs+44FoDRHBF/FqQQXnlZeX5pag9i4RMDIoSOTJ0iM9q7c8m4m8rLZ92A62e99DL/L1OJe2b2t2/8DY8J/imB5AAA=
{{- end }}

View File

@@ -1,4 +1,12 @@
#!/bin/bash
set -ex
. ../../scripts/lib-update.sh
#login_ecr_public
update_helm
# Fetch dashboards from Grafana.com and update ZDT CM
../kubezero-metrics/sync_grafana_dashboards.py dashboards.yaml templates/grafana-dashboards.yaml
update_docs

View File

@@ -3,12 +3,16 @@ redis:
architecture: standalone
# Stick to last OSS version for now
image:
tag: 7.2.5-debian-12-r4
replica:
replicaCount: 0
auth:
enabled: false
master:
persistence:
enabled: false
@@ -16,7 +20,7 @@ redis:
# requests:
# memory: 256Mi
# cpu: 100m
metrics:
enabled: false
serviceMonitor:

View File

@@ -90,7 +90,14 @@ Kubernetes: `>= 1.26.0`
| fluent-bit.serviceMonitor.selector.release | string | `"metrics"` | |
| fluent-bit.testFramework.enabled | bool | `false` | |
| fluent-bit.tolerations[0].effect | string | `"NoSchedule"` | |
| fluent-bit.tolerations[0].key | string | `"kubezero-workergroup"` | |
| fluent-bit.tolerations[0].operator | string | `"Exists"` | |
| fluent-bit.tolerations[1].effect | string | `"NoSchedule"` | |
| fluent-bit.tolerations[1].key | string | `"nvidia.com/gpu"` | |
| fluent-bit.tolerations[1].operator | string | `"Exists"` | |
| fluent-bit.tolerations[2].effect | string | `"NoSchedule"` | |
| fluent-bit.tolerations[2].key | string | `"aws.amazon.com/neuron"` | |
| fluent-bit.tolerations[2].operator | string | `"Exists"` | |
| fluentd.configMapConfigs[0] | string | `"fluentd-prometheus-conf"` | |
| fluentd.dashboards.enabled | bool | `false` | |
| fluentd.enabled | bool | `false` | |

View File

@@ -6,10 +6,9 @@ metadata:
labels:
common.k8s.elastic.co/type: elasticsearch
elasticsearch.k8s.elastic.co/cluster-name: {{ template "kubezero-lib.fullname" $ }}
{{ include "kubezero-lib.labels" . | nindent 4 }}
name: {{ template "kubezero-lib.fullname" $ }}-es-elastic-user
namespace: {{ .Release.Namespace }}
labels:
{{ include "kubezero-lib.labels" . | indent 4 }}
data:
elastic: {{ .Values.elastic_password | b64enc | quote }}
---
@@ -20,10 +19,9 @@ metadata:
labels:
common.k8s.elastic.co/type: elasticsearch
elasticsearch.k8s.elastic.co/cluster-name: {{ template "kubezero-lib.fullname" $ }}
{{ include "kubezero-lib.labels" . | nindent 4 }}
name: {{ template "kubezero-lib.fullname" $ }}-es-elastic-username
namespace: {{ .Release.Namespace }}
labels:
{{ include "kubezero-lib.labels" . | indent 4 }}
data:
username: {{ "elastic" | b64enc | quote }}
{{- end }}

View File

@@ -240,6 +240,8 @@ fluent-bit:
#dnsPolicy: ClusterFirstWithHostNet
tolerations:
- key: node-role.kubernetes.io/control-plane
effect: NoSchedule
- key: kubezero-workergroup
effect: NoSchedule
operator: Exists

View File

@@ -2,7 +2,7 @@ apiVersion: v2
name: kubezero-mq
description: KubeZero umbrella chart for MQ systems like NATS, RabbitMQ
type: application
version: 0.3.8
version: 0.3.10
home: https://kubezero.com
icon: https://cdn.zero-downtime.net/assets/kubezero/logo-small-64.png
keywords:
@@ -18,15 +18,15 @@ dependencies:
version: ">= 0.1.6"
repository: https://cdn.zero-downtime.net/charts/
- name: nats
version: 0.8.4
#repository: https://nats-io.github.io/k8s/helm/charts/
version: 1.2.2
repository: https://nats-io.github.io/k8s/helm/charts/
condition: nats.enabled
- name: rabbitmq
version: 12.5.7
version: 14.6.6
repository: https://charts.bitnami.com/bitnami
condition: rabbitmq.enabled
- name: rabbitmq-cluster-operator
version: 3.10.7
version: 4.3.19
repository: https://charts.bitnami.com/bitnami
condition: rabbitmq-cluster-operator.enabled
kubeVersion: ">= 1.25.0"
kubeVersion: ">= 1.26.0"

View File

@@ -1,6 +1,6 @@
# kubezero-mq
![Version: 0.3.5](https://img.shields.io/badge/Version-0.3.5-informational?style=flat-square) ![Type: application](https://img.shields.io/badge/Type-application-informational?style=flat-square)
![Version: 0.3.10](https://img.shields.io/badge/Version-0.3.10-informational?style=flat-square) ![Type: application](https://img.shields.io/badge/Type-application-informational?style=flat-square)
KubeZero umbrella chart for MQ systems like NATS, RabbitMQ
@@ -14,14 +14,14 @@ KubeZero umbrella chart for MQ systems like NATS, RabbitMQ
## Requirements
Kubernetes: `>= 1.20.0`
Kubernetes: `>= 1.25.0`
| Repository | Name | Version |
|------------|------|---------|
| | nats | 0.8.4 |
| https://cdn.zero-downtime.net/charts/ | kubezero-lib | >= 0.1.6 |
| https://charts.bitnami.com/bitnami | rabbitmq | 11.3.2 |
| https://charts.bitnami.com/bitnami | rabbitmq-cluster-operator | 3.1.4 |
| https://charts.bitnami.com/bitnami | rabbitmq | 14.6.6 |
| https://charts.bitnami.com/bitnami | rabbitmq-cluster-operator | 4.3.19 |
| https://nats-io.github.io/k8s/helm/charts/ | nats | 1.2.2 |
## Values
@@ -64,7 +64,7 @@ Kubernetes: `>= 1.20.0`
| rabbitmq.podAntiAffinityPreset | string | `""` | |
| rabbitmq.replicaCount | int | `1` | |
| rabbitmq.resources.requests.cpu | string | `"100m"` | |
| rabbitmq.resources.requests.memory | string | `"256Mi"` | |
| rabbitmq.resources.requests.memory | string | `"512Mi"` | |
| rabbitmq.topologySpreadConstraints | string | `"- maxSkew: 1\n topologyKey: topology.kubernetes.io/zone\n whenUnsatisfiable: DoNotSchedule\n labelSelector:\n matchLabels: {{- include \"common.labels.matchLabels\" . | nindent 6 }}\n- maxSkew: 1\n topologyKey: kubernetes.io/hostname\n whenUnsatisfiable: DoNotSchedule\n labelSelector:\n matchLabels: {{- include \"common.labels.matchLabels\" . | nindent 6 }}"` | |
| rabbitmq.ulimitNofiles | string | `""` | |
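
For readability, the escaped one-line default of `rabbitmq.topologySpreadConstraints` above expands to the following template fragment (a direct unescaping, reproduced here only for clarity):

```yaml
- maxSkew: 1
  topologyKey: topology.kubernetes.io/zone
  whenUnsatisfiable: DoNotSchedule
  labelSelector:
    matchLabels: {{- include "common.labels.matchLabels" . | nindent 6 }}
- maxSkew: 1
  topologyKey: kubernetes.io/hostname
  whenUnsatisfiable: DoNotSchedule
  labelSelector:
    matchLabels: {{- include "common.labels.matchLabels" . | nindent 6 }}
```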

View File

@@ -1,22 +0,0 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*~
# Various IDEs
.project
.idea/
*.tmproj
.vscode/

View File

@@ -1,19 +0,0 @@
apiVersion: v2
appVersion: 2.3.2
description: A Helm chart for the NATS.io High Speed Cloud Native Distributed Communications
Technology.
home: http://github.com/nats-io/k8s
icon: https://nats.io/img/nats-icon-color.png
keywords:
- nats
- messaging
- cncf
maintainers:
- email: wally@nats.io
name: Waldemar Quevedo
- email: colin@nats.io
name: Colin Sullivan
- email: jaime@nats.io
name: Jaime Piña
name: nats
version: 0.8.4

View File

@@ -1,596 +0,0 @@
# NATS Server
[NATS](https://nats.io) is a simple, secure and performant communications system for digital systems, services and devices. NATS is part of the Cloud Native Computing Foundation ([CNCF](https://cncf.io)). NATS has over [30 client language implementations](https://nats.io/download/), and its server can run on-premise, in the cloud, at the edge, and even on a Raspberry Pi. NATS can secure and simplify design and operation of modern distributed systems.
## TL;DR;
```console
helm repo add nats https://nats-io.github.io/k8s/helm/charts/
helm install my-nats nats/nats
```
## Configuration
### Server Image
```yaml
nats:
image: nats:2.1.7-alpine3.11
pullPolicy: IfNotPresent
```
### Limits
```yaml
nats:
# The number of connect attempts against discovered routes.
connectRetries: 30
# How many seconds should pass before sending a PING
# to a client that has no activity.
pingInterval:
# Server settings.
limits:
maxConnections:
maxSubscriptions:
maxControlLine:
maxPayload:
writeDeadline:
maxPending:
maxPings:
lameDuckDuration:
# Number of seconds to wait for client connections to end after the pod termination is requested
terminationGracePeriodSeconds: 60
```
### Logging
*Note*: It is not recommended to enable trace or debug in production since enabling it will significantly degrade performance.
```yaml
nats:
logging:
debug:
trace:
logtime:
connectErrorReports:
reconnectErrorReports:
```
### TLS setup for client connections
You can find more on how to set up and troubleshoot TLS connections at:
https://docs.nats.io/nats-server/configuration/securing_nats/tls
```yaml
nats:
tls:
secret:
name: nats-client-tls
ca: "ca.crt"
cert: "tls.crt"
key: "tls.key"
```
## Clustering
If clustering is enabled, then a 3-node cluster will be set up. More info at:
https://docs.nats.io/nats-server/configuration/clustering#nats-server-clustering
```yaml
cluster:
enabled: true
replicas: 3
tls:
secret:
name: nats-server-tls
ca: "ca.crt"
cert: "tls.crt"
key: "tls.key"
```
Example:
```sh
$ helm install nats nats/nats --set cluster.enabled=true
```
## Leafnodes
Leafnode connections can be used to extend a cluster. More info at:
https://docs.nats.io/nats-server/configuration/leafnodes
```yaml
leafnodes:
enabled: true
remotes:
- url: "tls://connect.ngs.global:7422"
# credentials:
# secret:
# name: leafnode-creds
# key: TA.creds
# tls:
# secret:
# name: nats-leafnode-tls
# ca: "ca.crt"
# cert: "tls.crt"
# key: "tls.key"
#######################
# #
# TLS Configuration #
# #
#######################
#
# # You can find more on how to set up and troubleshoot TLS connections at:
#
# # https://docs.nats.io/nats-server/configuration/securing_nats/tls
#
tls:
secret:
name: nats-client-tls
ca: "ca.crt"
cert: "tls.crt"
key: "tls.key"
```
## Setting up External Access
### Using HostPorts
If both external access and advertise are enabled, an initializer container is used to gather the public IPs. This container requires enough RBAC policy to look up the public IP of the node where it is running.
For example, to set up external access for a cluster and advertise the public IP to clients:
```yaml
nats:
# Toggle whether to enable external access.
# This binds a host port for clients, gateways and leafnodes.
externalAccess: true
# Toggle to disable client advertisements (connect_urls),
# in case of running behind a load balancer (which is not recommended)
# it might be required to disable advertisements.
advertise: true
# In case both external access and advertise are enabled
# then a service account would be required to be able to
# gather the public ip from a node.
serviceAccount: "nats-server"
```
Here the service account named `nats-server` has, for example, the following RBAC policy:
```yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: nats-server
namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: nats-server
rules:
- apiGroups: [""]
resources:
- nodes
verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: nats-server-binding
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: nats-server
subjects:
- kind: ServiceAccount
name: nats-server
namespace: default
```
The container image of the initializer can be customized via:
```yaml
bootconfig:
image: natsio/nats-boot-config:latest
pullPolicy: IfNotPresent
```
### Using LoadBalancers
When using a load balancer for external access, it is recommended to set `noAdvertise: true` so that the internal IPs of the NATS servers are not advertised to clients connecting through the load balancer.
```yaml
nats:
image: nats:alpine
cluster:
enabled: true
noAdvertise: true
leafnodes:
enabled: true
noAdvertise: true
natsbox:
enabled: true
```
You could then use an L4-enabled load balancer to connect to NATS, for example:
```yaml
apiVersion: v1
kind: Service
metadata:
name: nats-lb
spec:
type: LoadBalancer
selector:
app.kubernetes.io/name: nats
ports:
- protocol: TCP
port: 4222
targetPort: 4222
name: nats
- protocol: TCP
port: 7422
targetPort: 7422
name: leafnodes
- protocol: TCP
port: 7522
targetPort: 7522
name: gateways
```
## Gateways
A super cluster can be formed by pointing to remote gateways.
You can find more about gateways in the NATS documentation:
https://docs.nats.io/nats-server/configuration/gateways
```yaml
gateway:
enabled: false
name: 'default'
#############################
# #
# List of remote gateways #
# #
#############################
# gateways:
# - name: other
# url: nats://my-gateway-url:7522
#######################
# #
# TLS Configuration #
# #
#######################
#
# # You can find more on how to set up and troubleshoot TLS connections at:
#
# # https://docs.nats.io/nats-server/configuration/securing_nats/tls
#
# tls:
# secret:
# name: nats-client-tls
# ca: "ca.crt"
# cert: "tls.crt"
# key: "tls.key"
```
## Auth setup
### Auth with a Memory Resolver
```yaml
auth:
enabled: true
# Reference to the Operator JWT.
operatorjwt:
configMap:
name: operator-jwt
key: KO.jwt
# Public key of the System Account
systemAccount:
resolver:
############################
# #
# Memory resolver settings #
# #
##############################
type: memory
#
# Use a configmap reference which will be mounted
# into the container.
#
configMap:
name: nats-accounts
key: resolver.conf
```
### Auth using an Account Server Resolver
```yaml
auth:
enabled: true
# Reference to the Operator JWT.
operatorjwt:
configMap:
name: operator-jwt
key: KO.jwt
# Public key of the System Account
systemAccount:
resolver:
##########################
# #
# URL resolver settings #
# #
##########################
type: URL
url: "http://nats-account-server:9090/jwt/v1/accounts/"
```
## JetStream
### Setting up Memory and File Storage
```yaml
nats:
image: nats:alpine
jetstream:
enabled: true
memStorage:
enabled: true
size: 2Gi
fileStorage:
enabled: true
size: 1Gi
storageDirectory: /data/
storageClassName: default
```
### Using with an existing PersistentVolumeClaim
For example, given the following `PersistentVolumeClaim`:
```yaml
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: nats-js-disk
annotations:
volume.beta.kubernetes.io/storage-class: "default"
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 3Gi
```
You can start JetStream so that one pod is bound to it:
```yaml
nats:
image: nats:alpine
jetstream:
enabled: true
fileStorage:
enabled: true
storageDirectory: /data/
existingClaim: nats-js-disk
claimStorageSize: 3Gi
```
### Clustering example
```yaml
nats:
image: nats:alpine
jetstream:
enabled: true
memStorage:
enabled: true
size: "2Gi"
fileStorage:
enabled: true
size: "1Gi"
storageDirectory: /data/
storageClassName: default
cluster:
enabled: true
# Cluster name is required; by default it will be the release name.
# name: "nats"
replicas: 3
```
## Misc
### NATS Box
A lightweight container with NATS and NATS Streaming utilities that is deployed alongside the cluster to confirm the setup.
You can find the image at: https://github.com/nats-io/nats-box
```yaml
natsbox:
enabled: true
image: nats:alpine
pullPolicy: IfNotPresent
# credentials:
# secret:
# name: nats-sys-creds
# key: sys.creds
```
### Configuration Reload sidecar
The NATS config reloader image to use:
```yaml
reloader:
enabled: true
image: natsio/nats-server-config-reloader:latest
pullPolicy: IfNotPresent
```
### Prometheus Exporter sidecar
You can toggle whether to start the sidecar that can be used to feed metrics to Prometheus:
```yaml
exporter:
enabled: true
image: natsio/prometheus-nats-exporter:latest
pullPolicy: IfNotPresent
```
### Prometheus operator ServiceMonitor support
You can enable the Prometheus operator ServiceMonitor:
```yaml
exporter:
# You have to enable exporter first
enabled: true
serviceMonitor:
enabled: true
## Specify the namespace where Prometheus Operator is running
# namespace: monitoring
# ...
```
### Pod Customizations
#### Security Context
```yaml
# Toggle whether to set up a Pod Security Context
# ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/
securityContext:
fsGroup: 1000
runAsUser: 1000
runAsNonRoot: true
```
#### Affinity
<https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity>
`matchExpressions` must be configured according to your setup
```yaml
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: node.kubernetes.io/purpose
operator: In
values:
- nats
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: app
operator: In
values:
- nats
- stan
topologyKey: "kubernetes.io/hostname"
```
#### Service topology
[Service topology](https://kubernetes.io/docs/concepts/services-networking/service-topology/) is disabled by default, but can be enabled by setting `topologyKeys`. For example:
```yaml
topologyKeys:
- "kubernetes.io/hostname"
- "topology.kubernetes.io/zone"
- "topology.kubernetes.io/region"
```
#### CPU/Memory Resource Requests/Limits
Sets the pods' CPU/memory requests and limits
```yaml
nats:
resources:
requests:
cpu: 2
memory: 4Gi
limits:
cpu: 4
memory: 6Gi
```
No resources are set by default.
#### Annotations
<https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations>
```yaml
podAnnotations:
key1 : "value1",
key2 : "value2"
```
### Name Overrides
You can change the name of the resources as needed with:
```yaml
nameOverride: "my-nats"
```
### Image Pull Secrets
```yaml
imagePullSecrets:
- name: myRegistry
```
Adds this to the StatefulSet:
```yaml
spec:
imagePullSecrets:
- name: myRegistry
```

View File

@@ -1,21 +0,0 @@
// Operator "KO"
operator: eyJ0eXAiOiJqd3QiLCJhbGciOiJlZDI1NTE5In0.eyJqdGkiOiI0U09OUjZLT05FMzNFRFhRWE5IR1JUSEg2TEhPM0dFU0xXWlJYNlNENTQ2MjQyTE80QlVRIiwiaWF0IjoxNTgzNzg1MTMyLCJpc3MiOiJPQ0RHNk9ZUFdYRlNLTEdTSFBBUkdTUllDS0xKSVFJMklORUtVVkFGMzJNVzU2VlRMTDRGV0o0SSIsIm5hbWUiOiJLTyIsInN1YiI6Ik9DREc2T1lQV1hGU0tMR1NIUEFSR1NSWUNLTEpJUUkySU5FS1VWQUYzMk1XNTZWVExMNEZXSjRJIiwidHlwZSI6Im9wZXJhdG9yIiwibmF0cyI6e319.0039eTgLj-uyYFoWB3rivGP0WyIZkb_vrrE6tnqcNgIDM59o92nw_Rvb-hrvsK30QWqwm_W8BpVZHDMEY-CiBQ
system_account: ACLZ6OSWC7BXFT4VNVBDMWUFNBIVGHTUONOXI6TCBP3QHOD34JIDSRYW
resolver: MEMORY
resolver_preload: {
// Account "A"
AA3NXTHTXOHCTPIBKEDHNAYAHJ4CO7ERCOJFYCXOXVEOPZTMW55WX32Z: eyJ0eXAiOiJqd3QiLCJhbGciOiJlZDI1NTE5In0.eyJqdGkiOiJSM0QyWUM1UVlJWk4zS0hYR1FFRTZNQTRCRVU3WkFWQk5LSElJNTNOM0tLRVRTTVZEVVRRIiwiaWF0IjoxNTgzNzg1MTMyLCJpc3MiOiJPQ0RHNk9ZUFdYRlNLTEdTSFBBUkdTUllDS0xKSVFJMklORUtVVkFGMzJNVzU2VlRMTDRGV0o0SSIsIm5hbWUiOiJBIiwic3ViIjoiQUEzTlhUSFRYT0hDVFBJQktFREhOQVlBSEo0Q083RVJDT0pGWUNYT1hWRU9QWlRNVzU1V1gzMloiLCJ0eXBlIjoiYWNjb3VudCIsIm5hdHMiOnsiZXhwb3J0cyI6W3sibmFtZSI6InRlc3QiLCJzdWJqZWN0IjoidGVzdCIsInR5cGUiOiJzZXJ2aWNlIiwicmVzcG9uc2VfdHlwZSI6IlNpbmdsZXRvbiIsInNlcnZpY2VfbGF0ZW5jeSI6eyJzYW1wbGluZyI6MTAwLCJyZXN1bHRzIjoibGF0ZW5jeS5vbi50ZXN0In19XSwibGltaXRzIjp7InN1YnMiOi0xLCJjb25uIjotMSwibGVhZiI6LTEsImltcG9ydHMiOi0xLCJleHBvcnRzIjotMSwiZGF0YSI6LTEsInBheWxvYWQiOi0xLCJ3aWxkY2FyZHMiOnRydWV9fX0.W7oEjpQA986Hai3t8UOiJwCcVDYm2sj7L545oYZhQtYbydh_ragPn8pc0f1pA1krMz_ZDuBwKHLZRgXuNSysDQ
// Account "STAN"
AAYNFTMTKWXZEPPSEZLECMHE3VBULMIUO2QGVY3P4VCI7NNQC3TVX2PB: eyJ0eXAiOiJqd3QiLCJhbGciOiJlZDI1NTE5In0.eyJqdGkiOiJRSUozV0I0MjdSVU5RSlZFM1dRVEs3TlNaVlpaNkRQT01KWkdHMlhTMzQ2WFNQTVZERElBIiwiaWF0IjoxNTgzNzg1MTMyLCJpc3MiOiJPQ0RHNk9ZUFdYRlNLTEdTSFBBUkdTUllDS0xKSVFJMklORUtVVkFGMzJNVzU2VlRMTDRGV0o0SSIsIm5hbWUiOiJTVEFOIiwic3ViIjoiQUFZTkZUTVRLV1haRVBQU0VaTEVDTUhFM1ZCVUxNSVVPMlFHVlkzUDRWQ0k3Tk5RQzNUVlgyUEIiLCJ0eXBlIjoiYWNjb3VudCIsIm5hdHMiOnsibGltaXRzIjp7InN1YnMiOi0xLCJjb25uIjotMSwibGVhZiI6LTEsImltcG9ydHMiOi0xLCJleHBvcnRzIjotMSwiZGF0YSI6LTEsInBheWxvYWQiOi0xLCJ3aWxkY2FyZHMiOnRydWV9fX0.SPyQdAFmoON577s-eZP4K3-9QXYhTn9Xqy3aDGeHvHYRE9IVD47Eu7d38ZiySPlxgkdM_WXZn241_59d07axBA
// Account "SYS"
ACLZ6OSWC7BXFT4VNVBDMWUFNBIVGHTUONOXI6TCBP3QHOD34JIDSRYW: eyJ0eXAiOiJqd3QiLCJhbGciOiJlZDI1NTE5In0.eyJqdGkiOiJGSk1TSEROVlVGUEM0U0pSRlcyV0NZT1hRWUFDM1hNNUJaWTRKQUZWUTc1V0lEUkdDN0lBIiwiaWF0IjoxNTgzNzg1MTMyLCJpc3MiOiJPQ0RHNk9ZUFdYRlNLTEdTSFBBUkdTUllDS0xKSVFJMklORUtVVkFGMzJNVzU2VlRMTDRGV0o0SSIsIm5hbWUiOiJTWVMiLCJzdWIiOiJBQ0xaNk9TV0M3QlhGVDRWTlZCRE1XVUZOQklWR0hUVU9OT1hJNlRDQlAzUUhPRDM0SklEU1JZVyIsInR5cGUiOiJhY2NvdW50IiwibmF0cyI6eyJsaW1pdHMiOnsic3VicyI6LTEsImNvbm4iOi0xLCJsZWFmIjotMSwiaW1wb3J0cyI6LTEsImV4cG9ydHMiOi0xLCJkYXRhIjotMSwicGF5bG9hZCI6LTEsIndpbGRjYXJkcyI6dHJ1ZX19fQ.owW08dIa97STqgT0ux-5sD00Ad0I3HstJKTmh1CGVpsQwelaZdrBuia-4XgCgN88zuLokPMfWI_pkxXU_iB0BA
// Account "B"
ADOR7Q5KMWC2XIWRRRC4MZUDCPYG3UMAIWDRX6M2MFDY5SR6HQAAMHJA: eyJ0eXAiOiJqd3QiLCJhbGciOiJlZDI1NTE5In0.eyJqdGkiOiJRQjdIRFg3VUZYN01KUjZPS1E2S1dRSlVUUEpWWENTNkJCWjQ3SDVVTFdVVFNRUU1NQzJRIiwiaWF0IjoxNTgzNzg1MTMyLCJpc3MiOiJPQ0RHNk9ZUFdYRlNLTEdTSFBBUkdTUllDS0xKSVFJMklORUtVVkFGMzJNVzU2VlRMTDRGV0o0SSIsIm5hbWUiOiJCIiwic3ViIjoiQURPUjdRNUtNV0MyWElXUlJSQzRNWlVEQ1BZRzNVTUFJV0RSWDZNMk1GRFk1U1I2SFFBQU1ISkEiLCJ0eXBlIjoiYWNjb3VudCIsIm5hdHMiOnsiaW1wb3J0cyI6W3sibmFtZSI6InRlc3QiLCJzdWJqZWN0IjoidGVzdCIsImFjY291bnQiOiJBQTNOWFRIVFhPSENUUElCS0VESE5BWUFISjRDTzdFUkNPSkZZQ1hPWFZFT1BaVE1XNTVXWDMyWiIsInRvIjoidGVzdCIsInR5cGUiOiJzZXJ2aWNlIn1dLCJsaW1pdHMiOnsic3VicyI6LTEsImNvbm4iOi0xLCJsZWFmIjotMSwiaW1wb3J0cyI6LTEsImV4cG9ydHMiOi0xLCJkYXRhIjotMSwicGF5bG9hZCI6LTEsIndpbGRjYXJkcyI6dHJ1ZX19fQ.r5p_sGt_hmDfWWIJGrLodAM8VfXPeUzsbRtzrMTBGGkcLdi4jqAHXRu09CmFISEzX2VKeGuOonGuAMOFotvICg
}

View File

@@ -1,24 +0,0 @@
# Setup memory preload config.
auth:
enabled: true
resolver:
type: memory
preload: |
// Operator "KO"
operator: eyJ0eXAiOiJqd3QiLCJhbGciOiJlZDI1NTE5In0.eyJqdGkiOiI0U09OUjZLT05FMzNFRFhRWE5IR1JUSEg2TEhPM0dFU0xXWlJYNlNENTQ2MjQyTE80QlVRIiwiaWF0IjoxNTgzNzg1MTMyLCJpc3MiOiJPQ0RHNk9ZUFdYRlNLTEdTSFBBUkdTUllDS0xKSVFJMklORUtVVkFGMzJNVzU2VlRMTDRGV0o0SSIsIm5hbWUiOiJLTyIsInN1YiI6Ik9DREc2T1lQV1hGU0tMR1NIUEFSR1NSWUNLTEpJUUkySU5FS1VWQUYzMk1XNTZWVExMNEZXSjRJIiwidHlwZSI6Im9wZXJhdG9yIiwibmF0cyI6e319.0039eTgLj-uyYFoWB3rivGP0WyIZkb_vrrE6tnqcNgIDM59o92nw_Rvb-hrvsK30QWqwm_W8BpVZHDMEY-CiBQ
system_account: ACLZ6OSWC7BXFT4VNVBDMWUFNBIVGHTUONOXI6TCBP3QHOD34JIDSRYW
resolver_preload: {
// Account "A"
AA3NXTHTXOHCTPIBKEDHNAYAHJ4CO7ERCOJFYCXOXVEOPZTMW55WX32Z: eyJ0eXAiOiJqd3QiLCJhbGciOiJlZDI1NTE5In0.eyJqdGkiOiJSM0QyWUM1UVlJWk4zS0hYR1FFRTZNQTRCRVU3WkFWQk5LSElJNTNOM0tLRVRTTVZEVVRRIiwiaWF0IjoxNTgzNzg1MTMyLCJpc3MiOiJPQ0RHNk9ZUFdYRlNLTEdTSFBBUkdTUllDS0xKSVFJMklORUtVVkFGMzJNVzU2VlRMTDRGV0o0SSIsIm5hbWUiOiJBIiwic3ViIjoiQUEzTlhUSFRYT0hDVFBJQktFREhOQVlBSEo0Q083RVJDT0pGWUNYT1hWRU9QWlRNVzU1V1gzMloiLCJ0eXBlIjoiYWNjb3VudCIsIm5hdHMiOnsiZXhwb3J0cyI6W3sibmFtZSI6InRlc3QiLCJzdWJqZWN0IjoidGVzdCIsInR5cGUiOiJzZXJ2aWNlIiwicmVzcG9uc2VfdHlwZSI6IlNpbmdsZXRvbiIsInNlcnZpY2VfbGF0ZW5jeSI6eyJzYW1wbGluZyI6MTAwLCJyZXN1bHRzIjoibGF0ZW5jeS5vbi50ZXN0In19XSwibGltaXRzIjp7InN1YnMiOi0xLCJjb25uIjotMSwibGVhZiI6LTEsImltcG9ydHMiOi0xLCJleHBvcnRzIjotMSwiZGF0YSI6LTEsInBheWxvYWQiOi0xLCJ3aWxkY2FyZHMiOnRydWV9fX0.W7oEjpQA986Hai3t8UOiJwCcVDYm2sj7L545oYZhQtYbydh_ragPn8pc0f1pA1krMz_ZDuBwKHLZRgXuNSysDQ
// Account "STAN"
AAYNFTMTKWXZEPPSEZLECMHE3VBULMIUO2QGVY3P4VCI7NNQC3TVX2PB: eyJ0eXAiOiJqd3QiLCJhbGciOiJlZDI1NTE5In0.eyJqdGkiOiJRSUozV0I0MjdSVU5RSlZFM1dRVEs3TlNaVlpaNkRQT01KWkdHMlhTMzQ2WFNQTVZERElBIiwiaWF0IjoxNTgzNzg1MTMyLCJpc3MiOiJPQ0RHNk9ZUFdYRlNLTEdTSFBBUkdTUllDS0xKSVFJMklORUtVVkFGMzJNVzU2VlRMTDRGV0o0SSIsIm5hbWUiOiJTVEFOIiwic3ViIjoiQUFZTkZUTVRLV1haRVBQU0VaTEVDTUhFM1ZCVUxNSVVPMlFHVlkzUDRWQ0k3Tk5RQzNUVlgyUEIiLCJ0eXBlIjoiYWNjb3VudCIsIm5hdHMiOnsibGltaXRzIjp7InN1YnMiOi0xLCJjb25uIjotMSwibGVhZiI6LTEsImltcG9ydHMiOi0xLCJleHBvcnRzIjotMSwiZGF0YSI6LTEsInBheWxvYWQiOi0xLCJ3aWxkY2FyZHMiOnRydWV9fX0.SPyQdAFmoON577s-eZP4K3-9QXYhTn9Xqy3aDGeHvHYRE9IVD47Eu7d38ZiySPlxgkdM_WXZn241_59d07axBA
// Account "SYS"
ACLZ6OSWC7BXFT4VNVBDMWUFNBIVGHTUONOXI6TCBP3QHOD34JIDSRYW: eyJ0eXAiOiJqd3QiLCJhbGciOiJlZDI1NTE5In0.eyJqdGkiOiJGSk1TSEROVlVGUEM0U0pSRlcyV0NZT1hRWUFDM1hNNUJaWTRKQUZWUTc1V0lEUkdDN0lBIiwiaWF0IjoxNTgzNzg1MTMyLCJpc3MiOiJPQ0RHNk9ZUFdYRlNLTEdTSFBBUkdTUllDS0xKSVFJMklORUtVVkFGMzJNVzU2VlRMTDRGV0o0SSIsIm5hbWUiOiJTWVMiLCJzdWIiOiJBQ0xaNk9TV0M3QlhGVDRWTlZCRE1XVUZOQklWR0hUVU9OT1hJNlRDQlAzUUhPRDM0SklEU1JZVyIsInR5cGUiOiJhY2NvdW50IiwibmF0cyI6eyJsaW1pdHMiOnsic3VicyI6LTEsImNvbm4iOi0xLCJsZWFmIjotMSwiaW1wb3J0cyI6LTEsImV4cG9ydHMiOi0xLCJkYXRhIjotMSwicGF5bG9hZCI6LTEsIndpbGRjYXJkcyI6dHJ1ZX19fQ.owW08dIa97STqgT0ux-5sD00Ad0I3HstJKTmh1CGVpsQwelaZdrBuia-4XgCgN88zuLokPMfWI_pkxXU_iB0BA
// Account "B"
ADOR7Q5KMWC2XIWRRRC4MZUDCPYG3UMAIWDRX6M2MFDY5SR6HQAAMHJA: eyJ0eXAiOiJqd3QiLCJhbGciOiJlZDI1NTE5In0.eyJqdGkiOiJRQjdIRFg3VUZYN01KUjZPS1E2S1dRSlVUUEpWWENTNkJCWjQ3SDVVTFdVVFNRUU1NQzJRIiwiaWF0IjoxNTgzNzg1MTMyLCJpc3MiOiJPQ0RHNk9ZUFdYRlNLTEdTSFBBUkdTUllDS0xKSVFJMklORUtVVkFGMzJNVzU2VlRMTDRGV0o0SSIsIm5hbWUiOiJCIiwic3ViIjoiQURPUjdRNUtNV0MyWElXUlJSQzRNWlVEQ1BZRzNVTUFJV0RSWDZNMk1GRFk1U1I2SFFBQU1ISkEiLCJ0eXBlIjoiYWNjb3VudCIsIm5hdHMiOnsiaW1wb3J0cyI6W3sibmFtZSI6InRlc3QiLCJzdWJqZWN0IjoidGVzdCIsImFjY291bnQiOiJBQTNOWFRIVFhPSENUUElCS0VESE5BWUFISjRDTzdFUkNPSkZZQ1hPWFZFT1BaVE1XNTVXWDMyWiIsInRvIjoidGVzdCIsInR5cGUiOiJzZXJ2aWNlIn1dLCJsaW1pdHMiOnsic3VicyI6LTEsImNvbm4iOi0xLCJsZWFmIjotMSwiaW1wb3J0cyI6LTEsImV4cG9ydHMiOi0xLCJkYXRhIjotMSwicGF5bG9hZCI6LTEsIndpbGRjYXJkcyI6dHJ1ZX19fQ.r5p_sGt_hmDfWWIJGrLodAM8VfXPeUzsbRtzrMTBGGkcLdi4jqAHXRu09CmFISEzX2VKeGuOonGuAMOFotvICg
}

View File

@ -1,9 +0,0 @@
# Set up memory preload config.
auth:
enabled: true
resolver:
type: memory
configMap:
name: nats-accounts
key: resolver.conf
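For reference, a minimal sketch of the ConfigMap that this values snippet references — the name and key come from the values above, and the body follows the resolver.conf layout shown in the accounts files elsewhere in this changeset (the JWT and NKEY values are placeholders):

apiVersion: v1
kind: ConfigMap
metadata:
  name: nats-accounts
data:
  resolver.conf: |
    # Operator JWT (placeholder)
    operator: <operator JWT>
    system_account: <system account public NKEY>
    resolver: MEMORY
    resolver_preload: {
      # Map of account public NKEY to its signed account JWT
      <account public NKEY>: <account JWT>
    }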

View File

@ -1,9 +0,0 @@
let accounts = ./accounts.conf as Text
in
''
port: 4222
${accounts}
''

View File

@ -1,21 +0,0 @@
// Operator "KO"
operator: eyJ0eXAiOiJqd3QiLCJhbGciOiJlZDI1NTE5In0.eyJqdGkiOiJKS0E2U0pKUUVOTFpYVDJEWTRWNE00UDZXUFRVUlhIQzNMU1pJWEZWRlFGV0I3U0tKVk9BIiwiaWF0IjoxNTgzODIyNjYwLCJpc3MiOiJPQkZCSEMzVTVXVVRFTVpKTzNYN0hZWTJCNjNQWUpQT0RYS0FWWUdHU0VNQTczTEtGTVg0TEYyQSIsIm5hbWUiOiJLTyIsInN1YiI6Ik9CRkJIQzNVNVdVVEVNWkpPM1g3SFlZMkI2M1BZSlBPRFhLQVZZR0dTRU1BNzNMS0ZNWDRMRjJBIiwidHlwZSI6Im9wZXJhdG9yIiwibmF0cyI6e319.60YToJe3Dz9OZES80jYXVgg7uCB1c3BsX6HglA8tsKKRe-Br3pMpn9yUPUujjB61MGqnA7Zmbx8qWnoj8CkuCw
system_account: ABL65FFQWUDHHTGMGRFVVSQDBAWHGEJ2CDRCMGBFV6SB4MLKFSUPN7GP
resolver: MEMORY
resolver_preload: {
// Account "B"
AAIJAGRSL2KCEPTRBP6DJCTAMSNOUXRILLZXIY6CTZ4GR27ISCZOP6QH: eyJ0eXAiOiJqd3QiLCJhbGciOiJlZDI1NTE5In0.eyJqdGkiOiJEVTdWV1BXQUtBSVdNNkhNUElDNE43TVRGTFEyV01JUFhFVU5aNVEzRE1YSTRKVkpLQU1BIiwiaWF0IjoxNTgzODIyNjYwLCJpc3MiOiJPQkZCSEMzVTVXVVRFTVpKTzNYN0hZWTJCNjNQWUpQT0RYS0FWWUdHU0VNQTczTEtGTVg0TEYyQSIsIm5hbWUiOiJCIiwic3ViIjoiQUFJSkFHUlNMMktDRVBUUkJQNkRKQ1RBTVNOT1VYUklMTFpYSVk2Q1RaNEdSMjdJU0NaT1A2UUgiLCJ0eXBlIjoiYWNjb3VudCIsIm5hdHMiOnsiaW1wb3J0cyI6W3sibmFtZSI6InRlc3QiLCJzdWJqZWN0IjoidGVzdCIsImFjY291bnQiOiJBQlhXNU9aV09LSzUzWDNWNUhSVkdPMlJXTlVUU1NQSU1HVDZORU9SMjNBQzRNTk1QTlFTUTZWTCIsInRvIjoidGVzdCIsInR5cGUiOiJzZXJ2aWNlIn1dLCJsaW1pdHMiOnsic3VicyI6LTEsImNvbm4iOi0xLCJsZWFmIjotMSwiaW1wb3J0cyI6LTEsImV4cG9ydHMiOi0xLCJkYXRhIjotMSwicGF5bG9hZCI6LTEsIndpbGRjYXJkcyI6dHJ1ZX19fQ.VLv3U7k8jJaIcGpDYXo0XQCYNVMNQd2PHVUOXGMvCU8ifiYpkaRJ4G0UXZHqlQl_0g3M_LEtJw0K-4HwgOeIAA
// Account "SYS"
ABL65FFQWUDHHTGMGRFVVSQDBAWHGEJ2CDRCMGBFV6SB4MLKFSUPN7GP: eyJ0eXAiOiJqd3QiLCJhbGciOiJlZDI1NTE5In0.eyJqdGkiOiJPSUpENkozTjdCVk0zSEY0M0NCTUhLMllUNlpXTlFCWkZBRzQ0VE5RSFA3SlVZT0hZR0dRIiwiaWF0IjoxNTgzODIyNjYwLCJpc3MiOiJPQkZCSEMzVTVXVVRFTVpKTzNYN0hZWTJCNjNQWUpQT0RYS0FWWUdHU0VNQTczTEtGTVg0TEYyQSIsIm5hbWUiOiJTWVMiLCJzdWIiOiJBQkw2NUZGUVdVREhIVEdNR1JGVlZTUURCQVdIR0VKMkNEUkNNR0JGVjZTQjRNTEtGU1VQTjdHUCIsInR5cGUiOiJhY2NvdW50IiwibmF0cyI6eyJsaW1pdHMiOnsic3VicyI6LTEsImNvbm4iOi0xLCJsZWFmIjotMSwiaW1wb3J0cyI6LTEsImV4cG9ydHMiOi0xLCJkYXRhIjotMSwicGF5bG9hZCI6LTEsIndpbGRjYXJkcyI6dHJ1ZX19fQ.Jei8psto5h35bFn4y1Unsk0Noh6MYJxkB8Hs-nnLuUBrkTppSwukEkM_ufNGA_lxsmPki3zBf8y6rsQ13Ec5AA
// Account "A"
ABXW5OZWOKK53X3V5HRVGO2RWNUTSSPIMGT6NEOR23AC4MNMPNQSQ6VL: eyJ0eXAiOiJqd3QiLCJhbGciOiJlZDI1NTE5In0.eyJqdGkiOiJSRFNRTE9GT0gzRUoyUUNLSkkyUEJXNkxLMllUVFJDUUdGV0tJRFJJRVRVMzZTT0RPT1FRIiwiaWF0IjoxNTgzODIyNjYwLCJpc3MiOiJPQkZCSEMzVTVXVVRFTVpKTzNYN0hZWTJCNjNQWUpQT0RYS0FWWUdHU0VNQTczTEtGTVg0TEYyQSIsIm5hbWUiOiJBIiwic3ViIjoiQUJYVzVPWldPS0s1M1gzVjVIUlZHTzJSV05VVFNTUElNR1Q2TkVPUjIzQUM0TU5NUE5RU1E2VkwiLCJ0eXBlIjoiYWNjb3VudCIsIm5hdHMiOnsiZXhwb3J0cyI6W3sibmFtZSI6InRlc3QiLCJzdWJqZWN0IjoidGVzdCIsInR5cGUiOiJzZXJ2aWNlIiwicmVzcG9uc2VfdHlwZSI6IlNpbmdsZXRvbiIsInNlcnZpY2VfbGF0ZW5jeSI6eyJzYW1wbGluZyI6MTAwLCJyZXN1bHRzIjoibGF0ZW5jeS5vbi50ZXN0In19XSwibGltaXRzIjp7InN1YnMiOi0xLCJjb25uIjotMSwibGVhZiI6LTEsImltcG9ydHMiOi0xLCJleHBvcnRzIjotMSwiZGF0YSI6LTEsInBheWxvYWQiOi0xLCJ3aWxkY2FyZHMiOnRydWV9fX0.lJfHHkbXeEf6DbHFju0zktCjWL0kgll17BdYJl6f2hcZxbUtiyf3H1mGfrzELgCuEO7p8X11UpRVy_eTQfnGAA
// Account "STAN"
ACLSVE2AZYTXOBIJXOV5XHAIIM7KLL777F7GAEWW5W5P4IAR2VZJSGID: eyJ0eXAiOiJqd3QiLCJhbGciOiJlZDI1NTE5In0.eyJqdGkiOiJJT1ZPSFBPV1hJRDI2U1JYVEJQTTVUQlVKWDJRU0FSSTJMQjJTM09aRFpMU0paS1BOVU9BIiwiaWF0IjoxNTgzODIyNjYwLCJpc3MiOiJPQkZCSEMzVTVXVVRFTVpKTzNYN0hZWTJCNjNQWUpQT0RYS0FWWUdHU0VNQTczTEtGTVg0TEYyQSIsIm5hbWUiOiJTVEFOIiwic3ViIjoiQUNMU1ZFMkFaWVRYT0JJSlhPVjVYSEFJSU03S0xMNzc3RjdHQUVXVzVXNVA0SUFSMlZaSlNHSUQiLCJ0eXBlIjoiYWNjb3VudCIsIm5hdHMiOnsibGltaXRzIjp7InN1YnMiOi0xLCJjb25uIjotMSwibGVhZiI6LTEsImltcG9ydHMiOi0xLCJleHBvcnRzIjotMSwiZGF0YSI6LTEsInBheWxvYWQiOi0xLCJ3aWxkY2FyZHMiOnRydWV9fX0.CE5_K9kAdAgxesJRiJYh3kK2f74_c7T3bNQhgfaXOMzI8X6VOWqn0_5gH9jOD0xzHXIYiUMwy7a4Ou63PizHCw
}

View File

@ -1,26 +0,0 @@
{{- if or .Values.nats.logging.debug .Values.nats.logging.trace }}
*WARNING*: Keep in mind that running the server with
debug and/or trace enabled significantly affects the
performance of the server!
{{- end }}
You can find more information about running NATS on Kubernetes
on the NATS documentation website:
https://docs.nats.io/nats-on-kubernetes/nats-kubernetes
{{- if .Values.natsbox.enabled }}
NATS Box has been deployed into your cluster, you can
now use the NATS tools within the container as follows:
kubectl exec -n {{ .Release.Namespace }} -it deployment/{{ template "nats.fullname" . }}-box -- /bin/sh -l
nats-box:~# nats-sub test &
nats-box:~# nats-pub test hi
nats-box:~# nc {{ template "nats.fullname" . }} 4222
{{- end }}
Thanks for using NATS!

View File

@ -1,98 +0,0 @@
{{/*
Expand the name of the chart.
*/}}
{{- define "nats.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- define "nats.fullname" -}}
{{- if .Values.fullnameOverride -}}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- if contains $name .Release.Name -}}
{{- .Release.Name | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- end -}}
{{- end -}}
{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "nats.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" }}
{{- end }}
{{/*
Common labels
*/}}
{{- define "nats.labels" -}}
helm.sh/chart: {{ include "nats.chart" . }}
{{- range $name, $value := .Values.commonLabels }}
{{ $name }}: {{ $value }}
{{- end }}
{{ include "nats.selectorLabels" . }}
{{- if .Chart.AppVersion }}
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
{{- end }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- end }}
{{/*
Selector labels
*/}}
{{- define "nats.selectorLabels" -}}
app.kubernetes.io/name: {{ include "nats.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end }}
{{/*
Return the NATS cluster advertise address.
*/}}
{{- define "nats.clusterAdvertise" -}}
{{- printf "$(POD_NAME).%s.$(POD_NAMESPACE).svc.%s." (include "nats.fullname" . ) $.Values.k8sClusterDomain }}
{{- end }}
{{/*
Return the NATS cluster routes.
*/}}
{{- define "nats.clusterRoutes" -}}
{{- $name := (include "nats.fullname" . ) -}}
{{- range $i, $e := until (.Values.cluster.replicas | int) -}}
{{- printf "nats://%s-%d.%s.%s.svc.%s.:6222," $name $i $name $.Release.Namespace $.Values.k8sClusterDomain -}}
{{- end -}}
{{- end }}
{{- define "nats.tlsConfig" -}}
tls {
{{- if .cert }}
cert_file: {{ .secretPath }}/{{ .secret.name }}/{{ .cert }}
{{- end }}
{{- if .key }}
key_file: {{ .secretPath }}/{{ .secret.name }}/{{ .key }}
{{- end }}
{{- if .ca }}
ca_file: {{ .secretPath }}/{{ .secret.name }}/{{ .ca }}
{{- end }}
{{- if .insecure }}
insecure: {{ .insecure }}
{{- end }}
{{- if .verify }}
verify: {{ .verify }}
{{- end }}
{{- if .verifyAndMap }}
verify_and_map: {{ .verifyAndMap }}
{{- end }}
{{- if .curvePreferences }}
curve_preferences: {{ .curvePreferences }}
{{- end }}
{{- if .timeout }}
timeout: {{ .timeout }}
{{- end }}
}
{{- end }}
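To show how this helper is driven, a minimal values sketch taken from the commented-out TLS defaults in the chart's values.yaml (shown further below); for client connections the config template sets secretPath to /etc/nats-certs/clients:

nats:
  tls:
    secret:
      name: nats-client-tls
    ca: "ca.crt"
    cert: "tls.crt"
    key: "tls.key"

With these values the helper renders cert_file, key_file and ca_file entries under /etc/nats-certs/clients/nats-client-tls/.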

View File

@ -1,15 +0,0 @@
{{- if .Values.auth.enabled }}
{{- if eq .Values.auth.resolver.type "memory" }}
---
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ template "nats.name" . }}-accounts
labels:
app: {{ template "nats.name" . }}
chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
data:
accounts.conf: |-
{{- .Files.Get "accounts.conf" | indent 6 }}
{{- end }}
{{- end }}

View File

@ -1,398 +0,0 @@
---
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ include "nats.fullname" . }}-config
namespace: {{ .Release.Namespace | quote }}
labels:
{{- include "nats.labels" . | nindent 4 }}
data:
nats.conf: |
# PID file shared with configuration reloader.
pid_file: "/var/run/nats/nats.pid"
###############
# #
# Monitoring #
# #
###############
http: 8222
server_name: $POD_NAME
{{- if .Values.nats.tls }}
#####################
# #
# TLS Configuration #
# #
#####################
{{- with .Values.nats.tls }}
{{- $nats_tls := merge (dict) . }}
{{- $_ := set $nats_tls "secretPath" "/etc/nats-certs/clients" }}
{{- include "nats.tlsConfig" $nats_tls | nindent 4}}
{{- end }}
{{- end }}
{{- if .Values.nats.jetstream.enabled }}
###################################
# #
# NATS JetStream #
# #
###################################
jetstream {
{{- if .Values.nats.jetstream.memStorage.enabled }}
max_mem: {{ .Values.nats.jetstream.memStorage.size }}
{{- end }}
{{- if .Values.nats.jetstream.fileStorage.enabled }}
store_dir: {{ .Values.nats.jetstream.fileStorage.storageDirectory }}
max_file:
{{- if .Values.nats.jetstream.fileStorage.existingClaim }}
{{- .Values.nats.jetstream.fileStorage.claimStorageSize }}
{{- else }}
{{- .Values.nats.jetstream.fileStorage.size }}
{{- end }}
{{- end }}
}
{{- end }}
{{- if .Values.mqtt.enabled }}
###################################
# #
# NATS MQTT #
# #
###################################
mqtt {
port: 1883
{{- with .Values.mqtt.tls }}
{{- $mqtt_tls := merge (dict) . }}
{{- $_ := set $mqtt_tls "secretPath" "/etc/nats-certs/mqtt" }}
{{- include "nats.tlsConfig" $mqtt_tls | nindent 6}}
{{- end }}
{{- if .Values.mqtt.noAuthUser }}
no_auth_user: {{ .Values.mqtt.noAuthUser | quote }}
{{- end }}
ack_wait: {{ .Values.mqtt.ackWait | quote }}
max_ack_pending: {{ .Values.mqtt.maxAckPending }}
}
{{- end }}
{{- if .Values.cluster.enabled }}
###################################
# #
# NATS Full Mesh Clustering Setup #
# #
###################################
cluster {
port: 6222
{{- if .Values.nats.jetstream.enabled }}
{{- if .Values.cluster.name }}
name: {{ .Values.cluster.name }}
{{- else }}
name: {{ template "nats.name" . }}
{{- end }}
{{- else }}
{{- with .Values.cluster.name }}
name: {{ . }}
{{- end }}
{{- end }}
{{- with .Values.cluster.tls }}
{{- $cluster_tls := merge (dict) . }}
{{- $_ := set $cluster_tls "secretPath" "/etc/nats-certs/cluster" }}
{{- include "nats.tlsConfig" $cluster_tls | nindent 6}}
{{- end }}
{{- if .Values.cluster.authorization }}
authorization {
{{- with .Values.cluster.authorization.user }}
user: {{ . }}
{{- end }}
{{- with .Values.cluster.authorization.password }}
password: {{ . }}
{{- end }}
{{- with .Values.cluster.authorization.timeout }}
timeout: {{ . }}
{{- end }}
}
{{- end }}
routes = [
{{ include "nats.clusterRoutes" . }}
]
cluster_advertise: $CLUSTER_ADVERTISE
{{- with .Values.cluster.noAdvertise }}
no_advertise: {{ . }}
{{- end }}
connect_retries: {{ .Values.nats.connectRetries }}
}
{{ end }}
{{- if and .Values.nats.advertise .Values.nats.externalAccess }}
include "advertise/client_advertise.conf"
{{- end }}
{{- if or .Values.leafnodes.enabled .Values.leafnodes.remotes }}
#################
# #
# NATS Leafnode #
# #
#################
leafnodes {
{{- if .Values.leafnodes.enabled }}
listen: "0.0.0.0:7422"
{{- end }}
{{ if and .Values.nats.advertise .Values.nats.externalAccess }}
include "advertise/gateway_advertise.conf"
{{ end }}
{{- with .Values.leafnodes.noAdvertise }}
no_advertise: {{ . }}
{{- end }}
{{- with .Values.leafnodes.tls }}
{{- $leafnode_tls := merge (dict) . }}
{{- $_ := set $leafnode_tls "secretPath" "/etc/nats-certs/leafnodes" }}
{{- include "nats.tlsConfig" $leafnode_tls | nindent 6}}
{{- end }}
remotes: [
{{- range .Values.leafnodes.remotes }}
{
{{- with .url }}
url: {{ . }}
{{- end }}
{{- with .credentials }}
credentials: "/etc/nats-creds/{{ .secret.name }}/{{ .secret.key }}"
{{- end }}
{{- with .tls }}
{{ $secretName := .secret.name }}
tls: {
{{- with .cert }}
cert_file: /etc/nats-certs/leafnodes/{{ $secretName }}/{{ . }}
{{- end }}
{{- with .key }}
key_file: /etc/nats-certs/leafnodes/{{ $secretName }}/{{ . }}
{{- end }}
{{- with .ca }}
ca_file: /etc/nats-certs/leafnodes/{{ $secretName }}/{{ . }}
{{- end }}
}
{{- end }}
}
{{- end }}
]
}
{{ end }}
{{- if .Values.gateway.enabled }}
#################
# #
# NATS Gateways #
# #
#################
gateway {
name: {{ .Values.gateway.name }}
port: 7522
{{ if and .Values.nats.advertise .Values.nats.externalAccess }}
include "advertise/gateway_advertise.conf"
{{ end }}
{{- with .Values.gateway.tls }}
{{- $gateway_tls := merge (dict) . }}
{{- $_ := set $gateway_tls "secretPath" "/etc/nats-certs/gateway" }}
{{- include "nats.tlsConfig" $gateway_tls | nindent 6}}
{{- end }}
# Gateways array here
gateways: [
{{- range .Values.gateway.gateways }}
{
{{- with .name }}
name: {{ . }}
{{- end }}
{{- with .url }}
url: {{ . | quote }}
{{- end }}
{{- with .urls }}
urls: [{{ join "," . }}]
{{- end }}
},
{{- end }}
]
}
{{ end }}
{{- with .Values.nats.logging.debug }}
debug: {{ . }}
{{- end }}
{{- with .Values.nats.logging.trace }}
trace: {{ . }}
{{- end }}
{{- with .Values.nats.logging.logtime }}
logtime: {{ . }}
{{- end }}
{{- with .Values.nats.logging.connectErrorReports }}
connect_error_reports: {{ . }}
{{- end }}
{{- with .Values.nats.logging.reconnectErrorReports }}
reconnect_error_reports: {{ . }}
{{- end }}
{{- with .Values.nats.limits.maxConnections }}
max_connections: {{ . }}
{{- end }}
{{- with .Values.nats.limits.maxSubscriptions }}
max_subscriptions: {{ . }}
{{- end }}
{{- with .Values.nats.limits.maxPending }}
max_pending: {{ . }}
{{- end }}
{{- with .Values.nats.limits.maxControlLine }}
max_control_line: {{ . }}
{{- end }}
{{- with .Values.nats.limits.maxPayload }}
max_payload: {{ . }}
{{- end }}
{{- with .Values.nats.pingInterval }}
ping_interval: {{ . }}
{{- end }}
{{- with .Values.nats.maxPings }}
ping_max: {{ . }}
{{- end }}
{{- with .Values.nats.writeDeadline }}
write_deadline: {{ . | quote }}
{{- end }}
{{- with .Values.nats.writeDeadline }}
lame_duck_duration: {{ . | quote }}
{{- end }}
{{- if .Values.websocket.enabled }}
##################
# #
# Websocket #
# #
##################
websocket {
port: {{ .Values.websocket.port }}
{{- with .Values.websocket.tls }}
{{ $secretName := .secret.name }}
tls {
{{- with .cert }}
cert_file: /etc/nats-certs/ws/{{ $secretName }}/{{ . }}
{{- end }}
{{- with .key }}
key_file: /etc/nats-certs/ws/{{ $secretName }}/{{ . }}
{{- end }}
{{- with .ca }}
ca_file: /etc/nats-certs/ws/{{ $secretName }}/{{ . }}
{{- end }}
}
{{- else }}
no_tls: {{ .Values.websocket.noTLS }}
{{- end }}
}
{{- end }}
{{- if .Values.auth.enabled }}
##################
# #
# Authorization #
# #
##################
{{- if .Values.auth.resolver }}
{{- if eq .Values.auth.resolver.type "memory" }}
resolver: MEMORY
include "accounts/{{ .Values.auth.resolver.configMap.key }}"
{{- end }}
{{- if eq .Values.auth.resolver.type "full" }}
{{- if .Values.auth.resolver.configMap }}
include "accounts/{{ .Values.auth.resolver.configMap.key }}"
{{- else }}
{{- with .Values.auth.resolver }}
operator: {{ .operator }}
system_account: {{ .systemAccount }}
{{- end }}
resolver: {
type: full
{{- with .Values.auth.resolver }}
dir: {{ .store.dir | quote }}
allow_delete: {{ .allowDelete }}
interval: {{ .interval | quote }}
{{- end }}
}
{{- end }}
{{- end }}
{{- if .Values.auth.resolver.resolverPreload }}
resolver_preload: {{ toRawJson .Values.auth.resolver.resolverPreload }}
{{- end }}
{{- if eq .Values.auth.resolver.type "URL" }}
{{- with .Values.auth.resolver.url }}
resolver: URL({{ . }})
{{- end }}
operator: /etc/nats-config/operator/{{ .Values.auth.operatorjwt.configMap.key }}
{{- end }}
{{- end }}
{{- with .Values.auth.systemAccount }}
system_account: {{ . }}
{{- end }}
{{- with .Values.auth.basic }}
{{- with .noAuthUser }}
no_auth_user: {{ . }}
{{- end }}
{{- with .users }}
authorization {
users: [
{{- range . }}
{{- toRawJson . | nindent 4 }},
{{- end }}
]
}
{{- end }}
{{- if .token }}
authorization {
token: "{{ .token }}"
}
{{- end }}
{{- with .accounts }}
accounts: {{- toRawJson . }}
{{- end }}
{{- end }}
{{- end }}
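As a companion to the resolver branches above, a hedged values sketch for the 'full' (embedded account server) resolver, assembled from the defaults in the chart's values.yaml; the operator JWT and system account NKEY are placeholders:

auth:
  enabled: true
  resolver:
    type: full
    operator: <operator JWT>
    systemAccount: <system account public NKEY>
    store:
      dir: "/accounts/jwt"
      size: 1Gi
    allowDelete: false
    interval: 2m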

View File

@ -1,95 +0,0 @@
{{- if .Values.natsbox.enabled }}
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ include "nats.fullname" . }}-box
namespace: {{ .Release.Namespace | quote }}
labels:
app: {{ include "nats.fullname" . }}-box
chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
spec:
replicas: 1
selector:
matchLabels:
app: {{ include "nats.fullname" . }}-box
template:
metadata:
labels:
app: {{ include "nats.fullname" . }}-box
{{- if .Values.natsbox.podAnnotations }}
annotations:
{{- range $key, $value := .Values.natsbox.podAnnotations }}
{{ $key }}: {{ $value | quote }}
{{- end }}
{{- end }}
spec:
{{- with .Values.natsbox.affinity }}
affinity:
{{- tpl (toYaml .) $ | nindent 8 }}
{{- end }}
volumes:
{{- if .Values.natsbox.credentials }}
- name: nats-sys-creds
secret:
secretName: {{ .Values.natsbox.credentials.secret.name }}
{{- end }}
{{- with .Values.nats.tls }}
{{ $secretName := .secret.name }}
- name: {{ $secretName }}-clients-volume
secret:
secretName: {{ $secretName }}
{{- end }}
{{- with .Values.imagePullSecrets }}
imagePullSecrets:
{{- toYaml . | nindent 8 }}
{{- end }}
containers:
- name: nats-box
image: {{ .Values.natsbox.image }}
imagePullPolicy: {{ .Values.natsbox.pullPolicy }}
resources:
{{- toYaml .Values.natsbox.resources | nindent 10 }}
env:
- name: NATS_URL
value: {{ template "nats.fullname" . }}
{{- if .Values.natsbox.credentials }}
- name: USER_CREDS
value: /etc/nats-config/creds/{{ .Values.natsbox.credentials.secret.key }}
- name: USER2_CREDS
value: /etc/nats-config/creds/{{ .Values.natsbox.credentials.secret.key }}
{{- end }}
{{- with .Values.nats.tls }}
{{ $secretName := .secret.name }}
lifecycle:
postStart:
exec:
command:
- /bin/sh
- -c
- cp /etc/nats-certs/clients/{{ $secretName }}/* /usr/local/share/ca-certificates && update-ca-certificates
{{- end }}
command:
- "tail"
- "-f"
- "/dev/null"
volumeMounts:
{{- if .Values.natsbox.credentials }}
- name: nats-sys-creds
mountPath: /etc/nats-config/creds
{{- end }}
{{- with .Values.nats.tls }}
#######################
# #
# TLS Volumes Mounts #
# #
#######################
{{ $secretName := .secret.name }}
- name: {{ $secretName }}-clients-volume
mountPath: /etc/nats-certs/clients/{{ $secretName }}
{{- end }}
{{- with .Values.securityContext }}
securityContext:
{{ toYaml . | indent 8 }}
{{- end }}
{{- end }}

View File

@ -1,22 +0,0 @@
{{- if .Values.podDisruptionBudget }}
---
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
name: {{ include "nats.fullname" . }}
namespace: {{ .Release.Namespace | quote }}
labels:
{{- include "nats.labels" . | nindent 4 }}
spec:
{{- if .Values.podDisruptionBudget.minAvailable }}
minAvailable: {{ .Values.podDisruptionBudget.minAvailable }}
{{- end }}
{{- if .Values.podDisruptionBudget.maxUnavailable }}
maxUnavailable: {{ .Values.podDisruptionBudget.maxUnavailable }}
{{- end }}
selector:
matchLabels:
{{- include "nats.selectorLabels" . | nindent 6 }}
{{- end }}
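A minimal values sketch that turns this template on, using the keys from the commented defaults in values.yaml further down:

podDisruptionBudget:
  minAvailable: 1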

View File

@ -1,31 +0,0 @@
{{ if and .Values.nats.externalAccess .Values.nats.advertise }}
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: {{ .Values.nats.serviceAccount }}
namespace: {{ .Release.Namespace }}
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: {{ .Values.nats.serviceAccount }}
rules:
- apiGroups: [""]
resources:
- nodes
verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: {{ .Values.nats.serviceAccount }}-binding
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: {{ .Values.nats.serviceAccount }}
subjects:
- kind: ServiceAccount
name: {{ .Values.nats.serviceAccount }}
namespace: {{ .Release.Namespace }}
{{ end }}

View File

@ -1,67 +0,0 @@
---
apiVersion: v1
kind: Service
metadata:
name: {{ include "nats.fullname" . }}
namespace: {{ .Release.Namespace | quote }}
labels:
{{- include "nats.labels" . | nindent 4 }}
{{- if .Values.serviceAnnotations}}
annotations:
{{- range $key, $value := .Values.serviceAnnotations }}
{{ $key }}: {{ $value | quote }}
{{- end }}
{{- end }}
spec:
selector:
{{- include "nats.selectorLabels" . | nindent 4 }}
clusterIP: None
{{- if .Values.topologyKeys }}
topologyKeys:
{{- .Values.topologyKeys | toYaml | nindent 4 }}
{{- end }}
ports:
{{- if .Values.websocket.enabled }}
- name: websocket
port: {{ .Values.websocket.port }}
{{- if .Values.appProtocol.enabled }}
appProtocol: tcp
{{- end }}
{{- end }}
- name: client
port: 4222
{{- if .Values.appProtocol.enabled }}
appProtocol: tcp
{{- end }}
- name: cluster
port: 6222
{{- if .Values.appProtocol.enabled }}
appProtocol: tcp
{{- end }}
- name: monitor
port: 8222
{{- if .Values.appProtocol.enabled }}
appProtocol: http
{{- end }}
- name: metrics
port: 7777
{{- if .Values.appProtocol.enabled }}
appProtocol: http
{{- end }}
- name: leafnodes
port: 7422
{{- if .Values.appProtocol.enabled }}
appProtocol: tcp
{{- end }}
- name: gateways
port: 7522
{{- if .Values.appProtocol.enabled }}
appProtocol: tcp
{{- end }}
{{- if .Values.mqtt.enabled }}
- name: mqtt
port: 1883
{{- if .Values.appProtocol.enabled }}
appProtocol: tcp
{{- end }}
{{- end }}

View File

@ -1,40 +0,0 @@
{{ if and .Values.exporter.enabled .Values.exporter.serviceMonitor.enabled }}
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
name: {{ template "nats.fullname" . }}
{{- if .Values.exporter.serviceMonitor.namespace }}
namespace: {{ .Values.exporter.serviceMonitor.namespace }}
{{- else }}
namespace: {{ .Release.Namespace | quote }}
{{- end }}
{{- if .Values.exporter.serviceMonitor.labels }}
labels:
{{- range $key, $value := .Values.exporter.serviceMonitor.labels }}
{{ $key }}: {{ $value | quote }}
{{- end }}
{{- end }}
{{- if .Values.exporter.serviceMonitor.annotations }}
annotations:
{{- range $key, $value := .Values.exporter.serviceMonitor.annotations }}
{{ $key }}: {{ $value | quote }}
{{- end }}
{{- end }}
spec:
endpoints:
- port: metrics
{{- if .Values.exporter.serviceMonitor.path }}
path: {{ .Values.exporter.serviceMonitor.path }}
{{- end }}
{{- if .Values.exporter.serviceMonitor.interval }}
interval: {{ .Values.exporter.serviceMonitor.interval }}
{{- end }}
{{- if .Values.exporter.serviceMonitor.scrapeTimeout }}
scrapeTimeout: {{ .Values.exporter.serviceMonitor.scrapeTimeout }}
{{- end }}
namespaceSelector:
any: true
selector:
matchLabels:
{{- include "nats.selectorLabels" . | nindent 6 }}
{{- end }}
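For reference, a values sketch that enables this ServiceMonitor; the release label is an assumption and has to match the serviceMonitorSelector of your Prometheus instance:

exporter:
  enabled: true
  serviceMonitor:
    enabled: true
    path: /metrics
    labels:
      release: metrics  # assumption: selector label of your Prometheus instance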

View File

@ -1,477 +0,0 @@
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: {{ include "nats.fullname" . }}
namespace: {{ .Release.Namespace | quote }}
labels:
{{- include "nats.labels" . | nindent 4 }}
{{- if .Values.statefulSetAnnotations}}
annotations:
{{- range $key, $value := .Values.statefulSetAnnotations }}
{{ $key }}: {{ $value | quote }}
{{- end }}
{{- end }}
spec:
selector:
matchLabels:
{{- include "nats.selectorLabels" . | nindent 6 }}
{{- if .Values.cluster.enabled }}
replicas: {{ .Values.cluster.replicas }}
{{- else }}
replicas: 1
{{- end }}
serviceName: {{ include "nats.fullname" . }}
template:
metadata:
{{- if or .Values.podAnnotations .Values.exporter.enabled }}
annotations:
{{- if .Values.exporter.enabled }}
prometheus.io/path: /metrics
prometheus.io/port: "7777"
prometheus.io/scrape: "true"
{{- end }}
{{- range $key, $value := .Values.podAnnotations }}
{{ $key }}: {{ $value | quote }}
{{- end }}
{{- end }}
labels:
{{- include "nats.selectorLabels" . | nindent 8 }}
spec:
{{- with .Values.imagePullSecrets }}
imagePullSecrets:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.securityContext }}
securityContext:
{{ toYaml . | indent 8 }}
{{- end }}
{{- with .Values.affinity }}
affinity:
{{- tpl (toYaml .) $ | nindent 8 }}
{{- end }}
{{- if .Values.topologySpreadConstraints }}
topologySpreadConstraints:
{{- range .Values.topologySpreadConstraints }}
{{- if and .maxSkew .topologyKey }}
- maxSkew: {{ .maxSkew }}
topologyKey: {{ .topologyKey }}
{{- if .whenUnsatisfiable }}
whenUnsatisfiable: {{ .whenUnsatisfiable }}
{{- end }}
labelSelector:
matchLabels:
{{- include "nats.selectorLabels" $ | nindent 12 }}
{{- end}}
{{- end }}
{{- end }}
{{- if .Values.priorityClassName }}
priorityClassName: {{ .Values.priorityClassName | quote }}
{{- end }}
# Common volumes for the containers.
volumes:
- name: config-volume
configMap:
name: {{ include "nats.fullname" . }}-config
# Local volume shared with the reloader.
- name: pid
emptyDir: {}
{{- if and .Values.auth.enabled .Values.auth.resolver }}
{{- if .Values.auth.resolver.configMap }}
- name: resolver-volume
configMap:
name: {{ .Values.auth.resolver.configMap.name }}
{{- end }}
{{- if eq .Values.auth.resolver.type "URL" }}
- name: operator-jwt-volume
configMap:
name: {{ .Values.auth.operatorjwt.configMap.name }}
{{- end }}
{{- end }}
{{- if and .Values.nats.externalAccess .Values.nats.advertise }}
# Local volume shared with the advertise config initializer.
- name: advertiseconfig
emptyDir: {}
{{- end }}
{{- if and .Values.nats.jetstream.fileStorage.enabled .Values.nats.jetstream.fileStorage.existingClaim }}
# Persistent volume for jetstream running with file storage option
- name: {{ include "nats.fullname" . }}-js-pvc
persistentVolumeClaim:
claimName: {{ .Values.nats.jetstream.fileStorage.existingClaim | quote }}
{{- end }}
#################
# #
# TLS Volumes #
# #
#################
{{- with .Values.nats.tls }}
{{ $secretName := .secret.name }}
- name: {{ $secretName }}-clients-volume
secret:
secretName: {{ $secretName }}
{{- end }}
{{- with .Values.mqtt.tls }}
{{ $secretName := .secret.name }}
- name: {{ $secretName }}-mqtt-volume
secret:
secretName: {{ $secretName }}
{{- end }}
{{- with .Values.cluster.tls }}
{{ $secretName := .secret.name }}
- name: {{ $secretName }}-cluster-volume
secret:
secretName: {{ $secretName }}
{{- end }}
{{- with .Values.leafnodes.tls }}
{{ $secretName := .secret.name }}
- name: {{ $secretName }}-leafnodes-volume
secret:
secretName: {{ $secretName }}
{{- end }}
{{- with .Values.gateway.tls }}
{{ $secretName := .secret.name }}
- name: {{ $secretName }}-gateways-volume
secret:
secretName: {{ $secretName }}
{{- end }}
{{- with .Values.websocket.tls }}
{{ $secretName := .secret.name }}
- name: {{ $secretName }}-ws-volume
secret:
secretName: {{ $secretName }}
{{- end }}
{{- if .Values.leafnodes.enabled }}
#
# Leafnode credential volumes
#
{{- range .Values.leafnodes.remotes }}
{{- with .credentials }}
- name: {{ .secret.name }}-volume
secret:
secretName: {{ .secret.name }}
{{- end }}
{{- end }}
{{- end }}
{{ if and .Values.nats.externalAccess .Values.nats.advertise }}
# The service account is only used when we need to determine
# the server's current external public IP
# in order to be able to advertise correctly.
serviceAccountName: {{ .Values.nats.serviceAccount }}
{{ end }}
# Required to be able to send a HUP signal and apply a config
# reload to the server without restarting the pod.
shareProcessNamespace: true
{{- if and .Values.nats.externalAccess .Values.nats.advertise }}
# Init container required to be able to look up
# the external IP on which this node is running.
initContainers:
- name: bootconfig
command:
- nats-pod-bootconfig
- -f
- /etc/nats-config/advertise/client_advertise.conf
- -gf
- /etc/nats-config/advertise/gateway_advertise.conf
env:
- name: KUBERNETES_NODE_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: spec.nodeName
image: {{ .Values.bootconfig.image }}
imagePullPolicy: {{ .Values.bootconfig.pullPolicy }}
resources:
{{- toYaml .Values.bootconfig.resources | nindent 10 }}
volumeMounts:
- mountPath: /etc/nats-config/advertise
name: advertiseconfig
subPath: advertise
{{- end }}
#################
# #
# NATS Server #
# #
#################
terminationGracePeriodSeconds: {{ .Values.nats.terminationGracePeriodSeconds }}
containers:
- name: nats
image: {{ .Values.nats.image }}
imagePullPolicy: {{ .Values.nats.pullPolicy }}
resources:
{{- toYaml .Values.nats.resources | nindent 10 }}
ports:
- containerPort: 4222
name: client
{{- if .Values.nats.externalAccess }}
hostPort: 4222
{{- end }}
- containerPort: 7422
name: leafnodes
{{- if .Values.nats.externalAccess }}
hostPort: 7422
{{- end }}
- containerPort: 7522
name: gateways
{{- if .Values.nats.externalAccess }}
hostPort: 7522
{{- end }}
- containerPort: 6222
name: cluster
- containerPort: 8222
name: monitor
- containerPort: 7777
name: metrics
{{- if .Values.mqtt.enabled }}
- containerPort: 1883
name: mqtt
{{- if .Values.nats.externalAccess }}
hostPort: 1883
{{- end }}
{{- end }}
{{- if .Values.websocket.enabled }}
- containerPort: {{ .Values.websocket.port }}
name: websocket
{{- if .Values.nats.externalAccess }}
hostPort: {{ .Values.websocket.port }}
{{- end }}
{{- end }}
command:
- "nats-server"
- "--config"
- "/etc/nats-config/nats.conf"
# Required to be able to define an environment variable
# that refers to other environment variables. This env var
# is later used as part of the configuration file.
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: CLUSTER_ADVERTISE
value: {{ include "nats.clusterAdvertise" . }}
volumeMounts:
- name: config-volume
mountPath: /etc/nats-config
- name: pid
mountPath: /var/run/nats
{{- if and .Values.nats.externalAccess .Values.nats.advertise }}
- mountPath: /etc/nats-config/advertise
name: advertiseconfig
subPath: advertise
{{- end }}
{{- if and .Values.auth.enabled .Values.auth.resolver }}
{{- if eq .Values.auth.resolver.type "memory" }}
- name: resolver-volume
mountPath: /etc/nats-config/accounts
{{- end }}
{{- if eq .Values.auth.resolver.type "full" }}
{{- if .Values.auth.resolver.configMap }}
- name: resolver-volume
mountPath: /etc/nats-config/accounts
{{- end }}
{{- if and .Values.auth.resolver .Values.auth.resolver.store }}
- name: nats-jwt-pvc
mountPath: {{ .Values.auth.resolver.store.dir }}
{{- end }}
{{- end }}
{{- if eq .Values.auth.resolver.type "URL" }}
- name: operator-jwt-volume
mountPath: /etc/nats-config/operator
{{- end }}
{{- end }}
{{- if .Values.nats.jetstream.fileStorage.enabled }}
- name: {{ include "nats.fullname" . }}-js-pvc
mountPath: {{ .Values.nats.jetstream.fileStorage.storageDirectory }}
{{- end }}
{{- with .Values.nats.tls }}
#######################
# #
# TLS Volumes Mounts #
# #
#######################
{{ $secretName := .secret.name }}
- name: {{ $secretName }}-clients-volume
mountPath: /etc/nats-certs/clients/{{ $secretName }}
{{- end }}
{{- with .Values.mqtt.tls }}
{{ $secretName := .secret.name }}
- name: {{ $secretName }}-mqtt-volume
mountPath: /etc/nats-certs/mqtt/{{ $secretName }}
{{- end }}
{{- with .Values.cluster.tls }}
{{ $secretName := .secret.name }}
- name: {{ $secretName }}-cluster-volume
mountPath: /etc/nats-certs/cluster/{{ $secretName }}
{{- end }}
{{- with .Values.leafnodes.tls }}
{{ $secretName := .secret.name }}
- name: {{ $secretName }}-leafnodes-volume
mountPath: /etc/nats-certs/leafnodes/{{ $secretName }}
{{- end }}
{{- with .Values.gateway.tls }}
{{ $secretName := .secret.name }}
- name: {{ $secretName }}-gateways-volume
mountPath: /etc/nats-certs/gateways/{{ $secretName }}
{{- end }}
{{- with .Values.websocket.tls }}
{{ $secretName := .secret.name }}
- name: {{ $secretName }}-ws-volume
mountPath: /etc/nats-certs/ws/{{ $secretName }}
{{- end }}
{{- if .Values.leafnodes.enabled }}
#
# Leafnode credential volumes
#
{{- range .Values.leafnodes.remotes }}
{{- with .credentials }}
- name: {{ .secret.name }}-volume
mountPath: /etc/nats-creds/{{ .secret.name }}
{{- end }}
{{- end }}
{{- end }}
# Liveness/Readiness probes against the monitoring endpoint.
#
livenessProbe:
httpGet:
path: /
port: 8222
initialDelaySeconds: 10
timeoutSeconds: 5
readinessProbe:
httpGet:
path: /
port: 8222
initialDelaySeconds: 10
timeoutSeconds: 5
# Gracefully stop NATS Server on pod deletion or image upgrade.
#
lifecycle:
preStop:
exec:
# Using the Alpine-based NATS image, we add an extra sleep equal to
# terminationGracePeriodSeconds to allow the NATS Server
# to gracefully terminate the client connections.
#
command:
- "/bin/sh"
- "-c"
- "nats-server -sl=ldm=/var/run/nats/nats.pid && /bin/sleep {{ .Values.nats.terminationGracePeriodSeconds }}"
#################################
# #
# NATS Configuration Reloader #
# #
#################################
{{ if .Values.reloader.enabled }}
- name: reloader
image: {{ .Values.reloader.image }}
imagePullPolicy: {{ .Values.reloader.pullPolicy }}
resources:
{{- toYaml .Values.reloader.resources | nindent 10 }}
command:
- "nats-server-config-reloader"
- "-pid"
- "/var/run/nats/nats.pid"
- "-config"
- "/etc/nats-config/nats.conf"
volumeMounts:
- name: config-volume
mountPath: /etc/nats-config
- name: pid
mountPath: /var/run/nats
{{ end }}
##############################
# #
# NATS Prometheus Exporter #
# #
##############################
{{ if .Values.exporter.enabled }}
- name: metrics
image: {{ .Values.exporter.image }}
imagePullPolicy: {{ .Values.exporter.pullPolicy }}
resources:
{{- toYaml .Values.exporter.resources | nindent 10 }}
args:
- -connz
- -routez
- -subz
- -varz
- -prefix=nats
- -use_internal_server_id
{{- if .Values.nats.jetstream.enabled }}
- -jsz=all
{{- end }}
- http://localhost:8222/
ports:
- containerPort: 7777
name: metrics
{{ end }}
volumeClaimTemplates:
{{- if eq .Values.auth.resolver.type "full" }}
{{- if and .Values.auth.resolver .Values.auth.resolver.store }}
#####################################
# #
# Account Server Embedded JWT #
# #
#####################################
- metadata:
name: nats-jwt-pvc
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: {{ .Values.auth.resolver.store.size }}
{{- end }}
{{- end }}
{{- if and .Values.nats.jetstream.fileStorage.enabled (not .Values.nats.jetstream.fileStorage.existingClaim) }}
#####################################
# #
# Jetstream New Persistent Volume #
# #
#####################################
- metadata:
name: {{ include "nats.fullname" . }}-js-pvc
{{- if .Values.nats.jetstream.fileStorage.annotations }}
annotations:
{{- range $key, $value := .Values.nats.jetstream.fileStorage.annotations }}
{{ $key }}: {{ $value | quote }}
{{- end }}
{{- end }}
spec:
accessModes:
{{- range .Values.nats.jetstream.fileStorage.accessModes }}
- {{ . | quote }}
{{- end }}
resources:
requests:
storage: {{ .Values.nats.jetstream.fileStorage.size }}
storageClassName: {{ .Values.nats.jetstream.fileStorage.storageClassName | quote }}
{{- end }}
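To illustrate the leafnode credential volumes wired up above, a hedged values sketch; the remote URL comes from the chart's commented defaults, while the secret name and key are hypothetical and must exist in the release namespace:

leafnodes:
  enabled: true
  remotes:
    - url: "tls://connect.ngs.global:7422"
      credentials:
        secret:
          name: ngs-creds  # hypothetical Secret holding the leafnode credentials
          key: ngs.creds   # hypothetical key within that Secret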

View File

@ -1,405 +0,0 @@
###############################
# #
# NATS Server Configuration #
# #
###############################
nats:
image: nats:2.3.2-alpine
pullPolicy: IfNotPresent
# Toggle whether to enable external access.
# This binds a host port for clients, gateways and leafnodes.
externalAccess: false
# Toggle to disable client advertisements (connect_urls);
# when running behind a load balancer (which is not recommended)
# it might be required to disable advertisements.
advertise: true
# If both external access and advertise are enabled,
# a service account is required to gather
# the public IP from a node.
serviceAccount: "nats-server"
# The number of connect attempts against discovered routes.
connectRetries: 30
# How many seconds should pass before sending a PING
# to a client that has no activity.
pingInterval:
resources: {}
# Server settings.
limits:
maxConnections:
maxSubscriptions:
maxControlLine:
maxPayload:
writeDeadline:
maxPending:
maxPings:
lameDuckDuration:
terminationGracePeriodSeconds: 60
logging:
debug:
trace:
logtime:
connectErrorReports:
reconnectErrorReports:
jetstream:
enabled: false
#############################
# #
# Jetstream Memory Storage #
# #
#############################
memStorage:
enabled: true
size: 1Gi
############################
# #
# Jetstream File Storage #
# #
############################
fileStorage:
enabled: false
storageDirectory: /data
# Set for use with existing PVC
# existingClaim: jetstream-pvc
# claimStorageSize: 1Gi
# Use the block below to create a new persistent volume;
# only used if existingClaim is not specified
size: 1Gi
storageClassName: default
accessModes:
- ReadWriteOnce
annotations:
# key: "value"
#######################
# #
# TLS Configuration #
# #
#######################
#
# # You can find more on how to set up and troubleshoot TLS connections at:
#
# # https://docs.nats.io/nats-server/configuration/securing_nats/tls
#
# tls:
# secret:
# name: nats-client-tls
# ca: "ca.crt"
# cert: "tls.crt"
# key: "tls.key"
mqtt:
enabled: false
ackWait: 1m
maxAckPending: 100
#######################
# #
# TLS Configuration #
# #
#######################
#
# # You can find more on how to set up and troubleshoot TLS connections at:
#
# # https://docs.nats.io/nats-server/configuration/securing_nats/tls
#
#
# tls:
# secret:
# name: nats-mqtt-tls
# ca: "ca.crt"
# cert: "tls.crt"
# key: "tls.key"
nameOverride: ""
# An array of imagePullSecrets; these must be created manually in the same namespace
# ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
imagePullSecrets: []
# Toggle whether to set up a Pod Security Context
# ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/
securityContext: {}
# securityContext:
# fsGroup: 1000
# runAsUser: 1000
# runAsNonRoot: true
# Affinity for pod assignment
# ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
affinity: {}
## Pod priority class name
## ref: https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/#priorityclass
priorityClassName: null
# Service topology
# ref: https://kubernetes.io/docs/concepts/services-networking/service-topology/
topologyKeys: []
# Pod Topology Spread Constraints
# ref https://kubernetes.io/docs/concepts/workloads/pods/pod-topology-spread-constraints/
topologySpreadConstraints: []
# - maxSkew: 1
# topologyKey: zone
# whenUnsatisfiable: DoNotSchedule
# Annotations to add to the NATS pods
# ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/
podAnnotations: {}
# key: "value"
## Define a Pod Disruption Budget for the stateful set
## ref: https://kubernetes.io/docs/concepts/workloads/pods/disruptions/
podDisruptionBudget: null
# minAvailable: 1
# maxUnavailable: 1
# Annotations to add to the NATS StatefulSet
statefulSetAnnotations: {}
# Annotations to add to the NATS Service
serviceAnnotations: {}
cluster:
enabled: false
replicas: 3
noAdvertise: false
# authorization:
# user: foo
# password: pwd
# timeout: 0.5
# Leafnode connections to extend a cluster:
#
# https://docs.nats.io/nats-server/configuration/leafnodes
#
leafnodes:
enabled: false
noAdvertise: false
# remotes:
# - url: "tls://connect.ngs.global:7422"
#######################
# #
# TLS Configuration #
# #
#######################
#
# # You can find more on how to set up and troubleshoot TLS connections at:
#
# # https://docs.nats.io/nats-server/configuration/securing_nats/tls
#
#
# tls:
# secret:
# name: nats-client-tls
# ca: "ca.crt"
# cert: "tls.crt"
# key: "tls.key"
# Gateway connections to create a super cluster
#
# https://docs.nats.io/nats-server/configuration/gateways
#
gateway:
enabled: false
name: 'default'
#############################
# #
# List of remote gateways #
# #
#############################
# gateways:
# - name: other
# url: nats://my-gateway-url:7522
#######################
# #
# TLS Configuration #
# #
#######################
#
# # You can find more on how to set up and troubleshoot TLS connections at:
#
# # https://docs.nats.io/nats-server/configuration/securing_nats/tls
#
# tls:
# secret:
# name: nats-client-tls
# ca: "ca.crt"
# cert: "tls.crt"
# key: "tls.key"
# If both external access and advertisements are enabled,
# an initializer container is used to gather
# the public IPs.
bootconfig:
image: natsio/nats-boot-config:0.5.3
pullPolicy: IfNotPresent
# NATS Box
#
# https://github.com/nats-io/nats-box
#
natsbox:
enabled: true
image: natsio/nats-box:0.6.0
pullPolicy: IfNotPresent
# An array of imagePullSecrets; these must be created manually in the same namespace
# ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
imagePullSecrets: []
# - name: dockerhub
# credentials:
# secret:
# name: nats-sys-creds
# key: sys.creds
# Annotations to add to the box pods
# ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/
podAnnotations: {}
# key: "value"
# Affinity for nats box pod assignment
# ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
affinity: {}
# The NATS config reloader image to use.
reloader:
enabled: true
image: natsio/nats-server-config-reloader:0.6.1
pullPolicy: IfNotPresent
# Prometheus NATS Exporter configuration.
exporter:
enabled: true
image: natsio/prometheus-nats-exporter:0.8.0
pullPolicy: IfNotPresent
resources: {}
# Prometheus operator ServiceMonitor support. Exporter has to be enabled
serviceMonitor:
enabled: false
## Specify the namespace where Prometheus Operator is running
##
# namespace: monitoring
labels: {}
annotations: {}
path: /metrics
# interval:
# scrapeTimeout:
# Authentication setup
auth:
enabled: false
# basic:
# noAuthUser:
# # List of users that can connect with basic auth,
# # that belong to the global account.
# users:
# # List of accounts with users that can connect
# # using basic auth.
# accounts:
# Reference to the Operator JWT.
# operatorjwt:
# configMap:
# name: operator-jwt
# key: KO.jwt
# Token authentication
# token:
# Public key of the System Account
# systemAccount:
resolver:
# Disables the resolver by default
type: none
##########################################
# #
# Embedded NATS Account Server Resolver #
# #
##########################################
# type: full
# If the resolver type is 'full', deleting a JWT (when allowed) renames it rather than removing it.
allowDelete: false
# Interval at which a nats-server with a NATS-based account resolver compares
# its state with one random NATS-based account resolver in the cluster and,
# if needed, exchanges JWTs to converge on the same set of JWTs.
interval: 2m
# Operator JWT
operator:
# System Account Public NKEY
systemAccount:
# resolverPreload:
# <ACCOUNT>: <JWT>
# Directory in which the account JWTs will be stored.
store:
dir: "/accounts/jwt"
# Size of the account JWT storage.
size: 1Gi
##############################
# #
# Memory resolver settings #
# #
##############################
# type: memory
#
# Use a configmap reference which will be mounted
# into the container.
#
# configMap:
# name: nats-accounts
# key: resolver.conf
##########################
# #
# URL resolver settings #
# #
##########################
# type: URL
# url: "http://nats-account-server:9090/jwt/v1/accounts/"
websocket:
enabled: false
port: 443
appProtocol:
enabled: false
# Cluster Domain configured on the kubelets
# https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/
k8sClusterDomain: cluster.local
# Add labels to all the deployed resources
commonLabels: {}
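Putting the defaults above together, a minimal override sketch for a 3-node JetStream cluster with file storage (sizes are illustrative; defaults as shown above):

nats:
  jetstream:
    enabled: true
    fileStorage:
      enabled: true
      size: 1Gi
      storageClassName: default
cluster:
  enabled: true
  replicas: 3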

View File

@ -1,4 +1,4 @@
{{- if .Values.nats.exporter.serviceMonitor.enabled }}
{{- if .Values.nats.promExporter.podMonitor.enabled }}
apiVersion: v1
kind: ConfigMap
metadata:

View File

@ -1,13 +1,13 @@
#!/bin/bash
##!/bin/bash
set -ex
helm dep update
. ../../scripts/lib-update.sh
## NATS
NATS_VERSION=0.8.4
rm -rf charts/nats && curl -L -s -o - https://github.com/nats-io/k8s/releases/download/v$NATS_VERSION/nats-$NATS_VERSION.tgz | tar xfz - -C charts
#login_ecr_public
update_helm
# Fetch dashboards
../kubezero-metrics/sync_grafana_dashboards.py dashboards-nats.yaml templates/nats/grafana-dashboards.yaml
../kubezero-metrics/sync_grafana_dashboards.py dashboards-rabbitmq.yaml templates/rabbitmq/grafana-dashboards.yaml
update_docs

View File

@ -2,17 +2,16 @@
nats:
enabled: false
nats:
advertise: false
config:
jetstream:
enabled: true
natsbox:
natsBox:
enabled: false
exporter:
serviceMonitor:
promExporter:
enabled: false
podMonitor:
enabled: false
mqtt:
@ -71,18 +70,18 @@ rabbitmq:
failIfNoPeerCert: false
existingSecret: rabbitmq-server-certificate
existingSecretFullChain: true
clustering:
enabled: false
forceBoot: false
resources:
requests:
memory: 512Mi
cpu: 100m
replicaCount: 1
persistence:
size: 2Gi
@ -98,10 +97,10 @@ rabbitmq:
whenUnsatisfiable: DoNotSchedule
labelSelector:
matchLabels: {{- include "common.labels.matchLabels" . | nindent 6 }}
pdb:
create: false
metrics:
enabled: false
serviceMonitor:

View File

@ -28,6 +28,8 @@ spec:
containers:
- name: kube-multus
image: {{ .Values.multus.image.repository }}:{{ .Values.multus.image.tag }}
# Always use cached images
imagePullPolicy: Never
command: ["/entrypoint.sh"]
args:
- "--multus-conf-file=/tmp/multus-conf/00-multus.conf"
@ -45,6 +47,7 @@ spec:
privileged: true
capabilities:
add: ["SYS_ADMIN"]
terminationMessagePolicy: FallbackToLogsOnError
volumeMounts:
- name: run
mountPath: /run

View File

@ -27,9 +27,10 @@ multus:
cilium:
enabled: false
# breaks preloaded images otherwise
# Always use cached images
image:
useDigest: false
pullPolicy: Never
resources:
requests:

View File

@ -1,13 +0,0 @@
{{- if or .Values.redis.metrics.enabled ( index .Values "redis-cluster" "metrics" "enabled") }}
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ printf "%s-%s" (include "kubezero-lib.fullname" $) "grafana-dashboards" | trunc 63 | trimSuffix "-" }}
namespace: {{ .Release.Namespace }}
labels:
grafana_dashboard: "1"
{{- include "kubezero-lib.labels" . | nindent 4 }}
binaryData:
redis.json.gz:
H4sIAAAAAAAC/+1daVPcOBr+Pr9C5Zndgq0OsQ0dmqmaDySBmdQAyQYyW7UD1aW2RbcWX2PJQIdifvvq8CHZ6jMNNMEfOKxX1vGejw5Ldz8AYPX7OEoySqyfwZ/sGYA78ZtRIhgilmq9P+1/+vzx+ODst4Mvp1anIAdwgAJO/5TGIaIjlJGK6CPipTihOI54lopAx4ko1IcUkjhLPVTRkiAb4uiDz+mJoVBJP8mbpVQrMtyz3xcd2aUU/ZXhFBk6VdQ/TOEljGBVOPaNyQUTfq0TrlFK8t692epuuXkjOubqEhgxZjUrS0bGqtRkpaLpdZhYiicyM2qy0VSls2Vv2Uv0jeBoGCBCIW1WeWqgNXtZihNGUczyMiqXp6zeCjChpXSrRjHKIMMB/cBLcjpVqsIcc6dZHhTBQcDpNM2Qkj7CviEVe3H0Lg7ilBeYDgdww+4A13HYr263A5xNteii6/tVX8A/wX6AUqo1oZIlGQ1imPpWTrsXfy9+yMVQN7DPyMcEvC/eApdxCqpOAkk+uE3ilKIUOFu3HYApuInTKwJuMB2BEQpCwETCGPA65blfjSBA+Qtb5+l5xH8+XIJxnIEQEy5fIDKCEIVxOgYZxQH+KvrWAUmAIEEgjH18OQbnVghvZbZzC1zDIEMAR/IfsjWGoVQfixVHayKwhhGiwiU4Tm+7K5O4gZzFcUBxwgi2SBRqF2VBIJ9Yq2HOHKfb23Z2uju2280LCHB0JXyDVDChwAZf4UFvhM5wiOKMKoVLGpf8W+hdDdM4i3jdlzAgSKf/wTtoJhFFeUGuP+4O05vujvyxt/Y0DZI5tneZZrl7HbBj8yw9Q54uozi7LsvSFaXsbhY6dFG551nWwPTLwyEUXLHLRIN8ZHqaCjPQ+8l0MIRUOIOq3CHMhqg0Y5HEdKNglGPbSndCHBUENZmM4pt6Zdx0Rszhj+LAP+KBiUzLcQzTKyREwLtRWFjVxhT7n2Kit3LEHneVwngTXOX5ttbKMX9uFC20dK96jJieMjOoaxcmJ+imzmVda3PWJQkzxDPpNRxTuq5oVYcUpyQNksaAoluq6BPISbzoMvG+M72wFEbDGYW5VWENzWTK8J4p56eYsYboGmFxFgnCcSwcMrOkKEIeRb6l5TnjNdc4GidlAKmkkcSEXuJbHaHkiYdxRE/xV1FP1/6HQk9R8x2RNvEVwZNjmEyRxSWzQf4eb7TONSp7Y5283q8R4vKFKQwlCVN2pjo1o7vEQaBHr20WuZiPZb96e9xxOD3Nt1zyepo2xUtWy5HFuNwH7W1rBeRma7Q44VVYMVlYh4swHSI6hW8sRImqmd5ssJ9+zIBEnzKfvSGCUz9L+APDuH2CmLr45A5HLMxFHvrl73Prp+Lh3Lr/8ycOhaU9Xmxu6ryufJkojqAUI6JnUWzZMlMOoUcFo1yNHKAhivzDsgb9ZeaZU+w105nGSaRc0wpCEQ+KTs+2p2hF6QpJjd+YCvdufRF8a+J2E64Tdq0q/66q/II6Xflj3mLrlzn1vnAjM1VfZCyGC16WpiiiZijbRvoVRfoojtALCfbuzGDPxw1cqT4yn5Q2xxASDDhuiwZaNNCigZWiARn7S4XoewFmvn9y7F862D9mSHeXiuchOsyVyQn19NMRvmxYRgkA3kmWLYkAei0CmI4ADLF7GQjQBBJPjQASxGqK6EOAgNoE3BNggJ0aBthZEQZwWgzQYoAWA6wUAzA9Af8C+TSAnHzuZ4SBgcGYoslQAIDXQHuHzyrMeGXzO8MPPbuz110FiDiWSwNfCByil4IkYIAheVfEfcW3DGDaCFJioecIRUM6Eq5NS0em7LOC+YJxGwsbdrSEX1Poc/Cn4QEeI/X+zB01e7Wo2ZsnavooOpWGU2+zaIc7IzIKA9KbBq+HBndVyLJJYYZvSsWRIXXSNEVMoclHypWvgtDoPnehxBDs0Q32haK40yDAPEFRCX76GulRWV6jVTmyg8MmzE14dVxrMv5uV09vSpAx3EcpEm76MogV85e+sgBLtY6xwOUhk7EwF+JdNWrhbi9B/pGMdDpt/pEkpMVUsheHIYx80k/S2EOEhxIh38kzyk54sYbzyPuLRY0dW3elgnPVOv0c8UQRoRJKahEjRJ/RMNfJ2gszBqq5WMDBLfIypuUsgBPkKeGmXClWhyDkMyIMXuRrxE17hkzozTEHiVNaG5wIW+4XMc3LwiyAFF8jq4lpqr0n6gaPW3iLa5Y4yLwrqZ5qn3mzc5PWJucrrF3LbR43lc7HYONjeDttfKAs6Y44J3T9yzcE6Y3ghHj4FhKk78goHWwju/SwjeQmWpw89Fi7djYMZNwUOgMOQ5MuivQjdF02WtsSsmbwQ5lLcJ8TJnHerACUOC0oeVJQojXt+8UkWAElV2gsmtAfYUpm4ZHuMngk3/1W05ZVAhXe9NWMcOfBKp0lmMs3vKHnyl7Z+AUZ/PZ5gsHfmCox/HcsugyYcwCnTwYFceTja+xnMHixULCmXUsBLLtFgd+MAuv4wfrx7aHz1rabivliJ6mcWZs5d5cAhLsPDwh5/DgIEzqeQPsvSuPV4Mj68ttawEieACABX3k32+mt5TZKLLA68tArHbwNa4MFF10HWj34mxfiMWteEMetB147404ETFgmauftVg/WhAY/JBJq4drzm7R7DnDMcVeAxxRmrdMM3Voiq3bVcCWrhhGi8gtrCR4eZA5pQbRwdwdEi8D9/aKQan7opHMgzujjsOCbZ8sYc2RjJ3NnzTHVCaL8w17w4fXHxcBUHTO1WOqJsVS7Bvpy4NTu085uOTtLwKnuEmiKC3KfnJlPWpiFtWrZF4FaKR6O6KnxJIclUZhx32oLwpYDYRq/VoLBSBYWu3/9gVjKmwg5NsFgDDb8wdPCDn/AIAf4vjZnyfmdDxSFcjXu/dt2Me5BMYn2AW4LSVpI8nwgSWOGZylM4my3UzwtulgvdPEKNLP3WUk4xdFwyntPB0eimIKigY++7PbcmGVm1PJbrNYDuh3kvQLXBJzE9FX5/DsTSIvi2t31LYxTikfX2BMqZf3Y27MPbVfVnxCF4nwBv4/5SKgvMjc/Hzq3nF13y9ndsrecn/ecrn1udf4XD1h6dYAnP6zCXEmKvADiULZh+/BNb3u73d41/wSY6ywBNnst1nwZWNPsCIU3kBuFctvTfFXhhdzpH+M/+uokw1cbyvqcgC/MGwmgNcfanJwtK1LXccd73qO12UdWZ3ju/h+L4avjq2z4M8az4uvQg1o32p1mLYZtpyLXGhz2nngqcil06Oy8ZHjotFORjzMVSePkaqPbAdh0VsV0eAHEERVPu+zphX5jq1WF1bTu9D0YBHmflt+5tgaHVYB3vCPLHFSxqj1a7Uxai0JeMgpZbNJpRVNMvWVAxJsWL0z47u+RoIL73X3vN+Vg5FWfi+xMCtPrEY/lxUglP4DXOPXYFJPbqNtG3WcSdctb1Jjpcavivd62pYJbxBuhEP5RXr3mOjKZjoP8FrL0SuZkbrISvLQaqyyaojDhM1rRcJ772arYdGc6otN4NZt6Umf93r+
apIvpZdUIceQFmY/2zefVTrnHkMsxY/ZveG367YhaTKo8Ekv+K0PpeMKlfJqYHC11iG5rgy2LXOHkSxqcjiPPdMpT81bAuj1r2hUUB2/XvIQirvsF79Pz0SWOcHFFneBzX7qX8nKeDuB8FJFQOxp4aSGeFMUtIsPI9NIs+S3Un0qs7reJ1RBimG0K2ZF/F02zdGqjDzzNnDlXGtlJhZARdCYLMqK7NdCku5LxHMOUDwzEdEAS+6vRrk+xD8QhwAsoF6u8H9XeWUq3FunhqvXNafVtXmkIYfC0QvBCQKbVu6X1sA77ZmhgUffDaeACff5ONbN2PatA4CUYKo/yj29eOQW2L07rZ3hQey3BDN2n1cs5w8q76VQAbnWVoYpjKw/b6oN6aHtX+d9RH7ZtlaKMQFzlfye/kPai6AMfCCq6NLMWteA3asFqLe6O+qAsCez6anuLtmjs+xqL8bE1SOMbNkLN4as+3pv3ntwN0624UputTN5zfPv+6OTz71/+81WmVtcY7/xw/38XppizYHsAAA==
{{- end }}

View File

@ -18,7 +18,7 @@
"subdir": "contrib/mixin"
}
},
"version": "e7f572914d79f0705b3dc8ca28d9a14b0f854d49",
"version": "9f59ef8ead097f836271f125d0e3774ddae4e71d",
"sum": "IXI3LQIT9NmTPJAk8WLUJd5+qZfcGpeNCyWIK7oEpws="
},
{
@ -58,7 +58,7 @@
"subdir": "gen/grafonnet-latest"
}
},
"version": "119d65363dff84a1976bba609f2ac3a8f450e760",
"version": "5a66b0f6a0f4f7caec754dd39a0e263b56a0f90a",
"sum": "eyuJ0jOXeA4MrobbNgU4/v5a7ASDHslHZ0eS6hDdWoI="
},
{
@ -68,7 +68,7 @@
"subdir": "gen/grafonnet-v10.0.0"
}
},
"version": "119d65363dff84a1976bba609f2ac3a8f450e760",
"version": "5a66b0f6a0f4f7caec754dd39a0e263b56a0f90a",
"sum": "xdcrJPJlpkq4+5LpGwN4tPAuheNNLXZjE6tDcyvFjr0="
},
{
@ -78,7 +78,7 @@
"subdir": "gen/grafonnet-v11.0.0"
}
},
"version": "119d65363dff84a1976bba609f2ac3a8f450e760",
"version": "5a66b0f6a0f4f7caec754dd39a0e263b56a0f90a",
"sum": "Fuo+qTZZzF+sHDBWX/8fkPsUmwW6qhH8hRVz45HznfI="
},
{
@ -88,8 +88,8 @@
"subdir": "grafana-builder"
}
},
"version": "ea6f2601969aa12c02dbca761ce4316aff036af2",
"sum": "udZaafkbKYMGodLqsFhEe+Oy/St2p0edrK7hiMPEey0="
"version": "3f0a5b0eeb2f5dc381a420b35d27198bd9b72e8c",
"sum": "yxqWcq/N3E/a/XreeU6EuE6X7kYPnG0AspAQFKOjASo="
},
{
"source": {
@ -118,8 +118,8 @@
"subdir": ""
}
},
"version": "3dfa72d1d1ab31a686b1f52ec28bbf77c972bd23",
"sum": "7ufhpvzoDqAYLrfAsGkTAIRmu2yWQkmHukTE//jOsJU="
"version": "dd5c59ab4491159593ed370a344a553b57146a7d",
"sum": "2tFZyRtLw9nasUQdFn5LGGqJplJyAeJxd59u6mHU+mw="
},
{
"source": {
@ -128,8 +128,8 @@
"subdir": "jsonnet/kube-state-metrics"
}
},
"version": "7104d579e93d672754c018a924d6c3f7ec23874e",
"sum": "pvInhJNQVDOcC3NGWRMKRIP954mAvLXCQpTlafIg7fA="
"version": "e96dfc0a39d8e2ae759a954a98d8bc9b29bf1a3e",
"sum": "h6H5AsU7JsCAWttnPgevTNituobj2eIr2ebxdkaABQo="
},
{
"source": {
@ -138,7 +138,7 @@
"subdir": "jsonnet/kube-state-metrics-mixin"
}
},
"version": "7104d579e93d672754c018a924d6c3f7ec23874e",
"version": "e96dfc0a39d8e2ae759a954a98d8bc9b29bf1a3e",
"sum": "qclI7LwucTjBef3PkGBkKxF0mfZPbHnn4rlNWKGtR4c="
},
{
@ -168,7 +168,7 @@
"subdir": "jsonnet/mixin"
}
},
"version": "609424db53853b992277b7a9a0e5cf59f4cc24f3",
"version": "105b88afada91ecd4dab14b6d091b0933c749972",
"sum": "gi+knjdxs2T715iIQIntrimbHRgHnpM8IFBJDD1gYfs=",
"name": "prometheus-operator-mixin"
},
@ -179,8 +179,8 @@
"subdir": "jsonnet/prometheus-operator"
}
},
"version": "609424db53853b992277b7a9a0e5cf59f4cc24f3",
"sum": "z2/5LjQpWC7snhT+n/mtQqoy5986uI95sTqcKQziwGU="
"version": "105b88afada91ecd4dab14b6d091b0933c749972",
"sum": "lTyttpFADJ40Zd7FuwgXcXswU+7grlQBeXms7gyabYc="
},
{
"source": {
@ -189,7 +189,7 @@
"subdir": "doc/alertmanager-mixin"
}
},
"version": "eb8369ec510d76f63901379a8437c4b55885d6c5",
"version": "cad5fa580108431e6ed209f2a23a373aa50c098f",
"sum": "IpF46ZXsm+0wJJAPtAre8+yxTNZA57mBqGpBP/r7/kw=",
"name": "alertmanager"
},
@ -210,7 +210,7 @@
"subdir": "documentation/prometheus-mixin"
}
},
"version": "e9dec5fc537b1709f3a0e4c959043fb159b5d413",
"version": "d4f098ae80fb276153efc757e373c813163da0e8",
"sum": "dYLcLzGH4yF3qB7OGC/7z4nqeTNjv42L7Q3BENU8XJI=",
"name": "prometheus"
},
@ -232,7 +232,7 @@
"subdir": "mixin"
}
},
"version": "35c0dbec856f97683a846e9c53f83156a3a44ff3",
"version": "639bf8f216494ad9c375ebaac45f5d15715065ba",
"sum": "HhSSbGGCNHCMy1ee5jElYDm0yS9Vesa7QB2/SHKdjsY=",
"name": "thanos-mixin"
}

View File

@ -155,6 +155,10 @@ aws-ebs-csi-driver:
loggingFormat: json
tolerateAllTaints: false
priorityClassName: system-node-critical
# We have /var on an additional volume: root + ENI + /var = 3
reservedVolumeAttachments: 3
tolerations:
- key: kubezero-workergroup
effect: NoSchedule
@ -240,7 +244,7 @@ aws-efs-csi-driver:
cpu: 20m
memory: 96Mi
limits:
memory: 128Mi
memory: 256Mi
affinity:
nodeAffinity:

View File

@ -29,8 +29,43 @@ Kubernetes: `>= 1.26.0`
| Key | Type | Default | Description |
|-----|------|---------|-------------|
| data-prepper.config."log4j2-rolling.properties" | string | `"status = error\ndest = err\nname = PropertiesConfig\n\nappender.console.type = Console\nappender.console.name = STDOUT\nappender.console.layout.type = PatternLayout\nappender.console.layout.pattern = %d{ISO8601} [%t] %-5p %40C - %m%n\n\nrootLogger.level = warn\nrootLogger.appenderRef.stdout.ref = STDOUT\n\nlogger.pipeline.name = org.opensearch.dataprepper.pipeline\nlogger.pipeline.level = info\n\nlogger.parser.name = org.opensearch.dataprepper.parser\nlogger.parser.level = info\n\nlogger.plugins.name = org.opensearch.dataprepper.plugins\nlogger.plugins.level = info\n"` | |
| data-prepper.enabled | bool | `false` | |
| data-prepper.pipelineConfig.config.otel-service-map-pipeline.buffer.bounded_blocking | string | `nil` | |
| data-prepper.pipelineConfig.config.otel-service-map-pipeline.delay | int | `3000` | |
| data-prepper.pipelineConfig.config.otel-service-map-pipeline.processor[0].service_map.window_duration | int | `180` | |
| data-prepper.pipelineConfig.config.otel-service-map-pipeline.sink[0].opensearch.bulk_size | int | `4` | |
| data-prepper.pipelineConfig.config.otel-service-map-pipeline.sink[0].opensearch.hosts[0] | string | `"https://telemetry:9200"` | |
| data-prepper.pipelineConfig.config.otel-service-map-pipeline.sink[0].opensearch.index_type | string | `"trace-analytics-service-map"` | |
| data-prepper.pipelineConfig.config.otel-service-map-pipeline.sink[0].opensearch.insecure | bool | `true` | |
| data-prepper.pipelineConfig.config.otel-service-map-pipeline.sink[0].opensearch.password | string | `"admin"` | |
| data-prepper.pipelineConfig.config.otel-service-map-pipeline.sink[0].opensearch.username | string | `"admin"` | |
| data-prepper.pipelineConfig.config.otel-service-map-pipeline.source.pipeline.name | string | `"otel-trace-pipeline"` | |
| data-prepper.pipelineConfig.config.otel-service-map-pipeline.workers | int | `1` | |
| data-prepper.pipelineConfig.config.otel-trace-pipeline.buffer.bounded_blocking | string | `nil` | |
| data-prepper.pipelineConfig.config.otel-trace-pipeline.delay | string | `"100"` | |
| data-prepper.pipelineConfig.config.otel-trace-pipeline.sink[0].pipeline.name | string | `"raw-traces-pipeline"` | |
| data-prepper.pipelineConfig.config.otel-trace-pipeline.sink[1].pipeline.name | string | `"otel-service-map-pipeline"` | |
| data-prepper.pipelineConfig.config.otel-trace-pipeline.source.otel_trace_source.ssl | bool | `false` | |
| data-prepper.pipelineConfig.config.otel-trace-pipeline.workers | int | `1` | |
| data-prepper.pipelineConfig.config.raw-traces-pipeline.buffer.bounded_blocking | string | `nil` | |
| data-prepper.pipelineConfig.config.raw-traces-pipeline.delay | int | `3000` | |
| data-prepper.pipelineConfig.config.raw-traces-pipeline.processor[0].otel_traces | string | `nil` | |
| data-prepper.pipelineConfig.config.raw-traces-pipeline.processor[1].otel_trace_group.hosts[0] | string | `"https://telemetry:9200"` | |
| data-prepper.pipelineConfig.config.raw-traces-pipeline.processor[1].otel_trace_group.insecure | bool | `true` | |
| data-prepper.pipelineConfig.config.raw-traces-pipeline.processor[1].otel_trace_group.password | string | `"admin"` | |
| data-prepper.pipelineConfig.config.raw-traces-pipeline.processor[1].otel_trace_group.username | string | `"admin"` | |
| data-prepper.pipelineConfig.config.raw-traces-pipeline.sink[0].opensearch.hosts[0] | string | `"https://telemetry:9200"` | |
| data-prepper.pipelineConfig.config.raw-traces-pipeline.sink[0].opensearch.index_type | string | `"trace-analytics-raw"` | |
| data-prepper.pipelineConfig.config.raw-traces-pipeline.sink[0].opensearch.insecure | bool | `true` | |
| data-prepper.pipelineConfig.config.raw-traces-pipeline.sink[0].opensearch.password | string | `"admin"` | |
| data-prepper.pipelineConfig.config.raw-traces-pipeline.sink[0].opensearch.username | string | `"admin"` | |
| data-prepper.pipelineConfig.config.raw-traces-pipeline.source.pipeline.name | string | `"otel-trace-pipeline"` | |
| data-prepper.pipelineConfig.config.raw-traces-pipeline.workers | int | `1` | |
| data-prepper.pipelineConfig.config.simple-sample-pipeline | string | `nil` | |
| data-prepper.securityContext.capabilities.drop[0] | string | `"ALL"` | |
| fluent-bit.config.customParsers | string | `"[PARSER]\n Name cri-log\n Format regex\n Regex ^(?<time>.+) (?<stream>stdout|stderr) (?<logtag>F|P) (?<log>.*)$\n Time_Key time\n Time_Format %Y-%m-%dT%H:%M:%S.%L%z\n"` | |
| fluent-bit.config.filters | string | `"[FILTER]\n Name parser\n Match cri.*\n Parser cri-log\n Key_Name log\n\n[FILTER]\n Name kubernetes\n Match cri.*\n Merge_Log On\n Merge_Log_Key kube\n Kube_Tag_Prefix cri.var.log.containers.\n Keep_Log Off\n K8S-Logging.Parser Off\n K8S-Logging.Exclude Off\n Kube_Meta_Cache_TTL 3600s\n Buffer_Size 0\n #Use_Kubelet true\n\n{{- if index .Values \"config\" \"extraRecords\" }}\n\n[FILTER]\n Name record_modifier\n Match cri.*\n {{- range $k,$v := index .Values \"config\" \"extraRecords\" }}\n Record {{ $k }} {{ $v }}\n {{- end }}\n{{- end }}\n\n[FILTER]\n Name rewrite_tag\n Match cri.*\n Emitter_Name kube_tag_rewriter\n Rule $kubernetes['pod_id'] .* kube.$kubernetes['namespace_name'].$kubernetes['container_name'] false\n\n[FILTER]\n Name lua\n Match kube.*\n script /fluent-bit/scripts/kubezero.lua\n call nest_k8s_ns\n"` | |
| fluent-bit.config.filters | string | `"[FILTER]\n Name parser\n Match cri.*\n Parser cri-log\n Key_Name log\n\n[FILTER]\n Name kubernetes\n Match cri.*\n Merge_Log On\n Merge_Log_Key kube\n Kube_Tag_Prefix cri.var.log.containers.\n Keep_Log Off\n Annotations Off\n K8S-Logging.Parser Off\n K8S-Logging.Exclude Off\n Kube_Meta_Cache_TTL 3600s\n Buffer_Size 0\n #Use_Kubelet true\n\n{{- if index .Values \"config\" \"extraRecords\" }}\n\n[FILTER]\n Name record_modifier\n Match cri.*\n {{- range $k,$v := index .Values \"config\" \"extraRecords\" }}\n Record {{ $k }} {{ $v }}\n {{- end }}\n{{- end }}\n\n[FILTER]\n Name rewrite_tag\n Match cri.*\n Emitter_Name kube_tag_rewriter\n Rule $kubernetes['pod_id'] .* kube.$kubernetes['namespace_name'].$kubernetes['container_name'] false\n\n[FILTER]\n Name lua\n Match kube.*\n script /fluent-bit/scripts/kubezero.lua\n call nest_k8s_ns\n"` | |
| fluent-bit.config.flushInterval | int | `5` | |
| fluent-bit.config.input.memBufLimit | string | `"16MB"` | |
| fluent-bit.config.input.refreshInterval | int | `5` | |
@ -120,6 +155,8 @@ Kubernetes: `>= 1.26.0`
| jaeger.provisionDataStore.elasticsearch | bool | `false` | |
| jaeger.query.agentSidecar.enabled | bool | `false` | |
| jaeger.query.serviceMonitor.enabled | bool | `false` | |
| jaeger.storage.elasticsearch.cmdlineParams."es.num-replicas" | int | `1` | |
| jaeger.storage.elasticsearch.cmdlineParams."es.num-shards" | int | `2` | |
| jaeger.storage.elasticsearch.cmdlineParams."es.tls.enabled" | string | `""` | |
| jaeger.storage.elasticsearch.cmdlineParams."es.tls.skip-host-verify" | string | `""` | |
| jaeger.storage.elasticsearch.host | string | `"telemetry"` | |
@ -133,7 +170,9 @@ Kubernetes: `>= 1.26.0`
| opensearch.dashboard.istio.url | string | `"telemetry-dashboard.example.com"` | |
| opensearch.nodeSets | list | `[]` | |
| opensearch.prometheus | bool | `false` | |
| opensearch.version | string | `"2.15.0"` | |
| opensearch.version | string | `"2.16.0"` | |
| opentelemetry-collector.config.exporters.otlp/data-prepper.endpoint | string | `"telemetry-data-prepper:21890"` | |
| opentelemetry-collector.config.exporters.otlp/data-prepper.tls.insecure | bool | `true` | |
| opentelemetry-collector.config.exporters.otlp/jaeger.endpoint | string | `"telemetry-jaeger-collector:4317"` | |
| opentelemetry-collector.config.exporters.otlp/jaeger.tls.insecure | bool | `true` | |
| opentelemetry-collector.config.extensions.health_check.endpoint | string | `"${env:MY_POD_IP}:13133"` | |
@ -149,6 +188,7 @@ Kubernetes: `>= 1.26.0`
| opentelemetry-collector.config.service.pipelines.logs | string | `nil` | |
| opentelemetry-collector.config.service.pipelines.metrics | string | `nil` | |
| opentelemetry-collector.config.service.pipelines.traces.exporters[0] | string | `"otlp/jaeger"` | |
| opentelemetry-collector.config.service.pipelines.traces.exporters[1] | string | `"otlp/data-prepper"` | |
| opentelemetry-collector.config.service.pipelines.traces.processors[0] | string | `"memory_limiter"` | |
| opentelemetry-collector.config.service.pipelines.traces.processors[1] | string | `"batch"` | |
| opentelemetry-collector.config.service.pipelines.traces.receivers[0] | string | `"otlp"` | |
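The flattened table above obscures how the three Data Prepper pipelines chain together. Reassembled from the defaults listed there (credentials and the insecure-TLS flags omitted), the trace flow is roughly:

```yaml
# Sketch of the default trace topology: one OTLP entry pipeline fans out
# into a raw-trace pipeline and a service-map pipeline, both writing to
# the in-cluster OpenSearch at https://telemetry:9200.
otel-trace-pipeline:
  source:
    otel_trace_source:
      ssl: false
  sink:
    - pipeline:
        name: "raw-traces-pipeline"
    - pipeline:
        name: "otel-service-map-pipeline"

raw-traces-pipeline:
  source:
    pipeline:
      name: "otel-trace-pipeline"
  processor:
    - otel_traces:
    - otel_trace_group:
        hosts: ["https://telemetry:9200"]
  sink:
    - opensearch:
        hosts: ["https://telemetry:9200"]
        index_type: trace-analytics-raw

otel-service-map-pipeline:
  source:
    pipeline:
      name: "otel-trace-pipeline"
  processor:
    - service_map:
        window_duration: 180
  sink:
    - opensearch:
        hosts: ["https://telemetry:9200"]
        index_type: trace-analytics-service-map
```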

View File

@ -35,5 +35,5 @@ spec:
indexPatterns:
- "logstash-*"
- "jaeger-*"
- "otel-v1-apm-span-*"
- "otel-v1-apm-span-*"
{{- end }}

View File

@ -62,9 +62,6 @@ data-prepper:
name: "otel-trace-pipeline"
processor:
- service_map:
# The window duration is the maximum length of time Data Prepper stores the most recent trace data to evaluate service-map relationships.
# The default is 3 minutes; this means we can detect relationships between services from spans reported in the last 3 minutes.
# Set a higher value if your applications have higher latency.
window_duration: 180
buffer:
bounded_blocking:
@ -231,7 +228,7 @@ jaeger:
url: jaeger.example.com
opensearch:
version: 2.15.0
version: 2.16.0
prometheus: false
# custom cluster settings
@ -577,6 +574,7 @@ fluent-bit:
Merge_Log_Key kube
Kube_Tag_Prefix cri.var.log.containers.
Keep_Log Off
Annotations Off
K8S-Logging.Parser Off
K8S-Logging.Exclude Off
Kube_Meta_Cache_TTL 3600s

View File

@ -2,7 +2,7 @@ apiVersion: v2
name: kubezero
description: KubeZero - Root App of Apps chart
type: application
version: 1.29.7
version: 1.29.7-1
home: https://kubezero.com
icon: https://cdn.zero-downtime.net/assets/kubezero/logo-small-64.png
keywords:
@ -15,4 +15,4 @@ dependencies:
- name: kubezero-lib
version: ">= 0.1.6"
repository: https://cdn.zero-downtime.net/charts
kubeVersion: ">= 1.26.0"
kubeVersion: ">= 1.26.0-0"

View File

@ -181,6 +181,7 @@ aws-eks-asg-rolling-update-handler:
- name: AWS_WEB_IDENTITY_TOKEN_FILE
value: "/var/run/secrets/sts.amazonaws.com/serviceaccount/token"
- name: AWS_STS_REGIONAL_ENDPOINTS
value: "regional"
{{- end }}
{{- end }}

View File

@ -13,6 +13,9 @@ argo-cd:
repoServer:
metrics:
enabled: {{ .Values.metrics.enabled }}
{{- with index .Values "argo" "argo-cd" "repoServer" }}
{{- toYaml . | nindent 4 }}
{{- end }}
server:
metrics:
enabled: {{ .Values.metrics.enabled }}
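The added `with`/`toYaml` block renders anything under `argo.argo-cd.repoServer` verbatim into the upstream chart's repoServer section. A hedged usage sketch (the resource figures are placeholders, not KubeZero defaults):

```yaml
# KubeZero values (sketch): custom repoServer options passed through
argo:
  argo-cd:
    repoServer:
      resources:
        requests:
          cpu: 100m
          memory: 256Mi
        limits:
          memory: 1Gi
```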

View File

@ -9,11 +9,32 @@ cert-manager:
type: Recreate
{{- end }}
prometheus:
servicemonitor:
enabled: {{ $.Values.metrics.enabled }}
{{ with index .Values "cert-manager" "IamArn" }}
{{- if eq .Values.global.platform "aws" }}
# map everything to the control plane
nodeSelector:
node-role.kubernetes.io/control-plane: ""
tolerations:
- key: node-role.kubernetes.io/control-plane
effect: NoSchedule
webhook:
tolerations:
- key: node-role.kubernetes.io/control-plane
effect: NoSchedule
nodeSelector:
node-role.kubernetes.io/control-plane: ""
cainjector:
tolerations:
- key: node-role.kubernetes.io/control-plane
effect: NoSchedule
nodeSelector:
node-role.kubernetes.io/control-plane: ""
extraEnv:
- name: AWS_REGION
value: {{ .Values.global.aws.region }}
{{ with index .Values "cert-manager" "IamArn" }}
- name: AWS_ROLE_ARN
value: "{{ . }}"
- name: AWS_WEB_IDENTITY_TOKEN_FILE
@ -34,7 +55,19 @@ cert-manager:
- name: aws-token
mountPath: "/var/run/secrets/sts.amazonaws.com/serviceaccount/"
readOnly: true
{{- end }}
{{- end }}
{{- end }}
{{- if eq .Values.global.platform "gke" }}
serviceAccount:
annotations:
iam.gke.io/gcp-service-account: "dns01-solver@{{ .Values.global.gcp.projectId }}.iam.gserviceaccount.com"
{{- end }}
prometheus:
servicemonitor:
enabled: {{ $.Values.metrics.enabled }}
{{- with index .Values "cert-manager" "clusterIssuer" }}
clusterIssuer:
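On the `aws` platform this template pins the controller, webhook, and cainjector to control-plane nodes and injects web-identity credentials for the DNS01 solver. A hedged sketch of the values that drive it (account ID and role name are placeholders):

```yaml
global:
  platform: aws
  aws:
    region: eu-central-1                # example region
cert-manager:
  # hypothetical IAM role assumed by cert-manager for DNS01
  IamArn: "arn:aws:iam::123456789012:role/example-cert-manager"
```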

View File

@ -3,6 +3,10 @@
gateway:
name: istio-ingressgateway
{{- if ne .Values.global.platform "gke" }}
priorityClassName: "system-cluster-critical"
{{- end }}
{{- with index .Values "istio-ingress" "gateway" "replicaCount" }}
replicaCount: {{ . }}
{{- if gt (int .) 1 }}
@ -11,7 +15,7 @@ gateway:
{{- end }}
{{- end }}
{{- if not (index .Values "istio-ingress" "gateway" "affinity") }}
{{- if eq .Values.global.platform "aws" }}
# Only nodes that are fronted by a matching LB
affinity:
nodeAffinity:

View File

@ -3,6 +3,10 @@
gateway:
name: istio-private-ingressgateway
{{- if ne .Values.global.platform "gke" }}
priorityClassName: "system-cluster-critical"
{{- end }}
{{- with index .Values "istio-private-ingress" "gateway" "replicaCount" }}
replicaCount: {{ . }}
{{- if gt (int .) 1 }}
@ -11,7 +15,7 @@ gateway:
{{- end }}
{{- end }}
{{- if not (index .Values "istio-private-ingress" "gateway" "affinity") }}
{{- if eq .Values.global.platform "aws" }}
# Only nodes that are fronted by a matching LB
affinity:
nodeAffinity:
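Both gateway templates (the public one above and this private one) share the same logic: outside GKE they request the `system-cluster-critical` priority class, and a `replicaCount` above 1 enables the HA settings guarded by `gt (int .) 1` (truncated in this view). A minimal values sketch:

```yaml
istio-ingress:
  gateway:
    replicaCount: 2   # values > 1 switch on the template's HA branch
istio-private-ingress:
  gateway:
    replicaCount: 2
```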

View File

@ -1,21 +1,37 @@
{{- define "istio-values" }}
{{- if .Values.global.highAvailable }}
global:
defaultPodDisruptionBudget:
enabled: true
{{- if ne .Values.global.platform "gke" }}
priorityClassName: "system-cluster-critical"
{{- end }}
{{- end }}
istiod:
telemetry:
enabled: {{ $.Values.metrics.enabled }}
pilot:
{{- if eq .Values.global.platform "aws" }}
nodeSelector:
node-role.kubernetes.io/control-plane: ""
tolerations:
- key: node-role.kubernetes.io/control-plane
effect: NoSchedule
{{- end }}
{{- if .Values.global.highAvailable }}
replicaCount: 2
global:
defaultPodDisruptionBudget:
enabled: true
{{- else }}
extraContainerArgs:
- --leader-elect=false
{{- end }}
{{- with index .Values "istio" "kiali-server" }}
kiali-server:
{{- toYaml . | nindent 2 }}
{{- end }}
{{- with .Values.istio.rateLimiting }}
rateLimiting:
{{- toYaml . | nindent 2 }}
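Besides the platform-guarded priority class and the AWS control-plane pinning for istiod, the template passes `kiali-server` and `rateLimiting` values through verbatim. A hedged example of feeding those pass-throughs (both keys shown are assumptions about the downstream charts, not KubeZero defaults):

```yaml
istio:
  kiali-server:
    auth:
      strategy: anonymous   # assumed upstream kiali-server option
  rateLimiting:
    enabled: true           # hypothetical key; rendered verbatim by the template
```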

View File

@ -5,9 +5,15 @@ kubezero:
gitSync: {}
global:
highAvailable: false
clusterName: zdt-trial-cluster
# platform: aws (kubeadm, default), gke, or nocloud
platform: "aws"
highAvailable: false
aws: {}
gcp: {}
addons:
enabled: true
@ -37,7 +43,7 @@ network:
cert-manager:
enabled: false
namespace: cert-manager
targetRevision: 0.9.8
targetRevision: 0.9.9
storage:
enabled: false
@ -58,13 +64,13 @@ storage:
istio:
enabled: false
namespace: istio-system
targetRevision: 0.22.3
targetRevision: 0.22.3-1
istio-ingress:
enabled: false
chart: kubezero-istio-gateway
namespace: istio-ingress
targetRevision: 0.22.3
targetRevision: 0.22.3-1
gateway:
service: {}
@ -72,7 +78,7 @@ istio-private-ingress:
enabled: false
chart: kubezero-istio-gateway
namespace: istio-ingress
targetRevision: 0.22.3
targetRevision: 0.22.3-1
gateway:
service: {}
@ -108,7 +114,7 @@ metrics:
logging:
enabled: false
namespace: logging
targetRevision: 0.8.11
targetRevision: 0.8.12
argo:
enabled: false
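With `global.platform` now defaulting to `aws`, a GKE cluster only needs the platform flipped and its GCP project supplied; the project ID below is a placeholder consumed by, e.g., the cert-manager GSA annotation shown earlier:

```yaml
global:
  clusterName: zdt-trial-cluster
  platform: "gke"
  gcp:
    projectId: "my-gcp-project"   # placeholder project ID
```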

View File

[Image diff: existing binary image replaced; 166 KiB before and after, preview not shown]

BIN docs/images/k8s-v129.png (new file, 36 KiB; binary content not shown)

docs/v1.29.md (new file, 22 lines)
View File

@ -0,0 +1,22 @@
# ![k8s-v1.29](images/k8s-v129.png) KubeZero 1.29
## What's new - Major themes
- all KubeZero and support AMIs are now based on Alpine 3.20.1
- new (optional) Telemetry module consisting of the OpenTelemetry Collector, Jaeger UI, and an OpenSearch + Dashboards backend
- custom KubeZero ArgoCD edition adding support for referencing external secrets via helm-secrets + vals
- Nvidia and AWS Neuron drivers updated to the latest versions for AI/ML workloads
- Falco IDS now uses the modern eBPF event source (preview)
## Version upgrades
- cilium 1.15.7
- istio 1.22.3
- ArgoCD 2.11.5
- Prometheus 2.53 / Grafana 11.1 (fixing many of the previous warnings)
- ...
### FeatureGates
- CustomCPUCFSQuotaPeriod
- KubeProxyDrainingTerminatingNodes
- ImageMaximumGCAge
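Of these, CustomCPUCFSQuotaPeriod and ImageMaximumGCAge are kubelet gates, while KubeProxyDrainingTerminatingNodes belongs to kube-proxy. A hedged KubeletConfiguration sketch enabling the kubelet pair (the GC age value is an example, not a KubeZero default):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  CustomCPUCFSQuotaPeriod: true
  ImageMaximumGCAge: true
# example only: allow pruning images unused for a week
imageMaximumGCAge: 168h
```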
## Known issues