Compare commits

..

64 Commits

Author SHA1 Message Date
Renovate Bot a494106c01 chore(deps): update kubezero-sql-dependencies 2024-04-06 03:07:27 +00:00
Stefan Reimer 8b7b1ec8fa Merge pull request 'chore(deps): update kubezero-logging-dependencies' (#160) from renovate/kubezero-logging-kubezero-logging-dependencies into master
Reviewed-on: #160
2024-04-04 13:41:31 +00:00
Stefan Reimer e2770079eb feat: version upgrades for kubezero-metrics 2024-04-04 13:39:36 +00:00
Renovate Bot b2d8a11854 chore(deps): update kubezero-logging-dependencies 2024-04-04 03:10:31 +00:00
Stefan Reimer 1bdbb7c538 feat: version upgrades for opensearch and operators 2024-04-03 14:36:59 +00:00
Stefan Reimer 1350500f7f Merge pull request 'chore(deps): update kubezero-metrics-dependencies' (#158) from renovate/kubezero-metrics-kubezero-metrics-dependencies into master
Reviewed-on: #158
2024-04-03 14:35:48 +00:00
Stefan Reimer 1cb0ff2c0d Merge pull request 'chore(deps): update helm release kube-prometheus-stack to v57' (#149) from renovate/kubezero-metrics-major-kubezero-metrics-dependencies into master
Reviewed-on: #149
2024-04-03 14:35:31 +00:00
Stefan Reimer 734f19010f Merge pull request 'chore(deps): update helm release eck-operator to v2.12.1' (#180) from renovate/kubezero-operators-kubezero-operators-dependencies into master
Reviewed-on: #180
2024-04-03 13:18:24 +00:00
Stefan Reimer 3013c39061 Merge pull request 'chore(deps): update helm release jaeger to v2' (#173) from renovate/kubezero-telemetry-major-kubezero-telemetry-dependencies into master
Reviewed-on: #173
2024-04-03 13:11:11 +00:00
Stefan Reimer ca14178e94 feat: Falco version upgrade 2024-04-03 13:11:07 +00:00
Stefan Reimer 4b4431919a Merge pull request 'chore(deps): update helm release falco to v4' (#163) from renovate/kubezero-falco-major-kubezero-falco-dependencies into master
Reviewed-on: #163
2024-04-03 11:49:53 +00:00
Stefan Reimer 32e71b4129 feat: Istio upgrade to 1.21 2024-04-03 11:49:07 +00:00
Stefan Reimer 6b7746d3df Merge pull request 'chore(deps): update kubezero-istio-dependencies' (#137) from renovate/kubezero-istio-kubezero-istio-dependencies into master
Reviewed-on: #137
2024-04-02 17:39:38 +00:00
Stefan Reimer 52de70a4a8 Merge pull request 'chore(deps): update helm release gateway to v1.21.0' (#135) from renovate/kubezero-istio-gateway-kubezero-istio-gateway-dependencies into master
Reviewed-on: #135
2024-04-02 17:39:22 +00:00
Renovate Bot e8204779a5 chore(deps): update helm release kube-prometheus-stack to v57 2024-03-28 03:07:08 +00:00
Renovate Bot 9a56c99ee5 chore(deps): update helm release eck-operator to v2.12.1 2024-03-28 03:06:41 +00:00
Stefan Reimer 5116e52bc9 chore: typo 2024-03-27 22:51:24 +00:00
Stefan Reimer 26d59f63da chore: typo 2024-03-27 22:49:26 +00:00
Stefan Reimer 8c2ef9cf2c feat: v1.28 version upgrade argoCD incl. move into argo umbrella chart 2024-03-27 22:48:02 +00:00
Stefan Reimer 9fed97db49 docs: update support timeline 2024-03-27 13:58:32 +00:00
Stefan Reimer 588e50f56e Merge pull request 'chore(deps): update helm release aws-ebs-csi-driver to v2.29.1' (#178) from renovate/kubezero-storage-kubezero-storage-dependencies into master
Reviewed-on: #178
2024-03-27 13:58:10 +00:00
Stefan Reimer 908055bd36 Merge pull request 'chore(deps): update kubezero-network-dependencies' (#179) from renovate/kubezero-network-kubezero-network-dependencies into master
Reviewed-on: #179
2024-03-27 13:57:48 +00:00
Renovate Bot a05e6286cc chore(deps): update kubezero-istio-dependencies 2024-03-27 03:08:54 +00:00
Renovate Bot 7b153ac7cc chore(deps): update kubezero-network-dependencies 2024-03-27 03:08:32 +00:00
Renovate Bot 3e1d8e9c3e chore(deps): update helm release aws-ebs-csi-driver to v2.29.1 2024-03-27 03:06:52 +00:00
Stefan Reimer 78639b623a feat: version bump cert-manager, gitea and Jenkins 2024-03-24 18:49:08 +00:00
Stefan Reimer 4e9c147b7e Merge pull request 'chore(deps): update helm release argo-events to v2.4.4' (#176) from renovate/kubezero-argo-kubezero-argo-dependencies into master
Reviewed-on: #176
2024-03-24 17:48:11 +00:00
Stefan Reimer 64d76c283a Merge pull request 'chore(deps): update kubezero-argocd-dependencies (major)' (#166) from renovate/kubezero-argocd-major-kubezero-argocd-dependencies into master
Reviewed-on: #166
2024-03-24 17:13:42 +00:00
Renovate Bot 71f909e49e chore(deps): update kubezero-argocd-dependencies 2024-03-24 17:12:41 +00:00
Stefan Reimer ed4a47dcec Merge pull request 'chore(deps): update kubezero-argocd-dependencies' (#148) from renovate/kubezero-argocd-kubezero-argocd-dependencies into master
Reviewed-on: #148
2024-03-24 17:09:31 +00:00
Stefan Reimer 3ab37e7a7b Merge pull request 'chore(deps): update helm release cert-manager to v1.14.4' (#152) from renovate/kubezero-cert-manager-kubezero-cert-manager-dependencies into master
Reviewed-on: #152
2024-03-24 17:03:22 +00:00
Stefan Reimer 798c3cba57 Merge pull request 'chore(deps): update kubezero-ci-dependencies' (#170) from renovate/kubezero-ci-kubezero-ci-dependencies into master
Reviewed-on: #170
2024-03-24 16:18:10 +00:00
Renovate Bot 3b536f7c44 chore(deps): update kubezero-ci-dependencies 2024-03-24 03:03:45 +00:00
Renovate Bot 69e132c857 chore(deps): update helm release argo-events to v2.4.4 2024-03-24 03:03:28 +00:00
Stefan Reimer 53f0bbffb6 feat: upgrade addons, storage and network module as part of v1.28 2024-03-22 17:04:41 +00:00
Stefan Reimer b0a6326a09 chore: cleanup upgrade script 2024-03-22 16:58:47 +00:00
Stefan Reimer 358042d38b Merge pull request 'chore(deps): update kubezero-storage-dependencies' (#150) from renovate/kubezero-storage-kubezero-storage-dependencies into master
Reviewed-on: #150
2024-03-22 14:24:05 +00:00
Stefan Reimer 22b774c939 fix: final fixes for cli tools of the v1.27 cycle 2024-03-22 12:21:55 +00:00
Renovate Bot 71061475c8 chore(deps): update kubezero-storage-dependencies 2024-03-22 03:06:38 +00:00
Stefan Reimer 3ea16b311b Merge pull request 'chore(deps): update twinproduction/aws-eks-asg-rolling-update-handler docker tag to v1.8.3' (#168) from renovate/twinproduction-aws-eks-asg-rolling-update-handler-1.x into master
Reviewed-on: #168
2024-03-21 16:29:43 +00:00
Stefan Reimer 46e115e4f5 Merge pull request 'chore(deps): update kubezero-addons-dependencies' (#136) from renovate/kubezero-addons-kubezero-addons-dependencies into master
Reviewed-on: #136
2024-03-21 16:25:40 +00:00
Stefan Reimer e55f986de8 Merge pull request 'chore(deps): update kubezero-network-dependencies' (#154) from renovate/kubezero-network-kubezero-network-dependencies into master
Reviewed-on: #154
2024-03-21 13:09:34 +00:00
Stefan Reimer 9ed2dbca96 Feat: first WIP of v1.28 2024-03-21 13:00:50 +00:00
Renovate Bot fcd2192cb4 chore(deps): update kubezero-argocd-dependencies 2024-03-21 03:05:18 +00:00
Renovate Bot 8aa50e4129 chore(deps): update kubezero-addons-dependencies 2024-03-20 19:58:07 +00:00
Renovate Bot d9146abf72 chore(deps): update kubezero-metrics-dependencies 2024-03-20 19:56:58 +00:00
Renovate Bot 7d354402d6 chore(deps): update helm release jaeger to v2 2024-03-15 03:23:54 +00:00
Renovate Bot 91a0034b26 chore(deps): update helm release falco to v4 2024-03-15 03:23:44 +00:00
Renovate Bot 48e381cb0f chore(deps): update kubezero-network-dependencies 2024-03-14 03:21:44 +00:00
Renovate Bot b98dc98e81 chore(deps): update helm release gateway to v1.21.0 2024-03-14 03:19:12 +00:00
Stefan Reimer 16fab2e0a0 chore: version bumps for all things CI/CD 2024-03-12 16:17:40 +00:00
Stefan Reimer 3b2f83c124 Merge pull request 'chore(deps): update keycloak docker tag to v18.7.1' (#162) from renovate/kubezero-auth-kubezero-auth-dependencies into master
Reviewed-on: #162
2024-03-12 15:49:45 +00:00
Stefan Reimer e36b096a46 doc: argo default values 2024-03-12 15:23:22 +00:00
Stefan Reimer 7628debe0c Merge pull request 'chore(deps): update helm release jenkins to v5' (#164) from renovate/kubezero-ci-major-kubezero-ci-dependencies into master
Reviewed-on: #164
2024-03-12 15:22:24 +00:00
Stefan Reimer 72c585b8ef Merge pull request 'chore(deps): update kubezero-ci-dependencies' (#161) from renovate/kubezero-ci-kubezero-ci-dependencies into master
Reviewed-on: #161
2024-03-12 15:21:59 +00:00
Stefan Reimer d8a73bbb73 Merge pull request 'chore(deps): update docker.io/alpine docker tag to v3.19' (#151) from renovate/docker.io-alpine-3.x into master
Reviewed-on: #151
2024-03-12 15:21:32 +00:00
Stefan Reimer 21e5417331 Merge pull request 'chore(deps): update helm release falco to v3.8.7' (#127) from renovate/kubezero-falco-kubezero-falco-dependencies into master
Reviewed-on: #127
2024-03-12 15:20:42 +00:00
Renovate Bot 2dc58765e7 chore(deps): update kubezero-ci-dependencies 2024-03-12 03:23:58 +00:00
Renovate Bot cfda9b6a92 chore(deps): update helm release cert-manager to v1.14.4 2024-03-09 03:24:29 +00:00
Renovate Bot 48b1d08cc6 chore(deps): update helm release jenkins to v5 2024-03-08 03:19:41 +00:00
Renovate Bot 4628d1e1e7 chore(deps): update keycloak docker tag to v18.7.1 2024-02-23 03:16:31 +00:00
Renovate Bot 1a0bd7f312 chore(deps): update twinproduction/aws-eks-asg-rolling-update-handler docker tag to v1.8.3 2024-02-19 03:09:51 +00:00
Renovate Bot 14030824b1 chore(deps): update helm release falco to v3.8.7 2023-12-19 03:07:08 +00:00
Renovate Bot 8b54524c58 chore(deps): update docker.io/alpine docker tag to v3.19 2023-12-08 03:05:30 +00:00
279 changed files with 54426 additions and 59607 deletions

View File

@@ -1,9 +1,9 @@
-ARG ALPINE_VERSION=3.18
+ARG ALPINE_VERSION=3.19
 FROM docker.io/alpine:${ALPINE_VERSION}
 ARG ALPINE_VERSION
-ARG KUBE_VERSION=1.27
+ARG KUBE_VERSION=1.28.8
 
 RUN cd /etc/apk/keys && \
     wget "https://cdn.zero-downtime.net/alpine/stefan@zero-downtime.net-61bb6bfb.rsa.pub" && \

View File

@@ -18,7 +18,7 @@ KubeZero is a Kubernetes distribution providing an integrated container platform
# Version / Support Matrix
KubeZero releases track the same *minor* version of Kubernetes.
Any 1.26.X-Y release of KubeZero supports any Kubernetes cluster 1.26.X.

KubeZero is distributed as a collection of versioned Helm charts, allowing custom upgrade schedules and module versions as needed.
@@ -28,15 +28,15 @@ KubeZero is distributed as a collection of versioned Helm charts, allowing custo
 gantt
     title KubeZero Support Timeline
     dateFormat YYYY-MM-DD
-    section 1.25
-    beta     :125b, 2023-03-01, 2023-03-31
-    release  :after 125b, 2023-08-01
-    section 1.26
-    beta     :126b, 2023-06-01, 2023-06-30
-    release  :after 126b, 2023-11-01
     section 1.27
     beta     :127b, 2023-09-01, 2023-09-30
-    release  :after 127b, 2024-02-01
+    release  :after 127b, 2024-04-30
+    section 1.28
+    beta     :128b, 2024-03-01, 2024-04-30
+    release  :after 128b, 2024-08-31
+    section 1.29
+    beta     :129b, 2024-07-01, 2024-08-30
+    release  :after 129b, 2024-11-30
 ```
 [Upstream release policy](https://kubernetes.io/releases/)
@@ -57,7 +57,7 @@ gantt
## Featured workloads
- rootless CI/CD build platform to build containers as part of a CI pipeline, using podman / fuse device plugin support
- containerized AI models via integrated out-of-the-box support for Nvidia GPU workers as well as AWS Neuron

## Control plane
- all Kubernetes components compiled against Alpine OS using `buildmode=pie`

@@ -85,12 +85,12 @@ gantt
- CSI Snapshot controller and Gemini snapshot groups and retention

## Ingress
- AWS Network Loadbalancer and Istio Ingress controllers
- no additional costs per exposed service
- real client source IP available to workloads via HTTP header and access logs
- ACME SSL Certificate handling via cert-manager incl. renewal etc.
- support for TCP services
- optional rate limiting support
- optional full service mesh

## Metrics
@@ -104,4 +104,4 @@ gantt
- flexible ElasticSearch setup, leveraging the ECK operator, for easy maintenance & minimal admin knowledge required, incl. automated backups to S3
- Kibana allowing easy search and dashboards for all logs, incl. pre-configured index templates and index management
- [fluentd-concenter](https://git.zero-downtime.net/ZeroDownTime/container-park/src/branch/master/fluentd-concenter) service providing queuing during high load as well as additional parsing options
- lightweight fluent-bit agents on each node, requiring minimal resources, forwarding logs securely via TLS to fluentd-concenter

View File

@@ -4,7 +4,7 @@
 API_VERSIONS="-a monitoring.coreos.com/v1 -a snapshot.storage.k8s.io/v1 -a policy/v1/PodDisruptionBudget"
 
 #VERSION="latest"
-VERSION="v1.27"
+VERSION="v1.28"
 
 # Waits for max 300s and retries
 function wait_for() {
@@ -211,8 +211,7 @@ spec:
       hostIPC: true
       hostPID: true
       tolerations:
-      - key: node-role.kubernetes.io/control-plane
-        operator: Exists
+      - operator: Exists
         effect: NoSchedule
       initContainers:
       - name: node-upgrade

View File

@@ -8,27 +8,11 @@ import yaml
 def migrate(values):
     """Actual changes here"""
 
     # Cleanup
-    values.pop("Domain", None)
-    values.pop("clusterName", None)
-    if "addons" in values:
-        if not values["addons"]:
-            values.pop("addons")
-
-    # fix argoCD CM
+    # argoCD moves to argo module
     try:
-        if not values["argocd"]["configs"]["cm"]["url"].startswith("http"):
-            values["argocd"]["configs"]["cm"]["url"] = "https://" + values["argocd"]["configs"]["cm"]["url"]
-    except KeyError:
-        pass
-
-    # migrate eck operator to new operator module
-    try:
-        if values["logging"]["eck-operator"]["enabled"]:
-            if "operators" not in values:
-                values["operators"] = { "enabled": True }
-            values["operators"]["eck-operator"] = { "enabled": True }
-        values["logging"].pop("eck-operator", None)
+        if values["argocd"]["enabled"]:
+            values["argo"] = { "enabled": True, "argo-cd": values["argocd"] }
+            values.pop("argocd")
     except KeyError:
         pass
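
The net effect of the new `migrate()` is to nest any existing top-level `argocd` values under the `argo` umbrella chart. As a sketch (the `istio` sub-key is just an illustrative passenger value carried over unchanged), a user's kubezero-values fragment would be rewritten like this:

```yaml
# before migration
argocd:
  enabled: true
  istio:
    enabled: true      # illustrative nested value

# after migration
argo:
  enabled: true
  argo-cd:
    enabled: true
    istio:
      enabled: true
```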

View File

@@ -23,52 +23,22 @@ control_plane_upgrade kubeadm_upgrade
 
 # shellcheck disable=SC2015
 #argo_used && kubectl edit app kubezero -n argocd || kubectl edit cm kubezero-values -n kube-system
 
-# v1.27
-# We need to restore the network ready file as cilium decided to rename it
-control_plane_upgrade apply_network
-
-echo "Wait for all CNI agents to be running ..."
-kubectl rollout status ds/cilium -n kube-system --timeout=300s
-
-all_nodes_upgrade "cd /host/etc/cni/net.d && ln -s 05-cilium.conflist 05-cilium.conf || true"
-
-# v1.27
-# now the rest
-control_plane_upgrade "apply_addons, apply_storage, apply_operators"
-
-# v1.27
-# Remove legacy eck-operator as part of logging if running
-kubectl delete statefulset elastic-operator -n logging || true
-
-# v1.27
+# upgrade modules
+control_plane_upgrade "apply_network apply_addons, apply_storage, apply_operators"
 
 echo "Checking that all pods in kube-system are running ..."
 waitSystemPodsRunning
 
 echo "Applying remaining KubeZero modules..."
 
-# v1.27
-### Cleanup of some deprecated Istio Crds
-for crd in clusterrbacconfigs.rbac.istio.io rbacconfigs.rbac.istio.io servicerolebindings.rbac.istio.io serviceroles.rbac.istio.io; do
-  kubectl delete crds $crd || true
-done
+### v1.28
+# - remove old argocd app, all resources will be taken over by argo.argo-cd
+kubectl patch app argocd -n argocd \
+  --type json \
+  --patch='[ { "op": "remove", "path": "/metadata/finalizers" } ]' && \
+  kubectl delete app argocd -n argocd || true
 
-# Cleanup of some legacy node labels and annotations
-controllers=$(kubectl get nodes -l node-role.kubernetes.io/control-plane -o json | jq .items[].metadata.name -r)
-for c in $controllers; do
-  for l in projectcalico.org/IPv4VXLANTunnelAddr projectcalico.org/IPv4Address; do
-    kubectl annotate node $c ${l}-
-  done
-  kubectl label node $c topology.ebs.csi.aws.com/zone-
-done
-
-# Fix for legacy cert-manager CRDs to be upgraded
-for crd_name in certificaterequests.cert-manager.io certificates.cert-manager.io challenges.acme.cert-manager.io clusterissuers.cert-manager.io issuers.cert-manager.io orders.acme.cert-manager.io; do
-  manager_index="$(kubectl get crd "${crd_name}" --show-managed-fields --output json | jq -r '.metadata.managedFields | map(.manager == "cainjector") | index(true)')"
-  [ "$manager_index" != "null" ] && kubectl patch crd "${crd_name}" --type=json -p="[{\"op\": \"remove\", \"path\": \"/metadata/managedFields/${manager_index}\"}]"
-done
-
-# v1.27
-control_plane_upgrade "apply_cert-manager, apply_istio, apply_istio-ingress, apply_istio-private-ingress, apply_logging, apply_metrics, apply_telemetry, apply_argocd"
+control_plane_upgrade "apply_cert-manager, apply_istio, apply_istio-ingress, apply_istio-private-ingress, apply_logging, apply_metrics, apply_telemetry, apply_argo"
 
 # Trigger backup of upgraded cluster state
 kubectl create job --from=cronjob/kubezero-backup kubezero-backup-$VERSION -n kube-system
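
The `kubectl patch` in the v1.28 block strips the Application's finalizers so the old `argocd` app can be deleted without Argo CD cascade-deleting the resources that `argo.argo-cd` is about to adopt. On the live object this is typically the standard Argo CD resources finalizer, roughly:

```yaml
# sketch of the old Application object prior to the patch
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: argocd
  namespace: argocd
  finalizers:
    - resources-finalizer.argocd.argoproj.io  # removed by the JSON patch above
```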

View File

@@ -2,7 +2,7 @@ apiVersion: v2
 name: kubeadm
 description: KubeZero Kubeadm cluster config
 type: application
-version: 1.27.8
+version: 1.28.8
 home: https://kubezero.com
 icon: https://cdn.zero-downtime.net/assets/kubezero/logo-small-64.png
 keywords:

View File

@@ -9,7 +9,7 @@ networking:
   podSubnet: 10.244.0.0/16
 etcd:
   local:
-    # imageTag: 3.5.5-0
+    # imageTag: 3.5.12-0
     extraArgs:
       ### DNS discovery
       #discovery-srv: {{ .Values.domain }}

@@ -73,6 +73,7 @@ apiServer:
   {{- end }}
   {{- if .Values.api.awsIamAuth.enabled }}
     authentication-token-webhook-config-file: /etc/kubernetes/apiserver/aws-iam-authenticator.yaml
+    authentication-token-webhook-cache-ttl: 3600s
   {{- end }}
     feature-gates: {{ include "kubeadm.featuregates" ( dict "return" "csv" ) | trimSuffix "," | quote }}
     enable-admission-plugins: DenyServiceExternalIPs,NodeRestriction,EventRateLimit,ExtendedResourceToleration
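
The added `authentication-token-webhook-cache-ttl: 3600s` lets the API server cache aws-iam-authenticator webhook results for an hour instead of calling the webhook on every request. The whole block only renders when IAM auth is enabled in the chart values, roughly:

```yaml
# minimal sketch of the kubeadm chart values gating this block
api:
  awsIamAuth:
    enabled: true
```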

View File

@@ -2,6 +2,6 @@ apiVersion: kubeproxy.config.k8s.io/v1alpha1
 kind: KubeProxyConfiguration
 # kube-proxy doesn't really support setting dynamic bind-address via config, replaced by cilium long-term anyways
 metricsBindAddress: "0.0.0.0:10249"
-# calico < 3.22.1 breaks starting with 1.23, see https://github.com/projectcalico/calico/issues/5011
-# we go Cilium anyways
 mode: "iptables"
+logging:
+  format: json

View File

@@ -1,9 +1,9 @@
 {{- /* Feature gates for all control plane components */ -}}
-{{- /* ToAdd: "PodAndContainerStatsFromCRI" */ -}}
 {{- /* Issues: "MemoryQoS" */ -}}
-{{- /* v1.28: "NodeSwap" */ -}}
+{{- /* v1.30?: "NodeSwap" */ -}}
+{{- /* v1.29: remove/beta now "SidecarContainers" */ -}}
 {{- define "kubeadm.featuregates" }}
-{{- $gates := list "CustomCPUCFSQuotaPeriod" }}
+{{- $gates := list "CustomCPUCFSQuotaPeriod" "SidecarContainers" "PodAndContainerStatsFromCRI" }}
 {{- if eq .return "csv" }}
 {{- range $key := $gates }}
 {{- $key }}=true,
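
For reference, with the two extra gates in `$gates` the `csv` branch emits `<gate>=true,` per entry and the caller's `trimSuffix ","` drops the final comma, so the rendered apiserver argument should come out as:

```yaml
feature-gates: "CustomCPUCFSQuotaPeriod=true,SidecarContainers=true,PodAndContainerStatsFromCRI=true"
```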

View File

@@ -117,7 +117,7 @@ spec:
       containers:
       - name: aws-iam-authenticator
-        image: public.ecr.aws/zero-downtime/aws-iam-authenticator:v0.6.11
+        image: public.ecr.aws/zero-downtime/aws-iam-authenticator:v0.6.14
         args:
         - server
         - --backend-mode=CRD,MountedFile

View File

@@ -2,8 +2,8 @@ apiVersion: v2
 name: kubezero-addons
 description: KubeZero umbrella chart for various optional cluster addons
 type: application
-version: 0.8.4
-appVersion: v1.27
+version: 0.8.5
+appVersion: v1.28
 home: https://kubezero.com
 icon: https://cdn.zero-downtime.net/assets/kubezero/logo-small-64.png
 keywords:

@@ -20,24 +20,24 @@ maintainers:
     email: stefan@zero-downtime.net
 
 dependencies:
   - name: external-dns
-    version: 1.13.1
+    version: 1.14.3
     repository: https://kubernetes-sigs.github.io/external-dns/
     condition: external-dns.enabled
   - name: cluster-autoscaler
-    version: 9.29.5
+    version: 9.36.0
     repository: https://kubernetes.github.io/autoscaler
     condition: cluster-autoscaler.enabled
   - name: nvidia-device-plugin
-    version: 0.14.2
+    version: 0.14.5
     # https://github.com/NVIDIA/k8s-device-plugin
     repository: https://nvidia.github.io/k8s-device-plugin
     condition: nvidia-device-plugin.enabled
   - name: sealed-secrets
-    version: 2.13.2
+    version: 2.15.1
     repository: https://bitnami-labs.github.io/sealed-secrets
     condition: sealed-secrets.enabled
   - name: aws-node-termination-handler
-    version: 0.22.0
+    version: 0.23.0
     repository: "oci://public.ecr.aws/aws-ec2/helm"
     condition: aws-node-termination-handler.enabled
   - name: aws-eks-asg-rolling-update-handler

View File

@@ -1,6 +1,6 @@
 # kubezero-addons
 
-![Version: 0.8.4](https://img.shields.io/badge/Version-0.8.4-informational?style=flat-square) ![Type: application](https://img.shields.io/badge/Type-application-informational?style=flat-square) ![AppVersion: v1.27](https://img.shields.io/badge/AppVersion-v1.27-informational?style=flat-square)
+![Version: 0.8.5](https://img.shields.io/badge/Version-0.8.5-informational?style=flat-square) ![Type: application](https://img.shields.io/badge/Type-application-informational?style=flat-square) ![AppVersion: v1.28](https://img.shields.io/badge/AppVersion-v1.28-informational?style=flat-square)
 
 KubeZero umbrella chart for various optional cluster addons

@@ -18,12 +18,12 @@ Kubernetes: `>= 1.26.0`
 
 | Repository | Name | Version |
 |------------|------|---------|
-| https://bitnami-labs.github.io/sealed-secrets | sealed-secrets | 2.13.2 |
-| https://kubernetes-sigs.github.io/external-dns/ | external-dns | 1.13.1 |
-| https://kubernetes.github.io/autoscaler | cluster-autoscaler | 9.29.5 |
-| https://nvidia.github.io/k8s-device-plugin | nvidia-device-plugin | 0.14.2 |
+| https://bitnami-labs.github.io/sealed-secrets | sealed-secrets | 2.15.1 |
+| https://kubernetes-sigs.github.io/external-dns/ | external-dns | 1.14.3 |
+| https://kubernetes.github.io/autoscaler | cluster-autoscaler | 9.36.0 |
+| https://nvidia.github.io/k8s-device-plugin | nvidia-device-plugin | 0.14.5 |
 | https://twin.github.io/helm-charts | aws-eks-asg-rolling-update-handler | 1.5.0 |
-| oci://public.ecr.aws/aws-ec2/helm | aws-node-termination-handler | 0.22.0 |
+| oci://public.ecr.aws/aws-ec2/helm | aws-node-termination-handler | 0.23.0 |
 
 # MetalLB
@@ -63,7 +63,7 @@ Device plugin for [AWS Neuron](https://aws.amazon.com/machine-learning/neuron/)
| aws-eks-asg-rolling-update-handler.environmentVars[8].name | string | `"AWS_STS_REGIONAL_ENDPOINTS"` | |
| aws-eks-asg-rolling-update-handler.environmentVars[8].value | string | `"regional"` | |
| aws-eks-asg-rolling-update-handler.image.repository | string | `"twinproduction/aws-eks-asg-rolling-update-handler"` | |
| aws-eks-asg-rolling-update-handler.image.tag | string | `"v1.8.2"` | |
| aws-eks-asg-rolling-update-handler.image.tag | string | `"v1.8.3"` | |
| aws-eks-asg-rolling-update-handler.nodeSelector."node-role.kubernetes.io/control-plane" | string | `""` | |
| aws-eks-asg-rolling-update-handler.resources.limits.memory | string | `"128Mi"` | |
| aws-eks-asg-rolling-update-handler.resources.requests.cpu | string | `"10m"` | |
@@ -102,7 +102,7 @@ Device plugin for [AWS Neuron](https://aws.amazon.com/machine-learning/neuron/)
| aws-node-termination-handler.useProviderId | bool | `true` | |
| awsNeuron.enabled | bool | `false` | |
| awsNeuron.image.name | string | `"public.ecr.aws/neuron/neuron-device-plugin"` | |
| awsNeuron.image.tag | string | `"2.12.5.0"` | |
| awsNeuron.image.tag | string | `"2.19.16.0"` | |
| cluster-autoscaler.autoDiscovery.clusterName | string | `""` | |
| cluster-autoscaler.awsRegion | string | `"us-west-2"` | |
| cluster-autoscaler.enabled | bool | `false` | |
@@ -111,7 +111,7 @@ Device plugin for [AWS Neuron](https://aws.amazon.com/machine-learning/neuron/)
| cluster-autoscaler.extraArgs.scan-interval | string | `"30s"` | |
| cluster-autoscaler.extraArgs.skip-nodes-with-local-storage | bool | `false` | |
| cluster-autoscaler.image.repository | string | `"registry.k8s.io/autoscaling/cluster-autoscaler"` | |
| cluster-autoscaler.image.tag | string | `"v1.27.3"` | |
| cluster-autoscaler.image.tag | string | `"v1.28.2"` | |
| cluster-autoscaler.nodeSelector."node-role.kubernetes.io/control-plane" | string | `""` | |
| cluster-autoscaler.podDisruptionBudget | bool | `false` | |
| cluster-autoscaler.prometheusRule.enabled | bool | `false` | |

View File

@@ -1,5 +1,5 @@
 apiVersion: v2
-appVersion: 1.20.0
+appVersion: 1.21.0
 description: A Helm chart for the AWS Node Termination Handler.
 home: https://github.com/aws/aws-node-termination-handler/
 icon: https://raw.githubusercontent.com/aws/eks-charts/master/docs/logo/aws.png

@@ -21,4 +21,4 @@ name: aws-node-termination-handler
 sources:
   - https://github.com/aws/aws-node-termination-handler/
 type: application
-version: 0.22.0
+version: 0.23.0

View File

@@ -119,7 +119,7 @@ The configuration in this table applies to AWS Node Termination Handler in queue
 | `checkASGTagBeforeDraining` | [DEPRECATED](Use `checkTagBeforeDraining` instead) If `true`, check that the instance is tagged with the `managedAsgTag` before draining the node. If `false`, disables calls to the ASG API. | `true` |
 | `managedAsgTag` | [DEPRECATED](Use `managedTag` instead) The node tag to check if `checkASGTagBeforeDraining` is `true`.
 | `useProviderId` | If `true`, fetch node name through Kubernetes node spec ProviderID instead of AWS event PrivateDnsHostname. | `false` |
+| `topologySpreadConstraints` | [Topology Spread Constraints](https://kubernetes.io/docs/concepts/scheduling-eviction/topology-spread-constraints/) for pod scheduling. Useful with a highly available deployment to reduce the risk of running multiple replicas on the same Node | `[]` |
 
 ### IMDS Mode Configuration
 
 The configuration in this table applies to AWS Node Termination Handler in IMDS mode.

@@ -174,6 +174,6 @@ The configuration in this table applies to AWS Node Termination Handler testing
 ## Metrics Endpoint Considerations
 
-AWS Node Termination HAndler in IMDS mode runs as a DaemonSet with `useHostNetwork: true` by default. If the Prometheus server is enabled with `enablePrometheusServer: true` nothing else will be able to bind to the configured port (by default `prometheusServerPort: 9092`) in the root network namespace. Therefore, it will need to have a firewall/security group configured on the nodes to block access to the `/metrics` endpoint.
+AWS Node Termination Handler in IMDS mode runs as a DaemonSet with `useHostNetwork: true` by default. If the Prometheus server is enabled with `enablePrometheusServer: true` nothing else will be able to bind to the configured port (by default `prometheusServerPort: 9092`) in the root network namespace. Therefore, it will need to have a firewall/security group configured on the nodes to block access to the `/metrics` endpoint.
 
 You can switch NTH in IMDS mode to run w/ `useHostNetwork: false`, but you will need to make sure that IMDSv1 is enabled or the IMDSv2 IP hop count will need to be incremented to 2 (see the [IMDSv2 documentation](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/configuring-instance-metadata-service.html)).
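
The new `topologySpreadConstraints` value is passed through verbatim to the pod spec (see the template change below), so a highly available queue-processor deployment could spread replicas across zones with a sketch like the following; the selector label depends on your release name:

```yaml
topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: topology.kubernetes.io/zone
    whenUnsatisfiable: ScheduleAnyway
    labelSelector:
      matchLabels:
        app.kubernetes.io/name: aws-node-termination-handler  # assumed selector label
```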

View File

@@ -220,4 +220,8 @@ spec:
       tolerations:
         {{- toYaml . | nindent 8 }}
       {{- end }}
+      {{- with .Values.topologySpreadConstraints }}
+      topologySpreadConstraints:
+        {{- toYaml . | nindent 8 }}
+      {{- end }}
 {{- end }}

View File

@@ -52,6 +52,8 @@ affinity: {}
 tolerations: []
 
+topologySpreadConstraints: []
+
 # Extra environment variables
 extraEnv: []

View File

@@ -24,21 +24,22 @@ spec:
           volumeMounts:
           - name: host
             mountPath: /host
+            readOnly: true
           - name: workdir
             mountPath: /tmp
           env:
           - name: DEBUG
             value: ""
           - name: RESTIC_REPOSITORY
             valueFrom:
               secretKeyRef:
                 name: kubezero-backup-restic
                 key: repository
           - name: RESTIC_PASSWORD
             valueFrom:
               secretKeyRef:
                 name: kubezero-backup-restic
                 key: password
+          {{- with .Values.clusterBackup.extraEnv }}
+          {{- toYaml . | nindent 12 }}
+          {{- end }}
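
`clusterBackup.extraEnv` is rendered verbatim into the container's `env` list, so additional restic or AWS settings can be injected without touching the template. A hedged sketch (variable and value are illustrative):

```yaml
clusterBackup:
  extraEnv:
    - name: AWS_DEFAULT_REGION   # hypothetical: region of an S3-backed restic repository
      value: eu-central-1
```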

View File

@@ -54,7 +54,7 @@ aws-eks-asg-rolling-update-handler:
   enabled: false
   image:
     repository: twinproduction/aws-eks-asg-rolling-update-handler
-    tag: v1.8.2
+    tag: v1.8.3
 
   environmentVars:
   - name: CLUSTER_NAME

@@ -107,7 +107,6 @@ aws-node-termination-handler:
   fullnameOverride: "aws-node-termination-handler"
 
-  checkASGTagBeforeDraining: false
-
   # -- "zdt:kubezero:nth:${ClusterName}"
   managedTag: "zdt:kubezero:nth:${ClusterName}"

@@ -161,7 +160,7 @@ awsNeuron:
   image:
     name: public.ecr.aws/neuron/neuron-device-plugin
-    tag: 2.12.5.0
+    tag: 2.19.16.0
 
 nvidia-device-plugin:
   enabled: false

@@ -201,7 +200,7 @@ cluster-autoscaler:
   image:
     repository: registry.k8s.io/autoscaling/cluster-autoscaler
-    tag: v1.27.3
+    tag: v1.28.2
 
   autoDiscovery:
     clusterName: ""

@@ -236,7 +235,7 @@ cluster-autoscaler:
   # On AWS enable Projected Service Accounts to assume IAM role
   #extraEnv:
   #  AWS_ROLE_ARN: <IamArn>
   #  AWS_WEB_IDENTITY_TOKEN_FILE: "/var/run/secrets/sts.amazonaws.com/serviceaccount/token"
   #  AWS_STS_REGIONAL_ENDPOINTS: "regional"
 
 #extraVolumes:
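
Tying the commented hints above together, enabling the autoscaler with projected-service-account credentials would look roughly like this sketch (cluster name and role ARN are placeholders):

```yaml
cluster-autoscaler:
  enabled: true
  autoDiscovery:
    clusterName: my-cluster                                             # placeholder
  extraEnv:
    AWS_ROLE_ARN: "arn:aws:iam::123456789012:role/cluster-autoscaler"   # placeholder
    AWS_WEB_IDENTITY_TOKEN_FILE: "/var/run/secrets/sts.amazonaws.com/serviceaccount/token"
    AWS_STS_REGIONAL_ENDPOINTS: "regional"
```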

View File

@@ -1,11 +1,12 @@
 apiVersion: v2
 description: KubeZero Argo - Events, Workflow, CD
 name: kubezero-argo
-version: 0.1.0
+version: 0.2.0
 home: https://kubezero.com
 icon: https://cdn.zero-downtime.net/assets/kubezero/logo-small-64.png
 keywords:
   - kubezero
+  - argocd
   - argo-events
   - argo-workflow
 maintainers:

@@ -17,7 +18,19 @@ dependencies:
     version: ">= 0.1.6"
     repository: https://cdn.zero-downtime.net/charts/
   - name: argo-events
-    version: 2.4.3
+    version: 2.4.4
     repository: https://argoproj.github.io/argo-helm
     condition: argo-events.enabled
+  - name: argo-cd
+    version: 6.7.3
+    repository: https://argoproj.github.io/argo-helm
+    condition: argo-cd.enabled
+  - name: argocd-apps
+    version: 2.0.0
+    repository: https://argoproj.github.io/argo-helm
+    condition: argo-cd.enabled
+  - name: argocd-image-updater
+    version: 0.9.6
+    repository: https://argoproj.github.io/argo-helm
+    condition: argocd-image-updater.enabled
 kubeVersion: ">= 1.26.0"
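
Since each new dependency carries a `condition`, nothing beyond argo-events changes for existing users until the flags are flipped; note that both `argo-cd` and `argocd-apps` hang off the same `argo-cd.enabled` switch. A minimal opt-in sketch:

```yaml
argo-cd:
  enabled: true          # also pulls in argocd-apps
argocd-image-updater:
  enabled: true
```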

View File

@@ -1,6 +1,6 @@
 # kubezero-argo
 
-![Version: 0.1.0](https://img.shields.io/badge/Version-0.1.0-informational?style=flat-square)
+![Version: 0.2.0](https://img.shields.io/badge/Version-0.2.0-informational?style=flat-square)
 
 KubeZero Argo - Events, Workflow, CD

@@ -18,14 +18,75 @@ Kubernetes: `>= 1.26.0`
 
 | Repository | Name | Version |
 |------------|------|---------|
-| https://argoproj.github.io/argo-helm | argo-events | 2.4.3 |
+| https://argoproj.github.io/argo-helm | argo-cd | 6.7.3 |
+| https://argoproj.github.io/argo-helm | argo-events | 2.4.4 |
+| https://argoproj.github.io/argo-helm | argocd-apps | 2.0.0 |
+| https://argoproj.github.io/argo-helm | argocd-image-updater | 0.9.6 |
 | https://cdn.zero-downtime.net/charts/ | kubezero-lib | >= 0.1.6 |
## Values
| Key | Type | Default | Description |
|-----|------|---------|-------------|
| argo-cd.applicationSet.enabled | bool | `false` | |
| argo-cd.configs.cm."resource.customizations" | string | `"cert-manager.io/Certificate:\n # Lua script for customizing the health status assessment\n health.lua: |\n hs = {}\n if obj.status ~= nil then\n if obj.status.conditions ~= nil then\n for i, condition in ipairs(obj.status.conditions) do\n if condition.type == \"Ready\" and condition.status == \"False\" then\n hs.status = \"Degraded\"\n hs.message = condition.message\n return hs\n end\n if condition.type == \"Ready\" and condition.status == \"True\" then\n hs.status = \"Healthy\"\n hs.message = condition.message\n return hs\n end\n end\n end\n end\n hs.status = \"Progressing\"\n hs.message = \"Waiting for certificate\"\n return hs\n"` | |
| argo-cd.configs.cm."timeout.reconciliation" | int | `300` | |
| argo-cd.configs.cm."ui.bannercontent" | string | `"KubeZero v1.27 - Release notes"` | |
| argo-cd.configs.cm."ui.bannerpermanent" | string | `"true"` | |
| argo-cd.configs.cm."ui.bannerposition" | string | `"bottom"` | |
| argo-cd.configs.cm."ui.bannerurl" | string | `"https://kubezero.com/releases/v1.27"` | |
| argo-cd.configs.cm.url | string | `"https://argocd.example.com"` | |
| argo-cd.configs.knownHosts.data.ssh_known_hosts | string | `"bitbucket.org ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPIQmuzMBuKdWeF4+a2sjSSpBK0iqitSQ+5BM9KhpexuGt20JpTVM7u5BDZngncgrqDMbWdxMWWOGtZ9UgbqgZE=\nbitbucket.org ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIazEu89wgQZ4bqs3d63QSMzYVa0MuJ2e2gKTKqu+UUO\nbitbucket.org ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDQeJzhupRu0u0cdegZIa8e86EG2qOCsIsD1Xw0xSeiPDlCr7kq97NLmMbpKTX6Esc30NuoqEEHCuc7yWtwp8dI76EEEB1VqY9QJq6vk+aySyboD5QF61I/1WeTwu+deCbgKMGbUijeXhtfbxSxm6JwGrXrhBdofTsbKRUsrN1WoNgUa8uqN1Vx6WAJw1JHPhglEGGHea6QICwJOAr/6mrui/oB7pkaWKHj3z7d1IC4KWLtY47elvjbaTlkN04Kc/5LFEirorGYVbt15kAUlqGM65pk6ZBxtaO3+30LVlORZkxOh+LKL/BvbZ/iRNhItLqNyieoQj/uh/7Iv4uyH/cV/0b4WDSd3DptigWq84lJubb9t/DnZlrJazxyDCulTmKdOR7vs9gMTo+uoIrPSb8ScTtvw65+odKAlBj59dhnVp9zd7QUojOpXlL62Aw56U4oO+FALuevvMjiWeavKhJqlR7i5n9srYcrNV7ttmDw7kf/97P5zauIhxcjX+xHv4M=\ngithub.com ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEmKSENjQEezOmxkZMy7opKgwFB9nkt5YRrYMjNuG5N87uRgg6CLrbo5wAdT/y6v0mKV0U2w0WZ2YB/++Tpockg=\ngithub.com ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOMqqnkVzrm0SdG6UOoqKLsabgH5C9okWi0dh2l9GKJl\ngithub.com ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAq2A7hRGmdnm9tUDbO9IDSwBK6TbQa+PXYPCPy6rbTrTtw7PHkccKrpp0yVhp5HdEIcKr6pLlVDBfOLX9QUsyCOV0wzfjIJNlGEYsdlLJizHhbn2mUjvSAHQqZETYP81eFzLQNnPHt4EVVUh7VfDESU84KezmD5QlWpXLmvU31/yMf+Se8xhHTvKSCZIFImWwoG6mbUoWf9nzpIoaSjB+weqqUUmpaaasXVal72J+UX2B+2RPW3RcT0eOzQgqlJL3RKrTJvdsjE3JEAvGq3lGHSZXy28G3skua2SmVi/w4yCE6gbODqnTWlg7+wC604ydGXA8VJiS5ap43JXiUFFAaQ==\ngitlab.com ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFSMqzJeV9rUzU4kWitGjeR4PWSa29SPqJ1fVkhtj3Hw9xjLVXVYrU9QlYWrOLXBpQ6KWjbjTDTdDkoohFzgbEY=\ngitlab.com ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAfuCHKVTjquxvt6CM6tdG4SLp1Btn/nOeHHE5UOzRdf\ngitlab.com ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCsj2bNKTBSpIYDEGk9KxsGh3mySTRgMtXL583qmBpzeQ+jqCMRgBqB98u3z++J1sKlXHWfM9dyhSevkMwSbhoR8XIq/U0tCNyokEi/ueaBMCvbcTHhO7FcwzY92WK4Yt0aGROY5qX2UKSeOvuP4D6TPqKF1onrSzH9bx9XUf2lEdWT/ia1NEKjunUqu1xOB/StKDHMoX4/OKyIzuS0q/T1zOATthvasJFoPrAjkohTyaDUz2LN5JoH839hViyEG82yB+MjcFV5MU3N1l1QL3cVUCh93xSaua1N85qivl+siMkPGbO5xR/En4iEY6K2XPASUEMaieWVNTRCtJ4S8H+9\ngit.zero-downtime.net ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQC8YdJ4YcOK7A0K7qOWsRjCS+wHTStXRcwBe7gjG43HPSNijiCKoGf/c+tfNsRhyouawg7Law6M6ahmS/jKWBpznRIM+OdOFVSuhnK/nr6h6wG3/ZfdLicyAPvx1/STGY/Fc6/zXA88i/9PV+g84gSVmhf3fGY92wokiASiu9DU4T9dT1gIkdyOX6fbMi1/mMKLSrHnAQcjyasYDvw9ISCJ95EoSwbj7O4c+7jo9fxYvdCfZZZAEZGozTRLAAO0AnjVcRah7bZV/jfHJuhOipV/TB7UVAhlVv1dfGV7hoTp9UKtKZFJF4cjIrSGxqQA/mdhSdLgkepK7yc4Jp2xGnaarhY29DfqsQqop+ugFpTbj7Xy5Rco07mXc6XssbAZhI1xtCOX20N4PufBuYippCK5AE6AiAyVtJmvfGQk4HP+TjOyhFo7PZm3wc9Hym7IBBVC0Sl30K8ddufkAgHwNGvvu1ZmD9ZWaMOXJDHBCZGMMr16QREZwVtZTwMEQalc7/yqmuqMhmcJIfs/GA2Lt91y+pq9C8XyeUL0VFPch0vkcLSRe3ghMZpRFJ/ht307xPcLzgTJqN6oQtNNDzSQglSEjwhge2K4GyWcIh+oGsWxWz5dHyk1iJmw90Y976BZIl/mYVgbTtZAJ81oGe/0k5rAe+LDL+Yq6tG28QFOg0QmiQ==\n"` | |
| argo-cd.configs.params."controller.operation.processors" | string | `"5"` | |
| argo-cd.configs.params."controller.status.processors" | string | `"10"` | |
| argo-cd.configs.params."server.enable.gzip" | bool | `true` | |
| argo-cd.configs.params."server.insecure" | bool | `true` | |
| argo-cd.configs.secret.createSecret | bool | `false` | |
| argo-cd.configs.styles | string | `".sidebar__logo img { content: url(https://cdn.zero-downtime.net/assets/kubezero/logo-small-64.png); }\n.sidebar__logo__text-logo { height: 0em; }\n.sidebar { background: linear-gradient(to bottom, #6A4D79, #493558, #2D1B30, #0D0711); }\n"` | |
| argo-cd.controller.metrics.enabled | bool | `false` | |
| argo-cd.controller.metrics.serviceMonitor.enabled | bool | `true` | |
| argo-cd.controller.resources.limits.memory | string | `"2048Mi"` | |
| argo-cd.controller.resources.requests.cpu | string | `"100m"` | |
| argo-cd.controller.resources.requests.memory | string | `"512Mi"` | |
| argo-cd.dex.enabled | bool | `false` | |
| argo-cd.enabled | bool | `false` | |
| argo-cd.global.logging.format | string | `"json"` | |
| argo-cd.istio.enabled | bool | `false` | |
| argo-cd.istio.gateway | string | `"istio-ingress/ingressgateway"` | |
| argo-cd.istio.ipBlocks | list | `[]` | |
| argo-cd.notifications.enabled | bool | `false` | |
| argo-cd.repoServer.metrics.enabled | bool | `false` | |
| argo-cd.repoServer.metrics.serviceMonitor.enabled | bool | `true` | |
| argo-cd.server.metrics.enabled | bool | `false` | |
| argo-cd.server.metrics.serviceMonitor.enabled | bool | `true` | |
| argo-cd.server.service.servicePortHttpsName | string | `"grpc"` | |
| argo-events.configs.jetstream.settings.maxFileStore | int | `-1` | Maximum size of the file storage (e.g. 20G) |
| argo-events.configs.jetstream.settings.maxMemoryStore | int | `-1` | Maximum size of the memory storage (e.g. 1G) |
| argo-events.configs.jetstream.streamConfig.duplicates | string | `"300s"` | Not documented at the moment |
| argo-events.configs.jetstream.streamConfig.maxAge | string | `"72h"` | Maximum age of existing messages, i.e. “72h”, “4h35m” |
| argo-events.configs.jetstream.streamConfig.maxBytes | string | `"1GB"` | |
| argo-events.configs.jetstream.streamConfig.maxMsgs | int | `1000000` | Maximum number of messages before expiring oldest message |
| argo-events.configs.jetstream.streamConfig.replicas | int | `1` | Number of replicas, defaults to 3 and requires minimal 3 |
| argo-events.configs.jetstream.versions[0].configReloaderImage | string | `"natsio/nats-server-config-reloader:0.14.1"` | |
| argo-events.configs.jetstream.versions[0].metricsExporterImage | string | `"natsio/prometheus-nats-exporter:0.14.0"` | |
| argo-events.configs.jetstream.versions[0].natsImage | string | `"nats:2.10.11-scratch"` | |
| argo-events.configs.jetstream.versions[0].startCommand | string | `"/nats-server"` | |
| argo-events.configs.jetstream.versions[0].version | string | `"2.10.11"` | |
| argo-events.enabled | bool | `false` | |
| argocd-apps.applications | object | `{}` | |
| argocd-apps.enabled | bool | `false` | |
| argocd-apps.projects | object | `{}` | |
| argocd-image-updater.authScripts.enabled | bool | `true` | |
| argocd-image-updater.authScripts.scripts."ecr-login.sh" | string | `"#!/bin/sh\naws ecr --region $AWS_REGION get-authorization-token --output text --query 'authorizationData[].authorizationToken' | base64 -d\n"` | |
| argocd-image-updater.authScripts.scripts."ecr-public-login.sh" | string | `"#!/bin/sh\naws ecr-public --region us-east-1 get-authorization-token --output text --query 'authorizationData.authorizationToken' | base64 -d\n"` | |
| argocd-image-updater.config.argocd.plaintext | bool | `true` | |
| argocd-image-updater.enabled | bool | `false` | |
| argocd-image-updater.fullnameOverride | string | `"argocd-image-updater"` | |
| argocd-image-updater.metrics.enabled | bool | `false` | |
| argocd-image-updater.metrics.serviceMonitor.enabled | bool | `true` | |
| argocd-image-updater.sshConfig.config | string | `"Host *\n PubkeyAcceptedAlgorithms +ssh-rsa\n HostkeyAlgorithms +ssh-rsa\n"` | |
## Resources
- https://argoproj.github.io/argo-cd/operator-manual/metrics/
- https://raw.githubusercontent.com/argoproj/argo-cd/master/examples/dashboard.json
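
Worth noting when upgrading: `argocd-apps` 2.0.0 keys `projects` and `applications` by name (the `{}` object defaults above), whereas the 1.x chart used lists (`[]` in the removed kubezero-argocd chart further down). A hypothetical bootstrap entry therefore moves from a list item to a named map entry, roughly (fields are illustrative):

```yaml
argocd-apps:
  enabled: true
  projects:
    kubezero:                             # map key replaces the former "- name:" list item
      namespace: argocd
      description: KubeZero root project  # illustrative fields
      sourceRepos:
        - https://cdn.zero-downtime.net/charts
```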

View File

@@ -16,4 +16,6 @@
{{ template "chart.valuesSection" . }}
## Resources
- https://argoproj.github.io/argo-cd/operator-manual/metrics/
- https://raw.githubusercontent.com/argoproj/argo-cd/master/examples/dashboard.json

View File

@@ -1,5 +1,5 @@
-{{- if .Values.istio.enabled }}
-{{- if .Values.istio.ipBlocks }}
+{{- if index .Values "argo-cd" "istio" "enabled" }}
+{{- if index .Values "argo-cd" "istio" "ipBlocks" }}
 apiVersion: security.istio.io/v1beta1
 kind: AuthorizationPolicy
 metadata:

@@ -16,7 +16,7 @@ spec:
   - from:
     - source:
         notIpBlocks:
-        {{- toYaml .Values.istio.ipBlocks | nindent 8 }}
+        {{- toYaml (index .Values "argo-cd" "istio" "ipBlocks") | nindent 8 }}
     to:
     - operation:
         hosts: [{{ index .Values "argo-cd" "configs" "cm" "url" | quote }}]
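
With the Istio settings now nested under `argo-cd`, restricting the exposed URL to known source ranges becomes something like the following sketch (the CIDR is a documentation placeholder):

```yaml
argo-cd:
  istio:
    enabled: true
    gateway: istio-ingress/ingressgateway
    ipBlocks:
      - 203.0.113.0/24   # requests from outside this range are rejected by the policy
```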

View File

@@ -1,4 +1,4 @@
-{{- if .Values.istio.enabled }}
+{{- if index .Values "argo-cd" "istio" "enabled" }}
 apiVersion: networking.istio.io/v1alpha3
 kind: VirtualService
 metadata:

@@ -8,7 +8,7 @@ metadata:
     {{- include "kubezero-lib.labels" . | nindent 4 }}
 spec:
   gateways:
-  - {{ .Values.istio.gateway }}
+  - {{ index .Values "argo-cd" "istio" "gateway" }}
   hosts:
   - {{ get (urlParse (index .Values "argo-cd" "configs" "cm" "url")) "host" }}
   http:

@@ -19,13 +19,13 @@ spec:
         prefix: argocd-client
     route:
     - destination:
-        host: argocd-server
+        host: argo-argocd-server
         port:
           number: 443
   - name: http
     route:
     - destination:
-        host: argocd-server
+        host: argo-argocd-server
         port:
           number: 80
 {{- end }}

View File

@@ -5,6 +5,6 @@
 update_helm
 
 # Create ZDT dashboard configmap
-#../kubezero-metrics/sync_grafana_dashboards.py dashboards.yaml templates/grafana-dashboards.yaml
+../kubezero-metrics/sync_grafana_dashboards.py dashboards.yaml templates/argo-cd/grafana-dashboards.yaml
 
 update_docs

View File

@@ -30,3 +30,157 @@ argo-events:
          configReloaderImage: natsio/nats-server-config-reloader:0.14.1
          startCommand: /nats-server

argocd-apps:
  enabled: false
  projects: {}
  applications: {}

argo-cd:
  enabled: false

  #configs:
  #  secret:
  #    # `htpasswd -nbBC 10 "" $ARGO_PWD | tr -d ':\n' | sed 's/$2y/$2a/'`
  #    argocdServerAdminPassword: "$2a$10$ivKzaXVxMqdeDSfS3nqi1Od3iDbnL7oXrixzDfZFRHlXHnAG6LydG"
  #    argocdServerAdminPasswordMtime: "2020-04-24T15:33:09BST"

  global:
    logging:
      format: json

  # image:
  #   tag: v2.1.6

  configs:
    styles: |
      .sidebar__logo img { content: url(https://cdn.zero-downtime.net/assets/kubezero/logo-small-64.png); }
      .sidebar__logo__text-logo { height: 0em; }
      .sidebar { background: linear-gradient(to bottom, #6A4D79, #493558, #2D1B30, #0D0711); }

    cm:
      ui.bannercontent: "KubeZero v1.27 - Release notes"
      ui.bannerurl: "https://kubezero.com/releases/v1.27"
      ui.bannerpermanent: "true"
      ui.bannerposition: "bottom"

      # argo-cd.server.config.url -- ArgoCD URL being exposed via Istio
      url: https://argocd.example.com

      timeout.reconciliation: 300s

      resource.customizations: |
        cert-manager.io/Certificate:
          # Lua script for customizing the health status assessment
          health.lua: |
            hs = {}
            if obj.status ~= nil then
              if obj.status.conditions ~= nil then
                for i, condition in ipairs(obj.status.conditions) do
                  if condition.type == "Ready" and condition.status == "False" then
                    hs.status = "Degraded"
                    hs.message = condition.message
                    return hs
                  end
                  if condition.type == "Ready" and condition.status == "True" then
                    hs.status = "Healthy"
                    hs.message = condition.message
                    return hs
                  end
                end
              end
            end
            hs.status = "Progressing"
            hs.message = "Waiting for certificate"
            return hs

    secret:
      createSecret: false

    ssh:
      extraHosts: "git.zero-downtime.net ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQC8YdJ4YcOK7A0K7qOWsRjCS+wHTStXRcwBe7gjG43HPSNijiCKoGf/c+tfNsRhyouawg7Law6M6ahmS/jKWBpznRIM+OdOFVSuhnK/nr6h6wG3/ZfdLicyAPvx1/STGY/Fc6/zXA88i/9PV+g84gSVmhf3fGY92wokiASiu9DU4T9dT1gIkdyOX6fbMi1/mMKLSrHnAQcjyasYDvw9ISCJ95EoSwbj7O4c+7jo9fxYvdCfZZZAEZGozTRLAAO0AnjVcRah7bZV/jfHJuhOipV/TB7UVAhlVv1dfGV7hoTp9UKtKZFJF4cjIrSGxqQA/mdhSdLgkepK7yc4Jp2xGnaarhY29DfqsQqop+ugFpTbj7Xy5Rco07mXc6XssbAZhI1xtCOX20N4PufBuYippCK5AE6AiAyVtJmvfGQk4HP+TjOyhFo7PZm3wc9Hym7IBBVC0Sl30K8ddufkAgHwNGvvu1ZmD9ZWaMOXJDHBCZGMMr16QREZwVtZTwMEQalc7/yqmuqMhmcJIfs/GA2Lt91y+pq9C8XyeUL0VFPch0vkcLSRe3ghMZpRFJ/ht307xPcLzgTJqN6oQtNNDzSQglSEjwhge2K4GyWcIh+oGsWxWz5dHyk1iJmw90Y976BZIl/mYVgbTtZAJ81oGe/0k5rAe+LDL+Yq6tG28QFOg0QmiQ=="

    params:
      controller.status.processors: "10"
      controller.operation.processors: "5"
      server.insecure: true
      server.enable.gzip: true

  controller:
    metrics:
      enabled: false
      serviceMonitor:
        enabled: true
    resources:
      limits:
        # cpu: 500m
        memory: 2048Mi
      requests:
        cpu: 100m
        memory: 512Mi

  repoServer:
    metrics:
      enabled: false
      serviceMonitor:
        enabled: true

  server:
    # Rename former https port to grpc, works with istio + insecure
    service:
      servicePortHttpsName: grpc
    metrics:
      enabled: false
      serviceMonitor:
        enabled: true

  # redis:
  #   We might want to try to keep redis close to the controller
  #   affinity:

  dex:
    enabled: false

  applicationSet:
    enabled: false

  notifications:
    enabled: false

  # Support for Istio Ingress for ArgoCD
  istio:
    # istio.enabled -- Deploy Istio VirtualService to expose ArgoCD
    enabled: false
    # istio.gateway -- Name of the Istio gateway to add the VirtualService to
    gateway: istio-ingress/ingressgateway
    ipBlocks: []

argocd-image-updater:
  enabled: false

  # Unify all ArgoCD pieces under the same argocd namespace
  fullnameOverride: argocd-image-updater

  config:
    argocd:
      plaintext: true

  metrics:
    enabled: false
    serviceMonitor:
      enabled: true

  authScripts:
    enabled: true
    scripts:
      ecr-login.sh: |
        #!/bin/sh
        aws ecr --region $AWS_REGION get-authorization-token --output text --query 'authorizationData[].authorizationToken' | base64 -d
      ecr-public-login.sh: |
        #!/bin/sh
        aws ecr-public --region us-east-1 get-authorization-token --output text --query 'authorizationData.authorizationToken' | base64 -d

  sshConfig:
    config: |
      Host *
        PubkeyAcceptedAlgorithms +ssh-rsa
        HostkeyAlgorithms +ssh-rsa
View File

@@ -1,29 +0,0 @@
apiVersion: v2
description: KubeZero ArgoCD - config, branding, image-updater (optional)
name: kubezero-argocd
version: 0.13.3
home: https://kubezero.com
icon: https://cdn.zero-downtime.net/assets/kubezero/logo-small-64.png
keywords:
- kubezero
- argocd
- argocd-image-updater
maintainers:
- name: Stefan Reimer
email: stefan@zero-downtime.net
# Url: https://github.com/argoproj/argo-helm/tree/main/charts
dependencies:
- name: kubezero-lib
version: ">= 0.1.6"
repository: https://cdn.zero-downtime.net/charts/
- name: argo-cd
version: 5.51.4
repository: https://argoproj.github.io/argo-helm
- name: argocd-apps
version: 1.4.1
repository: https://argoproj.github.io/argo-helm
- name: argocd-image-updater
version: 0.9.1
repository: https://argoproj.github.io/argo-helm
condition: argocd-image-updater.enabled
kubeVersion: ">= 1.26.0"

View File

@@ -1,74 +0,0 @@
# kubezero-argocd
![Version: 0.13.3](https://img.shields.io/badge/Version-0.13.3-informational?style=flat-square)
KubeZero ArgoCD - config, branding, image-updater (optional)
**Homepage:** <https://kubezero.com>
## Maintainers
| Name | Email | Url |
| ---- | ------ | --- |
| Stefan Reimer | <stefan@zero-downtime.net> | |
## Requirements
Kubernetes: `>= 1.26.0`
| Repository | Name | Version |
|------------|------|---------|
| https://argoproj.github.io/argo-helm | argo-cd | 5.51.4 |
| https://argoproj.github.io/argo-helm | argocd-apps | 1.4.1 |
| https://argoproj.github.io/argo-helm | argocd-image-updater | 0.9.1 |
| https://cdn.zero-downtime.net/charts/ | kubezero-lib | >= 0.1.6 |
## Values
| Key | Type | Default | Description |
|-----|------|---------|-------------|
| argo-cd.applicationSet.enabled | bool | `false` | |
| argo-cd.configs.cm."resource.customizations" | string | `"cert-manager.io/Certificate:\n # Lua script for customizing the health status assessment\n health.lua: |\n hs = {}\n if obj.status ~= nil then\n if obj.status.conditions ~= nil then\n for i, condition in ipairs(obj.status.conditions) do\n if condition.type == \"Ready\" and condition.status == \"False\" then\n hs.status = \"Degraded\"\n hs.message = condition.message\n return hs\n end\n if condition.type == \"Ready\" and condition.status == \"True\" then\n hs.status = \"Healthy\"\n hs.message = condition.message\n return hs\n end\n end\n end\n end\n hs.status = \"Progressing\"\n hs.message = \"Waiting for certificate\"\n return hs\n"` | |
| argo-cd.configs.cm."timeout.reconciliation" | int | `300` | |
| argo-cd.configs.cm."ui.bannercontent" | string | `"KubeZero v1.27 - Release notes"` | |
| argo-cd.configs.cm."ui.bannerpermanent" | string | `"true"` | |
| argo-cd.configs.cm."ui.bannerposition" | string | `"bottom"` | |
| argo-cd.configs.cm."ui.bannerurl" | string | `"https://kubezero.com/releases/v1.27"` | |
| argo-cd.configs.cm.url | string | `"https://argocd.example.com"` | |
| argo-cd.configs.knownHosts.data.ssh_known_hosts | string | `"bitbucket.org ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPIQmuzMBuKdWeF4+a2sjSSpBK0iqitSQ+5BM9KhpexuGt20JpTVM7u5BDZngncgrqDMbWdxMWWOGtZ9UgbqgZE=\nbitbucket.org ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIazEu89wgQZ4bqs3d63QSMzYVa0MuJ2e2gKTKqu+UUO\nbitbucket.org ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDQeJzhupRu0u0cdegZIa8e86EG2qOCsIsD1Xw0xSeiPDlCr7kq97NLmMbpKTX6Esc30NuoqEEHCuc7yWtwp8dI76EEEB1VqY9QJq6vk+aySyboD5QF61I/1WeTwu+deCbgKMGbUijeXhtfbxSxm6JwGrXrhBdofTsbKRUsrN1WoNgUa8uqN1Vx6WAJw1JHPhglEGGHea6QICwJOAr/6mrui/oB7pkaWKHj3z7d1IC4KWLtY47elvjbaTlkN04Kc/5LFEirorGYVbt15kAUlqGM65pk6ZBxtaO3+30LVlORZkxOh+LKL/BvbZ/iRNhItLqNyieoQj/uh/7Iv4uyH/cV/0b4WDSd3DptigWq84lJubb9t/DnZlrJazxyDCulTmKdOR7vs9gMTo+uoIrPSb8ScTtvw65+odKAlBj59dhnVp9zd7QUojOpXlL62Aw56U4oO+FALuevvMjiWeavKhJqlR7i5n9srYcrNV7ttmDw7kf/97P5zauIhxcjX+xHv4M=\ngithub.com ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEmKSENjQEezOmxkZMy7opKgwFB9nkt5YRrYMjNuG5N87uRgg6CLrbo5wAdT/y6v0mKV0U2w0WZ2YB/++Tpockg=\ngithub.com ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOMqqnkVzrm0SdG6UOoqKLsabgH5C9okWi0dh2l9GKJl\ngithub.com ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAq2A7hRGmdnm9tUDbO9IDSwBK6TbQa+PXYPCPy6rbTrTtw7PHkccKrpp0yVhp5HdEIcKr6pLlVDBfOLX9QUsyCOV0wzfjIJNlGEYsdlLJizHhbn2mUjvSAHQqZETYP81eFzLQNnPHt4EVVUh7VfDESU84KezmD5QlWpXLmvU31/yMf+Se8xhHTvKSCZIFImWwoG6mbUoWf9nzpIoaSjB+weqqUUmpaaasXVal72J+UX2B+2RPW3RcT0eOzQgqlJL3RKrTJvdsjE3JEAvGq3lGHSZXy28G3skua2SmVi/w4yCE6gbODqnTWlg7+wC604ydGXA8VJiS5ap43JXiUFFAaQ==\ngitlab.com ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFSMqzJeV9rUzU4kWitGjeR4PWSa29SPqJ1fVkhtj3Hw9xjLVXVYrU9QlYWrOLXBpQ6KWjbjTDTdDkoohFzgbEY=\ngitlab.com ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAfuCHKVTjquxvt6CM6tdG4SLp1Btn/nOeHHE5UOzRdf\ngitlab.com ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCsj2bNKTBSpIYDEGk9KxsGh3mySTRgMtXL583qmBpzeQ+jqCMRgBqB98u3z++J1sKlXHWfM9dyhSevkMwSbhoR8XIq/U0tCNyokEi/ueaBMCvbcTHhO7FcwzY92WK4Yt0aGROY5qX2UKSeOvuP4D6TPqKF1onrSzH9bx9XUf2lEdWT/ia1NEKjunUqu1xOB/StKDHMoX4/OKyIzuS0q/T1zOATthvasJFoPrAjkohTyaDUz2LN5JoH839hViyEG82yB+MjcFV5MU3N1l1QL3cVUCh93xSaua1N85qivl+siMkPGbO5xR/En4iEY6K2XPASUEMaieWVNTRCtJ4S8H+9\ngit.zero-downtime.net ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQC8YdJ4YcOK7A0K7qOWsRjCS+wHTStXRcwBe7gjG43HPSNijiCKoGf/c+tfNsRhyouawg7Law6M6ahmS/jKWBpznRIM+OdOFVSuhnK/nr6h6wG3/ZfdLicyAPvx1/STGY/Fc6/zXA88i/9PV+g84gSVmhf3fGY92wokiASiu9DU4T9dT1gIkdyOX6fbMi1/mMKLSrHnAQcjyasYDvw9ISCJ95EoSwbj7O4c+7jo9fxYvdCfZZZAEZGozTRLAAO0AnjVcRah7bZV/jfHJuhOipV/TB7UVAhlVv1dfGV7hoTp9UKtKZFJF4cjIrSGxqQA/mdhSdLgkepK7yc4Jp2xGnaarhY29DfqsQqop+ugFpTbj7Xy5Rco07mXc6XssbAZhI1xtCOX20N4PufBuYippCK5AE6AiAyVtJmvfGQk4HP+TjOyhFo7PZm3wc9Hym7IBBVC0Sl30K8ddufkAgHwNGvvu1ZmD9ZWaMOXJDHBCZGMMr16QREZwVtZTwMEQalc7/yqmuqMhmcJIfs/GA2Lt91y+pq9C8XyeUL0VFPch0vkcLSRe3ghMZpRFJ/ht307xPcLzgTJqN6oQtNNDzSQglSEjwhge2K4GyWcIh+oGsWxWz5dHyk1iJmw90Y976BZIl/mYVgbTtZAJ81oGe/0k5rAe+LDL+Yq6tG28QFOg0QmiQ==\n"` | |
| argo-cd.configs.params."controller.operation.processors" | string | `"5"` | |
| argo-cd.configs.params."controller.status.processors" | string | `"10"` | |
| argo-cd.configs.params."server.enable.gzip" | bool | `true` | |
| argo-cd.configs.params."server.insecure" | bool | `true` | |
| argo-cd.configs.secret.createSecret | bool | `false` | |
| argo-cd.configs.styles | string | `".sidebar__logo img { content: url(https://cdn.zero-downtime.net/assets/kubezero/logo-small-64.png); }\n.sidebar__logo__text-logo { height: 0em; }\n.sidebar { background: linear-gradient(to bottom, #6A4D79, #493558, #2D1B30, #0D0711); }\n"` | |
| argo-cd.controller.metrics.enabled | bool | `false` | |
| argo-cd.controller.metrics.serviceMonitor.enabled | bool | `true` | |
| argo-cd.controller.resources.requests.cpu | string | `"100m"` | |
| argo-cd.controller.resources.requests.memory | string | `"256Mi"` | |
| argo-cd.dex.enabled | bool | `false` | |
| argo-cd.global.logging.format | string | `"json"` | |
| argo-cd.notifications.enabled | bool | `false` | |
| argo-cd.repoServer.metrics.enabled | bool | `false` | |
| argo-cd.repoServer.metrics.serviceMonitor.enabled | bool | `true` | |
| argo-cd.server.metrics.enabled | bool | `false` | |
| argo-cd.server.metrics.serviceMonitor.enabled | bool | `true` | |
| argo-cd.server.service.servicePortHttpsName | string | `"grpc"` | |
| argocd-apps.applications | list | `[]` | |
| argocd-apps.projects | list | `[]` | |
| argocd-image-updater.authScripts.enabled | bool | `true` | |
| argocd-image-updater.authScripts.scripts."ecr-login.sh" | string | `"#!/bin/sh\naws ecr --region $AWS_REGION get-authorization-token --output text --query 'authorizationData[].authorizationToken' | base64 -d\n"` | |
| argocd-image-updater.authScripts.scripts."ecr-public-login.sh" | string | `"#!/bin/sh\naws ecr-public --region us-east-1 get-authorization-token --output text --query 'authorizationData.authorizationToken' | base64 -d\n"` | |
| argocd-image-updater.config.argocd.plaintext | bool | `true` | |
| argocd-image-updater.enabled | bool | `false` | |
| argocd-image-updater.fullnameOverride | string | `"argocd-image-updater"` | |
| argocd-image-updater.metrics.enabled | bool | `false` | |
| argocd-image-updater.metrics.serviceMonitor.enabled | bool | `true` | |
| argocd-image-updater.sshConfig.config | string | `"Host *\n PubkeyAcceptedAlgorithms +ssh-rsa\n HostkeyAlgorithms +ssh-rsa\n"` | |
| istio.enabled | bool | `false` | Deploy Istio VirtualService to expose ArgoCD |
| istio.gateway | string | `"istio-ingress/ingressgateway"` | Name of the Istio gateway to add the VirtualService to |
| istio.ipBlocks | list | `[]` | |
## Resources
- https://argoproj.github.io/argo-cd/operator-manual/metrics/
- https://raw.githubusercontent.com/argoproj/argo-cd/master/examples/dashboard.json

View File

@@ -1,20 +0,0 @@
{{ template "chart.header" . }}
{{ template "chart.deprecationWarning" . }}
{{ template "chart.versionBadge" . }}{{ template "chart.typeBadge" . }}{{ template "chart.appVersionBadge" . }}
{{ template "chart.description" . }}
{{ template "chart.homepageLine" . }}
{{ template "chart.maintainersSection" . }}
{{ template "chart.sourcesSection" . }}
{{ template "chart.requirementsSection" . }}
{{ template "chart.valuesSection" . }}
## Resources
- https://argoproj.github.io/argo-cd/operator-manual/metrics/
- https://raw.githubusercontent.com/argoproj/argo-cd/master/examples/dashboard.json

View File

@@ -1,10 +0,0 @@
#!/bin/bash
. ../../scripts/lib-update.sh
update_helm
# Create ZDT dashboard configmap
../kubezero-metrics/sync_grafana_dashboards.py dashboards.yaml templates/grafana-dashboards.yaml
update_docs

View File

@@ -1,162 +0,0 @@
# Support for Istio Ingress for ArgoCD
istio:
# istio.enabled -- Deploy Istio VirtualService to expose ArgoCD
enabled: false
# istio.gateway -- Name of the Istio gateway to add the VirtualService to
gateway: istio-ingress/ingressgateway
ipBlocks: []
argocd-apps:
projects: []
applications: []
argo-cd:
#configs:
# secret:
# `htpasswd -nbBC 10 "" $ARGO_PWD | tr -d ':\n' | sed 's/$2y/$2a/'`
# argocdServerAdminPassword: "$2a$10$ivKzaXVxMqdeDSfS3nqi1Od3iDbnL7oXrixzDfZFRHlXHnAG6LydG"
# argocdServerAdminPasswordMtime: "2020-04-24T15:33:09BST"
global:
logging:
format: json
# image:
# tag: v2.1.6
configs:
styles: |
.sidebar__logo img { content: url(https://cdn.zero-downtime.net/assets/kubezero/logo-small-64.png); }
.sidebar__logo__text-logo { height: 0em; }
.sidebar { background: linear-gradient(to bottom, #6A4D79, #493558, #2D1B30, #0D0711); }
cm:
ui.bannercontent: "KubeZero v1.27 - Release notes"
ui.bannerurl: "https://kubezero.com/releases/v1.27"
ui.bannerpermanent: "true"
ui.bannerposition: "bottom"
# argo-cd.server.config.url -- ArgoCD URL being exposed via Istio
url: https://argocd.example.com
timeout.reconciliation: 300
resource.customizations: |
cert-manager.io/Certificate:
# Lua script for customizing the health status assessment
health.lua: |
hs = {}
if obj.status ~= nil then
if obj.status.conditions ~= nil then
for i, condition in ipairs(obj.status.conditions) do
if condition.type == "Ready" and condition.status == "False" then
hs.status = "Degraded"
hs.message = condition.message
return hs
end
if condition.type == "Ready" and condition.status == "True" then
hs.status = "Healthy"
hs.message = condition.message
return hs
end
end
end
end
hs.status = "Progressing"
hs.message = "Waiting for certificate"
return hs
secret:
createSecret: false
knownHosts:
data:
ssh_known_hosts: |
bitbucket.org ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPIQmuzMBuKdWeF4+a2sjSSpBK0iqitSQ+5BM9KhpexuGt20JpTVM7u5BDZngncgrqDMbWdxMWWOGtZ9UgbqgZE=
bitbucket.org ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIazEu89wgQZ4bqs3d63QSMzYVa0MuJ2e2gKTKqu+UUO
bitbucket.org ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDQeJzhupRu0u0cdegZIa8e86EG2qOCsIsD1Xw0xSeiPDlCr7kq97NLmMbpKTX6Esc30NuoqEEHCuc7yWtwp8dI76EEEB1VqY9QJq6vk+aySyboD5QF61I/1WeTwu+deCbgKMGbUijeXhtfbxSxm6JwGrXrhBdofTsbKRUsrN1WoNgUa8uqN1Vx6WAJw1JHPhglEGGHea6QICwJOAr/6mrui/oB7pkaWKHj3z7d1IC4KWLtY47elvjbaTlkN04Kc/5LFEirorGYVbt15kAUlqGM65pk6ZBxtaO3+30LVlORZkxOh+LKL/BvbZ/iRNhItLqNyieoQj/uh/7Iv4uyH/cV/0b4WDSd3DptigWq84lJubb9t/DnZlrJazxyDCulTmKdOR7vs9gMTo+uoIrPSb8ScTtvw65+odKAlBj59dhnVp9zd7QUojOpXlL62Aw56U4oO+FALuevvMjiWeavKhJqlR7i5n9srYcrNV7ttmDw7kf/97P5zauIhxcjX+xHv4M=
github.com ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEmKSENjQEezOmxkZMy7opKgwFB9nkt5YRrYMjNuG5N87uRgg6CLrbo5wAdT/y6v0mKV0U2w0WZ2YB/++Tpockg=
github.com ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOMqqnkVzrm0SdG6UOoqKLsabgH5C9okWi0dh2l9GKJl
github.com ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAq2A7hRGmdnm9tUDbO9IDSwBK6TbQa+PXYPCPy6rbTrTtw7PHkccKrpp0yVhp5HdEIcKr6pLlVDBfOLX9QUsyCOV0wzfjIJNlGEYsdlLJizHhbn2mUjvSAHQqZETYP81eFzLQNnPHt4EVVUh7VfDESU84KezmD5QlWpXLmvU31/yMf+Se8xhHTvKSCZIFImWwoG6mbUoWf9nzpIoaSjB+weqqUUmpaaasXVal72J+UX2B+2RPW3RcT0eOzQgqlJL3RKrTJvdsjE3JEAvGq3lGHSZXy28G3skua2SmVi/w4yCE6gbODqnTWlg7+wC604ydGXA8VJiS5ap43JXiUFFAaQ==
gitlab.com ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFSMqzJeV9rUzU4kWitGjeR4PWSa29SPqJ1fVkhtj3Hw9xjLVXVYrU9QlYWrOLXBpQ6KWjbjTDTdDkoohFzgbEY=
gitlab.com ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAfuCHKVTjquxvt6CM6tdG4SLp1Btn/nOeHHE5UOzRdf
gitlab.com ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCsj2bNKTBSpIYDEGk9KxsGh3mySTRgMtXL583qmBpzeQ+jqCMRgBqB98u3z++J1sKlXHWfM9dyhSevkMwSbhoR8XIq/U0tCNyokEi/ueaBMCvbcTHhO7FcwzY92WK4Yt0aGROY5qX2UKSeOvuP4D6TPqKF1onrSzH9bx9XUf2lEdWT/ia1NEKjunUqu1xOB/StKDHMoX4/OKyIzuS0q/T1zOATthvasJFoPrAjkohTyaDUz2LN5JoH839hViyEG82yB+MjcFV5MU3N1l1QL3cVUCh93xSaua1N85qivl+siMkPGbO5xR/En4iEY6K2XPASUEMaieWVNTRCtJ4S8H+9
git.zero-downtime.net ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQC8YdJ4YcOK7A0K7qOWsRjCS+wHTStXRcwBe7gjG43HPSNijiCKoGf/c+tfNsRhyouawg7Law6M6ahmS/jKWBpznRIM+OdOFVSuhnK/nr6h6wG3/ZfdLicyAPvx1/STGY/Fc6/zXA88i/9PV+g84gSVmhf3fGY92wokiASiu9DU4T9dT1gIkdyOX6fbMi1/mMKLSrHnAQcjyasYDvw9ISCJ95EoSwbj7O4c+7jo9fxYvdCfZZZAEZGozTRLAAO0AnjVcRah7bZV/jfHJuhOipV/TB7UVAhlVv1dfGV7hoTp9UKtKZFJF4cjIrSGxqQA/mdhSdLgkepK7yc4Jp2xGnaarhY29DfqsQqop+ugFpTbj7Xy5Rco07mXc6XssbAZhI1xtCOX20N4PufBuYippCK5AE6AiAyVtJmvfGQk4HP+TjOyhFo7PZm3wc9Hym7IBBVC0Sl30K8ddufkAgHwNGvvu1ZmD9ZWaMOXJDHBCZGMMr16QREZwVtZTwMEQalc7/yqmuqMhmcJIfs/GA2Lt91y+pq9C8XyeUL0VFPch0vkcLSRe3ghMZpRFJ/ht307xPcLzgTJqN6oQtNNDzSQglSEjwhge2K4GyWcIh+oGsWxWz5dHyk1iJmw90Y976BZIl/mYVgbTtZAJ81oGe/0k5rAe+LDL+Yq6tG28QFOg0QmiQ==
params:
controller.status.processors: "10"
controller.operation.processors: "5"
server.insecure: true
server.enable.gzip: true
controller:
metrics:
enabled: false
serviceMonitor:
enabled: true
resources:
limits:
# cpu: 500m
memory: 2048Mi
requests:
cpu: 100m
memory: 512Mi
repoServer:
metrics:
enabled: false
serviceMonitor:
enabled: true
server:
# Rename the former https port to grpc; works with Istio + insecure mode
service:
servicePortHttpsName: grpc
metrics:
enabled: false
serviceMonitor:
enabled: true
# redis:
# We might want to try to keep redis close to the controller
# affinity:
dex:
enabled: false
applicationSet:
enabled: false
notifications:
enabled: false
argocd-image-updater:
enabled: false
# Unify all ArgoCD pieces under the same argocd namespace
fullnameOverride: argocd-image-updater
config:
argocd:
plaintext: true
metrics:
enabled: false
serviceMonitor:
enabled: true
authScripts:
enabled: true
scripts:
ecr-login.sh: |
#!/bin/sh
aws ecr --region $AWS_REGION get-authorization-token --output text --query 'authorizationData[].authorizationToken' | base64 -d
ecr-public-login.sh: |
#!/bin/sh
aws ecr-public --region us-east-1 get-authorization-token --output text --query 'authorizationData.authorizationToken' | base64 -d
sshConfig:
config: |
Host *
PubkeyAcceptedAlgorithms +ssh-rsa
HostkeyAlgorithms +ssh-rsa
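For context, a minimal sketch of how the `ecr-login.sh` scripts above are typically consumed by the image updater — the registry name, account ID, and region are placeholders, not chart defaults:

```yaml
config:
  registries:
    - name: ECR                                                 # hypothetical registry entry
      prefix: 123456789012.dkr.ecr.eu-central-1.amazonaws.com   # placeholder account/region
      api_url: https://123456789012.dkr.ecr.eu-central-1.amazonaws.com
      ping: yes
      # authScripts are mounted under /scripts by the image updater chart
      credentials: ext:/scripts/ecr-login.sh
      credsexpire: 10h                                          # refresh before the ECR token expires
```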

View File

@ -2,7 +2,7 @@ apiVersion: v2
name: kubezero-auth
description: KubeZero umbrella chart for all things Authentication and Identity management
type: application
version: 0.4.5
version: 0.4.6
appVersion: 22.0.5
home: https://kubezero.com
icon: https://cdn.zero-downtime.net/assets/kubezero/logo-small-64.png
@ -17,7 +17,7 @@ dependencies:
version: ">= 0.1.6"
repository: https://cdn.zero-downtime.net/charts/
- name: keycloak
version: 18.3.2
version: 18.7.1
repository: "oci://registry-1.docker.io/bitnamicharts"
condition: keycloak.enabled
kubeVersion: ">= 1.26.0"

View File

@ -2,7 +2,7 @@ apiVersion: v2
name: kubezero-cert-manager
description: KubeZero Umbrella Chart for cert-manager
type: application
version: 0.9.6
version: 0.9.7
home: https://kubezero.com
icon: https://cdn.zero-downtime.net/assets/kubezero/logo-small-64.png
keywords:
@ -16,6 +16,6 @@ dependencies:
version: ">= 0.1.6"
repository: https://cdn.zero-downtime.net/charts/
- name: cert-manager
version: v1.13.2
version: v1.14.4
repository: https://charts.jetstack.io
kubeVersion: ">= 1.26.0"

View File

@ -1,6 +1,6 @@
# kubezero-cert-manager
![Version: 0.9.6](https://img.shields.io/badge/Version-0.9.6-informational?style=flat-square) ![Type: application](https://img.shields.io/badge/Type-application-informational?style=flat-square)
![Version: 0.9.7](https://img.shields.io/badge/Version-0.9.7-informational?style=flat-square) ![Type: application](https://img.shields.io/badge/Type-application-informational?style=flat-square)
KubeZero Umbrella Chart for cert-manager
@ -19,7 +19,7 @@ Kubernetes: `>= 1.26.0`
| Repository | Name | Version |
|------------|------|---------|
| https://cdn.zero-downtime.net/charts/ | kubezero-lib | >= 0.1.6 |
| https://charts.jetstack.io | cert-manager | v1.13.2 |
| https://charts.jetstack.io | cert-manager | v1.14.4 |
## AWS - OIDC IAM roles
@ -37,6 +37,7 @@ If your resolvers need additional secrets like CloudFlare API tokens etc. make
| cert-manager.cainjector.nodeSelector."node-role.kubernetes.io/control-plane" | string | `""` | |
| cert-manager.cainjector.tolerations[0].effect | string | `"NoSchedule"` | |
| cert-manager.cainjector.tolerations[0].key | string | `"node-role.kubernetes.io/control-plane"` | |
| cert-manager.enableCertificateOwnerRef | bool | `true` | |
| cert-manager.enabled | bool | `true` | |
| cert-manager.extraArgs[0] | string | `"--logging-format=json"` | |
| cert-manager.extraArgs[1] | string | `"--leader-elect=false"` | |

View File

@ -18,7 +18,7 @@
"subdir": "contrib/mixin"
}
},
"version": "f0708d350e4fc95e95338fb54af7b571e5bce7a8",
"version": "5a53a708d8ab9ef936ac5b8062ffc66c77a2c18f",
"sum": "xuUBd2vqF7asyVDe5CE08uPT/RxAdy8O75EjFJoMXXU="
},
{
@ -51,6 +51,16 @@
"version": "a1d61cce1da59c71409b99b5c7568511fec661ea",
"sum": "gCtR9s/4D5fxU9aKXg0Bru+/njZhA0YjLjPiASc61FM="
},
{
"source": {
"git": {
"remote": "https://github.com/grafana/grafonnet.git",
"subdir": "gen/grafonnet-latest"
}
},
"version": "6ac1593ca787638da223380ff4a3fd0f96e953e1",
"sum": "GxEO83uxgsDclLp/fmlUJZDbSGpeUZY6Ap3G2cgdL1g="
},
{
"source": {
"git": {
@ -58,8 +68,18 @@
"subdir": "gen/grafonnet-v10.0.0"
}
},
"version": "bb2afaffbcefeae1035cd691ab06a486e0022002",
"sum": "gj/20VIGucG2vDGjG7YdHLC4yUUfrpuaneUYaRmymOM="
"version": "6ac1593ca787638da223380ff4a3fd0f96e953e1",
"sum": "W7sLuAvMSJPkC7Oo31t45Nz/cUdJV7jzNSJTd3F1daM="
},
{
"source": {
"git": {
"remote": "https://github.com/grafana/grafonnet.git",
"subdir": "gen/grafonnet-v10.4.0"
}
},
"version": "6ac1593ca787638da223380ff4a3fd0f96e953e1",
"sum": "ZSmDT7i/qU9P8ggmuPuJT+jonq1ZEsBRCXycW/H5L/A="
},
{
"source": {
@ -68,8 +88,8 @@
"subdir": "grafana-builder"
}
},
"version": "eb731883044fc58f255d79c2a8d78a5854084e05",
"sum": "VmOxvg9FuY9UYr3lN6ZJe2HhuIErJoWimPybQr3S3yQ="
"version": "7561fd330312538d22b00e0c7caecb4ba66321ea",
"sum": "+z5VY+bPBNqXcmNAV8xbJcbsRA+pro1R3IM7aIY8OlU="
},
{
"source": {
@ -88,8 +108,8 @@
"subdir": "doc-util"
}
},
"version": "503e5c8fe96d6b55775037713ac10b184709ad93",
"sum": "BY4u0kLF3Qf/4IB4HnX9S5kEQIpHb4MUrppp6WLDtlU="
"version": "6ac6c69685b8c29c54515448eaca583da2d88150",
"sum": "BrAL/k23jq+xy9oA7TWIhUx07dsA/QLm3g7ktCwe//U="
},
{
"source": {
@ -98,8 +118,8 @@
"subdir": ""
}
},
"version": "c1a315a7dbead0335a5e0486acc5583395b22a24",
"sum": "UVdL+uuFI8BSQgLfMJEJk2WDKsQXNT3dRHcr2Ti9rLI="
"version": "fc2e57a8839902ed4ba6cab5a99d642500f7102b",
"sum": "43waffw1QzvpY4rKcWoo3L7Vpee+DCYexwLDd5cPG0M="
},
{
"source": {
@ -108,8 +128,8 @@
"subdir": ""
}
},
"version": "2dbe4f9625a811b8b89f0495e74509c74779da82",
"sum": "Fe7bN9E6qeKNUdENjQvYttgf4S1DDqXRVB80wdmQgHQ="
"version": "a1c276d7a46c4b06fa5d8b4a64441939d398efe5",
"sum": "b/mEai1MvVnZ22YvZlXEO4jWDZledrtJg8eOS1ZUj0M="
},
{
"source": {
@ -118,8 +138,8 @@
"subdir": "jsonnet/kube-state-metrics"
}
},
"version": "98b38ba9bbfdff27b359c58adecab30cc1311a78",
"sum": "+dOzAK+fwsFf97uZpjcjTcEJEC1H8hh/j8f5uIQK/5g="
"version": "9ba1c3702142918e09e8eb5ca530e15198624259",
"sum": "msMZyUvcebzRILLzNlTIiSOwa1XgQKtP7jbZTkiqwM0="
},
{
"source": {
@ -128,7 +148,7 @@
"subdir": "jsonnet/kube-state-metrics-mixin"
}
},
"version": "98b38ba9bbfdff27b359c58adecab30cc1311a78",
"version": "9ba1c3702142918e09e8eb5ca530e15198624259",
"sum": "qclI7LwucTjBef3PkGBkKxF0mfZPbHnn4rlNWKGtR4c="
},
{
@ -138,8 +158,8 @@
"subdir": "jsonnet/kube-prometheus"
}
},
"version": "80ab54b66a88cd40fc935d17abbd7b50b12cc3f7",
"sum": "w35hpzjA5b+xr9dXnpudKRsdTheO9YO1SESoG4oyyL8="
"version": "76f2e1ef95be0df752037baa040781c5219e1fb3",
"sum": "IgpAgyyBZ7VT2vr9kSYQP/lkZUNQnbqpGh2sYCtUKs0="
},
{
"source": {
@ -148,8 +168,8 @@
"subdir": "jsonnet/mixin"
}
},
"version": "88e86c5caf84dc85338c904e13b0656bf1b56a67",
"sum": "n3flMIzlADeyygb0uipZ4KPp2uNSjdtkrwgHjTC7Ca4=",
"version": "71d9433ba612f4b826ffa38520b23a7985b50db3",
"sum": "gi+knjdxs2T715iIQIntrimbHRgHnpM8IFBJDD1gYfs=",
"name": "prometheus-operator-mixin"
},
{
@ -159,8 +179,8 @@
"subdir": "jsonnet/prometheus-operator"
}
},
"version": "88e86c5caf84dc85338c904e13b0656bf1b56a67",
"sum": "3tRcbCxuH5piaixkvwe4UdVVWlxkxKz8eBgbgYqvbRk="
"version": "71d9433ba612f4b826ffa38520b23a7985b50db3",
"sum": "S4LFa0h1AzANixqGMowtwVswVP+y6f+fXloxpO7hMes="
},
{
"source": {
@ -169,7 +189,7 @@
"subdir": "doc/alertmanager-mixin"
}
},
"version": "4494abfce419d1bbd3cb1a2c0b6584da88ac9b64",
"version": "14cbe6301c732658d6fe877ec55ad5b738abcf06",
"sum": "IpF46ZXsm+0wJJAPtAre8+yxTNZA57mBqGpBP/r7/kw=",
"name": "alertmanager"
},
@ -180,8 +200,8 @@
"subdir": "docs/node-mixin"
}
},
"version": "12f1744e799e04373c7a29b42bf8b8a332c82790",
"sum": "QZwFBpulndqo799gkR5rP2/WdcQKQkNnaBwhaOI8Jeg="
"version": "3accd4cf8286e69d70516abdced6bf186274322a",
"sum": "vWhHvFqV7+fxrQddTeGVKi1e4EzB3VWtNyD8TjSmevY="
},
{
"source": {
@ -190,8 +210,8 @@
"subdir": "documentation/prometheus-mixin"
}
},
"version": "5dbbadf59823aea6e9daa5e6c7877f1572b93941",
"sum": "rNvddVTMNfaguOGzEGoeKjUsfhlXJBUImC+SIFNNCiM=",
"version": "773170f372e0a57949854b74231ee3e09185f728",
"sum": "u/Fpz2MPkezy71/q+c7mF0vc3hE9fWt2W/YbvF0LP/8=",
"name": "prometheus"
},
{
@ -212,7 +232,7 @@
"subdir": "mixin"
}
},
"version": "463dd481cc740194cc9504cd6aeb48f342afe020",
"version": "93c79b61825ec00889188e35a58635eee247bc36",
"sum": "HhSSbGGCNHCMy1ee5jElYDm0yS9Vesa7QB2/SHKdjsY=",
"name": "thanos-mixin"
}

View File

@ -2,7 +2,7 @@ apiVersion: v2
name: kubezero-ci
description: KubeZero umbrella chart for all things CI
type: application
version: 0.8.6
version: 0.8.8
home: https://kubezero.com
icon: https://cdn.zero-downtime.net/assets/kubezero/logo-small-64.png
keywords:
@ -18,11 +18,11 @@ dependencies:
version: ">= 0.1.6"
repository: https://cdn.zero-downtime.net/charts/
- name: gitea
version: 10.1.0
version: 10.1.3
repository: https://dl.gitea.io/charts/
condition: gitea.enabled
- name: jenkins
version: 4.12.1
version: 5.1.3
repository: https://charts.jenkins.io
condition: jenkins.enabled
- name: trivy
@ -30,7 +30,7 @@ dependencies:
repository: https://aquasecurity.github.io/helm-charts/
condition: trivy.enabled
- name: renovate
version: 37.153.2
version: 37.267.1
repository: https://docs.renovatebot.com/helm-charts
condition: renovate.enabled
kubeVersion: ">= 1.25.0"

View File

@ -1,6 +1,6 @@
# kubezero-ci
![Version: 0.8.6](https://img.shields.io/badge/Version-0.8.6-informational?style=flat-square) ![Type: application](https://img.shields.io/badge/Type-application-informational?style=flat-square)
![Version: 0.8.8](https://img.shields.io/badge/Version-0.8.8-informational?style=flat-square) ![Type: application](https://img.shields.io/badge/Type-application-informational?style=flat-square)
KubeZero umbrella chart for all things CI
@ -20,9 +20,9 @@ Kubernetes: `>= 1.25.0`
|------------|------|---------|
| https://aquasecurity.github.io/helm-charts/ | trivy | 0.7.0 |
| https://cdn.zero-downtime.net/charts/ | kubezero-lib | >= 0.1.6 |
| https://charts.jenkins.io | jenkins | 4.12.1 |
| https://dl.gitea.io/charts/ | gitea | 10.1.0 |
| https://docs.renovatebot.com/helm-charts | renovate | 37.153.2 |
| https://charts.jenkins.io | jenkins | 5.1.3 |
| https://dl.gitea.io/charts/ | gitea | 10.1.3 |
| https://docs.renovatebot.com/helm-charts | renovate | 37.267.1 |
# Jenkins
- default build retention 10 builds, 32 days
@ -66,7 +66,7 @@ Kubernetes: `>= 1.25.0`
| gitea.gitea.metrics.enabled | bool | `false` | |
| gitea.gitea.metrics.serviceMonitor.enabled | bool | `true` | |
| gitea.image.rootless | bool | `true` | |
| gitea.image.tag | string | `"1.21.4"` | |
| gitea.image.tag | string | `"1.21.9"` | |
| gitea.istio.enabled | bool | `false` | |
| gitea.istio.gateway | string | `"istio-ingress/private-ingressgateway"` | |
| gitea.istio.url | string | `"git.example.com"` | |
@ -90,7 +90,8 @@ Kubernetes: `>= 1.25.0`
| jenkins.agent.containerCap | int | `2` | |
| jenkins.agent.customJenkinsLabels[0] | string | `"podman-aws-trivy"` | |
| jenkins.agent.idleMinutes | int | `30` | |
| jenkins.agent.image | string | `"public.ecr.aws/zero-downtime/jenkins-podman"` | |
| jenkins.agent.image.repository | string | `"public.ecr.aws/zero-downtime/jenkins-podman"` | |
| jenkins.agent.image.tag | string | `"v0.5.0"` | |
| jenkins.agent.podName | string | `"podman-aws"` | |
| jenkins.agent.podRetention | string | `"Default"` | |
| jenkins.agent.resources.limits.cpu | string | `""` | |
@ -98,12 +99,12 @@ Kubernetes: `>= 1.25.0`
| jenkins.agent.resources.requests.cpu | string | `""` | |
| jenkins.agent.resources.requests.memory | string | `""` | |
| jenkins.agent.showRawYaml | bool | `false` | |
| jenkins.agent.tag | string | `"v0.4.6"` | |
| jenkins.agent.yamlMergeStrategy | string | `"merge"` | |
| jenkins.agent.yamlTemplate | string | `"apiVersion: v1\nkind: Pod\nspec:\n securityContext:\n fsGroup: 1000\n serviceAccountName: jenkins-podman-aws\n containers:\n - name: jnlp\n resources:\n requests:\n cpu: \"512m\"\n memory: \"1024Mi\"\n limits:\n cpu: \"4\"\n memory: \"6144Mi\"\n github.com/fuse: 1\n volumeMounts:\n - name: aws-token\n mountPath: \"/var/run/secrets/sts.amazonaws.com/serviceaccount/\"\n readOnly: true\n - name: host-registries-conf\n mountPath: \"/home/jenkins/.config/containers/registries.conf\"\n readOnly: true\n volumes:\n - name: aws-token\n projected:\n sources:\n - serviceAccountToken:\n path: token\n expirationSeconds: 86400\n audience: \"sts.amazonaws.com\"\n - name: host-registries-conf\n hostPath:\n path: /etc/containers/registries.conf\n type: File"` | |
| jenkins.controller.JCasC.configScripts.zdt-settings | string | `"jenkins:\n noUsageStatistics: true\n disabledAdministrativeMonitors:\n - \"jenkins.security.ResourceDomainRecommendation\"\nappearance:\n themeManager:\n disableUserThemes: true\n theme: \"dark\"\nunclassified:\n buildDiscarders:\n configuredBuildDiscarders:\n - \"jobBuildDiscarder\"\n - defaultBuildDiscarder:\n discarder:\n logRotator:\n artifactDaysToKeepStr: \"32\"\n artifactNumToKeepStr: \"10\"\n daysToKeepStr: \"100\"\n numToKeepStr: \"10\"\n"` | |
| jenkins.controller.disableRememberMe | bool | `true` | |
| jenkins.controller.enableRawHtmlMarkupFormatter | bool | `true` | |
| jenkins.controller.image.tag | string | `"alpine-jdk17"` | |
| jenkins.controller.initContainerResources.limits.memory | string | `"1024Mi"` | |
| jenkins.controller.initContainerResources.requests.cpu | string | `"50m"` | |
| jenkins.controller.initContainerResources.requests.memory | string | `"256Mi"` | |
@ -128,7 +129,6 @@ Kubernetes: `>= 1.25.0`
| jenkins.controller.resources.limits.memory | string | `"4096Mi"` | |
| jenkins.controller.resources.requests.cpu | string | `"250m"` | |
| jenkins.controller.resources.requests.memory | string | `"1280Mi"` | |
| jenkins.controller.tag | string | `"alpine-jdk17"` | |
| jenkins.controller.testEnabled | bool | `false` | |
| jenkins.enabled | bool | `false` | |
| jenkins.istio.agent.enabled | bool | `false` | |
@ -152,7 +152,7 @@ Kubernetes: `>= 1.25.0`
| renovate.env.LOG_FORMAT | string | `"json"` | |
| renovate.securityContext.fsGroup | int | `1000` | |
| trivy.enabled | bool | `false` | |
| trivy.image.tag | string | `"0.47.0"` | |
| trivy.image.tag | string | `"0.49.1"` | |
| trivy.persistence.enabled | bool | `true` | |
| trivy.persistence.size | string | `"1Gi"` | |
| trivy.rbac.create | bool | `false` | |

View File

@ -2,7 +2,7 @@ gitea:
enabled: false
image:
#tag: 1.21.4
tag: 1.21.9
rootless: true
replicaCount: 1
@ -42,7 +42,7 @@ gitea:
extraVolumeMounts:
- name: gitea-themes
readOnly: true
mountPath: "/data/gitea/public/assets/css"
mountPath: "/data/gitea/public/assets/css"
checkDeprecation: false
test:
@ -72,6 +72,8 @@ gitea:
ui:
THEMES: "gitea,github-dark"
DEFAULT_THEME: "github-dark"
log:
LEVEL: warn
redis-cluster:
enabled: false
@ -90,8 +92,9 @@ jenkins:
enabled: false
controller:
tag: alpine-jdk17
#tagLabel: alpine
image:
tag: alpine-jdk17
#tagLabel: alpine
disableRememberMe: true
prometheus:
enabled: false
@ -161,8 +164,9 @@ jenkins:
# Preconfigure agents to use the zdt podman image; requires fuse/overlayfs
agent:
image: public.ecr.aws/zero-downtime/jenkins-podman
tag: v0.4.6
image:
repository: public.ecr.aws/zero-downtime/jenkins-podman
tag: v0.5.0
#alwaysPullImage: true
podRetention: "Default"
showRawYaml: false
@ -251,7 +255,7 @@ jenkins:
trivy:
enabled: false
image:
tag: 0.47.0
tag: 0.49.1
persistence:
enabled: true
size: 1Gi

View File

@ -2,7 +2,7 @@ apiVersion: v2
name: kubezero-falco
description: Falco Container Security and Audit components
type: application
version: 0.1.0
version: 0.1.2
home: https://kubezero.com
icon: https://cdn.zero-downtime.net/assets/kubezero/logo-small-64.png
keywords:
@ -16,7 +16,7 @@ dependencies:
version: ">= 0.1.6"
repository: https://cdn.zero-downtime.net/charts/
- name: falco
version: 3.8.4
version: 4.2.5
repository: https://falcosecurity.github.io/charts
condition: k8saudit.enabled
alias: k8saudit

View File

@ -0,0 +1,64 @@
# kubezero-falco
![Version: 0.1.2](https://img.shields.io/badge/Version-0.1.2-informational?style=flat-square) ![Type: application](https://img.shields.io/badge/Type-application-informational?style=flat-square)
Falco Container Security and Audit components
**Homepage:** <https://kubezero.com>
## Maintainers
| Name | Email | Url |
| ---- | ------ | --- |
| Stefan Reimer | <stefan@zero-downtime.net> | |
## Requirements
Kubernetes: `>= 1.26.0`
| Repository | Name | Version |
|------------|------|---------|
| https://cdn.zero-downtime.net/charts/ | kubezero-lib | >= 0.1.6 |
| https://falcosecurity.github.io/charts | k8saudit(falco) | 4.2.5 |
## Values
| Key | Type | Default | Description |
|-----|------|---------|-------------|
| k8saudit.collectors | object | `{"enabled":false}` | Disable the collectors, no syscall events to enrich with metadata. |
| k8saudit.controller | object | `{"deployment":{"replicas":1},"kind":"deployment"}` | Deploy Falco as a deployment. One instance of Falco is enough; the number of replicas remains configurable. |
| k8saudit.controller.deployment.replicas | int | `1` | Number of replicas when installing Falco using a deployment. Change it if you really know what you are doing. For more info check the section on Plugins in the README.md file. |
| k8saudit.driver | object | `{"enabled":false}` | Disable the drivers since we want to deploy only the k8saudit plugin. |
| k8saudit.enabled | bool | `false` | |
| k8saudit.falco.buffered_outputs | bool | `true` | |
| k8saudit.falco.json_output | bool | `true` | |
| k8saudit.falco.load_plugins[0] | string | `"k8saudit"` | |
| k8saudit.falco.load_plugins[1] | string | `"json"` | |
| k8saudit.falco.log_syslog | bool | `false` | |
| k8saudit.falco.plugins[0].init_config.maxEventSize | int | `1048576` | |
| k8saudit.falco.plugins[0].library_path | string | `"libk8saudit.so"` | |
| k8saudit.falco.plugins[0].name | string | `"k8saudit"` | |
| k8saudit.falco.plugins[0].open_params | string | `"http://:9765/k8s-audit"` | |
| k8saudit.falco.plugins[1].init_config | string | `""` | |
| k8saudit.falco.plugins[1].library_path | string | `"libjson.so"` | |
| k8saudit.falco.plugins[1].name | string | `"json"` | |
| k8saudit.falco.rules_file[0] | string | `"/etc/falco/rules.d"` | |
| k8saudit.falco.syslog_output.enabled | bool | `false` | |
| k8saudit.falcoctl.artifact.follow.enabled | bool | `false` | |
| k8saudit.falcoctl.artifact.install.enabled | bool | `false` | |
| k8saudit.fullnameOverride | string | `"falco-k8saudit"` | |
| k8saudit.mounts.volumeMounts[0].mountPath | string | `"/etc/falco/rules.d"` | |
| k8saudit.mounts.volumeMounts[0].name | string | `"rules-volume"` | |
| k8saudit.mounts.volumes[0].configMap.name | string | `"falco-k8saudit-rules"` | |
| k8saudit.mounts.volumes[0].name | string | `"rules-volume"` | |
| k8saudit.nodeSelector."node-role.kubernetes.io/control-plane" | string | `""` | |
| k8saudit.resources.limits.cpu | string | `"1000m"` | |
| k8saudit.resources.limits.memory | string | `"512Mi"` | |
| k8saudit.resources.requests.cpu | string | `"100m"` | |
| k8saudit.resources.requests.memory | string | `"256Mi"` | |
| k8saudit.services[0].name | string | `"webhook"` | |
| k8saudit.services[0].ports[0].port | int | `9765` | |
| k8saudit.services[0].ports[0].protocol | string | `"TCP"` | |
----------------------------------------------
Autogenerated from chart metadata using [helm-docs v1.11.0](https://github.com/norwoodj/helm-docs/releases/v1.11.0)

View File

@ -20,10 +20,12 @@
- required_plugin_versions:
- name: k8saudit
version: 0.6.0
version: 0.7.0
alternatives:
- name: k8saudit-eks
version: 0.2.0
version: 0.4.0
- name: k8saudit-gke
version: 0.1.0
- name: json
version: 0.7.0
@ -79,7 +81,45 @@
"eks:vpc-resource-controller",
"eks:addon-manager",
]
-
- list: k8s_audit_sensitive_mount_images
items: [
falcosecurity/falco, docker.io/falcosecurity/falco, public.ecr.aws/falcosecurity/falco,
docker.io/sysdig/sysdig, sysdig/sysdig,
gcr.io/google_containers/hyperkube,
gcr.io/google_containers/kube-proxy, docker.io/calico/node,
docker.io/rook/toolbox, docker.io/cloudnativelabs/kube-router, docker.io/consul,
docker.io/datadog/docker-dd-agent, docker.io/datadog/agent, docker.io/docker/ucp-agent, docker.io/gliderlabs/logspout,
docker.io/netdata/netdata, docker.io/google/cadvisor, docker.io/prom/node-exporter,
amazon/amazon-ecs-agent, prom/node-exporter, amazon/cloudwatch-agent
]
- list: k8s_audit_privileged_images
items: [
falcosecurity/falco, docker.io/falcosecurity/falco, public.ecr.aws/falcosecurity/falco,
docker.io/calico/node, calico/node,
docker.io/cloudnativelabs/kube-router,
docker.io/docker/ucp-agent,
docker.io/mesosphere/mesos-slave,
docker.io/rook/toolbox,
docker.io/sysdig/sysdig,
gcr.io/google_containers/kube-proxy,
gcr.io/google-containers/startup-script,
gcr.io/projectcalico-org/node,
gke.gcr.io/kube-proxy,
gke.gcr.io/gke-metadata-server,
gke.gcr.io/netd-amd64,
gke.gcr.io/watcher-daemonset,
gcr.io/google-containers/prometheus-to-sd,
registry.k8s.io/ip-masq-agent-amd64,
registry.k8s.io/kube-proxy,
registry.k8s.io/prometheus-to-sd,
quay.io/calico/node,
sysdig/sysdig,
registry.k8s.io/dns/k8s-dns-node-cache,
mcr.microsoft.com/oss/kubernetes/kube-proxy
]
- rule: Disallowed K8s User
desc: Detect any k8s operation by users outside of an allowed set of users.
condition: kevt and non_system_user and not ka.user.name in (allowed_k8s_users) and not ka.user.name in (eks_allowed_k8s_users)
@ -166,7 +206,7 @@
- rule: Create Privileged Pod
desc: >
Detect an attempt to start a pod with a privileged container
condition: kevt and pod and kcreate and ka.req.pod.containers.privileged intersects (true) and not ka.req.pod.containers.image.repository in (falco_privileged_images)
condition: kevt and pod and kcreate and ka.req.pod.containers.privileged intersects (true) and not ka.req.pod.containers.image.repository in (k8s_audit_privileged_images)
output: Pod started with privileged container (user=%ka.user.name pod=%ka.resp.name resource=%ka.target.resource ns=%ka.target.namespace images=%ka.req.pod.containers.image)
priority: WARNING
source: k8s_audit
@ -180,7 +220,7 @@
desc: >
Detect an attempt to start a pod with a volume from a sensitive host directory (i.e. /proc).
Exceptions are made for known trusted images.
condition: kevt and pod and kcreate and sensitive_vol_mount and not ka.req.pod.containers.image.repository in (falco_sensitive_mount_images)
condition: kevt and pod and kcreate and sensitive_vol_mount and not ka.req.pod.containers.image.repository in (k8s_audit_sensitive_mount_images)
output: Pod started with sensitive mount (user=%ka.user.name pod=%ka.resp.name ns=%ka.target.namespace resource=%ka.target.resource images=%ka.req.pod.containers.image volumes=%jevt.value[/requestObject/spec/volumes])
priority: WARNING
source: k8s_audit
@ -188,7 +228,7 @@
# These container images are allowed to run with hostnetwork=true
# TODO: Remove k8s.gcr.io reference after 01/Dec/2023
- list: falco_hostnetwork_images
- list: k8s_audit_hostnetwork_images
items: [
gcr.io/google-containers/prometheus-to-sd,
gcr.io/projectcalico-org/typha,
@ -196,8 +236,6 @@
gke.gcr.io/gke-metadata-server,
gke.gcr.io/kube-proxy,
gke.gcr.io/netd-amd64,
k8s.gcr.io/ip-masq-agent-amd64,
k8s.gcr.io/prometheus-to-sd,
registry.k8s.io/ip-masq-agent-amd64,
registry.k8s.io/prometheus-to-sd
]
@ -205,29 +243,29 @@
# Corresponds to K8s CIS Benchmark 1.7.4
- rule: Create HostNetwork Pod
desc: Detect an attempt to start a pod using the host network.
condition: kevt and pod and kcreate and ka.req.pod.host_network intersects (true) and not ka.req.pod.containers.image.repository in (falco_hostnetwork_images)
condition: kevt and pod and kcreate and ka.req.pod.host_network intersects (true) and not ka.req.pod.containers.image.repository in (k8s_audit_hostnetwork_images)
output: Pod started using host network (user=%ka.user.name pod=%ka.resp.name resource=%ka.target.resource ns=%ka.target.namespace images=%ka.req.pod.containers.image)
priority: WARNING
source: k8s_audit
tags: [k8s]
- list: falco_hostpid_images
- list: k8s_audit_hostpid_images
items: []
- rule: Create HostPid Pod
desc: Detect an attempt to start a pod using the host pid namespace.
condition: kevt and pod and kcreate and ka.req.pod.host_pid intersects (true) and not ka.req.pod.containers.image.repository in (falco_hostpid_images)
condition: kevt and pod and kcreate and ka.req.pod.host_pid intersects (true) and not ka.req.pod.containers.image.repository in (k8s_audit_hostpid_images)
output: Pod started using host pid namespace (user=%ka.user.name pod=%ka.resp.name resource=%ka.target.resource ns=%ka.target.namespace images=%ka.req.pod.containers.image)
priority: WARNING
source: k8s_audit
tags: [k8s]
- list: falco_hostipc_images
- list: k8s_audit_hostipc_images
items: []
- rule: Create HostIPC Pod
desc: Detect an attempt to start a pod using the host ipc namespace.
condition: kevt and pod and kcreate and ka.req.pod.host_ipc intersects (true) and not ka.req.pod.containers.image.repository in (falco_hostipc_images)
condition: kevt and pod and kcreate and ka.req.pod.host_ipc intersects (true) and not ka.req.pod.containers.image.repository in (k8s_audit_hostipc_images)
output: Pod started using host ipc namespace (user=%ka.user.name pod=%ka.resp.name resource=%ka.target.resource ns=%ka.target.namespace images=%ka.req.pod.containers.image)
priority: WARNING
source: k8s_audit
@ -298,6 +336,18 @@
source: k8s_audit
tags: [k8s]
- macro: user_known_portforward_activities
condition: (k8s_audit_never_true)
- rule: port-forward
desc: >
Detect any attempt to portforward
condition: ka.target.subresource in (portforward) and not user_known_portforward_activities
output: Portforward to pod (user=%ka.user.name pod=%ka.target.name ns=%ka.target.namespace action=%ka.target.subresource)
priority: NOTICE
source: k8s_audit
tags: [k8s]
- macro: user_known_pod_debug_activities
condition: (k8s_audit_never_true)
@ -344,19 +394,11 @@
gke.gcr.io/addon-resizer,
gke.gcr.io/heapster,
gke.gcr.io/gke-metadata-server,
k8s.gcr.io/ip-masq-agent-amd64,
k8s.gcr.io/kube-apiserver,
registry.k8s.io/ip-masq-agent-amd64,
registry.k8s.io/kube-apiserver,
gke.gcr.io/kube-proxy,
gke.gcr.io/netd-amd64,
gke.gcr.io/watcher-daemonset,
k8s.gcr.io/addon-resizer,
k8s.gcr.io/prometheus-to-sd,
k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64,
k8s.gcr.io/k8s-dns-kube-dns-amd64,
k8s.gcr.io/k8s-dns-sidecar-amd64,
k8s.gcr.io/metrics-server-amd64,
registry.k8s.io/addon-resizer,
registry.k8s.io/prometheus-to-sd,
registry.k8s.io/k8s-dns-dnsmasq-nanny-amd64,

View File

@ -15,9 +15,9 @@ k8saudit:
resources:
requests:
cpu: 100m
memory: 256Mi
memory: 64Mi
limits:
cpu: 1000m
cpu: 1
memory: 512Mi
nodeSelector:
@ -43,10 +43,16 @@ k8saudit:
falcoctl:
artifact:
install:
enabled: false
follow:
enabled: false
# Since 0.37 the plugins are no longer part of the image,
# but we provide our rules statically via our ConfigMap
config:
artifact:
allowedTypes:
- plugin
install:
refs: [k8saudit:0.7.0,json:0.7.2]
services:
- name: webhook

View File

@ -2,7 +2,7 @@ apiVersion: v2
name: kubezero-istio-gateway
description: KubeZero Umbrella Chart for Istio gateways
type: application
version: 0.19.4
version: 0.21.0
home: https://kubezero.com
icon: https://cdn.zero-downtime.net/assets/kubezero/logo-small-64.png
keywords:
@ -17,6 +17,6 @@ dependencies:
version: ">= 0.1.6"
repository: https://cdn.zero-downtime.net/charts/
- name: gateway
version: 1.19.4
version: 1.21.0
repository: https://istio-release.storage.googleapis.com/charts
kubeVersion: ">= 1.26.0"

View File

@ -1,6 +1,6 @@
# kubezero-istio-gateway
![Version: 0.19.4](https://img.shields.io/badge/Version-0.19.4-informational?style=flat-square) ![Type: application](https://img.shields.io/badge/Type-application-informational?style=flat-square)
![Version: 0.21.0](https://img.shields.io/badge/Version-0.21.0-informational?style=flat-square) ![Type: application](https://img.shields.io/badge/Type-application-informational?style=flat-square)
KubeZero Umbrella Chart for Istio gateways
@ -21,7 +21,7 @@ Kubernetes: `>= 1.26.0`
| Repository | Name | Version |
|------------|------|---------|
| https://cdn.zero-downtime.net/charts/ | kubezero-lib | >= 0.1.6 |
| https://istio-release.storage.googleapis.com/charts | gateway | 1.19.4 |
| https://istio-release.storage.googleapis.com/charts | gateway | 1.21.0 |
## Values
@ -41,6 +41,8 @@ Kubernetes: `>= 1.26.0`
| gateway.service.externalTrafficPolicy | string | `"Local"` | |
| gateway.service.type | string | `"NodePort"` | |
| gateway.terminationGracePeriodSeconds | int | `120` | |
| hardening.rejectUnderscoresHeaders | bool | `true` | |
| hardening.unescapeSlashes | bool | `true` | |
| proxyProtocol | bool | `true` | |
| telemetry.enabled | bool | `false` | |

View File

@ -1,5 +1,5 @@
apiVersion: v2
appVersion: 1.19.4
appVersion: 1.21.0
description: Helm chart for deploying Istio gateways
icon: https://istio.io/latest/favicons/android-192x192.png
keywords:
@ -9,4 +9,4 @@ name: gateway
sources:
- https://github.com/istio/istio
type: application
version: 1.19.4
version: 1.21.0

View File

@ -35,6 +35,28 @@ To view supported configuration options and documentation, run:
helm show values istio/gateway
```
### Profiles
Istio Helm charts have a concept of a `profile`, which is a bundled collection of value presets.
These can be set with `--set profile=<profile>`.
For example, the `demo` profile offers a preset configuration to try out Istio in a test environment, with additional features enabled and lowered resource requirements.
For consistency, the same profiles are used across each chart, even if they do not impact a given chart.
Explicitly set values have highest priority, then profile settings, then chart defaults.
As an implementation detail of profiles, the default values for the chart are all nested under `defaults`.
When configuring the chart, you should not include this.
That is, `--set some.field=true` should be passed, not `--set defaults.some.field=true`.
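As a minimal sketch of this precedence (the override file name is arbitrary), selecting the `demo` profile while explicitly pinning one of its presets could look like:

```yaml
# my-gateway-values.yaml -- hypothetical override file
profile: demo            # profile presets apply first...
resources:
  requests:
    cpu: 100m            # ...but an explicit value like this wins over the preset
```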
### OpenShift
When deploying the gateway in an OpenShift cluster, use the `openshift` profile to override the default values, for example:
```console
helm install istio-ingressgateway istio/gateway --set profile=openshift
```
### `image: auto` Information
The image used by the chart, `auto`, may be unintuitive.

View File

@ -0,0 +1,25 @@
# The ambient profile enables ambient mode. The Istiod, CNI, and ztunnel charts must be deployed
meshConfig:
defaultConfig:
proxyMetadata:
ISTIO_META_ENABLE_HBONE: "true"
variant: distroless
pilot:
variant: distroless
env:
# Set up more secure defaults that are off in 'default' only for backwards compatibility
VERIFY_CERTIFICATE_AT_CLIENT: "true"
ENABLE_AUTO_SNI: "true"
PILOT_ENABLE_HBONE: "true"
CA_TRUSTED_NODE_ACCOUNTS: "istio-system/ztunnel,kube-system/ztunnel"
PILOT_ENABLE_AMBIENT_CONTROLLERS: "true"
cni:
logLevel: info
privileged: true
ambient:
enabled: true
# Default excludes istio-system; it's actually fine to redirect there since we opt out istiod, ztunnel, and istio-cni
excludeNamespaces:
- kube-system

View File

@ -0,0 +1,6 @@
pilot:
env:
ENABLE_EXTERNAL_NAME_ALIAS: "false"
PERSIST_OLDEST_FIRST_HEURISTIC_FOR_VIRTUAL_SERVICE_HOST_MATCHING: "true"
VERIFY_CERTIFICATE_AT_CLIENT: "false"
ENABLE_AUTO_SNI: "false"

View File

@ -0,0 +1,69 @@
# The demo profile enables a variety of things to try out Istio in non-production environments.
# * Lower resource utilization.
# * Some additional features are enabled by default; especially ones used in some tasks in istio.io.
# * More ports enabled on the ingress, which is used in some tasks.
meshConfig:
accessLogFile: /dev/stdout
extensionProviders:
- name: otel
envoyOtelAls:
service: opentelemetry-collector.istio-system.svc.cluster.local
port: 4317
- name: skywalking
skywalking:
service: tracing.istio-system.svc.cluster.local
port: 11800
- name: otel-tracing
opentelemetry:
port: 4317
service: opentelemetry-collector.otel-collector.svc.cluster.local
global:
proxy:
resources:
requests:
cpu: 10m
memory: 40Mi
pilot:
autoscaleEnabled: false
traceSampling: 100
resources:
requests:
cpu: 10m
memory: 100Mi
gateways:
istio-egressgateway:
autoscaleEnabled: false
resources:
requests:
cpu: 10m
memory: 40Mi
istio-ingressgateway:
autoscaleEnabled: false
ports:
## You can add custom gateway ports in user values overrides, but the list must include these ports, since Helm replaces the whole array.
# Note that AWS ELB will by default perform health checks on the first port
# on this list. Setting this to the health check port will ensure that health
# checks always work. https://github.com/istio/istio/issues/12503
- port: 15021
targetPort: 15021
name: status-port
- port: 80
targetPort: 8080
name: http2
- port: 443
targetPort: 8443
name: https
- port: 31400
targetPort: 31400
name: tcp
# This is the port where sni routing happens
- port: 15443
targetPort: 15443
name: tls
resources:
requests:
cpu: 10m
memory: 40Mi

View File

@ -0,0 +1,18 @@
# The OpenShift profile provides a basic set of settings to run Istio on OpenShift
# CNI must be installed.
cni:
cniBinDir: /var/lib/cni/bin
cniConfDir: /etc/cni/multus/net.d
chained: false
cniConfFileName: "istio-cni.conf"
excludeNamespaces:
- istio-system
- kube-system
logLevel: info
privileged: true
provider: "multus"
global:
platform: openshift
istio_cni:
enabled: true
chained: false

View File

@ -0,0 +1,9 @@
# The preview profile contains features that are experimental.
# This is intended to explore new features coming to Istio.
# Stability, security, and performance are not guaranteed - use at your own risk.
meshConfig:
defaultConfig:
proxyMetadata:
# Enable Istio agent to handle DNS requests for known hosts
# Unknown hosts will automatically be resolved using upstream dns servers in resolv.conf
ISTIO_META_DNS_CAPTURE: "true"

View File

@ -46,6 +46,10 @@ spec:
- name: net.ipv4.ip_unprivileged_port_start
value: "0"
{{- end }}
{{- with .Values.volumes }}
volumes:
{{ toYaml . | nindent 8 }}
{{- end }}
containers:
- name: istio-proxy
# "auto" will be populated at runtime by the mutating webhook. See https://istio.io/latest/docs/setup/additional-setup/sidecar-injection/#customizing-injection
@ -94,9 +98,9 @@ spec:
name: http-envoy-prom
resources:
{{- toYaml .Values.resources | nindent 12 }}
{{- if .Values.volumeMounts }}
{{- with .Values.volumeMounts }}
volumeMounts:
{{- toYaml .Values.volumeMounts | nindent 12 }}
{{ toYaml . | nindent 12 }}
{{- end }}
{{- with .Values.nodeSelector }}
nodeSelector:
@ -118,7 +122,3 @@ spec:
{{- with .Values.priorityClassName }}
priorityClassName: {{ . }}
{{- end }}
{{- with .Values.volumes }}
volumes:
{{- toYaml . | nindent 8 }}
{{- end }}

View File

@ -28,4 +28,15 @@ spec:
averageUtilization: {{ .Values.autoscaling.targetCPUUtilizationPercentage }}
type: Utilization
{{- end }}
{{- if .Values.autoscaling.targetMemoryUtilizationPercentage }}
- type: Resource
resource:
name: memory
target:
averageUtilization: {{ .Values.autoscaling.targetMemoryUtilizationPercentage }}
type: Utilization
{{- end }}
{{- if .Values.autoscaling.autoscaleBehavior }}
behavior: {{ toYaml .Values.autoscaling.autoscaleBehavior | nindent 4 }}
{{- end }}
{{- end }}
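A hedged values sketch for the HPA template above — the thresholds and the scale-down window are illustrative, not chart defaults:

```yaml
autoscaling:
  enabled: true
  minReplicas: 1
  maxReplicas: 5
  targetCPUUtilizationPercentage: 80
  targetMemoryUtilizationPercentage: 80   # adds the memory Resource metric shown above
  autoscaleBehavior:                      # rendered into the HPA 'behavior' field
    scaleDown:
      stabilizationWindowSeconds: 300
```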

View File

@ -15,12 +15,19 @@ spec:
{{- with .Values.service.loadBalancerIP }}
loadBalancerIP: "{{ . }}"
{{- end }}
{{- with .Values.service.ipFamilyPolicy }}
ipFamilyPolicy: "{{ . }}"
{{- if eq .Values.service.type "LoadBalancer" }}
{{- if hasKey .Values.service "allocateLoadBalancerNodePorts" }}
allocateLoadBalancerNodePorts: {{ .Values.service.allocateLoadBalancerNodePorts }}
{{- end }}
{{- end }}
{{- with .Values.service.ipFamilies }}
{{- if .Values.service.ipFamilyPolicy }}
ipFamilyPolicy: {{ .Values.service.ipFamilyPolicy }}
{{- end }}
{{- if .Values.service.ipFamilies }}
ipFamilies:
{{ toYaml . | indent 4 }}
{{- range .Values.service.ipFamilies }}
- {{ . }}
{{- end }}
{{- end }}
{{- with .Values.service.loadBalancerSourceRanges }}
loadBalancerSourceRanges:
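A hedged example of the service values this template consumes; the dual-stack settings are illustrative and assume the cluster supports both IP families:

```yaml
service:
  type: LoadBalancer
  allocateLoadBalancerNodePorts: false   # only rendered for LoadBalancer services
  ipFamilyPolicy: PreferDualStack
  ipFamilies:
    - IPv4
    - IPv6
```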

View File

@ -0,0 +1,34 @@
{{/*
Complex logic ahead...
We have three sets of values, in order of precedence (last wins):
1. The builtin values.yaml defaults
2. The profile the user selects
3. User input (-f or --set)
Unfortunately, Helm provides us (1) and (3) together (as .Values), making it hard to insert (2).
However, we can work around this by placing all of (1) under a specific key (.Values.defaults).
We can then merge the profile onto the defaults, then the user settings onto that.
Finally, we can set all of that under .Values so the chart behaves without awareness.
*/}}
{{- $defaults := $.Values.defaults }}
{{- $_ := unset $.Values "defaults" }}
{{- $profile := dict }}
{{- with .Values.profile }}
{{- with $.Files.Get (printf "files/profile-%s.yaml" .)}}
{{- $profile = (. | fromYaml) }}
{{- else }}
{{ fail (cat "unknown profile" $.Values.profile) }}
{{- end }}
{{- end }}
{{- with .Values.compatibilityVersion }}
{{- with $.Files.Get (printf "files/profile-compatibility-version-%s.yaml" .) }}
{{- $ignore := mustMergeOverwrite $profile (. | fromYaml) }}
{{- else }}
{{ fail (cat "unknown compatibility version" $.Values.compatibilityVersion) }}
{{- end }}
{{- end }}
{{- if $profile }}
{{- $a := mustMergeOverwrite $defaults $profile }}
{{- end }}
{{- $b := set $ "Values" (mustMergeOverwrite $defaults $.Values) }}

View File

@ -2,240 +2,300 @@
"$schema": "http://json-schema.org/schema#",
"type": "object",
"additionalProperties": false,
"properties": {
"global": {
"type": "object"
},
"affinity": {
"type": "object"
},
"securityContext": {
"type": ["object", "null"]
},
"containerSecurityContext": {
"type": ["object", "null"]
},
"kind":{
"type": "string",
"enum": ["Deployment", "DaemonSet"]
},
"annotations": {
"additionalProperties": {
"type": [
"string",
"integer"
]
},
"type": "object"
},
"autoscaling": {
"$defs": {
"values": {
"type": "object",
"properties": {
"enabled": {
"type": "boolean"
},
"maxReplicas": {
"type": "integer"
},
"minReplicas": {
"type": "integer"
},
"targetCPUUtilizationPercentage": {
"type": "integer"
}
}
},
"env": {
"type": "object"
},
"labels": {
"type": "object"
},
"volumes": {
"type": "array"
},
"volumeMounts": {
"type": "array"
},
"name": {
"type": "string"
},
"nodeSelector": {
"type": "object"
},
"podAnnotations": {
"type": "object",
"properties": {
"inject.istio.io/templates": {
"type": "string"
},
"prometheus.io/path": {
"type": "string"
},
"prometheus.io/port": {
"type": "string"
},
"prometheus.io/scrape": {
"type": "string"
}
}
},
"replicaCount": {
"type": [ "integer", "null" ]
},
"resources": {
"type": "object",
"properties": {
"limits": {
"type": "object",
"properties": {
"cpu": {
"type": "string"
},
"memory": {
"type": "string"
}
}
},
"requests": {
"type": "object",
"properties": {
"cpu": {
"type": "string"
},
"memory": {
"type": "string"
}
}
}
}
},
"revision": {
"type": "string"
},
"runAsRoot": {
"type": "boolean"
},
"unprivilegedPort": {
"type": ["string", "boolean"],
"enum": [true, false, "auto"]
},
"service": {
"type": "object",
"properties": {
"annotations": {
"global": {
"type": "object"
},
"externalTrafficPolicy": {
"affinity": {
"type": "object"
},
"securityContext": {
"type": [
"object",
"null"
]
},
"containerSecurityContext": {
"type": [
"object",
"null"
]
},
"kind": {
"type": "string",
"enum": [
"Deployment",
"DaemonSet"
]
},
"annotations": {
"additionalProperties": {
"type": [
"string",
"integer"
]
},
"type": "object"
},
"autoscaling": {
"type": "object",
"properties": {
"enabled": {
"type": "boolean"
},
"maxReplicas": {
"type": "integer"
},
"minReplicas": {
"type": "integer"
},
"targetCPUUtilizationPercentage": {
"type": "integer"
}
}
},
"env": {
"type": "object"
},
"labels": {
"type": "object"
},
"name": {
"type": "string"
},
"loadBalancerIP": {
"nodeSelector": {
"type": "object"
},
"podAnnotations": {
"type": "object",
"properties": {
"inject.istio.io/templates": {
"type": "string"
},
"prometheus.io/path": {
"type": "string"
},
"prometheus.io/port": {
"type": "string"
},
"prometheus.io/scrape": {
"type": "string"
}
}
},
"replicaCount": {
"type": [
"integer",
"null"
]
},
"resources": {
"type": "object",
"properties": {
"limits": {
"type": "object",
"properties": {
"cpu": {
"type": "string"
},
"memory": {
"type": "string"
}
}
},
"requests": {
"type": "object",
"properties": {
"cpu": {
"type": "string"
},
"memory": {
"type": "string"
}
}
}
}
},
"revision": {
"type": "string"
},
"loadBalancerSourceRanges": {
"compatibilityVersion": {
"type": "string"
},
"runAsRoot": {
"type": "boolean"
},
"unprivilegedPort": {
"type": [
"string",
"boolean"
],
"enum": [
true,
false,
"auto"
]
},
"service": {
"type": "object",
"properties": {
"annotations": {
"type": "object"
},
"externalTrafficPolicy": {
"type": "string"
},
"loadBalancerIP": {
"type": "string"
},
"loadBalancerSourceRanges": {
"type": "array"
},
"ipFamilies": {
"items": {
"type": "string",
"enum": [
"IPv4",
"IPv6"
]
}
},
"ipFamilyPolicy": {
"type": "string",
"enum": [
"",
"SingleStack",
"PreferDualStack",
"RequireDualStack"
]
},
"ports": {
"type": "array",
"items": {
"type": "object",
"properties": {
"name": {
"type": "string"
},
"port": {
"type": "integer"
},
"protocol": {
"type": "string"
},
"targetPort": {
"type": "integer"
}
}
}
},
"type": {
"type": "string"
}
}
},
"serviceAccount": {
"type": "object",
"properties": {
"annotations": {
"type": "object"
},
"name": {
"type": "string"
},
"create": {
"type": "boolean"
}
}
},
"rbac": {
"type": "object",
"properties": {
"enabled": {
"type": "boolean"
}
}
},
"tolerations": {
"type": "array"
},
"ipFamilies" : {
"items": {
"type": "string",
"enum": ["IPv4", "IPv6"]
}
"topologySpreadConstraints": {
"type": "array"
},
"ipFamilyPolicy" : {
"networkGateway": {
"type": "string"
},
"imagePullPolicy": {
"type": "string",
"enum": ["", "SingleStack", "PreferDualStack", "RequireDualStack"]
"enum": [
"",
"Always",
"IfNotPresent",
"Never"
]
},
"ports": {
"imagePullSecrets": {
"type": "array",
"items": {
"type": "object",
"properties": {
"name": {
"type": "string"
},
"port": {
"type": "integer"
},
"protocol": {
"type": "string"
},
"targetPort": {
"type": "integer"
}
}
}
},
"type": {
"type": "string"
}
}
},
"serviceAccount": {
"type": "object",
"properties": {
"annotations": {
"type": "object"
},
"name": {
"type": "string"
},
"create": {
"type": "boolean"
}
}
},
"rbac": {
"type": "object",
"properties": {
"enabled": {
"type": "boolean"
}
}
},
"tolerations": {
"type": "array"
},
"topologySpreadConstraints": {
"type": "array"
},
"networkGateway": {
"type": "string"
},
"imagePullPolicy": {
"type": "string",
"enum": ["", "Always", "IfNotPresent", "Never"]
},
"imagePullSecrets": {
"type": "array",
"items": {
"type": "object",
"properties": {
"name": {
"type": "string"
"podDisruptionBudget": {
"type": "object",
"properties": {
"minAvailable": {
"type": [
"integer",
"string"
]
},
"maxUnavailable": {
"type": [
"integer",
"string"
]
},
"unhealthyPodEvictionPolicy": {
"type": "string",
"enum": [
"",
"IfHealthyBudget",
"AlwaysAllow"
]
}
}
},
"terminationGracePeriodSeconds": {
"type": "number"
},
"volumes": {
"type": "array",
"items": {
"type": "object"
}
},
"volumeMounts": {
"type": "array",
"items": {
"type": "object"
}
},
"priorityClassName": {
"type": "string"
}
}
},
"podDisruptionBudget": {
"type": "object",
"properties": {
"minAvailable": {
"type": ["integer", "string"]
},
"maxUnavailable": {
"type": ["integer", "string"]
},
"unhealthyPodEvictionPolicy": {
"type": "string",
"enum": ["", "IfHealthyBudget", "AlwaysAllow"]
}
}
},
"terminationGracePeriodSeconds": {
"type": "number"
},
"priorityClassName": {
"type": "string"
}
}
},
"defaults": {
"$ref": "#/$defs/values"
},
"$ref": "#/$defs/values"
}

View File

@ -1,139 +1,152 @@
# Name allows overriding the release name. Generally this should not be set
name: ""
# revision declares which revision this gateway is a part of
revision: ""
# Controls the spec.replicas setting for the Gateway deployment if set.
# Otherwise defaults to Kubernetes Deployment default (1).
replicaCount:
kind: Deployment
rbac:
# If enabled, roles will be created to enable accessing certificates from Gateways. This is not needed
# when using http://gateway-api.org/.
enabled: true
serviceAccount:
# If set, a service account will be created. Otherwise, the default is used
create: true
# Annotations to add to the service account
annotations: {}
# The name of the service account to use.
# If not set, the release name is used
defaults:
# Name allows overriding the release name. Generally this should not be set
name: ""
# revision declares which revision this gateway is a part of
revision: ""
podAnnotations:
prometheus.io/port: "15020"
prometheus.io/scrape: "true"
prometheus.io/path: "/stats/prometheus"
inject.istio.io/templates: "gateway"
sidecar.istio.io/inject: "true"
# Controls the spec.replicas setting for the Gateway deployment if set.
# Otherwise defaults to Kubernetes Deployment default (1).
replicaCount:
# Define the security context for the pod.
# If unset, this will be automatically set to the minimum privileges required to bind to port 80 and 443.
# On Kubernetes 1.22+, this only requires the `net.ipv4.ip_unprivileged_port_start` sysctl.
securityContext: ~
containerSecurityContext: ~
kind: Deployment
service:
# Type of service. Set to "None" to disable the service entirely
type: LoadBalancer
ports:
- name: status-port
port: 15021
protocol: TCP
targetPort: 15021
- name: http2
port: 80
protocol: TCP
targetPort: 80
- name: https
port: 443
protocol: TCP
targetPort: 443
rbac:
# If enabled, roles will be created to enable accessing certificates from Gateways. This is not needed
# when using http://gateway-api.org/.
enabled: true
serviceAccount:
# If set, a service account will be created. Otherwise, the default is used
create: true
# Annotations to add to the service account
annotations: {}
# The name of the service account to use.
# If not set, the release name is used
name: ""
podAnnotations:
prometheus.io/port: "15020"
prometheus.io/scrape: "true"
prometheus.io/path: "/stats/prometheus"
inject.istio.io/templates: "gateway"
sidecar.istio.io/inject: "true"
# Define the security context for the pod.
# If unset, this will be automatically set to the minimum privileges required to bind to port 80 and 443.
# On Kubernetes 1.22+, this only requires the `net.ipv4.ip_unprivileged_port_start` sysctl.
securityContext: ~
containerSecurityContext: ~
service:
# Type of service. Set to "None" to disable the service entirely
type: LoadBalancer
ports:
- name: status-port
port: 15021
protocol: TCP
targetPort: 15021
- name: http2
port: 80
protocol: TCP
targetPort: 80
- name: https
port: 443
protocol: TCP
targetPort: 443
annotations: {}
loadBalancerIP: ""
loadBalancerSourceRanges: []
externalTrafficPolicy: ""
externalIPs: []
ipFamilyPolicy: ""
ipFamilies: []
## Whether to automatically allocate NodePorts (only for LoadBalancers).
# allocateLoadBalancerNodePorts: false
resources:
requests:
cpu: 100m
memory: 128Mi
limits:
cpu: 2000m
memory: 1024Mi
autoscaling:
enabled: true
minReplicas: 1
maxReplicas: 5
targetCPUUtilizationPercentage: 80
targetMemoryUtilizationPercentage: {}
autoscaleBehavior: {}
# Pod environment variables
env: {}
# Labels to apply to all resources
labels: {}
# Annotations to apply to all resources
annotations: {}
loadBalancerIP: ""
loadBalancerSourceRanges: []
externalTrafficPolicy: ""
externalIPs: []
ipFamilyPolicy: ""
ipFamilies: []
resources:
requests:
cpu: 100m
memory: 128Mi
limits:
cpu: 2000m
memory: 1024Mi
nodeSelector: {}
autoscaling:
enabled: true
minReplicas: 1
maxReplicas: 5
targetCPUUtilizationPercentage: 80
tolerations: []
# Pod environment variables
env: {}
topologySpreadConstraints: []
# Labels to apply to all resources
labels: {}
affinity: {}
# Annotations to apply to all resources
annotations: {}
# If specified, the gateway will act as a network gateway for the given network.
networkGateway: ""
nodeSelector: {}
# Specify image pull policy if default behavior isn't desired.
# Default behavior: latest images will be Always else IfNotPresent
imagePullPolicy: ""
tolerations: []
imagePullSecrets: []
topologySpreadConstraints: []
# This value is used to configure a Kubernetes PodDisruptionBudget for the gateway.
#
# By default, the `podDisruptionBudget` is disabled (set to `{}`),
# which means that no PodDisruptionBudget resource will be created.
#
# To enable the PodDisruptionBudget, configure it by specifying the
# `minAvailable` or `maxUnavailable`. For example, to set the
# minimum number of available replicas to 1, you can update this value as follows:
#
# podDisruptionBudget:
# minAvailable: 1
#
# Or, to allow a maximum of 1 unavailable replica, you can set:
#
# podDisruptionBudget:
# maxUnavailable: 1
#
# You can also specify the `unhealthyPodEvictionPolicy` field, and the valid values are `IfHealthyBudget` and `AlwaysAllow`.
# For example, to set the `unhealthyPodEvictionPolicy` to `AlwaysAllow`, you can update this value as follows:
#
# podDisruptionBudget:
# minAvailable: 1
# unhealthyPodEvictionPolicy: AlwaysAllow
#
# To disable the PodDisruptionBudget, you can leave it as an empty object `{}`:
#
# podDisruptionBudget: {}
#
podDisruptionBudget: {}
affinity: {}
terminationGracePeriodSeconds: 30
# If specified, the gateway will act as a network gateway for the given network.
networkGateway: ""
# A list of `Volumes` added into the Gateway Pods. See
# https://kubernetes.io/docs/concepts/storage/volumes/.
volumes: []
# Specify image pull policy if default behavior isn't desired.
# Default behavior: latest images will be Always else IfNotPresent
imagePullPolicy: ""
# A list of `VolumeMounts` added into the Gateway Pods. See
# https://kubernetes.io/docs/concepts/storage/volumes/.
volumeMounts: []
imagePullSecrets: []
# This value is used to configure a Kubernetes PodDisruptionBudget for the gateway.
#
# By default, the `podDisruptionBudget` is disabled (set to `{}`),
# which means that no PodDisruptionBudget resource will be created.
#
# To enable the PodDisruptionBudget, configure it by specifying the
# `minAvailable` or `maxUnavailable`. For example, to set the
# minimum number of available replicas to 1, you can update this value as follows:
#
# podDisruptionBudget:
# minAvailable: 1
#
# Or, to allow a maximum of 1 unavailable replica, you can set:
#
# podDisruptionBudget:
# maxUnavailable: 1
#
# You can also specify the `unhealthyPodEvictionPolicy` field, and the valid values are `IfHealthyBudget` and `AlwaysAllow`.
# For example, to set the `unhealthyPodEvictionPolicy` to `AlwaysAllow`, you can update this value as follows:
#
# podDisruptionBudget:
# minAvailable: 1
# unhealthyPodEvictionPolicy: AlwaysAllow
#
# To disable the PodDisruptionBudget, you can leave it as an empty object `{}`:
#
# podDisruptionBudget: {}
#
podDisruptionBudget: {}
terminationGracePeriodSeconds: 30
# Configure this to a higher priority class in order to make sure your Istio gateway pods
# will not be killed because of low priority class.
# Refer to https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/#priorityclass
# for more detail.
priorityClassName: ""
# Configure this to a higher priority class in order to make sure your Istio gateway pods
# will not be killed because of low priority class.
# Refer to https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/#priorityclass
# for more detail.
priorityClassName: ""

View File

@ -11,25 +11,6 @@ diff -tubr charts/gateway.orig/templates/deployment.yaml charts/gateway/template
selector:
matchLabels:
{{- include "gateway.selectorLabels" . | nindent 6 }}
@@ -86,6 +90,10 @@
name: http-envoy-prom
resources:
{{- toYaml .Values.resources | nindent 12 }}
+ {{- if .Values.volumeMounts }}
+ volumeMounts:
+ {{- toYaml .Values.volumeMounts | nindent 12 }}
+ {{- end }}
{{- with .Values.nodeSelector }}
nodeSelector:
{{- toYaml . | nindent 8 }}
@@ -102,3 +110,7 @@
topologySpreadConstraints:
{{- toYaml . | nindent 8 }}
{{- end }}
+ {{- with .Values.volumes }}
+ volumes:
+ {{- toYaml . | nindent 8 }}
+ {{- end }}
diff -tubr charts/gateway.orig/templates/service.yaml charts/gateway/templates/service.yaml
--- charts/gateway.orig/templates/service.yaml 2022-12-09 14:58:33.000000000 +0000
+++ charts/gateway/templates/service.yaml 2022-12-12 22:52:27.629670669 +0000
@ -49,19 +30,3 @@ diff -tubr charts/gateway.orig/templates/service.yaml charts/gateway/templates/s
{{- end }}
{{- if .Values.service.externalIPs }}
externalIPs: {{- range .Values.service.externalIPs }}
diff -tubr charts/gateway.orig/values.schema.json charts/gateway/values.schema.json
--- charts/gateway.orig/values.schema.json 2022-12-09 14:58:33.000000000 +0000
+++ charts/gateway/values.schema.json 2022-12-12 22:52:27.629670669 +0000
@@ -51,6 +51,12 @@
"labels": {
"type": "object"
},
+ "volumes": {
+ "type": "array"
+ },
+ "volumeMounts": {
+ "type": "array"
+ },
"name": {
"type": "string"
},

View File

@ -2,7 +2,7 @@ apiVersion: v2
name: kubezero-istio
description: KubeZero Umbrella Chart for Istio
type: application
version: 0.19.4
version: 0.21.0
home: https://kubezero.com
icon: https://cdn.zero-downtime.net/assets/kubezero/logo-small-64.png
keywords:
@ -16,13 +16,13 @@ dependencies:
version: ">= 0.1.6"
repository: https://cdn.zero-downtime.net/charts/
- name: base
version: 1.19.4
version: 1.21.0
repository: https://istio-release.storage.googleapis.com/charts
- name: istiod
version: 1.19.4
version: 1.21.0
repository: https://istio-release.storage.googleapis.com/charts
- name: kiali-server
version: "1.76.0"
version: "1.82.0"
repository: https://kiali.org/helm-charts
condition: kiali-server.enabled
kubeVersion: ">= 1.26.0"

View File

@ -1,6 +1,6 @@
# kubezero-istio
![Version: 0.19.4](https://img.shields.io/badge/Version-0.19.4-informational?style=flat-square) ![Type: application](https://img.shields.io/badge/Type-application-informational?style=flat-square)
![Version: 0.21.0](https://img.shields.io/badge/Version-0.21.0-informational?style=flat-square) ![Type: application](https://img.shields.io/badge/Type-application-informational?style=flat-square)
KubeZero Umbrella Chart for Istio
@ -21,9 +21,9 @@ Kubernetes: `>= 1.26.0`
| Repository | Name | Version |
|------------|------|---------|
| https://cdn.zero-downtime.net/charts/ | kubezero-lib | >= 0.1.6 |
| https://istio-release.storage.googleapis.com/charts | base | 1.19.4 |
| https://istio-release.storage.googleapis.com/charts | istiod | 1.19.4 |
| https://kiali.org/helm-charts | kiali-server | 1.76.0 |
| https://istio-release.storage.googleapis.com/charts | base | 1.21.0 |
| https://istio-release.storage.googleapis.com/charts | istiod | 1.21.0 |
| https://kiali.org/helm-charts | kiali-server | 1.82.0 |
## Values

View File

@ -5,18 +5,18 @@ folder: Istio
condition: '.Values.istiod.telemetry.enabled'
dashboards:
- name: istio-control-plane
url: https://grafana.com/api/dashboards/7645/revisions/187/download
url: https://grafana.com/api/dashboards/7645/revisions/201/download
tags:
- Istio
- name: istio-mesh
url: https://grafana.com/api/dashboards/7639/revisions/187/download
url: https://grafana.com/api/dashboards/7639/revisions/201/download
tags:
- Istio
- name: istio-service
url: https://grafana.com/api/dashboards/7636/revisions/187/download
url: https://grafana.com/api/dashboards/7636/revisions/201/download
tags:
- Istio
- name: istio-workload
url: https://grafana.com/api/dashboards/7630/revisions/187/download
url: https://grafana.com/api/dashboards/7630/revisions/201/download
tags:
- Istio

File diff suppressed because one or more lines are too long


@@ -2,7 +2,7 @@ apiVersion: v2
name: kubezero-logging
description: KubeZero Umbrella Chart for complete EFK stack
type: application
version: 0.8.10
version: 0.8.11
appVersion: 1.6.0
home: https://kubezero.com
icon: https://cdn.zero-downtime.net/assets/kubezero/logo-small-64.png
@@ -20,11 +20,11 @@ dependencies:
version: ">= 0.1.6"
repository: https://cdn.zero-downtime.net/charts/
- name: fluentd
version: 0.5.0
version: 0.5.2
repository: https://fluent.github.io/helm-charts
condition: fluentd.enabled
- name: fluent-bit
version: 0.40.0
version: 0.46.0
repository: https://fluent.github.io/helm-charts
condition: fluent-bit.enabled
kubeVersion: ">= 1.26.0"


@@ -2,7 +2,7 @@ apiVersion: v2
name: kubezero-metrics
description: KubeZero Umbrella Chart for Prometheus, Grafana and Alertmanager as well as all Kubernetes integrations.
type: application
version: 0.9.5
version: 0.9.6
home: https://kubezero.com
icon: https://cdn.zero-downtime.net/assets/kubezero/logo-small-64.png
keywords:
@@ -19,14 +19,14 @@ dependencies:
version: ">= 0.1.6"
repository: https://cdn.zero-downtime.net/charts/
- name: kube-prometheus-stack
version: 54.2.2
version: 57.2.0
repository: https://prometheus-community.github.io/helm-charts
- name: prometheus-adapter
version: 4.9.0
version: 4.9.1
repository: https://prometheus-community.github.io/helm-charts
condition: prometheus-adapter.enabled
- name: prometheus-pushgateway
version: 2.4.2
version: 2.8.0
repository: https://prometheus-community.github.io/helm-charts
condition: prometheus-pushgateway.enabled
kubeVersion: ">= 1.26.0"


@@ -1,6 +1,6 @@
# kubezero-metrics
![Version: 0.9.5](https://img.shields.io/badge/Version-0.9.5-informational?style=flat-square) ![Type: application](https://img.shields.io/badge/Type-application-informational?style=flat-square)
![Version: 0.9.6](https://img.shields.io/badge/Version-0.9.6-informational?style=flat-square) ![Type: application](https://img.shields.io/badge/Type-application-informational?style=flat-square)
KubeZero Umbrella Chart for Prometheus, Grafana and Alertmanager as well as all Kubernetes integrations.
@ -19,9 +19,9 @@ Kubernetes: `>= 1.26.0`
| Repository | Name | Version |
|------------|------|---------|
| https://cdn.zero-downtime.net/charts/ | kubezero-lib | >= 0.1.6 |
| https://prometheus-community.github.io/helm-charts | kube-prometheus-stack | 54.2.2 |
| https://prometheus-community.github.io/helm-charts | prometheus-adapter | 4.9.0 |
| https://prometheus-community.github.io/helm-charts | prometheus-pushgateway | 2.4.2 |
| https://prometheus-community.github.io/helm-charts | kube-prometheus-stack | 57.2.0 |
| https://prometheus-community.github.io/helm-charts | prometheus-adapter | 4.9.1 |
| https://prometheus-community.github.io/helm-charts | prometheus-pushgateway | 2.8.0 |
## Values
@@ -177,29 +177,30 @@ Kubernetes: `>= 1.26.0`
| kube-prometheus-stack.prometheusOperator.enabled | bool | `true` | |
| kube-prometheus-stack.prometheusOperator.logFormat | string | `"json"` | |
| kube-prometheus-stack.prometheusOperator.nodeSelector."node-role.kubernetes.io/control-plane" | string | `""` | |
| kube-prometheus-stack.prometheusOperator.resources.limits.memory | string | `"64Mi"` | |
| kube-prometheus-stack.prometheusOperator.resources.requests.cpu | string | `"20m"` | |
| kube-prometheus-stack.prometheusOperator.resources.requests.memory | string | `"32Mi"` | |
| kube-prometheus-stack.prometheusOperator.resources.limits.memory | string | `"128Mi"` | |
| kube-prometheus-stack.prometheusOperator.resources.requests.cpu | string | `"10m"` | |
| kube-prometheus-stack.prometheusOperator.resources.requests.memory | string | `"64Mi"` | |
| kube-prometheus-stack.prometheusOperator.tolerations[0].effect | string | `"NoSchedule"` | |
| kube-prometheus-stack.prometheusOperator.tolerations[0].key | string | `"node-role.kubernetes.io/control-plane"` | |
| prometheus-adapter.enabled | bool | `true` | |
| prometheus-adapter.logLevel | int | `1` | |
| prometheus-adapter.metricsRelistInterval | string | `"3m"` | |
| prometheus-adapter.nodeSelector."node-role.kubernetes.io/control-plane" | string | `""` | |
| prometheus-adapter.prometheus.url | string | `"http://metrics-kube-prometheus-st-prometheus"` | |
| prometheus-adapter.rules.default | bool | `false` | |
| prometheus-adapter.rules.resource.cpu.containerLabel | string | `"container"` | |
| prometheus-adapter.rules.resource.cpu.containerQuery | string | `"sum(irate(container_cpu_usage_seconds_total{<<.LabelMatchers>>,container!=\"POD\",container!=\"\",pod!=\"\"}[5m])) by (<<.GroupBy>>)"` | |
| prometheus-adapter.rules.resource.cpu.nodeQuery | string | `"sum(1 - irate(node_cpu_seconds_total{mode=\"idle\"}[5m]) * on(namespace, pod) group_left(node) node_namespace_pod:kube_pod_info:{<<.LabelMatchers>>}) by (<<.GroupBy>>)"` | |
| prometheus-adapter.rules.resource.cpu.containerQuery | string | `"sum by (<<.GroupBy>>) (\n irate (\n container_cpu_usage_seconds_total{<<.LabelMatchers>>,container!=\"\",pod!=\"\"}[120s]\n )\n)\n"` | |
| prometheus-adapter.rules.resource.cpu.nodeQuery | string | `"sum(1 - irate(node_cpu_seconds_total{<<.LabelMatchers>>, mode=\"idle\"}[120s])) by (<<.GroupBy>>)\n"` | |
| prometheus-adapter.rules.resource.cpu.resources.overrides.instance.resource | string | `"node"` | |
| prometheus-adapter.rules.resource.cpu.resources.overrides.namespace.resource | string | `"namespace"` | |
| prometheus-adapter.rules.resource.cpu.resources.overrides.node.resource | string | `"node"` | |
| prometheus-adapter.rules.resource.cpu.resources.overrides.pod.resource | string | `"pod"` | |
| prometheus-adapter.rules.resource.memory.containerLabel | string | `"container"` | |
| prometheus-adapter.rules.resource.memory.containerQuery | string | `"sum(container_memory_working_set_bytes{<<.LabelMatchers>>,container!=\"POD\",container!=\"\",pod!=\"\"}) by (<<.GroupBy>>)"` | |
| prometheus-adapter.rules.resource.memory.nodeQuery | string | `"sum(node_memory_MemTotal_bytes{job=\"node-exporter\",<<.LabelMatchers>>} - node_memory_MemAvailable_bytes{job=\"node-exporter\",<<.LabelMatchers>>}) by (<<.GroupBy>>)"` | |
| prometheus-adapter.rules.resource.memory.containerQuery | string | `"sum by (<<.GroupBy>>) (\n container_memory_working_set_bytes{<<.LabelMatchers>>,container!=\"\",pod!=\"\",container!=\"POD\"}\n)\n"` | |
| prometheus-adapter.rules.resource.memory.nodeQuery | string | `"sum(node_memory_MemTotal_bytes{<<.LabelMatchers>>} - node_memory_MemAvailable_bytes{<<.LabelMatchers>>}) by (<<.GroupBy>>)\n"` | |
| prometheus-adapter.rules.resource.memory.resources.overrides.instance.resource | string | `"node"` | |
| prometheus-adapter.rules.resource.memory.resources.overrides.namespace.resource | string | `"namespace"` | |
| prometheus-adapter.rules.resource.memory.resources.overrides.node.resource | string | `"node"` | |
| prometheus-adapter.rules.resource.memory.resources.overrides.pod.resource | string | `"pod"` | |
| prometheus-adapter.rules.resource.window | string | `"5m"` | |
| prometheus-adapter.rules.resource.window | string | `"2m"` | |
| prometheus-adapter.tolerations[0].effect | string | `"NoSchedule"` | |
| prometheus-adapter.tolerations[0].key | string | `"node-role.kubernetes.io/control-plane"` | |
| prometheus-pushgateway.enabled | bool | `false` | |
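
For readability, the flattened `prometheus-adapter.rules.resource` rows above correspond to a values fragment roughly like the following (reconstructed from the table, so whitespace is approximate):

```yaml
prometheus-adapter:
  rules:
    resource:
      cpu:
        containerQuery: |
          sum by (<<.GroupBy>>) (
            irate (
              container_cpu_usage_seconds_total{<<.LabelMatchers>>,container!="",pod!=""}[120s]
            )
          )
        nodeQuery: |
          sum(1 - irate(node_cpu_seconds_total{<<.LabelMatchers>>, mode="idle"}[120s])) by (<<.GroupBy>>)
      memory:
        containerQuery: |
          sum by (<<.GroupBy>>) (
            container_memory_working_set_bytes{<<.LabelMatchers>>,container!="",pod!="",container!="POD"}
          )
        nodeQuery: |
          sum(node_memory_MemTotal_bytes{<<.LabelMatchers>>} - node_memory_MemAvailable_bytes{<<.LabelMatchers>>}) by (<<.GroupBy>>)
      window: 2m
```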


@@ -0,0 +1,5 @@
root = true
[files/dashboards/*.json]
indent_size = 2
indent_style = space


@@ -26,3 +26,4 @@ ci/
kube-prometheus-*.tgz
unittests/
files/dashboards/


@@ -7,7 +7,7 @@ annotations:
url: https://github.com/prometheus-operator/kube-prometheus
artifacthub.io/operator: "true"
apiVersion: v2
appVersion: v0.69.1
appVersion: v0.72.0
dependencies:
- condition: crds.enabled
name: crds
@@ -16,19 +16,19 @@ dependencies:
- condition: kubeStateMetrics.enabled
name: kube-state-metrics
repository: https://prometheus-community.github.io/helm-charts
version: 5.15.*
version: 5.18.*
- condition: nodeExporter.enabled
name: prometheus-node-exporter
repository: https://prometheus-community.github.io/helm-charts
version: 4.24.*
version: 4.32.*
- condition: grafana.enabled
name: grafana
repository: https://grafana.github.io/helm-charts
version: 7.0.*
version: 7.3.*
- condition: windowsMonitoring.enabled
name: prometheus-windows-exporter
repository: https://prometheus-community.github.io/helm-charts
version: 0.1.*
version: 0.3.*
description: kube-prometheus-stack collects Kubernetes manifests, Grafana dashboards,
and Prometheus rules combined with documentation and scripts to provide easy to
operate end-to-end Kubernetes cluster monitoring with Prometheus using the Prometheus
@@ -49,6 +49,8 @@ maintainers:
name: gkarthiks
- email: kube-prometheus-stack@sisti.pt
name: GMartinez-Sisti
- email: github@jkroepke.de
name: jkroepke
- email: scott@r6by.com
name: scottrigby
- email: miroslav.hadzhiev@gmail.com
@@ -60,4 +62,4 @@ sources:
- https://github.com/prometheus-community/helm-charts
- https://github.com/prometheus-operator/kube-prometheus
type: application
version: 54.2.2
version: 57.2.0


@@ -82,6 +82,63 @@ _See [helm upgrade](https://helm.sh/docs/helm/helm_upgrade/) for command documen
A major chart version change (like v1.2.3 -> v2.0.0) indicates that there is an incompatible breaking change needing manual actions.
### From 56.x to 57.x
This version upgrades Prometheus-Operator to v0.72.0
Run these commands to update the CRDs before applying the upgrade.
```console
kubectl apply --server-side -f https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/v0.72.0/example/prometheus-operator-crd/monitoring.coreos.com_alertmanagerconfigs.yaml
kubectl apply --server-side -f https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/v0.72.0/example/prometheus-operator-crd/monitoring.coreos.com_alertmanagers.yaml
kubectl apply --server-side -f https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/v0.72.0/example/prometheus-operator-crd/monitoring.coreos.com_podmonitors.yaml
kubectl apply --server-side -f https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/v0.72.0/example/prometheus-operator-crd/monitoring.coreos.com_probes.yaml
kubectl apply --server-side -f https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/v0.72.0/example/prometheus-operator-crd/monitoring.coreos.com_prometheusagents.yaml
kubectl apply --server-side -f https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/v0.72.0/example/prometheus-operator-crd/monitoring.coreos.com_prometheuses.yaml
kubectl apply --server-side -f https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/v0.72.0/example/prometheus-operator-crd/monitoring.coreos.com_prometheusrules.yaml
kubectl apply --server-side -f https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/v0.72.0/example/prometheus-operator-crd/monitoring.coreos.com_scrapeconfigs.yaml
kubectl apply --server-side -f https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/v0.72.0/example/prometheus-operator-crd/monitoring.coreos.com_servicemonitors.yaml
kubectl apply --server-side -f https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/v0.72.0/example/prometheus-operator-crd/monitoring.coreos.com_thanosrulers.yaml
```
### From 55.x to 56.x
This version upgrades Prometheus-Operator to v0.71.0 and Prometheus to v2.49.1
Run these commands to update the CRDs before applying the upgrade.
```console
kubectl apply --server-side -f https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/v0.71.0/example/prometheus-operator-crd/monitoring.coreos.com_alertmanagerconfigs.yaml
kubectl apply --server-side -f https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/v0.71.0/example/prometheus-operator-crd/monitoring.coreos.com_alertmanagers.yaml
kubectl apply --server-side -f https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/v0.71.0/example/prometheus-operator-crd/monitoring.coreos.com_podmonitors.yaml
kubectl apply --server-side -f https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/v0.71.0/example/prometheus-operator-crd/monitoring.coreos.com_probes.yaml
kubectl apply --server-side -f https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/v0.71.0/example/prometheus-operator-crd/monitoring.coreos.com_prometheusagents.yaml
kubectl apply --server-side -f https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/v0.71.0/example/prometheus-operator-crd/monitoring.coreos.com_prometheuses.yaml
kubectl apply --server-side -f https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/v0.71.0/example/prometheus-operator-crd/monitoring.coreos.com_prometheusrules.yaml
kubectl apply --server-side -f https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/v0.71.0/example/prometheus-operator-crd/monitoring.coreos.com_scrapeconfigs.yaml
kubectl apply --server-side -f https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/v0.71.0/example/prometheus-operator-crd/monitoring.coreos.com_servicemonitors.yaml
kubectl apply --server-side -f https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/v0.71.0/example/prometheus-operator-crd/monitoring.coreos.com_thanosrulers.yaml
```
### From 54.x to 55.x
This version upgrades Prometheus-Operator to v0.70.0
Run these commands to update the CRDs before applying the upgrade.
```console
kubectl apply --server-side -f https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/v0.70.0/example/prometheus-operator-crd/monitoring.coreos.com_alertmanagerconfigs.yaml
kubectl apply --server-side -f https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/v0.70.0/example/prometheus-operator-crd/monitoring.coreos.com_alertmanagers.yaml
kubectl apply --server-side -f https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/v0.70.0/example/prometheus-operator-crd/monitoring.coreos.com_podmonitors.yaml
kubectl apply --server-side -f https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/v0.70.0/example/prometheus-operator-crd/monitoring.coreos.com_probes.yaml
kubectl apply --server-side -f https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/v0.70.0/example/prometheus-operator-crd/monitoring.coreos.com_prometheusagents.yaml
kubectl apply --server-side -f https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/v0.70.0/example/prometheus-operator-crd/monitoring.coreos.com_prometheuses.yaml
kubectl apply --server-side -f https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/v0.70.0/example/prometheus-operator-crd/monitoring.coreos.com_prometheusrules.yaml
kubectl apply --server-side -f https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/v0.70.0/example/prometheus-operator-crd/monitoring.coreos.com_scrapeconfigs.yaml
kubectl apply --server-side -f https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/v0.70.0/example/prometheus-operator-crd/monitoring.coreos.com_servicemonitors.yaml
kubectl apply --server-side -f https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/v0.70.0/example/prometheus-operator-crd/monitoring.coreos.com_thanosrulers.yaml
```
### From 53.x to 54.x
The Grafana Helm chart has been bumped to version 7


@@ -1,133 +1,112 @@
# https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/v0.69.1/example/prometheus-operator-crd/monitoring.coreos.com_prometheusrules.yaml
# https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/v0.72.0/example/prometheus-operator-crd/monitoring.coreos.com_prometheusrules.yaml
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.11.1
operator.prometheus.io/version: 0.69.1
controller-gen.kubebuilder.io/version: v0.13.0
operator.prometheus.io/version: 0.72.0
argocd.argoproj.io/sync-options: ServerSideApply=true
creationTimestamp: null
name: prometheusrules.monitoring.coreos.com
spec:
group: monitoring.coreos.com
names:
categories:
- prometheus-operator
- prometheus-operator
kind: PrometheusRule
listKind: PrometheusRuleList
plural: prometheusrules
shortNames:
- promrule
- promrule
singular: prometheusrule
scope: Namespaced
versions:
- name: v1
schema:
openAPIV3Schema:
description: PrometheusRule defines recording and alerting rules for a Prometheus
instance
properties:
apiVersion:
description: 'APIVersion defines the versioned schema of this representation
of an object. Servers should convert recognized schemas to the latest
internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
type: string
kind:
description: 'Kind is a string value representing the REST resource this
object represents. Servers may infer this from the endpoint the client
submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
type: string
metadata:
type: object
spec:
description: Specification of desired alerting rule definitions for Prometheus.
properties:
groups:
description: Content of Prometheus rule file
items:
description: RuleGroup is a list of sequentially evaluated recording
and alerting rules.
properties:
interval:
description: Interval determines how often rules in the group
are evaluated.
pattern: ^(0|(([0-9]+)y)?(([0-9]+)w)?(([0-9]+)d)?(([0-9]+)h)?(([0-9]+)m)?(([0-9]+)s)?(([0-9]+)ms)?)$
type: string
limit:
description: Limit the number of alerts an alerting rule and
series a recording rule can produce. Limit is supported starting
with Prometheus >= 2.31 and Thanos Ruler >= 0.24.
type: integer
name:
description: Name of the rule group.
minLength: 1
type: string
partial_response_strategy:
description: 'PartialResponseStrategy is only used by ThanosRuler
and will be ignored by Prometheus instances. More info: https://github.com/thanos-io/thanos/blob/main/docs/components/rule.md#partial-response'
pattern: ^(?i)(abort|warn)?$
type: string
rules:
description: List of alerting and recording rules.
items:
description: 'Rule describes an alerting or recording rule
See Prometheus documentation: [alerting](https://www.prometheus.io/docs/prometheus/latest/configuration/alerting_rules/)
or [recording](https://www.prometheus.io/docs/prometheus/latest/configuration/recording_rules/#recording-rules)
rule'
properties:
alert:
description: Name of the alert. Must be a valid label
value. Only one of `record` and `alert` must be set.
type: string
annotations:
additionalProperties:
- name: v1
schema:
openAPIV3Schema:
description: PrometheusRule defines recording and alerting rules for a Prometheus instance
properties:
apiVersion:
description: 'APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
type: string
kind:
description: 'Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
type: string
metadata:
type: object
spec:
description: Specification of desired alerting rule definitions for Prometheus.
properties:
groups:
description: Content of Prometheus rule file
items:
description: RuleGroup is a list of sequentially evaluated recording and alerting rules.
properties:
interval:
description: Interval determines how often rules in the group are evaluated.
pattern: ^(0|(([0-9]+)y)?(([0-9]+)w)?(([0-9]+)d)?(([0-9]+)h)?(([0-9]+)m)?(([0-9]+)s)?(([0-9]+)ms)?)$
type: string
limit:
description: Limit the number of alerts an alerting rule and series a recording rule can produce. Limit is supported starting with Prometheus >= 2.31 and Thanos Ruler >= 0.24.
type: integer
name:
description: Name of the rule group.
minLength: 1
type: string
partial_response_strategy:
description: 'PartialResponseStrategy is only used by ThanosRuler and will be ignored by Prometheus instances. More info: https://github.com/thanos-io/thanos/blob/main/docs/components/rule.md#partial-response'
pattern: ^(?i)(abort|warn)?$
type: string
rules:
description: List of alerting and recording rules.
items:
description: 'Rule describes an alerting or recording rule See Prometheus documentation: [alerting](https://www.prometheus.io/docs/prometheus/latest/configuration/alerting_rules/) or [recording](https://www.prometheus.io/docs/prometheus/latest/configuration/recording_rules/#recording-rules) rule'
properties:
alert:
description: Name of the alert. Must be a valid label value. Only one of `record` and `alert` must be set.
type: string
description: Annotations to add to each alert. Only valid
for alerting rules.
type: object
expr:
anyOf:
- type: integer
- type: string
description: PromQL expression to evaluate.
x-kubernetes-int-or-string: true
for:
description: Alerts are considered firing once they have
been returned for this long.
pattern: ^(0|(([0-9]+)y)?(([0-9]+)w)?(([0-9]+)d)?(([0-9]+)h)?(([0-9]+)m)?(([0-9]+)s)?(([0-9]+)ms)?)$
type: string
keep_firing_for:
description: KeepFiringFor defines how long an alert will
continue firing after the condition that triggered it
has cleared.
minLength: 1
pattern: ^(0|(([0-9]+)y)?(([0-9]+)w)?(([0-9]+)d)?(([0-9]+)h)?(([0-9]+)m)?(([0-9]+)s)?(([0-9]+)ms)?)$
type: string
labels:
additionalProperties:
annotations:
additionalProperties:
type: string
description: Annotations to add to each alert. Only valid for alerting rules.
type: object
expr:
anyOf:
- type: integer
- type: string
description: PromQL expression to evaluate.
x-kubernetes-int-or-string: true
for:
description: Alerts are considered firing once they have been returned for this long.
pattern: ^(0|(([0-9]+)y)?(([0-9]+)w)?(([0-9]+)d)?(([0-9]+)h)?(([0-9]+)m)?(([0-9]+)s)?(([0-9]+)ms)?)$
type: string
description: Labels to add or overwrite.
type: object
record:
description: Name of the time series to output to. Must
be a valid metric name. Only one of `record` and `alert`
must be set.
type: string
required:
- expr
type: object
type: array
required:
- name
type: object
type: array
x-kubernetes-list-map-keys:
- name
x-kubernetes-list-type: map
type: object
required:
- spec
type: object
served: true
storage: true
keep_firing_for:
description: KeepFiringFor defines how long an alert will continue firing after the condition that triggered it has cleared.
minLength: 1
pattern: ^(0|(([0-9]+)y)?(([0-9]+)w)?(([0-9]+)d)?(([0-9]+)h)?(([0-9]+)m)?(([0-9]+)s)?(([0-9]+)ms)?)$
type: string
labels:
additionalProperties:
type: string
description: Labels to add or overwrite.
type: object
record:
description: Name of the time series to output to. Must be a valid metric name. Only one of `record` and `alert` must be set.
type: string
required:
- expr
type: object
type: array
required:
- name
type: object
type: array
x-kubernetes-list-map-keys:
- name
x-kubernetes-list-type: map
type: object
required:
- spec
type: object
served: true
storage: true
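
For reference, a minimal PrometheusRule that satisfies the schema above (names and threshold illustrative):

```yaml
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: example-rules
spec:
  groups:
    - name: example
      interval: 1m
      rules:
        - alert: HighErrorRate
          expr: 'rate(http_requests_total{code=~"5.."}[5m]) > 0.1'
          for: 10m
          labels:
            severity: warning
          annotations:
            summary: High 5xx rate for 10 minutes
```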


@@ -1,15 +1,15 @@
annotations:
artifacthub.io/license: AGPL-3.0-only
artifacthub.io/license: Apache-2.0
artifacthub.io/links: |
- name: Chart Source
url: https://github.com/grafana/helm-charts
- name: Upstream Project
url: https://github.com/grafana/grafana
apiVersion: v2
appVersion: 10.1.5
appVersion: 10.4.0
description: The leading tool for querying and visualizing time series and metrics.
home: https://grafana.net
icon: https://raw.githubusercontent.com/grafana/grafana/master/public/img/logo_transparent_400x.png
home: https://grafana.com
icon: https://artifacthub.io/image/b4fed1a7-6c8f-4945-b99d-096efa3e4116
keywords:
- monitoring
- metric
@@ -30,4 +30,4 @@ sources:
- https://github.com/grafana/grafana
- https://github.com/grafana/helm-charts
type: application
version: 7.0.8
version: 7.3.7


@@ -48,7 +48,7 @@ This version requires Helm >= 3.1.0.
### To 7.0.0
For consistency with other Helm charts, the `global.image.registry` parameter was renamed
to `global.imageRegistry`. If you were not previously setting `global.image.registry`, no action
is required on upgrade. If you were previously setting `global.image.registry`, you will
need to instead set `global.imageRegistry`.
@@ -136,6 +136,8 @@ need to instead set `global.imageRegistry`.
| `enableServiceLinks` | Inject Kubernetes services as environment variables. | `true` |
| `extraSecretMounts` | Additional grafana server secret mounts | `[]` |
| `extraVolumeMounts` | Additional grafana server volume mounts | `[]` |
| `extraVolumes` | Additional Grafana server volumes | `[]` |
| `automountServiceAccountToken` | Mount the service account token on the grafana pod. Mandatory if sidecars are enabled | `true` |
| `createConfigmap` | Enable creating the grafana configmap | `true` |
| `extraConfigmapMounts` | Additional grafana server configMap volume mounts (values are templated) | `[]` |
| `extraEmptyDirMounts` | Additional grafana server emptyDir volume mounts | `[]` |
@@ -160,7 +162,7 @@ need to instead set `global.imageRegistry`.
| `lifecycleHooks` | Lifecycle hooks for podStart and preStop [Example](https://kubernetes.io/docs/tasks/configure-pod-container/attach-handler-lifecycle-event/#define-poststart-and-prestop-handlers) | `{}` |
| `sidecar.image.registry` | Sidecar image registry | `quay.io` |
| `sidecar.image.repository` | Sidecar image repository | `kiwigrid/k8s-sidecar` |
| `sidecar.image.tag` | Sidecar image tag | `1.24.6` |
| `sidecar.image.tag` | Sidecar image tag | `1.26.0` |
| `sidecar.image.sha` | Sidecar image sha (optional) | `""` |
| `sidecar.imagePullPolicy` | Sidecar image pull policy | `IfNotPresent` |
| `sidecar.resources` | Sidecar resources | `{}` |
@@ -174,7 +176,7 @@ need to instead set `global.imageRegistry`.
| `sidecar.alerts.resource` | Should the sidecar looks into secrets, configmaps or both. | `both` |
| `sidecar.alerts.reloadURL` | Full url of datasource configuration reload API endpoint, to invoke after a config-map change | `"http://localhost:3000/api/admin/provisioning/alerting/reload"` |
| `sidecar.alerts.skipReload` | Enabling this omits defining the REQ_URL and REQ_METHOD environment variables | `false` |
| `sidecar.alerts.initDatasources` | Set to true to deploy the datasource sidecar as an initContainer. This is needed if skipReload is true, to load any alerts defined at startup time. | `false` |
| `sidecar.alerts.initAlerts` | Set to true to deploy the alerts sidecar as an initContainer. This is needed if skipReload is true, to load any alerts defined at startup time. | `false` |
| `sidecar.alerts.extraMounts` | Additional alerts sidecar volume mounts. | `[]` |
| `sidecar.dashboards.enabled` | Enables the cluster wide search for dashboards and adds/updates/deletes them in grafana | `false` |
| `sidecar.dashboards.SCProvider` | Enables creation of sidecar provider | `true` |
@@ -222,7 +224,7 @@ need to instead set `global.imageRegistry`.
| `admin.existingSecret` | The name of an existing secret containing the admin credentials (can be templated). | `""` |
| `admin.userKey` | The key in the existing admin secret containing the username. | `"admin-user"` |
| `admin.passwordKey` | The key in the existing admin secret containing the password. | `"admin-password"` |
| `serviceAccount.autoMount` | Automount the service account token in the pod| `true` |
| `serviceAccount.automountServiceAccountToken` | Automount the service account token on all pods where this service account is used | `false` |
| `serviceAccount.annotations` | ServiceAccount annotations | |
| `serviceAccount.create` | Create service account | `true` |
| `serviceAccount.labels` | ServiceAccount labels | `{}` |
@@ -315,24 +317,35 @@ ingress:
path: "/grafana"
```
### Example of extraVolumeMounts
### Example of extraVolumeMounts and extraVolumes
Volume can be type persistentVolumeClaim or hostPath but not both at the same time.
If neither existingClaim nor hostPath argument is given then type is emptyDir.
Configure additional volumes with `extraVolumes` and volume mounts with `extraVolumeMounts`.
Example for `extraVolumeMounts` and corresponding `extraVolumes`:
```yaml
- extraVolumeMounts:
extraVolumeMounts:
- name: plugins
mountPath: /var/lib/grafana/plugins
subPath: configs/grafana/plugins
existingClaim: existing-grafana-claim
readOnly: false
- name: dashboards
mountPath: /var/lib/grafana/dashboards
hostPath: /usr/shared/grafana/dashboards
readOnly: false
extraVolumes:
- name: plugins
existingClaim: existing-grafana-claim
- name: dashboards
hostPath: /usr/shared/grafana/dashboards
```
Volumes default to `emptyDir`. Set to `persistentVolumeClaim`,
`hostPath`, `csi`, or `configMap` for other types. For a
`persistentVolumeClaim`, specify an existing claim name with
`existingClaim`.
## Import dashboards
There are a few methods to import dashboards to Grafana. Below are some examples and explanations as to how to use each method:
@@ -544,9 +557,61 @@ delete_notifiers:
# default org_id: 1
```
## Provision alert rules, contact points, notification policies and notification templates
## Sidecar for alerting resources
There are two methods to provision alerting configuration in Grafana. Below are some examples and explanations as to how to use each method:
If the parameter `sidecar.alerts.enabled` is set, a sidecar container is deployed in the grafana
pod. This container watches all configmaps (or secrets) in the cluster (namespace defined by `sidecar.alerts.searchNamespace`) and filters out the ones with
a label as defined in `sidecar.alerts.label` (default is `grafana_alert`). The files defined in those configmaps are written
to a folder and accessed by grafana. Changes to the configmaps are monitored and the imported alerting resources are updated; however, deletions are a little more complicated (see below).
This sidecar can be used to provision alert rules, contact points, notification policies, notification templates and mute timings as shown in [Grafana Documentation](https://grafana.com/docs/grafana/next/alerting/set-up/provision-alerting-resources/file-provisioning/).
To fetch the alert config which will be provisioned, use the alert provisioning API ([Grafana Documentation](https://grafana.com/docs/grafana/next/developers/http_api/alerting_provisioning/)).
You can use either JSON or YAML format.
Example config for an alert rule:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: sample-grafana-alert
labels:
grafana_alert: "1"
data:
k8s-alert.yml: |-
apiVersion: 1
groups:
- orgId: 1
name: k8s-alert
[...]
```
Deleting provisioned alert rules is a two-step process: first delete the configmap which defined the alert rule,
then create a configuration which deletes the alert rule.
Example deletion configuration:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: delete-sample-grafana-alert
namespace: monitoring
labels:
grafana_alert: "1"
data:
delete-k8s-alert.yml: |-
apiVersion: 1
deleteRules:
- orgId: 1
uid: 16624780-6564-45dc-825c-8bded4ad92d3
```
## Statically provision alerting resources
If you don't need to change alerting resources (alert rules, contact points, notification policies and notification templates) regularly, you can use the `alerting` config option instead of the sidecar option above.
This grabs the alerting config and applies it statically when the Helm chart is rendered.
There are two methods to statically provision alerting configuration in Grafana. Below are some examples and explanations as to how to use each method:
```yaml
alerting:
@@ -576,13 +641,14 @@ alerting:
title: '{{ `{{ template "default.title" . }}` }}'
```
There are two possibilities:
The two possibilities for static alerting resource provisioning are:
* Inlining the file contents as described in the example `values.yaml` and the official [Grafana documentation](https://grafana.com/docs/grafana/next/alerting/set-up/provision-alerting-resources/file-provisioning/).
* Importing a file using a relative path starting from the chart root directory.
* Inlining the file contents as shown for contact points in the above example.
* Importing a file using a relative path starting from the chart root directory as shown for the alert rules in the above example.
### Important notes on file provisioning
* The format of the files is defined in the [Grafana documentation](https://grafana.com/docs/grafana/next/alerting/set-up/provision-alerting-resources/file-provisioning/) on file provisioning.
* The chart supports importing YAML and JSON files.
* The filename must be unique, otherwise one volume mount will overwrite the other.
* In case of inlining, double curly braces that arise from the Grafana configuration format and are not intended as templates for the chart must be escaped.


@@ -0,0 +1,171 @@
{{/*
Generate config map data
*/}}
{{- define "grafana.configData" -}}
{{ include "grafana.assertNoLeakedSecrets" . }}
{{- $files := .Files }}
{{- $root := . -}}
{{- with .Values.plugins }}
plugins: {{ join "," . }}
{{- end }}
grafana.ini: |
{{- range $elem, $elemVal := index .Values "grafana.ini" }}
{{- if not (kindIs "map" $elemVal) }}
{{- if kindIs "invalid" $elemVal }}
{{ $elem }} =
{{- else if kindIs "string" $elemVal }}
{{ $elem }} = {{ tpl $elemVal $ }}
{{- else }}
{{ $elem }} = {{ $elemVal }}
{{- end }}
{{- end }}
{{- end }}
{{- range $key, $value := index .Values "grafana.ini" }}
{{- if kindIs "map" $value }}
[{{ $key }}]
{{- range $elem, $elemVal := $value }}
{{- if kindIs "invalid" $elemVal }}
{{ $elem }} =
{{- else if kindIs "string" $elemVal }}
{{ $elem }} = {{ tpl $elemVal $ }}
{{- else }}
{{ $elem }} = {{ $elemVal }}
{{- end }}
{{- end }}
{{- end }}
{{- end }}
{{- range $key, $value := .Values.datasources }}
{{- if not (hasKey $value "secret") }}
{{ $key }}: |
{{- tpl (toYaml $value | nindent 2) $root }}
{{- end }}
{{- end }}
{{- range $key, $value := .Values.notifiers }}
{{- if not (hasKey $value "secret") }}
{{ $key }}: |
{{- toYaml $value | nindent 2 }}
{{- end }}
{{- end }}
{{- range $key, $value := .Values.alerting }}
{{- if (hasKey $value "file") }}
{{ $key }}:
{{- toYaml ( $files.Get $value.file ) | nindent 2 }}
{{- else if (or (hasKey $value "secret") (hasKey $value "secretFile"))}}
{{/* will be stored inside secret generated by "configSecret.yaml"*/}}
{{- else }}
{{ $key }}: |
{{- tpl (toYaml $value | nindent 2) $root }}
{{- end }}
{{- end }}
{{- range $key, $value := .Values.dashboardProviders }}
{{ $key }}: |
{{- toYaml $value | nindent 2 }}
{{- end }}
{{- if .Values.dashboards }}
download_dashboards.sh: |
#!/usr/bin/env sh
set -euf
{{- if .Values.dashboardProviders }}
{{- range $key, $value := .Values.dashboardProviders }}
{{- range $value.providers }}
mkdir -p {{ .options.path }}
{{- end }}
{{- end }}
{{- end }}
{{ $dashboardProviders := .Values.dashboardProviders }}
{{- range $provider, $dashboards := .Values.dashboards }}
{{- range $key, $value := $dashboards }}
{{- if (or (hasKey $value "gnetId") (hasKey $value "url")) }}
curl -skf \
--connect-timeout 60 \
--max-time 60 \
{{- if not $value.b64content }}
{{- if not $value.acceptHeader }}
-H "Accept: application/json" \
{{- else }}
-H "Accept: {{ $value.acceptHeader }}" \
{{- end }}
{{- if $value.token }}
-H "Authorization: token {{ $value.token }}" \
{{- end }}
{{- if $value.bearerToken }}
-H "Authorization: Bearer {{ $value.bearerToken }}" \
{{- end }}
{{- if $value.basic }}
-H "Authorization: Basic {{ $value.basic }}" \
{{- end }}
{{- if $value.gitlabToken }}
-H "PRIVATE-TOKEN: {{ $value.gitlabToken }}" \
{{- end }}
-H "Content-Type: application/json;charset=UTF-8" \
{{- end }}
{{- $dpPath := "" -}}
{{- range $kd := (index $dashboardProviders "dashboardproviders.yaml").providers }}
{{- if eq $kd.name $provider }}
{{- $dpPath = $kd.options.path }}
{{- end }}
{{- end }}
{{- if $value.url }}
"{{ $value.url }}" \
{{- else }}
"https://grafana.com/api/dashboards/{{ $value.gnetId }}/revisions/{{- if $value.revision -}}{{ $value.revision }}{{- else -}}1{{- end -}}/download" \
{{- end }}
{{- if $value.datasource }}
{{- if kindIs "string" $value.datasource }}
| sed '/-- .* --/! s/"datasource":.*,/"datasource": "{{ $value.datasource }}",/g' \
{{- end }}
{{- if kindIs "slice" $value.datasource }}
{{- range $value.datasource }}
| sed '/-- .* --/! s/${{"{"}}{{ .name }}}/{{ .value }}/g' \
{{- end }}
{{- end }}
{{- end }}
{{- if $value.b64content }}
| base64 -d \
{{- end }}
> "{{- if $dpPath -}}{{ $dpPath }}{{- else -}}/var/lib/grafana/dashboards/{{ $provider }}{{- end -}}/{{ $key }}.json"
{{ end }}
{{- end }}
{{- end }}
{{- end }}
{{- end -}}
{{/*
Generate dashboard json config map data
*/}}
{{- define "grafana.configDashboardProviderData" -}}
provider.yaml: |-
apiVersion: 1
providers:
- name: '{{ .Values.sidecar.dashboards.provider.name }}'
orgId: {{ .Values.sidecar.dashboards.provider.orgid }}
{{- if not .Values.sidecar.dashboards.provider.foldersFromFilesStructure }}
folder: '{{ .Values.sidecar.dashboards.provider.folder }}'
{{- end }}
type: {{ .Values.sidecar.dashboards.provider.type }}
disableDeletion: {{ .Values.sidecar.dashboards.provider.disableDelete }}
allowUiUpdates: {{ .Values.sidecar.dashboards.provider.allowUiUpdates }}
updateIntervalSeconds: {{ .Values.sidecar.dashboards.provider.updateIntervalSeconds | default 30 }}
options:
foldersFromFilesStructure: {{ .Values.sidecar.dashboards.provider.foldersFromFilesStructure }}
path: {{ .Values.sidecar.dashboards.folder }}{{- with .Values.sidecar.dashboards.defaultFolderName }}/{{ . }}{{- end }}
{{- end -}}
{{- define "grafana.secretsData" -}}
{{- if and (not .Values.env.GF_SECURITY_DISABLE_INITIAL_ADMIN_CREATION) (not .Values.admin.existingSecret) (not .Values.env.GF_SECURITY_ADMIN_PASSWORD__FILE) (not .Values.env.GF_SECURITY_ADMIN_PASSWORD) }}
admin-user: {{ .Values.adminUser | b64enc | quote }}
{{- if .Values.adminPassword }}
admin-password: {{ .Values.adminPassword | b64enc | quote }}
{{- else }}
admin-password: {{ include "grafana.password" . }}
{{- end }}
{{- end }}
{{- if not .Values.ldap.existingSecret }}
ldap-toml: {{ tpl .Values.ldap.config $ | b64enc | quote }}
{{- end }}
{{- end -}}


@@ -225,3 +225,54 @@ Formats imagePullSecrets. Input is (dict "root" . "imagePullSecrets" .{specific
{{- end }}
{{- $secretFound}}
{{- end -}}
{{/*
Checks whether the user is attempting to store secrets in plaintext
in the grafana.ini configmap
*/}}
{{/* grafana.assertNoLeakedSecrets checks for sensitive keys in values */}}
{{- define "grafana.assertNoLeakedSecrets" -}}
{{- $sensitiveKeysYaml := `
sensitiveKeys:
- path: ["database", "password"]
- path: ["smtp", "password"]
- path: ["security", "secret_key"]
- path: ["security", "admin_password"]
- path: ["auth.basic", "password"]
- path: ["auth.ldap", "bind_password"]
- path: ["auth.google", "client_secret"]
- path: ["auth.github", "client_secret"]
- path: ["auth.gitlab", "client_secret"]
- path: ["auth.generic_oauth", "client_secret"]
- path: ["auth.okta", "client_secret"]
- path: ["auth.azuread", "client_secret"]
- path: ["auth.grafana_com", "client_secret"]
- path: ["auth.grafananet", "client_secret"]
- path: ["azure", "user_identity_client_secret"]
- path: ["unified_alerting", "ha_redis_password"]
- path: ["metrics", "basic_auth_password"]
- path: ["external_image_storage.s3", "secret_key"]
- path: ["external_image_storage.webdav", "password"]
- path: ["external_image_storage.azure_blob", "account_key"]
` | fromYaml -}}
{{- if $.Values.assertNoLeakedSecrets -}}
{{- $grafanaIni := index .Values "grafana.ini" -}}
{{- range $_, $secret := $sensitiveKeysYaml.sensitiveKeys -}}
{{- $currentMap := $grafanaIni -}}
{{- $shouldContinue := true -}}
{{- range $index, $elem := $secret.path -}}
{{- if and $shouldContinue (hasKey $currentMap $elem) -}}
{{- if eq (len $secret.path) (add1 $index) -}}
{{- if not (regexMatch "\\$(?:__(?:env|file|vault))?{[^}]+}" (index $currentMap $elem)) -}}
{{- fail (printf "Sensitive key '%s' should not be defined explicitly in values. Use variable expansion instead. You can disable this client-side validation by changing the value of assertNoLeakedSecrets." (join "." $secret.path)) -}}
{{- end -}}
{{- else -}}
{{- $currentMap = index $currentMap $elem -}}
{{- end -}}
{{- else -}}
{{- $shouldContinue = false -}}
{{- end -}}
{{- end -}}
{{- end -}}
{{- end -}}
{{- end -}}
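
In values terms, the check rejects plaintext at any of these paths unless the value uses Grafana's variable-expansion syntax (`${...}`, `$__env{...}`, `$__file{...}` or `$__vault{...}`), which the regex above allows. A sketch that passes (env var name illustrative):

```yaml
grafana.ini:
  smtp:
    # Expanded by Grafana at runtime, so the plaintext password
    # never appears in the rendered configmap.
    password: $__env{GF_SMTP_PASSWORD}
```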


@@ -5,7 +5,7 @@
schedulerName: "{{ . }}"
{{- end }}
serviceAccountName: {{ include "grafana.serviceAccountName" . }}
automountServiceAccountToken: {{ .Values.serviceAccount.autoMount }}
automountServiceAccountToken: {{ .Values.automountServiceAccountToken }}
{{- with .Values.securityContext }}
securityContext:
{{- toYaml . | nindent 2 }}
@@ -14,6 +14,13 @@ securityContext:
hostAliases:
{{- toYaml . | nindent 2 }}
{{- end }}
{{- if .Values.dnsPolicy }}
dnsPolicy: {{ .Values.dnsPolicy }}
{{- end }}
{{- with .Values.dnsConfig }}
dnsConfig:
{{- toYaml . | nindent 2 }}
{{- end }}
{{- with .Values.priorityClassName }}
priorityClassName: {{ . }}
{{- end }}
@@ -169,7 +176,7 @@ initContainers:
mountPath: "/etc/grafana/provisioning/alerting"
{{- with .Values.sidecar.alerts.extraMounts }}
{{- toYaml . | trim | nindent 6 }}
{{- end }}
{{- end }}
{{- end }}
{{- if and .Values.sidecar.datasources.enabled .Values.sidecar.datasources.initDatasources }}
- name: {{ include "grafana.name" . }}-init-sc-datasources
@@ -411,7 +418,7 @@ containers:
mountPath: "/etc/grafana/provisioning/alerting"
{{- with .Values.sidecar.alerts.extraMounts }}
{{- toYaml . | trim | nindent 6 }}
{{- end }}
{{- end }}
{{- end}}
{{- if .Values.sidecar.dashboards.enabled }}
- name: {{ include "grafana.name" . }}-sc-dashboard
@@ -427,6 +434,11 @@ containers:
- name: "{{ $key }}"
value: "{{ $value }}"
{{- end }}
{{- range $key, $value := .Values.sidecar.datasources.envValueFrom }}
- name: {{ $key | quote }}
valueFrom:
{{- tpl (toYaml $value) $ | nindent 10 }}
{{- end }}
{{- if .Values.sidecar.dashboards.ignoreAlreadyProcessed }}
- name: IGNORE_ALREADY_PROCESSED
value: "true"
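
The added `envValueFrom` range renders standard Kubernetes `valueFrom` sources into the sidecar container, as declared under `sidecar.datasources`. A values fragment could look like this (secret name hypothetical):

```yaml
sidecar:
  datasources:
    envValueFrom:
      DB_PASSWORD:
        secretKeyRef:
          name: grafana-datasource-secrets
          key: db-password
```
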
@@ -898,26 +910,47 @@ containers:
{{- end }}
{{- end }}
{{- with .Values.datasources }}
{{- $datasources := . }}
{{- range (keys . | sortAlpha) }}
{{- if (or (hasKey (index $datasources .) "secret")) }} {{/*check if current datasource should be handled as secret */}}
- name: config-secret
mountPath: "/etc/grafana/provisioning/datasources/{{ . }}"
subPath: {{ . | quote }}
{{- else }}
- name: config
mountPath: "/etc/grafana/provisioning/datasources/{{ . }}"
subPath: {{ . | quote }}
{{- end }}
{{- end }}
{{- end }}
{{- with .Values.notifiers }}
{{- $notifiers := . }}
{{- range (keys . | sortAlpha) }}
{{- if (or (hasKey (index $notifiers .) "secret")) }} {{/*check if current notifier should be handled as secret */}}
- name: config-secret
mountPath: "/etc/grafana/provisioning/notifiers/{{ . }}"
subPath: {{ . | quote }}
{{- else }}
- name: config
mountPath: "/etc/grafana/provisioning/notifiers/{{ . }}"
subPath: {{ . | quote }}
{{- end }}
{{- end }}
{{- end }}
{{- with .Values.alerting }}
{{- $alertingmap := .}}
{{- range (keys . | sortAlpha) }}
{{- if (or (hasKey (index $.Values.alerting .) "secret") (hasKey (index $.Values.alerting .) "secretFile")) }} {{/*check if current alerting entry should be handled as secret */}}
- name: config-secret
mountPath: "/etc/grafana/provisioning/alerting/{{ . }}"
subPath: {{ . | quote }}
{{- else }}
- name: config
mountPath: "/etc/grafana/provisioning/alerting/{{ . }}"
subPath: {{ . | quote }}
{{- end }}
{{- end }}
{{- end }}
{{- with .Values.dashboardProviders }}
{{- range (keys . | sortAlpha) }}
- name: config
@@ -1051,11 +1084,17 @@ containers:
- secretRef:
name: {{ tpl .name $ }}
optional: {{ .optional | default false }}
{{- if .prefix }}
prefix: {{ tpl .prefix $ }}
{{- end }}
{{- end }}
{{- range .Values.envFromConfigMaps }}
- configMapRef:
name: {{ tpl .name $ }}
optional: {{ .optional | default false }}
{{- if .prefix }}
prefix: {{ tpl .prefix $ }}
{{- end }}
{{- end }}
{{- end }}
{{- with .Values.livenessProbe }}
@@ -1097,6 +1136,12 @@ volumes:
- name: config
configMap:
name: {{ include "grafana.fullname" . }}
{{- $createConfigSecret := eq (include "grafana.shouldCreateConfigSecret" .) "true" -}}
{{- if and .Values.createConfigmap $createConfigSecret }}
- name: config-secret
secret:
secretName: {{ include "grafana.fullname" . }}-config-secret
{{- end }}
{{- range .Values.extraConfigmapMounts }}
- name: {{ tpl .name $root }}
configMap:
@@ -1230,10 +1275,13 @@ volumes:
{{ toYaml .hostPath | nindent 6 }}
{{- else if .csi }}
csi:
{{- toYaml .data | nindent 6 }}
{{- toYaml .csi | nindent 6 }}
{{- else if .configMap }}
configMap:
{{- toYaml .configMap | nindent 6 }}
{{- else if .emptyDir }}
emptyDir:
{{- toYaml .emptyDir | nindent 6 }}
{{- else }}
emptyDir: {}
{{- end }}
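
Besides fixing the `csi` branch to read `.csi` instead of `.data`, this hunk adds an explicit `emptyDir` passthrough for extra volumes, e.g. (a sketch):

```yaml
extraVolumes:
  - name: scratch
    emptyDir:
      sizeLimit: 100Mi
```
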
@@ -1246,4 +1294,3 @@ volumes:
{{- tpl (toYaml .) $root | nindent 2 }}
{{- end }}
{{- end }}


@@ -25,13 +25,13 @@ stringData:
{{- range $key, $value := .Values.datasources }}
{{- if (hasKey $value "secret") }}
{{- $key | nindent 2 }}: |
{{- tpl (toYaml $value | nindent 4) $root }}
{{- tpl (toYaml $value.secret | nindent 4) $root }}
{{- end }}
{{- end }}
{{- range $key, $value := .Values.notifiers }}
{{- if (hasKey $value "secret") }}
{{- $key | nindent 2 }}: |
{{- tpl (toYaml $value | nindent 4) $root }}
{{- tpl (toYaml $value.secret | nindent 4) $root }}
{{- end }}
{{- end }}
{{- range $key, $value := .Values.alerting }}
@@ -40,4 +40,4 @@ stringData:
{{- tpl (toYaml $value.secret | nindent 4) $root }}
{{- end }}
{{- end }}
{{- end }}
{{- end }}
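
The fix matters for entries carrying a `secret` key: only the nested payload under `secret` now lands in the generated config secret, rather than the whole entry including its `secret` wrapper. A hypothetical datasource entry:

```yaml
datasources:
  datasources.yaml:
    secret:
      apiVersion: 1
      datasources:
        - name: Prometheus
          type: prometheus
          url: http://metrics-kube-prometheus-st-prometheus
```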


@@ -11,19 +11,5 @@ metadata:
name: {{ include "grafana.fullname" . }}-config-dashboards
namespace: {{ include "grafana.namespace" . }}
data:
provider.yaml: |-
apiVersion: 1
providers:
- name: '{{ .Values.sidecar.dashboards.provider.name }}'
orgId: {{ .Values.sidecar.dashboards.provider.orgid }}
{{- if not .Values.sidecar.dashboards.provider.foldersFromFilesStructure }}
folder: '{{ .Values.sidecar.dashboards.provider.folder }}'
{{- end }}
type: {{ .Values.sidecar.dashboards.provider.type }}
disableDeletion: {{ .Values.sidecar.dashboards.provider.disableDelete }}
allowUiUpdates: {{ .Values.sidecar.dashboards.provider.allowUiUpdates }}
updateIntervalSeconds: {{ .Values.sidecar.dashboards.provider.updateIntervalSeconds | default 30 }}
options:
foldersFromFilesStructure: {{ .Values.sidecar.dashboards.provider.foldersFromFilesStructure }}
path: {{ .Values.sidecar.dashboards.folder }}{{- with .Values.sidecar.dashboards.defaultFolderName }}/{{ . }}{{- end }}
{{- include "grafana.configDashboardProviderData" . | nindent 2 }}
{{- end }}


@@ -1,6 +1,4 @@
{{- if .Values.createConfigmap }}
{{- $files := .Files }}
{{- $root := . -}}
apiVersion: v1
kind: ConfigMap
metadata:
@@ -13,132 +11,5 @@ metadata:
{{- toYaml . | nindent 4 }}
{{- end }}
data:
{{- with .Values.plugins }}
plugins: {{ join "," . }}
{{- end }}
grafana.ini: |
{{- range $elem, $elemVal := index .Values "grafana.ini" }}
{{- if not (kindIs "map" $elemVal) }}
{{- if kindIs "invalid" $elemVal }}
{{ $elem }} =
{{- else if kindIs "string" $elemVal }}
{{ $elem }} = {{ tpl $elemVal $ }}
{{- else }}
{{ $elem }} = {{ $elemVal }}
{{- end }}
{{- end }}
{{- end }}
{{- range $key, $value := index .Values "grafana.ini" }}
{{- if kindIs "map" $value }}
[{{ $key }}]
{{- range $elem, $elemVal := $value }}
{{- if kindIs "invalid" $elemVal }}
{{ $elem }} =
{{- else if kindIs "string" $elemVal }}
{{ $elem }} = {{ tpl $elemVal $ }}
{{- else }}
{{ $elem }} = {{ $elemVal }}
{{- end }}
{{- end }}
{{- end }}
{{- end }}
{{- range $key, $value := .Values.datasources }}
{{- if not (hasKey $value "secret") }}
{{- $key | nindent 2 }}: |
{{- tpl (toYaml $value | nindent 4) $root }}
{{- end }}
{{- end }}
{{- range $key, $value := .Values.notifiers }}
{{- if not (hasKey $value "secret") }}
{{- $key | nindent 2 }}: |
{{- toYaml $value | nindent 4 }}
{{- end }}
{{- end }}
{{- range $key, $value := .Values.alerting }}
{{- if (hasKey $value "file") }}
{{- $key | nindent 2 }}:
{{- toYaml ( $files.Get $value.file ) | nindent 4}}
{{- else if (or (hasKey $value "secret") (hasKey $value "secretFile"))}}
{{/* will be stored inside secret generated by "configSecret.yaml"*/}}
{{- else }}
{{- $key | nindent 2 }}: |
{{- tpl (toYaml $value | nindent 4) $root }}
{{- end }}
{{- end }}
{{- range $key, $value := .Values.dashboardProviders }}
{{- $key | nindent 2 }}: |
{{- toYaml $value | nindent 4 }}
{{- end }}
{{- if .Values.dashboards }}
download_dashboards.sh: |
#!/usr/bin/env sh
set -euf
{{- if .Values.dashboardProviders }}
{{- range $key, $value := .Values.dashboardProviders }}
{{- range $value.providers }}
mkdir -p {{ .options.path }}
{{- end }}
{{- end }}
{{- end }}
{{ $dashboardProviders := .Values.dashboardProviders }}
{{- range $provider, $dashboards := .Values.dashboards }}
{{- range $key, $value := $dashboards }}
{{- if (or (hasKey $value "gnetId") (hasKey $value "url")) }}
curl -skf \
--connect-timeout 60 \
--max-time 60 \
{{- if not $value.b64content }}
{{- if not $value.acceptHeader }}
-H "Accept: application/json" \
{{- else }}
-H "Accept: {{ $value.acceptHeader }}" \
{{- end }}
{{- if $value.token }}
-H "Authorization: token {{ $value.token }}" \
{{- end }}
{{- if $value.bearerToken }}
-H "Authorization: Bearer {{ $value.bearerToken }}" \
{{- end }}
{{- if $value.basic }}
-H "Authorization: Basic {{ $value.basic }}" \
{{- end }}
{{- if $value.gitlabToken }}
-H "PRIVATE-TOKEN: {{ $value.gitlabToken }}" \
{{- end }}
-H "Content-Type: application/json;charset=UTF-8" \
{{- end }}
{{- $dpPath := "" -}}
{{- range $kd := (index $dashboardProviders "dashboardproviders.yaml").providers }}
{{- if eq $kd.name $provider }}
{{- $dpPath = $kd.options.path }}
{{- end }}
{{- end }}
{{- if $value.url }}
"{{ $value.url }}" \
{{- else }}
"https://grafana.com/api/dashboards/{{ $value.gnetId }}/revisions/{{- if $value.revision -}}{{ $value.revision }}{{- else -}}1{{- end -}}/download" \
{{- end }}
{{- if $value.datasource }}
{{- if kindIs "string" $value.datasource }}
| sed '/-- .* --/! s/"datasource":.*,/"datasource": "{{ $value.datasource }}",/g' \
{{- end }}
{{- if kindIs "slice" $value.datasource }}
{{- range $value.datasource }}
| sed '/-- .* --/! s/${{"{"}}{{ .name }}}/{{ .value }}/g' \
{{- end }}
{{- end }}
{{- end }}
{{- if $value.b64content }}
| base64 -d \
{{- end }}
> "{{- if $dpPath -}}{{ $dpPath }}{{- else -}}/var/lib/grafana/dashboards/{{ $provider }}{{- end -}}/{{ $key }}.json"
{{ end }}
{{- end }}
{{- end }}
{{- end }}
{{- include "grafana.configData" . | nindent 2 }}
{{- end }}


@@ -33,14 +33,16 @@ spec:
{{- toYaml . | nindent 8 }}
{{- end }}
annotations:
checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
checksum/config: {{ include "grafana.configData" . | sha256sum }}
{{- if .Values.dashboards }}
checksum/dashboards-json-config: {{ include (print $.Template.BasePath "/dashboards-json-configmap.yaml") . | sha256sum }}
checksum/sc-dashboard-provider-config: {{ include (print $.Template.BasePath "/configmap-dashboard-provider.yaml") . | sha256sum }}
{{- end }}
checksum/sc-dashboard-provider-config: {{ include "grafana.configDashboardProviderData" . | sha256sum }}
{{- if and (or (and (not .Values.admin.existingSecret) (not .Values.env.GF_SECURITY_ADMIN_PASSWORD__FILE) (not .Values.env.GF_SECURITY_ADMIN_PASSWORD)) (and .Values.ldap.enabled (not .Values.ldap.existingSecret))) (not .Values.env.GF_SECURITY_DISABLE_INITIAL_ADMIN_CREATION) }}
checksum/secret: {{ include (print $.Template.BasePath "/secret.yaml") . | sha256sum }}
checksum/secret: {{ include "grafana.secretsData" . | sha256sum }}
{{- end }}
{{- if .Values.envRenderSecret }}
checksum/secret-env: {{ include (print $.Template.BasePath "/secret-env.yaml") . | sha256sum }}
checksum/secret-env: {{ tpl (toYaml .Values.envRenderSecret) . | sha256sum }}
{{- end }}
kubectl.kubernetes.io/default-container: {{ .Chart.Name }}
{{- with .Values.podAnnotations }}


@@ -34,7 +34,7 @@ spec:
rules:
{{- if .Values.ingress.hosts }}
{{- range .Values.ingress.hosts }}
- host: {{ tpl . $ }}
- host: {{ tpl . $ | quote }}
http:
paths:
{{- with $extraPaths }}


@@ -12,15 +12,5 @@ metadata:
{{- end }}
type: Opaque
data:
{{- if and (not .Values.env.GF_SECURITY_DISABLE_INITIAL_ADMIN_CREATION) (not .Values.admin.existingSecret) (not .Values.env.GF_SECURITY_ADMIN_PASSWORD__FILE) (not .Values.env.GF_SECURITY_ADMIN_PASSWORD) }}
admin-user: {{ .Values.adminUser | b64enc | quote }}
{{- if .Values.adminPassword }}
admin-password: {{ .Values.adminPassword | b64enc | quote }}
{{- else }}
admin-password: {{ include "grafana.password" . }}
{{- end }}
{{- end }}
{{- if not .Values.ldap.existingSecret }}
ldap-toml: {{ tpl .Values.ldap.config $ | b64enc | quote }}
{{- end }}
{{- include "grafana.secretsData" . | nindent 2 }}
{{- end }}


@@ -21,10 +21,13 @@ spec:
clusterIP: {{ . }}
{{- end }}
{{- else if eq .Values.service.type "LoadBalancer" }}
type: {{ .Values.service.type }}
type: LoadBalancer
{{- with .Values.service.loadBalancerIP }}
loadBalancerIP: {{ . }}
{{- end }}
{{- with .Values.service.loadBalancerClass }}
loadBalancerClass: {{ . }}
{{- end }}
{{- with .Values.service.loadBalancerSourceRanges }}
loadBalancerSourceRanges:
{{- toYaml . | nindent 4 }}
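
With the new field, a values block can pin the service to a specific load-balancer implementation (class name illustrative):

```yaml
service:
  enabled: true
  type: LoadBalancer
  loadBalancerClass: service.k8s.aws/nlb
  loadBalancerSourceRanges:
    - 10.0.0.0/8
```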


@@ -1,7 +1,7 @@
{{- if .Values.serviceAccount.create }}
{{- $root := . -}}
apiVersion: v1
kind: ServiceAccount
automountServiceAccountToken: {{ .Values.serviceAccount.autoMount | default .Values.serviceAccount.automountServiceAccountToken }}
metadata:
labels:
{{- include "grafana.labels" . | nindent 4 }}
@@ -10,7 +10,7 @@ metadata:
{{- end }}
{{- with .Values.serviceAccount.annotations }}
annotations:
{{- tpl (toYaml . | nindent 4) $root }}
{{- tpl (toYaml . | nindent 4) $ }}
{{- end }}
name: {{ include "grafana.serviceAccountName" . }}
namespace: {{ include "grafana.namespace" . }}


@@ -12,7 +12,7 @@ metadata:
labels:
{{- include "grafana.labels" . | nindent 4 }}
{{- with .Values.serviceMonitor.labels }}
{{- toYaml . | nindent 4 }}
{{- tpl (toYaml . | nindent 4) $ }}
{{- end }}
spec:
endpoints:
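
Passing the labels through `tpl` means they may now contain template expressions, e.g. (a sketch):

```yaml
serviceMonitor:
  enabled: true
  labels:
    release: '{{ .Release.Name }}'
```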


@@ -38,16 +38,22 @@ serviceAccount:
nameTest:
## ServiceAccount labels.
labels: {}
## Service account annotations. Can be templated.
# annotations:
# eks.amazonaws.com/role-arn: arn:aws:iam::123456789000:role/iam-role-name-here
autoMount: true
## Service account annotations. Can be templated.
# annotations:
# eks.amazonaws.com/role-arn: arn:aws:iam::123456789000:role/iam-role-name-here
## autoMount is deprecated in favor of automountServiceAccountToken
# autoMount: false
automountServiceAccountToken: false
replicas: 1
## Create a headless service for the deployment
headlessService: false
## Should the service account be auto mounted on the pod
automountServiceAccountToken: true
## Create HorizontalPodAutoscaler object for deployment type
#
autoscaling:
@@ -116,6 +122,16 @@ testFramework:
imagePullPolicy: IfNotPresent
securityContext: {}
# dns configuration for pod
dnsPolicy: ~
dnsConfig: {}
# nameservers:
# - 8.8.8.8
# options:
# - name: ndots
# value: "2"
# - name: edns0
securityContext:
runAsNonRoot: true
runAsUser: 472
@@ -197,6 +213,9 @@ gossipPortName: gossip
service:
enabled: true
type: ClusterIP
loadBalancerIP: ""
loadBalancerClass: ""
loadBalancerSourceRanges: []
port: 80
targetPort: 3000
# targetPort: 4181 To be used with a proxy extraContainer
@@ -477,6 +496,7 @@ envRenderSecret: {}
## Name is templated.
envFromSecrets: []
## - name: secret-name
## prefix: prefix
## optional: true
## The names of configmaps in the same kubernetes namespace which contain values to be added to the environment
@@ -485,6 +505,7 @@ envFromSecrets: []
## ref: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.23/#configmapenvsource-v1-core
envFromConfigMaps: []
## - name: configmap-name
## prefix: prefix
## optional: true
# Inject Kubernetes services as environment variables.
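
The new `prefix` option shown above prepends a string to every key imported from the referenced secret or configmap, mirroring the Kubernetes `envFrom` API (names hypothetical):

```yaml
envFromSecrets:
  - name: grafana-smtp-credentials
    prefix: GF_SMTP_
    optional: true
```
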
@@ -530,15 +551,22 @@ extraVolumeMounts: []
# - name: extra-volume-0
# mountPath: /mnt/volume0
# readOnly: true
# existingClaim: volume-claim
# - name: extra-volume-1
# mountPath: /mnt/volume1
# readOnly: true
# hostPath: /usr/shared/
# - name: grafana-secrets
# mountPath: /mnt/volume2
# csi: true
# data:
## Additional Grafana server volumes
extraVolumes: []
# - name: extra-volume-0
# existingClaim: volume-claim
# - name: extra-volume-1
# hostPath:
# path: /usr/shared/
# type: ""
# - name: grafana-secrets
# csi:
# driver: secrets-store.csi.k8s.io
# readOnly: true
# volumeAttributes:
@@ -811,7 +839,7 @@ sidecar:
# -- The Docker registry
registry: quay.io
repository: kiwigrid/k8s-sidecar
tag: 1.25.2
tag: 1.26.1
sha: ""
imagePullPolicy: IfNotPresent
resources: {}
@@ -944,6 +972,7 @@ sidecar:
enabled: false
# Additional environment variables for the datasources sidecar
env: {}
envValueFrom: {}
# Do not reprocess already processed unchanged resources on k8s API reconnect.
# ignoreAlreadyProcessed: true
# label that the configmaps with datasources are marked with
@@ -975,8 +1004,8 @@ sidecar:
# Absolute path to shell script to execute after a datasource got reloaded
script: null
skipReload: false
# Deploy the datasource sidecar as an initContainer in addition to a container.
# This is needed if skipReload is true, to load any datasources defined at startup time.
# Deploy the datasources sidecar as an initContainer.
initDatasources: false
# Sets the size limit of the datasource sidecar emptyDir volume
sizeLimit: {}
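A hedged sketch of the pairing the rewritten comment describes: with skipReload on, the init container is what loads datasources that already exist at startup:

sidecar:
  datasources:
    enabled: true
    skipReload: true
    initDatasources: true  # needed alongside skipReload to pick up startup-time datasources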
@@ -1280,3 +1309,13 @@ extraObjects: []
# data:
# - key: grafana-admin-password
# name: adminPassword
# assertNoLeakedSecrets is a helper function defined in _helpers.tpl that checks if secret
# values are not exposed in the rendered grafana.ini configmap. It is enabled by default.
#
# To pass values into grafana.ini without exposing them in a configmap, use variable expansion:
# https://grafana.com/docs/grafana/latest/setup-grafana/configure-grafana/#variable-expansion
#
# Alternatively, if you wish to allow secret values to be exposed in the rendered grafana.ini configmap,
# you can disable this check by setting assertNoLeakedSecrets to false.
assertNoLeakedSecrets: true
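To keep assertNoLeakedSecrets happy, secret material can be pulled into grafana.ini through Grafana's variable expansion instead of literal values; a hedged sketch (the env var name is illustrative):

grafana.ini:
  database:
    password: $__env{GF_DATABASE_PASSWORD}  # resolved at runtime, never rendered into the configmap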

View File

@@ -4,7 +4,7 @@ annotations:
- name: Chart Source
url: https://github.com/prometheus-community/helm-charts
apiVersion: v2
appVersion: 2.10.1
appVersion: 2.11.0
description: Install kube-state-metrics to generate and expose cluster-level metrics
home: https://github.com/kubernetes/kube-state-metrics/
keywords:
@@ -23,4 +23,4 @@ name: kube-state-metrics
sources:
- https://github.com/kubernetes/kube-state-metrics/
type: application
version: 5.15.2
version: 5.18.0

View File

@@ -49,10 +49,10 @@ spec:
{{- toYaml . | nindent 6 }}
{{- end }}
containers:
{{- $httpPort := ternary 9090 (.Values.service.port | default 8080) .Values.kubeRBACProxy.enabled}}
{{- $servicePort := ternary 9090 (.Values.service.port | default 8080) .Values.kubeRBACProxy.enabled}}
{{- $telemetryPort := ternary 9091 (.Values.selfMonitor.telemetryPort | default 8081) .Values.kubeRBACProxy.enabled}}
- name: {{ template "kube-state-metrics.name" . }}
{{- if .Values.autosharding.enabled }}
{{- if .Values.autosharding.enabled }}
env:
- name: POD_NAME
valueFrom:
@@ -67,7 +67,7 @@ spec:
{{- if .Values.extraArgs }}
{{- .Values.extraArgs | toYaml | nindent 8 }}
{{- end }}
- --port={{ $httpPort }}
- --port={{ $servicePort }}
{{- if .Values.collectors }}
- --resources={{ .Values.collectors | join "," }}
{{- end }}
@@ -115,10 +115,10 @@ spec:
{{- if .Values.selfMonitor.telemetryPort }}
- --telemetry-port={{ $telemetryPort }}
{{- end }}
{{- end }}
{{- if .Values.customResourceState.enabled }}
- --custom-resource-state-config-file=/etc/customresourcestate/config.yaml
{{- end }}
{{- end }}
{{- if or (.Values.kubeconfig.enabled) (.Values.customResourceState.enabled) (.Values.volumeMounts) }}
volumeMounts:
{{- if .Values.kubeconfig.enabled }}
@@ -147,17 +147,41 @@ spec:
{{- end }}
{{- end }}
livenessProbe:
failureThreshold: {{ .Values.livenessProbe.failureThreshold }}
httpGet:
{{- if .Values.hostNetwork }}
host: 127.0.0.1
{{- end }}
httpHeaders:
{{- range $_, $header := .Values.livenessProbe.httpGet.httpHeaders }}
- name: {{ $header.name }}
value: {{ $header.value }}
{{- end }}
path: /healthz
port: {{ $httpPort }}
initialDelaySeconds: 5
timeoutSeconds: 5
port: {{ $servicePort }}
scheme: {{ upper .Values.livenessProbe.httpGet.scheme }}
initialDelaySeconds: {{ .Values.livenessProbe.initialDelaySeconds }}
periodSeconds: {{ .Values.livenessProbe.periodSeconds }}
successThreshold: {{ .Values.livenessProbe.successThreshold }}
timeoutSeconds: {{ .Values.livenessProbe.timeoutSeconds }}
readinessProbe:
failureThreshold: {{ .Values.readinessProbe.failureThreshold }}
httpGet:
{{- if .Values.hostNetwork }}
host: 127.0.0.1
{{- end }}
httpHeaders:
{{- range $_, $header := .Values.readinessProbe.httpGet.httpHeaders }}
- name: {{ $header.name }}
value: {{ $header.value }}
{{- end }}
path: /
port: {{ $httpPort }}
initialDelaySeconds: 5
timeoutSeconds: 5
port: {{ $servicePort }}
scheme: {{ upper .Values.readinessProbe.httpGet.scheme }}
initialDelaySeconds: {{ .Values.readinessProbe.initialDelaySeconds }}
periodSeconds: {{ .Values.readinessProbe.periodSeconds }}
successThreshold: {{ .Values.readinessProbe.successThreshold }}
timeoutSeconds: {{ .Values.readinessProbe.timeoutSeconds }}
{{- if .Values.resources }}
resources:
{{ toYaml .Values.resources | indent 10 }}
@@ -173,7 +197,7 @@ spec:
{{- .Values.kubeRBACProxy.extraArgs | toYaml | nindent 8 }}
{{- end }}
- --secure-listen-address=:{{ .Values.service.port | default 8080}}
- --upstream=http://127.0.0.1:{{ $httpPort }}/
- --upstream=http://127.0.0.1:{{ $servicePort }}/
- --proxy-endpoints-port=8888
- --config-file=/etc/kube-rbac-proxy-config/config-file.yaml
volumeMounts:
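For orientation, the renamed $servicePort resolves through the ternary above: with kubeRBACProxy enabled it is pinned to 9090 (the listener the proxy fronts), otherwise it is service.port or the 8080 default. A hedged values sketch of the proxied layout:

kubeRBACProxy:
  enabled: true
service:
  port: 8080  # exposed by the proxy's --secure-listen-address; kube-state-metrics itself binds 9090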

View File

@@ -10,6 +10,8 @@ metadata:
annotations:
{{ toYaml .Values.serviceAccount.annotations | indent 4 }}
{{- end }}
{{- if or .Values.serviceAccount.imagePullSecrets .Values.global.imagePullSecrets }}
imagePullSecrets:
{{- include "kube-state-metrics.imagePullSecrets" (dict "Values" .Values "imagePullSecrets" .Values.serviceAccount.imagePullSecrets) | indent 2 }}
{{- end }}
{{- end -}}

View File

@@ -37,7 +37,10 @@ autosharding:
replicas: 1
# Change the deployment strategy when autosharding is disabled
# Change the deployment strategy when autosharding is disabled.
# ref: https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#strategy
# The default is "RollingUpdate" as per Kubernetes defaults.
# During a release, 'RollingUpdate' can lead to two running instances for a short period of time while 'Recreate' can create a small gap in data.
# updateStrategy: Recreate
# Number of old ReplicaSets to retain to allow rollback
@@ -96,7 +99,7 @@ kubeRBACProxy:
image:
registry: quay.io
repository: brancz/kube-rbac-proxy
tag: v0.14.0
tag: v0.16.0
sha: ""
pullPolicy: IfNotPresent
@@ -108,7 +111,12 @@ kubeRBACProxy:
## Specify security settings for a Container
## Allows overrides and additional options compared to (Pod) securityContext
## Ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-container
containerSecurityContext: {}
containerSecurityContext:
readOnlyRootFilesystem: true
allowPrivilegeEscalation: false
capabilities:
drop:
- ALL
resources: {}
# We usually recommend not to specify default resources and to leave this as a conscious
@@ -245,6 +253,7 @@ securityContext:
## Allows overrides and additional options compared to (Pod) securityContext
## Ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-container
containerSecurityContext:
readOnlyRootFilesystem: true
allowPrivilegeEscalation: false
capabilities:
drop:
@@ -454,3 +463,27 @@ containers: []
initContainers: []
# - name: crd-sidecar
# image: kiwigrid/k8s-sidecar:latest
## Liveness probe
##
livenessProbe:
failureThreshold: 3
httpGet:
httpHeaders: []
scheme: http
initialDelaySeconds: 5
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 5
## Readiness probe
##
readinessProbe:
failureThreshold: 3
httpGet:
httpHeaders: []
scheme: http
initialDelaySeconds: 5
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 5
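Since the probes are now fully value-driven, overrides can stay partial; a hedged sketch that relaxes the liveness probe (values chosen for illustration):

livenessProbe:
  failureThreshold: 5
  periodSeconds: 30
  httpGet:
    scheme: https  # upper-cased by the template at render time
    httpHeaders: []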

View File

@@ -22,4 +22,4 @@ name: prometheus-node-exporter
sources:
- https://github.com/prometheus/node_exporter/
type: application
version: 4.24.0
version: 4.32.0

Some files were not shown because too many files have changed in this diff.