Compare commits

...

60 Commits

Author SHA1 Message Date
Stefan Reimer 95ed3a6969 Merge pull request 'chore(deps): update kubezero-telemetry-dependencies' (#206) from renovate/kubezero-telemetry-kubezero-telemetry-dependencies into master
Reviewed-on: #206
2024-05-18 12:56:19 +00:00
Stefan Reimer 40760d4a8e feat: bump ci tools, fix gitea PVC 2024-05-17 11:37:57 +00:00
Stefan Reimer a488b14f97 Squashed '.ci/' changes from 227e39f..2c44e4f
2c44e4f Disable concurrent builds
7144a42 Improve Trivy scanning logic
c1a48a6 Remove auto stash push / pop as being too dangerous
318c19e Add merge comment for subtree
22ed100 Fix custom branch docker tags

git-subtree-dir: .ci
git-subtree-split: 2c44e4fd8550d30fba503a2bcccec8e0bac1c151
2024-05-17 11:36:26 +00:00
Stefan Reimer 7cd1cd0c5e Merge pull request 'chore(deps): update kubezero-argo-dependencies' (#191) from renovate/kubezero-argo-kubezero-argo-dependencies into master
Reviewed-on: #191
2024-05-17 11:35:51 +00:00
Stefan Reimer 48c816b32c Merge pull request 'chore(deps): update kubezero-ci-dependencies' (#208) from renovate/kubezero-ci-kubezero-ci-dependencies into master
Reviewed-on: #208
2024-05-17 11:35:22 +00:00
Renovate Bot e0a24a0af9 chore(deps): update kubezero-argo-dependencies 2024-05-17 11:10:56 +00:00
Renovate Bot d0ed102d57 chore(deps): update kubezero-ci-dependencies 2024-05-17 11:09:41 +00:00
Renovate Bot bbeeb0db3d chore(deps): update kubezero-telemetry-dependencies 2024-05-15 03:08:39 +00:00
Stefan Reimer f8e7a85d9c fix: minor fixes for CI and Telemetry 2024-04-25 15:36:09 +00:00
Stefan Reimer 8bd713c1c7 feat: first step to migrate the logging pipeline into Telemetry 2024-04-25 15:33:49 +00:00
Stefan Reimer 73d457d1b9 doc: update README 2024-04-25 15:21:55 +00:00
Stefan Reimer 46ccd445e0 Merge pull request 'chore(deps): update helm release fluent-bit to v0.46.2' (#192) from renovate/kubezero-logging-kubezero-logging-dependencies into master
Reviewed-on: #192
2024-04-25 14:44:51 +00:00
Stefan Reimer 3c8a2d7dbd Merge pull request 'chore(deps): update helm release opentelemetry-collector to v0.89.0' (#195) from renovate/kubezero-telemetry-kubezero-telemetry-dependencies into master
Reviewed-on: #195
2024-04-25 14:41:46 +00:00
Stefan Reimer 229f5bc759 Merge pull request 'chore(deps): update helm release jaeger to v3' (#201) from renovate/kubezero-telemetry-major-kubezero-telemetry-dependencies into master
Reviewed-on: #201
2024-04-25 14:41:16 +00:00
Stefan Reimer 0060ec1ed1 chore: version bump CI tools 2024-04-25 14:36:22 +00:00
Stefan Reimer f6b54cde36 Merge pull request 'chore(deps): update kubezero-ci-dependencies' (#197) from renovate/kubezero-ci-kubezero-ci-dependencies into master
Reviewed-on: #197
2024-04-25 11:11:11 +00:00
Stefan Reimer b9ee65d128 feat: update Istio to 1.21.2 2024-04-25 10:37:22 +00:00
Stefan Reimer 76cc875990 Merge pull request 'chore(deps): update kubezero-istio-dependencies' (#196) from renovate/kubezero-istio-kubezero-istio-dependencies into master
Reviewed-on: #196
2024-04-25 09:57:06 +00:00
Stefan Reimer 4a54fde888 Merge pull request 'chore(deps): update helm release gateway to v1.21.2' (#203) from renovate/kubezero-istio-gateway-kubezero-istio-gateway-dependencies into master
Reviewed-on: #203
2024-04-25 09:56:47 +00:00
Renovate Bot 2957b898d9 chore(deps): update kubezero-ci-dependencies 2024-04-25 03:06:42 +00:00
Renovate Bot 42d5000fe0 chore(deps): update helm release jaeger to v3 2024-04-24 03:07:05 +00:00
Stefan Reimer e7a66a584b Merge pull request 'chore(deps): update helm release opensearch-operator to v2.6.0' (#204) from renovate/kubezero-operators-kubezero-operators-dependencies into master
Reviewed-on: #204
2024-04-23 11:34:49 +00:00
Renovate Bot 8994289608 chore(deps): update helm release opensearch-operator to v2.6.0 2024-04-23 03:11:32 +00:00
Renovate Bot c93b4c8b52 chore(deps): update kubezero-istio-dependencies 2024-04-23 03:11:13 +00:00
Renovate Bot 8d27fc22a0 chore(deps): update helm release gateway to v1.21.2 2024-04-23 03:09:56 +00:00
Stefan Reimer 7eba80b54d fix: latest nvidia-tooling 2024-04-22 10:51:45 +00:00
Renovate Bot d66cdb42b8 chore(deps): update helm release opentelemetry-collector to v0.89.0 2024-04-20 03:08:15 +00:00
Stefan Reimer 9cfeaec3a8 Merge pull request 'chore(deps): update helm release nvidia-device-plugin to v0.15.0' (#200) from renovate/kubezero-addons-kubezero-addons-dependencies into master
Reviewed-on: #200
2024-04-19 12:24:08 +00:00
Renovate Bot 7bac355303 chore(deps): update helm release fluent-bit to v0.46.2 2024-04-19 03:07:14 +00:00
Renovate Bot dedfd1f7a3 chore(deps): update helm release nvidia-device-plugin to v0.15.0 2024-04-18 03:07:41 +00:00
Stefan Reimer 193967f600 security: release 1.28.9 to follow upstream security patches 2024-04-17 10:26:01 +00:00
Stefan Reimer 5be0f7087e Merge pull request 'chore(deps): update kubezero-ci-dependencies' (#177) from renovate/kubezero-ci-kubezero-ci-dependencies into master
Reviewed-on: #177
2024-04-15 13:46:13 +00:00
Stefan Reimer b9e52bc2d9 fix: make Jaeger work again 2024-04-15 13:25:01 +00:00
Stefan Reimer 2cdb30b178 Merge pull request 'chore(deps): update kubezero-telemetry-dependencies' (#189) from renovate/kubezero-telemetry-kubezero-telemetry-dependencies into master
Reviewed-on: #189
2024-04-15 13:05:22 +00:00
Renovate Bot 828c467d37 chore(deps): update kubezero-telemetry-dependencies 2024-04-15 03:05:29 +00:00
Renovate Bot dbd1ade98c chore(deps): update kubezero-ci-dependencies 2024-04-15 03:05:17 +00:00
Stefan Reimer 730020b329 fix: remove legacy argocd resources properly 2024-04-11 14:42:15 +01:00
Stefan Reimer 1caa01b28b docs: some more details for v1.28 2024-04-09 15:15:44 +00:00
Stefan Reimer c91d570857 Chore: various version bumps 2024-04-09 15:13:16 +00:00
Stefan Reimer da0c33b02b chore: metrics version bump 2024-04-09 14:56:16 +00:00
Stefan Reimer 5c6fd9bd2c security: Istio version bump 2024-04-09 14:56:16 +00:00
Stefan Reimer 995d159d3e Merge pull request 'chore(deps): update kubezero-metrics-dependencies' (#184) from renovate/kubezero-metrics-kubezero-metrics-dependencies into master
Reviewed-on: #184
2024-04-09 14:53:50 +00:00
Renovate Bot c9dc123eff chore(deps): update kubezero-metrics-dependencies 2024-04-09 14:51:51 +00:00
Stefan Reimer 5237b002b4 Merge pull request 'chore(deps): update kubezero-addons-dependencies' (#175) from renovate/kubezero-addons-kubezero-addons-dependencies into master
Reviewed-on: #175
2024-04-09 14:47:57 +00:00
Stefan Reimer 61d373af7a Merge pull request 'chore(deps): update helm release aws-efs-csi-driver to v2.5.7' (#182) from renovate/kubezero-storage-kubezero-storage-dependencies into master
Reviewed-on: #182
2024-04-09 14:47:04 +00:00
Stefan Reimer 012c26d3d6 Merge pull request 'chore(deps): update helm release argo-cd to v6.7.10' (#181) from renovate/kubezero-argo-kubezero-argo-dependencies into master
Reviewed-on: #181
2024-04-09 14:44:51 +00:00
Stefan Reimer 945642a551 Merge pull request 'chore(deps): update helm release opentelemetry-collector to v0.86.2' (#183) from renovate/kubezero-telemetry-kubezero-telemetry-dependencies into master
Reviewed-on: #183
2024-04-09 14:44:30 +00:00
Stefan Reimer 9d835e4385 Merge pull request 'chore(deps): update helm release kube-prometheus-stack to v58' (#185) from renovate/kubezero-metrics-major-kubezero-metrics-dependencies into master
Reviewed-on: #185
2024-04-09 14:42:30 +00:00
Stefan Reimer c057f35547 Merge pull request 'chore(deps): update helm release gateway to v1.21.1' (#186) from renovate/kubezero-istio-gateway-kubezero-istio-gateway-dependencies into master
Reviewed-on: #186
2024-04-09 14:41:49 +00:00
Stefan Reimer b29774d6d5 Merge pull request 'chore(deps): update kubezero-istio-dependencies to v1.21.1' (#187) from renovate/kubezero-istio-kubezero-istio-dependencies into master
Reviewed-on: #187
2024-04-09 14:41:34 +00:00
Renovate Bot e748303864 chore(deps): update kubezero-istio-dependencies to v1.21.1 2024-04-09 03:09:02 +00:00
Renovate Bot 3f8a2c929c chore(deps): update helm release gateway to v1.21.1 2024-04-09 03:08:15 +00:00
Stefan Reimer 7a80650d9c fix: disable feature flag for now 2024-04-08 18:09:22 +00:00
Stefan Reimer 75fc295066 fix: upgrade flow tweaks 2024-04-08 19:08:45 +01:00
Stefan Reimer 705f36f9aa feat: logging module version bumps 2024-04-08 12:30:01 +00:00
Renovate Bot aa597a4970 chore(deps): update helm release kube-prometheus-stack to v58 2024-04-07 03:04:53 +00:00
Renovate Bot 0e4ed20972 chore(deps): update kubezero-addons-dependencies 2024-04-06 03:07:00 +00:00
Renovate Bot 773f968d90 chore(deps): update helm release opentelemetry-collector to v0.86.2 2024-04-06 03:06:56 +00:00
Renovate Bot c54c9d78c4 chore(deps): update helm release argo-cd to v6.7.10 2024-04-06 03:06:38 +00:00
Renovate Bot f8605e4b07 chore(deps): update helm release aws-efs-csi-driver to v2.5.7 2024-03-30 03:05:51 +00:00
105 changed files with 6558 additions and 278 deletions

View File

@@ -1,25 +1,26 @@
# Parse version from latest git semver tag
GIT_TAG ?= $(shell git describe --tags --match v*.*.* 2>/dev/null || git rev-parse --short HEAD 2>/dev/null)
GIT_BRANCH ?= $(shell git rev-parse --abbrev-ref HEAD 2>/dev/null | sed -e 's/[^a-zA-Z0-9]/-/g')
GIT_BRANCH ?= $(shell git rev-parse --abbrev-ref HEAD 2>/dev/null)
TAG := $(GIT_TAG)
TAG ::= $(GIT_TAG)
# append branch name to tag if NOT main nor master
ifeq (,$(filter main master, $(GIT_BRANCH)))
# If branch is substring of tag, omit branch name
ifeq ($(findstring $(GIT_BRANCH), $(GIT_TAG)),)
# only append branch name if not equal tag
ifneq ($(GIT_TAG), $(GIT_BRANCH))
TAG = $(GIT_TAG)-$(GIT_BRANCH)
# Sanitize GIT_BRANCH to allowed Docker tag character set
TAG = $(GIT_TAG)-$(shell echo $$GIT_BRANCH | sed -e 's/[^a-zA-Z0-9]/-/g')
endif
endif
endif
ARCH := amd64
ALL_ARCHS := amd64 arm64
ARCH ::= amd64
ALL_ARCHS ::= amd64 arm64
_ARCH = $(or $(filter $(ARCH),$(ALL_ARCHS)),$(error $$ARCH [$(ARCH)] must be exactly one of "$(ALL_ARCHS)"))
ifneq ($(TRIVY_REMOTE),)
TRIVY_OPTS := --server $(TRIVY_REMOTE)
TRIVY_OPTS ::= --server $(TRIVY_REMOTE)
endif
.SILENT: ; # no need for @
@@ -45,7 +46,7 @@ test:: ## test built artificats
scan: ## Scan image using trivy
echo "Scanning $(IMAGE):$(TAG)-$(_ARCH) using Trivy $(TRIVY_REMOTE)"
trivy image $(TRIVY_OPTS) localhost/$(IMAGE):$(TAG)-$(_ARCH)
trivy image $(TRIVY_OPTS) --quiet --no-progress localhost/$(IMAGE):$(TAG)-$(_ARCH)
# first tag and push all actual images
# create new manifest for each tag and add all available TAG-ARCH before pushing
@@ -77,7 +78,7 @@ rm-image:
## some useful tasks during development
ci-pull-upstream: ## pull latest shared .ci subtree
git stash && git subtree pull --prefix .ci ssh://git@git.zero-downtime.net/ZeroDownTime/ci-tools-lib.git master --squash && git stash pop
git subtree pull --prefix .ci ssh://git@git.zero-downtime.net/ZeroDownTime/ci-tools-lib.git master --squash -m "Merge latest ci-tools-lib"
create-repo: ## create new AWS ECR public repository
aws ecr-public create-repository --repository-name $(IMAGE) --region $(REGION)
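For reference, a quick sketch of what the reworked tag logic produces (tag and branch names are hypothetical):

# GIT_TAG=v1.2.3, GIT_BRANCH=feat/gitea-pvc (hypothetical values)
echo "feat/gitea-pvc" | sed -e 's/[^a-zA-Z0-9]/-/g'   # -> feat-gitea-pvc
# TAG therefore resolves to v1.2.3-feat-gitea-pvc, a valid Docker tag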

View File

@@ -2,6 +2,9 @@
def call(Map config=[:]) {
pipeline {
options {
disableConcurrentBuilds()
}
agent {
node {
label 'podman-aws-trivy'
@@ -10,6 +13,8 @@ def call(Map config=[:]) {
stages {
stage('Prepare') {
steps {
sh 'mkdir -p reports'
// we set pull tags as project adv. options
// pull tags
//withCredentials([gitUsernamePassword(credentialsId: 'gitea-jenkins-user')]) {
@@ -35,12 +40,13 @@ def call(Map config=[:]) {
// Scan via trivy
stage('Scan') {
environment {
TRIVY_FORMAT = "template"
TRIVY_OUTPUT = "reports/trivy.html"
}
steps {
sh 'mkdir -p reports && make scan'
// we always scan and create the full json report
sh 'TRIVY_FORMAT=json TRIVY_OUTPUT="reports/trivy.json" make scan'
// render custom full html report
sh 'trivy convert -f template -t @/home/jenkins/html.tpl -o reports/trivy.html reports/trivy.json'
publishHTML target: [
allowMissing: true,
alwaysLinkToLastBuild: true,
@@ -50,13 +56,12 @@ def call(Map config=[:]) {
reportName: 'TrivyScan',
reportTitles: 'TrivyScan'
]
sh 'echo "Trivy report at: $BUILD_URL/TrivyScan"'
// Scan again and fail on CRITICAL vulns, if not overridden
// fail build if issues found above trivy threshold
script {
if (config.trivyFail == 'NONE') {
echo 'trivyFail == NONE, review Trivy report manually. Proceeding ...'
} else {
sh "TRIVY_EXIT_CODE=1 TRIVY_SEVERITY=${config.trivyFail} make scan"
if ( config.trivyFail ) {
sh "TRIVY_SEVERITY=${config.trivyFail} trivy convert --report summary --exit-code 1 reports/trivy.json"
}
}
}
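Taken together, the reworked stage amounts to the following commands (a sketch; assumes a trivy release that ships the convert subcommand, with CRITICAL standing in for config.trivyFail):

# scan once, keeping the full JSON report
TRIVY_FORMAT=json TRIVY_OUTPUT="reports/trivy.json" make scan
# render the custom HTML report from the JSON, no rescan needed
trivy convert -f template -t @/home/jenkins/html.tpl -o reports/trivy.html reports/trivy.json
# gate the build on the configured severity, again without rescanning
TRIVY_SEVERITY=CRITICAL trivy convert --report summary --exit-code 1 reports/trivy.json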

View File

@@ -3,7 +3,7 @@ ARG ALPINE_VERSION=3.19
FROM docker.io/alpine:${ALPINE_VERSION}
ARG ALPINE_VERSION
ARG KUBE_VERSION=1.28.8
ARG KUBE_VERSION=1.28.9
RUN cd /etc/apk/keys && \
wget "https://cdn.zero-downtime.net/alpine/stefan@zero-downtime.net-61bb6bfb.rsa.pub" && \

View File

@@ -44,8 +44,8 @@ gantt
# Components
## OS
- all compute nodes are running on Alpine V3.18
- 2 GB encrypted root file system
- all compute nodes are running on Alpine V3.19
- 1 or 2 GB encrypted root file system
- no external dependencies at boot time, apart from container registries
- minimal attack surface
- extremely small memory footprint / overhead

View File

@@ -23,21 +23,27 @@ control_plane_upgrade kubeadm_upgrade
# shellcheck disable=SC2015
#argo_used && kubectl edit app kubezero -n argocd || kubectl edit cm kubezero-values -n kube-system
### v1.28
# - remove old argocd app, all resources will be taken over by argo.argo-cd
argo_used && rc=$? || rc=$?
if [ $rc -eq 0 ]; then
kubectl patch app argocd -n argocd \
--type json \
--patch='[ { "op": "remove", "path": "/metadata/finalizers" } ]' && \
kubectl delete app argocd -n argocd || true
# remove legacy argocd app resources, but NOT kubezero-git-sync nor the appproject
kubectl api-resources --verbs=list --namespaced -o name | grep -ve 'app.*argoproj' | xargs -n 1 kubectl delete --ignore-not-found -l argocd.argoproj.io/instance=argocd -n argocd
fi
# upgrade modules
control_plane_upgrade "apply_network apply_addons, apply_storage, apply_operators"
control_plane_upgrade "apply_network, apply_addons, apply_storage, apply_operators"
echo "Checking that all pods in kube-system are running ..."
waitSystemPodsRunning
echo "Applying remaining KubeZero modules..."
### v1.28
# - remove old argocd app, all resources will be taken over by argo.argo-cd
kubectl patch app argocd -n argocd \
--type json \
--patch='[ { "op": "remove", "path": "/metadata/finalizers" } ]' && \
kubectl delete app argocd -n argocd || true
control_plane_upgrade "apply_cert-manager, apply_istio, apply_istio-ingress, apply_istio-private-ingress, apply_logging, apply_metrics, apply_telemetry, apply_argo"
# Trigger backup of upgraded cluster state
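A read-only variant of the label-based cleanup above can preview what would be deleted (same pipeline, delete swapped for get):

kubectl api-resources --verbs=list --namespaced -o name | grep -ve 'app.*argoproj' | \
  xargs -n 1 kubectl get --ignore-not-found -l argocd.argoproj.io/instance=argocd -n argocd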

View File

@@ -2,7 +2,7 @@ apiVersion: v2
name: kubeadm
description: KubeZero Kubeadm cluster config
type: application
version: 1.28.8
version: 1.28.9
home: https://kubezero.com
icon: https://cdn.zero-downtime.net/assets/kubezero/logo-small-64.png
keywords:

View File

@@ -2,7 +2,8 @@ apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: {{ .Chart.Version }}
clusterName: {{ .Values.global.clusterName }}
#featureGates:
featureGates:
EtcdLearnerMode: true # becomes beta in 1.29
# NonGracefulFailover: true
controlPlaneEndpoint: {{ .Values.api.endpoint }}
networking:

View File

@@ -6,7 +6,7 @@ cgroupDriver: cgroupfs
logging:
format: json
hairpinMode: hairpin-veth
ContainerRuntimeEndpoint: "unix:///var/run/crio/crio.sock"
containerRuntimeEndpoint: "unix:///var/run/crio/crio.sock"
{{- if .Values.systemd }}
resolvConf: /run/systemd/resolve/resolv.conf
{{- end }}

View File

@@ -1,9 +1,11 @@
{{- /* Feature gates for all control plane components */ -}}
{{- /* Issues: "MemoryQoS" */ -}}
{{- /* v1.30?: "NodeSwap" */ -}}
{{- /* v1.29: remove/beta now "SidecarContainers" */ -}}
{{- /* Issues: MemoryQoS */ -}}
{{- /* v1.28: PodAndContainerStatsFromCRI still not working */ -}}
{{- /* v1.28: UnknownVersionInteroperabilityProxy requires StorageVersionAPI which is still alpha in 1.30 */ -}}
{{- /* v1.29: remove/beta SidecarContainers */ -}}
{{- /* v1.30: remove/beta KubeProxyDrainingTerminatingNodes */ -}}
{{- define "kubeadm.featuregates" }}
{{- $gates := list "CustomCPUCFSQuotaPeriod" "SidecarContainers" "PodAndContainerStatsFromCRI" }}
{{- $gates := list "CustomCPUCFSQuotaPeriod" "SidecarContainers" "KubeProxyDrainingTerminatingNodes" }}
{{- if eq .return "csv" }}
{{- range $key := $gates }}
{{- $key }}=true,
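With the updated gate list, the csv form of this helper renders as (a sketch of the loop output shown above, trailing comma included):

CustomCPUCFSQuotaPeriod=true,SidecarContainers=true,KubeProxyDrainingTerminatingNodes=true,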

View File

@@ -2,7 +2,7 @@ apiVersion: v2
name: kubezero-addons
description: KubeZero umbrella chart for various optional cluster addons
type: application
version: 0.8.5
version: 0.8.7
appVersion: v1.28
home: https://kubezero.com
icon: https://cdn.zero-downtime.net/assets/kubezero/logo-small-64.png
@@ -20,7 +20,7 @@ maintainers:
email: stefan@zero-downtime.net
dependencies:
- name: external-dns
version: 1.14.3
version: 1.14.4
repository: https://kubernetes-sigs.github.io/external-dns/
condition: external-dns.enabled
- name: cluster-autoscaler
@@ -28,12 +28,12 @@ dependencies:
repository: https://kubernetes.github.io/autoscaler
condition: cluster-autoscaler.enabled
- name: nvidia-device-plugin
version: 0.14.5
version: 0.15.0
# https://github.com/NVIDIA/k8s-device-plugin
repository: https://nvidia.github.io/k8s-device-plugin
condition: nvidia-device-plugin.enabled
- name: sealed-secrets
version: 2.15.1
version: 2.15.3
repository: https://bitnami-labs.github.io/sealed-secrets
condition: sealed-secrets.enabled
- name: aws-node-termination-handler

View File

@@ -1,6 +1,6 @@
# kubezero-addons
![Version: 0.8.5](https://img.shields.io/badge/Version-0.8.5-informational?style=flat-square) ![Type: application](https://img.shields.io/badge/Type-application-informational?style=flat-square) ![AppVersion: v1.28](https://img.shields.io/badge/AppVersion-v1.28-informational?style=flat-square)
![Version: 0.8.7](https://img.shields.io/badge/Version-0.8.7-informational?style=flat-square) ![Type: application](https://img.shields.io/badge/Type-application-informational?style=flat-square) ![AppVersion: v1.28](https://img.shields.io/badge/AppVersion-v1.28-informational?style=flat-square)
KubeZero umbrella chart for various optional cluster addons
@@ -18,10 +18,10 @@ Kubernetes: `>= 1.26.0`
| Repository | Name | Version |
|------------|------|---------|
| https://bitnami-labs.github.io/sealed-secrets | sealed-secrets | 2.15.1 |
| https://kubernetes-sigs.github.io/external-dns/ | external-dns | 1.14.3 |
| https://bitnami-labs.github.io/sealed-secrets | sealed-secrets | 2.15.3 |
| https://kubernetes-sigs.github.io/external-dns/ | external-dns | 1.14.4 |
| https://kubernetes.github.io/autoscaler | cluster-autoscaler | 9.36.0 |
| https://nvidia.github.io/k8s-device-plugin | nvidia-device-plugin | 0.14.5 |
| https://nvidia.github.io/k8s-device-plugin | nvidia-device-plugin | 0.15.0 |
| https://twin.github.io/helm-charts | aws-eks-asg-rolling-update-handler | 1.5.0 |
| oci://public.ecr.aws/aws-ec2/helm | aws-node-termination-handler | 0.23.0 |
@@ -73,7 +73,6 @@ Device plugin for [AWS Neuron](https://aws.amazon.com/machine-learning/neuron/)
| aws-eks-asg-rolling-update-handler.securityContext.seccompProfile.type | string | `"RuntimeDefault"` | |
| aws-eks-asg-rolling-update-handler.tolerations[0].effect | string | `"NoSchedule"` | |
| aws-eks-asg-rolling-update-handler.tolerations[0].key | string | `"node-role.kubernetes.io/control-plane"` | |
| aws-node-termination-handler.checkASGTagBeforeDraining | bool | `false` | |
| aws-node-termination-handler.deleteLocalData | bool | `true` | |
| aws-node-termination-handler.emitKubernetesEvents | bool | `true` | |
| aws-node-termination-handler.enableProbesServer | bool | `true` | |

View File

@@ -24,7 +24,7 @@ spec:
volumeMounts:
- name: host
mountPath: /host
readOnly: true
#readOnly: true
- name: workdir
mountPath: /tmp
env:

View File

@@ -1,7 +1,7 @@
apiVersion: v2
description: KubeZero Argo - Events, Workflow, CD
name: kubezero-argo
version: 0.2.0
version: 0.2.2
home: https://kubezero.com
icon: https://cdn.zero-downtime.net/assets/kubezero/logo-small-64.png
keywords:
@@ -22,7 +22,7 @@ dependencies:
repository: https://argoproj.github.io/argo-helm
condition: argo-events.enabled
- name: argo-cd
version: 6.7.3
version: 6.9.2
repository: https://argoproj.github.io/argo-helm
condition: argo-cd.enabled
- name: argocd-apps
@@ -30,7 +30,7 @@ dependencies:
repository: https://argoproj.github.io/argo-helm
condition: argo-cd.enabled
- name: argocd-image-updater
version: 0.9.6
version: 0.10.0
repository: https://argoproj.github.io/argo-helm
condition: argocd-image-updater.enabled
kubeVersion: ">= 1.26.0"

View File

@@ -1,6 +1,6 @@
# kubezero-argo
![Version: 0.2.0](https://img.shields.io/badge/Version-0.2.0-informational?style=flat-square)
![Version: 0.2.1](https://img.shields.io/badge/Version-0.2.1-informational?style=flat-square)
KubeZero Argo - Events, Workflow, CD
@@ -18,7 +18,7 @@ Kubernetes: `>= 1.26.0`
| Repository | Name | Version |
|------------|------|---------|
| https://argoproj.github.io/argo-helm | argo-cd | 6.7.3 |
| https://argoproj.github.io/argo-helm | argo-cd | 6.7.10 |
| https://argoproj.github.io/argo-helm | argo-events | 2.4.4 |
| https://argoproj.github.io/argo-helm | argocd-apps | 2.0.0 |
| https://argoproj.github.io/argo-helm | argocd-image-updater | 0.9.6 |
@@ -30,18 +30,18 @@ Kubernetes: `>= 1.26.0`
|-----|------|---------|-------------|
| argo-cd.applicationSet.enabled | bool | `false` | |
| argo-cd.configs.cm."resource.customizations" | string | `"cert-manager.io/Certificate:\n # Lua script for customizing the health status assessment\n health.lua: |\n hs = {}\n if obj.status ~= nil then\n if obj.status.conditions ~= nil then\n for i, condition in ipairs(obj.status.conditions) do\n if condition.type == \"Ready\" and condition.status == \"False\" then\n hs.status = \"Degraded\"\n hs.message = condition.message\n return hs\n end\n if condition.type == \"Ready\" and condition.status == \"True\" then\n hs.status = \"Healthy\"\n hs.message = condition.message\n return hs\n end\n end\n end\n end\n hs.status = \"Progressing\"\n hs.message = \"Waiting for certificate\"\n return hs\n"` | |
| argo-cd.configs.cm."timeout.reconciliation" | int | `300` | |
| argo-cd.configs.cm."timeout.reconciliation" | string | `"300s"` | |
| argo-cd.configs.cm."ui.bannercontent" | string | `"KubeZero v1.27 - Release notes"` | |
| argo-cd.configs.cm."ui.bannerpermanent" | string | `"true"` | |
| argo-cd.configs.cm."ui.bannerposition" | string | `"bottom"` | |
| argo-cd.configs.cm."ui.bannerurl" | string | `"https://kubezero.com/releases/v1.27"` | |
| argo-cd.configs.cm.url | string | `"https://argocd.example.com"` | |
| argo-cd.configs.knownHosts.data.ssh_known_hosts | string | `"bitbucket.org ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPIQmuzMBuKdWeF4+a2sjSSpBK0iqitSQ+5BM9KhpexuGt20JpTVM7u5BDZngncgrqDMbWdxMWWOGtZ9UgbqgZE=\nbitbucket.org ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIazEu89wgQZ4bqs3d63QSMzYVa0MuJ2e2gKTKqu+UUO\nbitbucket.org ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDQeJzhupRu0u0cdegZIa8e86EG2qOCsIsD1Xw0xSeiPDlCr7kq97NLmMbpKTX6Esc30NuoqEEHCuc7yWtwp8dI76EEEB1VqY9QJq6vk+aySyboD5QF61I/1WeTwu+deCbgKMGbUijeXhtfbxSxm6JwGrXrhBdofTsbKRUsrN1WoNgUa8uqN1Vx6WAJw1JHPhglEGGHea6QICwJOAr/6mrui/oB7pkaWKHj3z7d1IC4KWLtY47elvjbaTlkN04Kc/5LFEirorGYVbt15kAUlqGM65pk6ZBxtaO3+30LVlORZkxOh+LKL/BvbZ/iRNhItLqNyieoQj/uh/7Iv4uyH/cV/0b4WDSd3DptigWq84lJubb9t/DnZlrJazxyDCulTmKdOR7vs9gMTo+uoIrPSb8ScTtvw65+odKAlBj59dhnVp9zd7QUojOpXlL62Aw56U4oO+FALuevvMjiWeavKhJqlR7i5n9srYcrNV7ttmDw7kf/97P5zauIhxcjX+xHv4M=\ngithub.com ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEmKSENjQEezOmxkZMy7opKgwFB9nkt5YRrYMjNuG5N87uRgg6CLrbo5wAdT/y6v0mKV0U2w0WZ2YB/++Tpockg=\ngithub.com ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOMqqnkVzrm0SdG6UOoqKLsabgH5C9okWi0dh2l9GKJl\ngithub.com ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAq2A7hRGmdnm9tUDbO9IDSwBK6TbQa+PXYPCPy6rbTrTtw7PHkccKrpp0yVhp5HdEIcKr6pLlVDBfOLX9QUsyCOV0wzfjIJNlGEYsdlLJizHhbn2mUjvSAHQqZETYP81eFzLQNnPHt4EVVUh7VfDESU84KezmD5QlWpXLmvU31/yMf+Se8xhHTvKSCZIFImWwoG6mbUoWf9nzpIoaSjB+weqqUUmpaaasXVal72J+UX2B+2RPW3RcT0eOzQgqlJL3RKrTJvdsjE3JEAvGq3lGHSZXy28G3skua2SmVi/w4yCE6gbODqnTWlg7+wC604ydGXA8VJiS5ap43JXiUFFAaQ==\ngitlab.com ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFSMqzJeV9rUzU4kWitGjeR4PWSa29SPqJ1fVkhtj3Hw9xjLVXVYrU9QlYWrOLXBpQ6KWjbjTDTdDkoohFzgbEY=\ngitlab.com ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAfuCHKVTjquxvt6CM6tdG4SLp1Btn/nOeHHE5UOzRdf\ngitlab.com ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCsj2bNKTBSpIYDEGk9KxsGh3mySTRgMtXL583qmBpzeQ+jqCMRgBqB98u3z++J1sKlXHWfM9dyhSevkMwSbhoR8XIq/U0tCNyokEi/ueaBMCvbcTHhO7FcwzY92WK4Yt0aGROY5qX2UKSeOvuP4D6TPqKF1onrSzH9bx9XUf2lEdWT/ia1NEKjunUqu1xOB/StKDHMoX4/OKyIzuS0q/T1zOATthvasJFoPrAjkohTyaDUz2LN5JoH839hViyEG82yB+MjcFV5MU3N1l1QL3cVUCh93xSaua1N85qivl+siMkPGbO5xR/En4iEY6K2XPASUEMaieWVNTRCtJ4S8H+9\ngit.zero-downtime.net ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQC8YdJ4YcOK7A0K7qOWsRjCS+wHTStXRcwBe7gjG43HPSNijiCKoGf/c+tfNsRhyouawg7Law6M6ahmS/jKWBpznRIM+OdOFVSuhnK/nr6h6wG3/ZfdLicyAPvx1/STGY/Fc6/zXA88i/9PV+g84gSVmhf3fGY92wokiASiu9DU4T9dT1gIkdyOX6fbMi1/mMKLSrHnAQcjyasYDvw9ISCJ95EoSwbj7O4c+7jo9fxYvdCfZZZAEZGozTRLAAO0AnjVcRah7bZV/jfHJuhOipV/TB7UVAhlVv1dfGV7hoTp9UKtKZFJF4cjIrSGxqQA/mdhSdLgkepK7yc4Jp2xGnaarhY29DfqsQqop+ugFpTbj7Xy5Rco07mXc6XssbAZhI1xtCOX20N4PufBuYippCK5AE6AiAyVtJmvfGQk4HP+TjOyhFo7PZm3wc9Hym7IBBVC0Sl30K8ddufkAgHwNGvvu1ZmD9ZWaMOXJDHBCZGMMr16QREZwVtZTwMEQalc7/yqmuqMhmcJIfs/GA2Lt91y+pq9C8XyeUL0VFPch0vkcLSRe3ghMZpRFJ/ht307xPcLzgTJqN6oQtNNDzSQglSEjwhge2K4GyWcIh+oGsWxWz5dHyk1iJmw90Y976BZIl/mYVgbTtZAJ81oGe/0k5rAe+LDL+Yq6tG28QFOg0QmiQ==\n"` | |
| argo-cd.configs.params."controller.operation.processors" | string | `"5"` | |
| argo-cd.configs.params."controller.status.processors" | string | `"10"` | |
| argo-cd.configs.params."server.enable.gzip" | bool | `true` | |
| argo-cd.configs.params."server.insecure" | bool | `true` | |
| argo-cd.configs.secret.createSecret | bool | `false` | |
| argo-cd.configs.ssh.extraHosts | string | `"git.zero-downtime.net ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQC8YdJ4YcOK7A0K7qOWsRjCS+wHTStXRcwBe7gjG43HPSNijiCKoGf/c+tfNsRhyouawg7Law6M6ahmS/jKWBpznRIM+OdOFVSuhnK/nr6h6wG3/ZfdLicyAPvx1/STGY/Fc6/zXA88i/9PV+g84gSVmhf3fGY92wokiASiu9DU4T9dT1gIkdyOX6fbMi1/mMKLSrHnAQcjyasYDvw9ISCJ95EoSwbj7O4c+7jo9fxYvdCfZZZAEZGozTRLAAO0AnjVcRah7bZV/jfHJuhOipV/TB7UVAhlVv1dfGV7hoTp9UKtKZFJF4cjIrSGxqQA/mdhSdLgkepK7yc4Jp2xGnaarhY29DfqsQqop+ugFpTbj7Xy5Rco07mXc6XssbAZhI1xtCOX20N4PufBuYippCK5AE6AiAyVtJmvfGQk4HP+TjOyhFo7PZm3wc9Hym7IBBVC0Sl30K8ddufkAgHwNGvvu1ZmD9ZWaMOXJDHBCZGMMr16QREZwVtZTwMEQalc7/yqmuqMhmcJIfs/GA2Lt91y+pq9C8XyeUL0VFPch0vkcLSRe3ghMZpRFJ/ht307xPcLzgTJqN6oQtNNDzSQglSEjwhge2K4GyWcIh+oGsWxWz5dHyk1iJmw90Y976BZIl/mYVgbTtZAJ81oGe/0k5rAe+LDL+Yq6tG28QFOg0QmiQ=="` | |
| argo-cd.configs.styles | string | `".sidebar__logo img { content: url(https://cdn.zero-downtime.net/assets/kubezero/logo-small-64.png); }\n.sidebar__logo__text-logo { height: 0em; }\n.sidebar { background: linear-gradient(to bottom, #6A4D79, #493558, #2D1B30, #0D0711); }\n"` | |
| argo-cd.controller.metrics.enabled | bool | `false` | |
| argo-cd.controller.metrics.serviceMonitor.enabled | bool | `true` | |

View File

@@ -57,8 +57,8 @@ argo-cd:
.sidebar { background: linear-gradient(to bottom, #6A4D79, #493558, #2D1B30, #0D0711); }
cm:
ui.bannercontent: "KubeZero v1.27 - Release notes"
ui.bannerurl: "https://kubezero.com/releases/v1.27"
ui.bannercontent: "KubeZero v1.28 - Release notes"
ui.bannerurl: "https://kubezero.com/releases/v1.28"
ui.bannerpermanent: "true"
ui.bannerposition: "bottom"

View File

@@ -2,7 +2,7 @@ apiVersion: v2
name: kubezero-ci
description: KubeZero umbrella chart for all things CI
type: application
version: 0.8.8
version: 0.8.11
home: https://kubezero.com
icon: https://cdn.zero-downtime.net/assets/kubezero/logo-small-64.png
keywords:
@@ -18,11 +18,11 @@ dependencies:
version: ">= 0.1.6"
repository: https://cdn.zero-downtime.net/charts/
- name: gitea
version: 10.1.3
version: 10.1.4
repository: https://dl.gitea.io/charts/
condition: gitea.enabled
- name: jenkins
version: 5.1.3
version: 5.1.18
repository: https://charts.jenkins.io
condition: jenkins.enabled
- name: trivy
@@ -30,7 +30,7 @@ dependencies:
repository: https://aquasecurity.github.io/helm-charts/
condition: trivy.enabled
- name: renovate
version: 37.267.1
version: 37.368.2
repository: https://docs.renovatebot.com/helm-charts
condition: renovate.enabled
kubeVersion: ">= 1.25.0"

View File

@@ -1,6 +1,6 @@
# kubezero-ci
![Version: 0.8.8](https://img.shields.io/badge/Version-0.8.8-informational?style=flat-square) ![Type: application](https://img.shields.io/badge/Type-application-informational?style=flat-square)
![Version: 0.8.11](https://img.shields.io/badge/Version-0.8.11-informational?style=flat-square) ![Type: application](https://img.shields.io/badge/Type-application-informational?style=flat-square)
KubeZero umbrella chart for all things CI
@@ -20,9 +20,9 @@ Kubernetes: `>= 1.25.0`
|------------|------|---------|
| https://aquasecurity.github.io/helm-charts/ | trivy | 0.7.0 |
| https://cdn.zero-downtime.net/charts/ | kubezero-lib | >= 0.1.6 |
| https://charts.jenkins.io | jenkins | 5.1.3 |
| https://dl.gitea.io/charts/ | gitea | 10.1.3 |
| https://docs.renovatebot.com/helm-charts | renovate | 37.267.1 |
| https://charts.jenkins.io | jenkins | 5.1.18 |
| https://dl.gitea.io/charts/ | gitea | 10.1.4 |
| https://docs.renovatebot.com/helm-charts | renovate | 37.368.2 |
# Jenkins
- default build retention 10 builds, 32days
@@ -58,6 +58,7 @@ Kubernetes: `>= 1.25.0`
| gitea.gitea.admin.existingSecret | string | `"gitea-admin-secret"` | |
| gitea.gitea.config.cache.ADAPTER | string | `"memory"` | |
| gitea.gitea.config.database.DB_TYPE | string | `"sqlite3"` | |
| gitea.gitea.config.log.LEVEL | string | `"warn"` | |
| gitea.gitea.config.queue.TYPE | string | `"level"` | |
| gitea.gitea.config.session.PROVIDER | string | `"memory"` | |
| gitea.gitea.config.ui.DEFAULT_THEME | string | `"github-dark"` | |
@@ -66,13 +67,11 @@ Kubernetes: `>= 1.25.0`
| gitea.gitea.metrics.enabled | bool | `false` | |
| gitea.gitea.metrics.serviceMonitor.enabled | bool | `true` | |
| gitea.image.rootless | bool | `true` | |
| gitea.image.tag | string | `"1.21.9"` | |
| gitea.image.tag | string | `"1.21.11"` | |
| gitea.istio.enabled | bool | `false` | |
| gitea.istio.gateway | string | `"istio-ingress/private-ingressgateway"` | |
| gitea.istio.url | string | `"git.example.com"` | |
| gitea.persistence.create | bool | `false` | |
| gitea.persistence.enabled | bool | `true` | |
| gitea.persistence.mount | bool | `true` | |
| gitea.persistence.claimName | string | `"data-gitea-0"` | |
| gitea.persistence.size | string | `"4Gi"` | |
| gitea.postgresql-ha.enabled | bool | `false` | |
| gitea.postgresql.enabled | bool | `false` | |
@@ -98,10 +97,15 @@ Kubernetes: `>= 1.25.0`
| jenkins.agent.resources.limits.memory | string | `""` | |
| jenkins.agent.resources.requests.cpu | string | `""` | |
| jenkins.agent.resources.requests.memory | string | `""` | |
| jenkins.agent.serviceAccount | string | `"jenkins-podman-aws"` | |
| jenkins.agent.showRawYaml | bool | `false` | |
| jenkins.agent.yamlMergeStrategy | string | `"merge"` | |
| jenkins.agent.yamlTemplate | string | `"apiVersion: v1\nkind: Pod\nspec:\n securityContext:\n fsGroup: 1000\n serviceAccountName: jenkins-podman-aws\n containers:\n - name: jnlp\n resources:\n requests:\n cpu: \"512m\"\n memory: \"1024Mi\"\n limits:\n cpu: \"4\"\n memory: \"6144Mi\"\n github.com/fuse: 1\n volumeMounts:\n - name: aws-token\n mountPath: \"/var/run/secrets/sts.amazonaws.com/serviceaccount/\"\n readOnly: true\n - name: host-registries-conf\n mountPath: \"/home/jenkins/.config/containers/registries.conf\"\n readOnly: true\n volumes:\n - name: aws-token\n projected:\n sources:\n - serviceAccountToken:\n path: token\n expirationSeconds: 86400\n audience: \"sts.amazonaws.com\"\n - name: host-registries-conf\n hostPath:\n path: /etc/containers/registries.conf\n type: File"` | |
| jenkins.controller.JCasC.configScripts.zdt-settings | string | `"jenkins:\n noUsageStatistics: true\n disabledAdministrativeMonitors:\n - \"jenkins.security.ResourceDomainRecommendation\"\nappearance:\n themeManager:\n disableUserThemes: true\n theme: \"dark\"\nunclassified:\n buildDiscarders:\n configuredBuildDiscarders:\n - \"jobBuildDiscarder\"\n - defaultBuildDiscarder:\n discarder:\n logRotator:\n artifactDaysToKeepStr: \"32\"\n artifactNumToKeepStr: \"10\"\n daysToKeepStr: \"100\"\n numToKeepStr: \"10\"\n"` | |
| jenkins.agent.yamlTemplate | string | `"apiVersion: v1\nkind: Pod\nspec:\n securityContext:\n fsGroup: 1000\n containers:\n - name: jnlp\n resources:\n requests:\n cpu: \"512m\"\n memory: \"1024Mi\"\n limits:\n cpu: \"4\"\n memory: \"6144Mi\"\n github.com/fuse: 1\n volumeMounts:\n - name: aws-token\n mountPath: \"/var/run/secrets/sts.amazonaws.com/serviceaccount/\"\n readOnly: true\n - name: host-registries-conf\n mountPath: \"/home/jenkins/.config/containers/registries.conf\"\n readOnly: true\n volumes:\n - name: aws-token\n projected:\n sources:\n - serviceAccountToken:\n path: token\n expirationSeconds: 86400\n audience: \"sts.amazonaws.com\"\n - name: host-registries-conf\n hostPath:\n path: /etc/containers/registries.conf\n type: File"` | |
| jenkins.controller.JCasC.configScripts.zdt-settings | string | `"jenkins:\n noUsageStatistics: true\n disabledAdministrativeMonitors:\n - \"jenkins.security.ResourceDomainRecommendation\"\nappearance:\n themeManager:\n disableUserThemes: true\n theme: \"dark\"\nunclassified:\n openTelemetry:\n configurationProperties: |-\n otel.exporter.otlp.protocol=grpc\n otel.instrumentation.jenkins.web.enabled=false\n ignoredSteps: \"dir,echo,isUnix,pwd,properties\"\n #endpoint: \"telemetry-jaeger-collector.telemetry:4317\"\n exportOtelConfigurationAsEnvironmentVariables: false\n #observabilityBackends:\n # - jaeger:\n # jaegerBaseUrl: \"https://jaeger.example.com\"\n # name: \"KubeZero Jaeger\"\n serviceName: \"Jenkins\"\n buildDiscarders:\n configuredBuildDiscarders:\n - \"jobBuildDiscarder\"\n - defaultBuildDiscarder:\n discarder:\n logRotator:\n artifactDaysToKeepStr: \"32\"\n artifactNumToKeepStr: \"10\"\n daysToKeepStr: \"100\"\n numToKeepStr: \"10\"\n"` | |
| jenkins.controller.containerEnv[0].name | string | `"OTEL_LOGS_EXPORTER"` | |
| jenkins.controller.containerEnv[0].value | string | `"none"` | |
| jenkins.controller.containerEnv[1].name | string | `"OTEL_METRICS_EXPORTER"` | |
| jenkins.controller.containerEnv[1].value | string | `"none"` | |
| jenkins.controller.disableRememberMe | bool | `true` | |
| jenkins.controller.enableRawHtmlMarkupFormatter | bool | `true` | |
| jenkins.controller.image.tag | string | `"alpine-jdk17"` | |
@@ -114,6 +118,7 @@ Kubernetes: `>= 1.25.0`
| jenkins.controller.installPlugins[12] | string | `"dark-theme"` | |
| jenkins.controller.installPlugins[13] | string | `"matrix-auth"` | |
| jenkins.controller.installPlugins[14] | string | `"reverse-proxy-auth-plugin"` | |
| jenkins.controller.installPlugins[15] | string | `"opentelemetry"` | |
| jenkins.controller.installPlugins[1] | string | `"kubernetes-credentials-provider"` | |
| jenkins.controller.installPlugins[2] | string | `"workflow-aggregator"` | |
| jenkins.controller.installPlugins[3] | string | `"git"` | |
@@ -152,7 +157,7 @@ Kubernetes: `>= 1.25.0`
| renovate.env.LOG_FORMAT | string | `"json"` | |
| renovate.securityContext.fsGroup | int | `1000` | |
| trivy.enabled | bool | `false` | |
| trivy.image.tag | string | `"0.49.1"` | |
| trivy.image.tag | string | `"0.50.1"` | |
| trivy.persistence.enabled | bool | `true` | |
| trivy.persistence.size | string | `"1Gi"` | |
| trivy.rbac.create | bool | `false` | |

View File

@@ -2,7 +2,7 @@ gitea:
enabled: false
image:
tag: 1.21.9
tag: 1.21.11
rootless: true
repliaCount: 1
@@ -13,10 +13,7 @@ gitea:
# Since V9 they default to RWX and deployment, we default to old existing RWO from statefulset
persistence:
enabled: true
mount: true
create: false
#claimName: <set per install>
claimName: data-gitea-0
size: 4Gi
securityContext:
@@ -103,6 +100,13 @@ jenkins:
javaOpts: "-XX:+UseContainerSupport -XX:+UseStringDeduplication -Dhudson.model.DirectoryBrowserSupport.CSP=\"sandbox allow-popups; default-src 'none'; img-src 'self' cdn.zero-downtime.net; style-src 'unsafe-inline';\""
jenkinsOpts: "--sessionTimeout=300 --sessionEviction=10800"
# Until we setup the logging and metrics pipelines in OTEL
containerEnv:
- name: OTEL_LOGS_EXPORTER
value: "none"
- name: OTEL_METRICS_EXPORTER
value: "none"
resources:
requests:
cpu: "250m"
@@ -130,6 +134,18 @@ jenkins:
disableUserThemes: true
theme: "dark"
unclassified:
openTelemetry:
configurationProperties: |-
otel.exporter.otlp.protocol=grpc
otel.instrumentation.jenkins.web.enabled=false
ignoredSteps: "dir,echo,isUnix,pwd,properties"
#endpoint: "telemetry-jaeger-collector.telemetry:4317"
exportOtelConfigurationAsEnvironmentVariables: false
#observabilityBackends:
# - jaeger:
# jaegerBaseUrl: "https://jaeger.example.com"
# name: "KubeZero Jaeger"
serviceName: "Jenkins"
buildDiscarders:
configuredBuildDiscarders:
- "jobBuildDiscarder"
@@ -157,6 +173,7 @@ jenkins:
- dark-theme
- matrix-auth
- reverse-proxy-auth-plugin
- opentelemetry
serviceAccountAgent:
create: true
@@ -171,6 +188,7 @@ jenkins:
podRetention: "Default"
showRawYaml: false
podName: "podman-aws"
serviceAccount: jenkins-podman-aws
annotations:
container.apparmor.security.beta.kubernetes.io/jnlp: unconfined
customJenkinsLabels:
@@ -198,7 +216,6 @@ jenkins:
spec:
securityContext:
fsGroup: 1000
serviceAccountName: jenkins-podman-aws
containers:
- name: jnlp
resources:
@@ -255,7 +272,7 @@ jenkins:
trivy:
enabled: false
image:
tag: 0.49.1
tag: 0.50.1
persistence:
enabled: true
size: 1Gi

View File

@@ -2,7 +2,7 @@ apiVersion: v2
name: kubezero-istio-gateway
description: KubeZero Umbrella Chart for Istio gateways
type: application
version: 0.21.0
version: 0.21.2
home: https://kubezero.com
icon: https://cdn.zero-downtime.net/assets/kubezero/logo-small-64.png
keywords:
@@ -17,6 +17,6 @@ dependencies:
version: ">= 0.1.6"
repository: https://cdn.zero-downtime.net/charts/
- name: gateway
version: 1.21.0
version: 1.21.2
repository: https://istio-release.storage.googleapis.com/charts
kubeVersion: ">= 1.26.0"

View File

@@ -1,6 +1,6 @@
# kubezero-istio-gateway
![Version: 0.21.0](https://img.shields.io/badge/Version-0.21.0-informational?style=flat-square) ![Type: application](https://img.shields.io/badge/Type-application-informational?style=flat-square)
![Version: 0.21.2](https://img.shields.io/badge/Version-0.21.2-informational?style=flat-square) ![Type: application](https://img.shields.io/badge/Type-application-informational?style=flat-square)
KubeZero Umbrella Chart for Istio gateways
@@ -21,7 +21,7 @@ Kubernetes: `>= 1.26.0`
| Repository | Name | Version |
|------------|------|---------|
| https://cdn.zero-downtime.net/charts/ | kubezero-lib | >= 0.1.6 |
| https://istio-release.storage.googleapis.com/charts | gateway | 1.21.0 |
| https://istio-release.storage.googleapis.com/charts | gateway | 1.21.2 |
## Values

View File

@@ -1,5 +1,5 @@
apiVersion: v2
appVersion: 1.21.0
appVersion: 1.21.2
description: Helm chart for deploying Istio gateways
icon: https://istio.io/latest/favicons/android-192x192.png
keywords:
@@ -9,4 +9,4 @@ name: gateway
sources:
- https://github.com/istio/istio
type: application
version: 1.21.0
version: 1.21.2

View File

@@ -2,7 +2,7 @@ apiVersion: v2
name: kubezero-istio
description: KubeZero Umbrella Chart for Istio
type: application
version: 0.21.0
version: 0.21.2
home: https://kubezero.com
icon: https://cdn.zero-downtime.net/assets/kubezero/logo-small-64.png
keywords:
@@ -16,13 +16,13 @@ dependencies:
version: ">= 0.1.6"
repository: https://cdn.zero-downtime.net/charts/
- name: base
version: 1.21.0
version: 1.21.2
repository: https://istio-release.storage.googleapis.com/charts
- name: istiod
version: 1.21.0
version: 1.21.2
repository: https://istio-release.storage.googleapis.com/charts
- name: kiali-server
version: "1.82.0"
version: "1.83.0"
repository: https://kiali.org/helm-charts
condition: kiali-server.enabled
kubeVersion: ">= 1.26.0"

View File

@@ -1,6 +1,6 @@
# kubezero-istio
![Version: 0.21.0](https://img.shields.io/badge/Version-0.21.0-informational?style=flat-square) ![Type: application](https://img.shields.io/badge/Type-application-informational?style=flat-square)
![Version: 0.21.2](https://img.shields.io/badge/Version-0.21.2-informational?style=flat-square) ![Type: application](https://img.shields.io/badge/Type-application-informational?style=flat-square)
KubeZero Umbrella Chart for Istio
@@ -21,9 +21,9 @@ Kubernetes: `>= 1.26.0`
| Repository | Name | Version |
|------------|------|---------|
| https://cdn.zero-downtime.net/charts/ | kubezero-lib | >= 0.1.6 |
| https://istio-release.storage.googleapis.com/charts | base | 1.21.0 |
| https://istio-release.storage.googleapis.com/charts | istiod | 1.21.0 |
| https://kiali.org/helm-charts | kiali-server | 1.82.0 |
| https://istio-release.storage.googleapis.com/charts | base | 1.21.2 |
| https://istio-release.storage.googleapis.com/charts | istiod | 1.21.2 |
| https://kiali.org/helm-charts | kiali-server | 1.83.0 |
## Values

View File

@@ -2,7 +2,7 @@ apiVersion: v2
name: kubezero-logging
description: KubeZero Umbrella Chart for complete EFK stack
type: application
version: 0.8.11
version: 0.8.12
appVersion: 1.6.0
home: https://kubezero.com
icon: https://cdn.zero-downtime.net/assets/kubezero/logo-small-64.png
@@ -24,7 +24,7 @@ dependencies:
repository: https://fluent.github.io/helm-charts
condition: fluentd.enabled
- name: fluent-bit
version: 0.46.0
version: 0.46.2
repository: https://fluent.github.io/helm-charts
condition: fluent-bit.enabled
kubeVersion: ">= 1.26.0"

View File

@@ -1,6 +1,6 @@
# kubezero-logging
![Version: 0.8.9](https://img.shields.io/badge/Version-0.8.9-informational?style=flat-square) ![Type: application](https://img.shields.io/badge/Type-application-informational?style=flat-square) ![AppVersion: 1.6.0](https://img.shields.io/badge/AppVersion-1.6.0-informational?style=flat-square)
![Version: 0.8.12](https://img.shields.io/badge/Version-0.8.12-informational?style=flat-square) ![Type: application](https://img.shields.io/badge/Type-application-informational?style=flat-square) ![AppVersion: 1.6.0](https://img.shields.io/badge/AppVersion-1.6.0-informational?style=flat-square)
KubeZero Umbrella Chart for complete EFK stack
@@ -19,8 +19,8 @@ Kubernetes: `>= 1.26.0`
| Repository | Name | Version |
|------------|------|---------|
| https://cdn.zero-downtime.net/charts/ | kubezero-lib | >= 0.1.6 |
| https://fluent.github.io/helm-charts | fluent-bit | 0.40.0 |
| https://fluent.github.io/helm-charts | fluentd | 0.5.0 |
| https://fluent.github.io/helm-charts | fluent-bit | 0.46.2 |
| https://fluent.github.io/helm-charts | fluentd | 0.5.2 |
## Changes from upstream
### ECK
@@ -56,11 +56,6 @@ Kubernetes: `>= 1.26.0`
| Key | Type | Default | Description |
|-----|------|---------|-------------|
| eck-operator.enabled | bool | `false` | |
| eck-operator.installCRDs | bool | `false` | |
| eck-operator.nodeSelector."node-role.kubernetes.io/control-plane" | string | `""` | |
| eck-operator.tolerations[0].effect | string | `"NoSchedule"` | |
| eck-operator.tolerations[0].key | string | `"node-role.kubernetes.io/control-plane"` | |
| elastic_password | string | `""` | |
| es.nodeSets | list | `[]` | |
| es.prometheus | bool | `false` | |
@@ -87,11 +82,10 @@ Kubernetes: `>= 1.26.0`
| fluent-bit.daemonSetVolumes[1].hostPath.path | string | `"/var/lib/containers/logs"` | |
| fluent-bit.daemonSetVolumes[1].name | string | `"newlog"` | |
| fluent-bit.enabled | bool | `false` | |
| fluent-bit.image | string | `nil` | |
| fluent-bit.luaScripts."kubezero.lua" | string | `"function nest_k8s_ns(tag, timestamp, record)\n if not record['kubernetes']['namespace_name'] then\n return 0, 0, 0\n end\n new_record = {}\n for key, val in pairs(record) do\n if key == 'kube' then\n new_record[key] = {}\n new_record[key][record['kubernetes']['namespace_name']] = record[key]\n else\n new_record[key] = record[key]\n end\n end\n return 1, timestamp, new_record\nend\n"` | |
| fluent-bit.resources.limits.memory | string | `"128Mi"` | |
| fluent-bit.resources.requests.cpu | string | `"20m"` | |
| fluent-bit.resources.requests.memory | string | `"32Mi"` | |
| fluent-bit.resources.requests.memory | string | `"48Mi"` | |
| fluent-bit.serviceMonitor.enabled | bool | `false` | |
| fluent-bit.serviceMonitor.selector.release | string | `"metrics"` | |
| fluent-bit.testFramework.enabled | bool | `false` | |
@@ -100,17 +94,15 @@ Kubernetes: `>= 1.26.0`
| fluentd.configMapConfigs[0] | string | `"fluentd-prometheus-conf"` | |
| fluentd.dashboards.enabled | bool | `false` | |
| fluentd.enabled | bool | `false` | |
| fluentd.env[0].name | string | `"FLUENTD_CONF"` | |
| fluentd.env[0].value | string | `"../../etc/fluent/fluent.conf"` | |
| fluentd.env[1].name | string | `"OUTPUT_PASSWORD"` | |
| fluentd.env[1].valueFrom.secretKeyRef.key | string | `"elastic"` | |
| fluentd.env[1].valueFrom.secretKeyRef.name | string | `"logging-es-elastic-user"` | |
| fluentd.env[0].name | string | `"OUTPUT_PASSWORD"` | |
| fluentd.env[0].valueFrom.secretKeyRef.key | string | `"elastic"` | |
| fluentd.env[0].valueFrom.secretKeyRef.name | string | `"logging-es-elastic-user"` | |
| fluentd.fileConfigs."00_system.conf" | string | `"<system>\n root_dir /fluentd/log\n log_level info\n ignore_repeated_log_interval 60s\n ignore_same_log_interval 60s\n workers 1\n</system>"` | |
| fluentd.fileConfigs."01_sources.conf" | string | `"<source>\n @type http\n @label @KUBERNETES\n port 9880\n bind 0.0.0.0\n keepalive_timeout 30\n</source>\n\n<source>\n @type forward\n @label @KUBERNETES\n port 24224\n bind 0.0.0.0\n # skip_invalid_event true\n send_keepalive_packet true\n <security>\n self_hostname \"#{ENV['HOSTNAME']}\"\n shared_key {{ .Values.shared_key }}\n </security>\n</source>"` | |
| fluentd.fileConfigs."02_filters.conf" | string | `"<label @KUBERNETES>\n # prevent log feedback loops eg. ES has issues etc.\n # discard logs from our own pods\n <match kube.logging.fluentd>\n @type relabel\n @label @FLUENT_LOG\n </match>\n\n # Exclude current fluent-bit multiline noise\n <filter kube.logging.fluent-bit>\n @type grep\n <exclude>\n key log\n pattern /could not append content to multiline context/\n </exclude>\n </filter>\n\n # Generate Hash ID to break endless loop for already ingested events during retries\n <filter **>\n @type elasticsearch_genid\n use_entire_record true\n </filter>\n\n # Route through DISPATCH for Prometheus metrics\n <match **>\n @type relabel\n @label @DISPATCH\n </match>\n</label>"` | |
| fluentd.fileConfigs."04_outputs.conf" | string | `"<label @OUTPUT>\n <match **>\n @id out_es\n @type elasticsearch\n # @log_level debug\n include_tag_key true\n\n id_key _hash\n remove_keys _hash\n write_operation create\n\n # KubeZero pipeline incl. GeoIP etc.\n pipeline fluentd\n\n hosts \"{{ .Values.output.host }}\"\n port 9200\n scheme http\n user elastic\n password \"#{ENV['OUTPUT_PASSWORD']}\"\n\n log_es_400_reason\n logstash_format true\n reconnect_on_error true\n reload_on_failure true\n request_timeout 300s\n slow_flush_log_threshold 55.0\n\n #with_transporter_log true\n\n verify_es_version_at_startup false\n default_elasticsearch_version 7\n suppress_type_name true\n\n # Retry failed bulk requests\n # https://github.com/uken/fluent-plugin-elasticsearch#unrecoverable-error-types\n unrecoverable_error_types [\"out_of_memory_error\"]\n bulk_message_request_threshold 1048576\n\n <buffer>\n @type file\n\n flush_mode interval\n flush_thread_count 2\n flush_interval 10s\n\n chunk_limit_size 2MB\n total_limit_size 1GB\n\n flush_at_shutdown true\n retry_type exponential_backoff\n retry_timeout 6h\n overflow_action drop_oldest_chunk\n disable_chunk_backup true\n </buffer>\n </match>\n</label>"` | |
| fluentd.image.repository | string | `"public.ecr.aws/zero-downtime/fluentd-concenter"` | |
| fluentd.image.tag | string | `"v1.16.0"` | |
| fluentd.image.tag | string | `"v1.16.3"` | |
| fluentd.istio.enabled | bool | `false` | |
| fluentd.kind | string | `"Deployment"` | |
| fluentd.metrics.serviceMonitor.additionalLabels.release | string | `"metrics"` | |

View File

@@ -1,9 +1,9 @@
annotations:
artifacthub.io/changes: |
- kind: changed
description: "Updated Fluent Bit OCI image to v2.2.0."
description: "Updated _Fluent Bit_ OCI image to [v3.0.2](https://github.com/fluent/fluent-bit/releases/tag/v3.0.2)."
apiVersion: v1
appVersion: 2.2.0
appVersion: 3.0.2
description: Fast and lightweight log processor and forwarder or Linux, OSX and BSD
family operating systems.
home: https://fluentbit.io/
@@ -24,4 +24,4 @@ maintainers:
name: fluent-bit
sources:
- https://github.com/fluent/fluent-bit/
version: 0.40.0
version: 0.46.2

View File

@@ -1,3 +1,6 @@
testFramework:
enabled: true
logLevel: debug
dashboards:

View File

@@ -14,7 +14,9 @@ metadata:
{{- include "fluent-bit.labels" . | nindent 4 }}
{{- with .Values.ingress.annotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- range $key, $value := . }}
{{ printf "%s: %s" $key ((tpl $value $) | quote) }}
{{- end }}
{{- end }}
spec:
{{- if and $ingressSupportsIngressClassName .Values.ingress.ingressClassName }}
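Annotation values are now rendered through tpl, so they may reference other chart values; a minimal sketch (annotation key and referenced value are hypothetical):

cat > tpl-values.yaml <<'EOF'
ingress:
  enabled: true
  annotations:
    external-dns.alpha.kubernetes.io/target: "{{ .Values.fullnameOverride }}.example.com"
EOF
helm template fluent-bit . -f tpl-values.yaml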

View File

@@ -17,6 +17,11 @@ spec:
{{- if and (eq .Values.service.type "ClusterIP") (.Values.service.clusterIP) }}
clusterIP: {{ .Values.service.clusterIP }}
{{- end }}
{{- if (eq .Values.kind "DaemonSet") }}
{{- with .Values.service.internalTrafficPolicy }}
internalTrafficPolicy: {{ . }}
{{- end }}
{{- end }}
{{- if (eq .Values.service.type "LoadBalancer")}}
{{- with .Values.service.loadBalancerClass}}
loadBalancerClass: {{ . }}
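A sketch of exercising the new knob (release name hypothetical); internalTrafficPolicy: Local keeps traffic on the originating node, which is why it is only honored in the DaemonSet case guarded above:

helm template fluent-bit . --set kind=DaemonSet --set service.internalTrafficPolicy=Local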

View File

@@ -13,7 +13,7 @@ spec:
jobLabel: app.kubernetes.io/instance
endpoints:
- port: http
path: /api/v1/metrics/prometheus
path: {{ default "/api/v2/metrics/prometheus" .Values.serviceMonitor.path }}
{{- with .Values.serviceMonitor.interval }}
interval: {{ . }}
{{- end }}
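The path now defaults to the v2 metrics endpoint but stays overridable, so older fluent-bit images can pin the legacy path (a sketch, release name hypothetical):

helm template fluent-bit . --set serviceMonitor.enabled=true \
  --set serviceMonitor.path=/api/v1/metrics/prometheus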

View File

@@ -5,16 +5,19 @@ metadata:
name: "{{ include "fluent-bit.fullname" . }}-test-connection"
namespace: {{ default .Release.Namespace .Values.testFramework.namespace }}
labels:
{{- include "fluent-bit.labels" . | nindent 4 }}
helm.sh/chart: {{ include "fluent-bit.chart" . }}
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
annotations:
"helm.sh/hook": test-success
helm.sh/hook: test
helm.sh/hook-delete-policy: hook-succeeded
spec:
containers:
- name: wget
image: {{ include "fluent-bit.image" .Values.testFramework.image | quote }}
imagePullPolicy: {{ .Values.testFramework.image.pullPolicy }}
command: ['wget']
args: ['{{ include "fluent-bit.fullname" . }}:{{ .Values.service.port }}']
command: ["sh"]
args: ["-c", "wget -O- {{ include "fluent-bit.fullname" . }}:{{ .Values.service.port }}"]
{{- with .Values.imagePullSecrets }}
imagePullSecrets:
{{- toYaml . | nindent 4 }}
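Since the pod is now a regular helm.sh/hook: test hook, it runs via the standard test command (release name and namespace hypothetical):

helm test fluent-bit -n logging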

View File

@@ -12,7 +12,7 @@ image:
# Set to "-" to not use the default value
tag:
digest:
pullPolicy: Always
pullPolicy: IfNotPresent
testFramework:
enabled: true
@@ -91,6 +91,7 @@ securityContext: {}
service:
type: ClusterIP
port: 2020
internalTrafficPolicy:
loadBalancerClass:
loadBalancerSourceRanges: []
labels: {}
@@ -128,7 +129,7 @@ serviceMonitor:
# scheme: ""
# tlsConfig: {}
## Beare in mind if youn want to collec metrics from a different port
## Bear in mind if you want to collect metrics from a different port
## you will need to configure the new ports on the extraPorts property.
additionalEndpoints: []
# - port: metrics
@@ -418,7 +419,7 @@ config:
Time_Key time
Time_Format %Y-%m-%dT%H:%M:%S.%L
# This allows adding more files with arbitary filenames to /fluent-bit/etc/conf by providing key/value pairs.
# This allows adding more files with arbitrary filenames to /fluent-bit/etc/conf by providing key/value pairs.
# The key becomes the filename, the value becomes the file content.
extraFiles: {}
# upstream.conf: |

View File

@ -12,4 +12,4 @@ name: fluentd
sources:
- https://github.com/fluent/fluentd/
- https://github.com/fluent/fluentd-kubernetes-daemonset
version: 0.5.0
version: 0.5.2

View File

@@ -90,3 +90,15 @@ Name of the configMap used for additional configuration files; allows users to o
{{ printf "%s-%s" "fluentd-config" ( include "fluentd.shortReleaseName" . ) }}
{{- end -}}
{{- end -}}
{{/*
HPA ApiVersion according k8s version
Check legacy first so helm template / kustomize will default to latest version
*/}}
{{- define "fluentd.hpa.apiVersion" -}}
{{- if and (.Capabilities.APIVersions.Has "autoscaling/v2beta2") (semverCompare "<1.23-0" .Capabilities.KubeVersion.GitVersion) -}}
autoscaling/v2beta2
{{- else -}}
autoscaling/v2
{{- end -}}
{{- end -}}
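A sketch of how the helper resolves, using standard helm template flags to emulate cluster capabilities (autoscaling must be enabled for the HPA to render at all):

helm template . --set autoscaling.enabled=true \
  --kube-version 1.22.0 --api-versions autoscaling/v2beta2   # -> autoscaling/v2beta2
helm template . --set autoscaling.enabled=true \
  --kube-version 1.28.0                                      # -> autoscaling/v2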

View File

@@ -1,5 +1,5 @@
{{- if and ( eq .Values.kind "Deployment" ) .Values.autoscaling.enabled }}
apiVersion: autoscaling/v2beta2
apiVersion: {{ include "fluentd.hpa.apiVersion" . }}
kind: HorizontalPodAutoscaler
metadata:
name: {{ include "fluentd.fullname" . }}

View File

@@ -1,4 +1,11 @@
{{/*
Target the very simple case where
fluentd is deployed with the default values
If the fluentd config is overriden and the metrics server removed
this will fail.
*/}}
{{- if .Values.testFramework.enabled }}
{{ if empty .Values.service.ports }}
apiVersion: v1
kind: Pod
metadata:
@@ -11,7 +18,14 @@ spec:
containers:
- name: wget
image: busybox
command: ['wget']
args: ['{{ include "fluentd.fullname" . }}:{{ .Values.service.port }}']
command:
- sh
- -c
- |
set -e
# Give fluentd some time to start up
while :; do nc -vz {{ include "fluentd.fullname" . }}:24231 && break; sleep 1; done
wget '{{ include "fluentd.fullname" . }}:24231/metrics'
restartPolicy: Never
{{ end }}
{{- end }}

View File

@@ -321,6 +321,14 @@ fileConfigs:
emit_unmatched_lines true
</source>
# expose metrics in prometheus format
<source>
@type prometheus
bind 0.0.0.0
port 24231
metrics_path /metrics
</source>
02_filters.conf: |-
<label @KUBERNETES>
<match kubernetes.var.log.containers.fluentd**>
@@ -378,6 +386,8 @@ fileConfigs:
path ""
user elastic
password changeme
# Don't wait for elastic to start up.
verify_es_version_at_startup false
</match>
</label>
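A quick check that the prometheus source added above answers on its port (service name and namespace hypothetical):

kubectl -n logging run metrics-check --rm -i --image=busybox --restart=Never -- \
  wget -qO- fluentd:24231/metrics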

View File

@ -1,32 +1,38 @@
diff -tubrN charts/fluentd/templates/fluentd-configurations-cm.yaml charts/fluentd.zdt/templates/fluentd-configurations-cm.yaml
--- charts/fluentd/templates/fluentd-configurations-cm.yaml 2021-02-12 18:13:04.000000000 +0100
+++ charts/fluentd.zdt/templates/fluentd-configurations-cm.yaml 2021-03-09 17:54:34.904992401 +0100
@@ -7,7 +7,7 @@
diff -rtuN charts/fluentd.orig/templates/fluentd-configurations-cm.yaml charts/fluentd/templates/fluentd-configurations-cm.yaml
--- charts/fluentd.orig/templates/fluentd-configurations-cm.yaml 2024-04-08 11:00:03.030515998 +0000
+++ charts/fluentd/templates/fluentd-configurations-cm.yaml 2024-04-08 11:00:03.040516045 +0000
@@ -9,7 +9,7 @@
data:
{{- range $key, $value := .Values.fileConfigs }}
{{$key }}: |-
- {{- $value | nindent 4 }}
+ {{- (tpl $value $) | nindent 4 }}
{{- end }}
{{- end }}
---
diff -tubrN charts/fluentd/templates/tests/test-connection.yaml charts/fluentd.zdt/templates/tests/test-connection.yaml
--- charts/fluentd/templates/tests/test-connection.yaml 2021-02-12 18:13:04.000000000 +0100
+++ charts/fluentd.zdt/templates/tests/test-connection.yaml 2021-03-09 17:54:34.904992401 +0100
@@ -1,3 +1,4 @@
diff -rtuN charts/fluentd.orig/templates/tests/test-connection.yaml charts/fluentd/templates/tests/test-connection.yaml
--- charts/fluentd.orig/templates/tests/test-connection.yaml 2024-04-08 11:00:03.030515998 +0000
+++ charts/fluentd/templates/tests/test-connection.yaml 2024-04-08 11:03:16.254774985 +0000
@@ -4,6 +4,7 @@
If the fluentd config is overriden and the metrics server removed
this will fail.
*/}}
+{{- if .Values.testFramework.enabled }}
{{ if empty .Values.service.ports }}
apiVersion: v1
kind: Pod
metadata:
@@ -13,3 +14,4 @@
command: ['wget']
args: ['{{ include "fluentd.fullname" . }}:{{ .Values.service.port }}']
@@ -26,4 +27,5 @@
while :; do nc -vz {{ include "fluentd.fullname" . }}:24231 && break; sleep 1; done
wget '{{ include "fluentd.fullname" . }}:24231/metrics'
restartPolicy: Never
-{{ end }}
\ No newline at end of file
+{{ end }}
+{{- end }}
diff -tubrN charts/fluentd/values.yaml charts/fluentd.zdt/values.yaml
--- charts/fluentd/values.yaml 2021-02-12 18:13:04.000000000 +0100
+++ charts/fluentd.zdt/values.yaml 2021-03-09 17:54:34.908325735 +0100
@@ -12,6 +12,9 @@
diff -rtuN charts/fluentd.orig/values.yaml charts/fluentd/values.yaml
--- charts/fluentd.orig/values.yaml 2024-04-08 11:00:03.030515998 +0000
+++ charts/fluentd/values.yaml 2024-04-08 11:00:03.040516045 +0000
@@ -13,6 +13,9 @@
pullPolicy: "IfNotPresent"
tag: ""

View File

@@ -2,7 +2,7 @@ apiVersion: v2
name: kubezero-metrics
description: KubeZero Umbrella Chart for Prometheus, Grafana and Alertmanager as well as all Kubernetes integrations.
type: application
version: 0.9.6
version: 0.9.8
home: https://kubezero.com
icon: https://cdn.zero-downtime.net/assets/kubezero/logo-small-64.png
keywords:
@@ -19,14 +19,14 @@ dependencies:
version: ">= 0.1.6"
repository: https://cdn.zero-downtime.net/charts/
- name: kube-prometheus-stack
version: 57.2.0
version: 58.0.0
repository: https://prometheus-community.github.io/helm-charts
- name: prometheus-adapter
version: 4.9.1
version: 4.10.0
repository: https://prometheus-community.github.io/helm-charts
condition: prometheus-adapter.enabled
- name: prometheus-pushgateway
version: 2.8.0
version: 2.10.0
repository: https://prometheus-community.github.io/helm-charts
condition: prometheus-pushgateway.enabled
kubeVersion: ">= 1.26.0"


@@ -1,6 +1,6 @@
# kubezero-metrics
![Version: 0.9.6](https://img.shields.io/badge/Version-0.9.6-informational?style=flat-square) ![Type: application](https://img.shields.io/badge/Type-application-informational?style=flat-square)
![Version: 0.9.8](https://img.shields.io/badge/Version-0.9.8-informational?style=flat-square) ![Type: application](https://img.shields.io/badge/Type-application-informational?style=flat-square)
KubeZero Umbrella Chart for Prometheus, Grafana and Alertmanager as well as all Kubernetes integrations.
@@ -19,9 +19,9 @@ Kubernetes: `>= 1.26.0`
| Repository | Name | Version |
|------------|------|---------|
| https://cdn.zero-downtime.net/charts/ | kubezero-lib | >= 0.1.6 |
| https://prometheus-community.github.io/helm-charts | kube-prometheus-stack | 57.2.0 |
| https://prometheus-community.github.io/helm-charts | prometheus-adapter | 4.9.1 |
| https://prometheus-community.github.io/helm-charts | prometheus-pushgateway | 2.8.0 |
| https://prometheus-community.github.io/helm-charts | kube-prometheus-stack | 58.0.0 |
| https://prometheus-community.github.io/helm-charts | prometheus-adapter | 4.10.0 |
| https://prometheus-community.github.io/helm-charts | prometheus-pushgateway | 2.10.0 |
## Values
@@ -103,7 +103,9 @@ Kubernetes: `>= 1.26.0`
| kube-prometheus-stack.coreDns.enabled | bool | `true` | |
| kube-prometheus-stack.defaultRules.create | bool | `false` | |
| kube-prometheus-stack.grafana."grafana.ini"."auth.anonymous".enabled | bool | `true` | |
| kube-prometheus-stack.grafana."grafana.ini"."log.console".format | string | `"json"` | |
| kube-prometheus-stack.grafana."grafana.ini".alerting.enabled | bool | `false` | |
| kube-prometheus-stack.grafana."grafana.ini".analytics.check_for_plugin_updates | bool | `false` | |
| kube-prometheus-stack.grafana."grafana.ini".analytics.check_for_updates | bool | `false` | |
| kube-prometheus-stack.grafana."grafana.ini".dashboards.default_home_dashboard_path | string | `"/tmp/dashboards/KubeZero/home.json"` | |
| kube-prometheus-stack.grafana."grafana.ini".dashboards.min_refresh_interval | string | `"30s"` | |


@@ -7,7 +7,7 @@ annotations:
url: https://github.com/prometheus-operator/kube-prometheus
artifacthub.io/operator: "true"
apiVersion: v2
appVersion: v0.72.0
appVersion: v0.73.0
dependencies:
- condition: crds.enabled
name: crds
@@ -62,4 +62,4 @@ sources:
- https://github.com/prometheus-community/helm-charts
- https://github.com/prometheus-operator/kube-prometheus
type: application
version: 57.2.0
version: 58.0.0


@@ -82,6 +82,25 @@ _See [helm upgrade](https://helm.sh/docs/helm/helm_upgrade/) for command documen
A major chart version change (like v1.2.3 -> v2.0.0) indicates that there is an incompatible breaking change needing manual actions.
### From 57.x to 58.x
This version upgrades Prometheus-Operator to v0.73.0
Run these commands to update the CRDs before applying the upgrade.
```console
kubectl apply --server-side -f https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/v0.73.0/example/prometheus-operator-crd/monitoring.coreos.com_alertmanagerconfigs.yaml
kubectl apply --server-side -f https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/v0.73.0/example/prometheus-operator-crd/monitoring.coreos.com_alertmanagers.yaml
kubectl apply --server-side -f https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/v0.73.0/example/prometheus-operator-crd/monitoring.coreos.com_podmonitors.yaml
kubectl apply --server-side -f https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/v0.73.0/example/prometheus-operator-crd/monitoring.coreos.com_probes.yaml
kubectl apply --server-side -f https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/v0.73.0/example/prometheus-operator-crd/monitoring.coreos.com_prometheusagents.yaml
kubectl apply --server-side -f https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/v0.73.0/example/prometheus-operator-crd/monitoring.coreos.com_prometheuses.yaml
kubectl apply --server-side -f https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/v0.73.0/example/prometheus-operator-crd/monitoring.coreos.com_prometheusrules.yaml
kubectl apply --server-side -f https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/v0.73.0/example/prometheus-operator-crd/monitoring.coreos.com_scrapeconfigs.yaml
kubectl apply --server-side -f https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/v0.73.0/example/prometheus-operator-crd/monitoring.coreos.com_servicemonitors.yaml
kubectl apply --server-side -f https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/v0.73.0/example/prometheus-operator-crd/monitoring.coreos.com_thanosrulers.yaml
```
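Equivalently, since the URLs differ only in the CRD name, a short loop covers all ten (a sketch assuming a POSIX shell):

```console
BASE=https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/v0.73.0/example/prometheus-operator-crd
for crd in alertmanagerconfigs alertmanagers podmonitors probes prometheusagents \
           prometheuses prometheusrules scrapeconfigs servicemonitors thanosrulers; do
  kubectl apply --server-side -f "$BASE/monitoring.coreos.com_${crd}.yaml"
done
```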
### From 56.x to 57.x
This version upgrades Prometheus-Operator to v0.72.0


@@ -1,11 +1,11 @@
# https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/v0.72.0/example/prometheus-operator-crd/monitoring.coreos.com_alertmanagerconfigs.yaml
# https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/v0.73.0/example/prometheus-operator-crd/monitoring.coreos.com_alertmanagerconfigs.yaml
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.13.0
operator.prometheus.io/version: 0.72.0
operator.prometheus.io/version: 0.73.0
argocd.argoproj.io/sync-options: ServerSideApply=true
name: alertmanagerconfigs.monitoring.coreos.com
spec:
@@ -137,7 +137,7 @@ spec:
description: Months is a list of MonthRange
items:
description: MonthRange is an inclusive range of months of the year beginning in January Months can be specified by name (e.g 'January') by numerical month (e.g '1') or as an inclusive range (e.g 'January:March', '1:3', '1:March')
pattern: ^((?i)january|february|march|april|may|june|july|august|september|october|november|december|[1-12])(?:((:((?i)january|february|march|april|may|june|july|august|september|october|november|december|[1-12]))$)|$)
pattern: ^((?i)january|february|march|april|may|june|july|august|september|october|november|december|1[0-2]|[1-9])(?:((:((?i)january|february|march|april|may|june|july|august|september|october|november|december|1[0-2]|[1-9]))$)|$)
type: string
type: array
times:
@@ -918,6 +918,9 @@ spec:
sendResolved:
description: Whether to notify about resolved alerts.
type: boolean
summary:
description: Message summary template. It requires Alertmanager >= 0.27.0.
type: string
text:
description: Message body template.
type: string
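The new `summary` field sits in the MSTeams receiver of `AlertmanagerConfig`; a sketch of how it might be used (the receiver wiring and Secret are hypothetical):

```yaml
apiVersion: monitoring.coreos.com/v1alpha1
kind: AlertmanagerConfig
metadata:
  name: example
spec:
  route:
    receiver: msteams
  receivers:
    - name: msteams
      msteamsConfigs:
        - webhookUrl:
            name: msteams-webhook  # hypothetical Secret holding the webhook URL
            key: url
          # new field; requires Alertmanager >= 0.27.0
          summary: '{{ .CommonAnnotations.summary }}'
```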


@@ -1,11 +1,11 @@
# https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/v0.72.0/example/prometheus-operator-crd/monitoring.coreos.com_alertmanagers.yaml
# https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/v0.73.0/example/prometheus-operator-crd/monitoring.coreos.com_alertmanagers.yaml
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.13.0
operator.prometheus.io/version: 0.72.0
operator.prometheus.io/version: 0.73.0
argocd.argoproj.io/sync-options: ServerSideApply=true
name: alertmanagers.monitoring.coreos.com
spec:
@@ -663,7 +663,7 @@ spec:
type: object
x-kubernetes-map-type: atomic
alertmanagerConfiguration:
description: 'EXPERIMENTAL: alertmanagerConfiguration specifies the configuration of Alertmanager. If defined, it takes precedence over the `configSecret` field. This field may change in future releases.'
description: "alertmanagerConfiguration specifies the configuration of Alertmanager. \n If defined, it takes precedence over the `configSecret` field. \n This is an *experimental feature*, it may change in any upcoming release in a breaking way."
properties:
global:
description: Defines the global parameters of the Alertmanager configuration.
@@ -1964,6 +1964,11 @@ spec:
- name
type: object
type: array
enableFeatures:
description: "Enable access to Alertmanager feature flags. By default, no features are enabled. Enabling features which are disabled by default is entirely outside the scope of what the maintainers will support and by doing so, you accept that this behaviour may break at any time without notice. \n It requires Alertmanager >= 0.27.0."
items:
type: string
type: array
externalUrl:
description: The external URL the Alertmanager instances will be available under. This is necessary to generate correct URLs. This is necessary if Alertmanager is not served from root of a DNS name.
type: string
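A sketch of the new `enableFeatures` field on an `Alertmanager` resource (the feature flag shown is an example; per the description above, non-default flags are enabled at your own risk):

```yaml
apiVersion: monitoring.coreos.com/v1
kind: Alertmanager
metadata:
  name: main
spec:
  replicas: 1
  # requires Alertmanager >= 0.27.0
  enableFeatures:
    - receiver-name-in-metrics
```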


@@ -1,11 +1,11 @@
# https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/v0.72.0/example/prometheus-operator-crd/monitoring.coreos.com_podmonitors.yaml
# https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/v0.73.0/example/prometheus-operator-crd/monitoring.coreos.com_podmonitors.yaml
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.13.0
operator.prometheus.io/version: 0.72.0
operator.prometheus.io/version: 0.73.0
argocd.argoproj.io/sync-options: ServerSideApply=true
name: podmonitors.monitoring.coreos.com
spec:
@@ -44,6 +44,10 @@ spec:
description: When set to true, Prometheus must have the `get` permission on the `Nodes` objects.
type: boolean
type: object
bodySizeLimit:
description: "When defined, bodySizeLimit specifies a job level limit on the size of uncompressed response body that will be accepted by Prometheus. \n It requires Prometheus >= v2.28.0."
pattern: (^0|([0-9]*[.])?[0-9]+((K|M|G|T|E|P)i?)?B)$
type: string
jobLabel:
description: "The label to use to retrieve the job name from. `jobLabel` selects the label from the associated Kubernetes `Pod` object which will be used as the `job` label for all metrics. \n For example if `jobLabel` is set to `foo` and the Kubernetes `Pod` object is labeled with `foo: bar`, then Prometheus adds the `job=\"bar\"` label to all ingested metrics. \n If the value of this field is empty, the `job` label of the metrics defaults to the namespace and name of the PodMonitor object (e.g. `<namespace>/<name>`)."
type: string
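A sketch of the new `bodySizeLimit` field on a `PodMonitor` (selector and port are hypothetical):

```yaml
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
  name: example
spec:
  bodySizeLimit: 10MB  # must match the size pattern above; requires Prometheus >= v2.28.0
  selector:
    matchLabels:
      app: example
  podMetricsEndpoints:
    - port: metrics
```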


@@ -1,11 +1,11 @@
# https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/v0.72.0/example/prometheus-operator-crd/monitoring.coreos.com_probes.yaml
# https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/v0.73.0/example/prometheus-operator-crd/monitoring.coreos.com_probes.yaml
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.13.0
operator.prometheus.io/version: 0.72.0
operator.prometheus.io/version: 0.73.0
argocd.argoproj.io/sync-options: ServerSideApply=true
name: probes.monitoring.coreos.com
spec:


@@ -1,11 +1,11 @@
# https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/v0.72.0/example/prometheus-operator-crd/monitoring.coreos.com_prometheusagents.yaml
# https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/v0.73.0/example/prometheus-operator-crd/monitoring.coreos.com_prometheusagents.yaml
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.13.0
operator.prometheus.io/version: 0.72.0
operator.prometheus.io/version: 0.73.0
argocd.argoproj.io/sync-options: ServerSideApply=true
name: prometheusagents.monitoring.coreos.com
spec:
@@ -1688,7 +1688,7 @@ spec:
description: "When not empty, a label will be added to \n 1. All metrics scraped from `ServiceMonitor`, `PodMonitor`, `Probe` and `ScrapeConfig` objects. 2. All metrics generated from recording rules defined in `PrometheusRule` objects. 3. All alerts generated from alerting rules defined in `PrometheusRule` objects. 4. All vector selectors of PromQL expressions defined in `PrometheusRule` objects. \n The label will not added for objects referenced in `spec.excludedFromEnforcement`. \n The label's name is this field's value. The label's value is the namespace of the `ServiceMonitor`, `PodMonitor`, `Probe` or `PrometheusRule` object."
type: string
enforcedSampleLimit:
description: "When defined, enforcedSampleLimit specifies a global limit on the number of scraped samples that will be accepted. This overrides any `spec.sampleLimit` set by ServiceMonitor, PodMonitor, Probe objects unless `spec.sampleLimit` is greater than zero and less than than `spec.enforcedSampleLimit`. \n It is meant to be used by admins to keep the overall number of samples/series under a desired limit."
description: "When defined, enforcedSampleLimit specifies a global limit on the number of scraped samples that will be accepted. This overrides any `spec.sampleLimit` set by ServiceMonitor, PodMonitor, Probe objects unless `spec.sampleLimit` is greater than zero and less than `spec.enforcedSampleLimit`. \n It is meant to be used by admins to keep the overall number of samples/series under a desired limit."
format: int64
type: integer
enforcedTargetLimit:
@@ -2742,7 +2742,7 @@ spec:
type: object
x-kubernetes-map-type: atomic
podMonitorSelector:
description: "*Experimental* PodMonitors to be selected for target discovery. An empty label selector matches all objects. A null label selector matches no objects. \n If `spec.serviceMonitorSelector`, `spec.podMonitorSelector`, `spec.probeSelector` and `spec.scrapeConfigSelector` are null, the Prometheus configuration is unmanaged. The Prometheus operator will ensure that the Prometheus configuration's Secret exists, but it is the responsibility of the user to provide the raw gzipped Prometheus configuration under the `prometheus.yaml.gz` key. This behavior is *deprecated* and will be removed in the next major version of the custom resource definition. It is recommended to use `spec.additionalScrapeConfigs` instead."
description: "PodMonitors to be selected for target discovery. An empty label selector matches all objects. A null label selector matches no objects. \n If `spec.serviceMonitorSelector`, `spec.podMonitorSelector`, `spec.probeSelector` and `spec.scrapeConfigSelector` are null, the Prometheus configuration is unmanaged. The Prometheus operator will ensure that the Prometheus configuration's Secret exists, but it is the responsibility of the user to provide the raw gzipped Prometheus configuration under the `prometheus.yaml.gz` key. This behavior is *deprecated* and will be removed in the next major version of the custom resource definition. It is recommended to use `spec.additionalScrapeConfigs` instead."
properties:
matchExpressions:
description: matchExpressions is a list of label selector requirements. The requirements are ANDed.
@@ -2785,7 +2785,7 @@ spec:
description: Priority class assigned to the Pods.
type: string
probeNamespaceSelector:
description: '*Experimental* Namespaces to match for Probe discovery. An empty label selector matches all namespaces. A null label selector matches the current namespace only.'
description: Namespaces to match for Probe discovery. An empty label selector matches all namespaces. A null label selector matches the current namespace only.
properties:
matchExpressions:
description: matchExpressions is a list of label selector requirements. The requirements are ANDed.
@@ -2816,7 +2816,7 @@ spec:
type: object
x-kubernetes-map-type: atomic
probeSelector:
description: "*Experimental* Probes to be selected for target discovery. An empty label selector matches all objects. A null label selector matches no objects. \n If `spec.serviceMonitorSelector`, `spec.podMonitorSelector`, `spec.probeSelector` and `spec.scrapeConfigSelector` are null, the Prometheus configuration is unmanaged. The Prometheus operator will ensure that the Prometheus configuration's Secret exists, but it is the responsibility of the user to provide the raw gzipped Prometheus configuration under the `prometheus.yaml.gz` key. This behavior is *deprecated* and will be removed in the next major version of the custom resource definition. It is recommended to use `spec.additionalScrapeConfigs` instead."
description: "Probes to be selected for target discovery. An empty label selector matches all objects. A null label selector matches no objects. \n If `spec.serviceMonitorSelector`, `spec.podMonitorSelector`, `spec.probeSelector` and `spec.scrapeConfigSelector` are null, the Prometheus configuration is unmanaged. The Prometheus operator will ensure that the Prometheus configuration's Secret exists, but it is the responsibility of the user to provide the raw gzipped Prometheus configuration under the `prometheus.yaml.gz` key. This behavior is *deprecated* and will be removed in the next major version of the custom resource definition. It is recommended to use `spec.additionalScrapeConfigs` instead."
properties:
matchExpressions:
description: matchExpressions is a list of label selector requirements. The requirements are ANDed.
@@ -3085,12 +3085,14 @@ spec:
properties:
batchSendDeadline:
description: BatchSendDeadline is the maximum time a sample will wait in buffer.
pattern: ^(0|(([0-9]+)y)?(([0-9]+)w)?(([0-9]+)d)?(([0-9]+)h)?(([0-9]+)m)?(([0-9]+)s)?(([0-9]+)ms)?)$
type: string
capacity:
description: Capacity is the number of samples to buffer per shard before we start dropping them.
type: integer
maxBackoff:
description: MaxBackoff is the maximum retry delay.
pattern: ^(0|(([0-9]+)y)?(([0-9]+)w)?(([0-9]+)d)?(([0-9]+)h)?(([0-9]+)m)?(([0-9]+)s)?(([0-9]+)ms)?)$
type: string
maxRetries:
description: MaxRetries is the maximum number of times to retry a batch on recoverable errors.
@@ -3103,13 +3105,18 @@ spec:
type: integer
minBackoff:
description: MinBackoff is the initial retry delay. Gets doubled for every retry.
pattern: ^(0|(([0-9]+)y)?(([0-9]+)w)?(([0-9]+)d)?(([0-9]+)h)?(([0-9]+)m)?(([0-9]+)s)?(([0-9]+)ms)?)$
type: string
minShards:
description: MinShards is the minimum number of shards, i.e. amount of concurrency.
type: integer
retryOnRateLimit:
description: Retry upon receiving a 429 status code from the remote-write storage. This is experimental feature and might change in the future.
description: "Retry upon receiving a 429 status code from the remote-write storage. \n This is an *experimental feature*, it may change in any upcoming release in a breaking way."
type: boolean
sampleAgeLimit:
description: SampleAgeLimit drops samples older than the limit. It requires Prometheus >= v2.50.0.
pattern: ^(0|(([0-9]+)y)?(([0-9]+)w)?(([0-9]+)d)?(([0-9]+)h)?(([0-9]+)m)?(([0-9]+)s)?(([0-9]+)ms)?)$
type: string
type: object
remoteTimeout:
description: Timeout for requests to the remote write endpoint.
@@ -3389,7 +3396,7 @@ spec:
format: int64
type: integer
scrapeClasses:
description: EXPERIMENTAL List of scrape classes to expose to monitors and other scrape configs. This is experimental feature and might change in the future.
description: "List of scrape classes to expose to scraping objects such as PodMonitors, ServiceMonitors, Probes and ScrapeConfigs. \n This is an *experimental feature*, it may change in any upcoming release in a breaking way."
items:
properties:
default:
@@ -3399,6 +3406,63 @@ spec:
description: Name of the scrape class.
minLength: 1
type: string
relabelings:
description: "Relabelings configures the relabeling rules to apply to all scrape targets. \n The Operator automatically adds relabelings for a few standard Kubernetes fields like `__meta_kubernetes_namespace` and `__meta_kubernetes_service_name`. Then the Operator adds the scrape class relabelings defined here. Then the Operator adds the target-specific relabelings defined in the scrape object. \n More info: https://prometheus.io/docs/prometheus/latest/configuration/configuration/#relabel_config"
items:
description: "RelabelConfig allows dynamic rewriting of the label set for targets, alerts, scraped samples and remote write samples. \n More info: https://prometheus.io/docs/prometheus/latest/configuration/configuration/#relabel_config"
properties:
action:
default: replace
description: "Action to perform based on the regex matching. \n `Uppercase` and `Lowercase` actions require Prometheus >= v2.36.0. `DropEqual` and `KeepEqual` actions require Prometheus >= v2.41.0. \n Default: \"Replace\""
enum:
- replace
- Replace
- keep
- Keep
- drop
- Drop
- hashmod
- HashMod
- labelmap
- LabelMap
- labeldrop
- LabelDrop
- labelkeep
- LabelKeep
- lowercase
- Lowercase
- uppercase
- Uppercase
- keepequal
- KeepEqual
- dropequal
- DropEqual
type: string
modulus:
description: "Modulus to take of the hash of the source label values. \n Only applicable when the action is `HashMod`."
format: int64
type: integer
regex:
description: Regular expression against which the extracted value is matched.
type: string
replacement:
description: "Replacement value against which a Replace action is performed if the regular expression matches. \n Regex capture groups are available."
type: string
separator:
description: Separator is the string between concatenated SourceLabels.
type: string
sourceLabels:
description: The source labels select values from existing labels. Their content is concatenated using the configured Separator and matched against the configured regular expression.
items:
description: LabelName is a valid Prometheus label name which may only contain ASCII letters, numbers, as well as underscores.
pattern: ^[a-zA-Z_][a-zA-Z0-9_]*$
type: string
type: array
targetLabel:
description: "Label to which the resulting string is written in a replacement. \n It is mandatory for `Replace`, `HashMod`, `Lowercase`, `Uppercase`, `KeepEqual` and `DropEqual` actions. \n Regex capture groups are available."
type: string
type: object
type: array
tlsConfig:
description: TLSConfig section for scrapes.
properties:
@@ -3514,7 +3578,7 @@ spec:
- name
x-kubernetes-list-type: map
scrapeConfigNamespaceSelector:
description: Namespaces to match for ScrapeConfig discovery. An empty label selector matches all namespaces. A null label selector matches the current current namespace only.
description: "Namespaces to match for ScrapeConfig discovery. An empty label selector matches all namespaces. A null label selector matches the current namespace only. \n Note that the ScrapeConfig custom resource definition is currently at Alpha level."
properties:
matchExpressions:
description: matchExpressions is a list of label selector requirements. The requirements are ANDed.
@@ -3545,7 +3609,7 @@ spec:
type: object
x-kubernetes-map-type: atomic
scrapeConfigSelector:
description: "*Experimental* ScrapeConfigs to be selected for target discovery. An empty label selector matches all objects. A null label selector matches no objects. \n If `spec.serviceMonitorSelector`, `spec.podMonitorSelector`, `spec.probeSelector` and `spec.scrapeConfigSelector` are null, the Prometheus configuration is unmanaged. The Prometheus operator will ensure that the Prometheus configuration's Secret exists, but it is the responsibility of the user to provide the raw gzipped Prometheus configuration under the `prometheus.yaml.gz` key. This behavior is *deprecated* and will be removed in the next major version of the custom resource definition. It is recommended to use `spec.additionalScrapeConfigs` instead."
description: "ScrapeConfigs to be selected for target discovery. An empty label selector matches all objects. A null label selector matches no objects. \n If `spec.serviceMonitorSelector`, `spec.podMonitorSelector`, `spec.probeSelector` and `spec.scrapeConfigSelector` are null, the Prometheus configuration is unmanaged. The Prometheus operator will ensure that the Prometheus configuration's Secret exists, but it is the responsibility of the user to provide the raw gzipped Prometheus configuration under the `prometheus.yaml.gz` key. This behavior is *deprecated* and will be removed in the next major version of the custom resource definition. It is recommended to use `spec.additionalScrapeConfigs` instead. \n Note that the ScrapeConfig custom resource definition is currently at Alpha level."
properties:
matchExpressions:
description: matchExpressions is a list of label selector requirements. The requirements are ANDed.
@@ -3755,7 +3819,7 @@ spec:
type: object
x-kubernetes-map-type: atomic
shards:
description: "EXPERIMENTAL: Number of shards to distribute targets onto. `spec.replicas` multiplied by `spec.shards` is the total number of Pods created. \n Note that scaling down shards will not reshard data onto remaining instances, it must be manually moved. Increasing shards will not reshard data either but it will continue to be available from the same instances. To query globally, use Thanos sidecar and Thanos querier or remote write data to a central location. \n Sharding is performed on the content of the `__address__` target meta-label for PodMonitors and ServiceMonitors and `__param_target__` for Probes. \n Default: 1"
description: "Number of shards to distribute targets onto. `spec.replicas` multiplied by `spec.shards` is the total number of Pods created. \n Note that scaling down shards will not reshard data onto remaining instances, it must be manually moved. Increasing shards will not reshard data either but it will continue to be available from the same instances. To query globally, use Thanos sidecar and Thanos querier or remote write data to a central location. \n Sharding is performed on the content of the `__address__` target meta-label for PodMonitors and ServiceMonitors and `__param_target__` for Probes. \n Default: 1"
format: int32
type: integer
storage:
@@ -4221,7 +4285,7 @@ spec:
type: object
type: array
tracingConfig:
description: 'EXPERIMENTAL: TracingConfig configures tracing in Prometheus. This is an experimental feature, it may change in any upcoming release in a breaking way.'
description: "TracingConfig configures tracing in Prometheus. \n This is an *experimental feature*, it may change in any upcoming release in a breaking way."
properties:
clientType:
description: Client used to export the traces. Supported values are `http` or `grpc`.
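The queue tuning knobs above now validate as Go durations, and `sampleAgeLimit` is new; a sketch on a `PrometheusAgent` (the endpoint is hypothetical):

```yaml
apiVersion: monitoring.coreos.com/v1alpha1
kind: PrometheusAgent
metadata:
  name: agent
spec:
  remoteWrite:
    - url: https://mimir.example.com/api/v1/push  # hypothetical endpoint
      queueConfig:
        minBackoff: 30ms       # must now match the duration pattern
        maxBackoff: 5s
        batchSendDeadline: 5s
        sampleAgeLimit: 30m    # new field; requires Prometheus >= v2.50.0
```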


@@ -1,11 +1,11 @@
# https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/v0.72.0/example/prometheus-operator-crd/monitoring.coreos.com_prometheuses.yaml
# https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/v0.73.0/example/prometheus-operator-crd/monitoring.coreos.com_prometheuses.yaml
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.13.0
operator.prometheus.io/version: 0.72.0
operator.prometheus.io/version: 0.73.0
argocd.argoproj.io/sync-options: ServerSideApply=true
name: prometheuses.monitoring.coreos.com
spec:
@@ -1991,7 +1991,7 @@ spec:
description: "When not empty, a label will be added to \n 1. All metrics scraped from `ServiceMonitor`, `PodMonitor`, `Probe` and `ScrapeConfig` objects. 2. All metrics generated from recording rules defined in `PrometheusRule` objects. 3. All alerts generated from alerting rules defined in `PrometheusRule` objects. 4. All vector selectors of PromQL expressions defined in `PrometheusRule` objects. \n The label will not added for objects referenced in `spec.excludedFromEnforcement`. \n The label's name is this field's value. The label's value is the namespace of the `ServiceMonitor`, `PodMonitor`, `Probe` or `PrometheusRule` object."
type: string
enforcedSampleLimit:
description: "When defined, enforcedSampleLimit specifies a global limit on the number of scraped samples that will be accepted. This overrides any `spec.sampleLimit` set by ServiceMonitor, PodMonitor, Probe objects unless `spec.sampleLimit` is greater than zero and less than than `spec.enforcedSampleLimit`. \n It is meant to be used by admins to keep the overall number of samples/series under a desired limit."
description: "When defined, enforcedSampleLimit specifies a global limit on the number of scraped samples that will be accepted. This overrides any `spec.sampleLimit` set by ServiceMonitor, PodMonitor, Probe objects unless `spec.sampleLimit` is greater than zero and less than `spec.enforcedSampleLimit`. \n It is meant to be used by admins to keep the overall number of samples/series under a desired limit."
format: int64
type: integer
enforcedTargetLimit:
@@ -3058,7 +3058,7 @@ spec:
type: object
x-kubernetes-map-type: atomic
podMonitorSelector:
description: "*Experimental* PodMonitors to be selected for target discovery. An empty label selector matches all objects. A null label selector matches no objects. \n If `spec.serviceMonitorSelector`, `spec.podMonitorSelector`, `spec.probeSelector` and `spec.scrapeConfigSelector` are null, the Prometheus configuration is unmanaged. The Prometheus operator will ensure that the Prometheus configuration's Secret exists, but it is the responsibility of the user to provide the raw gzipped Prometheus configuration under the `prometheus.yaml.gz` key. This behavior is *deprecated* and will be removed in the next major version of the custom resource definition. It is recommended to use `spec.additionalScrapeConfigs` instead."
description: "PodMonitors to be selected for target discovery. An empty label selector matches all objects. A null label selector matches no objects. \n If `spec.serviceMonitorSelector`, `spec.podMonitorSelector`, `spec.probeSelector` and `spec.scrapeConfigSelector` are null, the Prometheus configuration is unmanaged. The Prometheus operator will ensure that the Prometheus configuration's Secret exists, but it is the responsibility of the user to provide the raw gzipped Prometheus configuration under the `prometheus.yaml.gz` key. This behavior is *deprecated* and will be removed in the next major version of the custom resource definition. It is recommended to use `spec.additionalScrapeConfigs` instead."
properties:
matchExpressions:
description: matchExpressions is a list of label selector requirements. The requirements are ANDed.
@@ -3101,7 +3101,7 @@ spec:
description: Priority class assigned to the Pods.
type: string
probeNamespaceSelector:
description: '*Experimental* Namespaces to match for Probe discovery. An empty label selector matches all namespaces. A null label selector matches the current namespace only.'
description: Namespaces to match for Probe discovery. An empty label selector matches all namespaces. A null label selector matches the current namespace only.
properties:
matchExpressions:
description: matchExpressions is a list of label selector requirements. The requirements are ANDed.
@@ -3132,7 +3132,7 @@ spec:
type: object
x-kubernetes-map-type: atomic
probeSelector:
description: "*Experimental* Probes to be selected for target discovery. An empty label selector matches all objects. A null label selector matches no objects. \n If `spec.serviceMonitorSelector`, `spec.podMonitorSelector`, `spec.probeSelector` and `spec.scrapeConfigSelector` are null, the Prometheus configuration is unmanaged. The Prometheus operator will ensure that the Prometheus configuration's Secret exists, but it is the responsibility of the user to provide the raw gzipped Prometheus configuration under the `prometheus.yaml.gz` key. This behavior is *deprecated* and will be removed in the next major version of the custom resource definition. It is recommended to use `spec.additionalScrapeConfigs` instead."
description: "Probes to be selected for target discovery. An empty label selector matches all objects. A null label selector matches no objects. \n If `spec.serviceMonitorSelector`, `spec.podMonitorSelector`, `spec.probeSelector` and `spec.scrapeConfigSelector` are null, the Prometheus configuration is unmanaged. The Prometheus operator will ensure that the Prometheus configuration's Secret exists, but it is the responsibility of the user to provide the raw gzipped Prometheus configuration under the `prometheus.yaml.gz` key. This behavior is *deprecated* and will be removed in the next major version of the custom resource definition. It is recommended to use `spec.additionalScrapeConfigs` instead."
properties:
matchExpressions:
description: matchExpressions is a list of label selector requirements. The requirements are ANDed.
@@ -3730,12 +3730,14 @@ spec:
properties:
batchSendDeadline:
description: BatchSendDeadline is the maximum time a sample will wait in buffer.
pattern: ^(0|(([0-9]+)y)?(([0-9]+)w)?(([0-9]+)d)?(([0-9]+)h)?(([0-9]+)m)?(([0-9]+)s)?(([0-9]+)ms)?)$
type: string
capacity:
description: Capacity is the number of samples to buffer per shard before we start dropping them.
type: integer
maxBackoff:
description: MaxBackoff is the maximum retry delay.
pattern: ^(0|(([0-9]+)y)?(([0-9]+)w)?(([0-9]+)d)?(([0-9]+)h)?(([0-9]+)m)?(([0-9]+)s)?(([0-9]+)ms)?)$
type: string
maxRetries:
description: MaxRetries is the maximum number of times to retry a batch on recoverable errors.
@@ -3748,13 +3750,18 @@ spec:
type: integer
minBackoff:
description: MinBackoff is the initial retry delay. Gets doubled for every retry.
pattern: ^(0|(([0-9]+)y)?(([0-9]+)w)?(([0-9]+)d)?(([0-9]+)h)?(([0-9]+)m)?(([0-9]+)s)?(([0-9]+)ms)?)$
type: string
minShards:
description: MinShards is the minimum number of shards, i.e. amount of concurrency.
type: integer
retryOnRateLimit:
description: Retry upon receiving a 429 status code from the remote-write storage. This is experimental feature and might change in the future.
description: "Retry upon receiving a 429 status code from the remote-write storage. \n This is an *experimental feature*, it may change in any upcoming release in a breaking way."
type: boolean
sampleAgeLimit:
description: SampleAgeLimit drops samples older than the limit. It requires Prometheus >= v2.50.0.
pattern: ^(0|(([0-9]+)y)?(([0-9]+)w)?(([0-9]+)d)?(([0-9]+)h)?(([0-9]+)m)?(([0-9]+)s)?(([0-9]+)ms)?)$
type: string
type: object
remoteTimeout:
description: Timeout for requests to the remote write endpoint.
@@ -4121,7 +4128,7 @@ spec:
format: int64
type: integer
scrapeClasses:
description: EXPERIMENTAL List of scrape classes to expose to monitors and other scrape configs. This is experimental feature and might change in the future.
description: "List of scrape classes to expose to scraping objects such as PodMonitors, ServiceMonitors, Probes and ScrapeConfigs. \n This is an *experimental feature*, it may change in any upcoming release in a breaking way."
items:
properties:
default:
@@ -4131,6 +4138,63 @@ spec:
description: Name of the scrape class.
minLength: 1
type: string
relabelings:
description: "Relabelings configures the relabeling rules to apply to all scrape targets. \n The Operator automatically adds relabelings for a few standard Kubernetes fields like `__meta_kubernetes_namespace` and `__meta_kubernetes_service_name`. Then the Operator adds the scrape class relabelings defined here. Then the Operator adds the target-specific relabelings defined in the scrape object. \n More info: https://prometheus.io/docs/prometheus/latest/configuration/configuration/#relabel_config"
items:
description: "RelabelConfig allows dynamic rewriting of the label set for targets, alerts, scraped samples and remote write samples. \n More info: https://prometheus.io/docs/prometheus/latest/configuration/configuration/#relabel_config"
properties:
action:
default: replace
description: "Action to perform based on the regex matching. \n `Uppercase` and `Lowercase` actions require Prometheus >= v2.36.0. `DropEqual` and `KeepEqual` actions require Prometheus >= v2.41.0. \n Default: \"Replace\""
enum:
- replace
- Replace
- keep
- Keep
- drop
- Drop
- hashmod
- HashMod
- labelmap
- LabelMap
- labeldrop
- LabelDrop
- labelkeep
- LabelKeep
- lowercase
- Lowercase
- uppercase
- Uppercase
- keepequal
- KeepEqual
- dropequal
- DropEqual
type: string
modulus:
description: "Modulus to take of the hash of the source label values. \n Only applicable when the action is `HashMod`."
format: int64
type: integer
regex:
description: Regular expression against which the extracted value is matched.
type: string
replacement:
description: "Replacement value against which a Replace action is performed if the regular expression matches. \n Regex capture groups are available."
type: string
separator:
description: Separator is the string between concatenated SourceLabels.
type: string
sourceLabels:
description: The source labels select values from existing labels. Their content is concatenated using the configured Separator and matched against the configured regular expression.
items:
description: LabelName is a valid Prometheus label name which may only contain ASCII letters, numbers, as well as underscores.
pattern: ^[a-zA-Z_][a-zA-Z0-9_]*$
type: string
type: array
targetLabel:
description: "Label to which the resulting string is written in a replacement. \n It is mandatory for `Replace`, `HashMod`, `Lowercase`, `Uppercase`, `KeepEqual` and `DropEqual` actions. \n Regex capture groups are available."
type: string
type: object
type: array
tlsConfig:
description: TLSConfig section for scrapes.
properties:
@@ -4246,7 +4310,7 @@ spec:
- name
x-kubernetes-list-type: map
scrapeConfigNamespaceSelector:
description: Namespaces to match for ScrapeConfig discovery. An empty label selector matches all namespaces. A null label selector matches the current current namespace only.
description: "Namespaces to match for ScrapeConfig discovery. An empty label selector matches all namespaces. A null label selector matches the current namespace only. \n Note that the ScrapeConfig custom resource definition is currently at Alpha level."
properties:
matchExpressions:
description: matchExpressions is a list of label selector requirements. The requirements are ANDed.
@@ -4277,7 +4341,7 @@ spec:
type: object
x-kubernetes-map-type: atomic
scrapeConfigSelector:
description: "*Experimental* ScrapeConfigs to be selected for target discovery. An empty label selector matches all objects. A null label selector matches no objects. \n If `spec.serviceMonitorSelector`, `spec.podMonitorSelector`, `spec.probeSelector` and `spec.scrapeConfigSelector` are null, the Prometheus configuration is unmanaged. The Prometheus operator will ensure that the Prometheus configuration's Secret exists, but it is the responsibility of the user to provide the raw gzipped Prometheus configuration under the `prometheus.yaml.gz` key. This behavior is *deprecated* and will be removed in the next major version of the custom resource definition. It is recommended to use `spec.additionalScrapeConfigs` instead."
description: "ScrapeConfigs to be selected for target discovery. An empty label selector matches all objects. A null label selector matches no objects. \n If `spec.serviceMonitorSelector`, `spec.podMonitorSelector`, `spec.probeSelector` and `spec.scrapeConfigSelector` are null, the Prometheus configuration is unmanaged. The Prometheus operator will ensure that the Prometheus configuration's Secret exists, but it is the responsibility of the user to provide the raw gzipped Prometheus configuration under the `prometheus.yaml.gz` key. This behavior is *deprecated* and will be removed in the next major version of the custom resource definition. It is recommended to use `spec.additionalScrapeConfigs` instead. \n Note that the ScrapeConfig custom resource definition is currently at Alpha level."
properties:
matchExpressions:
description: matchExpressions is a list of label selector requirements. The requirements are ANDed.
@@ -4490,7 +4554,7 @@ spec:
description: 'Deprecated: use ''spec.image'' instead. The image''s digest can be specified as part of the image name.'
type: string
shards:
description: "EXPERIMENTAL: Number of shards to distribute targets onto. `spec.replicas` multiplied by `spec.shards` is the total number of Pods created. \n Note that scaling down shards will not reshard data onto remaining instances, it must be manually moved. Increasing shards will not reshard data either but it will continue to be available from the same instances. To query globally, use Thanos sidecar and Thanos querier or remote write data to a central location. \n Sharding is performed on the content of the `__address__` target meta-label for PodMonitors and ServiceMonitors and `__param_target__` for Probes. \n Default: 1"
description: "Number of shards to distribute targets onto. `spec.replicas` multiplied by `spec.shards` is the total number of Pods created. \n Note that scaling down shards will not reshard data onto remaining instances, it must be manually moved. Increasing shards will not reshard data either but it will continue to be available from the same instances. To query globally, use Thanos sidecar and Thanos querier or remote write data to a central location. \n Sharding is performed on the content of the `__address__` target meta-label for PodMonitors and ServiceMonitors and `__param_target__` for Probes. \n Default: 1"
format: int32
type: integer
storage:
@@ -4863,7 +4927,7 @@ spec:
format: int64
type: integer
thanos:
description: "Defines the configuration of the optional Thanos sidecar. \n This section is experimental, it may change significantly without deprecation notice in any release."
description: Defines the configuration of the optional Thanos sidecar.
properties:
additionalArgs:
description: AdditionalArgs allows setting additional arguments for the Thanos container. The arguments are passed as-is to the Thanos container which may cause issues if they are invalid or not supported the given Thanos version. In case of an argument conflict (e.g. an argument which is already set by the operator itself) or when providing an invalid argument, the reconciliation will fail and an error will be logged.
@@ -5102,7 +5166,7 @@ spec:
description: 'Deprecated: use ''image'' instead. The image''s tag can be specified as as part of the image name.'
type: string
tracingConfig:
description: "Defines the tracing configuration for the Thanos sidecar. \n More info: https://thanos.io/tip/thanos/tracing.md/ \n This is an experimental feature, it may change in any upcoming release in a breaking way. \n tracingConfigFile takes precedence over this field."
description: "Defines the tracing configuration for the Thanos sidecar. \n `tracingConfigFile` takes precedence over this field. \n More info: https://thanos.io/tip/thanos/tracing.md/ \n This is an *experimental feature*, it may change in any upcoming release in a breaking way."
properties:
key:
description: The key of the secret to select from. Must be a valid secret key.
@@ -5118,7 +5182,7 @@ spec:
type: object
x-kubernetes-map-type: atomic
tracingConfigFile:
description: "Defines the tracing configuration file for the Thanos sidecar. \n More info: https://thanos.io/tip/thanos/tracing.md/ \n This is an experimental feature, it may change in any upcoming release in a breaking way. \n This field takes precedence over tracingConfig."
description: "Defines the tracing configuration file for the Thanos sidecar. \n This field takes precedence over `tracingConfig`. \n More info: https://thanos.io/tip/thanos/tracing.md/ \n This is an *experimental feature*, it may change in any upcoming release in a breaking way."
type: string
version:
description: "Version of Thanos being deployed. The operator uses this information to generate the Prometheus StatefulSet + configuration files. \n If not specified, the operator assumes the latest upstream release of Thanos available at the time when the version of the operator was released."
@@ -5249,7 +5313,7 @@ spec:
type: object
type: array
tracingConfig:
description: 'EXPERIMENTAL: TracingConfig configures tracing in Prometheus. This is an experimental feature, it may change in any upcoming release in a breaking way.'
description: "TracingConfig configures tracing in Prometheus. \n This is an *experimental feature*, it may change in any upcoming release in a breaking way."
properties:
clientType:
description: Client used to export the traces. Supported values are `http` or `grpc`.
@@ -5399,7 +5463,7 @@ spec:
description: Defines the runtime reloadable configuration of the timeseries database (TSDB).
properties:
outOfOrderTimeWindow:
description: "Configures how old an out-of-order/out-of-bounds sample can be with respect to the TSDB max time. \n An out-of-order/out-of-bounds sample is ingested into the TSDB as long as the timestamp of the sample is >= (TSDB.MaxTime - outOfOrderTimeWindow). \n Out of order ingestion is an experimental feature. \n It requires Prometheus >= v2.39.0."
description: "Configures how old an out-of-order/out-of-bounds sample can be with respect to the TSDB max time. \n An out-of-order/out-of-bounds sample is ingested into the TSDB as long as the timestamp of the sample is >= (TSDB.MaxTime - outOfOrderTimeWindow). \n This is an *experimental feature*, it may change in any upcoming release in a breaking way. \n It requires Prometheus >= v2.39.0."
pattern: ^(0|(([0-9]+)y)?(([0-9]+)w)?(([0-9]+)d)?(([0-9]+)h)?(([0-9]+)m)?(([0-9]+)s)?(([0-9]+)ms)?)$
type: string
type: object
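A sketch of the new `scrapeClasses` relabelings on a `Prometheus` resource (class name and target label are hypothetical; `action` values come from the enum above):

```yaml
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: k8s
spec:
  scrapeClasses:
    - name: default-relabeling  # hypothetical class name
      default: true             # used by scrape objects that name no class
      relabelings:
        - action: replace
          targetLabel: cluster  # hypothetical label
          replacement: prod-eu1
```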


@@ -1,11 +1,11 @@
# https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/v0.72.0/example/prometheus-operator-crd/monitoring.coreos.com_prometheusrules.yaml
# https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/v0.73.0/example/prometheus-operator-crd/monitoring.coreos.com_prometheusrules.yaml
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.13.0
operator.prometheus.io/version: 0.72.0
operator.prometheus.io/version: 0.73.0
argocd.argoproj.io/sync-options: ServerSideApply=true
name: prometheusrules.monitoring.coreos.com
spec:


@@ -1,11 +1,11 @@
# https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/v0.72.0/example/prometheus-operator-crd/monitoring.coreos.com_servicemonitors.yaml
# https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/v0.73.0/example/prometheus-operator-crd/monitoring.coreos.com_servicemonitors.yaml
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.13.0
operator.prometheus.io/version: 0.72.0
operator.prometheus.io/version: 0.73.0
argocd.argoproj.io/sync-options: ServerSideApply=true
name: servicemonitors.monitoring.coreos.com
spec:
@@ -44,6 +44,10 @@ spec:
description: When set to true, Prometheus must have the `get` permission on the `Nodes` objects.
type: boolean
type: object
bodySizeLimit:
description: "When defined, bodySizeLimit specifies a job level limit on the size of uncompressed response body that will be accepted by Prometheus. \n It requires Prometheus >= v2.28.0."
pattern: (^0|([0-9]*[.])?[0-9]+((K|M|G|T|E|P)i?)?B)$
type: string
endpoints:
description: List of endpoints part of this ServiceMonitor.
items:


@@ -1,11 +1,11 @@
# https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/v0.72.0/example/prometheus-operator-crd/monitoring.coreos.com_thanosrulers.yaml
# https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/v0.73.0/example/prometheus-operator-crd/monitoring.coreos.com_thanosrulers.yaml
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.13.0
operator.prometheus.io/version: 0.72.0
operator.prometheus.io/version: 0.73.0
argocd.argoproj.io/sync-options: ServerSideApply=true
name: thanosrulers.monitoring.coreos.com
spec:
@@ -3295,7 +3295,7 @@ spec:
type: object
type: array
tracingConfig:
description: TracingConfig configures tracing in Thanos. This is an experimental feature, it may change in any upcoming release in a breaking way.
description: "TracingConfig configures tracing in Thanos. \n `tracingConfigFile` takes precedence over this field. \n This is an *experimental feature*, it may change in any upcoming release in a breaking way."
properties:
key:
description: The key of the secret to select from. Must be a valid secret key.
@@ -3311,7 +3311,7 @@ spec:
type: object
x-kubernetes-map-type: atomic
tracingConfigFile:
description: TracingConfig specifies the path of the tracing configuration file. When used alongside with TracingConfig, TracingConfigFile takes precedence.
description: "TracingConfig specifies the path of the tracing configuration file. \n This field takes precedence over `tracingConfig`. \n This is an *experimental feature*, it may change in any upcoming release in a breaking way."
type: string
version:
description: Version of Thanos to be deployed.
@@ -4332,6 +4332,160 @@ spec:
- name
type: object
type: array
web:
description: Defines the configuration of the ThanosRuler web server.
properties:
httpConfig:
description: Defines HTTP parameters for web server.
properties:
headers:
description: List of headers that can be added to HTTP responses.
properties:
contentSecurityPolicy:
description: Set the Content-Security-Policy header to HTTP responses. Unset if blank.
type: string
strictTransportSecurity:
description: Set the Strict-Transport-Security header to HTTP responses. Unset if blank. Please make sure that you use this with care as this header might force browsers to load Prometheus and the other applications hosted on the same domain and subdomains over HTTPS. https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Strict-Transport-Security
type: string
xContentTypeOptions:
description: Set the X-Content-Type-Options header to HTTP responses. Unset if blank. Accepted value is nosniff. https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/X-Content-Type-Options
enum:
- ""
- NoSniff
type: string
xFrameOptions:
description: Set the X-Frame-Options header to HTTP responses. Unset if blank. Accepted values are deny and sameorigin. https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/X-Frame-Options
enum:
- ""
- Deny
- SameOrigin
type: string
xXSSProtection:
description: Set the X-XSS-Protection header to all responses. Unset if blank. https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/X-XSS-Protection
type: string
type: object
http2:
description: Enable HTTP/2 support. Note that HTTP/2 is only supported with TLS. When TLSConfig is not configured, HTTP/2 will be disabled. Whenever the value of the field changes, a rolling update will be triggered.
type: boolean
type: object
tlsConfig:
description: Defines the TLS parameters for HTTPS.
properties:
cert:
description: Contains the TLS certificate for the server.
properties:
configMap:
description: ConfigMap containing data to use for the targets.
properties:
key:
description: The key to select.
type: string
name:
description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid?'
type: string
optional:
description: Specify whether the ConfigMap or its key must be defined
type: boolean
required:
- key
type: object
x-kubernetes-map-type: atomic
secret:
description: Secret containing data to use for the targets.
properties:
key:
description: The key of the secret to select from. Must be a valid secret key.
type: string
name:
description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid?'
type: string
optional:
description: Specify whether the Secret or its key must be defined
type: boolean
required:
- key
type: object
x-kubernetes-map-type: atomic
type: object
cipherSuites:
description: 'List of supported cipher suites for TLS versions up to TLS 1.2. If empty, Go default cipher suites are used. Available cipher suites are documented in the go documentation: https://golang.org/pkg/crypto/tls/#pkg-constants'
items:
type: string
type: array
client_ca:
description: Contains the CA certificate for client certificate authentication to the server.
properties:
configMap:
description: ConfigMap containing data to use for the targets.
properties:
key:
description: The key to select.
type: string
name:
description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid?'
type: string
optional:
description: Specify whether the ConfigMap or its key must be defined
type: boolean
required:
- key
type: object
x-kubernetes-map-type: atomic
secret:
description: Secret containing data to use for the targets.
properties:
key:
description: The key of the secret to select from. Must be a valid secret key.
type: string
name:
description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid?'
type: string
optional:
description: Specify whether the Secret or its key must be defined
type: boolean
required:
- key
type: object
x-kubernetes-map-type: atomic
type: object
clientAuthType:
description: 'Server policy for client authentication. Maps to ClientAuth Policies. For more detail on clientAuth options: https://golang.org/pkg/crypto/tls/#ClientAuthType'
type: string
curvePreferences:
description: 'Elliptic curves that will be used in an ECDHE handshake, in preference order. Available curves are documented in the go documentation: https://golang.org/pkg/crypto/tls/#CurveID'
items:
type: string
type: array
keySecret:
description: Secret containing the TLS key for the server.
properties:
key:
description: The key of the secret to select from. Must be a valid secret key.
type: string
name:
description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid?'
type: string
optional:
description: Specify whether the Secret or its key must be defined
type: boolean
required:
- key
type: object
x-kubernetes-map-type: atomic
maxVersion:
description: Maximum TLS version that is acceptable. Defaults to TLS13.
type: string
minVersion:
description: Minimum TLS version that is acceptable. Defaults to TLS12.
type: string
preferServerCipherSuites:
description: Controls whether the server selects the client's most preferred cipher suite, or the server's most preferred cipher suite. If true then the server's preference, as expressed in the order of elements in cipherSuites, is used.
type: boolean
required:
- cert
- keySecret
type: object
type: object
type: object
status:
description: 'Most recent observed status of the ThanosRuler cluster. Read-only. More info: https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/api-conventions.md#spec-and-status'
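A sketch of the new `web` block on a `ThanosRuler` (only the web server settings are shown; Secret names are hypothetical, and `cert` plus `keySecret` are the required keys per the schema above):

```yaml
apiVersion: monitoring.coreos.com/v1
kind: ThanosRuler
metadata:
  name: thanos-ruler
spec:
  web:
    httpConfig:
      headers:
        xFrameOptions: Deny
    tlsConfig:
      cert:
        secret:
          name: thanos-ruler-tls  # hypothetical TLS Secret
          key: tls.crt
      keySecret:
        name: thanos-ruler-tls
        key: tls.key
      minVersion: TLS12
```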


@@ -4,7 +4,7 @@ annotations:
- name: Chart Source
url: https://github.com/prometheus-community/helm-charts
apiVersion: v2
appVersion: 2.11.0
appVersion: 2.12.0
description: Install kube-state-metrics to generate and expose cluster-level metrics
home: https://github.com/kubernetes/kube-state-metrics/
keywords:
@@ -23,4 +23,4 @@ name: kube-state-metrics
sources:
- https://github.com/kubernetes/kube-state-metrics/
type: application
version: 5.18.0
version: 5.18.1


@@ -3350,7 +3350,7 @@ prometheus:
image:
registry: quay.io
repository: prometheus/prometheus
tag: v2.51.0
tag: v2.51.1
sha: ""
## Tolerations for use with node taints


@@ -18,7 +18,7 @@
"subdir": "contrib/mixin"
}
},
"version": "984903b16eb72fdf989ffe0959fd477c05fde8b5",
"version": "65ac859a1b613a3b1b509cd80400f9fcbeae97d6",
"sum": "xuUBd2vqF7asyVDe5CE08uPT/RxAdy8O75EjFJoMXXU="
},
{
@@ -88,7 +88,7 @@
"subdir": "grafana-builder"
}
},
"version": "f95501009c9b29bed87fe9d57c1a6e72e210f137",
"version": "b5e3f0ecb726452a92f68c5eeb983c9d972cb051",
"sum": "+z5VY+bPBNqXcmNAV8xbJcbsRA+pro1R3IM7aIY8OlU="
},
{
@@ -108,8 +108,8 @@
"subdir": ""
}
},
"version": "fc2e57a8839902ed4ba6cab5a99d642500f7102b",
"sum": "43waffw1QzvpY4rKcWoo3L7Vpee+DCYexwLDd5cPG0M="
"version": "63d430b69a95741061c2f7fc9d84b1a778511d9c",
"sum": "qiZi3axUSXCVzKUF83zSAxklwrnitMmrDK4XAfjPMdE="
},
{
"source": {
@@ -118,8 +118,8 @@
"subdir": ""
}
},
"version": "346bef2584068e803757e12c4ee4814e72a67927",
"sum": "SvyGvJFtM/grpOAXtN3rMwHNDjLFcbP83ogJ1CCfvRc="
"version": "b247371d1780f530587a8d9dd04ccb19ea970ba0",
"sum": "7M2QHK3WhOc1xT7T7KhL9iKsCYTfsIXpmcItffAcbL0="
},
{
"source": {
@ -128,7 +128,7 @@
"subdir": "jsonnet/kube-state-metrics"
}
},
"version": "20895032eb3094e9e0c1c8e54e0efdcc055a8ca2",
"version": "9e855147a20f2539b0b8c3ea1aa7cd761c104797",
"sum": "msMZyUvcebzRILLzNlTIiSOwa1XgQKtP7jbZTkiqwM0="
},
{
@ -138,7 +138,7 @@
"subdir": "jsonnet/kube-state-metrics-mixin"
}
},
"version": "20895032eb3094e9e0c1c8e54e0efdcc055a8ca2",
"version": "9e855147a20f2539b0b8c3ea1aa7cd761c104797",
"sum": "qclI7LwucTjBef3PkGBkKxF0mfZPbHnn4rlNWKGtR4c="
},
{
@ -158,7 +158,7 @@
"subdir": "jsonnet/mixin"
}
},
"version": "d70313bd17cf2a4b911222062608f793be146548",
"version": "06bdd34e7691d13b560cf1694561c5777216472b",
"sum": "gi+knjdxs2T715iIQIntrimbHRgHnpM8IFBJDD1gYfs=",
"name": "prometheus-operator-mixin"
},
@ -169,8 +169,8 @@
"subdir": "jsonnet/prometheus-operator"
}
},
"version": "d70313bd17cf2a4b911222062608f793be146548",
"sum": "5yo+BonL/T9gNS4nUhcM3ymoQ9om0REMi9ZB14kMTfg="
"version": "06bdd34e7691d13b560cf1694561c5777216472b",
"sum": "uZ0NldrHp01uGnOYEKB+Nq8W97bkf4EfMP9ePWIG+wk="
},
{
"source": {
@ -200,7 +200,7 @@
"subdir": "documentation/prometheus-mixin"
}
},
"version": "113938aeb894e60c5706ff9ca993344a990a96e7",
"version": "633224886a1c975dd3a8a8308a0b1d630048a21c",
"sum": "u/Fpz2MPkezy71/q+c7mF0vc3hE9fWt2W/YbvF0LP/8=",
"name": "prometheus"
},
@ -222,7 +222,7 @@
"subdir": "mixin"
}
},
"version": "f80fd94732238797f1b58d5d08de85a341ffffd1",
"version": "f7853dd12cc228960e24c78c10154099a9aeaec8",
"sum": "HhSSbGGCNHCMy1ee5jElYDm0yS9Vesa7QB2/SHKdjsY=",
"name": "thanos-mixin"
}


@ -84,6 +84,11 @@ cilium:
tolerations:
- key: node-role.kubernetes.io/control-plane
effect: NoSchedule
# the Cilium operator removes this taint once the agent is ready,
# so we need to break the chicken-and-egg problem on a single controller
- key: node.cilium.io/agent-not-ready
effect: NoSchedule
nodeSelector:
node-role.kubernetes.io/control-plane: ""
prometheus:


@ -2,7 +2,7 @@ apiVersion: v2
name: kubezero-operators
description: Various operators supported by KubeZero
type: application
version: 0.1.2
version: 0.1.3
home: https://kubezero.com
icon: https://cdn.zero-downtime.net/assets/kubezero/logo-small-64.png
keywords:
@ -17,7 +17,7 @@ dependencies:
version: ">= 0.1.6"
repository: https://cdn.zero-downtime.net/charts/
- name: opensearch-operator
version: 2.5.1
version: 2.6.0
repository: https://opensearch-project.github.io/opensearch-k8s-operator/
condition: opensearch-operator.enabled
- name: eck-operator


@ -1,6 +1,6 @@
# kubezero-operators
![Version: 0.1.2](https://img.shields.io/badge/Version-0.1.2-informational?style=flat-square) ![Type: application](https://img.shields.io/badge/Type-application-informational?style=flat-square)
![Version: 0.1.3](https://img.shields.io/badge/Version-0.1.3-informational?style=flat-square) ![Type: application](https://img.shields.io/badge/Type-application-informational?style=flat-square)
Various operators supported by KubeZero
@ -20,7 +20,7 @@ Kubernetes: `>= 1.26.0`
|------------|------|---------|
| https://cdn.zero-downtime.net/charts/ | kubezero-lib | >= 0.1.6 |
| https://helm.elastic.co | eck-operator | 2.12.1 |
| https://opensearch-project.github.io/opensearch-k8s-operator/ | opensearch-operator | 2.5.1 |
| https://opensearch-project.github.io/opensearch-k8s-operator/ | opensearch-operator | 2.6.0 |
## Values


@ -2,7 +2,7 @@ apiVersion: v2
name: kubezero-storage
description: KubeZero umbrella chart for all things storage incl. AWS EBS/EFS, openEBS-lvm, gemini
type: application
version: 0.8.6
version: 0.8.7
home: https://kubezero.com
icon: https://cdn.zero-downtime.net/assets/kubezero/logo-small-64.png
keywords:
@ -28,7 +28,7 @@ dependencies:
condition: aws-ebs-csi-driver.enabled
repository: https://kubernetes-sigs.github.io/aws-ebs-csi-driver
- name: aws-efs-csi-driver
version: 2.5.6
version: 2.5.7
condition: aws-efs-csi-driver.enabled
repository: https://kubernetes-sigs.github.io/aws-efs-csi-driver
- name: gemini


@ -1,4 +1,8 @@
# Helm chart
## v2.29.1
* Bump driver version to `v1.29.1`
* Remove `--reuse-values` deprecation warning
## v2.29.0
### Urgent Upgrade Notes
*(No, really, you MUST read this before you upgrade)*


@ -1,5 +1,5 @@
apiVersion: v2
appVersion: 1.29.0
appVersion: 1.29.1
description: A Helm chart for AWS EBS CSI Driver
home: https://github.com/kubernetes-sigs/aws-ebs-csi-driver
keywords:
@ -13,4 +13,4 @@ maintainers:
name: aws-ebs-csi-driver
sources:
- https://github.com/kubernetes-sigs/aws-ebs-csi-driver
version: 2.29.0
version: 2.29.1


@ -3,5 +3,3 @@ To verify that aws-ebs-csi-driver has started, run:
kubectl get pod -n {{ .Release.Namespace }} -l "app.kubernetes.io/name={{ include "aws-ebs-csi-driver.name" . }},app.kubernetes.io/instance={{ .Release.Name }}"
NOTE: The [CSI Snapshotter](https://github.com/kubernetes-csi/external-snapshotter) controller and CRDs will no longer be installed as part of this chart and, moving forward, will be a prerequisite for using the snapshotting functionality.
WARNING: Upgrading the EBS CSI Driver Helm chart with --reuse-values will no longer be supported in a future release. For more information, see https://github.com/kubernetes-sigs/aws-ebs-csi-driver/issues/1864
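Given that warning, a safer upgrade pattern is to pass your full overrides explicitly instead of relying on `--reuse-values`; a sketch, assuming the usual `aws-ebs-csi-driver` repo alias and overrides kept in a local `values.yaml`:

```sh
helm repo update
helm upgrade aws-ebs-csi-driver aws-ebs-csi-driver/aws-ebs-csi-driver \
  --namespace kube-system \
  -f values.yaml
```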


@ -1,4 +1,6 @@
# Helm chart
# v2.5.7
* Bump app/driver version to `v1.7.7`
# v2.5.6
* Bump app/driver version to `v1.7.6`
# v2.5.5


@ -1,5 +1,5 @@
apiVersion: v2
appVersion: 1.7.6
appVersion: 1.7.7
description: A Helm chart for AWS EFS CSI Driver
home: https://github.com/kubernetes-sigs/aws-efs-csi-driver
keywords:
@ -15,4 +15,4 @@ maintainers:
name: aws-efs-csi-driver
sources:
- https://github.com/kubernetes-sigs/aws-efs-csi-driver
version: 2.5.6
version: 2.5.7


@ -11,7 +11,7 @@ useFIPS: false
image:
repository: amazon/aws-efs-csi-driver
tag: "v1.7.6"
tag: "v1.7.7"
pullPolicy: IfNotPresent
sidecars:


@ -18,7 +18,7 @@
"subdir": "contrib/mixin"
}
},
"version": "9359aef3e3dd39b7bbf57cab4b6899a238af3144",
"version": "65ac859a1b613a3b1b509cd80400f9fcbeae97d6",
"sum": "xuUBd2vqF7asyVDe5CE08uPT/RxAdy8O75EjFJoMXXU="
},
{
@ -88,7 +88,7 @@
"subdir": "grafana-builder"
}
},
"version": "7561fd330312538d22b00e0c7caecb4ba66321ea",
"version": "b5e3f0ecb726452a92f68c5eeb983c9d972cb051",
"sum": "+z5VY+bPBNqXcmNAV8xbJcbsRA+pro1R3IM7aIY8OlU="
},
{
@ -108,8 +108,8 @@
"subdir": ""
}
},
"version": "fc2e57a8839902ed4ba6cab5a99d642500f7102b",
"sum": "43waffw1QzvpY4rKcWoo3L7Vpee+DCYexwLDd5cPG0M="
"version": "63d430b69a95741061c2f7fc9d84b1a778511d9c",
"sum": "qiZi3axUSXCVzKUF83zSAxklwrnitMmrDK4XAfjPMdE="
},
{
"source": {
@ -118,8 +118,8 @@
"subdir": ""
}
},
"version": "a1c276d7a46c4b06fa5d8b4a64441939d398efe5",
"sum": "b/mEai1MvVnZ22YvZlXEO4jWDZledrtJg8eOS1ZUj0M="
"version": "b247371d1780f530587a8d9dd04ccb19ea970ba0",
"sum": "7M2QHK3WhOc1xT7T7KhL9iKsCYTfsIXpmcItffAcbL0="
},
{
"source": {
@ -128,7 +128,7 @@
"subdir": "jsonnet/kube-state-metrics"
}
},
"version": "9ba1c3702142918e09e8eb5ca530e15198624259",
"version": "9e855147a20f2539b0b8c3ea1aa7cd761c104797",
"sum": "msMZyUvcebzRILLzNlTIiSOwa1XgQKtP7jbZTkiqwM0="
},
{
@ -138,7 +138,7 @@
"subdir": "jsonnet/kube-state-metrics-mixin"
}
},
"version": "9ba1c3702142918e09e8eb5ca530e15198624259",
"version": "9e855147a20f2539b0b8c3ea1aa7cd761c104797",
"sum": "qclI7LwucTjBef3PkGBkKxF0mfZPbHnn4rlNWKGtR4c="
},
{
@ -168,7 +168,7 @@
"subdir": "jsonnet/mixin"
}
},
"version": "8f8464b41775e13c71c2700799352a3dcd82f528",
"version": "06bdd34e7691d13b560cf1694561c5777216472b",
"sum": "gi+knjdxs2T715iIQIntrimbHRgHnpM8IFBJDD1gYfs=",
"name": "prometheus-operator-mixin"
},
@ -179,8 +179,8 @@
"subdir": "jsonnet/prometheus-operator"
}
},
"version": "8f8464b41775e13c71c2700799352a3dcd82f528",
"sum": "/xycwh6lbet/dMzqZHJjSv6AfBEAQPAgk+1usi3d3W4="
"version": "06bdd34e7691d13b560cf1694561c5777216472b",
"sum": "uZ0NldrHp01uGnOYEKB+Nq8W97bkf4EfMP9ePWIG+wk="
},
{
"source": {
@ -200,7 +200,7 @@
"subdir": "docs/node-mixin"
}
},
"version": "6425f079d162ebd22d4c6c4e4d7e4a36ebbe2239",
"version": "b6227af54b20d147463e1672a3e8bfca47fa10ee",
"sum": "vWhHvFqV7+fxrQddTeGVKi1e4EzB3VWtNyD8TjSmevY="
},
{
@ -210,7 +210,7 @@
"subdir": "documentation/prometheus-mixin"
}
},
"version": "bfaa0a319ceca0814b076072a61cc1640e6a4f36",
"version": "633224886a1c975dd3a8a8308a0b1d630048a21c",
"sum": "u/Fpz2MPkezy71/q+c7mF0vc3hE9fWt2W/YbvF0LP/8=",
"name": "prometheus"
},
@ -232,7 +232,7 @@
"subdir": "mixin"
}
},
"version": "4a2a4555d24665a52c3ed43e007301dd492af9b3",
"version": "f7853dd12cc228960e24c78c10154099a9aeaec8",
"sum": "HhSSbGGCNHCMy1ee5jElYDm0yS9Vesa7QB2/SHKdjsY=",
"name": "thanos-mixin"
}


@ -2,7 +2,7 @@ apiVersion: v2
name: kubezero-telemetry
description: KubeZero Umbrella Chart for OpenTelemetry, Jaeger etc.
type: application
version: 0.2.0
version: 0.2.4
home: https://kubezero.com
icon: https://cdn.zero-downtime.net/assets/kubezero/logo-small-64.png
keywords:
@ -18,11 +18,15 @@ dependencies:
version: ">= 0.1.6"
repository: https://cdn.zero-downtime.net/charts/
- name: opentelemetry-collector
version: 0.86.0
version: 0.91.0
repository: https://open-telemetry.github.io/opentelemetry-helm-charts
condition: opentelemetry-collector.enabled
- name: jaeger
version: 2.0.1
version: 3.0.7
repository: https://jaegertracing.github.io/helm-charts
condition: jaeger.enabled
- name: fluentd
version: 0.5.2
repository: https://fluent.github.io/helm-charts
condition: fluentd.enabled
kubeVersion: ">= 1.26.0"


@ -0,0 +1,62 @@
# kubezero-telemetry
![Version: 0.2.4](https://img.shields.io/badge/Version-0.2.4-informational?style=flat-square) ![Type: application](https://img.shields.io/badge/Type-application-informational?style=flat-square)
KubeZero Umbrella Chart for OpenTelemetry, Jaeger etc.
**Homepage:** <https://kubezero.com>
## Maintainers
| Name | Email | Url |
| ---- | ------ | --- |
| Stefan Reimer | <stefan@zero-downtime.net> | |
## Requirements
Kubernetes: `>= 1.26.0`
| Repository | Name | Version |
|------------|------|---------|
| https://cdn.zero-downtime.net/charts/ | kubezero-lib | >= 0.1.6 |
| https://fluent.github.io/helm-charts | fluentd | 0.5.2 |
| https://jaegertracing.github.io/helm-charts | jaeger | 3.0.7 |
| https://open-telemetry.github.io/opentelemetry-helm-charts | opentelemetry-collector | 0.91.0 |
## Values
| Key | Type | Default | Description |
|-----|------|---------|-------------|
| jaeger.agent.enabled | bool | `false` | |
| jaeger.collector.service.otlp.grpc.name | string | `"otlp-grpc"` | |
| jaeger.collector.service.otlp.grpc.port | int | `4317` | |
| jaeger.collector.service.otlp.http.name | string | `"otlp-http"` | |
| jaeger.collector.service.otlp.http.port | int | `4318` | |
| jaeger.collector.serviceMonitor.enabled | bool | `false` | |
| jaeger.enabled | bool | `false` | |
| jaeger.istio.enabled | bool | `false` | |
| jaeger.istio.gateway | string | `"istio-ingress/private-ingressgateway"` | |
| jaeger.istio.url | string | `"jaeger.example.com"` | |
| jaeger.provisionDataStore.cassandra | bool | `false` | |
| jaeger.provisionDataStore.elasticsearch | bool | `false` | |
| jaeger.query.agentSidecar.enabled | bool | `false` | |
| jaeger.query.serviceMonitor.enabled | bool | `false` | |
| jaeger.storage.elasticsearch.cmdlineParams."es.tls.enabled" | string | `""` | |
| jaeger.storage.elasticsearch.cmdlineParams."es.tls.skip-host-verify" | string | `""` | |
| jaeger.storage.elasticsearch.host | string | `"telemetry"` | |
| jaeger.storage.elasticsearch.password | string | `"admin"` | |
| jaeger.storage.elasticsearch.scheme | string | `"https"` | |
| jaeger.storage.elasticsearch.user | string | `"admin"` | |
| jaeger.storage.type | string | `"elasticsearch"` | |
| opensearch.dashboard.enabled | bool | `false` | |
| opensearch.dashboard.istio.enabled | bool | `false` | |
| opensearch.dashboard.istio.gateway | string | `"istio-ingress/private-ingressgateway"` | |
| opensearch.dashboard.istio.url | string | `"telemetry-dashboard.example.com"` | |
| opensearch.nodeSets | list | `[]` | |
| opensearch.prometheus | bool | `false` | |
| opensearch.version | string | `"2.13.0"` | |
| opentelemetry-collector.enabled | bool | `false` | |
| opentelemetry-collector.mode | string | `"deployment"` | |
----------------------------------------------
Autogenerated from chart metadata using [helm-docs v1.11.0](https://github.com/norwoodj/helm-docs/releases/v1.11.0)
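For orientation, a minimal override enabling the collector and Jaeger against the elasticsearch-compatible storage might look as follows (a sketch assembled from the defaults in the table above; the `admin`/`admin` credentials are the chart's placeholders, not production values):

```yaml
opentelemetry-collector:
  enabled: true
  mode: deployment

jaeger:
  enabled: true
  storage:
    type: elasticsearch
    elasticsearch:
      host: telemetry
      scheme: https
      user: admin       # placeholder default from the table above
      password: admin   # replace via a proper secret in practice
  istio:
    enabled: true
    url: jaeger.example.com
```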


@ -0,0 +1,23 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*.orig
*~
# Various IDEs
.project
.idea/
*.tmproj
.vscode/


@ -0,0 +1,15 @@
apiVersion: v2
appVersion: v1.16.2
description: A Helm chart for Kubernetes
home: https://www.fluentd.org/
icon: https://www.fluentd.org/images/miscellany/fluentd-logo_2x.png
maintainers:
- email: eduardo@treasure-data.com
name: edsiper
- email: diogo.filipe.tomas.guerra@cern.ch
name: dioguerra
name: fluentd
sources:
- https://github.com/fluent/fluentd/
- https://github.com/fluent/fluentd-kubernetes-daemonset
version: 0.5.2


@ -0,0 +1,187 @@
# Fluentd Helm Chart
[Fluentd](https://www.fluentd.org/) is an open source data collector for a unified logging layer. Fluentd allows you to unify data collection and consumption for better use and understanding of data.
## Installation
To add the `fluent` helm repo, run:
```sh
helm repo add fluent https://fluent.github.io/helm-charts
helm repo update
```
To install a release named `fluentd`, run:
```sh
helm install fluentd fluent/fluentd
```
## Upgrading
### To 0.4.0
Although the services will deploy and generally work, version 0.4.0 introduces some changes that are considered _breaking changes_. To upgrade, you should do the following to avoid any potential conflicts or problems:
- Add the `mountVarLogDirectory` and `mountDockerContainersDirectory` values and set them to the values you need; to follow the previous setup where these were mounted by default, set the values to `true`, e.g. `mountVarLogDirectory: true`
- If you have the `varlog` mount point defined and enabled under both `volumes` and `volumeMounts`, set `mountVarLogDirectory` to true
- If you have the `varlibdockercontainers` mount point defined and enabled under both `volumes` and `volumeMounts`, set `mountDockerContainersDirectory` to true
- Remove the previous default volume and volume mount definitions - `etcfluentd-main`, `etcfluentd-config`, `varlog`, and `varlibdockercontainers`
- Remove the `FLUENTD_CONF` entry from the `env:` list (a sketch of the migrated values follows this list)
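A minimal sketch of the migrated values, assuming you previously relied on the chart's default mounts:

```yaml
# values.yaml after upgrading to chart 0.4.0 (illustrative)
mountVarLogDirectory: true            # replaces the old varlog volume/volumeMount pair
mountDockerContainersDirectory: true  # replaces varlibdockercontainers
volumes: []                           # previous default volume definitions removed
volumeMounts: []
env: []                               # FLUENTD_CONF is set by the chart itself
```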
## Chart Values
```sh
helm show values fluent/fluentd
```
## Value Details
### default-volumes
The default configurations below are required for the fluentd pod to be able to read the host's container logs. The second section allows the user to load the "extra" configMaps, either defined by the objects contained in `fileConfigs` or, in addition, loaded externally and referenced by `configMapConfigs`.
```yaml
- name: varlog
hostPath:
path: /var/log
- name: varlibdockercontainers
hostPath:
path: /var/lib/docker/containers
---
- name: etcfluentd-main
configMap:
name: fluentd-main
defaultMode: 0777
- name: etcfluentd-config
configMap:
name: fluentd-config
defaultMode: 0777
```
### default-volumeMounts
The default configurations below are required for the fluentd pod to be able to read the host's container logs. They should not be removed unless, for some reason, your container logs are accessible through a different path.
```yaml
- name: varlog
mountPath: /var/log
- name: varlibdockercontainers
mountPath: /var/lib/docker/containers
readOnly: true
```
The section below allows the user to load the "extra" configMaps, either defined by the objects contained in `fileConfigs` or loaded externally and referenced by `configMapConfigs`.
```yaml
- name: etcfluentd-main
mountPath: /etc/fluent
- name: etcfluentd-config
mountPath: /etc/fluent/config.d/
```
### default-fluentdConfig
The `fileConfigs` section is organized as sources -> filters -> destinations. Flow control must be configured using fluentd routing with tags or labels to guarantee that the configurations are executed as intended. Alternatively, you can number your files to control the order in which the configurations are loaded.
```yaml
01_sources.conf: |-
<source>
@type tail
@id in_tail_container_logs
@label @KUBERNETES
path /var/log/containers/*.log
pos_file /var/log/fluentd-containers.log.pos
tag kubernetes.*
read_from_head true
<parse>
@type multi_format
<pattern>
format json
time_key time
time_type string
time_format "%Y-%m-%dT%H:%M:%S.%NZ"
keep_time_key false
</pattern>
<pattern>
format regexp
expression /^(?<time>.+) (?<stream>stdout|stderr)( (.))? (?<log>.*)$/
time_format '%Y-%m-%dT%H:%M:%S.%NZ'
keep_time_key false
</pattern>
</parse>
emit_unmatched_lines true
</source>
02_filters.conf: |-
<label @KUBERNETES>
<match kubernetes.var.log.containers.fluentd**>
@type relabel
@label @FLUENT_LOG
</match>
# <match kubernetes.var.log.containers.**_kube-system_**>
# @type null
# @id ignore_kube_system_logs
# </match>
<filter kubernetes.**>
@type record_transformer
enable_ruby
<record>
hostname ${record["kubernetes"]["host"]}
raw ${record["log"]}
</record>
remove_keys $.kubernetes.host,log
</filter>
<match **>
@type relabel
@label @DISPATCH
</match>
</label>
03_dispatch.conf: |-
<label @DISPATCH>
<filter **>
@type prometheus
<metric>
name fluentd_input_status_num_records_total
type counter
desc The total number of incoming records
<labels>
tag ${tag}
hostname ${hostname}
</labels>
</metric>
</filter>
<match **>
@type relabel
@label @OUTPUT
</match>
</label>
04_outputs.conf: |-
<label @OUTPUT>
<match **>
@type elasticsearch
host "elasticsearch-master"
port 9200
path ""
user elastic
password changeme
</match>
</label>
```
## Backwards Compatibility - v0.1.x
The old fluentd chart used environment variables and the default fluentd container definitions to automatically set up many aspects of fluentd. It is still possible to trigger this behaviour by removing this chart's current `.Values.env` configuration and replacing it with:
```yaml
env:
- name: FLUENT_ELASTICSEARCH_HOST
value: "elasticsearch-master"
- name: FLUENT_ELASTICSEARCH_PORT
value: "9200"
```

File diff suppressed because it is too large


@ -0,0 +1,5 @@
Get Fluentd build information by running these commands:
export POD_NAME=$(kubectl get pods --namespace {{ .Release.Namespace }} -l "app.kubernetes.io/name={{ include "fluentd.name" . }},app.kubernetes.io/instance={{ .Release.Name }}" -o jsonpath="{.items[0].metadata.name}")
kubectl --namespace {{ .Release.Namespace }} port-forward $POD_NAME 24231:24231
curl http://127.0.0.1:24231/metrics


@ -0,0 +1,104 @@
{{/* vim: set filetype=mustache: */}}
{{/*
Expand the name of the chart.
*/}}
{{- define "fluentd.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "fluentd.fullname" -}}
{{- if .Values.fullnameOverride -}}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- if contains $name .Release.Name -}}
{{- .Release.Name | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- end -}}
{{- end -}}
{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "fluentd.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Common labels
*/}}
{{- define "fluentd.labels" -}}
helm.sh/chart: {{ include "fluentd.chart" . }}
{{ include "fluentd.selectorLabels" . }}
{{- if .Chart.AppVersion }}
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
{{- end }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- end -}}
{{/*
Selector labels
*/}}
{{- define "fluentd.selectorLabels" -}}
app.kubernetes.io/name: {{ include "fluentd.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end -}}
{{/*
Create the name of the service account to use
*/}}
{{- define "fluentd.serviceAccountName" -}}
{{- if .Values.serviceAccount.create -}}
{{ default (include "fluentd.fullname" .) .Values.serviceAccount.name }}
{{- else -}}
{{ default "default" .Values.serviceAccount.name }}
{{- end -}}
{{- end -}}
{{/*
Shortened version of the releaseName, applied as a suffix to numerous resources.
*/}}
{{- define "fluentd.shortReleaseName" -}}
{{- .Release.Name | trunc 35 | trimSuffix "-" -}}
{{- end -}}
{{/*
Name of the configMap used for the fluentd.conf configuration file; allows users to override the default.
*/}}
{{- define "fluentd.mainConfigMapName" -}}
{{- if .Values.mainConfigMapNameOverride -}}
{{ .Values.mainConfigMapNameOverride }}
{{- else -}}
{{ printf "%s-%s" "fluentd-main" ( include "fluentd.shortReleaseName" . ) }}
{{- end -}}
{{- end -}}
{{/*
Name of the configMap used for additional configuration files; allows users to override the default.
*/}}
{{- define "fluentd.extraFilesConfigMapName" -}}
{{- if .Values.extraFilesConfigMapNameOverride -}}
{{ printf "%s" .Values.extraFilesConfigMapNameOverride }}
{{- else -}}
{{ printf "%s-%s" "fluentd-config" ( include "fluentd.shortReleaseName" . ) }}
{{- end -}}
{{- end -}}
{{/*
HPA apiVersion according to the k8s version.
Check the legacy API first so that helm template / kustomize defaults to the latest version.
*/}}
{{- define "fluentd.hpa.apiVersion" -}}
{{- if and (.Capabilities.APIVersions.Has "autoscaling/v2beta2") (semverCompare "<1.23-0" .Capabilities.KubeVersion.GitVersion) -}}
autoscaling/v2beta2
{{- else -}}
autoscaling/v2
{{- end -}}
{{- end -}}


@ -0,0 +1,130 @@
{{- define "fluentd.pod" -}}
{{- $defaultTag := printf "%s-debian-%s-1.0" (.Chart.AppVersion) (.Values.variant) -}}
{{- with .Values.imagePullSecrets }}
imagePullSecrets:
{{- toYaml . | nindent 2 }}
{{- end }}
{{- if .Values.priorityClassName }}
priorityClassName: {{ .Values.priorityClassName }}
{{- end }}
serviceAccountName: {{ include "fluentd.serviceAccountName" . }}
securityContext:
{{- toYaml .Values.podSecurityContext | nindent 2 }}
{{- with .Values.terminationGracePeriodSeconds }}
terminationGracePeriodSeconds: {{ . }}
{{- end }}
{{- with .Values.initContainers }}
initContainers:
{{- toYaml . | nindent 2 }}
{{- end }}
containers:
- name: {{ .Chart.Name }}
securityContext:
{{- toYaml .Values.securityContext | nindent 6 }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default $defaultTag }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
{{- if .Values.plugins }}
command:
- "/bin/sh"
- "-c"
- |
{{- range $plugin := .Values.plugins }}
{{- print "fluent-gem install " $plugin | nindent 6 }}
{{- end }}
exec /fluentd/entrypoint.sh
{{- end }}
env:
- name: FLUENTD_CONF
value: "../../../etc/fluent/fluent.conf"
{{- if .Values.env }}
{{- toYaml .Values.env | nindent 4 }}
{{- end }}
{{- if .Values.envFrom }}
envFrom:
{{- toYaml .Values.envFrom | nindent 4 }}
{{- end }}
ports:
- name: metrics
containerPort: 24231
protocol: TCP
{{- range $port := .Values.service.ports }}
- name: {{ $port.name }}
containerPort: {{ $port.containerPort }}
protocol: {{ $port.protocol }}
{{- end }}
{{- with .Values.lifecycle }}
lifecycle:
{{- toYaml . | nindent 6 }}
{{- end }}
livenessProbe:
{{- toYaml .Values.livenessProbe | nindent 6 }}
readinessProbe:
{{- toYaml .Values.readinessProbe | nindent 6 }}
resources:
{{- toYaml .Values.resources | nindent 8 }}
volumeMounts:
- name: etcfluentd-main
mountPath: /etc/fluent
- name: etcfluentd-config
mountPath: /etc/fluent/config.d/
{{- if .Values.mountVarLogDirectory }}
- name: varlog
mountPath: /var/log
{{- end }}
{{- if .Values.mountDockerContainersDirectory }}
- name: varlibdockercontainers
mountPath: /var/lib/docker/containers
readOnly: true
{{- end }}
{{- if .Values.volumeMounts -}}
{{- toYaml .Values.volumeMounts | nindent 4 }}
{{- end -}}
{{- range $key := .Values.configMapConfigs }}
{{- print "- name: " $key | nindent 4 }}
{{- print "mountPath: /etc/fluent/" $key ".d" | nindent 6 }}
{{- end }}
{{- if .Values.persistence.enabled }}
- mountPath: /var/log/fluent
name: {{ include "fluentd.fullname" . }}-buffer
{{- end }}
volumes:
- name: etcfluentd-main
configMap:
name: {{ include "fluentd.mainConfigMapName" . }}
defaultMode: 0777
- name: etcfluentd-config
configMap:
name: {{ include "fluentd.extraFilesConfigMapName" . }}
defaultMode: 0777
{{- if .Values.mountVarLogDirectory }}
- name: varlog
hostPath:
path: /var/log
{{- end }}
{{- if .Values.mountDockerContainersDirectory }}
- name: varlibdockercontainers
hostPath:
path: /var/lib/docker/containers
{{- end }}
{{- if .Values.volumes -}}
{{- toYaml .Values.volumes | nindent 0 }}
{{- end -}}
{{- range $key := .Values.configMapConfigs }}
{{- print "- name: " $key | nindent 0 }}
configMap:
{{- print "name: " $key "-" ( include "fluentd.shortReleaseName" $ ) | nindent 4 }}
defaultMode: 0777
{{- end }}
{{- with .Values.nodeSelector }}
nodeSelector:
{{- toYaml . | nindent 2 }}
{{- end }}
{{- with .Values.affinity }}
affinity:
{{- toYaml . | nindent 2 }}
{{- end }}
{{- with .Values.tolerations }}
tolerations:
{{- toYaml . | nindent 2 }}
{{- end }}
{{- end -}}


@ -0,0 +1,28 @@
{{- if .Values.rbac.create -}}
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: {{ include "fluentd.fullname" . }}
labels:
{{- include "fluentd.labels" . | nindent 4 }}
rules:
- apiGroups:
- ""
resources:
- pods
- namespaces
verbs:
- get
- list
- watch
{{- if and .Values.podSecurityPolicy.enabled (semverCompare "<1.25-0" .Capabilities.KubeVersion.GitVersion) }}
- apiGroups:
- policy
resourceNames:
- {{ include "fluentd.fullname" . }}
resources:
- podsecuritypolicies
verbs:
- use
{{- end }}
{{- end -}}


@ -0,0 +1,16 @@
{{- if .Values.rbac.create -}}
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: {{ include "fluentd.fullname" . }}
labels:
{{- include "fluentd.labels" . | nindent 4 }}
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: {{ include "fluentd.fullname" . }}
subjects:
- kind: ServiceAccount
name: {{ include "fluentd.serviceAccountName" . }}
namespace: {{ .Release.Namespace }}
{{- end -}}


@ -0,0 +1,18 @@
{{- if .Values.dashboards.enabled -}}
{{- range $path, $_ := .Files.Glob "dashboards/*.json" }}
apiVersion: v1
kind: ConfigMap
metadata:
name: dashboard-{{ trimSuffix ".json" (base $path) }}-{{ include "fluentd.shortReleaseName" $ }}
namespace: {{ $.Values.dashboards.namespace | default $.Release.Namespace }}
labels:
{{- include "fluentd.labels" $ | nindent 4 }}
{{- range $key, $val := $.Values.dashboards.labels }}
{{ $key }}: {{ $val }}
{{- end }}
data:
{{ base $path }}: |-
{{- $.Files.Get $path | nindent 4 }}
---
{{- end }}
{{- end -}}


@ -0,0 +1,40 @@
{{- if eq .Values.kind "DaemonSet" }}
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: {{ include "fluentd.fullname" . }}
labels:
{{- include "fluentd.labels" . | nindent 4 }}
{{- with .Values.labels }}
{{- toYaml . | nindent 4 }}
{{- end }}
{{- with .Values.annotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
spec:
selector:
matchLabels:
{{- include "fluentd.selectorLabels" . | nindent 6 }}
{{- with .Values.updateStrategy }}
updateStrategy:
{{- toYaml . | nindent 4 }}
{{- end }}
{{- with .Values.minReadySeconds }}
minReadySeconds: {{ . }}
{{- end }}
template:
metadata:
annotations:
checksum/config: {{ include (print $.Template.BasePath "/fluentd-configurations-cm.yaml") . | sha256sum }}
{{- with .Values.podAnnotations }}
{{- toYaml . | nindent 8 }}
{{- end }}
labels:
{{- include "fluentd.selectorLabels" . | nindent 8 }}
{{- with .Values.podLabels }}
{{- toYaml . | nindent 8 }}
{{- end }}
spec:
{{- include "fluentd.pod" . | nindent 6 }}
{{- end }}


@ -0,0 +1,41 @@
{{- if eq .Values.kind "Deployment" }}
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ include "fluentd.fullname" . }}
labels:
{{- include "fluentd.labels" . | nindent 4 }}
{{- with .Values.labels }}
{{- toYaml . | nindent 4 }}
{{- end }}
{{- with .Values.annotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
spec:
replicas: {{ .Values.replicaCount }}
{{- with .Values.updateStrategy }}
strategy:
{{- toYaml . | nindent 4 }}
{{- end }}
selector:
matchLabels:
{{- include "fluentd.selectorLabels" . | nindent 6 }}
{{- with .Values.minReadySeconds }}
minReadySeconds: {{ . }}
{{- end }}
template:
metadata:
annotations:
checksum/config: {{ include (print $.Template.BasePath "/fluentd-configurations-cm.yaml") . | sha256sum }}
{{- with .Values.podAnnotations }}
{{- toYaml . | nindent 8 }}
{{- end }}
labels:
{{- include "fluentd.selectorLabels" . | nindent 8 }}
{{- with .Values.podLabels }}
{{- toYaml . | nindent 8 }}
{{- end }}
spec:
{{- include "fluentd.pod" . | nindent 6 }}
{{- end }}


@ -0,0 +1,25 @@
apiVersion: v1
kind: ConfigMap
metadata:
labels:
{{- include "fluentd.labels" . | nindent 4 }}
name: fluentd-prometheus-conf-{{ include "fluentd.shortReleaseName" . }}
data:
prometheus.conf: |-
<source>
@type prometheus
@id in_prometheus
bind "0.0.0.0"
port 24231
metrics_path "/metrics"
</source>
<source>
@type prometheus_monitor
@id in_prometheus_monitor
</source>
<source>
@type prometheus_output_monitor
@id in_prometheus_output_monitor
</source>


@ -0,0 +1,38 @@
{{- if not .Values.extraFilesConfigMapNameOverride }}
---
apiVersion: v1
kind: ConfigMap
metadata:
name: fluentd-config-{{ include "fluentd.shortReleaseName" . }}
labels:
{{- include "fluentd.labels" . | nindent 4 }}
data:
{{- range $key, $value := .Values.fileConfigs }}
{{$key }}: |-
{{- $value | nindent 4 }}
{{- end }}
{{- end }}
{{- if not .Values.mainConfigMapNameOverride }}
---
apiVersion: v1
kind: ConfigMap
metadata:
name: fluentd-main-{{ include "fluentd.shortReleaseName" . }}
labels:
{{- include "fluentd.labels" . | nindent 4 }}
data:
fluent.conf: |-
# do not collect fluentd logs to avoid infinite loops.
<label @FLUENT_LOG>
<match **>
@type null
@id ignore_fluent_logs
</match>
</label>
@include config.d/*.conf
{{- range $key := .Values.configMapConfigs }}
{{- print "@include " $key ".d/*" | nindent 4 }}
{{- end }}
{{- end }}


@ -0,0 +1,39 @@
{{- if and ( eq .Values.kind "Deployment" ) .Values.autoscaling.enabled }}
apiVersion: {{ include "fluentd.hpa.apiVersion" . }}
kind: HorizontalPodAutoscaler
metadata:
name: {{ include "fluentd.fullname" . }}
labels:
{{- include "fluentd.labels" . | nindent 4 }}
spec:
{{- if .Values.autoscaling.behavior }}
behavior:
{{- toYaml .Values.autoscaling.behavior | nindent 4 }}
{{- end }}
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: {{ include "fluentd.fullname" . }}
minReplicas: {{ .Values.autoscaling.minReplicas }}
maxReplicas: {{ .Values.autoscaling.maxReplicas }}
metrics:
{{- if .Values.autoscaling.targetCPUUtilizationPercentage }}
- type: Resource
resource:
name: cpu
target:
averageUtilization: {{ .Values.autoscaling.targetCPUUtilizationPercentage }}
type: Utilization
{{- end }}
{{- if .Values.autoscaling.targetMemoryUtilizationPercentage }}
- type: Resource
resource:
name: memory
target:
averageUtilization: {{ .Values.autoscaling.targetMemoryUtilizationPercentage }}
type: Utilization
{{- end }}
{{- if .Values.autoscaling.customRules -}}
{{- toYaml .Values.autoscaling.customRules | nindent 4}}
{{- end -}}
{{- end }}


@ -0,0 +1,44 @@
{{- if .Values.ingress.enabled -}}
{{- $fullName := include "fluentd.fullname" . -}}
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: {{ include "fluentd.fullname" . }}
labels:
{{- include "fluentd.labels" . | nindent 4 }}
{{- with .Values.labels }}
{{- toYaml . | nindent 4 }}
{{- end }}
{{- with .Values.ingress.annotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
spec:
{{- if .Values.ingress.tls }}
tls:
{{- range .Values.ingress.tls }}
- hosts:
{{- range .hosts }}
- {{ . | quote }}
{{- end }}
{{- with .secretName }}
secretName: {{ . }}
{{- end }}
{{- end }}
{{- end }}
rules:
{{- range .Values.ingress.hosts }}
- http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: {{ $fullName }}
port:
number: {{ .port }}
{{ if .host -}}
host: {{ .host | quote }}
{{- end -}}
{{- end -}}
{{- end -}}


@ -0,0 +1,42 @@
{{- if and .Values.podSecurityPolicy.enabled (semverCompare "<1.25-0" .Capabilities.KubeVersion.GitVersion) -}}
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
name: {{ include "fluentd.fullname" . }}
labels:
{{- include "fluentd.labels" . | nindent 4 }}
{{- if .Values.podSecurityPolicy.annotations }}
annotations:
{{ toYaml .Values.podSecurityPolicy.annotations | indent 4 }}
{{- end }}
spec:
privileged: false
allowPrivilegeEscalation: false
requiredDropCapabilities:
- ALL
hostNetwork: false
hostIPC: false
hostPID: false
volumes:
- 'configMap'
- 'secret'
- 'hostPath'
{{- if .Values.persistence.enabled }}
- 'persistentVolumeClaim'
{{- end }}
runAsUser:
rule: 'RunAsAny'
seLinux:
rule: 'RunAsAny'
supplementalGroups:
rule: 'MustRunAs'
ranges:
- min: 1
max: 65535
fsGroup:
rule: 'MustRunAs'
ranges:
- min: 1
max: 65535
readOnlyRootFilesystem: false
{{- end }}


@ -0,0 +1,21 @@
{{- if and ( .Capabilities.APIVersions.Has "monitoring.coreos.com/v1" ) .Values.metrics.prometheusRule.enabled }}
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
name: {{ template "fluentd.fullname" . }}
{{- if .Values.metrics.prometheusRule.namespace }}
namespace: {{ .Values.metrics.prometheusRule.namespace }}
{{- end }}
labels:
{{- include "fluentd.labels" . | nindent 4 }}
{{- with .Values.metrics.prometheusRule.additionalLabels }}
{{- toYaml . | nindent 4 }}
{{- end }}
spec:
{{- with .Values.metrics.prometheusRule.rules }}
groups:
- name: {{ template "fluentd.fullname" $ }}
rules:
{{- toYaml . | nindent 4 }}
{{- end }}
{{- end }}


@ -0,0 +1,35 @@
{{- if .Values.service.enabled -}}
apiVersion: v1
kind: Service
metadata:
name: {{ include "fluentd.fullname" . }}
labels:
{{- include "fluentd.labels" . | nindent 4 }}
{{- with .Values.service.annotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
spec:
type: {{ .Values.service.type }}
{{- if .Values.service.loadBalancerIP }}
loadBalancerIP: {{ .Values.service.loadBalancerIP }}
{{- end }}
{{- if .Values.service.externalTrafficPolicy }}
externalTrafficPolicy: {{ .Values.service.externalTrafficPolicy }}
{{- end }}
ports:
- port: 24231
targetPort: metrics
protocol: TCP
name: metrics
{{- if .Values.service.ports }}
{{- range $port := .Values.service.ports }}
- name: {{ $port.name }}
port: {{ $port.containerPort }}
targetPort: {{ $port.containerPort }}
protocol: {{ $port.protocol }}
{{- end }}
{{- end }}
selector:
{{- include "fluentd.selectorLabels" . | nindent 4 }}
{{- end -}}


@ -0,0 +1,12 @@
{{- if .Values.serviceAccount.create -}}
apiVersion: v1
kind: ServiceAccount
metadata:
name: {{ include "fluentd.serviceAccountName" . }}
labels:
{{- include "fluentd.labels" . | nindent 4 }}
{{- with .Values.serviceAccount.annotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
{{- end -}}


@ -0,0 +1,44 @@
{{- if and ( .Capabilities.APIVersions.Has "monitoring.coreos.com/v1" ) .Values.metrics.serviceMonitor.enabled }}
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
name: {{ template "fluentd.fullname" . }}
{{- with .Values.metrics.serviceMonitor.namespace }}
namespace: {{ . }}
{{- end }}
labels:
{{- include "fluentd.labels" . | nindent 4 }}
{{- with .Values.metrics.serviceMonitor.additionalLabels }}
{{- toYaml . | nindent 4 }}
{{- end }}
spec:
jobLabel: {{ .Values.metrics.serviceMonitor.jobLabel | default .Release.Name }}
endpoints:
- port: metrics
path: /metrics
{{- with .Values.metrics.serviceMonitor.interval }}
interval: {{ . }}
{{- end }}
{{- with .Values.metrics.serviceMonitor.scrapeTimeout }}
scrapeTimeout: {{ . }}
{{- end }}
{{- if .Values.metrics.serviceMonitor.metricRelabelings }}
metricRelabelings:
{{ tpl (toYaml .Values.metrics.serviceMonitor.metricRelabelings | indent 6) . }}
{{- end }}
{{- if .Values.metrics.serviceMonitor.relabelings }}
relabelings:
{{ toYaml .Values.metrics.serviceMonitor.relabelings | indent 6 }}
{{- end }}
{{- if .Values.metrics.serviceMonitor.namespaceSelector }}
namespaceSelector:
{{ toYaml .Values.metrics.serviceMonitor.namespaceSelector | indent 4 -}}
{{ else }}
namespaceSelector:
matchNames:
- {{ .Release.Namespace }}
{{- end }}
selector:
matchLabels:
{{- include "fluentd.selectorLabels" . | nindent 6 }}
{{- end }}


@ -0,0 +1,55 @@
{{- if eq .Values.kind "StatefulSet" }}
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: {{ include "fluentd.fullname" . }}
labels:
{{- include "fluentd.labels" . | nindent 4 }}
{{- with .Values.labels }}
{{- toYaml . | nindent 4 }}
{{- end }}
{{- with .Values.annotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
spec:
replicas: {{ .Values.replicaCount }}
serviceName: {{ include "fluentd.fullname" . }}
{{- with .Values.updateStrategy }}
updateStrategy:
{{- toYaml . | nindent 4 }}
{{- end }}
selector:
matchLabels:
{{- include "fluentd.selectorLabels" . | nindent 6 }}
{{- with .Values.minReadySeconds }}
minReadySeconds: {{ . }}
{{- end }}
template:
metadata:
annotations:
checksum/config: {{ include (print $.Template.BasePath "/fluentd-configurations-cm.yaml") . | sha256sum }}
{{- with .Values.podAnnotations }}
{{- toYaml . | nindent 8 }}
{{- end }}
labels:
{{- include "fluentd.selectorLabels" . | nindent 8 }}
{{- with .Values.podLabels }}
{{- toYaml . | nindent 8 }}
{{- end }}
spec:
{{- include "fluentd.pod" . | nindent 6 }}
{{- if .Values.persistence.enabled }}
volumeClaimTemplates:
- metadata:
name: {{ include "fluentd.fullname" . }}-buffer
spec:
accessModes: [{{ .Values.persistence.accessMode }}]
resources:
requests:
storage: {{ .Values.persistence.size }}
storageClassName: {{ .Values.persistence.storageClass }}
{{- end }}
{{- end }}


@ -0,0 +1,29 @@
{{/*
Target the very simple case where
fluentd is deployed with the default values.
If the fluentd config is overridden and the metrics server removed,
this will fail.
*/}}
{{ if empty .Values.service.ports }}
apiVersion: v1
kind: Pod
metadata:
name: "{{ include "fluentd.fullname" . }}-test-connection"
labels:
{{- include "fluentd.labels" . | nindent 4 }}
annotations:
"helm.sh/hook": test-success
spec:
containers:
- name: wget
image: busybox
command:
- sh
- -c
- |
set -e
# Give fluentd some time to start up
while :; do nc -vz {{ include "fluentd.fullname" . }}:24231 && break; sleep 1; done
wget '{{ include "fluentd.fullname" . }}:24231/metrics'
restartPolicy: Never
{{ end }}
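A usage sketch for this connection test, assuming the chart was installed as release `fluentd` in namespace `logging`:

```sh
# Runs the pod above; succeeds once the metrics endpoint answers on port 24231
helm test fluentd --namespace logging
```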


@ -0,0 +1,403 @@
nameOverride: ""
fullnameOverride: ""
# DaemonSet, Deployment or StatefulSet
kind: "DaemonSet"
# azureblob, cloudwatch, elasticsearch7, elasticsearch8, gcs, graylog, kafka, kafka2, kinesis, opensearch
variant: elasticsearch7
# # Only applicable for Deployment or StatefulSet
# replicaCount: 1
image:
repository: "fluent/fluentd-kubernetes-daemonset"
pullPolicy: "IfNotPresent"
tag: ""
## Optional array of imagePullSecrets containing private registry credentials
## Ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
imagePullSecrets: []
serviceAccount:
create: true
annotations: {}
name: null
rbac:
create: true
# from Kubernetes 1.25, PSP is deprecated
# See: https://kubernetes.io/blog/2022/08/23/kubernetes-v1-25-release/#pod-security-changes
# We automatically disable PSP if Kubernetes version is 1.25 or higher
podSecurityPolicy:
enabled: true
annotations: {}
## Security Context policies for controller pods
## See https://kubernetes.io/docs/tasks/administer-cluster/sysctl-cluster/ for
## notes on enabling and using sysctls
##
podSecurityContext: {}
# seLinuxOptions:
# type: "spc_t"
securityContext: {}
# capabilities:
# drop:
# - ALL
# readOnlyRootFilesystem: true
# runAsNonRoot: true
# runAsUser: 1000
# Configure the lifecycle
# Ref: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/
lifecycle: {}
# preStop:
# exec:
# command: ["/bin/sh", "-c", "sleep 20"]
# Configure the livenessProbe
# Ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/
livenessProbe:
httpGet:
path: /metrics
port: metrics
# initialDelaySeconds: 0
# periodSeconds: 10
# timeoutSeconds: 1
# successThreshold: 1
# failureThreshold: 3
# Configure the readinessProbe
# Ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/
readinessProbe:
httpGet:
path: /metrics
port: metrics
# initialDelaySeconds: 0
# periodSeconds: 10
# timeoutSeconds: 1
# successThreshold: 1
# failureThreshold: 3
resources: {}
# requests:
# cpu: 10m
# memory: 128Mi
# limits:
# memory: 128Mi
## only available if kind is Deployment
autoscaling:
enabled: false
minReplicas: 1
maxReplicas: 100
targetCPUUtilizationPercentage: 80
# targetMemoryUtilizationPercentage: 80
## see https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/#autoscaling-on-multiple-metrics-and-custom-metrics
customRules: []
# - type: Pods
# pods:
# metric:
# name: packets-per-second
# target:
# type: AverageValue
# averageValue: 1k
## see https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#support-for-configurable-scaling-behavior
# behavior:
# scaleDown:
# policies:
# - type: Pods
# value: 4
# periodSeconds: 60
# - type: Percent
# value: 10
# periodSeconds: 60
# priorityClassName: "system-node-critical"
nodeSelector: {}
## Node tolerations for server scheduling to nodes with taints
## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/
##
tolerations: []
# - key: null
# operator: Exists
# effect: "NoSchedule"
## Affinity and anti-affinity
## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
##
affinity: {}
## Annotations to be added to fluentd DaemonSet/Deployment
##
annotations: {}
## Labels to be added to fluentd DaemonSet/Deployment
##
labels: {}
## Annotations to be added to fluentd pods
##
podAnnotations: {}
## Labels to be added to fluentd pods
##
podLabels: {}
## How long (in seconds) a pod needs to be stable before progressing the deployment
##
minReadySeconds:
## How long (in seconds) a pod may take to exit (useful with lifecycle hooks to ensure lb deregistration is done)
##
terminationGracePeriodSeconds:
## Deployment strategy / DaemonSet updateStrategy
##
updateStrategy: {}
# type: RollingUpdate
# rollingUpdate:
# maxUnavailable: 1
## Additional environment variables to set for fluentd pods
env: []
# - name: "FLUENTD_CONF"
# value: "../../../etc/fluent/fluent.conf"
# - name: FLUENT_ELASTICSEARCH_HOST
# value: "elasticsearch-master"
# - name: FLUENT_ELASTICSEARCH_PORT
# value: "9200"
envFrom: []
initContainers: []
## Name of the configMap containing a custom fluentd.conf configuration file to use instead of the default.
# mainConfigMapNameOverride: ""
## Name of the configMap containing files to be placed under /etc/fluent/config.d/
## NOTE: This will replace ALL default files in the aforementioned path!
# extraFilesConfigMapNameOverride: ""
mountVarLogDirectory: true
mountDockerContainersDirectory: true
volumes: []
volumeMounts: []
## Only available if kind is StatefulSet
## Fluentd persistence
##
persistence:
enabled: false
storageClass: ""
accessMode: ReadWriteOnce
size: 10Gi
## Fluentd service
##
service:
enabled: true
type: "ClusterIP"
annotations: {}
# loadBalancerIP:
# externalTrafficPolicy: Local
ports: []
# - name: "forwarder"
# protocol: TCP
# containerPort: 24224
## Prometheus Monitoring
##
metrics:
serviceMonitor:
enabled: false
additionalLabels:
release: prometheus-operator
namespace: ""
namespaceSelector: {}
## metric relabel configs to apply to samples before ingestion.
##
metricRelabelings: []
# - sourceLabels: [__name__]
# separator: ;
# regex: ^fluentd_output_status_buffer_(oldest|newest)_.+
# replacement: $1
# action: drop
## relabel configs to apply to samples after ingestion.
##
relabelings: []
# - sourceLabels: [__meta_kubernetes_pod_node_name]
# separator: ;
# regex: ^(.*)$
# targetLabel: nodename
# replacement: $1
# action: replace
## Additional serviceMonitor config
##
# jobLabel: fluentd
# scrapeInterval: 30s
# scrapeTimeout: 5s
# honorLabels: true
prometheusRule:
enabled: false
additionalLabels: {}
namespace: ""
rules: []
# - alert: FluentdDown
# expr: up{job="fluentd"} == 0
# for: 5m
# labels:
# context: fluentd
# severity: warning
# annotations:
# summary: "Fluentd Down"
# description: "{{ $labels.pod }} on {{ $labels.nodename }} is down"
# - alert: FluentdScrapeMissing
# expr: absent(up{job="fluentd"} == 1)
# for: 15m
# labels:
# context: fluentd
# severity: warning
# annotations:
# summary: "Fluentd Scrape Missing"
# description: "Fluentd instance has disappeared from Prometheus target discovery"
## Grafana Monitoring Dashboard
##
dashboards:
enabled: "true"
namespace: ""
labels:
grafana_dashboard: '"1"'
## Fluentd list of plugins to install
##
plugins: []
# - fluent-plugin-out-http
## Add fluentd config files from K8s configMaps
##
configMapConfigs: []
# - fluentd-prometheus-conf
# - fluentd-systemd-conf
## Fluentd configurations:
##
fileConfigs:
01_sources.conf: |-
## logs from podman
<source>
@type tail
@id in_tail_container_logs
@label @KUBERNETES
path /var/log/containers/*.log
pos_file /var/log/fluentd-containers.log.pos
tag kubernetes.*
read_from_head true
<parse>
@type multi_format
<pattern>
format json
time_key time
time_type string
time_format "%Y-%m-%dT%H:%M:%S.%NZ"
keep_time_key false
</pattern>
<pattern>
format regexp
expression /^(?<time>.+) (?<stream>stdout|stderr)( (.))? (?<log>.*)$/
time_format '%Y-%m-%dT%H:%M:%S.%NZ'
keep_time_key false
</pattern>
</parse>
emit_unmatched_lines true
</source>
# expose metrics in prometheus format
<source>
@type prometheus
bind 0.0.0.0
port 24231
metrics_path /metrics
</source>
02_filters.conf: |-
<label @KUBERNETES>
<match kubernetes.var.log.containers.fluentd**>
@type relabel
@label @FLUENT_LOG
</match>
# <match kubernetes.var.log.containers.**_kube-system_**>
# @type null
# @id ignore_kube_system_logs
# </match>
<filter kubernetes.**>
@type kubernetes_metadata
@id filter_kube_metadata
skip_labels false
skip_container_metadata false
skip_namespace_metadata true
skip_master_url true
</filter>
<match **>
@type relabel
@label @DISPATCH
</match>
</label>
03_dispatch.conf: |-
<label @DISPATCH>
<filter **>
@type prometheus
<metric>
name fluentd_input_status_num_records_total
type counter
desc The total number of incoming records
<labels>
tag ${tag}
hostname ${hostname}
</labels>
</metric>
</filter>
<match **>
@type relabel
@label @OUTPUT
</match>
</label>
04_outputs.conf: |-
<label @OUTPUT>
<match **>
@type elasticsearch
host "elasticsearch-master"
port 9200
path ""
user elastic
password changeme
# Don't wait for elastic to start up.
verify_es_version_at_startup false
</match>
</label>
ingress:
enabled: false
annotations: {}
# kubernetes.io/ingress.class: nginx
# kubernetes.io/tls-acme: "true"
hosts:
# - host: fluentd.example.tld
- port: 9880
tls: []
# - secretName: fluentd-tls
# hosts:
# - fluentd.example.tld


@ -12,3 +12,10 @@ dashboards:
tags:
- OpenSearch
- Telemetry
- name: fluent-logging
url: https://grafana.com/api/dashboards/7752/revisions/6/download
#url: https://grafana.com/api/dashboards/13042/revisions/2/download
tags:
- fluentd
- fluent-bit
- Telemetry

File diff suppressed because one or more lines are too long


@ -17,6 +17,14 @@ spec:
enable: {{ .Values.opensearch.prometheus }}
tlsConfig:
insecureSkipVerify: true
podSecurityContext:
runAsUser: 1000
runAsGroup: 1000
runAsNonRoot: true
fsGroup: 1000
securityContext:
allowPrivilegeEscalation: false
privileged: false
{{- if .Values.opensearch.dashboard.enabled }}
# https://github.com/opensearch-project/OpenSearch-Dashboards/blob/main/config/opensearch_dashboards.yml
dashboards:
@ -47,6 +55,10 @@ spec:
roles:
- "cluster_manager"
- "data"
{{- if gt (int .replicas) 1 }}
pdb:
enable: true
maxUnavailable: 1
topologySpreadConstraints:
- maxSkew: 1
topologyKey: kubernetes.io/hostname
@ -54,17 +66,21 @@ spec:
labelSelector:
matchLabels:
opster.io/opensearch-cluster: {{ template "kubezero-lib.fullname" $ }}
{{- end }}
additionalConfig:
index.codec: zstd_no_dict
indices.time_series_index.default_index_merge_policy: log_byte_size
{{- with .zone }}
cluster.routing.allocation.awareness.attributes: k8s_node_name,zone
node.attr.zone: {{ . }}
{{- end }}
{{- with $.Values.opensearch.settings }}
{{- toYaml . | nindent 8 }}
{{- end }}
{{- end }}
security:
config:
adminSecret:
name: {{ template "kubezero-lib.fullname" . }}-admin-tls
tls:
transport:


@ -3,7 +3,20 @@ set -ex
. ../../scripts/lib-update.sh
../kubezero-metrics/sync_grafana_dashboards.py dashboards.yaml templates/grafana-dashboards.yaml
#login_ecr_public
update_helm
#FLUENT_BIT_VERSION=$(yq eval '.dependencies[] | select(.name=="fluent-bit") | .version' Chart.yaml)
FLUENTD_VERSION=$(yq eval '.dependencies[] | select(.name=="fluentd") | .version' Chart.yaml)
# fluent-bit
#patch_chart fluent-bit
# FluentD
patch_chart fluentd
rm -f charts/fluentd/templates/files.conf/systemd.yaml
# Fetch dashboards from Grafana.com and update ZDT CM
../kubezero-metrics/sync_grafana_dashboards.py dashboards.yaml templates/grafana-dashboards.yaml
update_docs


@ -52,6 +52,10 @@ opensearch:
version: 2.11.1
prometheus: false
# custom cluster settings
#settings:
# index.number_of_shards: 1
nodeSets:
- name: default
replicas: 2


@ -18,6 +18,9 @@ jaeger:
http:
name: otlp-http
port: 4318
extraEnv:
- name: ES_TAGS_AS_FIELDS_ALL
value: "true"
serviceMonitor:
enabled: false
@ -49,9 +52,13 @@ jaeger:
url: jaeger.example.com
opensearch:
version: 2.12.0
version: 2.13.0
prometheus: false
# custom cluster settings
#settings:
# index.number_of_shards: 1
nodeSets: []
#- name: default-nodes
# replicas: 2


@ -2,7 +2,7 @@ apiVersion: v2
name: kubezero
description: KubeZero - Root App of Apps chart
type: application
version: 1.28.8
version: 1.28.9
home: https://kubezero.com
icon: https://cdn.zero-downtime.net/assets/kubezero/logo-small-64.png
keywords:


@ -57,10 +57,8 @@ argocd-apps:
server: https://kubernetes.default.svc
namespace: argocd
{{- with .Values.kubezero.syncPolicy }}
syncPolicy:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- toYaml (default dict .Values.kubezero.syncPolicy) | nindent 8 }}
argocd-image-updater:
enabled: {{ default "false" (index .Values "argo" "argocd-image-updater" "enabled") }}

Some files were not shown because too many files have changed in this diff.