First version of calico as helm chart

Stefan Reimer 2020-07-06 12:32:24 +01:00
parent 967f27baac
commit 3689f9aaeb
16 changed files with 3983 additions and 11 deletions

View File

@@ -24,3 +24,8 @@ Changes:
 - Set FELIX log level to warning
 - Enable Prometheus metrics
+## Prometheus
+See: https://grafana.com/grafana/dashboards/12175

View File

@@ -2,7 +2,7 @@ kubezero-argo-cd
 ================
 KubeZero ArgoCD Helm chart to install ArgoCD itself and the KubeZero ArgoCD Application
-Current chart version is `0.3.0`
+Current chart version is `0.3.1`
 Source code can be found [here](https://kubezero.com)
@@ -10,7 +10,7 @@ Source code can be found [here](https://kubezero.com)
 | Repository | Name | Version |
 |------------|------|---------|
-| https://argoproj.github.io/argo-helm | argo-cd | 2.3.2 |
+| https://argoproj.github.io/argo-helm | argo-cd | 2.5.0 |
 | https://zero-down-time.github.io/kubezero/ | kubezero-lib | >= 0.1.1 |
 ## Chart Values

View File

@@ -0,0 +1,17 @@
apiVersion: v2
name: kubezero-calico
description: KubeZero Umbrella Chart for Calico
type: application
version: 0.1.0
home: https://kubezero.com
icon: https://cdn.zero-downtime.net/assets/kubezero/logo-small-64.png
keywords:
- kubezero
- calico
maintainers:
- name: Quarky9
dependencies:
- name: kubezero-lib
version: ">= 0.1.1"
repository: https://zero-down-time.github.io/kubezero/
kubeVersion: ">= 1.16.0"

View File

@@ -0,0 +1,40 @@
kubezero-calico
===============
KubeZero Umbrella Chart for Calico
Current chart version is `0.1.0`
Source code can be found [here](https://kubezero.com)
## Chart Requirements
| Repository | Name | Version |
|------------|------|---------|
| https://zero-down-time.github.io/kubezero/ | kubezero-lib | >= 0.1.1 |
## KubeZero default configuration
## AWS
The setup is based on the upstream calico-vxlan config from
`https://docs.projectcalico.org/v3.15/manifests/calico-vxlan.yaml`
### Changes
- VxLAN set to Always to not expose cluster communication to VPC
-> EC2 SecurityGroups still apply and only need to allow UDP 4789 for VxLAN traffic
-> No need to disable source/destination check on EC2 instances
-> Prepared for optional WireGuard encryption for all inter node traffic
- MTU set to 8941
- Removed migration init-container
- Disable BGP and BIRD health checks
- Set FELIX log level to warning
## Resources
- Grafana Dashboard: https://grafana.com/grafana/dashboards/12175
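
These defaults are exposed as chart values (`network`, `mtu`, `loglevel`, `prometheus`, see `values.yaml`). A minimal sketch of overriding them from the parent KubeZero chart, assuming the `calico.values` nesting used by the kubezero app-of-apps template:

```yaml
calico:
  enabled: true
  values:
    # chart defaults shown here; adjust as needed
    network: vxlan
    mtu: 8941
    loglevel: Warning
    # also renders the metrics Service and ServiceMonitor
    prometheus: true
```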

View File

@@ -0,0 +1,35 @@
{{ template "chart.header" . }}
{{ template "chart.description" . }}
{{ template "chart.versionLine" . }}
{{ template "chart.sourceLinkLine" . }}
{{ template "chart.requirementsSection" . }}
## KubeZero default configuration
## AWS
The setup is based on the upstream calico-vxlan config from
`https://docs.projectcalico.org/v3.15/manifests/calico-vxlan.yaml`
### Changes
- VxLAN set to Always to not expose cluster communication to VPC
-> EC2 SecurityGroups still apply and only need to allow UDP 4789 for VxLAN traffic
-> No need to disable source/destination check on EC2 instances
-> Prepared for optional WireGuard encryption for all inter node traffic
- MTU set to 8941
- Removed migration init-container
- Disable BGP and BIRD health checks
- Set FELIX log level to warning
## Resources
- Grafana Dashboard: https://grafana.com/grafana/dashboards/12175

View File

@@ -0,0 +1,101 @@
--- calico-vxlan.yaml 2020-07-03 15:32:40.740506882 +0100
+++ calico.yaml 2020-07-03 15:27:47.651499841 +0100
@@ -10,13 +10,13 @@
# Typha is disabled.
typha_service_name: "none"
# Configure the backend to use.
- calico_backend: "bird"
+ calico_backend: "vxlan"
# Configure the MTU to use for workload interfaces and tunnels.
# - If Wireguard is enabled, set to your network MTU - 60
# - Otherwise, if VXLAN or BPF mode is enabled, set to your network MTU - 50
# - Otherwise, if IPIP is enabled, set to your network MTU - 20
# - Otherwise, if not using any encapsulation, set to your network MTU.
- veth_mtu: "1410"
+ veth_mtu: "8941"
# The CNI network configuration to install on each node. The special
# values in this config will be automatically populated.
@@ -3451,29 +3451,6 @@
terminationGracePeriodSeconds: 0
priorityClassName: system-node-critical
initContainers:
- # This container performs upgrade from host-local IPAM to calico-ipam.
- # It can be deleted if this is a fresh installation, or if you have already
- # upgraded to use calico-ipam.
- - name: upgrade-ipam
- image: calico/cni:v3.15.0
- command: ["/opt/cni/bin/calico-ipam", "-upgrade"]
- env:
- - name: KUBERNETES_NODE_NAME
- valueFrom:
- fieldRef:
- fieldPath: spec.nodeName
- - name: CALICO_NETWORKING_BACKEND
- valueFrom:
- configMapKeyRef:
- name: calico-config
- key: calico_backend
- volumeMounts:
- - mountPath: /var/lib/cni/networks
- name: host-local-net-dir
- - mountPath: /host/opt/cni/bin
- name: cni-bin-dir
- securityContext:
- privileged: true
# This container installs the CNI binaries
# and CNI network config file on each node.
- name: install-cni
@@ -3545,7 +3522,7 @@
key: calico_backend
# Cluster type to identify the deployment type
- name: CLUSTER_TYPE
- value: "k8s,bgp"
+ value: "k8s,kubeadm"
# Auto-detect the BGP IP address.
- name: IP
value: "autodetect"
@@ -3554,7 +3531,7 @@
value: "Never"
# Enable or Disable VXLAN on the default IP pool.
- name: CALICO_IPV4POOL_VXLAN
- value: "CrossSubnet"
+ value: "Always"
# Set MTU for tunnel device used if ipip is enabled
- name: FELIX_IPINIPMTU
valueFrom:
@@ -3595,9 +3572,17 @@
value: "false"
# Set Felix logging to "info"
- name: FELIX_LOGSEVERITYSCREEN
- value: "info"
+ value: "Warning"
+ - name: FELIX_LOGSEVERITYFILE
+ value: "Warning"
+ - name: FELIX_LOGSEVERITYSYS
+ value: ""
- name: FELIX_HEALTHENABLED
value: "true"
+ - name: FELIX_PROMETHEUSGOMETRICSENABLED
+ value: "false"
+ - name: FELIX_PROMETHEUSMETRICSENABLED
+ value: "true"
securityContext:
privileged: true
resources:
@@ -3608,7 +3593,6 @@
command:
- /bin/calico-node
- -felix-live
- - -bird-live
periodSeconds: 10
initialDelaySeconds: 10
failureThreshold: 6
@@ -3617,7 +3601,6 @@
command:
- /bin/calico-node
- -felix-ready
- - -bird-ready
periodSeconds: 10
volumeMounts:
- mountPath: /lib/modules

File diff suppressed because it is too large

View File

@@ -0,0 +1,620 @@
---
# Source: calico/templates/calico-config.yaml
# This ConfigMap is used to configure a self-hosted Calico installation.
kind: ConfigMap
apiVersion: v1
metadata:
name: calico-config
namespace: kube-system
data:
# Typha is disabled.
typha_service_name: "none"
# Configure the backend to use.
calico_backend: "{{ .Values.network }}"
# Configure the MTU to use for workload interfaces and tunnels.
# - If Wireguard is enabled, set to your network MTU - 60
# - Otherwise, if VXLAN or BPF mode is enabled, set to your network MTU - 50
# - Otherwise, if IPIP is enabled, set to your network MTU - 20
# - Otherwise, if not using any encapsulation, set to your network MTU.
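# KubeZero default: 8941 assumes AWS jumbo frames (MTU 9001) minus 60 bytes, leaving headroom for the optional WireGuard encapsulation.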
veth_mtu: "{{ .Values.mtu }}"
# The CNI network configuration to install on each node. The special
# values in this config will be automatically populated.
cni_network_config: |-
{
"name": "k8s-pod-network",
"cniVersion": "0.3.1",
"plugins": [
{
"type": "calico",
"log_level": "info",
"datastore_type": "kubernetes",
"nodename": "__KUBERNETES_NODE_NAME__",
"mtu": __CNI_MTU__,
"ipam": {
"type": "calico-ipam"
},
"policy": {
"type": "k8s"
},
"kubernetes": {
"kubeconfig": "__KUBECONFIG_FILEPATH__"
}
},
{
"type": "portmap",
"snat": true,
"capabilities": {"portMappings": true}
},
{
"type": "bandwidth",
"capabilities": {"bandwidth": true}
}
]
}
---
# Source: calico/templates/calico-kube-controllers-rbac.yaml
# Include a clusterrole for the kube-controllers component,
# and bind it to the calico-kube-controllers serviceaccount.
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: calico-kube-controllers
rules:
# Nodes are watched to monitor for deletions.
- apiGroups: [""]
resources:
- nodes
verbs:
- watch
- list
- get
# Pods are queried to check for existence.
- apiGroups: [""]
resources:
- pods
verbs:
- get
# IPAM resources are manipulated when nodes are deleted.
- apiGroups: ["crd.projectcalico.org"]
resources:
- ippools
verbs:
- list
- apiGroups: ["crd.projectcalico.org"]
resources:
- blockaffinities
- ipamblocks
- ipamhandles
verbs:
- get
- list
- create
- update
- delete
# kube-controllers manages hostendpoints.
- apiGroups: ["crd.projectcalico.org"]
resources:
- hostendpoints
verbs:
- get
- list
- create
- update
- delete
# Needs access to update clusterinformations.
- apiGroups: ["crd.projectcalico.org"]
resources:
- clusterinformations
verbs:
- get
- create
- update
# KubeControllersConfiguration is where it gets its config
- apiGroups: ["crd.projectcalico.org"]
resources:
- kubecontrollersconfigurations
verbs:
# read its own config
- get
# create a default if none exists
- create
# update status
- update
# watch for changes
- watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: calico-kube-controllers
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: calico-kube-controllers
subjects:
- kind: ServiceAccount
name: calico-kube-controllers
namespace: kube-system
---
---
# Source: calico/templates/calico-node-rbac.yaml
# Include a clusterrole for the calico-node DaemonSet,
# and bind it to the calico-node serviceaccount.
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: calico-node
rules:
# The CNI plugin needs to get pods, nodes, and namespaces.
- apiGroups: [""]
resources:
- pods
- nodes
- namespaces
verbs:
- get
- apiGroups: [""]
resources:
- endpoints
- services
verbs:
# Used to discover service IPs for advertisement.
- watch
- list
# Used to discover Typhas.
- get
# Pod CIDR auto-detection on kubeadm needs access to config maps.
- apiGroups: [""]
resources:
- configmaps
verbs:
- get
- apiGroups: [""]
resources:
- nodes/status
verbs:
# Needed for clearing NodeNetworkUnavailable flag.
- patch
# Calico stores some configuration information in node annotations.
- update
# Watch for changes to Kubernetes NetworkPolicies.
- apiGroups: ["networking.k8s.io"]
resources:
- networkpolicies
verbs:
- watch
- list
# Used by Calico for policy information.
- apiGroups: [""]
resources:
- pods
- namespaces
- serviceaccounts
verbs:
- list
- watch
# The CNI plugin patches pods/status.
- apiGroups: [""]
resources:
- pods/status
verbs:
- patch
# Calico monitors various CRDs for config.
- apiGroups: ["crd.projectcalico.org"]
resources:
- globalfelixconfigs
- felixconfigurations
- bgppeers
- globalbgpconfigs
- bgpconfigurations
- ippools
- ipamblocks
- globalnetworkpolicies
- globalnetworksets
- networkpolicies
- networksets
- clusterinformations
- hostendpoints
- blockaffinities
verbs:
- get
- list
- watch
# Calico must create and update some CRDs on startup.
- apiGroups: ["crd.projectcalico.org"]
resources:
- ippools
- felixconfigurations
- clusterinformations
verbs:
- create
- update
# Calico stores some configuration information on the node.
- apiGroups: [""]
resources:
- nodes
verbs:
- get
- list
- watch
# These permissions are only required for upgrade from v2.6, and can
# be removed after upgrade or on fresh installations.
- apiGroups: ["crd.projectcalico.org"]
resources:
- bgpconfigurations
- bgppeers
verbs:
- create
- update
# These permissions are required for Calico CNI to perform IPAM allocations.
- apiGroups: ["crd.projectcalico.org"]
resources:
- blockaffinities
- ipamblocks
- ipamhandles
verbs:
- get
- list
- create
- update
- delete
- apiGroups: ["crd.projectcalico.org"]
resources:
- ipamconfigs
verbs:
- get
# Block affinities must also be watchable by confd for route aggregation.
- apiGroups: ["crd.projectcalico.org"]
resources:
- blockaffinities
verbs:
- watch
# The Calico IPAM migration needs to get daemonsets. These permissions can be
# removed if not upgrading from an installation using host-local IPAM.
- apiGroups: ["apps"]
resources:
- daemonsets
verbs:
- get
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: calico-node
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: calico-node
subjects:
- kind: ServiceAccount
name: calico-node
namespace: kube-system
---
# Source: calico/templates/calico-node.yaml
# This manifest installs the calico-node container, as well
# as the CNI plugins and network config on
# each master and worker node in a Kubernetes cluster.
kind: DaemonSet
apiVersion: apps/v1
metadata:
name: calico-node
namespace: kube-system
labels:
k8s-app: calico-node
spec:
selector:
matchLabels:
k8s-app: calico-node
updateStrategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 1
template:
metadata:
labels:
k8s-app: calico-node
spec:
nodeSelector:
kubernetes.io/os: linux
hostNetwork: true
tolerations:
# Make sure calico-node gets scheduled on all nodes.
- effect: NoSchedule
operator: Exists
# Mark the pod as a critical add-on for rescheduling.
- key: CriticalAddonsOnly
operator: Exists
- effect: NoExecute
operator: Exists
serviceAccountName: calico-node
# Minimize downtime during a rolling upgrade or deletion; tell Kubernetes to do a "force
# deletion": https://kubernetes.io/docs/concepts/workloads/pods/pod/#termination-of-pods.
terminationGracePeriodSeconds: 0
priorityClassName: system-node-critical
initContainers:
# This container installs the CNI binaries
# and CNI network config file on each node.
- name: install-cni
image: calico/cni:v3.15.0
command: ["/install-cni.sh"]
env:
# Name of the CNI config file to create.
- name: CNI_CONF_NAME
value: "10-calico.conflist"
# The CNI network config to install on each node.
- name: CNI_NETWORK_CONFIG
valueFrom:
configMapKeyRef:
name: calico-config
key: cni_network_config
# Set the hostname based on the k8s node name.
- name: KUBERNETES_NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
# CNI MTU Config variable
- name: CNI_MTU
valueFrom:
configMapKeyRef:
name: calico-config
key: veth_mtu
# Prevents the container from sleeping forever.
- name: SLEEP
value: "false"
volumeMounts:
- mountPath: /host/opt/cni/bin
name: cni-bin-dir
- mountPath: /host/etc/cni/net.d
name: cni-net-dir
securityContext:
privileged: true
# Adds a Flex Volume Driver that creates a per-pod Unix Domain Socket to allow Dikastes
# to communicate with Felix over the Policy Sync API.
- name: flexvol-driver
image: calico/pod2daemon-flexvol:v3.15.0
volumeMounts:
- name: flexvol-driver-host
mountPath: /host/driver
securityContext:
privileged: true
containers:
# Runs calico-node container on each Kubernetes node. This
# container programs network policy and routes on each
# host.
- name: calico-node
image: calico/node:v3.15.0
env:
# Use Kubernetes API as the backing datastore.
- name: DATASTORE_TYPE
value: "kubernetes"
# Wait for the datastore.
- name: WAIT_FOR_DATASTORE
value: "true"
# Set based on the k8s node name.
- name: NODENAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
# Choose the backend to use.
- name: CALICO_NETWORKING_BACKEND
valueFrom:
configMapKeyRef:
name: calico-config
key: calico_backend
# Cluster type to identify the deployment type
- name: CLUSTER_TYPE
value: "k8s,kubeadm"
# Auto-detect the BGP IP address.
- name: IP
value: "autodetect"
# Enable IPIP
- name: CALICO_IPV4POOL_IPIP
value: "Never"
# Enable or Disable VXLAN on the default IP pool.
- name: CALICO_IPV4POOL_VXLAN
value: "Always"
# Set MTU for tunnel device used if ipip is enabled
- name: FELIX_IPINIPMTU
valueFrom:
configMapKeyRef:
name: calico-config
key: veth_mtu
# Set MTU for the VXLAN tunnel device.
- name: FELIX_VXLANMTU
valueFrom:
configMapKeyRef:
name: calico-config
key: veth_mtu
# Set MTU for the Wireguard tunnel device.
- name: FELIX_WIREGUARDMTU
valueFrom:
configMapKeyRef:
name: calico-config
key: veth_mtu
# The default IPv4 pool to create on startup if none exists. Pod IPs will be
# chosen from this range. Changing this value after installation will have
# no effect. This should fall within `--cluster-cidr`.
# - name: CALICO_IPV4POOL_CIDR
# value: "192.168.0.0/16"
# Disable file logging so `kubectl logs` works.
- name: CALICO_DISABLE_FILE_LOGGING
value: "true"
# Set Felix endpoint to host default action to ACCEPT.
- name: FELIX_DEFAULTENDPOINTTOHOSTACTION
value: "ACCEPT"
# Disable IPv6 on Kubernetes.
- name: FELIX_IPV6SUPPORT
value: "false"
# Set Felix logging to "info"
- name: FELIX_LOGSEVERITYSCREEN
value: "{{ .Values.loglevel }}"
- name: FELIX_LOGSEVERITYFILE
value: "{{ .Values.loglevel }}"
- name: FELIX_LOGSEVERITYSYS
value: ""
- name: FELIX_HEALTHENABLED
value: "true"
- name: FELIX_PROMETHEUSGOMETRICSENABLED
value: "{{ .Values.prometheus }}"
- name: FELIX_PROMETHEUSMETRICSENABLED
value: "{{ .Values.prometheus }}"
securityContext:
privileged: true
resources:
requests:
cpu: 250m
livenessProbe:
exec:
command:
- /bin/calico-node
- -felix-live
periodSeconds: 10
initialDelaySeconds: 10
failureThreshold: 6
readinessProbe:
exec:
command:
- /bin/calico-node
- -felix-ready
periodSeconds: 10
volumeMounts:
- mountPath: /lib/modules
name: lib-modules
readOnly: true
- mountPath: /run/xtables.lock
name: xtables-lock
readOnly: false
- mountPath: /var/run/calico
name: var-run-calico
readOnly: false
- mountPath: /var/lib/calico
name: var-lib-calico
readOnly: false
- name: policysync
mountPath: /var/run/nodeagent
volumes:
# Used by calico-node.
- name: lib-modules
hostPath:
path: /lib/modules
- name: var-run-calico
hostPath:
path: /var/run/calico
- name: var-lib-calico
hostPath:
path: /var/lib/calico
- name: xtables-lock
hostPath:
path: /run/xtables.lock
type: FileOrCreate
# Used to install CNI.
- name: cni-bin-dir
hostPath:
path: /opt/cni/bin
- name: cni-net-dir
hostPath:
path: /etc/cni/net.d
# Mount in the directory for host-local IPAM allocations. This is
# used when upgrading from host-local to calico-ipam, and can be removed
# if not using the upgrade-ipam init container.
- name: host-local-net-dir
hostPath:
path: /var/lib/cni/networks
# Used to create per-pod Unix Domain Sockets
- name: policysync
hostPath:
type: DirectoryOrCreate
path: /var/run/nodeagent
# Used to install Flex Volume Driver
- name: flexvol-driver-host
hostPath:
type: DirectoryOrCreate
path: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: calico-node
namespace: kube-system
---
# Source: calico/templates/calico-kube-controllers.yaml
# See https://github.com/projectcalico/kube-controllers
apiVersion: apps/v1
kind: Deployment
metadata:
name: calico-kube-controllers
namespace: kube-system
labels:
k8s-app: calico-kube-controllers
spec:
# The controllers can only have a single active instance.
replicas: 1
selector:
matchLabels:
k8s-app: calico-kube-controllers
strategy:
type: Recreate
template:
metadata:
name: calico-kube-controllers
namespace: kube-system
labels:
k8s-app: calico-kube-controllers
spec:
nodeSelector:
kubernetes.io/os: linux
tolerations:
# Mark the pod as a critical add-on for rescheduling.
- key: CriticalAddonsOnly
operator: Exists
- key: node-role.kubernetes.io/master
effect: NoSchedule
serviceAccountName: calico-kube-controllers
priorityClassName: system-cluster-critical
containers:
- name: calico-kube-controllers
image: calico/kube-controllers:v3.15.0
env:
# Choose which controllers to run.
- name: ENABLED_CONTROLLERS
value: node
- name: DATASTORE_TYPE
value: kubernetes
readinessProbe:
exec:
command:
- /usr/bin/check-status
- -r
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: calico-kube-controllers
namespace: kube-system
---
# Source: calico/templates/calico-etcd-secrets.yaml
---
# Source: calico/templates/calico-typha.yaml
---
# Source: calico/templates/configure-canal.yaml

View File

@@ -0,0 +1,17 @@
{{- if .Values.prometheus }}
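# Headless Service exposing the Felix Prometheus metrics port (9091) of each calico-node pod for scraping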
apiVersion: v1
kind: Service
metadata:
labels:
k8s-app: calico-node
name: calico-node
spec:
clusterIP: None
ports:
- name: metrics
port: 9091
protocol: TCP
targetPort: 9091
selector:
k8s-app: calico-node
type: ClusterIP
{{- end }}

View File

@@ -0,0 +1,19 @@
{{- if .Values.prometheus }}
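# ServiceMonitor for the Prometheus Operator (kube-prometheus) scraping the calico-node metrics Service in kube-system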
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
name: calico-node
labels:
k8s-app: calico-node
prometheus: kube-prometheus
spec:
jobLabel: k8s-app
selector:
matchLabels:
k8s-app: calico-node
namespaceSelector:
matchNames:
- kube-system
endpoints:
- port: metrics
{{- end }}

View File

@@ -0,0 +1,4 @@
network: vxlan
mtu: 8941
loglevel: Warning
prometheus: false

View File

@@ -2,7 +2,7 @@ kubezero-kiam
 =============
 KubeZero Umbrella Chart for Kiam
-Current chart version is `0.2.4`
+Current chart version is `0.2.5`
 Source code can be found [here](https://kubezero.com)
@@ -10,7 +10,7 @@ Source code can be found [here](https://kubezero.com)
 | Repository | Name | Version |
 |------------|------|---------|
-| https://uswitch.github.io/kiam-helm-charts/charts/ | kiam | 5.7.0 |
+| https://uswitch.github.io/kiam-helm-charts/charts/ | kiam | 5.8.1 |
 | https://zero-down-time.github.io/kubezero/ | kubezero-lib | >= 0.1.1 |
 ## KubeZero default configuration

View File

@@ -2,7 +2,7 @@ apiVersion: v2
 name: kubezero
 description: KubeZero ArgoCD Application - Root App of Apps chart of KubeZero
 type: application
-version: 0.3.0
+version: 0.3.1
 home: https://kubezero.com
 icon: https://cdn.zero-downtime.net/assets/kubezero/logo-small-64.png
 keywords:

View File

@@ -2,7 +2,7 @@ kubezero
 ========
 KubeZero ArgoCD Application - Root App of Apps chart of KubeZero
-Current chart version is `0.3.0`
+Current chart version is `0.3.1`
 Source code can be found [here](https://kubezero.com)
@@ -17,11 +17,12 @@ Source code can be found [here](https://kubezero.com)
 | Key | Type | Default | Description |
 |-----|------|---------|-------------|
 | aws-ebs-csi-driver.enabled | bool | `false` | |
-| calico.enabled | bool | `false` | |
-| cert-manager.enabled | bool | `false` | |
+| calico.enabled | bool | `true` | |
+| cert-manager.enabled | bool | `true` | |
 | global.defaultDestination.server | string | `"https://kubernetes.default.svc"` | |
 | global.defaultSource.pathPrefix | string | `""` | |
 | global.defaultSource.repoURL | string | `"https://github.com/zero-down-time/kubezero"` | |
 | global.defaultSource.targetRevision | string | `"HEAD"` | |
 | kiam.enabled | bool | `false` | |
 | local-volume-provisioner.enabled | bool | `false` | |
+| platform | string | `"aws"` | |

View File

@@ -18,14 +18,14 @@ spec:
 targetRevision: {{ .root.Values.global.defaultSource.targetRevision }}
 {{- if eq .type "helm" }}
 {{- $my_values := index .root.Values .name "values" }}
-path: {{ .root.Values.global.defaultSource.pathPrefix}}charts/kubezero-{{ default .name .path }}
+path: {{ .root.Values.global.defaultSource.pathPrefix}}charts/kubezero-{{ .name }}
 {{- if $my_values }}
 helm:
 values: |
 {{- toYaml $my_values | nindent 8 }}
 {{- end }}
 {{- else }}
-path: {{ .root.Values.global.defaultSource.pathPrefix }}artifacts/kubezero-{{ default .name .path }}
+path: {{ .root.Values.global.defaultSource.pathPrefix }}artifacts/kubezero-{{ .name }}
 {{- end }}
 destination:

View File

@@ -1,7 +1,7 @@
 # {{ .Values.calico.network }}
 {{- if .Values.calico.enabled }}
 {{- if .Values.calico.network }}
-{{ template "kubezero-app.app" dict "root" . "name" "calico" "type" "kustomize" "retain" true "path" (printf "%s/%s" "calico" .Values.platform) }}
+{{ template "kubezero-app.app" dict "root" . "name" "calico" "type" "helm" "retain" true }}
 {{- else }}
 {{ template "kubezero-app.app" dict "root" . "name" "calico" "type" "kustomize" "retain" true }}
 {{- end }}