Get new Calico #6
|
@ -7,35 +7,35 @@
|
|||
- check config/kube/kube-control-plane.yaml
|
||||
- check config/kube/kube-workers.yaml
|
||||
|
||||
|
||||
## Deploy Control Plane
|
||||
- cloudbender sync kube-control-plane
|
||||
## Deploy Cluster
|
||||
- cloudbender sync config/kube --multi
|
||||
The latest versions now support waiting for the control plane to bootstrap, allowing deployment in one step!
|
||||
|
||||
## Get kubectl config
|
||||
- get `admin.conf` from S3 and store it in your local `~/.kube` folder
|
||||
The S3 URL is also included in the Slack message sent after a successful bootstrap!
|
||||
|
||||
## Verify controller nodes
|
||||
- Verify all controller nodes have the expected version and are *Ready*, e.g. via `kubectl get nodes`
|
||||
|
||||
## Deploy Worker group
|
||||
- cloudbender sync kube-workers
|
||||
|
||||
## Verify all nodes
|
||||
- Verify all nodes, incl. workers, have the expected version and are *Ready*, e.g. via `kubectl get nodes`
|
||||
## Verify nodes
|
||||
- Verify all nodes have the expected version and are *Ready*, e.g. via `kubectl get nodes`
|
||||
|
||||
|
||||
---
|
||||
# KubeZero
|
||||
All configs and scripts are normally under:
|
||||
`artifacts/<ENV>/<REGION>/kubezero`
|
||||
|
||||
## Prepare Config
|
||||
- check values.yaml
|
||||
check values.yaml for your cluster
|
||||
|
||||
The easiest way to get the ARNs for the various IAM roles is to use the CloudBender outputs command:
|
||||
`cloudbender outputs config/kube-control-plane.yaml`
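For example, assuming the outputs command prints key/value pairs to stdout, a single ARN can be filtered out like this (the role key here is hypothetical, grep for whatever role your values.yaml needs):

```bash
cloudbender outputs config/kube-control-plane.yaml | grep -i RoleArn
```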
|
||||
## Get CloudBender kubezero config
|
||||
CloudBender creates a kubezero config file, which includes all outputs from the CloudFormation stacks, in `outputs/kube/kubezero.yaml`.
|
||||
- copy kubezero.yaml *next* to the values.yaml and name it `cloudbender.yaml`.
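For example, assuming the artifact layout described above:

```bash
cp outputs/kube/kubezero.yaml artifacts/<ENV>/<REGION>/kubezero/cloudbender.yaml
```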
|
||||
|
||||
## Deploy KubeZero Helm chart
|
||||
`./deploy.sh`
|
||||
|
||||
The deploy script handles the initial bootstrap process up to the point of installing advanced services like Istio or Prometheus.
|
||||
It takes about 10 minutes to reach the point where these advanced services can be installed.
|
||||
|
||||
## Verify ArgoCD
|
||||
At this stage there is no support for any kind of Ingress yet. To reach the Argo API, port-forward it from localhost:
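A sketch of such a port-forward, assuming the default `argocd-server` service in the `argocd` namespace (adjust names to your install):

```bash
kubectl port-forward svc/argocd-server -n argocd 8080:443
# in a second terminal
argocd login localhost:8080
argocd app list
```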
|
||||
|
@ -52,16 +52,8 @@ eg. `argocd app cert-manager sync`
|
|||
|
||||
# Only proceed further if all Argo Applications show healthy!
|
||||
|
||||
|
||||
## WIP not yet integrated into KubeZero
|
||||
|
||||
### EFS CSI
|
||||
To deploy the EFS CSI driver, the backing EFS filesystem needs to be in place ahead of time. This is easily done by enabling the EFS functionality in the worker CloudBender stack.
|
||||
|
||||
- retrieve the EFS filesystem ID: `cloudbender outputs config/kube-control-worker.yaml` and look for *EfsFileSystemId*
|
||||
- update values.yaml in the `aws-efs-csi` artifact folder as well as the efs_pv.yaml
|
||||
- execute `deploy.sh`
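A rough end-to-end sketch of those three steps, assuming the outputs command prints the ID to stdout and that values.yaml and efs_pv.yaml take the filesystem ID verbatim:

```bash
cloudbender outputs config/kube-control-worker.yaml | grep -i EfsFileSystemId
# put the returned fs-XXXXXXXX ID into values.yaml and efs_pv.yaml, then
./deploy.sh
```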
|
||||
|
||||
### Istio
|
||||
Istio is currently pinned to version 1.4.X as this is the last version supporting installation via helm charts.
|
||||
|
||||
|
@ -82,6 +74,9 @@ To deploy fluentbit only required adjustment is the `fluentd_host=<LOG_HOST>` in
|
|||
The only adjustment required is the ingress routing config in istio-service.yaml. Adjust as needed before executing:
|
||||
`deploy.sh`
|
||||
|
||||
### EFS CSI
|
||||
- add the EFS fs-ID from the worker CloudFormation output into values.yaml and the efs-pv.yaml
|
||||
- `./deploy.sh`
|
||||
|
||||
# Demo / own apps
|
||||
- Add your own application to ArgoCD via the CLI
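A hypothetical example; app name, repo URL, path and namespace are placeholders to adjust to your setup:

```bash
argocd app create my-app \
  --repo https://github.com/<your-org>/<your-repo>.git \
  --path deploy \
  --dest-server https://kubernetes.default.svc \
  --dest-namespace my-app
argocd app sync my-app
```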
|
|
@ -1,11 +1,31 @@
|
|||
# Calico CNI
|
||||
|
||||
## Known issues
|
||||
Due to a bug between Kustomize v2 and v3 we have to remove all namespaces from the base resources.
|
||||
The kube-system namespace will be applied by kustomize.
|
||||
The current top level still contains the deprecated Canal implementation.
|
||||
It will be removed once the new AWS config is tested and rolled out to all existing clusters.
|
||||
|
||||
See e.g.: `https://github.com/kubernetes-sigs/kustomize/issues/1351`
|
||||
## AWS
|
||||
Calico is set up based on the upstream calico-vxlan config from
|
||||
`https://docs.projectcalico.org/v3.15/manifests/calico-vxlan.yaml`
|
||||
|
||||
## Upgrade
|
||||
See: https://docs.projectcalico.org/maintenance/kubernetes-upgrade
|
||||
`curl https://docs.projectcalico.org/manifests/canal.yaml -O && patch < remove-namespace.patch`
|
||||
Changes:
|
||||
|
||||
- VxLAN set to Always to not expose cluster communication to VPC
|
||||
|
||||
-> EC2 SecurityGroups still apply and only need to allow UDP 4789 for VxLAN traffic
|
||||
-> No need to disable source/destination check on EC2 instances
|
||||
-> Prepared for optional WireGuard encryption for all inter node traffic
|
||||
|
||||
- MTU set to 8941
|
||||
|
||||
- Removed migration init-container
|
||||
|
||||
- Disabled BGP and BIRD health checks
|
||||
|
||||
- Set FELIX log level to warning
|
||||
|
||||
- Enable Prometheus metrics
|
||||
|
||||
|
||||
## Prometheus
|
||||
|
||||
See: https://grafana.com/grafana/dashboards/12175
|
||||
|
|
|
@ -0,0 +1,101 @@
|
|||
--- calico-vxlan.yaml 2020-07-03 15:32:40.740506882 +0100
|
||||
+++ calico.yaml 2020-07-03 15:27:47.651499841 +0100
|
||||
@@ -10,13 +10,13 @@
|
||||
# Typha is disabled.
|
||||
typha_service_name: "none"
|
||||
# Configure the backend to use.
|
||||
- calico_backend: "bird"
|
||||
+ calico_backend: "vxlan"
|
||||
# Configure the MTU to use for workload interfaces and tunnels.
|
||||
# - If Wireguard is enabled, set to your network MTU - 60
|
||||
# - Otherwise, if VXLAN or BPF mode is enabled, set to your network MTU - 50
|
||||
# - Otherwise, if IPIP is enabled, set to your network MTU - 20
|
||||
# - Otherwise, if not using any encapsulation, set to your network MTU.
|
||||
- veth_mtu: "1410"
|
||||
+ veth_mtu: "8941"
|
||||
|
||||
# The CNI network configuration to install on each node. The special
|
||||
# values in this config will be automatically populated.
|
||||
@@ -3451,29 +3451,6 @@
|
||||
terminationGracePeriodSeconds: 0
|
||||
priorityClassName: system-node-critical
|
||||
initContainers:
|
||||
- # This container performs upgrade from host-local IPAM to calico-ipam.
|
||||
- # It can be deleted if this is a fresh installation, or if you have already
|
||||
- # upgraded to use calico-ipam.
|
||||
- - name: upgrade-ipam
|
||||
- image: calico/cni:v3.15.0
|
||||
- command: ["/opt/cni/bin/calico-ipam", "-upgrade"]
|
||||
- env:
|
||||
- - name: KUBERNETES_NODE_NAME
|
||||
- valueFrom:
|
||||
- fieldRef:
|
||||
- fieldPath: spec.nodeName
|
||||
- - name: CALICO_NETWORKING_BACKEND
|
||||
- valueFrom:
|
||||
- configMapKeyRef:
|
||||
- name: calico-config
|
||||
- key: calico_backend
|
||||
- volumeMounts:
|
||||
- - mountPath: /var/lib/cni/networks
|
||||
- name: host-local-net-dir
|
||||
- - mountPath: /host/opt/cni/bin
|
||||
- name: cni-bin-dir
|
||||
- securityContext:
|
||||
- privileged: true
|
||||
# This container installs the CNI binaries
|
||||
# and CNI network config file on each node.
|
||||
- name: install-cni
|
||||
@@ -3545,7 +3522,7 @@
|
||||
key: calico_backend
|
||||
# Cluster type to identify the deployment type
|
||||
- name: CLUSTER_TYPE
|
||||
- value: "k8s,bgp"
|
||||
+ value: "k8s,kubeadm"
|
||||
# Auto-detect the BGP IP address.
|
||||
- name: IP
|
||||
value: "autodetect"
|
||||
@@ -3554,7 +3531,7 @@
|
||||
value: "Never"
|
||||
# Enable or Disable VXLAN on the default IP pool.
|
||||
- name: CALICO_IPV4POOL_VXLAN
|
||||
- value: "CrossSubnet"
|
||||
+ value: "Always"
|
||||
# Set MTU for tunnel device used if ipip is enabled
|
||||
- name: FELIX_IPINIPMTU
|
||||
valueFrom:
|
||||
@@ -3595,9 +3572,17 @@
|
||||
value: "false"
|
||||
# Set Felix logging to "info"
|
||||
- name: FELIX_LOGSEVERITYSCREEN
|
||||
- value: "info"
|
||||
+ value: "Warning"
|
||||
+ - name: FELIX_LOGSEVERITYFILE
|
||||
+ value: "Warning"
|
||||
+ - name: FELIX_LOGSEVERITYSYS
|
||||
+ value: ""
|
||||
- name: FELIX_HEALTHENABLED
|
||||
value: "true"
|
||||
+ - name: FELIX_PROMETHEUSGOMETRICSENABLED
|
||||
+ value: "false"
|
||||
+ - name: FELIX_PROMETHEUSMETRICSENABLED
|
||||
+ value: "true"
|
||||
securityContext:
|
||||
privileged: true
|
||||
resources:
|
||||
@@ -3608,7 +3593,6 @@
|
||||
command:
|
||||
- /bin/calico-node
|
||||
- -felix-live
|
||||
- - -bird-live
|
||||
periodSeconds: 10
|
||||
initialDelaySeconds: 10
|
||||
failureThreshold: 6
|
||||
@@ -3617,7 +3601,6 @@
|
||||
command:
|
||||
- /bin/calico-node
|
||||
- -felix-ready
|
||||
- - -bird-ready
|
||||
periodSeconds: 10
|
||||
volumeMounts:
|
||||
- mountPath: /lib/modules
|
File diff suppressed because it is too large
|
@ -2,7 +2,7 @@ kubezero-argo-cd
|
|||
================
|
||||
KubeZero ArgoCD Helm chart to install ArgoCD itself and the KubeZero ArgoCD Application
|
||||
|
||||
Current chart version is `0.3.0`
|
||||
Current chart version is `0.3.1`
|
||||
|
||||
Source code can be found [here](https://kubezero.com)
|
||||
|
||||
|
@ -10,7 +10,7 @@ Source code can be found [here](https://kubezero.com)
|
|||
|
||||
| Repository | Name | Version |
|
||||
|------------|------|---------|
|
||||
| https://argoproj.github.io/argo-helm | argo-cd | 2.3.2 |
|
||||
| https://argoproj.github.io/argo-helm | argo-cd | 2.5.0 |
|
||||
| https://zero-down-time.github.io/kubezero/ | kubezero-lib | >= 0.1.1 |
|
||||
|
||||
## Chart Values
|
||||
|
|
|
@ -0,0 +1,17 @@
|
|||
apiVersion: v2
|
||||
name: kubezero-calico
|
||||
description: KubeZero Umbrella Chart for Calico
|
||||
type: application
|
||||
version: 0.1.3
|
||||
home: https://kubezero.com
|
||||
icon: https://cdn.zero-downtime.net/assets/kubezero/logo-small-64.png
|
||||
keywords:
|
||||
- kubezero
|
||||
- calico
|
||||
maintainers:
|
||||
- name: Quarky9
|
||||
dependencies:
|
||||
- name: kubezero-lib
|
||||
version: ">= 0.1.1"
|
||||
repository: https://zero-down-time.github.io/kubezero/
|
||||
kubeVersion: ">= 1.16.0"
|
|
@ -0,0 +1,40 @@
|
|||
kubezero-calico
|
||||
===============
|
||||
KubeZero Umbrella Chart for Calico
|
||||
|
||||
Current chart version is `0.1.3`
|
||||
|
||||
Source code can be found [here](https://kubezero.com)
|
||||
|
||||
## Chart Requirements
|
||||
|
||||
| Repository | Name | Version |
|
||||
|------------|------|---------|
|
||||
| https://zero-down-time.github.io/kubezero/ | kubezero-lib | >= 0.1.1 |
|
||||
|
||||
## KubeZero default configuration
|
||||
|
||||
## AWS
|
||||
The setup is based on the upstream calico-vxlan config from
|
||||
`https://docs.projectcalico.org/v3.15/manifests/calico-vxlan.yaml`
|
||||
|
||||
### Changes
|
||||
|
||||
- VxLAN set to Always to not expose cluster communication to VPC
|
||||
|
||||
-> EC2 SecurityGroups still apply and only need to allow UDP 4789 for VxLAN traffic
|
||||
-> No need to disable source/destination check on EC2 instances
|
||||
-> Prepared for optional WireGuard encryption for all inter node traffic
|
||||
|
||||
- MTU set to 8941
|
||||
|
||||
- Removed migration init-container
|
||||
|
||||
- Disabled BGP and BIRD health checks
|
||||
|
||||
- Set FELIX log level to warning
|
||||
|
||||
|
||||
## Resources
|
||||
|
||||
- Grafana Dashboard: https://grafana.com/grafana/dashboards/12175
|
|
@ -0,0 +1,35 @@
|
|||
{{ template "chart.header" . }}
|
||||
{{ template "chart.description" . }}
|
||||
|
||||
{{ template "chart.versionLine" . }}
|
||||
|
||||
{{ template "chart.sourceLinkLine" . }}
|
||||
|
||||
{{ template "chart.requirementsSection" . }}
|
||||
|
||||
## KubeZero default configuration
|
||||
|
||||
## AWS
|
||||
The setup is based on the upstream calico-vxlan config from
|
||||
`https://docs.projectcalico.org/v3.15/manifests/calico-vxlan.yaml`
|
||||
|
||||
### Changes
|
||||
|
||||
- VxLAN set to Always to not expose cluster communication to VPC
|
||||
|
||||
-> EC2 SecurityGroups still apply and only need to allow UDP 4789 for VxLAN traffic
|
||||
-> No need to disable source/destination check on EC2 instances
|
||||
-> Prepared for optional WireGuard encryption for all inter node traffic
|
||||
|
||||
- MTU set to 8941
|
||||
|
||||
- Removed migration init-container
|
||||
|
||||
- Disabled BGP and BIRD health checks
|
||||
|
||||
- Set FELIX log level to warning
|
||||
|
||||
|
||||
## Resources
|
||||
|
||||
- Grafana Dashboard: https://grafana.com/grafana/dashboards/12175
|
|
@ -0,0 +1,101 @@
|
|||
--- calico-vxlan.yaml 2020-07-03 15:32:40.740506882 +0100
|
||||
+++ calico.yaml 2020-07-03 15:27:47.651499841 +0100
|
||||
@@ -10,13 +10,13 @@
|
||||
# Typha is disabled.
|
||||
typha_service_name: "none"
|
||||
# Configure the backend to use.
|
||||
- calico_backend: "bird"
|
||||
+ calico_backend: "vxlan"
|
||||
# Configure the MTU to use for workload interfaces and tunnels.
|
||||
# - If Wireguard is enabled, set to your network MTU - 60
|
||||
# - Otherwise, if VXLAN or BPF mode is enabled, set to your network MTU - 50
|
||||
# - Otherwise, if IPIP is enabled, set to your network MTU - 20
|
||||
# - Otherwise, if not using any encapsulation, set to your network MTU.
|
||||
- veth_mtu: "1410"
|
||||
+ veth_mtu: "8941"
|
||||
|
||||
# The CNI network configuration to install on each node. The special
|
||||
# values in this config will be automatically populated.
|
||||
@@ -3451,29 +3451,6 @@
|
||||
terminationGracePeriodSeconds: 0
|
||||
priorityClassName: system-node-critical
|
||||
initContainers:
|
||||
- # This container performs upgrade from host-local IPAM to calico-ipam.
|
||||
- # It can be deleted if this is a fresh installation, or if you have already
|
||||
- # upgraded to use calico-ipam.
|
||||
- - name: upgrade-ipam
|
||||
- image: calico/cni:v3.15.0
|
||||
- command: ["/opt/cni/bin/calico-ipam", "-upgrade"]
|
||||
- env:
|
||||
- - name: KUBERNETES_NODE_NAME
|
||||
- valueFrom:
|
||||
- fieldRef:
|
||||
- fieldPath: spec.nodeName
|
||||
- - name: CALICO_NETWORKING_BACKEND
|
||||
- valueFrom:
|
||||
- configMapKeyRef:
|
||||
- name: calico-config
|
||||
- key: calico_backend
|
||||
- volumeMounts:
|
||||
- - mountPath: /var/lib/cni/networks
|
||||
- name: host-local-net-dir
|
||||
- - mountPath: /host/opt/cni/bin
|
||||
- name: cni-bin-dir
|
||||
- securityContext:
|
||||
- privileged: true
|
||||
# This container installs the CNI binaries
|
||||
# and CNI network config file on each node.
|
||||
- name: install-cni
|
||||
@@ -3545,7 +3522,7 @@
|
||||
key: calico_backend
|
||||
# Cluster type to identify the deployment type
|
||||
- name: CLUSTER_TYPE
|
||||
- value: "k8s,bgp"
|
||||
+ value: "k8s,kubeadm"
|
||||
# Auto-detect the BGP IP address.
|
||||
- name: IP
|
||||
value: "autodetect"
|
||||
@@ -3554,7 +3531,7 @@
|
||||
value: "Never"
|
||||
# Enable or Disable VXLAN on the default IP pool.
|
||||
- name: CALICO_IPV4POOL_VXLAN
|
||||
- value: "CrossSubnet"
|
||||
+ value: "Always"
|
||||
# Set MTU for tunnel device used if ipip is enabled
|
||||
- name: FELIX_IPINIPMTU
|
||||
valueFrom:
|
||||
@@ -3595,9 +3572,17 @@
|
||||
value: "false"
|
||||
# Set Felix logging to "info"
|
||||
- name: FELIX_LOGSEVERITYSCREEN
|
||||
- value: "info"
|
||||
+ value: "Warning"
|
||||
+ - name: FELIX_LOGSEVERITYFILE
|
||||
+ value: "Warning"
|
||||
+ - name: FELIX_LOGSEVERITYSYS
|
||||
+ value: ""
|
||||
- name: FELIX_HEALTHENABLED
|
||||
value: "true"
|
||||
+ - name: FELIX_PROMETHEUSGOMETRICSENABLED
|
||||
+ value: "false"
|
||||
+ - name: FELIX_PROMETHEUSMETRICSENABLED
|
||||
+ value: "true"
|
||||
securityContext:
|
||||
privileged: true
|
||||
resources:
|
||||
@@ -3608,7 +3593,6 @@
|
||||
command:
|
||||
- /bin/calico-node
|
||||
- -felix-live
|
||||
- - -bird-live
|
||||
periodSeconds: 10
|
||||
initialDelaySeconds: 10
|
||||
failureThreshold: 6
|
||||
@@ -3617,7 +3601,6 @@
|
||||
command:
|
||||
- /bin/calico-node
|
||||
- -felix-ready
|
||||
- - -bird-ready
|
||||
periodSeconds: 10
|
||||
volumeMounts:
|
||||
- mountPath: /lib/modules
|
|
@ -0,0 +1,624 @@
|
|||
---
|
||||
# Source: calico/templates/calico-config.yaml
|
||||
# This ConfigMap is used to configure a self-hosted Calico installation.
|
||||
kind: ConfigMap
|
||||
apiVersion: v1
|
||||
metadata:
|
||||
name: calico-config
|
||||
namespace: kube-system
|
||||
data:
|
||||
# Typha is disabled.
|
||||
typha_service_name: "none"
|
||||
# Configure the backend to use.
|
||||
calico_backend: "{{ .Values.network }}"
|
||||
# Configure the MTU to use for workload interfaces and tunnels.
|
||||
# - If Wireguard is enabled, set to your network MTU - 60
|
||||
# - Otherwise, if VXLAN or BPF mode is enabled, set to your network MTU - 50
|
||||
# - Otherwise, if IPIP is enabled, set to your network MTU - 20
|
||||
# - Otherwise, if not using any encapsulation, set to your network MTU.
|
||||
veth_mtu: "{{ .Values.mtu }}"
|
||||
|
||||
# The CNI network configuration to install on each node. The special
|
||||
# values in this config will be automatically populated.
|
||||
cni_network_config: |-
|
||||
{
|
||||
"name": "k8s-pod-network",
|
||||
"cniVersion": "0.3.1",
|
||||
"plugins": [
|
||||
{
|
||||
"type": "calico",
|
||||
"log_level": "info",
|
||||
"datastore_type": "kubernetes",
|
||||
"nodename": "__KUBERNETES_NODE_NAME__",
|
||||
"mtu": __CNI_MTU__,
|
||||
"ipam": {
|
||||
"type": "calico-ipam"
|
||||
},
|
||||
"policy": {
|
||||
"type": "k8s"
|
||||
},
|
||||
"kubernetes": {
|
||||
"kubeconfig": "__KUBECONFIG_FILEPATH__"
|
||||
}
|
||||
},
|
||||
{
|
||||
"type": "portmap",
|
||||
"snat": true,
|
||||
"capabilities": {"portMappings": true}
|
||||
},
|
||||
{
|
||||
"type": "bandwidth",
|
||||
"capabilities": {"bandwidth": true}
|
||||
}
|
||||
]
|
||||
}
|
||||
|
||||
---
|
||||
# Source: calico/templates/calico-kube-controllers-rbac.yaml
|
||||
|
||||
# Include a clusterrole for the kube-controllers component,
|
||||
# and bind it to the calico-kube-controllers serviceaccount.
|
||||
kind: ClusterRole
|
||||
apiVersion: rbac.authorization.k8s.io/v1
|
||||
metadata:
|
||||
name: calico-kube-controllers
|
||||
rules:
|
||||
# Nodes are watched to monitor for deletions.
|
||||
- apiGroups: [""]
|
||||
resources:
|
||||
- nodes
|
||||
verbs:
|
||||
- watch
|
||||
- list
|
||||
- get
|
||||
# Pods are queried to check for existence.
|
||||
- apiGroups: [""]
|
||||
resources:
|
||||
- pods
|
||||
verbs:
|
||||
- get
|
||||
# IPAM resources are manipulated when nodes are deleted.
|
||||
- apiGroups: ["crd.projectcalico.org"]
|
||||
resources:
|
||||
- ippools
|
||||
verbs:
|
||||
- list
|
||||
- apiGroups: ["crd.projectcalico.org"]
|
||||
resources:
|
||||
- blockaffinities
|
||||
- ipamblocks
|
||||
- ipamhandles
|
||||
verbs:
|
||||
- get
|
||||
- list
|
||||
- create
|
||||
- update
|
||||
- delete
|
||||
# kube-controllers manages hostendpoints.
|
||||
- apiGroups: ["crd.projectcalico.org"]
|
||||
resources:
|
||||
- hostendpoints
|
||||
verbs:
|
||||
- get
|
||||
- list
|
||||
- create
|
||||
- update
|
||||
- delete
|
||||
# Needs access to update clusterinformations.
|
||||
- apiGroups: ["crd.projectcalico.org"]
|
||||
resources:
|
||||
- clusterinformations
|
||||
verbs:
|
||||
- get
|
||||
- create
|
||||
- update
|
||||
# KubeControllersConfiguration is where it gets its config
|
||||
- apiGroups: ["crd.projectcalico.org"]
|
||||
resources:
|
||||
- kubecontrollersconfigurations
|
||||
verbs:
|
||||
# read its own config
|
||||
- get
|
||||
# create a default if none exists
|
||||
- create
|
||||
# update status
|
||||
- update
|
||||
# watch for changes
|
||||
- watch
|
||||
---
|
||||
kind: ClusterRoleBinding
|
||||
apiVersion: rbac.authorization.k8s.io/v1
|
||||
metadata:
|
||||
name: calico-kube-controllers
|
||||
roleRef:
|
||||
apiGroup: rbac.authorization.k8s.io
|
||||
kind: ClusterRole
|
||||
name: calico-kube-controllers
|
||||
subjects:
|
||||
- kind: ServiceAccount
|
||||
name: calico-kube-controllers
|
||||
namespace: kube-system
|
||||
---
|
||||
|
||||
---
|
||||
# Source: calico/templates/calico-node-rbac.yaml
|
||||
# Include a clusterrole for the calico-node DaemonSet,
|
||||
# and bind it to the calico-node serviceaccount.
|
||||
kind: ClusterRole
|
||||
apiVersion: rbac.authorization.k8s.io/v1
|
||||
metadata:
|
||||
name: calico-node
|
||||
rules:
|
||||
# The CNI plugin needs to get pods, nodes, and namespaces.
|
||||
- apiGroups: [""]
|
||||
resources:
|
||||
- pods
|
||||
- nodes
|
||||
- namespaces
|
||||
verbs:
|
||||
- get
|
||||
- apiGroups: [""]
|
||||
resources:
|
||||
- endpoints
|
||||
- services
|
||||
verbs:
|
||||
# Used to discover service IPs for advertisement.
|
||||
- watch
|
||||
- list
|
||||
# Used to discover Typhas.
|
||||
- get
|
||||
# Pod CIDR auto-detection on kubeadm needs access to config maps.
|
||||
- apiGroups: [""]
|
||||
resources:
|
||||
- configmaps
|
||||
verbs:
|
||||
- get
|
||||
- apiGroups: [""]
|
||||
resources:
|
||||
- nodes/status
|
||||
verbs:
|
||||
# Needed for clearing NodeNetworkUnavailable flag.
|
||||
- patch
|
||||
# Calico stores some configuration information in node annotations.
|
||||
- update
|
||||
# Watch for changes to Kubernetes NetworkPolicies.
|
||||
- apiGroups: ["networking.k8s.io"]
|
||||
resources:
|
||||
- networkpolicies
|
||||
verbs:
|
||||
- watch
|
||||
- list
|
||||
# Used by Calico for policy information.
|
||||
- apiGroups: [""]
|
||||
resources:
|
||||
- pods
|
||||
- namespaces
|
||||
- serviceaccounts
|
||||
verbs:
|
||||
- list
|
||||
- watch
|
||||
# The CNI plugin patches pods/status.
|
||||
- apiGroups: [""]
|
||||
resources:
|
||||
- pods/status
|
||||
verbs:
|
||||
- patch
|
||||
# Calico monitors various CRDs for config.
|
||||
- apiGroups: ["crd.projectcalico.org"]
|
||||
resources:
|
||||
- globalfelixconfigs
|
||||
- felixconfigurations
|
||||
- bgppeers
|
||||
- globalbgpconfigs
|
||||
- bgpconfigurations
|
||||
- ippools
|
||||
- ipamblocks
|
||||
- globalnetworkpolicies
|
||||
- globalnetworksets
|
||||
- networkpolicies
|
||||
- networksets
|
||||
- clusterinformations
|
||||
- hostendpoints
|
||||
- blockaffinities
|
||||
verbs:
|
||||
- get
|
||||
- list
|
||||
- watch
|
||||
# Calico must create and update some CRDs on startup.
|
||||
- apiGroups: ["crd.projectcalico.org"]
|
||||
resources:
|
||||
- ippools
|
||||
- felixconfigurations
|
||||
- clusterinformations
|
||||
verbs:
|
||||
- create
|
||||
- update
|
||||
# Calico stores some configuration information on the node.
|
||||
- apiGroups: [""]
|
||||
resources:
|
||||
- nodes
|
||||
verbs:
|
||||
- get
|
||||
- list
|
||||
- watch
|
||||
# These permissions are only required for upgrade from v2.6, and can
|
||||
# be removed after upgrade or on fresh installations.
|
||||
- apiGroups: ["crd.projectcalico.org"]
|
||||
resources:
|
||||
- bgpconfigurations
|
||||
- bgppeers
|
||||
verbs:
|
||||
- create
|
||||
- update
|
||||
# These permissions are required for Calico CNI to perform IPAM allocations.
|
||||
- apiGroups: ["crd.projectcalico.org"]
|
||||
resources:
|
||||
- blockaffinities
|
||||
- ipamblocks
|
||||
- ipamhandles
|
||||
verbs:
|
||||
- get
|
||||
- list
|
||||
- create
|
||||
- update
|
||||
- delete
|
||||
- apiGroups: ["crd.projectcalico.org"]
|
||||
resources:
|
||||
- ipamconfigs
|
||||
verbs:
|
||||
- get
|
||||
# Block affinities must also be watchable by confd for route aggregation.
|
||||
- apiGroups: ["crd.projectcalico.org"]
|
||||
resources:
|
||||
- blockaffinities
|
||||
verbs:
|
||||
- watch
|
||||
# The Calico IPAM migration needs to get daemonsets. These permissions can be
|
||||
# removed if not upgrading from an installation using host-local IPAM.
|
||||
- apiGroups: ["apps"]
|
||||
resources:
|
||||
- daemonsets
|
||||
verbs:
|
||||
- get
|
||||
|
||||
---
|
||||
apiVersion: rbac.authorization.k8s.io/v1
|
||||
kind: ClusterRoleBinding
|
||||
metadata:
|
||||
name: calico-node
|
||||
roleRef:
|
||||
apiGroup: rbac.authorization.k8s.io
|
||||
kind: ClusterRole
|
||||
name: calico-node
|
||||
subjects:
|
||||
- kind: ServiceAccount
|
||||
name: calico-node
|
||||
namespace: kube-system
|
||||
|
||||
---
|
||||
# Source: calico/templates/calico-node.yaml
|
||||
# This manifest installs the calico-node container, as well
|
||||
# as the CNI plugins and network config on
|
||||
# each master and worker node in a Kubernetes cluster.
|
||||
kind: DaemonSet
|
||||
apiVersion: apps/v1
|
||||
metadata:
|
||||
name: calico-node
|
||||
namespace: kube-system
|
||||
labels:
|
||||
k8s-app: calico-node
|
||||
spec:
|
||||
selector:
|
||||
matchLabels:
|
||||
k8s-app: calico-node
|
||||
updateStrategy:
|
||||
type: RollingUpdate
|
||||
rollingUpdate:
|
||||
maxUnavailable: 1
|
||||
template:
|
||||
metadata:
|
||||
labels:
|
||||
k8s-app: calico-node
|
||||
spec:
|
||||
nodeSelector:
|
||||
kubernetes.io/os: linux
|
||||
{{- if .Values.migration }}
|
||||
# Only run Calico on nodes that have been migrated.
|
||||
projectcalico.org/node-network-during-migration: calico
|
||||
{{- end }}
|
||||
hostNetwork: true
|
||||
tolerations:
|
||||
# Make sure calico-node gets scheduled on all nodes.
|
||||
- effect: NoSchedule
|
||||
operator: Exists
|
||||
# Mark the pod as a critical add-on for rescheduling.
|
||||
- key: CriticalAddonsOnly
|
||||
operator: Exists
|
||||
- effect: NoExecute
|
||||
operator: Exists
|
||||
serviceAccountName: calico-node
|
||||
# Minimize downtime during a rolling upgrade or deletion; tell Kubernetes to do a "force
|
||||
# deletion": https://kubernetes.io/docs/concepts/workloads/pods/pod/#termination-of-pods.
|
||||
terminationGracePeriodSeconds: 0
|
||||
priorityClassName: system-node-critical
|
||||
initContainers:
|
||||
# This container installs the CNI binaries
|
||||
# and CNI network config file on each node.
|
||||
- name: install-cni
|
||||
image: calico/cni:v3.15.0
|
||||
command: ["/install-cni.sh"]
|
||||
env:
|
||||
# Name of the CNI config file to create.
|
||||
- name: CNI_CONF_NAME
|
||||
value: "10-calico.conflist"
|
||||
# The CNI network config to install on each node.
|
||||
- name: CNI_NETWORK_CONFIG
|
||||
valueFrom:
|
||||
configMapKeyRef:
|
||||
name: calico-config
|
||||
key: cni_network_config
|
||||
# Set the hostname based on the k8s node name.
|
||||
- name: KUBERNETES_NODE_NAME
|
||||
valueFrom:
|
||||
fieldRef:
|
||||
fieldPath: spec.nodeName
|
||||
# CNI MTU Config variable
|
||||
- name: CNI_MTU
|
||||
valueFrom:
|
||||
configMapKeyRef:
|
||||
name: calico-config
|
||||
key: veth_mtu
|
||||
# Prevents the container from sleeping forever.
|
||||
- name: SLEEP
|
||||
value: "false"
|
||||
volumeMounts:
|
||||
- mountPath: /host/opt/cni/bin
|
||||
name: cni-bin-dir
|
||||
- mountPath: /host/etc/cni/net.d
|
||||
name: cni-net-dir
|
||||
securityContext:
|
||||
privileged: true
|
||||
# Adds a Flex Volume Driver that creates a per-pod Unix Domain Socket to allow Dikastes
|
||||
# to communicate with Felix over the Policy Sync API.
|
||||
- name: flexvol-driver
|
||||
image: calico/pod2daemon-flexvol:v3.15.0
|
||||
volumeMounts:
|
||||
- name: flexvol-driver-host
|
||||
mountPath: /host/driver
|
||||
securityContext:
|
||||
privileged: true
|
||||
containers:
|
||||
# Runs calico-node container on each Kubernetes node. This
|
||||
# container programs network policy and routes on each
|
||||
# host.
|
||||
- name: calico-node
|
||||
image: calico/node:v3.15.0
|
||||
env:
|
||||
# Use Kubernetes API as the backing datastore.
|
||||
- name: DATASTORE_TYPE
|
||||
value: "kubernetes"
|
||||
# Wait for the datastore.
|
||||
- name: WAIT_FOR_DATASTORE
|
||||
value: "true"
|
||||
# Set based on the k8s node name.
|
||||
- name: NODENAME
|
||||
valueFrom:
|
||||
fieldRef:
|
||||
fieldPath: spec.nodeName
|
||||
# Choose the backend to use.
|
||||
- name: CALICO_NETWORKING_BACKEND
|
||||
valueFrom:
|
||||
configMapKeyRef:
|
||||
name: calico-config
|
||||
key: calico_backend
|
||||
# Cluster type to identify the deployment type
|
||||
- name: CLUSTER_TYPE
|
||||
value: "k8s,kubeadm"
|
||||
# Auto-detect the BGP IP address.
|
||||
- name: IP
|
||||
value: "autodetect"
|
||||
# Enable IPIP
|
||||
- name: CALICO_IPV4POOL_IPIP
|
||||
value: "Never"
|
||||
# Enable or Disable VXLAN on the default IP pool.
|
||||
- name: CALICO_IPV4POOL_VXLAN
|
||||
value: "Always"
|
||||
# Set MTU for tunnel device used if ipip is enabled
|
||||
- name: FELIX_IPINIPMTU
|
||||
valueFrom:
|
||||
configMapKeyRef:
|
||||
name: calico-config
|
||||
key: veth_mtu
|
||||
# Set MTU for the VXLAN tunnel device.
|
||||
- name: FELIX_VXLANMTU
|
||||
valueFrom:
|
||||
configMapKeyRef:
|
||||
name: calico-config
|
||||
key: veth_mtu
|
||||
# Set MTU for the Wireguard tunnel device.
|
||||
- name: FELIX_WIREGUARDMTU
|
||||
valueFrom:
|
||||
configMapKeyRef:
|
||||
name: calico-config
|
||||
key: veth_mtu
|
||||
# The default IPv4 pool to create on startup if none exists. Pod IPs will be
|
||||
# chosen from this range. Changing this value after installation will have
|
||||
# no effect. This should fall within `--cluster-cidr`.
|
||||
# - name: CALICO_IPV4POOL_CIDR
|
||||
# value: "192.168.0.0/16"
|
||||
# Set MTU for the Wireguard tunnel device.
|
||||
- name: FELIX_WIREGUARDMTU
|
||||
valueFrom:
|
||||
configMapKeyRef:
|
||||
name: calico-config
|
||||
key: veth_mtu
|
||||
# Disable file logging so `kubectl logs` works.
|
||||
- name: CALICO_DISABLE_FILE_LOGGING
|
||||
value: "true"
|
||||
# Set Felix endpoint to host default action to ACCEPT.
|
||||
- name: FELIX_DEFAULTENDPOINTTOHOSTACTION
|
||||
value: "ACCEPT"
|
||||
# Disable IPv6 on Kubernetes.
|
||||
- name: FELIX_IPV6SUPPORT
|
||||
value: "false"
|
||||
# Set Felix logging to "info"
|
||||
- name: FELIX_LOGSEVERITYSCREEN
|
||||
value: "{{ .Values.loglevel }}"
|
||||
- name: FELIX_LOGSEVERITYFILE
|
||||
value: "{{ .Values.loglevel }}"
|
||||
- name: FELIX_LOGSEVERITYSYS
|
||||
value: ""
|
||||
- name: FELIX_HEALTHENABLED
|
||||
value: "true"
|
||||
- name: FELIX_PROMETHEUSGOMETRICSENABLED
|
||||
value: "{{ .Values.prometheus }}"
|
||||
- name: FELIX_PROMETHEUSMETRICSENABLED
|
||||
value: "{{ .Values.prometheus }}"
|
||||
securityContext:
|
||||
privileged: true
|
||||
resources:
|
||||
requests:
|
||||
cpu: 250m
|
||||
livenessProbe:
|
||||
exec:
|
||||
command:
|
||||
- /bin/calico-node
|
||||
- -felix-live
|
||||
periodSeconds: 10
|
||||
initialDelaySeconds: 10
|
||||
failureThreshold: 6
|
||||
readinessProbe:
|
||||
exec:
|
||||
command:
|
||||
- /bin/calico-node
|
||||
- -felix-ready
|
||||
periodSeconds: 10
|
||||
volumeMounts:
|
||||
- mountPath: /lib/modules
|
||||
name: lib-modules
|
||||
readOnly: true
|
||||
- mountPath: /run/xtables.lock
|
||||
name: xtables-lock
|
||||
readOnly: false
|
||||
- mountPath: /var/run/calico
|
||||
name: var-run-calico
|
||||
readOnly: false
|
||||
- mountPath: /var/lib/calico
|
||||
name: var-lib-calico
|
||||
readOnly: false
|
||||
- name: policysync
|
||||
mountPath: /var/run/nodeagent
|
||||
volumes:
|
||||
# Used by calico-node.
|
||||
- name: lib-modules
|
||||
hostPath:
|
||||
path: /lib/modules
|
||||
- name: var-run-calico
|
||||
hostPath:
|
||||
path: /var/run/calico
|
||||
- name: var-lib-calico
|
||||
hostPath:
|
||||
path: /var/lib/calico
|
||||
- name: xtables-lock
|
||||
hostPath:
|
||||
path: /run/xtables.lock
|
||||
type: FileOrCreate
|
||||
# Used to install CNI.
|
||||
- name: cni-bin-dir
|
||||
hostPath:
|
||||
path: /opt/cni/bin
|
||||
- name: cni-net-dir
|
||||
hostPath:
|
||||
path: /etc/cni/net.d
|
||||
# Mount in the directory for host-local IPAM allocations. This is
|
||||
# used when upgrading from host-local to calico-ipam, and can be removed
|
||||
# if not using the upgrade-ipam init container.
|
||||
- name: host-local-net-dir
|
||||
hostPath:
|
||||
path: /var/lib/cni/networks
|
||||
# Used to create per-pod Unix Domain Sockets
|
||||
- name: policysync
|
||||
hostPath:
|
||||
type: DirectoryOrCreate
|
||||
path: /var/run/nodeagent
|
||||
# Used to install Flex Volume Driver
|
||||
- name: flexvol-driver-host
|
||||
hostPath:
|
||||
type: DirectoryOrCreate
|
||||
path: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds
|
||||
---
|
||||
|
||||
apiVersion: v1
|
||||
kind: ServiceAccount
|
||||
metadata:
|
||||
name: calico-node
|
||||
namespace: kube-system
|
||||
|
||||
---
|
||||
# Source: calico/templates/calico-kube-controllers.yaml
|
||||
# See https://github.com/projectcalico/kube-controllers
|
||||
apiVersion: apps/v1
|
||||
kind: Deployment
|
||||
metadata:
|
||||
name: calico-kube-controllers
|
||||
namespace: kube-system
|
||||
labels:
|
||||
k8s-app: calico-kube-controllers
|
||||
spec:
|
||||
# The controllers can only have a single active instance.
|
||||
replicas: 1
|
||||
selector:
|
||||
matchLabels:
|
||||
k8s-app: calico-kube-controllers
|
||||
strategy:
|
||||
type: Recreate
|
||||
template:
|
||||
metadata:
|
||||
name: calico-kube-controllers
|
||||
namespace: kube-system
|
||||
labels:
|
||||
k8s-app: calico-kube-controllers
|
||||
spec:
|
||||
nodeSelector:
|
||||
kubernetes.io/os: linux
|
||||
tolerations:
|
||||
# Mark the pod as a critical add-on for rescheduling.
|
||||
- key: CriticalAddonsOnly
|
||||
operator: Exists
|
||||
- key: node-role.kubernetes.io/master
|
||||
effect: NoSchedule
|
||||
serviceAccountName: calico-kube-controllers
|
||||
priorityClassName: system-cluster-critical
|
||||
containers:
|
||||
- name: calico-kube-controllers
|
||||
image: calico/kube-controllers:v3.15.0
|
||||
env:
|
||||
# Choose which controllers to run.
|
||||
- name: ENABLED_CONTROLLERS
|
||||
value: node
|
||||
- name: DATASTORE_TYPE
|
||||
value: kubernetes
|
||||
readinessProbe:
|
||||
exec:
|
||||
command:
|
||||
- /usr/bin/check-status
|
||||
- -r
|
||||
|
||||
---
|
||||
|
||||
apiVersion: v1
|
||||
kind: ServiceAccount
|
||||
metadata:
|
||||
name: calico-kube-controllers
|
||||
namespace: kube-system
|
||||
|
||||
---
|
||||
# Source: calico/templates/calico-etcd-secrets.yaml
|
||||
|
||||
---
|
||||
# Source: calico/templates/calico-typha.yaml
|
||||
|
||||
---
|
||||
# Source: calico/templates/configure-canal.yaml
|
||||
|
||||
|
File diff suppressed because it is too large
|
@ -0,0 +1,192 @@
|
|||
{{- if .Values.migration }}
|
||||
---
|
||||
# This ConfigMap is used to store Flannel subnet.env content.
|
||||
kind: ConfigMap
|
||||
apiVersion: v1
|
||||
metadata:
|
||||
name: flannel-migration-config
|
||||
namespace: kube-system
|
||||
data:
|
||||
# Do not edit! This field is updated by migration controller.
|
||||
flannel_subnet_env: ""
|
||||
|
||||
---
|
||||
# Include a clusterrole for the kube-controllers component,
|
||||
# and bind it to the flannel-migration-controller serviceaccount.
|
||||
kind: ClusterRole
|
||||
apiVersion: rbac.authorization.k8s.io/v1
|
||||
metadata:
|
||||
name: flannel-migration-controller
|
||||
rules:
|
||||
# Nodes are watched to monitor for deletions.
|
||||
- apiGroups: [""]
|
||||
resources:
|
||||
- nodes
|
||||
verbs:
|
||||
- watch
|
||||
- list
|
||||
- get
|
||||
- patch
|
||||
- update
|
||||
# Nodes are watched to monitor for deletions.
|
||||
- apiGroups: [""]
|
||||
resources:
|
||||
- nodes/status
|
||||
verbs:
|
||||
- get
|
||||
- update
|
||||
# Pods are created/deleted.
|
||||
- apiGroups: [""]
|
||||
resources:
|
||||
- pods
|
||||
verbs:
|
||||
- get
|
||||
- list
|
||||
- create
|
||||
- delete
|
||||
# Pods/exec are created.
|
||||
- apiGroups: [""]
|
||||
resources:
|
||||
- pods/exec
|
||||
verbs:
|
||||
- create
|
||||
# Configmaps are updated.
|
||||
- apiGroups: [""]
|
||||
resources:
|
||||
- configmaps
|
||||
verbs:
|
||||
- get
|
||||
- update
|
||||
- apiGroups: [""]
|
||||
resources:
|
||||
- pods/eviction
|
||||
verbs:
|
||||
- create
|
||||
# Daemonset are watched to monitor for deletions.
|
||||
- apiGroups: ["apps", "extensions"]
|
||||
resources:
|
||||
- daemonsets
|
||||
verbs:
|
||||
- get
|
||||
- delete
|
||||
- update
|
||||
# IPAM resources are manipulated when nodes are deleted.
|
||||
- apiGroups: ["crd.projectcalico.org"]
|
||||
resources:
|
||||
- ippools
|
||||
verbs:
|
||||
- get
|
||||
- list
|
||||
- create
|
||||
- update
|
||||
- delete
|
||||
- apiGroups: ["crd.projectcalico.org"]
|
||||
resources:
|
||||
- ipamconfigs
|
||||
- blockaffinities
|
||||
- ipamblocks
|
||||
- ipamhandles
|
||||
verbs:
|
||||
- get
|
||||
- list
|
||||
- create
|
||||
- update
|
||||
- delete
|
||||
# Needs access to update clusterinformations.
|
||||
- apiGroups: ["crd.projectcalico.org"]
|
||||
resources:
|
||||
- clusterinformations
|
||||
verbs:
|
||||
- get
|
||||
- create
|
||||
- update
|
||||
# Needs access to update felixconfigurations.
|
||||
- apiGroups: ["crd.projectcalico.org"]
|
||||
resources:
|
||||
- felixconfigurations
|
||||
verbs:
|
||||
- get
|
||||
- create
|
||||
- update
|
||||
---
|
||||
kind: ClusterRoleBinding
|
||||
apiVersion: rbac.authorization.k8s.io/v1
|
||||
metadata:
|
||||
name: flannel-migration-controller
|
||||
roleRef:
|
||||
apiGroup: rbac.authorization.k8s.io
|
||||
kind: ClusterRole
|
||||
name: flannel-migration-controller
|
||||
subjects:
|
||||
- kind: ServiceAccount
|
||||
name: flannel-migration-controller
|
||||
namespace: kube-system
|
||||
|
||||
---
|
||||
# See https://github.com/projectcalico/kube-controllers
|
||||
apiVersion: batch/v1
|
||||
kind: Job
|
||||
metadata:
|
||||
name: flannel-migration
|
||||
namespace: kube-system
|
||||
labels:
|
||||
k8s-app: flannel-migration-controller
|
||||
spec:
|
||||
backoffLimit: 10
|
||||
template:
|
||||
metadata:
|
||||
name: flannel-migration-controller
|
||||
namespace: kube-system
|
||||
labels:
|
||||
k8s-app: flannel-migration-controller
|
||||
spec:
|
||||
nodeSelector:
|
||||
kubernetes.io/os: linux
|
||||
tolerations:
|
||||
# Mark the pod as a critical add-on for rescheduling.
|
||||
- key: CriticalAddonsOnly
|
||||
operator: Exists
|
||||
serviceAccountName: flannel-migration-controller
|
||||
priorityClassName: system-cluster-critical
|
||||
restartPolicy: OnFailure
|
||||
containers:
|
||||
- name: flannel-migration-controller
|
||||
image: calico/flannel-migration-controller:v3.15.0
|
||||
env:
|
||||
# Choose which controllers to run.
|
||||
- name: ENABLED_CONTROLLERS
|
||||
value: flannelmigration
|
||||
- name: DATASTORE_TYPE
|
||||
value: kubernetes
|
||||
- name: FLANNEL_DAEMONSET_NAME
|
||||
value: canal
|
||||
- name: FLANNEL_SUBNET_ENV
|
||||
valueFrom:
|
||||
configMapKeyRef:
|
||||
name: flannel-migration-config
|
||||
key: flannel_subnet_env
|
||||
- name: POD_NODE_NAME
|
||||
valueFrom:
|
||||
fieldRef:
|
||||
fieldPath: spec.nodeName
|
||||
volumeMounts:
|
||||
- mountPath: /host/run/flannel/subnet.env
|
||||
name: flannel-env-file
|
||||
readinessProbe:
|
||||
exec:
|
||||
command:
|
||||
- /usr/bin/check-status
|
||||
- -r
|
||||
volumes:
|
||||
- name: flannel-env-file
|
||||
hostPath:
|
||||
path: /run/flannel/subnet.env
|
||||
|
||||
---
|
||||
|
||||
apiVersion: v1
|
||||
kind: ServiceAccount
|
||||
metadata:
|
||||
name: flannel-migration-controller
|
||||
namespace: kube-system
|
||||
{{- end }}
|
|
@ -0,0 +1,17 @@
|
|||
{{- if .Values.prometheus }}
|
||||
apiVersion: v1
kind: Service
|
||||
metadata:
|
||||
labels:
|
||||
k8s-app: calico-node
|
||||
name: calico-node
|
||||
spec:
|
||||
clusterIP: None
|
||||
ports:
|
||||
- name: metrics
|
||||
port: 9091
|
||||
protocol: TCP
|
||||
targetPort: 9091
|
||||
selector:
|
||||
k8s-app: calico-node
|
||||
type: ClusterIP
|
||||
{{- end }}
|
|
@ -0,0 +1,19 @@
|
|||
{{- if .Values.prometheus }}
|
||||
apiVersion: monitoring.coreos.com/v1
|
||||
kind: ServiceMonitor
|
||||
metadata:
|
||||
name: calico-node
|
||||
labels:
|
||||
k8s-app: calico-node
|
||||
prometheus: kube-prometheus
|
||||
spec:
|
||||
jobLabel: k8s-app
|
||||
selector:
|
||||
matchLabels:
|
||||
k8s-app: calico-node
|
||||
namespaceSelector:
|
||||
matchNames:
|
||||
- kube-system
|
||||
endpoints:
|
||||
- port: metrics
|
||||
{{- end }}
|
|
@ -0,0 +1,9 @@
|
|||
migration: false
|
||||
|
||||
network: vxlan
|
||||
|
||||
mtu: 8941
|
||||
|
||||
loglevel: Warning
|
||||
|
||||
prometheus: false
|
|
@ -2,7 +2,7 @@ kubezero-kiam
|
|||
=============
|
||||
KubeZero Umbrella Chart for Kiam
|
||||
|
||||
Current chart version is `0.2.4`
|
||||
Current chart version is `0.2.5`
|
||||
|
||||
Source code can be found [here](https://kubezero.com)
|
||||
|
||||
|
@ -10,7 +10,7 @@ Source code can be found [here](https://kubezero.com)
|
|||
|
||||
| Repository | Name | Version |
|
||||
|------------|------|---------|
|
||||
| https://uswitch.github.io/kiam-helm-charts/charts/ | kiam | 5.7.0 |
|
||||
| https://uswitch.github.io/kiam-helm-charts/charts/ | kiam | 5.8.1 |
|
||||
| https://zero-down-time.github.io/kubezero/ | kubezero-lib | >= 0.1.1 |
|
||||
|
||||
## KubeZero default configuration
|
||||
|
|
|
@ -2,7 +2,7 @@ apiVersion: v2
|
|||
name: kubezero
|
||||
description: KubeZero ArgoCD Application - Root App of Apps chart of KubeZero
|
||||
type: application
|
||||
version: 0.3.0
|
||||
version: 0.3.4
|
||||
home: https://kubezero.com
|
||||
icon: https://cdn.zero-downtime.net/assets/kubezero/logo-small-64.png
|
||||
keywords:
|
||||
|
|
|
@ -2,7 +2,7 @@ kubezero
|
|||
========
|
||||
KubeZero ArgoCD Application - Root App of Apps chart of KubeZero
|
||||
|
||||
Current chart version is `0.3.0`
|
||||
Current chart version is `0.3.1`
|
||||
|
||||
Source code can be found [here](https://kubezero.com)
|
||||
|
||||
|
@ -17,11 +17,12 @@ Source code can be found [here](https://kubezero.com)
|
|||
| Key | Type | Default | Description |
|
||||
|-----|------|---------|-------------|
|
||||
| aws-ebs-csi-driver.enabled | bool | `false` | |
|
||||
| calico.enabled | bool | `false` | |
|
||||
| cert-manager.enabled | bool | `false` | |
|
||||
| calico.enabled | bool | `true` | |
|
||||
| cert-manager.enabled | bool | `true` | |
|
||||
| global.defaultDestination.server | string | `"https://kubernetes.default.svc"` | |
|
||||
| global.defaultSource.pathPrefix | string | `""` | |
|
||||
| global.defaultSource.repoURL | string | `"https://github.com/zero-down-time/kubezero"` | |
|
||||
| global.defaultSource.targetRevision | string | `"HEAD"` | |
|
||||
| kiam.enabled | bool | `false` | |
|
||||
| local-volume-provisioner.enabled | bool | `false` | |
|
||||
| platform | string | `"aws"` | |
|
||||
|
|
|
@ -9,7 +9,7 @@ metadata:
|
|||
{{- if not .retain }}
|
||||
finalizers:
|
||||
- resources-finalizer.argocd.argoproj.io
|
||||
{{ end }}
|
||||
{{- end }}
|
||||
spec:
|
||||
project: kubezero
|
||||
|
||||
|
@ -31,11 +31,4 @@ spec:
|
|||
destination:
|
||||
server: {{ .root.Values.global.defaultDestination.server }}
|
||||
namespace: {{ default "kube-system" .namespace }}
|
||||
|
||||
syncPolicy:
|
||||
automated:
|
||||
prune: true
|
||||
{{- if .selfheal }}
|
||||
selfHeal: true
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
|
|
|
@ -1,3 +1,6 @@
|
|||
{{- if index .Values "aws-ebs-csi-driver" "enabled" }}
|
||||
{{ template "kubezero-app.app" dict "root" . "name" "aws-ebs-csi-driver" "type" "helm" }}
|
||||
syncPolicy:
|
||||
automated:
|
||||
prune: true
|
||||
{{- end }}
|
||||
|
|
|
@ -1,3 +1,14 @@
|
|||
{{- if .Values.calico.enabled }}
|
||||
{{ template "kubezero-app.app" dict "root" . "name" "calico" "type" "kustomize" "retain" true }}
|
||||
{{ template "kubezero-app.app" dict "root" . "name" "calico" "type" .Values.calico.type "retain" true }}
|
||||
{{- if not .Values.calico.values.migration }}
|
||||
syncPolicy:
|
||||
automated:
|
||||
prune: true
|
||||
{{- end }}
|
||||
|
||||
ignoreDifferences:
|
||||
- group: apiextensions.k8s.io
|
||||
kind: CustomResourceDefinition
|
||||
jsonPointers:
|
||||
- /status
|
||||
{{- end }}
|
||||
|
|
|
@ -1,5 +1,9 @@
|
|||
{{- if index .Values "cert-manager" "enabled" }}
|
||||
{{ template "kubezero-app.app" dict "root" . "name" "cert-manager" "type" "helm" "namespace" "cert-manager" "selfheal" "true" }}
|
||||
{{ template "kubezero-app.app" dict "root" . "name" "cert-manager" "type" "helm" "namespace" "cert-manager" }}
|
||||
syncPolicy:
|
||||
automated:
|
||||
prune: true
|
||||
selfHeal: true
|
||||
---
|
||||
apiVersion: v1
|
||||
kind: Namespace
|
||||
|
|
|
@ -1,3 +1,6 @@
|
|||
{{- if index .Values "kiam" "enabled" }}
|
||||
{{ template "kubezero-app.app" dict "root" . "name" "kiam" "type" "helm" }}
|
||||
syncPolicy:
|
||||
automated:
|
||||
prune: true
|
||||
{{- end }}
|
||||
|
|
|
@ -1,3 +1,6 @@
|
|||
{{- if index .Values "local-volume-provisioner" "enabled" }}
|
||||
{{ template "kubezero-app.app" dict "root" . "name" "local-volume-provisioner" "type" "kustomize" }}
|
||||
syncPolicy:
|
||||
automated:
|
||||
prune: true
|
||||
{{- end }}
|
||||
|
|
|
@ -13,8 +13,13 @@ global:
|
|||
# defaultSource.pathPrefix -- optional path prefix within repoURL to support eg. remote subtrees
|
||||
pathPrefix: ''
|
||||
|
||||
platform: aws
|
||||
|
||||
calico:
|
||||
enabled: true
|
||||
type: kustomize
|
||||
values:
|
||||
migration: false
|
||||
|
||||
cert-manager:
|
||||
enabled: true
|
||||
|
|
|
@ -58,7 +58,8 @@ EOF
|
|||
helm template $DEPLOY_DIR -f values.yaml -f cloudbender.yaml --set istio.enabled=false --set prometheus.enabled=false > generated-values.yaml
|
||||
helm upgrade -n argocd kubezero kubezero/kubezero-argo-cd -f generated-values.yaml
|
||||
|
||||
exit 0
|
||||
echo "Install Istio / kube-prometheus manually for now, before proceeding! <Any key to continue>"
|
||||
read
|
||||
# Todo: Now we need to wait till all is synced and healthy ... argocd cli or kubectl ?
|
||||
# Wait for aws-ebs or kiam to be all ready, or all pods running ?
|
||||
|
||||
|
|
|
@ -1,10 +1,18 @@
|
|||
kubezero:
|
||||
{{- if .Values.global }}
|
||||
globals:
|
||||
global:
|
||||
{{- toYaml .Values.global | nindent 4 }}
|
||||
{{- end }}
|
||||
calico:
|
||||
enabled: {{ .Values.calico.enabled }}
|
||||
type: {{ default "kustomize" .Values.calico.type }}
|
||||
values:
|
||||
migration: {{ default false .Values.calico.migration }}
|
||||
prometheus: false
|
||||
# prometheus: {{ .Values.prometheus.enabled }}
|
||||
{{- if .Values.calico.network }}
|
||||
network: {{ .Values.calico.network }}
|
||||
{{- end }}
|
||||
cert-manager:
|
||||
enabled: {{ index .Values "cert-manager" "enabled" }}
|
||||
values:
|
||||
|
|
|
@ -6,6 +6,7 @@ HighAvailableControlplane: false
|
|||
|
||||
calico:
|
||||
enabled: true
|
||||
migration: false
|
||||
|
||||
cert-manager:
|
||||
enabled: true
|
||||
|
@ -24,3 +25,6 @@ istio:
|
|||
|
||||
prometheus:
|
||||
enabled: false
|
||||
|
||||
argo-cd:
|
||||
server: {}
|
||||
|
|
|
@ -14,6 +14,7 @@ helm repo add stable https://kubernetes-charts.storage.googleapis.com
|
|||
helm repo add argoproj https://argoproj.github.io/argo-helm
|
||||
helm repo add jetstack https://charts.jetstack.io
|
||||
helm repo add uswitch https://uswitch.github.io/kiam-helm-charts/charts/
|
||||
helm repo update
|
||||
|
||||
for dir in $(find $SRCROOT/charts -mindepth 1 -maxdepth 1 -type d);
|
||||
do
|
||||
|
@ -21,15 +22,9 @@ do
|
|||
|
||||
if [ $(helm dep list $dir 2>/dev/null| wc -l) -gt 1 ]
|
||||
then
|
||||
# Bug with Helm subcharts with hyphen on them
|
||||
# https://github.com/argoproj/argo-helm/pull/270#issuecomment-608695684
|
||||
if [ "$name" == "argo-cd" ]
|
||||
then
|
||||
echo "Restore ArgoCD RedisHA subchart"
|
||||
git checkout $dir
|
||||
fi
|
||||
echo "Processing chart dependencies"
|
||||
helm --debug dep build $dir
|
||||
rm -rf $dir/tmpcharts
|
||||
helm dependency update --skip-refresh $dir
|
||||
fi
|
||||
|
||||
echo "Processing $dir"
|
||||
|
|