Add local-path-provisioner, re-org bootstrap
parent 4b48da5935
commit 59ff3cb015

charts/kubezero-local-path-provisioner/Chart.yaml (new file, 18 lines)
@@ -0,0 +1,18 @@
apiVersion: v2
name: kubezero-local-path-provisioner
description: KubeZero Umbrella Chart for local-path-provisioner
type: application
version: 0.1.0
appVersion: 0.0.18
home: https://kubezero.com
icon: https://cdn.zero-downtime.net/assets/kubezero/logo-small-64.png
keywords:
  - kubezero
  - local-path-provisioner
maintainers:
  - name: Quarky9
dependencies:
  - name: kubezero-lib
    version: ">= 0.1.3"
    repository: https://zero-down-time.github.io/kubezero/
kubeVersion: ">= 1.16.0"

charts/kubezero-local-path-provisioner/README.md (new file, 42 lines)
@@ -0,0 +1,42 @@
# kubezero-local-volume-provisioner

![Version: 0.1.0](https://img.shields.io/badge/Version-0.1.0-informational?style=flat-square) ![Type: application](https://img.shields.io/badge/Type-application-informational?style=flat-square) ![AppVersion: 2.3.4](https://img.shields.io/badge/AppVersion-2.3.4-informational?style=flat-square)

KubeZero Umbrella Chart for local-static-provisioner

Provides persistent volumes backed by local volumes, e.g. additional SSDs or spindles.

**Homepage:** <https://kubezero.com>

## Maintainers

| Name | Email | Url |
| ---- | ------ | --- |
| Quarky9 | | |

## Requirements

Kubernetes: `>= 1.16.0`

| Repository | Name | Version |
|------------|------|---------|
| https://zero-down-time.github.io/kubezero/ | kubezero-lib | >= 0.1.3 |

## Values

| Key | Type | Default | Description |
|-----|------|---------|-------------|
| local-static-provisioner.classes[0].hostDir | string | `"/mnt/disks"` | |
| local-static-provisioner.classes[0].name | string | `"local-sc-xfs"` | |
| local-static-provisioner.common.namespace | string | `"kube-system"` | |
| local-static-provisioner.daemonset.nodeSelector."node.kubernetes.io/localVolume" | string | `"present"` | |
| local-static-provisioner.prometheus.operator.enabled | bool | `false` | |

## KubeZero default configuration

- add a nodeSelector so the provisioner is only installed on nodes that actually have ephemeral local storage
- provide a matching storage class to expose mounted disks under `/mnt/disks` (see the sketch below)

## Resources

- https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner

charts/kubezero-local-path-provisioner/README.md.gotmpl (new file, 27 lines)
@@ -0,0 +1,27 @@
{{ template "chart.header" . }}
{{ template "chart.deprecationWarning" . }}

{{ template "chart.versionBadge" . }}{{ template "chart.typeBadge" . }}{{ template "chart.appVersionBadge" . }}

{{ template "chart.description" . }}

Provides persistent volumes backed by local volumes, e.g. additional SSDs or spindles.

{{ template "chart.homepageLine" . }}

{{ template "chart.maintainersSection" . }}

{{ template "chart.sourcesSection" . }}

{{ template "chart.requirementsSection" . }}

{{ template "chart.valuesSection" . }}

## KubeZero default configuration

- add a nodeSelector so the provisioner is only installed on nodes that actually have ephemeral local storage
- provide a matching storage class to expose mounted disks under `/mnt/disks`

## Resources

- https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner

charts/kubezero-local-path-provisioner/charts/local-path-provisioner/Chart.yaml (new file, 12 lines)
@@ -0,0 +1,12 @@
apiVersion: v1
description: Use HostPath for persistent local storage with Kubernetes
name: local-path-provisioner
version: 0.0.18
appVersion: "v0.0.18"
keywords:
  - storage
  - hostpath
kubeVersion: ">=1.12.0-r0"
home: https://github.com/rancher/local-path-provisioner
sources:
  - https://github.com/rancher/local-path-provisioner.git

charts/kubezero-local-path-provisioner/charts/local-path-provisioner/README.md (new file, 116 lines)
@@ -0,0 +1,116 @@
# Local Path Provisioner

[Local Path Provisioner](https://github.com/rancher/local-path-provisioner) provides a way for Kubernetes users to
utilize the local storage on each node. Based on the user configuration, the Local Path Provisioner will create
`hostPath` based persistent volumes on the node automatically. It utilizes the features introduced by the Kubernetes
[Local Persistent Volume feature](https://kubernetes.io/blog/2018/04/13/local-persistent-volumes-beta/), but makes it a
simpler solution than the built-in `local` volume feature in Kubernetes.

## TL;DR

```console
$ git clone https://github.com/rancher/local-path-provisioner.git
$ cd local-path-provisioner
$ helm install --name local-path-storage --namespace local-path-storage ./deploy/chart/
```

## Introduction

This chart bootstraps a [Local Path Provisioner](https://github.com/rancher/local-path-provisioner) deployment on a
[Kubernetes](http://kubernetes.io) cluster using the [Helm](https://helm.sh) package manager.

## Prerequisites

- Kubernetes 1.12+ with Beta APIs enabled

## Installing the Chart

To install the chart with the release name `local-path-storage`:

```console
$ git clone https://github.com/rancher/local-path-provisioner.git
$ cd local-path-provisioner
$ helm install ./deploy/chart/ --name local-path-storage --namespace local-path-storage
```

The command deploys the Local Path Provisioner on the Kubernetes cluster with the default configuration. The
[configuration](#configuration) section lists the parameters that can be configured during installation.

> **Tip**: List all releases using `helm list`

## Uninstalling the Chart

To uninstall/delete the `local-path-storage` deployment:

```console
$ helm delete --purge local-path-storage
```

The command removes all the Kubernetes components associated with the chart and deletes the release.

## Configuration

The following table lists the configurable parameters of the Local Path Provisioner chart and their
default values.

| Parameter | Description | Default |
| --- | --- | --- |
| `image.repository` | Local Path Provisioner image name | `rancher/local-path-provisioner` |
| `image.tag` | Local Path Provisioner image tag | `v0.0.18` |
| `image.pullPolicy` | Image pull policy | `IfNotPresent` |
| `storageClass.create` | If true, create a `StorageClass` | `true` |
| `storageClass.provisionerName` | The provisioner name for the storage class | `nil` |
| `storageClass.defaultClass` | If true, set the created `StorageClass` as the cluster's default `StorageClass` | `false` |
| `storageClass.name` | The name to assign the created StorageClass | `local-path` |
| `storageClass.reclaimPolicy` | ReclaimPolicy field of the class | `Delete` |
| `nodePathMap` | Configuration of where to store the data on each node | `[{node: DEFAULT_PATH_FOR_NON_LISTED_NODES, paths: [/opt/local-path-provisioner]}]` |
| `resources` | Local Path Provisioner resource requests & limits | `{}` |
| `rbac.create` | If true, create & use RBAC resources | `true` |
| `serviceAccount.create` | If true, create the Local Path Provisioner service account | `true` |
| `serviceAccount.name` | Name of the Local Path Provisioner service account to use or create | `nil` |
| `nodeSelector` | Node labels for Local Path Provisioner pod assignment | `{}` |
| `tolerations` | Node taints to tolerate | `[]` |
| `affinity` | Pod affinity | `{}` |
| `configmap.setup` | Script to execute setup operations on each node | #!/bin/sh<br>while getopts "m:s:p:" opt<br>do<br> case $opt in <br>  p)<br>  absolutePath=$OPTARG<br>  ;;<br>  s)<br>  sizeInBytes=$OPTARG<br>  ;;<br>  m)<br>  volMode=$OPTARG<br>  ;;<br> esac<br>done<br>mkdir -m 0777 -p ${absolutePath} |
| `configmap.teardown` | Script to execute teardown operations on each node | #!/bin/sh<br>while getopts "m:s:p:" opt<br>do<br> case $opt in <br>  p)<br>  absolutePath=$OPTARG<br>  ;;<br>  s)<br>  sizeInBytes=$OPTARG<br>  ;;<br>  m)<br>  volMode=$OPTARG<br>  ;;<br> esac<br>done<br>rm -rf ${absolutePath} |
| `configmap.name` | ConfigMap name | `local-path-config` |
| `configmap.helperPod` | Helper pod yaml file | apiVersion: v1<br>kind: Pod<br>metadata:<br> name: helper-pod<br>spec:<br> containers:<br> - name: helper-pod<br>  image: busybox |

Specify each parameter using the `--set key=value[,key=value]` argument to `helm install`. For example,

```console
$ helm install ./deploy/chart/ --name local-path-storage --namespace local-path-storage --set storageClass.provisionerName=rancher.io/local-path
```

Alternatively, a YAML file that specifies the values for the above parameters can be provided while installing the
chart. For example,

```console
$ helm install --name local-path-storage --namespace local-path-storage ./deploy/chart/ -f values.yaml
```
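
A minimal sketch of such a values file, using only parameters from the table above (the chosen values are illustrative, not the chart defaults):

```yaml
# values.yaml -- overrides passed to helm install via -f
storageClass:
  defaultClass: true     # make the created StorageClass the cluster default
  reclaimPolicy: Retain  # keep the data when the PVC is deleted
nodePathMap:
  - node: DEFAULT_PATH_FOR_NON_LISTED_NODES
    paths:
      - /opt/local-path-provisioner
```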

> **Tip**: You can use the default [values.yaml](values.yaml)

## RBAC

By default the chart will install the recommended RBAC roles and rolebindings.

You need to have the flag `--authorization-mode=RBAC` on the api server. See the following document for how to enable
[RBAC](https://kubernetes.io/docs/admin/authorization/rbac/).

To determine if your cluster supports RBAC, run the following command:

```console
$ kubectl api-versions | grep rbac
```

If the output contains "beta", you may install the chart with RBAC enabled (see below).

### Enable RBAC role/rolebinding creation

To enable the creation of RBAC resources (on clusters with RBAC), do the following:

```console
$ helm install ./deploy/chart/ --name local-path-storage --namespace local-path-storage --set rbac.create=true
```

charts/kubezero-local-path-provisioner/charts/local-path-provisioner/templates/NOTES.txt (new file, 13 lines)
@@ -0,0 +1,13 @@
You can create a hostpath-backed persistent volume with a persistent volume claim like this:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: local-path-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: {{ .Values.storageClass.name }}
  resources:
    requests:
      storage: 2Gi
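
Since the chart's StorageClass (see the storageclass.yaml template below) sets `volumeBindingMode: WaitForFirstConsumer`, the claim stays `Pending` until a pod consumes it. A minimal consumer sketch (pod and volume names are hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: volume-test                # hypothetical name
spec:
  containers:
    - name: app
      image: busybox
      command: ["sh", "-c", "sleep 3600"]
      volumeMounts:
        - name: data
          mountPath: /data         # backed by a hostPath under the configured node path
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: local-path-pvc  # the claim from the note above
```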

charts/kubezero-local-path-provisioner/charts/local-path-provisioner/templates/_helpers.tpl (new file, 71 lines)
@@ -0,0 +1,71 @@
{{/* vim: set filetype=mustache: */}}
{{/*
Expand the name of the chart.
*/}}
{{- define "local-path-provisioner.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}

{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "local-path-provisioner.fullname" -}}
{{- if .Values.fullnameOverride -}}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- if contains $name .Release.Name -}}
{{- .Release.Name | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- end -}}
{{- end -}}

{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "local-path-provisioner.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}}
{{- end -}}

{{/*
Common labels
*/}}
{{- define "local-path-provisioner.labels" -}}
app.kubernetes.io/name: {{ include "local-path-provisioner.name" . }}
helm.sh/chart: {{ include "local-path-provisioner.chart" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- if .Chart.AppVersion }}
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
{{- end }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- end -}}

{{/*
Create the name of the service account to use.
*/}}
{{- define "local-path-provisioner.serviceAccountName" -}}
{{- if .Values.serviceAccount.create -}}
{{ default (include "local-path-provisioner.fullname" .) .Values.serviceAccount.name }}
{{- else -}}
{{ default "default" .Values.serviceAccount.name }}
{{- end -}}
{{- end -}}

{{/*
Create the name of the provisioner to use.
*/}}
{{- define "local-path-provisioner.provisionerName" -}}
{{- if .Values.storageClass.provisionerName -}}
{{- printf .Values.storageClass.provisionerName -}}
{{- else -}}
cluster.local/{{ template "local-path-provisioner.fullname" . -}}
{{- end -}}
{{- end -}}

{{- define "local-path-provisioner.secret" }}
{{- printf "{\"auths\": {\"%s\": {\"auth\": \"%s\"}}}" .Values.privateRegistry.registryUrl (printf "%s:%s" .Values.privateRegistry.registryUser .Values.privateRegistry.registryPasswd | b64enc) | b64enc }}
{{- end }}
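
The `local-path-provisioner.secret` helper above base64-encodes a `.dockerconfigjson` payload from the `privateRegistry` values. A sketch of the values that would feed it (registry host and credentials are placeholders):

```yaml
defaultSettings:
  registrySecret: regcred            # placeholder name; setting it enables the Secret template
privateRegistry:
  registryUrl: registry.example.com  # placeholder registry
  registryUser: admin                # placeholder credentials
  registryPasswd: changeme
```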

charts/kubezero-local-path-provisioner/charts/local-path-provisioner/templates/clusterrole.yaml (new file, 21 lines)
@@ -0,0 +1,21 @@
{{- if .Values.rbac.create -}}
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: {{ include "local-path-provisioner.fullname" . }}
  labels:
{{ include "local-path-provisioner.labels" . | indent 4 }}
rules:
  - apiGroups: [""]
    resources: ["nodes", "persistentvolumeclaims", "configmaps"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["endpoints", "persistentvolumes", "pods"]
    verbs: ["*"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "patch"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
{{- end -}}

charts/kubezero-local-path-provisioner/charts/local-path-provisioner/templates/clusterrolebinding.yaml (new file, 16 lines)
@@ -0,0 +1,16 @@
{{- if .Values.rbac.create -}}
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: {{ include "local-path-provisioner.fullname" . }}
  labels:
{{ include "local-path-provisioner.labels" . | indent 4 }}
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: {{ template "local-path-provisioner.fullname" . }}
subjects:
  - kind: ServiceAccount
    name: {{ template "local-path-provisioner.serviceAccountName" . }}
    namespace: {{ .Release.Namespace }}
{{- end -}}

charts/kubezero-local-path-provisioner/charts/local-path-provisioner/templates/configmap.yaml (new file, 18 lines)
@@ -0,0 +1,18 @@
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Values.configmap.name }}
  labels:
{{ include "local-path-provisioner.labels" . | indent 4 }}
data:
  config.json: |-
    {
        "nodePathMap": {{ .Values.nodePathMap | toPrettyJson | nindent 8 }}
    }
  setup: |-
    {{ .Values.configmap.setup | nindent 4 }}
  teardown: |-
    {{ .Values.configmap.teardown | nindent 4 }}
  helperPod.yaml: |-
    {{ .Values.configmap.helperPod | nindent 4 }}
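
With the chart's default `nodePathMap`, the template above should render roughly the following ConfigMap (a sketch; the `setup`, `teardown` and `helperPod.yaml` keys are omitted, and the exact JSON indentation comes from `toPrettyJson`/`nindent`):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: local-path-config
data:
  config.json: |-
    {
        "nodePathMap": [
            {
                "node": "DEFAULT_PATH_FOR_NON_LISTED_NODES",
                "paths": [
                    "/opt/local-path-provisioner"
                ]
            }
        ]
    }
```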

charts/kubezero-local-path-provisioner/charts/local-path-provisioner/templates/deployment.yaml (new file, 73 lines)
@@ -0,0 +1,73 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "local-path-provisioner.fullname" . }}
  labels:
{{ include "local-path-provisioner.labels" . | indent 4 }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app.kubernetes.io/name: {{ include "local-path-provisioner.name" . }}
      app.kubernetes.io/instance: {{ .Release.Name }}
  template:
    metadata:
      labels:
        app.kubernetes.io/name: {{ include "local-path-provisioner.name" . }}
        app.kubernetes.io/instance: {{ .Release.Name }}
    spec:
      {{- with .Values.imagePullSecrets }}
      imagePullSecrets:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      serviceAccountName: {{ template "local-path-provisioner.serviceAccountName" . }}
      containers:
        - name: {{ .Chart.Name }}
          {{- if .Values.privateRegistry.registryUrl }}
          image: "{{ .Values.privateRegistry.registryUrl }}/{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          {{- else }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          {{- end }}
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          command:
            - local-path-provisioner
            - --debug
            - start
            - --config
            - /etc/config/config.json
            - --service-account-name
            - {{ template "local-path-provisioner.serviceAccountName" . }}
            - --provisioner-name
            - {{ template "local-path-provisioner.provisionerName" . }}
            - --helper-image
            {{- if .Values.privateRegistry.registryUrl }}
            - "{{ .Values.privateRegistry.registryUrl }}/{{ .Values.helperImage.repository }}:{{ .Values.helperImage.tag }}"
            {{- else }}
            - "{{ .Values.helperImage.repository }}:{{ .Values.helperImage.tag }}"
            {{- end }}
            - --configmap-name
            - {{ .Values.configmap.name }}
          volumeMounts:
            - name: config-volume
              mountPath: /etc/config/
          env:
            - name: POD_NAMESPACE
              value: {{ .Release.Namespace }}
          resources:
            {{- toYaml .Values.resources | nindent 12 }}
      volumes:
        - name: config-volume
          configMap:
            name: {{ .Values.configmap.name }}
      {{- with .Values.nodeSelector }}
      nodeSelector:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.affinity }}
      affinity:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.tolerations }}
      tolerations:
        {{- toYaml . | nindent 8 }}
      {{- end }}

charts/kubezero-local-path-provisioner/charts/local-path-provisioner/templates/registry-secret.yaml (new file, 9 lines)
@@ -0,0 +1,9 @@
{{- if .Values.defaultSettings.registrySecret }}
apiVersion: v1
kind: Secret
metadata:
  name: {{ .Values.defaultSettings.registrySecret }}
type: kubernetes.io/dockerconfigjson
data:
  .dockerconfigjson: {{ template "local-path-provisioner.secret" . }}
{{- end }}

charts/kubezero-local-path-provisioner/charts/local-path-provisioner/templates/serviceaccount.yaml (new file, 15 lines)
@@ -0,0 +1,15 @@
{{- if .Values.serviceAccount.create -}}
apiVersion: v1
kind: ServiceAccount
metadata:
  name: {{ template "local-path-provisioner.serviceAccountName" . }}
  labels:
{{ include "local-path-provisioner.labels" . | indent 4 }}
{{- with .Values.imagePullSecrets }}
imagePullSecrets:
  {{- toYaml . | nindent 2 }}
{{- end }}
{{- if .Values.defaultSettings.registrySecret }}
  - name: {{ .Values.defaultSettings.registrySecret }}
{{- end }}
{{- end }}

charts/kubezero-local-path-provisioner/charts/local-path-provisioner/templates/storageclass.yaml (new file, 15 lines)
@@ -0,0 +1,15 @@
{{ if .Values.storageClass.create -}}
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: {{ .Values.storageClass.name }}
  labels:
{{ include "local-path-provisioner.labels" . | indent 4 }}
  {{- if .Values.storageClass.defaultClass }}
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
  {{- end }}
provisioner: {{ template "local-path-provisioner.provisionerName" . }}
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: {{ .Values.storageClass.reclaimPolicy }}
{{- end }}

charts/kubezero-local-path-provisioner/charts/local-path-provisioner/values.yaml (new file, 144 lines)
@@ -0,0 +1,144 @@
# Default values for local-path-provisioner.

replicaCount: 1

image:
  repository: rancher/local-path-provisioner
  tag: v0.0.18
  pullPolicy: IfNotPresent

helperImage:
  repository: busybox
  tag: latest

defaultSettings:
  registrySecret: ~

privateRegistry:
  registryUrl: ~
  registryUser: ~
  registryPasswd: ~

imagePullSecrets: []
nameOverride: ""
fullnameOverride: ""

## For creating the StorageClass automatically:
storageClass:
  create: true

  ## Set a provisioner name. If unset, a name will be generated.
  # provisionerName: rancher.io/local-path

  ## Set StorageClass as the default StorageClass
  ## Ignored if storageClass.create is false
  defaultClass: false

  ## Set a StorageClass name
  ## Ignored if storageClass.create is false
  name: local-path

  ## ReclaimPolicy field of the class, which can be either Delete or Retain
  reclaimPolicy: Delete

# nodePathMap is where the user can customize where to store the data on each node.
# 1. If a node is not listed in the nodePathMap, and Kubernetes wants to create a volume on it, the paths specified in
#    DEFAULT_PATH_FOR_NON_LISTED_NODES will be used for provisioning.
# 2. If a node is listed in the nodePathMap, the specified paths will be used for provisioning.
#    1. If a node is listed but its paths are set to [], the provisioner will refuse to provision on this node.
#    2. If more than one path is specified, the path is chosen randomly when provisioning.
#
# The configuration must obey the following rules:
# 1. A path must start with /, i.e. it must be an absolute path.
# 2. The root directory (/) is prohibited.
# 3. No duplicate paths are allowed for one node.
# 4. No duplicate nodes are allowed.
nodePathMap:
  - node: DEFAULT_PATH_FOR_NON_LISTED_NODES
    paths:
      - /opt/local-path-provisioner

resources: {}
  # We usually recommend not to specify default resources and to leave this as a conscious
  # choice for the user. This also increases chances charts run on environments with little
  # resources, such as Minikube. If you do want to specify resources, uncomment the following
  # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
  # limits:
  #   cpu: 100m
  #   memory: 128Mi
  # requests:
  #   cpu: 100m
  #   memory: 128Mi

rbac:
  # Specifies whether RBAC resources should be created
  create: true

serviceAccount:
  # Specifies whether a ServiceAccount should be created
  create: true
  # The name of the ServiceAccount to use.
  # If not set and create is true, a name is generated using the fullname template
  name:

nodeSelector: {}

tolerations: []

affinity: {}

configmap:
  # specify the config map name
  name: local-path-config
  # specify the custom scripts for setup and teardown
  setup: |-
    #!/bin/sh
    while getopts "m:s:p:" opt
    do
        case $opt in
            p)
                absolutePath=$OPTARG
                ;;
            s)
                sizeInBytes=$OPTARG
                ;;
            m)
                volMode=$OPTARG
                ;;
        esac
    done

    mkdir -m 0777 -p ${absolutePath}
  teardown: |-
    #!/bin/sh
    while getopts "m:s:p:" opt
    do
        case $opt in
            p)
                absolutePath=$OPTARG
                ;;
            s)
                sizeInBytes=$OPTARG
                ;;
            m)
                volMode=$OPTARG
                ;;
        esac
    done

    rm -rf ${absolutePath}
  # specify the custom helper pod yaml
  helperPod: |-
    apiVersion: v1
    kind: Pod
    metadata:
      name: helper-pod
    spec:
      containers:
        - name: helper-pod
          image: busybox
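
To illustrate the nodePathMap rules spelled out in the comments above, a hypothetical multi-node map (node names are made up):

```yaml
nodePathMap:
  - node: DEFAULT_PATH_FOR_NON_LISTED_NODES
    paths:
      - /opt/local-path-provisioner  # fallback for nodes not listed below
  - node: worker-1                   # hypothetical node name
    paths:
      - /mnt/ssd1
      - /mnt/ssd2                    # with several paths, one is chosen at random
  - node: worker-2
    paths: []                        # empty list: provisioning is refused on this node
```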

charts/kubezero-local-path-provisioner/update.sh (new executable file, 8 lines)
@@ -0,0 +1,8 @@
#!/bin/bash

# Vendor the subchart until it has an upstream Helm repo
rm -rf charts/local-path-provisioner && mkdir -p charts/local-path-provisioner

git clone --depth=1 https://github.com/rancher/local-path-provisioner.git
cp -r local-path-provisioner/deploy/chart/* charts/local-path-provisioner
rm -rf local-path-provisioner

charts/kubezero-local-path-provisioner/values.yaml (new file, 16 lines)
@@ -0,0 +1,16 @@
local-path-provisioner:
  storageClass:
    create: true
    defaultClass: false

  nodePathMap:
    - node: DEFAULT_PATH_FOR_NON_LISTED_NODES
      paths:
        - /opt/local-path-provisioner

  nodeSelector:
    node-role.kubernetes.io/master: ""

  tolerations:
    - key: node-role.kubernetes.io/master
      effect: NoSchedule

bootstrap script (modified)
@@ -6,18 +6,19 @@ ARTIFACTS=("$2")
CLUSTER=$3
LOCATION=${4:-""}

API_VERSIONS="-a monitoring.coreos.com/v1"

DEPLOY_DIR=$( dirname $( realpath $0 ))
which yq || { echo "yq not found!"; exit 1; }

TMPDIR=$(mktemp -d kubezero.XXX)

function join { local IFS="$1"; shift; echo "$*"; }

# First let's generate kubezero.yaml
# Add all yaml files in $CLUSTER
VALUES="$(find $CLUSTER -name '*.yaml' | tr '\n' ',')"
helm template $DEPLOY_DIR -f ${VALUES%%,} --set argo=false > $TMPDIR/kubezero.yaml

# Resolve all the enabled artifacts in order of their appearance
if [ ${ARTIFACTS[0]} == "all" ]; then
  ARTIFACTS=($(yq r -p p $TMPDIR/kubezero.yaml "*.enabled" | awk -F "." '{print $1}'))
fi

@@ -49,44 +50,76 @@ function chart_location() {
}

function _helm() {
  local action=$1
  local chart=$2
  local release=$3
  local namespace=$4
  shift 4
# make sure the namespace exists prior to calling helm, as the create-namespace option doesn't work
function create_ns() {
  local namespace=$1
  kubectl get ns $namespace || kubectl create ns $namespace
}

  helm template $(chart_location $chart) --namespace $namespace --name-template $release --skip-crds $@ > $TMPDIR/helm.yaml

  if [ $action == "apply" ]; then
    # make sure the namespace exists prior to calling helm, as the create-namespace option doesn't work
    kubectl get ns $namespace || kubectl create ns $namespace
  fi
# delete non kube-system ns
function delete_ns() {
  local namespace=$1
  [ "$namespace" != "kube-system" ] && kubectl delete ns $namespace
}

# Extract crds via helm calls and apply the delta (crds) only
function _crds() {
  helm template $(chart_location $chart) --namespace $namespace --name-template $release --skip-crds > $TMPDIR/helm-no-crds.yaml
  helm template $(chart_location $chart) --namespace $namespace --name-template $release --include-crds > $TMPDIR/helm-crds.yaml
  diff -e $TMPDIR/helm-no-crds.yaml $TMPDIR/helm-crds.yaml | head -n-1 | tail -n+2 > $TMPDIR/crds.yaml
  kubectl apply -f $TMPDIR/crds.yaml
}

# helm template | kubectl apply -f -
# confine to one namespace if possible
function apply(){
  helm template $(chart_location $chart) --namespace $namespace --name-template $release --skip-crds -f $TMPDIR/values.yaml $API_VERSIONS $@ > $TMPDIR/helm.yaml

  # If resources are outside the single $namespace, apply without restrictions
  nr_ns=$(grep -e '^ namespace:' $TMPDIR/helm.yaml | sed "s/\"//g" | sort | uniq | wc -l)
  if [ $nr_ns -gt 1 ]; then
    kubectl $action -f $TMPDIR/helm.yaml
    kubectl $action -f $TMPDIR/helm.yaml && rc=$? || rc=$?
  else
    kubectl $action --namespace $namespace -f $TMPDIR/helm.yaml
    kubectl $action --namespace $namespace -f $TMPDIR/helm.yaml && rc=$? || rc=$?
  fi
}

function deploy() {
  _helm apply $@
}
function _helm() {
  local action=$1

  local chart="kubezero-$2"
  local release=$2
  local namespace=$(get_namespace $2)

function delete() {
  _helm delete $@
  if [ $action == "crds" ]; then
    _crds
  else

    # namespace must exist prior to apply
    [ $action == "apply" ] && create_ns $namespace

    # Optional pre hook
    declare -F ${release}-pre && ${release}-pre

    apply

    # Optional post hook
    declare -F ${release}-post && ${release}-post

    # Delete dedicated namespace if not kube-system
    [ $action == "delete" ] && delete_ns $namespace
  fi
}

function is_enabled() {
  local chart=$1
  local enabled=$(yq r $TMPDIR/kubezero.yaml ${chart}.enabled)

  enabled=$(yq r $TMPDIR/kubezero.yaml ${chart}.enabled)
  if [ "$enabled" == "true" ]; then
    yq r $TMPDIR/kubezero.yaml ${chart}.values > $TMPDIR/values.yaml
    return 0
@@ -95,262 +128,84 @@
}

##########
# Calico #
##########
function calico() {
  local chart="kubezero-calico"
  local release="calico"
  local namespace="kube-system"
function has_crds() {
  local chart=$1
  local enabled=$(yq r $TMPDIR/kubezero.yaml ${chart}.crds)

  local task=$1
  [ "$enabled" == "true" ] && return 0
  return 1
}

  if [ $task == "deploy" ]; then
    deploy $chart $release $namespace -f $TMPDIR/values.yaml && rc=$? || rc=$?
    kubectl apply -f $TMPDIR/helm.yaml
  # Don't delete the only CNI
  #elif [ $task == "delete" ]; then
  #  delete $chart $release $namespace -f $TMPDIR/values.yaml
  elif [ $task == "crds" ]; then
    helm template $(chart_location $chart) --namespace $namespace --name-template $release --skip-crds > $TMPDIR/helm-no-crds.yaml
    helm template $(chart_location $chart) --namespace $namespace --name-template $release --include-crds > $TMPDIR/helm-crds.yaml
    diff -e $TMPDIR/helm-no-crds.yaml $TMPDIR/helm-crds.yaml | head -n-1 | tail -n+2 > $TMPDIR/crds.yaml
    kubectl apply -f $TMPDIR/crds.yaml
  fi

function get_namespace() {
  local namespace=$(yq r $TMPDIR/kubezero.yaml ${1}.namespace)
  [ -z "$namespace" ] && echo "kube-system" || echo $namespace
}

################
# cert-manager #
################
function cert-manager() {
  local chart="kubezero-cert-manager"
  local release="cert-manager"
  local namespace="cert-manager"
function cert-manager-post() {
  # If any error occurs, wait for the initial webhook deployment and try again
  # see: https://cert-manager.io/docs/concepts/webhook/#webhook-connection-problems-shortly-after-cert-manager-installation

  local task=$1

  if [ $task == "deploy" ]; then
    deploy $chart $release $namespace -f $TMPDIR/values.yaml && rc=$? || rc=$?

    # If any error occurs, wait for the initial webhook deployment and try again
    # see: https://cert-manager.io/docs/concepts/webhook/#webhook-connection-problems-shortly-after-cert-manager-installation
    if [ $rc -ne 0 ]; then
      wait_for "kubectl get deployment -n $namespace cert-manager-webhook"
      kubectl rollout status deployment -n $namespace cert-manager-webhook
      wait_for 'kubectl get validatingwebhookconfigurations -o yaml | grep "caBundle: LS0"'
      deploy $chart $release $namespace -f $TMPDIR/values.yaml
    fi

    wait_for "kubectl get ClusterIssuer -n $namespace kubezero-local-ca-issuer"
    kubectl wait --timeout=180s --for=condition=Ready -n $namespace ClusterIssuer/kubezero-local-ca-issuer

  elif [ $task == "delete" ]; then
    delete $chart $release $namespace -f $TMPDIR/values.yaml
    kubectl delete ns $namespace

  elif [ $task == "crds" ]; then
    helm template $(chart_location $chart) --namespace $namespace --name-template $release --skip-crds --set cert-manager.installCRDs=false > $TMPDIR/helm-no-crds.yaml
    helm template $(chart_location $chart) --namespace $namespace --name-template $release --include-crds --set cert-manager.installCRDs=true > $TMPDIR/helm-crds.yaml
    diff -e $TMPDIR/helm-no-crds.yaml $TMPDIR/helm-crds.yaml | head -n-1 | tail -n+2 > $TMPDIR/crds.yaml
    kubectl apply -f $TMPDIR/crds.yaml
  if [ $rc -ne 0 ]; then
    wait_for "kubectl get deployment -n $namespace cert-manager-webhook"
    kubectl rollout status deployment -n $namespace cert-manager-webhook
    wait_for 'kubectl get validatingwebhookconfigurations -o yaml | grep "caBundle: LS0"'
    apply
  fi

  wait_for "kubectl get ClusterIssuer -n $namespace kubezero-local-ca-issuer"
  kubectl wait --timeout=180s --for=condition=Ready -n $namespace ClusterIssuer/kubezero-local-ca-issuer
}

########
# Kiam #
########
function kiam() {
  local chart="kubezero-kiam"
  local release="kiam"
  local namespace="kube-system"

  local task=$1

  if [ $task == "deploy" ]; then
    # Certs only first
    deploy $chart $release $namespace --set kiam.enabled=false
    kubectl wait --timeout=120s --for=condition=Ready -n kube-system Certificate/kiam-server

    # Make sure kube-system and cert-manager are allowed to use kiam
    kubectl annotate --overwrite namespace kube-system 'iam.amazonaws.com/permitted=.*'
    kubectl annotate --overwrite namespace cert-manager 'iam.amazonaws.com/permitted=.*CertManagerRole.*'

    # Get kiam rolled out and make sure it is working
    deploy $chart $release $namespace -f $TMPDIR/values.yaml
    wait_for 'kubectl get daemonset -n kube-system kiam-agent'
    kubectl rollout status daemonset -n kube-system kiam-agent

  elif [ $task == "delete" ]; then
    delete $chart $release $namespace -f $TMPDIR/values.yaml
  fi
function kiam-pre() {
  # Certs only first
  apply --set kiam.enabled=false
  kubectl wait --timeout=120s --for=condition=Ready -n kube-system Certificate/kiam-server
}

function kiam-post() {
  wait_for 'kubectl get daemonset -n kube-system kiam-agent'
  kubectl rollout status daemonset -n kube-system kiam-agent

#######
# EBS #
#######
function aws-ebs-csi-driver() {
  local chart="kubezero-aws-ebs-csi-driver"
  local release="aws-ebs-csi-driver"
  local namespace="kube-system"

  local task=$1

  if [ $task == "deploy" ]; then
    deploy $chart $release $namespace -f $TMPDIR/values.yaml
  elif [ $task == "delete" ]; then
    delete $chart $release $namespace -f $TMPDIR/values.yaml
  fi
}

#########
# Istio #
#########
function istio() {
  local chart="kubezero-istio"
  local release="istio"
  local namespace="istio-system"

  local task=$1

  if [ $task == "deploy" ]; then
    deploy $chart $release $namespace -f $TMPDIR/values.yaml

  elif [ $task == "delete" ]; then
    delete $chart $release $namespace -f $TMPDIR/values.yaml
    kubectl delete ns istio-system

  elif [ $task == "crds" ]; then
    helm template $(chart_location $chart) --namespace $namespace --name-template $release --skip-crds > $TMPDIR/helm-no-crds.yaml
    helm template $(chart_location $chart) --namespace $namespace --name-template $release --include-crds > $TMPDIR/helm-crds.yaml
    diff -e $TMPDIR/helm-no-crds.yaml $TMPDIR/helm-crds.yaml | head -n-1 | tail -n+2 > $TMPDIR/crds.yaml
    kubectl apply -f $TMPDIR/crds.yaml
  fi
}

#################
# Istio Ingress #
#################
function istio-ingress() {
  local chart="kubezero-istio-ingress"
  local release="istio-ingress"
  local namespace="istio-ingress"

  local task=$1

  if [ $task == "deploy" ]; then
    deploy $chart $release $namespace -f $TMPDIR/values.yaml

  elif [ $task == "delete" ]; then
    delete $chart $release $namespace -f $TMPDIR/values.yaml
    kubectl delete ns istio-ingress
  fi
}

###########
# Metrics #
###########
function metrics() {
  local chart="kubezero-metrics"
  local release="metrics"
  local namespace="monitoring"

  local task=$1

  if [ $task == "deploy" ]; then
    deploy $chart $release $namespace -f $TMPDIR/values.yaml

  elif [ $task == "delete" ]; then
    delete $chart $release $namespace -f $TMPDIR/values.yaml
    kubectl delete ns monitoring

  elif [ $task == "crds" ]; then
    helm template $(chart_location $chart) --namespace $namespace --name-template $release --skip-crds > $TMPDIR/helm-no-crds.yaml
    helm template $(chart_location $chart) --namespace $namespace --name-template $release --include-crds > $TMPDIR/helm-crds.yaml
    diff -e $TMPDIR/helm-no-crds.yaml $TMPDIR/helm-crds.yaml | head -n-1 | tail -n+2 > $TMPDIR/crds.yaml
    kubectl apply -f $TMPDIR/crds.yaml
  fi
  # Make sure kube-system and cert-manager are allowed to use kiam
  kubectl annotate --overwrite namespace kube-system 'iam.amazonaws.com/permitted=.*'
  kubectl annotate --overwrite namespace cert-manager 'iam.amazonaws.com/permitted=.*CertManagerRole.*'
}

###########
# Logging #
###########
function logging() {
  local chart="kubezero-logging"
  local release="logging"
  local namespace="logging"

  local task=$1

  if [ $task == "deploy" ]; then
    deploy $chart $release $namespace -a "monitoring.coreos.com/v1" -f $TMPDIR/values.yaml

    kubectl annotate --overwrite namespace logging 'iam.amazonaws.com/permitted=.*ElasticSearchSnapshots.*'

  elif [ $task == "delete" ]; then
    delete $chart $release $namespace -f $TMPDIR/values.yaml
    kubectl delete ns logging

  # Doesn't work right now due to the V2 Helm implementation of the eck-operator-crd chart
  #elif [ $task == "crds" ]; then
  #  helm template $(chart_location $chart) --namespace $namespace --name-template $release --skip-crds > $TMPDIR/helm-no-crds.yaml
  #  helm template $(chart_location $chart) --namespace $namespace --name-template $release --include-crds > $TMPDIR/helm-crds.yaml
  #  diff -e $TMPDIR/helm-no-crds.yaml $TMPDIR/helm-crds.yaml | head -n-1 | tail -n+2 > $TMPDIR/crds.yaml
  #  kubectl apply -f $TMPDIR/crds.yaml
  fi
}

##########
# ArgoCD #
##########
function argo-cd() {
  local chart="kubezero-argo-cd"
  local release="argocd"
  local namespace="argocd"

  local task=$1

  if [ $task == "deploy" ]; then
    deploy $chart $release $namespace -f $TMPDIR/values.yaml

    # Install the kubezero app of apps
    # deploy kubezero kubezero $namespace -f $TMPDIR/kubezero.yaml

  elif [ $task == "delete" ]; then
    delete $chart $release $namespace -f $TMPDIR/values.yaml
    kubectl delete ns argocd

  elif [ $task == "crds" ]; then
    helm template $(chart_location $chart) --namespace $namespace --name-template $release --skip-crds > $TMPDIR/helm-no-crds.yaml
    helm template $(chart_location $chart) --namespace $namespace --name-template $release --include-crds > $TMPDIR/helm-crds.yaml
    diff -e $TMPDIR/helm-no-crds.yaml $TMPDIR/helm-crds.yaml | head -n-1 | tail -n+2 > $TMPDIR/crds.yaml
    kubectl apply -f $TMPDIR/crds.yaml
  fi
function logging-post() {
  kubectl annotate --overwrite namespace logging 'iam.amazonaws.com/permitted=.*ElasticSearchSnapshots.*'
}

## MAIN ##
if [ $1 == "deploy" ]; then
  for t in ${ARTIFACTS[@]}; do
    is_enabled $t && $t deploy
    is_enabled $t && _helm apply $t
  done

# If an artifact is enabled and has CRDs, install them
elif [ $1 == "crds" ]; then
  for t in ${ARTIFACTS[@]}; do
    is_enabled $t && $t crds
    is_enabled $t && has_crds $t && _helm crds $t
  done

# Delete in reverse order, continue even on errors
elif [ $1 == "delete" ]; then
  set +e
  for (( idx=${#ARTIFACTS[@]}-1 ; idx>=0 ; idx-- )) ; do
    is_enabled ${ARTIFACTS[idx]} && ${ARTIFACTS[idx]} delete
    is_enabled ${ARTIFACTS[idx]} && _helm delete ${ARTIFACTS[idx]}
  done
fi
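
For orientation: `is_enabled`, `has_crds` and `get_namespace` above all read per-artifact blocks from the templated $TMPDIR/kubezero.yaml, which the kubezero chart template below generates. A sketch of one such block (the values shown are illustrative):

```yaml
cert-manager:
  enabled: true             # is_enabled gates deployment on this flag
  namespace: cert-manager   # get_namespace falls back to kube-system when unset
  crds: true                # has_crds gates the crds task on this flag
  values: {}                # extracted by is_enabled into $TMPDIR/values.yaml
```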

kubezero chart -- artifacts template (modified)
@@ -1,6 +1,6 @@
{{- if not .Values.argo }}

{{- $artifacts := list "calico" "cert-manager" "kiam" "aws-ebs-csi-driver" "aws-efs-csi-driver" "local-volume-provisioner" "istio" "istio-ingress" "metrics" "logging" "argo-cd" }}
{{- $artifacts := list "calico" "cert-manager" "kiam" "aws-ebs-csi-driver" "aws-efs-csi-driver" "local-volume-provisioner" "local-path-provisioner" "istio" "istio-ingress" "metrics" "logging" "argo-cd" }}

{{- if .Values.global }}
global:

@@ -11,6 +11,8 @@ global:
{{- if index $.Values . }}
{{ . }}:
  enabled: {{ index $.Values . "enabled" }}
  namespace: {{ default "kube-system" ( index $.Values . "namespace" ) }}
  crds: {{ default false ( index $.Values . "crds" ) }}
  values:
{{- include (print . "-values") $ | nindent 4 }}
{{- end }}
|
7
charts/kubezero/templates/local-path-provisioner.yaml
Normal file
7
charts/kubezero/templates/local-path-provisioner.yaml
Normal file
@ -0,0 +1,7 @@
{{- define "local-path-provisioner-values" }}
{{- end }}

{{- define "local-path-provisioner-argo" }}
{{- end }}

{{ include "kubezero-app.app" . }}

charts/kubezero/values.yaml (modified)
@@ -9,10 +9,12 @@ global:
calico:
  enabled: false
  crds: true
  retain: true

cert-manager:
  enabled: false
  crds: true
  namespace: cert-manager

kiam:
@@ -21,6 +23,9 @@ kiam:
local-volume-provisioner:
  enabled: false

local-path-provisioner:
  enabled: false

aws-ebs-csi-driver:
  enabled: false

@@ -29,6 +34,7 @@ aws-efs-csi-driver:

istio:
  enabled: false
  crds: true
  namespace: istio-system

istio-ingress:
@@ -37,14 +43,15 @@ istio-ingress:

metrics:
  enabled: false
  crds: true
  namespace: monitoring

logging:
  enabled: false
  crds: true
  namespace: logging

argo-cd:
  enabled: false
  crds: true
  namespace: argocd
istio:
  enabled: false