# Quickstart
---
# CloudBender
## Prepare Config
- Check `config/kube/kube-control-plane.yaml`
- Check `config/kube/kube-workers.yaml`
## Deploy Control Plane
- `cloudbender sync kube-control-plane`
## Get kubectl config
- Get `admin.conf` from S3 and store it in your local `~/.kube` folder
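The exact bucket and key depend on your CloudBender configuration; assuming the artifacts land under a bucket like `<CFN_BUCKET>`, fetching the kubeconfig could look roughly like:

```shell
# Hypothetical bucket and key -- check your CloudBender stack outputs for the real location
aws s3 cp s3://<CFN_BUCKET>/kube-control-plane/admin.conf ~/.kube/config

# Verify access to the cluster
kubectl get nodes
```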
## Verify controller nodes
- Verify all controller nodes have the expected version and are *Ready*, e.g. via `kubectl get nodes`
## Deploy Worker group
- `cloudbender sync kube-workers`
## Verify all nodes
- Verify all nodes, incl. workers, have the expected version and are *Ready*, e.g. via `kubectl get nodes`
---
# KubeZero
## Prepare Config
- Check `values.yaml`
The easiest way to get the ARNs for the various IAM roles is the CloudBender `outputs` command:
`cloudbender outputs config/kube-control-plane.yaml`
## Deploy KubeZero Helm chart
`./deploy.sh`
## Verify ArgoCD
At this stage there is no support for any kind of Ingress yet. To reach the ArgoCD API, port-forward from localhost via:
`kubectl port-forward svc/kubezero-argocd-server -n argocd 8080:443`
Next, download the `argocd` CLI; installation details for the various operating systems can be found at https://argoproj.github.io/argo-cd/cli_installation/
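On Linux, a minimal sketch of installing the CLI from the GitHub releases page (the version below is just an example; pick one matching your ArgoCD server):

```shell
# Example version -- match this to your deployed ArgoCD server version
VERSION=v1.5.8
curl -sSL -o argocd \
  https://github.com/argoproj/argo-cd/releases/download/${VERSION}/argocd-linux-amd64
chmod +x argocd
sudo mv argocd /usr/local/bin/
```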
Finally, log in to ArgoCD via `argocd login localhost:8080`, using the *admin* user and the password set in `values.yaml` earlier.
List all Argo applications via `argocd app list`.
Currently it is very likely that you need to manually trigger sync runs for `cert-manager` as well as `kiam`,
e.g. `argocd app sync cert-manager`
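A sketch of syncing both apps and then waiting for them to become healthy (the timeout is an arbitrary example value):

```shell
# Manually trigger the sync runs that currently need a nudge
argocd app sync cert-manager
argocd app sync kiam

# Block until both apps report Healthy (timeout in seconds is an example)
argocd app wait cert-manager kiam --health --timeout 300
```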
# Only proceed further if all Argo applications show *Healthy*!
## WIP not yet integrated into KubeZero
### EFS CSI
To deploy the EFS CSI driver, the backing EFS filesystem needs to be in place ahead of time. This is easily done by enabling the EFS functionality in the worker CloudBender stack.
- Retrieve the EFS filesystem ID via `cloudbender outputs config/kube/kube-workers.yaml` and look for *EfsFileSystemId*
- Update `values.yaml` in the `aws-efs-csi` artifact folder as well as `efs_pv.yaml`
- Execute `deploy.sh`
### Istio
Istio is currently pinned to version 1.4.X, as this is the last version supporting installation via Helm charts.
Until Istio is integrated into KubeZero and upgraded to 1.6, we have to install it manually.
- Adjust `values.yaml`
- Update the domain in `ingress-certificate.yaml`
- Run `update.sh`
- Run `deploy.sh`
### Logging
To deploy fluentbit, the only required adjustment is `fluentd_host=<LOG_HOST>` in `kustomization.yaml`.
- Deploy the namespace for logging via `deploy.sh`
- Deploy fluentbit via `kubectl apply -k fluentbit`
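The relevant part of `fluentbit/kustomization.yaml` might look roughly like this (a sketch only; the actual resource and ConfigMap names depend on the artifact folder):

```yaml
# Sketch -- resource and generator names here are assumptions
resources:
  - fluentbit.yaml
configMapGenerator:
  - name: fluentbit-config
    behavior: merge
    literals:
      - fluentd_host=<LOG_HOST>   # set to your log host
```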
### Prometheus / Grafana
The only adjustment required is the ingress routing config in `istio-service.yaml`. Adjust as needed before executing:
`deploy.sh`
# Demo / own apps
- Add your own applications to ArgoCD via the CLI
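A sketch of registering an application via the CLI (the repo URL, path, and namespace are all placeholders for your own app):

```shell
# All values below are placeholders -- substitute your own application details
argocd app create my-app \
  --repo https://github.com/<YOUR_ORG>/<YOUR_REPO>.git \
  --path deploy \
  --dest-server https://kubernetes.default.svc \
  --dest-namespace my-app
argocd app sync my-app
```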