More documentation updates

This commit is contained in:
Stefan Reimer 2021-01-26 14:04:47 +00:00
parent 19d10828f6
commit 17ec3b7acc
10 changed files with 119 additions and 153 deletions

CHANGELOG.md Normal file

@ -0,0 +1,46 @@
# Changelog
## KubeZero - 2.18 ( Argoless )
### High level / Admin changes
- ArgoCD is now optional and NOT required nor used during initial cluster bootstrap
- the bootstrap process now uses the same config and templates as the optional ArgoCD applications later on
- the bootstrap can now be restarted at any time and is considerably faster
- the top level KubeZero config for the ArgoCD app-of-apps is now also maintained via the gitops workflow. Changes can be applied by a simple git push rather than manual scripts
### Calico
- version bump
### Cert-manager
- local issuers are now cluster issuers to allow them to be used across namespaces
- all cert-manager resources moved into the cert-manager namespace
- version bump to 1.10
### Kiam
- set priority class to cluster essential
- certificates are now issued by the cluster issuer
### EBS / EFS
- version bump
### Istio
- istio operator removed, deployment migrated to helm, various cleanups
- version bump to 1.8
- all ingress resources are now in the dedicated new namespace istio-ingress ( deployed via separate kubezero chart istio-ingress)
- set priority class of ingress components to cluster essential
### Logging
- ES/Kibana version bump to 7.10
- ECK operator is now installed on demand in logging ns
- Custom event fields are configurable via the new fluent-bit chart,
e.g. a clustername field can be added to each event, allowing easy filtering in case multiple clusters stream events into a single central ES cluster (see the sketch below)
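A purely hypothetical values snippet for such a custom field; the key names are assumptions, not the chart's actual schema, so check the kubezero fluent-bit chart's values.yaml:

```yaml
# Hypothetical example: key names are assumptions, see the chart's values.yaml for the real schema
customFields:
  clustername: prod-eu-central-1   # added to every event for per-cluster filtering in ES
```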
### ArgoCD
- version bump, new app-of-apps architecture
### Metrics
- version bump
- all servicemonitor resources are now in the same namespaces as the respective apps to avoid deployments across multiple namespaces
### upstream Kubernetes 1.18
https://sysdig.com/blog/whats-new-kubernetes-1-18/


@ -1,15 +0,0 @@
# CFN / Platform
- Kube to 1.17
- Kube-proxy uses ipvs
- metrics support for kube-proxy
- no reliance on custom resource for S3 buckets anymore
# Kubezero
- fully automated one command bootstrap incl. all kubezero components
- migrated from kube-prometheus to prometheus-operator helm charts for metrics
- latest Grafana incl. persistence
- kube-prometheus adapter improvements / customizations
- integrated EFS CSI driver into Kubezero
- prometheus itself can be exposed via istio ingress on demand to ease development of custom metrics
- backup script to export all cert-manager items between clusters


@ -9,7 +9,6 @@
## Deploy Cluster
- cloudbender sync config/kube --multi
The latest versions now support waiting for the control plane to bootstrap, allowing deployments in one step!
## Get kubectl config
- get admin.conf from S3 and store in your local `~/.kube` folder
@ -22,36 +21,21 @@
---
# KubeZero
All configs and scripts are normally under:
`kubezero/clusters/<ENV>/<REGION>`
## Prepare Config
Check values.yaml for your cluster.
## Get CloudBender kubezero config
CloudBender creates a kubezero config file, which includes all outputs from the CloudFormation stacks, in `outputs/kube/kubezero.yaml`.
Place kubezero.yaml *next* to the values.yaml.
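For example, a minimal copy step might look like the following; the paths are placeholders and assume the CloudBender output directory and a cluster folder `clusters/prod/eu-central-1`:

```bash
# Example only: adjust ENV/REGION to your cluster layout
cp outputs/kube/kubezero.yaml clusters/prod/eu-central-1/kubezero.yaml
```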
## Bootstrap
The first step will install only the CRDs of enabled components, to prevent any dependency issues during the actual install.
`./bootstrap.sh crds all clusters/<ENV>/<REGION>`
The deploy script will handle the initial bootstrap process as well as the roll out of advanced components like Prometheus, Istio and ElasticSearch/Kibana in various phases.
It will take about 10 to 15 minutes for ArgoCD to roll out all the services...
# Own apps
- Add your own application to ArgoCD via the cli
# Troubleshooting
## Verify ArgoCD
To reach the Argo API, port-forward from localhost via:
`kubectl port-forward svc/kubezero-argocd-server -n argocd 8080:443`
Next, download the argo-cd CLI; see https://argoproj.github.io/argo-cd/cli_installation/ for the different OS options.
Finally, log into argo-cd via `argocd login localhost:8080` using the *admin* user and the password set in values.yaml earlier.
List all Argo applications via: `argocd app list`.
The second step will install all enabled components incl. various checks along the way.
`./bootstrap.sh deploy all clusters/<ENV>/<REGION>`
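Taken together, a full bootstrap run is just these two commands in sequence; "prod" and "eu-central-1" below are example values for `<ENV>` and `<REGION>`:

```bash
# Full bootstrap sequence; adjust the cluster path to your environment
./bootstrap.sh crds all clusters/prod/eu-central-1
./bootstrap.sh deploy all clusters/prod/eu-central-1
```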
## Success!
Access your brand new container platform via kubectl / k9s / lens or the tool of your choosing.


@ -1,4 +1,4 @@
# Upgrade to KubeZero V2.18.0 (Argoless)
## (optional) Upgrade control plane nodes / worker nodes
- Set kube version in the controller config to e.g. `1.18`
@ -53,56 +53,4 @@ Ingress service interruption ends.
## Verification / Tests
- verify argocd incl. kubezero app
- verify all argo apps status
- verify all the things
# Changelog
## Kubernetes 1.18
https://sysdig.com/blog/whats-new-kubernetes-1-18/
## High level / Admin changes
- ArgoCD is now optional and NOT required nor used during initial cluster bootstrap
- the bootstrap process now uses the same config and templates as the optional ArgoCD applications later on
- the bootstrap can now be restarted at any time and is considerably faster
- the top level KubeZero config for the ArgoCD app-of-apps is now also maintained via the gitops workflow. Changes can be applied by a simple git push rather than manual scripts
## Individual changes
### Calico
- version bump
### Cert-manager
- local issuers are now cluster issuers to allow them to be used across namespaces
- all cert-manager resources moved into the cert-manager namespace
- version bump to 1.10
### Kiam
- set priority class to cluster essential
- certificates are now issued by the cluster issuer
### EBS / EFS
- version bump
### Istio
- istio operator removed, deployment migrated to helm, various cleanups
- version bump to 1.8
- all ingress resources are now in the dedicated new namespace istio-ingress ( deployed via separate kubezero chart istio-ingress)
- set priority class of ingress components to cluster essential
### Logging
- ES/Kibana version bump to 7.10
- ECK operator is now installed on demand in logging ns
- Custom event fields configurable via new fluent-bit chart
e.g. clustername could be added to each event allowing easy filtering in case multiple clusters stream events into a single central ES cluster
### ArgoCD
- version bump, new app-of-apps architecture
### Metrics
- version bump
- all servicemonitor resources are now in the same namespaces as the respective apps to avoid namespace spanning deployments


@ -1,15 +0,0 @@
# api-server OAuth configuration
## Update Api-server config
Add the following extraArgs to the ClusterConfiguration configMap in the kube-system namespace:
`kubectl edit -n kube-system cm kubeadm-config`
```
oidc-issuer-url: "https://accounts.google.com"
oidc-client-id: "<CLIENT_ID from Google>"
oidc-username-claim: "email"
oidc-groups-claim: "groups"
```
## Resources
- https://kubernetes.io/docs/reference/access-authn-authz/authentication/


@ -1,9 +0,0 @@
# Cluster Operations
## Clean up
### Delete evicted pods across all namespaces
`kubectl get pods --all-namespaces -o json | jq '.items[] | select(.status.reason!=null) | select(.status.reason | contains("Evicted")) | "kubectl delete pods \(.metadata.name) -n \(.metadata.namespace)"' | xargs -n 1 bash -c`
### Cleanup old replicasets
`kubectl get rs --all-namespaces | awk {' if ($3 == 0 && $4 == 0) system("kubectl delete rs "$2" --namespace="$1)'}`


@ -1,21 +0,0 @@
# kubectl
kubectl is the basic command-line tool to interact with any Kubernetes cluster via the kube-api server.
## Plugins
As there are various very useful plugins for kubectl, the first thing should be to install *krew*, the plugin manager.
See: https://github.com/kubernetes-sigs/krew for details
List of awesome plugins: https://github.com/ishantanu/awesome-kubectl-plugins
### kubelogin
To log in / authenticate against an OpenID provider like Google, install the kubelogin plugin.
See: https://github.com/int128/kubelogin
Make sure to adjust your kubeconfig files accordingly!
### kauthproxy
The easiest way to access the Kubernetes dashboard, if installed in the targeted cluster, is to use the kauthproxy plugin.
See: https://github.com/int128/kauthproxy
Once installed simply execute:
`kubectl auth-proxy -n kubernetes-dashboard https://kubernetes-dashboard.svc`
and access the dashboard via the automatically opened browser window.


@ -27,3 +27,17 @@ Something along the lines of https://github.com/onfido/k8s-cleanup which doesnt
## Resources
- https://docs.google.com/spreadsheets/d/1WPHt0gsb7adVzY3eviMK2W8LejV0I5m_Zpc8tMzl_2w/edit#gid=0
- https://github.com/ishantanu/awesome-kubectl-plugins
## Update Api-server config
Add the following extraArgs to the ClusterConfiguration configMap in the kube-system namespace:
`kubectl edit -n kube-system cm kubeadm-config`
```
oidc-issuer-url: "https://accounts.google.com"
oidc-client-id: "<CLIENT_ID from Google>"
oidc-username-claim: "email"
oidc-groups-claim: "groups"
```
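For reference, kubeadm stores these settings in a ClusterConfiguration document, so after editing, the relevant part of the ConfigMap should look roughly like the sketch below (the apiVersion may differ depending on your cluster version):

```yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
  extraArgs:
    oidc-issuer-url: "https://accounts.google.com"
    oidc-client-id: "<CLIENT_ID from Google>"
    oidc-username-claim: "email"
    oidc-groups-claim: "groups"
```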
## Resources
- https://kubernetes.io/docs/reference/access-authn-authz/authentication/

docs/notes.md Normal file

@ -0,0 +1,49 @@
# Cluster Operations
## Delete evicted pods across all namespaces
`kubectl get pods --all-namespaces -o json | jq '.items[] | select(.status.reason!=null) | select(.status.reason | contains("Evicted")) | "kubectl delete pods \(.metadata.name) -n \(.metadata.namespace)"' | xargs -n 1 bash -c`
## Cleanup old replicasets
`kubectl get rs --all-namespaces | awk {' if ($3 == 0 && $4 == 0) system("kubectl delete rs "$2" --namespace="$1)'}`
## Replace worker nodes
In order to change the instance type or in general replace worker nodes, do:
* (optional) Update the launch configuration of the worker group
* Make sure there is enough capacity in the cluster to handle all pods being evicted for the node
* `kubectl drain --ignore-daemonsets node_name`
will evict all pods except DaemonSets. In case there are pods with local storage, review each affected pod. Once you are sure no important data will be lost, add `--delete-local-data` to the original command above and try again.
* Terminate instance matching *node_name*
The new instance should take over the previous node_name, assuming only one node is being replaced at a time, and automatically join and replace the previous node.
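A rough command sketch of the procedure above; the node name and the AWS CLI calls are illustrative, not part of the KubeZero tooling:

```bash
# NODE is a placeholder for the node to be replaced
NODE=ip-10-0-1-23.eu-central-1.compute.internal

# evict all pods except DaemonSets; use --delete-local-data only after reviewing affected pods
kubectl drain --ignore-daemonsets --delete-local-data "$NODE"

# terminate the matching EC2 instance, assuming the node name equals its private DNS name
aws ec2 terminate-instances --instance-ids "$(aws ec2 describe-instances \
  --filters "Name=private-dns-name,Values=$NODE" \
  --query 'Reservations[].Instances[].InstanceId' --output text)"
```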
---
# kubectl
kubectl is the basic command-line tool to interact with any Kubernetes cluster via the kube-api server.
## Plugins
As there are various very useful plugins for kubectl, the first thing should be to install *krew*, the plugin manager.
See: https://github.com/kubernetes-sigs/krew for details
List of awesome plugins: https://github.com/ishantanu/awesome-kubectl-plugins
### kubelogin
To log in / authenticate against an OpenID provider like Google, install the kubelogin plugin.
See: https://github.com/int128/kubelogin
Make sure to adjust your kubeconfig files accordingly!
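A sketch of the kubeconfig user entry kubelogin expects, based on the upstream kubelogin documentation; the user name and the OIDC values are placeholders:

```yaml
users:
- name: oidc
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      command: kubectl
      args:
        - oidc-login
        - get-token
        - --oidc-issuer-url=https://accounts.google.com
        - --oidc-client-id=<CLIENT_ID>
        - --oidc-client-secret=<CLIENT_SECRET>
```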
### kauthproxy
The easiest way to access the Kubernetes dashboard, if installed in the targeted cluster, is to use the kauthproxy plugin.
See: https://github.com/int128/kauthproxy
Once installed simply execute:
`kubectl auth-proxy -n kubernetes-dashboard https://kubernetes-dashboard.svc`
and access the dashboard via the automatically opened browser window.
# api-server OAuth configuration


@ -1,15 +0,0 @@
# Operational guide for worker nodes
## Replace worker node
In order to change the instance type or in general replace worker nodes, do:
* (optional) Update the launch configuration of the worker group
* Make sure there is enough capacity in the cluster to handle all pods being evicted for the node
* `kubectl drain --ignore-daemonsets node_name`
will evict all pods except DaemonSets. In case there are pods with local storage, review each affected pod. Once you are sure no important data will be lost, add `--delete-local-data` to the original command above and try again.
* Terminate instance matching *node_name*
The new instance should take over the previous node_name, assuming only one node is being replaced at a time, and automatically join and replace the previous node.