diff --git a/CHANGELOG.md b/CHANGELOG.md
new file mode 100644
index 00000000..b0168fd1
--- /dev/null
+++ b/CHANGELOG.md
@@ -0,0 +1,46 @@
+# Changelog
+
+## KubeZero - 2.18 ( Argoless )
+
+### High level / Admin changes
+- ArgoCD is now optional and NOT required nor used during initial cluster bootstrap
+- the bootstrap process now uses the same config and templates as the optional ArgoCD applications later on
+- the bootstrap can now be restarted at any time and is considerably faster
+- the top level KubeZero config for the ArgoCD app-of-apps is now also maintained via the gitops workflow. Changes can be applied by a simple git push rather than manual scripts
+
+### Calico
+- version bump
+
+### Cert-manager
+- local issuers are now cluster issuers to allow them to be used across namespaces
+- all cert-manager resources moved into the cert-manager namespace
+- version bump to 1.10
+
+### Kiam
+- set priority class to cluster essential
+- certificates are now issued by the cluster issuer
+
+### EBS / EFS
+- version bump
+
+### Istio
+- istio operator removed, deployment migrated to helm, various cleanups
+- version bump to 1.8
+- all ingress resources are now in the dedicated new namespace istio-ingress (deployed via the separate kubezero chart istio-ingress)
+- set priority class of ingress components to cluster essential
+
+### Logging
+- ES/Kibana version bump to 7.10
+- ECK operator is now installed on demand in the logging namespace
+- Custom event fields configurable via new fluent-bit chart,
+  e.g. clustername could be added to each event, allowing easy filtering in case multiple clusters stream events into a single central ES cluster
+
+### ArgoCD
+- version bump, new app-of-apps architecture
+
+### Metrics
+- version bump
+- all servicemonitor resources are now in the same namespaces as the respective apps to avoid deployments across multiple namespaces
+
+### upstream Kubernetes 1.18
+https://sysdig.com/blog/whats-new-kubernetes-1-18/
diff --git a/CHANGES.md b/CHANGES.md
deleted file mode 100644
index 02ab0729..00000000
--- a/CHANGES.md
+++ /dev/null
@@ -1,15 +0,0 @@
-# CFN / Platform
-- Kube to 1.17
-- Kube-proxy uses ipvs
-- metrics support for kube-proxy
-- no reliance on custom resource for S3 buckets anymore
-
-
-# Kubezero
-- fully automated one command bootstrap incl. all kubezero components
-- migrated from kube-prometheuss to prometheus-operator helm charts for metrics
-- latest Grafana incl. peristence
-- kube-prometheus adapter improvements / customizations
-- integrated EFS CSI driver into Kubezero
-- prometheus itself can be exposed via istio ingress on demand to ease development of custom metrics
-- backup script to export all cert-manager items between clusters
diff --git a/docs/Quickstart.md b/docs/Quickstart.md
index 6623ce99..485dd688 100644
--- a/docs/Quickstart.md
+++ b/docs/Quickstart.md
@@ -9,7 +9,6 @@
 ## Deploy Cluster
 - cloudbender sync config/kube --multi
- The latest versions now support waiting for the control plane to bootstrap allowing deployments in one step !
 
 ## Get kubectl config
 - get admin.conf from S3 and store in your local `~/.kube` folder
@@ -22,36 +21,21 @@
 ---
 # KubeZero
 All configs and scripts are normally under:
-`artifacts///kubezero`
+`kubezero/clusters//`
 
 ## Prepare Config
 check values.yaml for your cluster
 
 ## Get CloudBender kubezero config
 Cloudbender creates a kubezero config file, which incl. all outputs from the Cloudformation stacks in `outputs/kube/kubezero.yaml`.
-- copy kubezero.yaml *next* to the values.yaml named as `cloudbender.yaml`.
+Place kubezero.yaml *next* to the values.yaml.
-## Deploy KubeZero Helm chart
-`./deploy.sh`
-
-The deploy script will handle the initial bootstrap process as well as the roll out of advanced components like Prometheus, Istio and ElasticSearch/Kibana in various phases.
-
-It will take about 10 to 15 minutes for ArgoCD to roll out all the services...
-
-
-# Own apps
-- Add your own application to ArgoCD via the cli
-
-# Troubleshooting
-
-## Verify ArgoCD
-To reach the Argo API port forward from localhost via:
-`kubectl port-forward svc/kubezero-argocd-server -n argocd 8080:443`
-
-Next download the argo-cd cli, details for different OS see https://argoproj.github.io/argo-cd/cli_installation/
-
-Finally login into argo-cd via `argocd login localhost:8080` using the *admin* user and the password set in values.yaml earlier.
-
-List all Argo applications via: `argocd app list`.
+## Bootstrap
+The first step installs only the CRDs of the enabled components, to prevent dependency issues during the actual install.
+`./bootstrap.sh crds all clusters//`
+The second step installs all enabled components, incl. various checks along the way.
+`./bootstrap.sh deploy all clusters//`
+## Success !
+Access your brand new container platform via kubectl / k9s / lens or the tool of your choosing.
diff --git a/docs/Upgrade.md b/docs/Upgrade-2.18.md
similarity index 54%
rename from docs/Upgrade.md
rename to docs/Upgrade-2.18.md
index bc7442dd..9032d9f9 100644
--- a/docs/Upgrade.md
+++ b/docs/Upgrade-2.18.md
@@ -1,4 +1,4 @@
-# Upgrade to KubeZero V2(Argoless)
+# Upgrade to KubeZero V2.18.0 (Argoless)
 
 ## (optional) Upgrade control plane nodes / worker nodes
 - Set kube version in the controller config to eg. `1.18`
@@ -53,56 +53,4 @@ Ingress service interruption ends.
 ## Verification / Tests
 - verify argocd incl. kubezero app
 - verify all argo apps status
-
 - verify all the things
-
-
-
-# Changelog
-
-## Kubernetes 1.18
-https://sysdig.com/blog/whats-new-kubernetes-1-18/
-## High level / Admin changes
-- ArgoCD is now optional and NOT required nor used during initial cluster bootstrap
-- the bootstrap process now uses the same config and templates as the optional ArgoCD applications later on
-- the bootstrap is can now be restarted at any time and considerably faster
-- the top level KubeZero config for the ArgoCD app-of-apps is now also maintained via the gitops workflow. Changes can be applied by a simple git push rather than manual scripts
-
-## Individual changes
-
-### Calico
-- version bump
-
-### Cert-manager
-- local issuers are now cluster issuer to allow them being used across namespaces
-- all cert-manager resources moved into the cert-manager namespace
-- version bump to 1.10
-
-### Kiam
-- set priorty class to cluster essential
-- certificates are now issued by the cluster issuer
-
-### EBS / EFS
-- version bump
-
-### Istio
-- istio operator removed, deployment migrated to helm, various cleanups
-- version bump to 1.8
-- all ingress resources are now in the dedicated new namespace istio-ingress ( deployed via separate kubezero chart istio-ingress)
-- set priorty class of ingress components to cluster essential
-
-### Logging
-- ES/Kibana version bump to 7.10
-- ECK operator is now installed on demand in logging ns
-- Custom event fields configurable via new fluent-bit chart
- e.g. clustername could be added to each event allowing easy filtering in case multiple clusters stream events into a single central ES cluster
-
-### ArgoCD
-- version bump, new app of app architecure
-
-### Metrics
-- version bump
-- all servicemonitor resources are now in the same namespaces as the respective apps to avoid namespace spanning deployments
-
-
diff --git a/docs/api-server.md b/docs/api-server.md
deleted file mode 100644
index ca66fa6e..00000000
--- a/docs/api-server.md
+++ /dev/null
@@ -1,15 +0,0 @@
-# api-server OAuth configuration
-
-## Update Api-server config
-Add the following extraArgs to the ClusterConfiguration configMap in the kube-system namespace:
-`kubectl edit -n kube-system cm kubeadm-config`
-
-```
-  oidc-issuer-url: "https://accounts.google.com"
-  oidc-client-id: ""
-  oidc-username-claim: "email"
-  oidc-groups-claim: "groups"
-```
-
-## Resources
-- https://kubernetes.io/docs/reference/access-authn-authz/authentication/
diff --git a/docs/cluster.md b/docs/cluster.md
deleted file mode 100644
index 0f975515..00000000
--- a/docs/cluster.md
+++ /dev/null
@@ -1,9 +0,0 @@
-# Cluster Operations
-
-## Clean up
-### Delete evicted pods across all namespaces
-
-`kubectl get pods --all-namespaces -o json | jq '.items[] | select(.status.reason!=null) | select(.status.reason | contains("Evicted")) | "kubectl delete pods \(.metadata.name) -n \(.metadata.namespace)"' | xargs -n 1 bash -c
-`
-### Cleanup old replicasets
-`kubectl get rs --all-namespaces | awk {' if ($3 == 0 && $4 == 0) system("kubectl delete rs "$2" --namespace="$1)'}`
diff --git a/docs/kubectl.md b/docs/kubectl.md
deleted file mode 100644
index fa8283e6..00000000
--- a/docs/kubectl.md
+++ /dev/null
@@ -1,21 +0,0 @@
-# kubectl
-kubectl is the basic cmdline tool to interact with any kubernetes cluster via the kube-api server.
-
-## Plugins
-As there are various very useful plugins for kubectl the first thing should be to install *krew* the plugin manager.
-See: https://github.com/kubernetes-sigs/krew for details
-
-List of awesome plugins: https://github.com/ishantanu/awesome-kubectl-plugins
-
-### kubelogin
-To login / authenticate against an openID provider like Google install the kubelogin plugin.
-See: https://github.com/int128/kubelogin
-
-Make sure to adjust your kubeconfig files accordingly !
-
-### kauthproxy
-Easiest way to access the Kubernetes dashboard, if installed in the targeted cluster, is to use the kauthproxy plugin.
-See: https://github.com/int128/kauthproxy
-Once installed simply execute:
-`kubectl auth-proxy -n kubernetes-dashboard https://kubernetes-dashboard.svc`
-and access the dashboard via the automatically opened browser window.
diff --git a/docs/misc.md b/docs/misc.md
index a48b318c..9b0246ab 100644
--- a/docs/misc.md
+++ b/docs/misc.md
@@ -27,3 +27,17 @@ Something along the lines of https://github.com/onfido/k8s-cleanup which doesnt
 ## Resources
 - https://docs.google.com/spreadsheets/d/1WPHt0gsb7adVzY3eviMK2W8LejV0I5m_Zpc8tMzl_2w/edit#gid=0
 - https://github.com/ishantanu/awesome-kubectl-plugins
+
+## Update Api-server config
+Add the following extraArgs to the ClusterConfiguration configMap in the kube-system namespace:
+`kubectl edit -n kube-system cm kubeadm-config`
+
+```
+  oidc-issuer-url: "https://accounts.google.com"
+  oidc-client-id: ""
+  oidc-username-claim: "email"
+  oidc-groups-claim: "groups"
+```
+
+## Resources
+- https://kubernetes.io/docs/reference/access-authn-authz/authentication/
diff --git a/docs/notes.md b/docs/notes.md
new file mode 100644
index 00000000..a0d5d04e
--- /dev/null
+++ b/docs/notes.md
@@ -0,0 +1,49 @@
+# Cluster Operations
+
+## Delete evicted pods across all namespaces
+
+`
+kubectl get pods --all-namespaces -o json | jq '.items[] | select(.status.reason!=null) | select(.status.reason | contains("Evicted")) | "kubectl delete pods \(.metadata.name) -n \(.metadata.namespace)"' | xargs -n 1 bash -c
+`
+
+## Cleanup old replicasets
+`kubectl get rs --all-namespaces | awk {' if ($3 == 0 && $4 == 0) system("kubectl delete rs "$2" --namespace="$1)'}`
+
+## Replace worker nodes
+In order to change the instance type or in general replace worker nodes, do the following:
+
+* (optional) Update the launch configuration of the worker group
+
+* Make sure there is enough capacity in the cluster to handle all pods being evicted from the node
+
+* `kubectl drain --ignore-daemonsets node_name`
+will evict all pods except DaemonSets. If there are pods with local storage, review each affected pod. Once you are sure no important data will be lost, add `--delete-local-data` to the command above and try again.
+
+* Terminate the instance matching *node_name*
+
+The new instance should take over the previous node_name, assuming only one node is being replaced at a time, and automatically join and replace the previous node.
+
+---
+
+# kubectl
+kubectl is the basic command-line tool to interact with any Kubernetes cluster via the kube-api server.
+
+## Plugins
+As there are various very useful plugins for kubectl, the first thing to do is install *krew*, the plugin manager.
+See: https://github.com/kubernetes-sigs/krew for details
+
+List of awesome plugins: https://github.com/ishantanu/awesome-kubectl-plugins
+
+### kubelogin
+To login / authenticate against an OpenID provider like Google, install the kubelogin plugin.
+See: https://github.com/int128/kubelogin
+
+Make sure to adjust your kubeconfig files accordingly !
+
+### kauthproxy
+The easiest way to access the Kubernetes dashboard, if installed in the targeted cluster, is the kauthproxy plugin.
+See: https://github.com/int128/kauthproxy
+Once installed, simply execute:
+`kubectl auth-proxy -n kubernetes-dashboard https://kubernetes-dashboard.svc`
+and access the dashboard via the automatically opened browser window.
+# api-server OAuth configuration
diff --git a/docs/worker.md b/docs/worker.md
deleted file mode 100644
index 0c4a7678..00000000
--- a/docs/worker.md
+++ /dev/null
@@ -1,15 +0,0 @@
-# Operational guide for worker nodes
-
-## Replace worker node
-In order to change the instance type or in genernal replace worker nodes do:
-
-* (optional) Update the launch configuration of the worker group
-
-* Make sure there is enough capacity in the cluster to handle all pods being evicted for the node
-
-* `kubectl drain --ignore-daemonsets node_name`
-will evict all pods except DaemonSets. In case there are pods with local storage review each affected pod. After being sure no important data will be lost add `--delete-local-data` to the original command above and try again.
-
-* Terminate instance matching *node_name*
-
-The new instance should take over the previous node_name assuming only node is being replaced at a time and automatically join and replace the previous node.