fix: another round of upgrade fixes for 1.22
This commit is contained in:
parent
a236ce112c
commit
fa5c1f9b2a
```diff
@@ -134,7 +134,7 @@ spec:
     memory: 20Mi
     cpu: 10m
   limits:
-    memory: 20Mi
+    memory: 64Mi
     #cpu: 100m
 
 volumeMounts:
```
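Read as YAML, this hunk raises the container's memory limit from 20Mi to 64Mi while leaving the requests untouched. The surrounding block presumably looks like the sketch below; the `resources:`/`requests:` nesting is an assumption, only the inner lines are from the diff:

```yaml
resources:
  requests:
    memory: 20Mi   # unchanged request
    cpu: 10m
  limits:
    memory: 64Mi   # raised from 20Mi in this commit
    #cpu: 100m     # CPU limit stays commented out
```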
````diff
@@ -84,8 +84,6 @@ Might take a while ...
 4. Migrate ArgoCD KubeZero config for your cluster:
 ```cat <cluster/env/kubezero/application.yaml> | ./release/v1.22/migrate_agro.py```
 Adjust as needed...
-If the ECK operator is running in your cluster make sure to replace the CRDs *BEFORE* committing the new kubezero config !
-```kubectl replace -f https://download.elastic.co/downloads/eck/2.1.0/crds.yaml```
 
 - git add / commit / push
 - Watch ArgoCD do its work.
````
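The manual CRD-replacement step removed here is guarded elsewhere in the upgrade flow with `kubectl get crd ... && kubectl replace ...`. The `&&` short-circuit is what makes that safe on clusters without ECK; a self-contained illustration in plain shell, where `have_crd` is a hypothetical stub standing in for the `kubectl get crd` check:

```shell
# "check && act" short-circuit: the right-hand command runs only when the
# left-hand check exits 0. have_crd is a stub standing in for
# "kubectl get crd elasticsearches.elasticsearch.k8s.elastic.co".
have_crd() { return 1; }   # pretend the ECK CRD is absent

have_crd && echo "would replace CRDs" || echo "ECK not installed, skipping"
```

On a real cluster the same shape becomes `kubectl get crd elasticsearches.elasticsearch.k8s.elastic.co && kubectl replace -f https://download.elastic.co/downloads/eck/2.1.0/crds.yaml`, as the upgrade script does.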
```diff
@@ -96,3 +94,10 @@ once all new workers joined, drain old workers one by one,
 finally reset `desired` for each worker ASG which will terminate the old workers.
 
 ## Known issues
+
+### Metrics
+- `metrics-prometheus-node-exporter` will go into `CreateContainerError`
+on 1.21 nodes until the metrics module is upgraded, due to underlying OS changes
+
+### Logging
+- `logging-fluent-bit` will go into `CrashLoopBackoff` on 1.21 nodes, until logging module is upgraded, due to underlying OS changes
```
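Both known issues surface as pod states, so affected pods on not-yet-upgraded 1.21 nodes can be spotted with a simple filter. A sketch — the canned lines below are made-up example output, and note that kubectl reports the state as `CrashLoopBackOff`:

```shell
# Filter "kubectl get pods -A" style output for the two failure states above.
filter_broken() {
  grep -E 'CreateContainerError|CrashLoopBackOff'
}

# Canned example output; in practice: kubectl get pods -A | filter_broken
printf '%s\n' \
  'monitoring   metrics-prometheus-node-exporter-abcde   0/1  CreateContainerError  0  5m' \
  'logging      logging-fluent-bit-xyz12                 0/1  CrashLoopBackOff      7  5m' \
  'kube-system  kube-proxy-fghij                         1/1  Running               0  5m' \
  | filter_broken
```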
```diff
@@ -109,3 +109,12 @@ while true; do
   sleep 3
 done
 kubectl delete pod kubezero-upgrade-${VERSION//.} -n kube-system
+
+# Now lets rolling restart bunch of ds to make sure they picked up the changes
+for ds in calico-node kube-multus-ds kube-proxy ebs-csi-node; do
+  kubectl rollout restart daemonset/$ds -n kube-system
+  kubectl rollout status daemonset/$ds -n kube-system
+done
+
+# Force replace the ECK CRDs
+kubectl get crd elasticsearches.elasticsearch.k8s.elastic.co && kubectl replace -f https://download.elastic.co/downloads/eck/2.1.0/crds.yaml
```
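The `${VERSION//.}` in the pod name above is bash pattern substitution: it deletes every `.` from the version string. For example (the version value is illustrative):

```shell
# ${VAR//pattern} replaces all matches of pattern with nothing.
VERSION="1.22.3"                      # illustrative value
POD="kubezero-upgrade-${VERSION//.}"
echo "$POD"                           # -> kubezero-upgrade-1223
```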