Updating Rook from 1.4.9 to 1.5.12
The cluster is at version 1.4.9. The toolbox cannot talk to the cluster because its Ceph client is too old. Hopefully going to 1.5 will fix it, but I doubt it, as some issue reports said it was only fixed in 1.6.
Either way, the cluster needs to be updated.
Getting the source
git clone https://github.com/rook/rook.git
Check the tags for 1.5:
$ git tag -l | grep 1.5
v1.1.5
v1.5.0
v1.5.0-alpha.0
v1.5.0-beta.0
v1.5.1
v1.5.10
v1.5.11
v1.5.12
v1.5.2
v1.5.3
v1.5.4
v1.5.5
v1.5.6
v1.5.7
v1.5.8
v1.5.9
v1.5.12 is the latest 1.5 release, so check that out:
$ git checkout tags/v1.5.12
Previous HEAD position was 3bccbc9ef Merge pull request #6963 from travisn/release-1.4.9
HEAD is now at a40bfdd62 Merge pull request #8049 from travisn/release-1.5.12
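Putting those steps together, the whole fetch looks like this (the manifests applied below live under cluster/examples/kubernetes/ceph on this tag):

```shell
# Clone Rook and check out the v1.5.12 release tag.
git clone https://github.com/rook/rook.git
cd rook
git checkout tags/v1.5.12
# The upgrade manifests (common.yaml, crds.yaml, toolbox.yaml) live here.
cd cluster/examples/kubernetes/ceph
```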
Upgrading the common resources, service accounts, and Custom Resource Definitions
kubectl apply -f common.yaml -f crds.yaml
For example:
$ kubectl apply -f common.yaml -f crds.yaml
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
namespace/rook-ceph configured
clusterrolebinding.rbac.authorization.k8s.io/rook-ceph-object-bucket configured
serviceaccount/rook-ceph-admission-controller unchanged
clusterrole.rbac.authorization.k8s.io/rook-ceph-admission-controller-role unchanged
clusterrolebinding.rbac.authorization.k8s.io/rook-ceph-admission-controller-rolebinding configured
clusterrole.rbac.authorization.k8s.io/rook-ceph-cluster-mgmt configured
role.rbac.authorization.k8s.io/rook-ceph-system configured
clusterrole.rbac.authorization.k8s.io/rook-ceph-global configured
clusterrole.rbac.authorization.k8s.io/rook-ceph-mgr-cluster configured
clusterrole.rbac.authorization.k8s.io/rook-ceph-object-bucket configured
serviceaccount/rook-ceph-system configured
rolebinding.rbac.authorization.k8s.io/rook-ceph-system configured
clusterrolebinding.rbac.authorization.k8s.io/rook-ceph-global configured
serviceaccount/rook-ceph-osd configured
serviceaccount/rook-ceph-mgr configured
serviceaccount/rook-ceph-cmd-reporter configured
role.rbac.authorization.k8s.io/rook-ceph-osd configured
clusterrole.rbac.authorization.k8s.io/rook-ceph-osd configured
clusterrole.rbac.authorization.k8s.io/rook-ceph-mgr-system configured
role.rbac.authorization.k8s.io/rook-ceph-mgr configured
role.rbac.authorization.k8s.io/rook-ceph-cmd-reporter configured
rolebinding.rbac.authorization.k8s.io/rook-ceph-cluster-mgmt configured
rolebinding.rbac.authorization.k8s.io/rook-ceph-osd configured
rolebinding.rbac.authorization.k8s.io/rook-ceph-mgr configured
rolebinding.rbac.authorization.k8s.io/rook-ceph-mgr-system configured
clusterrolebinding.rbac.authorization.k8s.io/rook-ceph-mgr-cluster configured
clusterrolebinding.rbac.authorization.k8s.io/rook-ceph-osd configured
rolebinding.rbac.authorization.k8s.io/rook-ceph-cmd-reporter configured
podsecuritypolicy.policy/00-rook-privileged created
clusterrole.rbac.authorization.k8s.io/psp:rook configured
clusterrolebinding.rbac.authorization.k8s.io/rook-ceph-system-psp configured
rolebinding.rbac.authorization.k8s.io/rook-ceph-default-psp configured
rolebinding.rbac.authorization.k8s.io/rook-ceph-osd-psp configured
rolebinding.rbac.authorization.k8s.io/rook-ceph-mgr-psp configured
rolebinding.rbac.authorization.k8s.io/rook-ceph-cmd-reporter-psp configured
serviceaccount/rook-csi-cephfs-plugin-sa configured
serviceaccount/rook-csi-cephfs-provisioner-sa configured
role.rbac.authorization.k8s.io/cephfs-external-provisioner-cfg configured
rolebinding.rbac.authorization.k8s.io/cephfs-csi-provisioner-role-cfg configured
clusterrole.rbac.authorization.k8s.io/cephfs-csi-nodeplugin unchanged
clusterrole.rbac.authorization.k8s.io/cephfs-external-provisioner-runner configured
clusterrolebinding.rbac.authorization.k8s.io/rook-csi-cephfs-plugin-sa-psp configured
clusterrolebinding.rbac.authorization.k8s.io/rook-csi-cephfs-provisioner-sa-psp configured
clusterrolebinding.rbac.authorization.k8s.io/cephfs-csi-nodeplugin configured
clusterrolebinding.rbac.authorization.k8s.io/cephfs-csi-provisioner-role configured
serviceaccount/rook-csi-rbd-plugin-sa configured
serviceaccount/rook-csi-rbd-provisioner-sa configured
role.rbac.authorization.k8s.io/rbd-external-provisioner-cfg configured
rolebinding.rbac.authorization.k8s.io/rbd-csi-provisioner-role-cfg configured
clusterrole.rbac.authorization.k8s.io/rbd-csi-nodeplugin unchanged
clusterrole.rbac.authorization.k8s.io/rbd-external-provisioner-runner configured
clusterrolebinding.rbac.authorization.k8s.io/rook-csi-rbd-plugin-sa-psp configured
clusterrolebinding.rbac.authorization.k8s.io/rook-csi-rbd-provisioner-sa-psp configured
clusterrolebinding.rbac.authorization.k8s.io/rbd-csi-nodeplugin configured
clusterrolebinding.rbac.authorization.k8s.io/rbd-csi-provisioner-role configured
customresourcedefinition.apiextensions.k8s.io/cephclusters.ceph.rook.io configured
customresourcedefinition.apiextensions.k8s.io/cephclients.ceph.rook.io configured
customresourcedefinition.apiextensions.k8s.io/cephrbdmirrors.ceph.rook.io configured
customresourcedefinition.apiextensions.k8s.io/cephfilesystems.ceph.rook.io configured
customresourcedefinition.apiextensions.k8s.io/cephnfses.ceph.rook.io configured
customresourcedefinition.apiextensions.k8s.io/cephobjectstores.ceph.rook.io configured
customresourcedefinition.apiextensions.k8s.io/cephobjectstoreusers.ceph.rook.io configured
customresourcedefinition.apiextensions.k8s.io/cephobjectrealms.ceph.rook.io configured
customresourcedefinition.apiextensions.k8s.io/cephobjectzonegroups.ceph.rook.io configured
customresourcedefinition.apiextensions.k8s.io/cephobjectzones.ceph.rook.io configured
customresourcedefinition.apiextensions.k8s.io/cephblockpools.ceph.rook.io configured
customresourcedefinition.apiextensions.k8s.io/volumes.rook.io configured
customresourcedefinition.apiextensions.k8s.io/objectbuckets.objectbucket.io configured
customresourcedefinition.apiextensions.k8s.io/objectbucketclaims.objectbucket.io configured
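A quick sanity check that the CRDs all landed (the grep pattern is my own, covering the two API groups seen in the output above):

```shell
# List the Rook and object-bucket CRDs that should now exist.
kubectl get crd | grep -E 'ceph\.rook\.io|objectbucket\.io'
```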
Updating the operator image to upgrade the cluster
First update the image:
kubectl -n rook-ceph set image deploy/rook-ceph-operator rook-ceph-operator=rook/ceph:v1.5.12
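Before watching the rollout, you can read the image back to confirm the change took (this assumes the operator container is the first container in the pod spec, which it is in the stock manifest):

```shell
# Print the operator deployment's container image; it should now
# report rook/ceph:v1.5.12.
kubectl -n rook-ceph get deploy rook-ceph-operator \
  -o jsonpath='{.spec.template.spec.containers[0].image}'
```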
Then watch the update rollout:
watch --exec kubectl -n rook-ceph get deployments -l rook_cluster=rook-ceph -o jsonpath='{range .items[*]}{.metadata.name}{" \treq/upd/avl: "}{.spec.replicas}{"/"}{.status.updatedReplicas}{"/"}{.status.readyReplicas}{" \trook-version="}{.metadata.labels.rook-version}{"\n"}{end}'
For example:
Every 2.0s: kubectl -n rook-ceph get deployments -l rook_cluster=rook-ceph -o jsonpath={range .items[*]}{.meta... gold-1: Tue Aug 16 22:04:17 2022
rook-ceph-crashcollector-gold-1 req/upd/avl: 1/1/1 rook-version=v1.5.12
rook-ceph-crashcollector-gold-4 req/upd/avl: 1/1/1 rook-version=v1.5.12
rook-ceph-crashcollector-gold-5 req/upd/avl: 1/1/1 rook-version=v1.5.12
rook-ceph-crashcollector-gold-6 req/upd/avl: 1/1/1 rook-version=v1.5.12
rook-ceph-mds-myfs-a req/upd/avl: 1/1/1 rook-version=v1.4.9
rook-ceph-mds-myfs-b req/upd/avl: 1/1/1 rook-version=v1.4.9
rook-ceph-mgr-a req/upd/avl: 1/1/1 rook-version=v1.4.9
rook-ceph-mon-h req/upd/avl: 1/1/1 rook-version=v1.4.9
rook-ceph-mon-i req/upd/avl: 1/1/1 rook-version=v1.4.9
rook-ceph-mon-k req/upd/avl: 1/1/1 rook-version=v1.4.9
rook-ceph-osd-1 req/upd/avl: 1/1/1 rook-version=v1.4.9
rook-ceph-osd-2 req/upd/avl: 1/1/1 rook-version=v1.4.9
rook-ceph-osd-3 req/upd/avl: 1/1/1 rook-version=v1.4.9
rook-ceph-osd-5 req/upd/avl: 1/1/1 rook-version=v1.4.9
rook-ceph-osd-6 req/upd/avl: 1/1/1 rook-version=v1.4.9
rook-ceph-osd-9 req/upd/avl: 1/1/1 rook-version=v1.4.9
The upgrade ran smoothly; after about six minutes everything was showing v1.5.12:
Every 2.0s: kubectl -n rook-ceph get deployments -l rook_cluster=rook-ceph -o jsonpath={range .items[*]}{.meta... gold-1: Tue Aug 16 22:10:46 2022
rook-ceph-crashcollector-gold-1 req/upd/avl: 1/1/1 rook-version=v1.5.12
rook-ceph-crashcollector-gold-4 req/upd/avl: 1/1/1 rook-version=v1.5.12
rook-ceph-crashcollector-gold-5 req/upd/avl: 1/1/1 rook-version=v1.5.12
rook-ceph-crashcollector-gold-6 req/upd/avl: 1/1/1 rook-version=v1.5.12
rook-ceph-mds-myfs-a req/upd/avl: 1/1/1 rook-version=v1.5.12
rook-ceph-mds-myfs-b req/upd/avl: 1/1/1 rook-version=v1.5.12
rook-ceph-mgr-a req/upd/avl: 1/1/1 rook-version=v1.5.12
rook-ceph-mon-h req/upd/avl: 1/1/1 rook-version=v1.5.12
rook-ceph-mon-i req/upd/avl: 1/1/1 rook-version=v1.5.12
rook-ceph-mon-k req/upd/avl: 1/1/1 rook-version=v1.5.12
rook-ceph-osd-1 req/upd/avl: 1/1/1 rook-version=v1.5.12
rook-ceph-osd-2 req/upd/avl: 1/1/1 rook-version=v1.5.12
rook-ceph-osd-3 req/upd/avl: 1/1/1 rook-version=v1.5.12
rook-ceph-osd-5 req/upd/avl: 1/1/1 rook-version=v1.5.12
rook-ceph-osd-6 req/upd/avl: 1/1/1 rook-version=v1.5.12
rook-ceph-osd-9 req/upd/avl: 1/1/1 rook-version=v1.5.12
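A more compact convergence check (same label selector as the watch above, piped through sort/uniq): when every deployment carries the same rook-version label, this prints exactly one line.

```shell
# Count deployments per rook-version label; a single output line
# means the whole cluster has converged on one Rook version.
kubectl -n rook-ceph get deployments -l rook_cluster=rook-ceph \
  -o jsonpath='{range .items[*]}{"rook-version="}{.metadata.labels.rook-version}{"\n"}{end}' \
  | sort | uniq -c
```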
Then I deleted the toolbox deployment and reapplied it from the updated yaml.
$ kubectl delete deployment rook-ceph-tools
deployment.apps "rook-ceph-tools" deleted
dcaldwel@gold-1:~/github/rook/cluster/examples/kubernetes/ceph$ kubectl apply -f toolbox.yaml
deployment.apps/rook-ceph-tools created
It worked!
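Incidentally, on reasonably recent kubectl you can skip copying the generated pod name by exec'ing through the deployment instead:

```shell
# kubectl resolves deploy/<name> to one of the deployment's pods.
kubectl exec -it deploy/rook-ceph-tools -- bash
```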
$ kubectl exec -it rook-ceph-tools-6c7d6bfc6c-cznc7 -- bash
[root@rook-ceph-tools-6c7d6bfc6c-cznc7 /]# ceph status
cluster:
id: 04461f64-e630-4891-bcea-0de24cf06c51
health: HEALTH_OK
services:
mon: 3 daemons, quorum h,i,k (age 6m)
mgr: a(active, since 5m)
mds: myfs:1 {0=myfs-a=up:active} 1 up:standby-replay
osd: 13 osds: 6 up (since 2m), 6 in (since 6d)
data:
pools: 4 pools, 73 pgs
objects: 6.69M objects, 2.9 TiB
usage: 8.8 TiB used, 46 TiB / 55 TiB avail
pgs: 72 active+clean
1 active+clean+scrubbing+deep+repair
io:
client: 1.2 KiB/s rd, 37 KiB/s wr, 2 op/s rd, 4 op/s wr
So now everything is healthy with no warnings. It's been a while since that happened.
Checking the version from inside the toolbox shows it is now running a client that can talk to the cluster:
# ceph version
ceph version 15.2.13 (c44bc49e7a57a87d84dfff2a077a2058aa2172e2) octopus (stable)
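`ceph versions` (plural) is also worth running here: it breaks the Ceph version down per daemon type, confirming the mons, mgr, OSDs, and MDS daemons are all on the same release.

```shell
# Show the Ceph version each daemon type is running, grouped by type.
ceph versions
```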
Next will be migrating to version 1.6+.