Tearing down an installed rook-ceph cluster

2022-11-06

Official documentation: https://rook.io/docs/rook/v1.8/ceph-teardown.html

If you want to tear down the cluster and bring up a new one, be aware of the following resources that need to be cleaned up (a quick inventory check is sketched after the list):

rook-ceph namespace: The Rook operator and cluster created by operator.yaml and cluster.yaml (the cluster CRD)
/var/lib/rook: Path on each host in the cluster where configuration is cached by the ceph mons and osds # this directory must be deleted on every host
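
Before deleting anything, it can help to confirm what Rook actually created. A minimal check, assuming the default rook-ceph namespace:

# List the Rook/Ceph custom resources and workloads that are about to be removed
kubectl -n rook-ceph get cephcluster,cephblockpool,cephfilesystem
kubectl -n rook-ceph get pods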

Delete the existing resources

kubectl delete -n rook-ceph cephblockpool replicapool
kubectl delete storageclass rook-ceph-block
kubectl delete storageclass csi-cephfs
kubectl -n rook-ceph delete cephcluster rook-ceph
kubectl delete -f operator.yaml
kubectl delete -f common.yaml
kubectl delete -f crds.yaml
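
If the cephcluster resource hangs in the Deleting state (commonly because a finalizer is still set), the upstream teardown docs describe removing the finalizer manually. A hedged sketch, assuming the default resource name rook-ceph:

# Only if deletion hangs: clear the finalizer so Kubernetes can garbage-collect the resource
kubectl -n rook-ceph patch cephcluster rook-ceph --type merge -p '{"metadata":{"finalizers":[]}}'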

Delete the /var/lib/rook directory on each host (this is the default path)
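
This has to be done on every node that ran mons or osds. A minimal sketch, assuming SSH access and using hypothetical hostnames node1..node3:

# Remove the cached Rook configuration from each node (hostnames are placeholders)
for node in node1 node2 node3; do
    ssh "$node" 'rm -rf /var/lib/rook'
done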

Wipe the disk data


#!/usr/bin/env bash
# yum -y install gdisk   # sgdisk is provided by the gdisk package
DISK="/dev/sdb"   # change this to the actual disk
# Zap the disk to a fresh, usable state (zap-all is important, b/c MBR has to be clean).
# You will have to run this step for all disks.
sgdisk --zap-all $DISK
# Clean hdds with dd (use this command to wipe spinning disks)
dd if=/dev/zero of="$DISK" bs=1M count=100 oflag=direct,dsync
# Clean disks such as ssd with blkdiscard instead of dd (on a non-SSD this step reports that the operation is not supported)
blkdiscard $DISK
# These steps only have to be run once on each node.
# If rook sets up osds using ceph-volume, teardown leaves some devices mapped that lock the disks.
ls /dev/mapper/ceph-* | xargs -I% -- dmsetup remove %
# ceph-volume setup can leave ceph-<UUID> directories in /dev and /dev/mapper (unnecessary clutter)
rm -rf /dev/ceph-*
rm -rf /dev/mapper/ceph--*
# Inform the OS of partition table changes
partprobe $DISK
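
After the wipe, a quick sanity check can confirm nothing Ceph-related still holds the disk. A minimal sketch, reusing the same $DISK variable:

# Verify no partitions or ceph device-mapper entries remain
lsblk $DISK
dmsetup ls | grep ceph || echo "no ceph device-mapper entries left"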

That concludes the tutorial on tearing down an installed rook-ceph cluster.
