Building a Ceph Cluster and Implementing Dynamic Storage (StorageClass) on Kubernetes

2022-12-11

Cluster preparation

Ceph cluster configuration

 
Node name     IP address    Configuration                    Role
ceph-moni-0   10.10.3.150   CentOS 7.5, 4C, 16G, 200G disk   admin node, monitor
ceph-moni-1   10.10.3.151   CentOS 7.5, 4C, 16G, 200G disk   monitor
ceph-moni-2   10.10.3.152   CentOS 7.5, 4C, 16G, 200G disk   monitor
ceph-osd-0    10.10.3.153   CentOS 7.5, 4C, 16G, 200G disk   storage node (osd)
ceph-osd-1    10.10.3.154   CentOS 7.5, 4C, 16G, 200G disk   storage node (osd)
ceph-osd-2    10.10.3.155   CentOS 7.5, 4C, 16G, 200G disk   storage node (osd)

This article uses ceph-deploy to install and configure the cluster across 6 nodes: 3 monitor nodes and 3 osd nodes.

Installing and configuring the Ceph cluster

Install dependency packages (all nodes)

sudo yum install -y yum-utils && sudo yum-config-manager --add-repo https://dl.fedoraproject.org/pub/epel/7/x86_64/ && sudo yum install --nogpgcheck -y epel-release && sudo rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 && sudo rm /etc/yum.repos.d/dl.fedoraproject.org*
yum install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm

Configure the ceph yum repo (all nodes)

vim /etc/yum.repos.d/ceph.repo

[ceph-noarch]
name=Ceph noarch packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-kraken/el7/noarch/
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=http://mirrors.aliyun.com/ceph/keys/release.asc
priority=1

[ceph]
name=Ceph packages for $basearch
baseurl=http://mirrors.aliyun.com/ceph/rpm-kraken/el7/$basearch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=http://mirrors.aliyun.com/ceph/keys/release.asc
priority=1

[ceph-source]
name=Ceph source packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-kraken/el7/SRPMS
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=http://mirrors.aliyun.com/ceph/keys/release.asc
priority=1

Update yum (all nodes)

sudo yum update 

Add cluster hosts entries (all nodes)

cat >> /etc/hosts <<EOF
10.10.3.150 ceph-moni-0
10.10.3.151 ceph-moni-1
10.10.3.152 ceph-moni-2
10.10.3.153 ceph-osd-0
10.10.3.154 ceph-osd-1
10.10.3.155 ceph-osd-2
EOF

Create a user, grant root privileges, and set up passwordless sudo (all nodes)

#replace <password> with the password you want for the ceph user
useradd -d /home/ceph -m ceph && echo "<password>" | passwd --stdin ceph && echo "ceph ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/ceph

Install ceph-deploy (admin node)

sudo yum install ceph-deploy

Configure passwordless ssh login (admin node)

su - ceph
ssh-keygen -t rsa
#copy the key to the other nodes
ssh-copy-id -i ~/.ssh/id_rsa.pub ceph@ceph-moni-1
ssh-copy-id -i ~/.ssh/id_rsa.pub ceph@ceph-moni-2
ssh-copy-id -i ~/.ssh/id_rsa.pub ceph@ceph-osd-0
ssh-copy-id -i ~/.ssh/id_rsa.pub ceph@ceph-osd-1
ssh-copy-id -i ~/.ssh/id_rsa.pub ceph@ceph-osd-2
#edit ~/.ssh/config on the admin node
Host ceph-moni-1
Hostname ceph-moni-1
User ceph
Host ceph-moni-2
Hostname ceph-moni-2
User ceph
Host ceph-osd-0
Hostname ceph-osd-0
User ceph
Host ceph-osd-1
Hostname ceph-osd-1
User ceph
Host ceph-osd-2
Hostname ceph-osd-2
User ceph
#fix the file permissions
sudo chmod 600 ~/.ssh/config
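
It is worth confirming that passwordless login really works from the admin node before continuing; this should print the remote hostname without asking for a password (any of the hosts above will do):

ssh ceph-osd-0 hostname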

Create the cluster (admin node)

su - ceph
mkdir ceph-cluster
cd ceph-cluster
ceph-deploy new {initial-monitor-node(s)}
For example:
ceph-deploy new ceph-moni-0 ceph-moni-1 ceph-moni-2

On the admin node, edit the generated ceph configuration file and add the following

vim ceph.conf
#default pool replica count (set to the number of osd nodes, 3 here)
osd pool default size = 3
#allow pools to be deleted from the ceph cluster
[mon]
mon_allow_pool_delete = true

Install ceph on all cluster nodes from the admin node

ceph-deploy install {ceph-node} [{ceph-node} ...]
For example:
ceph-deploy install ceph-moni-0 ceph-moni-1 ceph-moni-2 ceph-osd-0 ceph-osd-1 ceph-osd-2

Configure the initial monitor(s) and gather all keys

ceph-deploy mon create-initial
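
When this step succeeds, ceph-deploy drops the gathered keyrings into the working directory created above; a quick listing confirms they were collected:

ls ~/ceph-cluster/*.keyring
#expect ceph.client.admin.keyring plus the bootstrap keyrings (osd/mds/rgw)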

From the admin node, log in to each osd node and create its data storage directory (all osd nodes)

ssh ceph-osd-0
sudo mkdir /var/local/osd0
sudo chmod 777 -R /var/local/osd0
exit
ssh ceph-osd-1
sudo mkdir /var/local/osd1
sudo chmod 777 -R /var/local/osd1
exit
ssh ceph-osd-2
sudo mkdir /var/local/osd2
sudo chmod 777 -R /var/local/osd2
exit

Prepare each osd (run on the admin node)

ceph-deploy osd prepare ceph-osd-0:/var/local/osd0 ceph-osd-1:/var/local/osd1 ceph-osd-2:/var/local/osd2

Activate each osd (run on the admin node)

ceph-deploy osd activate ceph-osd-0:/var/local/osd0 ceph-osd-1:/var/local/osd1 ceph-osd-2:/var/local/osd2

From the admin node, copy the configuration file and admin key to the admin node and the Ceph nodes, then make ceph.client.admin.keyring readable (all nodes)

ceph-deploy admin {manage-node} {ceph-node}
For example:
ceph-deploy admin ceph-moni-0 ceph-moni-1 ceph-moni-2 ceph-osd-0 ceph-osd-1 ceph-osd-2
#run on all nodes
sudo chmod +r /etc/ceph/ceph.client.admin.keyring

Deployment is complete. Check the cluster status

$ ceph health
HEALTH_OK
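
Beyond ceph health, a few stock Ceph CLI commands give a fuller picture; run them on any node that has the admin keyring:

ceph -s         #overall status: monitor quorum, osd map, pg states
ceph osd tree   #osd layout and up/in state per host
ceph df         #cluster-wide and per-pool capacity usage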

Client configuration

My Kubernetes nodes run Ubuntu, but every node in a Kubernetes cluster that wants to use Ceph must have the Ceph client installed and configured, so I give the installation steps for both operating systems here.

CentOS

Add the ceph repo

vim /etc/yum.repos.d/ceph.repo

[ceph-noarch]
name=Ceph noarch packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-kraken/el7/noarch/
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=http://mirrors.aliyun.com/ceph/keys/release.asc
priority=1

[ceph]
name=Ceph packages for $basearch
baseurl=http://mirrors.aliyun.com/ceph/rpm-kraken/el7/$basearch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=http://mirrors.aliyun.com/ceph/keys/release.asc
priority=1

[ceph-source]
name=Ceph source packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-kraken/el7/SRPMS
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=http://mirrors.aliyun.com/ceph/keys/release.asc
priority=1

Install the ceph client

yum update && yum install -y ceph

Add cluster host entries

cat >> /etc/hosts <<EOF
10.10.3.30 ceph-client01
10.10.3.150 ceph-moni-0
10.10.3.151 ceph-moni-1
10.10.3.152 ceph-moni-2
10.10.3.153 ceph-osd-0
10.10.3.154 ceph-osd-1
10.10.3.155 ceph-osd-2
EOF

Copy the cluster configuration and admin key

scp -r root@ceph-moni-0:/etc/ceph/\{ceph.conf,ceph.client.admin.keyring\} /etc/ceph/

Ubuntu

Configure the repo

wget -q -O- https://mirrors.aliyun.com/ceph/keys/release.asc | sudo apt-key add -; echo deb https://mirrors.aliyun.com/ceph/debian-kraken xenial main | sudo tee /etc/apt/sources.list.d/ceph.list

Update packages

apt update  && apt -y dist-upgrade && apt -y autoremove

Install the ceph client

apt-get install ceph 

Add cluster host entries

cat >> /etc/hosts <<EOF
10.10.3.30 ceph-client01
10.10.3.150 ceph-moni-0
10.10.3.151 ceph-moni-1
10.10.3.152 ceph-moni-2
10.10.3.153 ceph-osd-0
10.10.3.154 ceph-osd-1
10.10.3.155 ceph-osd-2
EOF

Copy the cluster configuration and admin key

scp -r root@ceph-moni-0:/etc/ceph/\{ceph.conf,ceph.client.admin.keyring\} /etc/ceph/
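
With the config and keyring in place, the client can be checked against the cluster using the same CLI as on the cluster nodes; HEALTH_OK means the client reaches the monitors with the admin key:

ceph health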

Configure the StorageClass

Every Kubernetes node must be able to reach the Ceph cluster, so the Ceph client (ceph-common) has to be installed on all of the nodes; installing the full ceph package, as I did above, also works (see the sketch below).
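
If only the client is wanted on the Kubernetes nodes, installing ceph-common alone is enough for the rbd provisioner; the package name is the same on both distributions:

#CentOS nodes
yum install -y ceph-common
#Ubuntu nodes
apt-get install -y ceph-common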

Generate the key

$ grep key /etc/ceph/ceph.client.admin.keyring |awk '{printf "%s", $NF}'|base64
QVFCWXB0RmIzK2dqTEJBQUtsYm4vaHU2NWZ2eHlaaGRnM2hwc1E9PQ==
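
As a sanity check before wiring this value into a Secret, decoding the string should reproduce the key exactly as it appears in the keyring:

$ echo QVFCWXB0RmIzK2dqTEJBQUtsYm4vaHU2NWZ2eHlaaGRnM2hwc1E9PQ== | base64 -d
AQBYptFb3+gjLBAAKlbn/hu65fvxyZhdg3hpsQ==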

Configure the secret for accessing ceph

The secret below is created in the default namespace, so it can only be used there; to use it from another namespace, create the secret in that namespace (just change the namespace field).

$ vim ceph-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret
  namespace: default
type: "kubernetes.io/rbd"
data:
  key: QVFCWXB0RmIzK2dqTEJBQUtsYm4vaHU2NWZ2eHlaaGRnM2hwc1E9PQ==
$ kubectl apply -f ceph-secret.yaml
secret/ceph-secret created
$ kubectl get secret
NAME                  TYPE                                  DATA   AGE
ceph-secret           kubernetes.io/rbd                     1      4s
default-token-lplp6   kubernetes.io/service-account-token   3      50d
mysql-root-password   Opaque                                1      2d

Configure the ceph storage class

$ vim ceph-storageclass.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: jax-ceph
provisioner: kubernetes.io/rbd
parameters:
  monitors: 10.10.3.150:6789,10.10.3.151:6789,10.10.3.152:6789
  adminId: admin
  adminSecretName: ceph-secret
  adminSecretNamespace: default
  pool: rbd
  userId: admin
  userSecretName: ceph-secret
$ kubectl apply -f ceph-storageclass.yaml
storageclass.storage.k8s.io/jax-ceph created
$ kubectl get storageclass
NAME              PROVISIONER          AGE
jax-ceph          kubernetes.io/rbd    1
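
Before putting the class behind a real workload, a throwaway PVC is a cheap way to prove that dynamic provisioning works end to end (the ceph-pvc-test name is only for illustration; the claim should go Bound once the rbd image is created):

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ceph-pvc-test
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: jax-ceph
  resources:
    requests:
      storage: 1Gi
EOF
kubectl get pvc ceph-pvc-test   #STATUS should reach Bound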

Dynamic storage provisioning is now in place.

Below is an example of using it in a StatefulSet.

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: myapp
spec:
  serviceName: myapp-sts-svc
  replicas: 3
  selector:
    matchLabels:
      app: myapp-pod
  template:
    metadata:
      labels:
        app: myapp-pod
    spec:
      containers:
      - name: myapp
        image: ikubernetes/myapp:v1
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: myappdata
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: myappdata
    spec:
      accessModes: ["ReadWriteOnce"]
      storageClassName: "jax-ceph"
      resources:
        requests:
          storage: 5Gi
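
Each replica gets its own claim stamped out of volumeClaimTemplates, named <template>-<pod>; once the pods are running, all of the claims should show Bound:

kubectl get pvc
#expect myappdata-myapp-0, myappdata-myapp-1, myappdata-myapp-2, each Bound to a 5Gi rbd volume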
