GlusterFS Installation and Configuration

2022-12-08

Goal:

The existing Kubernetes cluster has run out of disk capacity and data migration could not be completed. Since procuring new physical disks would be delayed by the purchasing process, we requested virtual machines and set up GlusterFS as distributed storage instead.

Disk planning:

# List the block devices
$ lsblk
# Use LVM so the volume can be grown later if capacity runs short.
# Add the PV attribute to the raw disks
$ pvcreate /dev/sdb /dev/sdc /dev/sdd /dev/sde
# Create a volume group
$ vgcreate storage /dev/sdb /dev/sdc /dev/sdd /dev/sde
# Create a logical volume
$ lvcreate -n gfs-user-data -L 3T storage
# Format the volume
$ mkfs.ext4 /dev/storage/gfs-user-data
# Create the mount point and make the mount persistent across reboots
$ mkdir -p /glusterfs-user-data
$ echo "/dev/storage/gfs-user-data /glusterfs-user-data ext4 defaults 0 0" >> /etc/fstab
# Mount everything listed in /etc/fstab
$ mount -a
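The fstab line appended above can also be generated by a small helper, which avoids typos in hand-written entries. A minimal sketch (the `fstab_entry` function name is hypothetical, not a standard command):

```shell
# fstab_entry DEVICE MOUNTPOINT FSTYPE
# Prints a canonical /etc/fstab line with the "defaults 0 0" options used above.
# Hypothetical helper, shown only to keep the entry format consistent.
fstab_entry() {
    printf '%s %s %s defaults 0 0\n' "$1" "$2" "$3"
}

# Produces the same line as the echo above:
fstab_entry /dev/storage/gfs-user-data /glusterfs-user-data ext4
```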

Installing and using GlusterFS on CentOS 7

Test environment
Three nodes are used here to install GlusterFS; to prevent split-brain, three or more nodes are generally required:
node1 10.201.2.111
node2 10.201.2.150
node3 10.201.2.178

GlusterFS installation

yum install glusterfs-server glusterfs

In an offline environment, download the RPM packages and install them locally. Package download address: http://mirror.centos.org/centos/7/storage/x86_64/gluster-4.1/

glusterfs-4.1.4-1.el7.x86_64.rpm
glusterfs-libs-4.1.4-1.el7.x86_64.rpm
glusterfs-api-4.1.4-1.el7.x86_64.rpm
glusterfs-server-4.1.4-1.el7.x86_64.rpm
glusterfs-cli-4.1.4-1.el7.x86_64.rpm
rpcbind-0.2.0-44.el7.x86_64.rpm
glusterfs-client-xlators-4.1.4-1.el7.x86_64.rpm
userspace-rcu-0.10.0-3.el7.x86_64.rpm
glusterfs-fuse-4.1.4-1.el7.x86_64.rpm
userspace-rcu-devel-0.10.0-3.el7.x86_64.rpm
# Alternatively, install specific package versions together with their
# dependencies (note: this example uses the 7.5 builds; match the versions
# you actually downloaded):
yum install -y attr-2.4.46-13.el7.x86_64 \
    userspace-rcu-0.10.0-3.el7.x86_64 \
    glusterfs-libs-7.5-1.el7.x86_64 \
    glusterfs-client-xlators-7.5-1.el7.x86_64 \
    glusterfs-cli-7.5-1.el7.x86_64 \
    glusterfs-7.5-1.el7.x86_64 \
    psmisc-22.20-15.el7.x86_64 \
    glusterfs-fuse-7.5-1.el7.x86_64 \
    glusterfs-api-7.5-1.el7.x86_64 \
    glusterfs-server-7.5-1.el7.x86_64 \
    openssl-devel-1.0.2k-19.el7.x86_64
Install the downloaded RPM packages:
yum install *.rpm

Starting GlusterFS

Commands to start the glusterfs service:

## Start the glusterd service
systemctl start glusterd
## Enable glusterd at boot so the daemon survives a reboot
systemctl enable glusterd
## Check the status of the glusterd service
systemctl status glusterd

If a firewall is enabled, it must be configured to allow traffic between the nodes:

# iptables
iptables -I INPUT -p all -s <ip-address> -j ACCEPT
# firewalld
firewall-cmd --add-service=glusterfs --permanent && firewall-cmd --reload
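On systems where firewalld does not ship a predefined `glusterfs` service, the ports can be opened directly: GlusterFS uses 24007-24008 for management and one port per brick starting at 49152. A hedged sketch that prints the commands instead of executing them, so they can be reviewed first (the `gluster_fw_cmds` function name is an illustration, not a real tool):

```shell
# Emit the firewall-cmd invocations for GlusterFS's default ports.
# Printed rather than executed; pipe the output to "sh" as root to apply.
gluster_fw_cmds() {
    echo "firewall-cmd --permanent --add-port=24007-24008/tcp"  # glusterd management
    echo "firewall-cmd --permanent --add-port=49152-49251/tcp"  # brick ports (one per brick)
    echo "firewall-cmd --reload"
}
gluster_fw_cmds
```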

GlusterFS configuration

Configure the trusted pool. The other nodes only need to be probed from one node. (If you probe by hostname, you must also probe the first node from any other node so that its hostname is recorded.)

# On node1, probe node2 and node3:
gluster peer probe 10.255.83.23
gluster peer probe 10.74.234.109
# On node2, probe node1:
gluster peer probe 10.156.28.11

Checking the gluster cluster status

# On node1, check the cluster peer status
gluster peer status
## Output:
Number of Peers: 2

Hostname: 10.255.83.23
Uuid: 0a4724ff-fdd5-4664-9084-51be940b1223
State: Peer in Cluster (Connected)

Hostname: 10.74.234.109
Uuid: d9cfdb0e-8399-4fb2-90ba-5c0795f3a9c1
State: Peer in Cluster (Connected)

# On node2, check the cluster peer status
gluster peer status
## Output:
Number of Peers: 2

Hostname: yq01-face-p4-dev.yq01.baidu.com
Uuid: 2846828b-c6f1-4f56-8616-c8c53c32e763
State: Peer in Cluster (Connected)
Other names:
10.156.28.11

Hostname: 10.74.234.109
Uuid: d9cfdb0e-8399-4fb2-90ba-5c0795f3a9c1
State: Peer in Cluster (Connected)

# On node3, check the cluster peer status
gluster peer status
## Output:
Number of Peers: 2

Hostname: yq01-face-p4-dev.yq01.baidu.com
Uuid: 2846828b-c6f1-4f56-8616-c8c53c32e763
State: Peer in Cluster (Connected)
Other names:
10.156.28.11

Hostname: 10.255.83.23
Uuid: 0a4724ff-fdd5-4664-9084-51be940b1223
State: Peer in Cluster (Connected)
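For scripted health checks, the `gluster peer status` output can be parsed directly. A minimal sketch using a sample copied from the node1 output above (in practice, capture the live command instead):

```shell
# Count connected peers in "gluster peer status" output.
# "$status" holds a sample of the output shown above; in a real check,
# replace it with: status=$(gluster peer status)
status='Number of Peers: 2

Hostname: 10.255.83.23
Uuid: 0a4724ff-fdd5-4664-9084-51be940b1223
State: Peer in Cluster (Connected)

Hostname: 10.74.234.109
Uuid: d9cfdb0e-8399-4fb2-90ba-5c0795f3a9c1
State: Peer in Cluster (Connected)'

connected=$(printf '%s\n' "$status" | grep -c 'State: Peer in Cluster (Connected)')
echo "connected peers: $connected"
```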

Configuring a GlusterFS volume

# On every node, create a brick directory

mkdir -p /home/brick

 
# On any one node, create the volume "data".
# "replica 3" makes it a three-way replicated volume, matching the
# "Type: Replicate" info output below.

gluster volume create data replica 3 IP-ADDRESS-01:/glusterfs-easydata/brick IP-ADDRESS-02:/glusterfs-easydata/brick IP-ADDRESS-03:/glusterfs-easydata/brick force

# Start the volume
gluster volume start data
# View the volume information
gluster volume info
## Output:
Volume Name: data
Type: Replicate
Volume ID: 39a0d513-ba64-4fd5-a6d0-a030a79db639
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: 10.156.28.11:/home/disk1/data
Brick2: 10.255.83.23:/home/disk1/data
Brick3: 10.74.234.109:/home/disk1/data
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
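The brick list passed to `gluster volume create` can be generated from a node array, which keeps node additions in one place. A sketch using the node IPs and brick path from this setup; the final command is echoed rather than executed so it can be reviewed first:

```shell
# Build a "gluster volume create" command from a list of nodes.
nodes=(10.156.28.11 10.255.83.23 10.74.234.109)
bricks=()
for n in "${nodes[@]}"; do
    bricks+=("$n:/glusterfs-easydata/brick")
done

# Echoed for review; drop the leading "echo" to actually create the volume.
echo gluster volume create data replica "${#nodes[@]}" "${bricks[@]}" force
```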

 

测试使用glusterfs 卷

# 将卷data 挂载到其中一个server上。 这里不需要一定挂载在server上,可以挂载在client上,需要安装gluster-client;
# 假设挂载到node1的 /home/disk1/mnt 目录下:
mount -t glusterfs 10.156.28.11:/data /gfs-user-data # 看看是否挂载成功
df -Th /gfs-user-data
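After mounting, `df -Th` should report the filesystem type as `fuse.glusterfs`, and checking that field makes an easy scripted sanity check. A sketch using a sample `df` line (the sizes are illustrative only; in a real check, feed it the live output of `df -Th /gfs-user-data | tail -1`):

```shell
# Extract the filesystem type (second column of "df -Th" output).
# The sample line below is illustrative; the sizes are not real.
line='10.156.28.11:/data fuse.glusterfs 9.0T 1.1T 7.9T 12% /gfs-user-data'
fstype=$(printf '%s\n' "$line" | awk '{print $2}')
if [ "$fstype" = "fuse.glusterfs" ]; then
    echo "glusterfs volume mounted"
fi
```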

This concludes the GlusterFS installation and configuration tutorial.
