Notes on building a team Kubernetes environment (the initial design; many improvements have been made since, e.g. Prometheus)

2023-02-12


Hosts:

hostname               OS           purpose                                                                                                            ip
ub2-citst001.abc.com   ubuntu16.04  docker registry                                                                                                    10.239.220.38
centos-k8s001.abc.com  centos7.3    haproxy + keepalived + etcd (leader)                                                                               10.239.219.154
centos-k8s002.abc.com  centos7.3    haproxy + keepalived + etcd                                                                                        10.239.219.153
centos-k8s003.abc.com  centos7.3    etcd + nginx + ELK (elasticsearch, logstash, kibana)                                                               10.239.219.206
centos-k8s004.abc.com  centos7.3    k8s master (kube-apiserver, kube-controller-manager, kube-scheduler)                                               10.239.219.207
centos-k8s005.abc.com  centos7.3    k8s slave (kube-proxy, kubelet, docker, flanneld) + OSS service + ELK (elasticsearch + filebeat)                   10.239.219.208
centos-k8s006.abc.com  centos7.3    k8s slave (kube-proxy, kubelet, docker, flanneld) + mysql master + OSS service + ELK (elasticsearch + filebeat)    10.239.219.210
centos-k8s007.abc.com  centos7.3    k8s slave (kube-proxy, kubelet, docker, flanneld) + mysql slave + OSS service + ELK (elasticsearch + filebeat)     10.239.219.209
 
The cluster build breaks down into the following parts:
        1. Building a private Docker registry
        2. etcd cluster
        3. Deploying the k8s cluster (1 master + 3 slaves)
        4. MySQL (MyCat) master-slave replication (on k8s + Docker)
        5. Building the web service image
        6. haproxy + keepalived (active-active) load balancing
        7. Automated MySQL backups
        8. ELK log management
 
Network preparation:
    1. If your network requires a proxy, edit the shell profile:

vi /etc/profile

    Append the following lines (use your proxy's IP or domain and port):

export http_proxy=http://<ip or domain>:<port>
export https_proxy=http://<ip or domain>:<port>
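To apply the change in the current shell and confirm the variables were picked up, a quick check (not part of the original notes):

source /etc/profile
env | grep -i _proxy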

 
 

 
     The default package sources for Ubuntu and CentOS are slow from our location, so we switch to the domestic NetEase (163) mirrors:
     Ubuntu:

1. Back up the original source list
sudo cp /etc/apt/sources.list /etc/apt/sources_init.list

 
 

Keep a backup of the old list in case it is needed later.

2. Replace the source list
sudo vi /etc/apt/sources.list

 
 

Replace the file contents with the following (note: the release codename should match your Ubuntu version, e.g. xenial for 16.04; the original notes used wily), then save and quit with :wq
deb http://mirrors.163.com/ubuntu/ wily main restricted universe multiverse
deb http://mirrors.163.com/ubuntu/ wily-security main restricted universe multiverse
deb http://mirrors.163.com/ubuntu/ wily-updates main restricted universe multiverse
deb http://mirrors.163.com/ubuntu/ wily-proposed main restricted universe multiverse
deb http://mirrors.163.com/ubuntu/ wily-backports main restricted universe multiverse
deb-src http://mirrors.163.com/ubuntu/ wily main restricted universe multiverse
deb-src http://mirrors.163.com/ubuntu/ wily-security main restricted universe multiverse
deb-src http://mirrors.163.com/ubuntu/ wily-updates main restricted universe multiverse
deb-src http://mirrors.163.com/ubuntu/ wily-proposed main restricted universe multiverse
deb-src http://mirrors.163.com/ubuntu/ wily-backports main restricted universe multiverse
3. Update

Refresh the package index:
sudo apt-get update

 
 

CentOS:
    mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.backup
    cd /etc/yum.repos.d/
    wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.163.com/.help/CentOS7-Base-163.repo
    yum clean all 
    yum makecache
    yum -y update
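A quick way to confirm the new mirror is in effect (not in the original notes) is to list the enabled repositories and check that the baseurl points at mirrors.163.com:

    yum repolist enabled
    grep baseurl /etc/yum.repos.d/CentOS-Base.repo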
 
Part 1: Building the private Docker registry

    1.    Install Docker on Ubuntu & CentOS:
            See: https://docs.docker.com/install/linux/docker-ce/centos/      https://docs.docker.com/install/linux/docker-ce/ubuntu/
    2.    If you are inside the SH network domain, you can configure a registry mirror to speed up pulls:
            vim /etc/docker/daemon.json (create it if it does not exist) and write:

{
    "registry-mirrors": ["http://hub-mirror.c.163.com"]
}

 
 

    3.    Restart Docker:

systemctl restart docker

 
 

    4.    Test that Docker works:

docker pull hello-world

 
 

        
            If you see “Error response from daemon: Get https://registry-1.docker.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)”, the Docker daemon itself needs a proxy:
            1). Create the docker.service.d directory:
sudo mkdir -p /etc/systemd/system/docker.service.d

 
 

            2). Create the proxy configuration file:
vim /etc/systemd/system/docker.service.d/http-proxy.conf

 
 

            3). Add the following to http-proxy.conf:

[Service]
Environment="HTTP_PROXY=xxx.xxx.xxx.xxx:port"
Environment="HTTPS_PROXY=xxx.xxx.xxx.xxx:port"

 
 

            4). Reload systemd and restart Docker:
               systemctl daemon-reload
               service docker restart
            5). Test again with docker pull hello-world; output similar to the following means the image was downloaded:

Using default tag: latest
latest: Pulling from library/hello-world
d1725b59e92d: Pull complete
Digest: sha256:0add3ace90ecb4adbf7777e9aacf18357296e799f81cabc9fde470971e499788
Status: Downloaded newer image for hello-world:latest

 
 

            6). List local images:
docker images

 
 

                hello-world shows up in the image list; the first Docker image has been pulled successfully!
    5.    Setting up the private registry:
            To store images and manage them centrally, we need a private registry. ub2-citst001 serves as the image management host & private registry; every machine in the cluster pulls the images it needs from this host.
            1)  Pull the registry image:
 docker pull index.tenxcloud.com/docker_library/registry

 
 

            2)   Re-tag the registry image:
docker tag index.tenxcloud.com/docker_library/registry:latest registry

 
 

            3)  Check the images:
docker images

 
 

 
  

     4)  The registry stores its data in /var/lib/registry inside the container, so create a directory on the host to hold (bind-mount) the data from /var/lib/registry, then start the registry container:

mkdir -p /docker/registry/
docker run -d -p 5000:5000 --name registry --restart=always --privileged=true -v /docker/registry:/var/lib/registry registry

 
 

                    Parameter notes:
                        -p 5000:5000      map host port 5000 to container port 5000
                        --restart=always  Docker restarts this container whenever it stops, for any reason
                        --privileged=true grant the container extended privileges so it is allowed to run the registry
                        -v                bind-mount the host directory into the container
         5) Check that the container is running: docker ps

           6)    Point Docker at the registry address:
                   vim /etc/docker/daemon.json and replace the contents with:
{
    "registry-mirrors": ["http://hub-mirror.c.163.com"],
    "insecure-registries": ["<host ip or domain>:5000"]
}

          7)    Restart Docker:

systemctl restart docker
          8)   Test:
                   To push hello-world to the private registry, first re-tag the image:
                   docker tag hello-world ub2-citst001.abc.com:5000/hello-world
                  Push:  docker push ub2-citst001.abc.com:5000/hello-world
                  Pull:  docker pull ub2-citst001.abc.com:5000/hello-world
                   To test from another machine:
                       edit /etc/docker/daemon.json (create it if it does not exist):

{
    "registry-mirrors": ["http://hub-mirror.c.163.com"],
    "insecure-registries": ["<registry ip or domain>:5000"]
}

systemctl daemon-reload
service docker restart
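As an extra sanity check (not in the original notes, and assuming the standard v2 registry image), the registry's HTTP API can be queried to confirm the pushed image is stored:

# list the repositories held by the private registry
curl http://ub2-citst001.abc.com:5000/v2/_catalog
# expected output similar to: {"repositories":["hello-world"]}
# if the proxy variables from the network preparation step are set, add --noproxy '*' to the curl call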

 
 
 
 
 

 
 
 

 

 
 

Part 2: Building the etcd cluster

About etcd

etcd is a highly available distributed key-value store. Internally it uses the Raft protocol for consensus, and it is implemented in Go.

etcd is a service-discovery system with the following characteristics:

Simple: easy to install and configure, with an HTTP API for interaction

Secure: supports SSL certificate verification

Fast: according to the official benchmarks, a single instance handles 2k+ reads per second

Reliable: uses the Raft algorithm to keep distributed data available and consistent

etcd use cases

Service discovery. Service discovery addresses one of the most common problems in distributed systems: how processes or services in the same cluster find each other and establish a connection.

Solving service discovery requires the following properties:

- A strongly consistent, highly available registry of services.

etcd, being built on Raft, is by nature exactly such a strongly consistent, highly available registry.

- A mechanism for registering services and monitoring their health.
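To make the key-value / service-registry idea concrete, here is a minimal sketch using etcd v2's etcdctl (installed below); the key names and the endpoint value are made up for illustration:

# register a service endpoint under a directory key
etcdctl set /services/oss/node1 http://centos-k8s005:8080
# discover the registered endpoints
etcdctl ls /services/oss
etcdctl get /services/oss/node1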

Installing etcd

Install etcd on centos-k8s001, centos-k8s002 and centos-k8s003 to form the etcd cluster.
It can be installed directly on the hosts, or deployed with Docker.
Host install:
    1. Run on each of the three machines: yum install etcd -y
    2. The etcd installed by yum keeps its default config in /etc/etcd/etcd.conf; edit that file.
        centos-k8s001:

# [member]
# node name
ETCD_NAME=centos-k8s001
# data directory
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
#ETCD_WAL_DIR=""
#ETCD_SNAPSHOT_COUNT="10000"
#ETCD_HEARTBEAT_INTERVAL="100"
#ETCD_ELECTION_TIMEOUT="1000"
# address to listen on for traffic from other etcd members
ETCD_LISTEN_PEER_URLS="http://0.0.0.0:2380"
# address to listen on for client traffic
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379,http://0.0.0.0:4001"
#ETCD_MAX_SNAPSHOTS="5"
#ETCD_MAX_WALS="5"
#ETCD_CORS=""
#
#[cluster]
# peer URL advertised to the other etcd members
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://centos-k8s001:2380"
# if you use different ETCD_NAME (e.g. test), set ETCD_INITIAL_CLUSTER value for this name, i.e. "test=http://..."
# initial list of cluster members
ETCD_INITIAL_CLUSTER="centos-k8s001=http://centos-k8s001:2380,centos-k8s002=http://centos-k8s002:2380,centos-k8s003=http://centos-k8s003:2380"
# initial cluster state; "new" means a brand-new cluster
ETCD_INITIAL_CLUSTER_STATE="new"
# initial cluster token
ETCD_INITIAL_CLUSTER_TOKEN="mritd-etcd-cluster"
# client URLs advertised to clients
ETCD_ADVERTISE_CLIENT_URLS="http://centos-k8s001:2379,http://centos-k8s001:4001"
centos-k8s002:

# [member]
ETCD_NAME=centos-k8s002
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
#ETCD_WAL_DIR=""
#ETCD_SNAPSHOT_COUNT="10000"
#ETCD_HEARTBEAT_INTERVAL="100"
#ETCD_ELECTION_TIMEOUT="1000"
ETCD_LISTEN_PEER_URLS="http://0.0.0.0:2380"
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379,http://0.0.0.0:4001"
#ETCD_MAX_SNAPSHOTS="5"
#ETCD_MAX_WALS="5"
#ETCD_CORS=""
#
#[cluster]

ETCD_INITIAL_ADVERTISE_PEER_URLS="http://centos-k8s002:2380"
# if you use different ETCD_NAME (e.g. test), set ETCD_INITIAL_CLUSTER value for this name, i.e. "test=http://..."
ETCD_INITIAL_CLUSTER="centos-k8s001=http://centos-k8s001:2380,centos-k8s002=http://centos-k8s002:2380,centos-k8s003=http://centos-k8s003:2380"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_INITIAL_CLUSTER_TOKEN="mritd-etcd-cluster"
ETCD_ADVERTISE_CLIENT_URLS="http://centos-k8s002:2379,http://centos-k8s002:4001"
centos-k8s003:

# [member]
ETCD_NAME=centos-k8s003
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
#ETCD_WAL_DIR=""
#ETCD_SNAPSHOT_COUNT="10000"
#ETCD_HEARTBEAT_INTERVAL="100"
#ETCD_ELECTION_TIMEOUT="1000"
ETCD_LISTEN_PEER_URLS="http://0.0.0.0:2380"
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379,http://0.0.0.0:4001"
#ETCD_MAX_SNAPSHOTS="5"
#ETCD_MAX_WALS="5"
#ETCD_CORS=""
#
#[cluster]
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://centos-k8s003:2380"
# if you use different ETCD_NAME (e.g. test), set ETCD_INITIAL_CLUSTER value for this name, i.e. "test=http://..."
ETCD_INITIAL_CLUSTER="centos-k8s001=http://centos-k8s001:2380,centos-k8s002=http://centos-k8s002:2380,centos-k8s003=http://centos-k8s003:2380"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_INITIAL_CLUSTER_TOKEN="mritd-etcd-cluster"
ETCD_ADVERTISE_CLIENT_URLS="http://centos-k8s003:2379,http://centos-k8s003:4001"
3. Restart the etcd service on each node:
systemctl restart etcd

 
 

4. Check whether the cluster formed successfully:
etcdctl member list

 
 

        Output like the following means the cluster is up:
4cb07e7292111d83: name=etcd1 peerURLs=http://centos-k8s001:2380 clientURLs=http://centos-k8s001:2379,http://centos-k8s001:4001 isLeader=true
713da186acaefc5b: name=etcd2 peerURLs=http://centos-k8s002:2380 clientURLs=http://centos-k8s002:2379,http://centos-k8s002:4001 isLeader=false
fabaedd18a2da8a7: name=etcd3 peerURLs=http://centos-k8s003:2380 clientURLs=http://centos-k8s003:2379,http://centos-k8s003:4001 isLeader=false
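Beyond the member list, etcd v2's etcdctl can also report overall cluster health; a quick extra check (not in the original notes):

etcdctl cluster-health
# expect "cluster is healthy" plus one "member ... is healthy" line per node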

 
 
 
 
 

 
 
 

 

 
 

    Docker-based deployment:
        1. Make sure Docker is installed on all three machines and that they can pull from the private registry.
            On ub2-citst001, pull the etcd image:
docker pull quay.io/coreos/etcd
       2. Re-tag the image and push it to the private registry:
docker tag quay.io/coreos/etcd ub2-citst001.abc.com:5000/quay.io/coreos/etcd
docker push ub2-citst001.abc.com:5000/quay.io/coreos/etcd
       3. On each of centos-k8s001 through centos-k8s003, pull the etcd image:
docker pull ub2-citst001.abc.com:5000/quay.io/coreos/etcd
       4. Start and configure the etcd containers:
// start on centos-k8s001
docker run -d --name etcd -p 2379:2379 -p 2380:2380 -p 4001:4001 --restart=always --volume=etcd-data:/etcd-data ub2-citst001.abc.com:5000/quay.io/coreos/etcd /usr/local/bin/etcd --data-dir=/etcd-data --name etcd1 --initial-advertise-peer-urls http://centos-k8s001:2380 --listen-peer-urls http://0.0.0.0:2380 --advertise-client-urls http://centos-k8s001:2379,http://centos-k8s001:4001 --listen-client-urls http://0.0.0.0:2379 --initial-cluster-state new --initial-cluster-token docker-etcd --initial-cluster etcd1=http://centos-k8s001:2380,etcd2=http://centos-k8s002:2380,etcd3=http://centos-k8s003:2380
// start on centos-k8s002
docker run -d --name etcd -p 2379:2379 -p 2380:2380 -p 4001:4001 --restart=always --volume=etcd-data:/etcd-data ub2-citst001.abc.com:5000/quay.io/coreos/etcd /usr/local/bin/etcd --data-dir=/etcd-data --name etcd2 --initial-advertise-peer-urls http://centos-k8s002:2380 --listen-peer-urls http://0.0.0.0:2380 --advertise-client-urls http://centos-k8s002:2379,http://centos-k8s002:4001 --listen-client-urls http://0.0.0.0:2379 --initial-cluster-state new --initial-cluster-token docker-etcd --initial-cluster etcd1=http://centos-k8s001:2380,etcd2=http://centos-k8s002:2380,etcd3=http://centos-k8s003:2380
// start on centos-k8s003
docker run -d --name etcd -p 2379:2379 -p 2380:2380 -p 4001:4001 --restart=always --volume=etcd-data:/etcd-data ub2-citst001.abc.com:5000/quay.io/coreos/etcd /usr/local/bin/etcd --data-dir=/etcd-data --name etcd3 --initial-advertise-peer-urls http://centos-k8s003:2380 --listen-peer-urls http://0.0.0.0:2380 --advertise-client-urls http://centos-k8s003:2379,http://centos-k8s003:4001 --listen-client-urls http://0.0.0.0:2379 --initial-cluster-state new --initial-cluster-token docker-etcd --initial-cluster etcd1=http://centos-k8s001:2380,etcd2=http://centos-k8s002:2380,etcd3=http://centos-k8s003:2380

       5. Enter any one of the etcd containers and check the cluster status:
docker exec -it <container name or id> /bin/sh
// now inside the container, run:
etcdctl member list
// output like the following means the containerized etcd cluster is up:
4cb07e7292111d83: name=etcd1 peerURLs=http://centos-k8s001:2380 clientURLs=http://centos-k8s001:2379,http://centos-k8s001:4001 isLeader=true
713da186acaefc5b: name=etcd2 peerURLs=http://centos-k8s002:2380 clientURLs=http://centos-k8s002:2379,http://centos-k8s002:4001 isLeader=false
fabaedd18a2da8a7: name=etcd3 peerURLs=http://centos-k8s003:2380 clientURLs=http://centos-k8s003:2379,http://centos-k8s003:4001 isLeader=false
Part 3: Deploying the k8s cluster
      With the etcd cluster in place, we deploy the k8s cluster: 1 master + 3 slaves.
      master: centos-k8s004
      slaves: centos-k8s005, centos-k8s006, centos-k8s007
      Components required:
        flannel                   cross-host container networking
        kube-apiserver            exposes the Kubernetes cluster API
        kube-controller-manager   keeps the cluster's services in their desired state
        kube-scheduler            schedules containers onto Nodes
        kubelet                   starts containers on each Node according to the defined pod specs
        kube-proxy                provides the network proxy / service routing

•Prerequisites

Run the following on all 4 machines above (centos-k8s004 through centos-k8s007):

1. Make sure the epel-release repository is installed:

# yum -y install epel-release

2. Stop and disable the firewall service and disable SELinux, to avoid conflicts with the firewall rules Docker manages for containers:

# systemctl stop firewalld
# systemctl disable firewalld
# setenforce 0

•Install and configure the Kubernetes Master

Run the following on the master:

1. Install etcd and kubernetes-master with yum:

# yum -y install kubernetes-master

2. Edit /etc/kubernetes/apiserver:

# The address on the local server to listen to.
KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"
# The port on the local server to listen on.
#KUBE_API_PORT="--port=8080"
# Port minions listen on
KUBELET_PORT="--kubelet-port=10250"
# Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd-servers=http://centos-k8s001:2379,http://centos-k8s002:2379,http://centos-k8s003:2379"
# Address range to use for services
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
# default admission control policies
#KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota"
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,ResourceQuota"
# Add your own!
KUBE_API_ARGS=""

3. Start etcd, kube-apiserver, kube-controller-manager and kube-scheduler, and enable them at boot:

for SERVICES in etcd kube-apiserver kube-controller-manager kube-scheduler; do systemctl restart $SERVICES;systemctl enable $SERVICES;systemctl status $SERVICES ; done
// check an individual service with systemctl status, e.g.:
systemctl status kube-apiserver
4. Define the flannel network in etcd on centos-k8s001:

// if etcd runs in a container, first:
docker ps                                // find the etcd container id
docker exec -it <container id> /bin/sh   // enter the container, then run the command below
// if etcd runs directly on the host, just run:
etcdctl mk /atomic.io/network/config '{"Network":"172.17.0.0/16"}'   // define the flannel network in etcd

// check the flannel config from another etcd node:
// e.g. on centos-k8s002:
etcdctl get /atomic.io/network/config
// expected output:
{"Network":"172.17.0.0/16"}   // configured successfully
•Install and configure the Kubernetes Nodes

Run the following on centos-k8s005, centos-k8s006 and centos-k8s007:

1. Install flannel and kubernetes-node with yum:

# yum -y install flannel kubernetes-node

2. Point flannel at the etcd service by editing /etc/sysconfig/flanneld:

FLANNEL_ETCD="http://centos-k8s001:2379"
FLANNEL_ETCD_KEY="/atomic.io/network"

3. Edit /etc/kubernetes/config:

KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=0"
KUBE_ALLOW_PRIV="--allow-privileged=false"
KUBE_MASTER="--master=http://centos-k8s004:8080"

4. Edit /etc/kubernetes/kubelet on each Node as follows:

centos-k8s005:

KUBELET_ADDRESS="--address=0.0.0.0"
KUBELET_PORT="--port=10250"
KUBELET_HOSTNAME="--hostname-override=centos-k8s005" # set to the corresponding Node's hostname
KUBELET_API_SERVER="--api-servers=http://centos-k8s004:8080" # points at the Master's API server
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"
KUBELET_ARGS=""

centos-k8s006:

KUBELET_ADDRESS="--address=0.0.0.0"
KUBELET_PORT="--port=10250"
KUBELET_HOSTNAME="--hostname-override=centos-k8s006" 
KUBELET_API_SERVER="--api-servers=http://centos-k8s004:8080"
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"
KUBELET_ARGS=""

centos-k8s007:

KUBELET_ADDRESS="--address=0.0.0.0"
KUBELET_PORT="--port=10250"
KUBELET_HOSTNAME="--hostname-override=centos-k8s007"
KUBELET_API_SERVER="--api-servers=http://centos-k8s004:8080"
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"
KUBELET_ARGS=""

5. On every Node, start kube-proxy, kubelet, docker and flanneld, and enable them at boot:

# for SERVICES in kube-proxy kubelet docker flanneld;do systemctl restart $SERVICES;systemctl enable $SERVICES;systemctl status $SERVICES; done

•Verify the cluster

Run the following on the master, centos-k8s004:

[root@centos-k8s004 ~]# kubectl get node
NAME            STATUS    AGE
centos-k8s005   Ready     38d
centos-k8s006   Ready     38d
centos-k8s007   Ready     37d
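As a further smoke test (not in the original notes) you can schedule a throwaway deployment and watch where the pod lands; the image path is only an example and assumes an nginx image has already been pushed to the private registry:

kubectl run test-nginx --image=ub2-citst001.abc.com:5000/nginx --replicas=1
kubectl get pods -o wide
# clean up afterwards
kubectl delete deployment test-nginx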

    Part 4: MySQL master-slave replication (on k8s + Docker)
      First we need to build the mysql-master and mysql-slave images:
       1. On ub2-citst001 (where all images are built, pulled and managed), create a folder mysql-build_file in the root directory with the following layout:

`-- mysql-build_file
    |-- master
    |   |-- docker-entrypoint.sh
    |   |-- Dockerfile
    |   `-- my.cnf
    `-- slave
        |-- docker-entrypoint.sh
        |-- Dockerfile
        `-- my.cnf

# master/docker-entrypoint.sh:
#!/bin/bash
MYSQL="mysql -uroot -proot"
sql="CREATE USER '$MYSQL_REPLICATION_USER'@'%' IDENTIFIED BY '$MYSQL_REPLICATION_PASSWORD'"
$MYSQL -e "$sql"
sql="GRANT REPLICATION SLAVE ON *.* TO '$MYSQL_REPLICATION_USER'@'%' IDENTIFIED BY '$MYSQL_REPLICATION_PASSWORD'"
$MYSQL -e "$sql"
sql="FLUSH PRIVILEGES"
$MYSQL -e "$sql"

# master/my.cnf :
[mysqld]
log-bin = mysql-bin
server-id = 1
character_set_server=utf8
log_bin_trust_function_creators=1
skip-name-resolve
binlog_format = mixed
relay-log = relay-bin
relay-log-index = slave-relay-bin.index
auto-increment-increment = 2
auto-increment-offset = 1

# master/Dockerfile:
FROM mysql:5.6
ENV http_proxy http://child-prc.abc.com:913
ENV https_proxy https://child-prc.abc.com:913
COPY my.cnf /etc/mysql/mysql.cnf
COPY docker-entrypoint.sh /docker-entrypoint-initdb.d/

#slave/docker-entrypoint.sh:
#!/bin/bash
MYSQL="mysql -uroot -proot"
MYSQL_MASTER="mysql -uroot -proot -h$MYSQL_MASTER_SERVICE_HOST -P$MASTER_PORT"
sql="stop slave"
$MYSQL -e "$sql"
sql="SHOW MASTER STATUS"
result="$($MYSQL_MASTER -e "$sql")"
dump_data=/master-condition.log
echo -e "$result" > $dump_data
var=$(cat /master-condition.log | grep mysql-bin)
MASTER_LOG_FILE=$(echo $var | awk '{split($0,arr," ");print arr[1]}')
MASTER_LOG_POS=$(echo $var | awk '{split($0,arr," ");print arr[2]}')
sql="reset slave"
$MYSQL -e "$sql"
sql="CHANGE MASTER TO master_host='$MYSQL_MASTER_SERVICE_HOST', master_user='$MYSQL_REPLICATION_USER', master_password='$MYSQL_REPLICATION_PASSWORD', master_log_file='$MASTER_LOG_FILE', master_log_pos=$MASTER_LOG_POS, master_port=$MASTER_PORT"
$MYSQL -e "$sql"
sql="start slave"
$MYSQL -e "$sql"

#slave/my.cnf:
[mysqld]
log-bin = mysql-bin
# server-id must be unique and must not clash with any other mysql image
server-id = 2
character_set_server=utf8
log_bin_trust_function_creators=1

#slave/Dockerfile:
FROM mysql:5.6
ENV http_proxy http://child-prc.abc.com:913
ENV https_proxy https://child-prc.abc.com:913
COPY my.cnf /etc/mysql/mysql.cnf
COPY docker-entrypoint.sh /docker-entrypoint-initdb.d/
RUN touch master-condition.log && chown -R mysql:mysql /master-condition.log
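The slave's entrypoint above points replication at the master automatically. Once both pods are running (see the deployment steps below), a quick way to confirm replication is healthy is to query the slave; the pod name here is illustrative, taken from the kubectl output further down:

kubectl exec mysql-slave-3654103728-0sxsm -- mysql -uroot -proot -e "SHOW SLAVE STATUS\G" | grep -E "Slave_IO_Running|Slave_SQL_Running|Seconds_Behind_Master"
# both *_Running fields should read "Yes"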

 
 
 
 
 

    2. Build the images from the Dockerfiles:
cd /mysql-build_file/master
docker build -t ub2-citst001.abc.com:5000/mysql-master .
# output like the following means the build succeeded:
#Sending build context to Docker daemon 4.096kB
#Step 1/5 : FROM mysql:5.6
# ---> a46c2a2722b9
#Step 2/5 : ENV http_proxy http://child-prc.abc.com:913
# ---> Using cache
# ---> 873859820af7
#Step 3/5 : ENV https_proxy https://child-prc.abc.com:913
# ---> Using cache
# ---> b5391bed1bda
#Step 4/5 : COPY my.cnf /etc/mysql/mysql.cnf
# ---> Using cache
# ---> ccbdced047a3
#Step 5/5 : COPY docker-entrypoint.sh /docker-entrypoint-initdb.d/
# ---> Using cache
# ---> 81cfad9f0268
#Successfully built 81cfad9f0268
#Successfully tagged ub2-citst001.sh.abc.com:5000/mysql-master
cd /mysql-build_file/slave
docker build -t ub2-citst001.abc.com:5000/mysql-slave .

# list the images
docker images

# push the images to the private registry:
docker push ub2-citst001.abc.com:5000/mysql-slave
docker push ub2-citst001.abc.com:5000/mysql-master
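The Deployments in the next step pull these images by name, so it is worth confirming that a cluster node can fetch them from the private registry (a quick check, not in the original notes), e.g. on centos-k8s006:

docker pull ub2-citst001.abc.com:5000/mysql-master
docker pull ub2-citst001.abc.com:5000/mysql-slave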

 
 
 
 
 

    3. Deploy the MySQL master and slave on centos-k8s006 and centos-k8s007:
# run on centos-k8s004 (the k8s master):
# create a folder file_for_k8s in the root directory to hold the yaml files
mkdir -p /file_for_k8s/MyCat
cd /file_for_k8s/MyCat/
mkdir master slave
cd master
# create mysql-master.yaml
touch mysql-master.yaml

# with the following content:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: mysql-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql-master
      release: stabel
  template:
    metadata:
      labels:
        name: mysql-master
        app: mysql-master
        release: stabel
    spec:
      containers:
      - name: mysql-master
        image: ub2-citst001.abc.com:5000/mysql-master
        volumeMounts:
        - name: mysql-config
          mountPath: /usr/data
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: "root"
        - name: MYSQL_REPLICATION_USER
          value: "slave"
        - name: MYSQL_REPLICATION_PASSWORD
          value: "slave"
        ports:
        - containerPort: 3306
          #hostPort: 4000
          name: mysql-master
      volumes:
      - name: mysql-config
        hostPath:
          path: /localdisk/NFS/mysqlData/master/
      nodeSelector:
        kubernetes.io/hostname: centos-k8s006

----------------------------------------------------------end-----------------------------------------------------------------
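The nodeSelector above pins the master pod to centos-k8s006 via the kubernetes.io/hostname label; to confirm the label values on your nodes before creating the Deployment (a quick check, not in the original notes):

kubectl get nodes --show-labels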

# create mysql-master-service.yaml
touch mysql-master-service.yaml
# with the following content:
apiVersion: v1
kind: Service
metadata:
  name: mysql-master
  namespace: default
spec:
  type: NodePort
  selector:
    app: mysql-master
    release: stabel
  ports:
  - name: http
    port: 3306
    nodePort: 31306
    targetPort: 3306
--------------------------------------------------------------end---------------------------------------------------------------------

cd ../slave/
# create mysql-slave.yaml
touch mysql-slave.yaml
# with the following content:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: mysql-slave
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql-slave
      release: stabel
  template:
    metadata:
      labels:
        app: mysql-slave
        name: mysql-slave
        release: stabel
    spec:
      containers:
      - name: mysql-slave
        image: ub2-citst001.abc.com:5000/mysql-slave
        volumeMounts:
        - name: mysql-config
          mountPath: /usr/data
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: "root"
        - name: MYSQL_REPLICATION_USER
          value: "slave"
        - name: MYSQL_REPLICATION_PASSWORD
          value: "slave"
        - name: MYSQL_MASTER_SERVICE_HOST
          value: "mysql-master"
        - name: MASTER_PORT
          value: "3306"
        ports:
        - containerPort: 3306
          name: mysql-slave
      volumes:
      - name: mysql-config
        hostPath:
          path: /localdisk/NFS/mysqlData/slave/
      nodeSelector:
        kubernetes.io/hostname: centos-k8s007

-----------------------------------------------------------------------end-------------------------------------------------------------------
# create mysql-slave-service.yaml
touch mysql-slave-service.yaml
# with the following content:
apiVersion: v1
kind: Service
metadata:
  name: mysql-slave
  namespace: default
spec:
  type: NodePort
  selector:
    app: mysql-slave
    release: stabel
  ports:
  - name: http
    port: 3306
    nodePort: 31307
    targetPort: 3306
-----------------------------------------------------------------------end-----------------------------------------------------------------

# create the mysql master/slave instances:
cd ../master/
kubectl create -f mysql-master.yaml
kubectl create -f mysql-master-service.yaml
cd ../slave/
kubectl create -f mysql-slave.yaml
kubectl create -f mysql-slave-service.yaml

# check the deployments, on centos-k8s004:
[root@centos-k8s004 slave]# kubectl get deployment
NAME           DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
mysql-master   1         1         1            1           39d
mysql-slave    1         1         1            1           39d
# check the services:
[root@centos-k8s004 slave]# kubectl get svc
NAME           CLUSTER-IP        EXTERNAL-IP   PORT(S)          AGE
kubernetes     192.168.0.1       <none>        443/TCP          5d
mysql-master   192.168.45.209    <nodes>       3306:31306/TCP   5h
mysql-slave    192.168.230.192   <nodes>       3306:31307/TCP   5h
# AVAILABLE may take 2-15s to catch up with DESIRED, since pods need time to start. If it stays at 0 or out of sync for a long time, inspect the pods with:
[root@centos-k8s004 master]# kubectl describe pod
Name:           mysql-master-4291032429-0fzsr
Namespace:      default
Node:           centos-k8s006/10.239.219.210
Start Time:     Wed, 31 Oct 2018 18:56:06 +0800
Labels:         name=mysql-master
                pod-template-hash=4291032429
Status:         Running
IP:             172.17.44.2
Controllers:    ReplicaSet/mysql-master-4291032429
Containers:
  master:
    Container ID:       docker://674de0971fe2aa16c7926f345d8e8b2386278b14dedd826653e7347559737e28
    Image:              ub2-citst001.abc.com:5000/mysql-master
    Image ID:           docker-pullable://ub2-citst001.abc.com:5000/mysql-master@sha256:bc286c1374a3a5f18ae56bd785a771ffe0fad15567d56f8f67a615c606fb4e0d
    Port:               3306/TCP
    State:              Running
      Started:          Wed, 31 Oct 2018 18:56:07 +0800
    Ready:              True
    Restart Count:      0
    Volume Mounts:
      /usr/data from mysql-config (rw)
    Environment Variables:
      MYSQL_ROOT_PASSWORD:        root
      MYSQL_REPLICATION_USER:     slave
      MYSQL_REPLICATION_PASSWORD: slave
Conditions:
  Type          Status
  Initialized   True
  Ready         True
  PodScheduled  True
Volumes:
  mysql-config:
    Type:       HostPath (bare host directory volume)
    Path:       /localdisk/NFS/mysqlData/master/
QoS Class:      BestEffort
Tolerations:    <none>
No events.

Name:           mysql-slave-3654103728-0sxsm
Namespace:      default
Node:           centos-k8s007/10.239.219.209
Start Time:     Wed, 31 Oct 2018 18:56:19 +0800
Labels:         name=mysql-slave
                pod-template-hash=3654103728
Status:         Running
IP:             172.17.16.2
Controllers:    ReplicaSet/mysql-slave-3654103728
Containers:
  slave:
    Container ID:       docker://d52f4f1e57d6fa6a7c04f1a9ba63fa3f0af778df69a3190c4f35f755f225fb50
    Image:              ub2-citst001.abc.com:5000/mysql-slave
    Image ID:           docker-pullable://ub2-citst001.abc.com:5000/mysql-slave@sha256:6a1c7cbb27184b966d2557bf53860daa439b7afda3d4aa5498844d4e66f38f47
    Port:               3306/TCP
    State:              Running
      Started:          Fri, 02 Nov 2018 13:49:48 +0800
    Last State:         Terminated
      Reason:           Completed
      Exit Code:        0
      Started:          Wed, 31 Oct 2018 18:56:20 +0800
      Finished:         Fri, 02 Nov 2018 13:49:47 +0800
    Ready:              True
    Restart Count:      1
    Volume Mounts:
      /usr/data from mysql-config (rw)
    Environment Variables:
      MYSQL_ROOT_PASSWORD:        root
      MYSQL_REPLICATION_USER:     slave
      MYSQL_REPLICATION_PASSWORD: slave
      MYSQL_MASTER_SERVICE_HOST:  centos-k8s006
      MASTER_PORT:                4000
Conditions:
  Type          Status
  Initialized   True
  Ready         True
  PodScheduled  True
Volumes:
  mysql-config:
    Type:       HostPath (bare host directory volume)
    Path:       /localdisk/NFS/mysqlData/slave/
QoS Class:      BestEffort
Tolerations:    <none>
No events.

# if you hit the following error, fix it as described below:
Error syncing pod, skipping: failed to "StartContainer" for "POD" with ErrImagePull: "image pull failede:latest, this may be because there are no credentials on this request. details: (open /etc/docker/certs.d/registry.access.redhat.com/redhat-ca.crt: no such file or directory)"
# fix
The problem is clear enough: /etc/docker/certs.d/registry.access.redhat.com/redhat-ca.crt is missing. ls -l shows it is a symlink to /etc/rhsm/ca/redhat-uep.pem, which does not exist. Search for the package with yum search *rhsm* and
install the python-rhsm-certificates package:
[root@centos-k8s004 master]# yum install python-rhsm-certificates -y
This runs into another problem:
python-rhsm-certificates <= 1.20.3-1 is obsoleted by (installed) subscription-manager-rhsm-certificates-1.20.11-1.el7.centos.x86_64
So we simply remove that package with yum remove subscription-manager-rhsm-certificates -y, then download the python-rhsm-certificates rpm:
# wget http://mirror.centos.org/centos/7/os/x86_64/Packages/python-rhsm-certificates-1.19.10-1.el7_4.x86_64.rpm
and install it manually:
# rpm -ivh python-rhsm-certificates-1.19.10-1.el7_4.x86_64.rpm
Now /etc/rhsm/ca/redhat-uep.pem exists.
On each node run:
yum install *rhsm* -y
Then delete /etc/docker/seccomp.json, restart Docker again, and run:
# docker pull registry.access.redhat.com/rhel7/pod-infrastructure:latest
Finally delete the previously created rc, svc and pods and recreate them; after a short while the pods start successfully.

# on centos-k8s006, docker ps shows the mysql-master container was created successfully (check mysql-slave the same way):
[root@centos-k8s006 master]# docker ps
674de0971fe2 ub2-citst001.abc.com:5000/mysql-master "docker-entrypoint..." 5 weeks ago Up 5 weeks k8s_master.1fa78e47_mysql-master-4291032429-0fzsr_default_914f7535-dcfb-11e8-9eb8-005056a654f2_3462901b
220e4d37915d registry.access.redhat.com/rhel7/pod-infrastructure:latest "/usr/bin/pod" 5 weeks ago Up 5 weeks 0.0.0.0:4000->3306/tcp k8s_POD.62220e6f_mysql-master-4291032429-0fzsr_default_914f7535-dcfb-11e8-9eb8-005056a654f2_d0d62756

# from now on the container can be managed through kubectl
[root@centos-k8s006 master]# kubectl exec -it mysql-master-4291032429-0fzsr /bin/bash
root@mysql-master-4291032429-0fzsr:/#
# create a database on the master and see whether it is replicated to the slave
root@mysql-master-4291032429-0fzsr:/# mysql -uroot -proot
mysql> create database test_database charset='utf8';
mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| test_database      |
| mysql              |
| performance_schema |
+--------------------+
# detach from the container:
Ctrl + p && Ctrl + q
# enter mysql-slave and check its databases:
[root@centos-k8s006 master]# kubectl exec -it mysql-slave-3654103728-0sxsm /bin/bash
root@mysql-slave-3654103728-0sxsm:/# mysql -uroot -proot
mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| test_database      |
+--------------------+
# as you can see, replication works!
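Because both Services are exposed as NodePorts (31306 for the master, 31307 for the slave, per the yaml above), MySQL is also reachable from outside the cluster through any node's address; a quick connectivity check from another host (assuming a mysql client is installed there):

mysql -h centos-k8s006 -P 31306 -uroot -proot -e "SHOW DATABASES;"
mysql -h centos-k8s007 -P 31307 -uroot -proot -e "SHOW DATABASES;"

Any node IP works for a NodePort Service; the two hostnames above are simply the nodes the pods happen to run on.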

 
 
 
 
 

x

 
 

 

 
 

1

# centos-k8s004(k8s master)上操作:

2

#根目录创建一个文件夹:file_for_k8s  用于存放yaml文件或

3

mkdir -p /file_for_k8s/MyCat

4

cd /file_for_k8s/MyCat/

5

mkdir master slave

6

cd master

7

#创建配置mysql-master.yaml文件

8

touch mysql-master.yaml

9


10

#内容如下:

11

apiVersion: extensions/v1beta1

12

kind: Deployment

13

metadata:

14

  name: mysql-master

15

spec:

16

  replicas: 1

17

  selector:

18

    matchLabels:

19

      app: mysql-master

20

      release: stabel

21

  template:

22

    metadata:

23

      labels:

24

        name: mysql-master

25

        app: mysql-master

26

        release: stabel

27

    spec:

28

      containers:

29

      - name: mysql-master

30

        image: ub2-citst001.abc.com:5000/mysql-master

31

        volumeMounts:

32

        - name: mysql-config

33

          mountPath: /usr/data

34

        env:

35

        - name: MYSQL_ROOT_PASSWORD

36

          value: "root"

37

        - name: MYSQL_REPLICATION_USER

38

          value: "slave"

39

        - name: MYSQL_REPLICATION_PASSWORD

40

          value: "slave"

41

        ports:

42

        - containerPort: 3306

43

          #hostPort: 4000

44

          name: mysql-master

45

      volumes:

46

      - name: mysql-config

47

        hostPath:

48

          path: /localdisk/NFS/mysqlData/master/

49

      nodeSelector:

50

        kubernetes.io/hostname: centos-k8s006

51


52


53

----------------------------------------------------------end-----------------------------------------------------------------

54


55

#创建mysql-slave-service.yaml

56

touch mysql-slave-service.yaml

57

#内容如下:

58

apiVersion: v1

59

kind: Service

60

metadata:

61

  name: mysql-master

62

  namespace: default

63

spec:

64

  type: NodePort

65

  selector:

66

    app: mysql-master

67

    release: stabel

68

  ports:

69

  - name: http

70

    port: 3306

71

    nodePort: 31306

72

    targetPort: 3306

73

--------------------------------------------------------------end---------------------------------------------------------------------

74


75

cd ../slave/

76

#创建配置mysql-slave.yaml文件

77

touch mysql-slave.yaml

78

#内容如下:

79

apiVersion: extensions/v1beta1

80

kind: Deployment

81

metadata:

82

  name: mysql-slave

83

spec:

84

  replicas: 1

85

  selector:

86

    matchLabels:

87

      app: mysql-slave

88

      release: stabel

89

  template:

90

    metadata:

91

      labels:

92

        app: mysql-slave

93

        name: mysql-slave

94

        release: stabel

95

    spec:

96

      containers:

97

      - name: mysql-slave

98

        image: ub2-citst001.abc.com:5000/mysql-slave

99

        volumeMounts:

100

        - name: mysql-config

101

          mountPath: /usr/data

102

        env:

103

        - name: MYSQL_ROOT_PASSWORD

104

          value: "root"

105

        - name: MYSQL_REPLICATION_USER

106

          value: "slave"

107

        - name: MYSQL_REPLICATION_PASSWORD

108

          value: "slave"

109

        - name: MYSQL_MASTER_SERVICE_HOST

110

          value: "mysql-master"

111

        - name: MASTER_PORT

112

          value: "3306"

113

        ports:

114

        - containerPort: 3306

115

          name: mysql-slave

116

      volumes:

117

      - name: mysql-config

118

        hostPath:

119

          path: /localdisk/NFS/mysqlData/slave/

120

      nodeSelector:

121

        kubernetes.io/hostname: centos-k8s007

122


123

-----------------------------------------------------------------------end-------------------------------------------------------------------

124

#创建mysql-slave-service.yaml

125

touch mysql-slave-service.yaml

126

#内容如下:

127

apiVersion: v1

128

kind: Service

129

metadata:

130

  name: mysql-slave

131

  namespace: default

132

spec:

133

  type: NodePort

134

  selector:

135

    app: mysql-slave

136

    release: stabel

137

  ports:

138

  - name: http

139

    port: 3306

140

    nodePort: 31307

141

    targetPort: 3306

142

-----------------------------------------------------------------------end-----------------------------------------------------------------


#创建mysql主从实例:
cd ../master/
kubectl create -f mysql-master.yaml
kubectl create -f mysql-master-service.yaml
cd ../slave/
kubectl create -f mysql-slave.yaml
kubectl create -f mysql-slave-service.yaml

# 查看deployment创建状态,centos-k8s004上执行
[root@centos-k8s004 slave]# kubectl get deployment
NAME           DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
mysql-master   1         1         1            1           39d
mysql-slave    1         1         1            1           39d
# 查看service创建状态:
[root@centos-k8s004 slave]# kubectl get svc
NAME           CLUSTER-IP        EXTERNAL-IP   PORT(S)          AGE
kubernetes     192.168.0.1       <none>        443/TCP          5d
mysql-master   192.168.45.209    <nodes>       3306:31306/TCP   5h
mysql-slave    192.168.230.192   <nodes>       3306:31307/TCP   5h
# AVAILABLE的值可能需要2~15s的时间才能与DESIRED的值同步,启动pod需要时间。如果长时间还是不同步或为0,可用以下命令查看详细状态:

[root@centos-k8s004 master]# kubectl describe pod
Name:           mysql-master-4291032429-0fzsr
Namespace:      default
Node:           centos-k8s006/10.239.219.210
Start Time:     Wed, 31 Oct 2018 18:56:06 +0800
Labels:         name=mysql-master
                pod-template-hash=4291032429
Status:         Running
IP:             172.17.44.2
Controllers:    ReplicaSet/mysql-master-4291032429
Containers:
  master:
    Container ID:       docker://674de0971fe2aa16c7926f345d8e8b2386278b14dedd826653e7347559737e28
    Image:              ub2-citst001.abc.com:5000/mysql-master
    Image ID:           docker-pullable://ub2-citst001.abc.com:5000/mysql-master@sha256:bc286c1374a3a5f18ae56bd785a771ffe0fad15567d56f8f67a615c606fb4e0d
    Port:               3306/TCP
    State:              Running
      Started:          Wed, 31 Oct 2018 18:56:07 +0800
    Ready:              True
    Restart Count:      0
    Volume Mounts:
      /usr/data from mysql-config (rw)
    Environment Variables:
      MYSQL_ROOT_PASSWORD:              root
      MYSQL_REPLICATION_USER:           slave
      MYSQL_REPLICATION_PASSWORD:       slave
Conditions:
  Type          Status
  Initialized   True
  Ready         True
  PodScheduled  True
Volumes:
  mysql-config:
    Type:       HostPath (bare host directory volume)
    Path:       /localdisk/NFS/mysqlData/master/
QoS Class:      BestEffort
Tolerations:    <none>
No events.


Name:           mysql-slave-3654103728-0sxsm
Namespace:      default
Node:           centos-k8s007/10.239.219.209
Start Time:     Wed, 31 Oct 2018 18:56:19 +0800
Labels:         name=mysql-slave
                pod-template-hash=3654103728
Status:         Running
IP:             172.17.16.2
Controllers:    ReplicaSet/mysql-slave-3654103728
Containers:
  slave:
    Container ID:       docker://d52f4f1e57d6fa6a7c04f1a9ba63fa3f0af778df69a3190c4f35f755f225fb50
    Image:              ub2-citst001.abc.com:5000/mysql-slave
    Image ID:           docker-pullable://ub2-citst001.abc.com:5000/mysql-slave@sha256:6a1c7cbb27184b966d2557bf53860daa439b7afda3d4aa5498844d4e66f38f47
    Port:               3306/TCP
    State:              Running
      Started:          Fri, 02 Nov 2018 13:49:48 +0800
    Last State:         Terminated
      Reason:           Completed
      Exit Code:        0
      Started:          Wed, 31 Oct 2018 18:56:20 +0800
      Finished:         Fri, 02 Nov 2018 13:49:47 +0800
    Ready:              True
    Restart Count:      1
    Volume Mounts:
      /usr/data from mysql-config (rw)
    Environment Variables:
      MYSQL_ROOT_PASSWORD:              root
      MYSQL_REPLICATION_USER:           slave
      MYSQL_REPLICATION_PASSWORD:       slave
      MYSQL_MASTER_SERVICE_HOST:        centos-k8s006
      MASTER_PORT:                      4000
Conditions:
  Type          Status
  Initialized   True
  Ready         True
  PodScheduled  True
Volumes:
  mysql-config:
    Type:       HostPath (bare host directory volume)
    Path:       /localdisk/NFS/mysqlData/slave/
QoS Class:      BestEffort
Tolerations:    <none>
No events.


# 如果出现如下错误,请按照下面方法解决:
Error syncing pod, skipping: failed to "StartContainer" for "POD" with ErrImagePull: "image pull failed:latest, this may be because there are no credentials on this request.  details: (open /etc/docker/certs.d/registry.access.redhat.com/redhat-ca.crt: no such file or directory)"
#解决方法
问题比较明显,就是缺少/etc/docker/certs.d/registry.access.redhat.com/redhat-ca.crt文件。用ls -l查看之后发现它是一个软链接,指向/etc/rhsm/ca/redhat-uep.pem,但这个文件也不存在,可以先用yum search *rhsm*查找相关的包:
安装python-rhsm-certificates包:
[root@centos-k8s004 master]# yum install python-rhsm-certificates -y
这里又出现问题了:
python-rhsm-certificates <= 1.20.3-1 被 (已安裝) subscription-manager-rhsm-certificates-1.20.11-1.el7.centos.x86_64 取代
解决办法是直接卸载subscription-manager-rhsm-certificates包(yum remove subscription-manager-rhsm-certificates -y),然后下载python-rhsm-certificates包:
# wget http://mirror.centos.org/centos/7/os/x86_64/Packages/python-rhsm-certificates-1.19.10-1.el7_4.x86_64.rpm
然后手动安装该rpm包:
# rpm -ivh python-rhsm-certificates-1.19.10-1.el7_4.x86_64.rpm
这时/etc/rhsm/ca/redhat-uep.pem文件已存在。
再在各node上执行:
yum install *rhsm* -y
然后将/etc/docker/seccomp.json删除,再次重启即可,并执行:
#docker pull registry.access.redhat.com/rhel7/pod-infrastructure:latest
最后将之前创建的rc、svc和pod全部删除重新创建,过一会就会发现pod启动成功


# 在centos-k8s006查看docker container,说明成功创建mysql-master实例,查看mysql-slave状态同理
[root@centos-k8s006 master]# docker ps
674de0971fe2        ub2-citst001.abc.com:5000/mysql-master                   "docker-entrypoint..."   5 weeks ago         Up 5 weeks                                   k8s_master.1fa78e47_mysql-master-4291032429-0fzsr_default_914f7535-dcfb-11e8-9eb8-005056a654f2_3462901b
220e4d37915d        registry.access.redhat.com/rhel7/pod-infrastructure:latest    "/usr/bin/pod"           5 weeks ago         Up 5 weeks          0.0.0.0:4000->3306/tcp   k8s_POD.62220e6f_mysql-master-4291032429-0fzsr_default_914f7535-dcfb-11e8-9eb8-005056a654f2_d0d62756


# 此时通过kubectl就可管理container
[root@centos-k8s006 master]# kubectl exec -it mysql-master-4291032429-0fzsr /bin/bash
root@mysql-master-4291032429-0fzsr:/#
# 在master创建一个数据库,看是否会同步到slave
root@mysql-master-4291032429-0fzsr:/# mysql -u root -proot
mysql>create database test_database charset='utf8';
mysql>show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| test_database      |
| mysql              |
| performance_schema |
+--------------------+
# 退出容器:
Ctrl + p && Ctrl + q


# 进入mysql-slave查看数据库:
[root@centos-k8s006 master]# kubectl exec -it mysql-slave-3654103728-0sxsm /bin/bash
root@mysql-slave-3654103728-0sxsm:/# mysql -u root -proot
mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| test_database      |
+--------------------+
# 可以看出,test_database已经同步到slave,主从复制配置成功!
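除了进容器手动对比数据库,也可以直接检查slave的复制线程状态,或者从集群外通过NodePort连接验证。下面是一个简单的验证示例(pod名以上文describe的输出为准;从集群外连接时假设本机装有mysql客户端,任意node的主机名/IP加31306、31307端口均可访问):

# 查看slave复制线程状态,Slave_IO_Running / Slave_SQL_Running 都应为Yes
kubectl exec mysql-slave-3654103728-0sxsm -- mysql -u root -proot -e "SHOW SLAVE STATUS\G" | grep -E "Slave_IO_Running|Slave_SQL_Running"
# 从集群外通过NodePort分别访问主库和从库
mysql -h centos-k8s006 -P 31306 -u root -proot -e "SHOW DATABASES;"
mysql -h centos-k8s007 -P 31307 -u root -proot -e "SHOW DATABASES;"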

 
 

五. webservice部署.
       1.在有docker环境的机器上新建一个文件夹,进入后分别创建:
               requirements.txt (环境依赖的python包列表) :
asn1crypto==0.24.0
beautifulsoup4==4.4.1
certifi==2017.4.17
cffi==1.11.5
chardet==3.0.2
configparser==3.5.0
cryptography==2.3.1
defusedxml==0.5.0
Django==1.11.15
django-auth-ldap==1.2.6
djangorestframework==3.8.2
djangorestframework-xml==1.3.0
enum34==1.1.6
Ghost.py==0.2.3
gitdb2==2.0.0
GitPython==2.1.1
idna==2.5
ipaddress==1.0.22
jira==1.0.10
multi-key-dict==2.0.3
numpy==1.15.2
oauthlib==2.0.2
ordereddict==1.1
pbr==0.11.1
pyasn1==0.4.4
pyasn1-modules==0.2.2
pycparser==2.18
PyMySQL==0.9.2
python-ldap==3.1.0
pytz==2018.5
requests==2.18.1
requests-oauthlib==0.8.0
requests-toolbelt==0.8.0
scipy==1.1.0
selenium==3.4.3
simplejson==3.16.0
six==1.10.0
smmap2==2.0.1
threadpool==1.3.2
urllib3==1.21.1
python-jenkins==0.4.13
uWSGI

                pip.conf(第三方pip源,用来网络加速)
[global]
timeout = 60
index-url = http://pypi.douban.com/simple
trusted-host = pypi.douban.com

start_script.sh(启动文件)
#!/bin/bash
touch /root/www2/oss2/log/touchforlogrotate
p4 sync
uwsgi --ini /root/uwsgi.ini
tail -f /dev/null

uwsgi.ini(uwsgi配置)
[uwsgi]
socket=0.0.0.0:8001
chdir=/root/www2/oss2/
master=true
processes=4
threads=2
module=oss2.wsgi
touch-logreopen = /root/www2/oss2/log/touchforlogrotate
daemonize = /root/www2/oss2/log/log.log
wsgi-file =/root/www2/oss2/website/wsgi.py
py-autoreload=1

  Dockerfile
FROM centos

MAINTAINER by miaohenx

ENV http_proxy http://child-prc.abc.com:913/
ENV https_proxy https://child-prc.abc.com:913/
RUN mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.backup && curl -o /etc/yum.repos.d/CentOS-Base.repo http://mirrors.163.com/.help/CentOS7-Base-163.repo && yum makecache && yum -y install epel-release zlib-devel bzip2-devel openssl-devel ncurses-devel sqlite-devel readline-devel tk-devel gcc make openldap-devel && curl -O https://www.python.org/ftp/python/3.6.6/Python-3.6.6.tar.xz && tar -xvJf Python-3.6.6.tar.xz && cd Python-3.6.6 && ./configure prefix=/usr/local/python3 && make && make install && ln -s /usr/local/python3/bin/python3 /usr/bin/python3.6 && ln -s /usr/local/python3/bin/python3 /usr/bin/python3 && cd .. && rm -rf Python-3.6.*
RUN yum -y install python36-devel python36-setuptools && easy_install-3.6 pip && mkdir /root/.pip
COPY 1.txt /root/
COPY uwsgi.ini /root/
COPY www2 /root/www2/
COPY start_script.sh /root/
COPY pip.conf /root/.pip/
RUN pip3 install -r /root/1.txt && chmod +x /root/start_script.sh
EXPOSE 8000
ENTRYPOINT ["/root/start_script.sh"]

在当前目录创建文件夹:www2,进入后将oss2 code放入www2下
回到Dockerfile文件所在目录,执行:
docker build -t oss-server .          #注意末尾的.

 等待几分钟,image便会build成功,查看images:
docker images
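build成功后,可以先在本地做一次冒烟测试;如果需要像前面的mysql镜像那样通过私有仓库分发,也可以推送到registry。下面是一个示例草稿(假设私有仓库仍为 ub2-citst001.abc.com:5000,8001是uwsgi.ini里监听的端口):

# 本地冒烟测试:映射uwsgi监听的8001端口,观察日志确认uwsgi正常启动
docker run -d --name oss-server-test -p 8001:8001 oss-server
docker logs -f oss-server-test
# 推送到私有registry,供集群内其他node拉取
docker tag oss-server ub2-citst001.abc.com:5000/oss-server
docker push ub2-citst001.abc.com:5000/oss-server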

六、haproxy+keepalived部署
       Nginx、LVS、HAProxy 负载均衡软件的优缺点:
一、Nginx的优点是:
1)工作在网络的7层之上,可以针对 http 应用做一些分流的策略,比如针对域名、目录结构,它的正则规则比 HAProxy 更为强大和灵活,这也是它目前广泛流
行的主要原因之一, Nginx 单凭这点可利用的场合就远多于 LVS 了。
2) Nginx 对网络稳定性的依赖非常小,理论上能 ping 通就能进行负载功能,这个也是它的优势之一;相反 LVS 对网络稳定性依赖比较大;
3) Nginx 安装和配置比较简单,测试起来比较方便,它基本能把错误用日志打印出来。 LVS 的配置、测试就要花比较长的时间了, LVS 对网络依赖比较大。
4)可以承担高负载压力且稳定,在硬件不差的情况下一般能支撑几万次的并发量,负载度比 LVS 相对小些。
5) Nginx 可以通过端口检测到服务器内部的故障,比如根据服务器处理网页返回的状态码、超时等等,并且会把返回错误的请求重新提交到另一个节点,不过其中缺点就是不支持url来检测。
比如用户正在上传一个文件,而处理该上传的节点刚好在上传过程中出现故障, Nginx 会把上传切到另一台服务器重新处 理,而LVS就直接断掉了,如果是上传一个很大的文件或者很重要的文
件的话,用户可能会因此而不满。
6)Nginx 不仅仅是一款优秀的负载均衡器/反向代理软件,它同时也是功能强大的 Web 应用服务器。 LNMP 也是近几年非常流行的 web 架构,在高流量的环境中稳定性也很好。
7)Nginx 现在作为 Web 反向加速缓存越来越成熟了,速度比传统的 Squid 服务器更快,可以考虑用其作为反向代理加速器。
8)Nginx 可作为中层反向代理使用,这一层面 Nginx 基本上无对手,唯一可以对比 Nginx 的就只有 lighttpd 了,不过 lighttpd 目前还没有做到 Nginx 完全的功能,配置也不那么清晰易读,
社区资料也远远没 Nginx 活跃。
9) Nginx 也可作为静态网页和图片服务器,这方面的性能也无对手。还有 Nginx社区非常活跃,第三方模块也很多。

Nginx 的缺点是:
1)Nginx 仅能支持 http、 https 和 Email 协议,这样就在适用范围上面小些,这个是它的缺点。
2)对后端服务器的健康检查,只支持通过端口来检测,不支持通过 url 来检测。不支持 Session 的直接保持,但能通过 ip_hash 来解决。

二、LVS:使用 Linux 内核集群实现一个高性能、 高可用的负载均衡服务器,它具有很好的可伸缩性( Scalability)、可靠性( Reliability)和可管理性(Manageability)。
LVS 的优点是:
1)抗负载能力强、是工作在网络 4 层之上仅作分发之用, 没有流量的产生,这个特点也决定了它在负载均衡软件里的性能最强的,对内存和 cpu 资源消耗比较低。
2)配置性比较低,这是一个缺点也是一个优点,因为没有可太多配置的东西,所以并不需要太多接触,大大减少了人为出错的几率。
3)工作稳定,因为其本身抗负载能力很强,自身有完整的双机热备方案,如LVS+Keepalived,不过我们在项目实施中用得最多的还是 LVS/DR+Keepalived。
4)无流量, LVS 只分发请求,而流量并不从它本身出去,这点保证了均衡器 IO的性能不会受到大流量的影响。
5)应用范围比较广,因为 LVS 工作在 4 层,所以它几乎可以对所有应用做负载均衡,包括 http、数据库、在线聊天室等等。

LVS 的缺点是:
1)软件本身不支持正则表达式处理,不能做动静分离;而现在许多网站在这方面都有较强的需求,这个是 Nginx/HAProxy+Keepalived 的优势所在。
2)如果是网站应用比较庞大的话, LVS/DR+Keepalived 实施起来就比较复杂了,特别后面有 Windows Server 的机器的话,如果实施及配置还有维护过程就比较复杂了,相对而言,
Nginx/HAProxy+Keepalived 就简单多了。

三、HAProxy 的特点是:
1)HAProxy 也是支持虚拟主机的。
2)HAProxy 的优点能够补充 Nginx 的一些缺点,比如支持 Session 的保持,Cookie的引导;同时支持通过获取指定的 url 来检测后端服务器的状态。
3)HAProxy 跟 LVS 类似,本身就只是一款负载均衡软件;单纯从效率上来讲HAProxy 会比 Nginx 有更出色的负载均衡速度,在并发处理上也是优于 Nginx 的。
4)HAProxy 支持 TCP 协议的负载均衡转发,可以对 MySQL 读进行负载均衡,对后端的 MySQL 节点进行检测和负载均衡,大家可以用 LVS+Keepalived 对 MySQL主从做负载均衡。
5)HAProxy 负载均衡策略非常多, HAProxy 的负载均衡算法现在具体有如下8种:
1> roundrobin,表示简单的轮询,这个不多说,这个是负载均衡基本都具备的;
2> static-rr,表示根据权重,建议关注;
3> leastconn,表示最少连接者先处理,建议关注;
4> source,表示根据请求源 IP,这个跟 Nginx 的 IP_hash 机制类似,我们用其作为解决 session 问题的一种方法,建议关注;
5> uri,表示根据请求的 URI;
6> url_param,表示根据请求的 URL 参数,'balance url_param' requires an URL parameter name;
7> hdr(name),表示根据 HTTP 请求头来锁定每一次 HTTP 请求;
8> rdp-cookie(name),表示根据 cookie(name)来锁定并哈希每一次 TCP 请求。


keepalived主主模式部署:
       在centos-k8s001上进行如下操作:
yum install -y haproxy keepalived # 安装haproxy keepalived
vim /etc/keepalived/keepalived.conf # 配置keepalived,替换为以下内容

! Configuration File for keepalived

global_defs {
notification_email { #定义收件人邮箱
root@localhost
}
notification_email_from root@localhost #定义发件人邮箱
smtp_server 127.0.0.1 #定义邮件服务器地址
smtp_connect_timeout 30 #定义邮件服务器连接超时时长为30秒
router_id LVS_DEVEL #运行keepalive的机器的标识
}

vrrp_instance VI_1 { #定义VRRP实例,实例名自定义
state MASTER #指定当前节点的角色,master为主,backup为从
interface ens160 #指定HA监测的网络接口
virtual_router_id 51 #虚拟路由标识,在同一VRRP实例中,主备服务器ID必须一样
priority 100 #定义节点优先级,数字越大越优先,主服务器优先级高于从服务器
advert_int 1 #设置主备之间发送通告的时间间隔,单位为秒
authentication { #设置主从之间验证类型和密码
auth_type PASS
auth_pass a23c7f32dfb519d6a5dc67a4b2ff8f5e

}
virtual_ipaddress {
10.239.219.157 #定义虚拟ip地址
}
}

vrrp_instance VI_2 {
state BACKUP
interface ens160
virtual_router_id 52
priority 99
advert_int 1
authentication {
auth_type PASS
auth_pass 56f7663077966379d4106e8ee30eb1a5

}
virtual_ipaddress {
10.239.219.156
}
}

    将keepalived.conf同步到centos-k8s002,并修改为以下内容
! Configuration File for keepalived
global_defs {
notification_email { #定义收件人邮箱
root@localhost
}
notification_email_from root@localhost #定义发件人邮箱
smtp_server 127.0.0.1 #定义邮件服务器地址
smtp_connect_timeout 30 #定义邮件服务器连接超时时长为30秒
router_id LVS_DEVEL #运行keepalive的机器的标识
}

vrrp_instance VI_1 { #定义VRRP实例,实例名自定义
state BACKUP #指定当前节点的角色,master为主,backup为从
interface ens160 #指定HA监测的网络接口
virtual_router_id 51 #虚拟路由标识,在同一VRRP实例中,主备服务器ID必须一样
priority 99 #定义节点优先级,数字越大越优先,主服务器优先级高于从服务器
advert_int 1 #设置主备之间发送通告的时间间隔,单位为秒
authentication { #设置主从之间验证类型和密码
auth_type PASS
auth_pass a23c7f32dfb519d6a5dc67a4b2ff8f5e

}
virtual_ipaddress {
10.239.219.157 #定义虚拟ip地址
}
}

vrrp_instance VI_2 {
state MASTER
interface ens160
virtual_router_id 52
priority 100
advert_int 1
authentication {
auth_type PASS
auth_pass 56f7663077966379d4106e8ee30eb1a5

}
virtual_ipaddress {
10.239.219.156
}
}

        在centos-k8s002和centos-k8s001上修改/etc/haproxy/haproxy.cfg,替换为以下内容:
global #定义全局配置段
# to have these messages end up in /var/log/haproxy.log you will
# need to:
#
# 1) configure syslog to accept network log events. This is done
# by adding the '-r' option to the SYSLOGD_OPTIONS in
# /etc/sysconfig/syslog
#
# 2) configure local2 events to go to the /var/log/haproxy.log
# file. A line like the following can be added to
# /etc/sysconfig/syslog
#
# local2.* /var/log/haproxy.log
#
log 127.0.0.1 local2 #通过rsyslog将日志进行归档记录,在/etc/rsyslog.conf配置文件中,添加‘local2.* /var/log/haproxy',并且启用$ModLoad imudp,$UDPServerRun 514,$ModLoad imtcp,$InputTCPServerRun 514 此四项功能,最后重启rsyslog进程。
chroot /var/lib/haproxy #指定haproxy进程工作的目录
pidfile /var/run/haproxy.pid #指定pid文件
maxconn 4000 #最大并发连接数
user haproxy #运行haproxy的用户
group haproxy #运行haproxy的组
daemon #以守护进程的形式运行,即后台运行

# turn on stats unix socket
stats socket /var/lib/haproxy/stats

#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults #默认配置端
mode http #工作模式,源码包编译默认为tcp
log global #记录全局日志
option httplog #详细记录http日志
option dontlognull #不记录健康检测的日志信息
option http-server-close #启用服务器端主动关闭功能
option forwardfor except 127.0.0.0/8 #传递client端IP至后端real server
option redispatch #基于cookie做会话保持时,后端对应存放session的服务器出现故障时,会话会被重定向至别的服务器
retries 3 #请求重传次数
timeout http-request 10s #断开客户端连接的时长
timeout queue 1m #一个请求在队列里的超时时长
timeout connect 10s #设定在haproxy转发至后端upstream server时等待的超时时长
timeout client 1m #client的一次非活动状态的超时时长
timeout server 1m #等待服务器端的非活动的超时时长
timeout http-keep-alive 10s #持久连接超时时长
timeout check 10s #检查请求连接的超时时长
maxconn 3000 #最大连接数

#---------------------------------------------------------------------
# main frontend which proxys to the backends
#---------------------------------------------------------------------
frontend webserver *:8000 # OSS server
acl url_static path_beg -i /static /images /javascript /stylesheets #匹配path以/static,/images开始的,且不区分大小写
acl url_static path_end -i .jpg .gif .png .css .js .html
acl url_static hdr_beg(host) -i img. video. download. ftp. imgs. image.

acl url_dynamic path_end .php .jsp

use_backend static if url_static #满足名为url_static这条acl规则,则将请求转发至后端名为static的real server组中去
use_backend dynamic if url_dynamic
default_backend dynamic #如果上面所有acl规则都不满足,将请求转发到dynamic组中

#---------------------------------------------------------------------
# static backend for serving up images, stylesheets and such
#---------------------------------------------------------------------
backend static #定义后端real server组,组名为static
balance roundrobin #支持动态权重修改,支持慢启动
server static_1 centos-k8s005:8000 check inter 3000 fall 3 rise 1 maxconn 30000
server static_2 centos-k8s006:8000 check inter 3000 fall 3 rise 1 maxconn 30000
server static_3 centos-k8s007:8000 check inter 3000 fall 3 rise 1 maxconn 30000
# server static_Error :8080 backup check #当此组中的所有server全部不能提供服务,才将请求调度至此server上
#---------------------------------------------------------------------
# round robin balancing between the various backends
#---------------------------------------------------------------------

backend dynamic
cookie cookie_name insert nocache #使用cookie实现session绑定,且不记录缓存
balance roundrobin
server dynamic1 centos-k8s005:8000 check inter 3000 fall 3 rise 1 maxconn 1000 cookie dynamic1
server dynamic2 centos-k8s006:8000 check inter 3000 fall 3 rise 1 maxconn 1000 cookie dynamic2
server dynamic3 centos-k8s007:8000 check inter 3000 fall 3 rise 1 maxconn 1000 cookie dynamic3 #定义dynamic组中的server,将此server命名为dynamic3,每隔3000ms检测一次健康状态,如果连续检测3次都失败,将此server剔除;在离线状态下,只要检测1次成功,就让其上线。此server支持的最大并发连接数为1000,cookie的值为dynamic3

frontend kibana *:5602 #Kibana
acl url_static path_beg -i /static /images /javascript /stylesheets #匹配path以/static,/images开始的,且不区分大小写
acl url_static path_end -i .jpg .gif .png .css .js .html
acl url_static hdr_beg(host) -i img. video. download. ftp. imgs. image.

acl url_dynamic path_end .php .jsp

use_backend static2 if url_static #满足名为url_static这条acl规则,则将请求转发至后端名为static的real server组中去
use_backend dynamic2 if url_dynamic
default_backend dynamic2 #如果上面所有acl规则都不满足,将请求转发到dynamic组中

#---------------------------------------------------------------------
# static backend for serving up images, stylesheets and such
#---------------------------------------------------------------------
backend static2 #定义后端real server组,组名为static
balance roundrobin #支持动态权重修改,支持慢启动
server static_1 centos-k8s003:5602 check inter 3000 fall 3 rise 1 maxconn 30000
#---------------------------------------------------------------------
# round robin balancing between the various backends
#---------------------------------------------------------------------

backend dynamic2
cookie cookie_name insert nocache #使用cookie实现session绑定,且不记录缓存
balance roundrobin
server dynamic1 centos-k8s003:5602 check inter 3000 fall 3 rise 1 maxconn 1000 cookie dynamic1

frontend dashboard *:8080 # kubernetes-dashboard(frontend名称不能与上面的kibana重复)
acl url_static path_beg -i /static /images /javascript /stylesheets #匹配path以/static,/images开始的,且不区分大小写
acl url_static path_end -i .jpg .gif .png .css .js .html
acl url_static hdr_beg(host) -i img. video. download. ftp. imgs. image.

acl url_dynamic path_end .php .jsp

use_backend static3 if url_static #满足名为url_static这条acl规则,则将请求转发至后端名为static的real server组中去
use_backend dynamic3 if url_dynamic
default_backend dynamic3 #如果上面所有acl规则都不满足,将请求转发到dynamic组中

#---------------------------------------------------------------------
# static backend for serving up images, stylesheets and such
#---------------------------------------------------------------------
backend static3 #定义后端real server组,组名为static
balance roundrobin #支持动态权重修改,支持慢启动
server static_1 centos-k8s004:8080 check inter 3000 fall 3 rise 1 maxconn 30000
#---------------------------------------------------------------------
# round robin balancing between the various backends
#---------------------------------------------------------------------

backend dynamic3
cookie cookie_name insert nocache #使用cookie实现session绑定,且不记录缓存
balance roundrobin
server dynamic1 centos-k8s004:8080 check inter 3000 fall 3 rise 1 maxconn 1000 cookie dynamic1

listen state # 使用单独输出,不需要frontend调用:定义haproxy的状态统计页面
bind *:8001 # 监听的地址
mode http # http 7层工作模式:对应用层数据做深入分析,因此支持7层的过滤、处理、转换等机制
stats enable # 开启统计页面输出
stats hide-version # 隐藏状态页面版本号
stats uri /haproxyadmin?stats # 指定状态页的访问路径
stats auth admin:root # 基于用户名,密码验证。
stats admin if TRUE # 验证通过时允许登录。
acl num1 src 10.239.0.0/16 # 定义源地址为10.239.0.0/16网段的acl规则,将其命名为num1
tcp-request content accept if num1 # 如果满足此规则,则允许访问
tcp-request content reject # 拒绝其他所有的访问
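修改完haproxy.cfg后,建议先做一次语法检查再重启服务,避免配置错误导致代理不可用:

# -c 只检查配置文件语法,不会真正启动haproxy
haproxy -c -f /etc/haproxy/haproxy.cfg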

      说明:haproxy监听3个端口:
                8000 ------  OSS server            5602 ------ kibana             8080 ------  kubernetes-dashboard
     在centos-k8s001,centos-k8s002上分别执行如下命令:
systemctl enable haproxy keepalived # haproxy和keepalived设置为开机自启动
systemctl restart haproxy keepalived # 重启haproxy和keepalived
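启动后可以确认两个VIP是否按预期各自落在两台机器上,并通过VIP访问一次后端服务验证转发是否正常(下面以上文配置的VIP 10.239.219.157/10.239.219.156、网卡ens160和状态页账号admin:root为例):

# 查看VIP是否绑定到了本机的ens160上
ip addr show ens160 | grep -E "10.239.219.157|10.239.219.156"
# 通过VIP访问haproxy转发的OSS server(8000)和状态统计页(8001)
curl -I http://10.239.219.157:8000/
curl -s -o /dev/null -w "%{http_code}\n" -u admin:root "http://10.239.219.157:8001/haproxyadmin?stats"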


七、mysql每天自动备份最近七天数据
        1.centos-k8s007拥有200G SSD,因此将mysql数据备份于此,路径:/localdisk/mysql_backup
            接下来就是需要一个自动备份的脚本:mysqlbackup.sh,内容如下:
#!/bin/bash
#设置mysql备份目录
folder=/localdisk/mysql_backup
cd $folder
day=`date +%Y%m%d`
rm -rf $day
mkdir $day
cd $day
#要备份的数据库地址
host=xxxxxx.abc.com
#数据库端口
port=3306
#用户名
user=xxxxxxxx
#密码
password=xxxxxxxxxx
#要备份的数据库
db=oss2_base_test
#数据要保留的天数
days=7
# 备份命令
mysqldump -h$host -P$port -u$user -p$password $db>backup.sql
# 压缩备份数据,节省磁盘空间,压缩后数据大小仅为压缩前的5%
zip backup.sql.zip backup.sql
rm backup.sql
cd ..
day=`date -d "$days days ago" +%Y%m%d`
# 删除固定天数前的数据
rm -rf $day
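备份只有在能恢复时才有意义,可以定期抽取某一天的备份做恢复演练。下面是一个示例草稿(以20181101这天的备份为例;oss2_restore_test是一个假设的测试库名,连接参数按实际环境替换):

cd /localdisk/mysql_backup/20181101
unzip -o backup.sql.zip
# 恢复到一个单独的测试库中,验证备份文件可用
mysql -h<数据库地址> -P3306 -u<用户名> -p<密码> -e "CREATE DATABASE IF NOT EXISTS oss2_restore_test;"
mysql -h<数据库地址> -P3306 -u<用户名> -p<密码> oss2_restore_test < backup.sql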


       自动备份的脚本有了,需要定时每天固定时间执行,就需要用到linux 定时任务:crontab,在命令行输入crontab -e,回车后进入定时任务编辑页面,操作和vi相同:
       写入以下内容后:wq退出就可以了:
00 02 * * * sh /root/mysqlbackup.sh #表示每天凌晨两点执行备份脚本


      这里介绍以下crontab命令:

*           *        *        *        *           command

minute   hour    day   month   week      command

分          时         天      月        星期       命令
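配置完成后可以确认定时任务是否已生效、是否按时被调度:

crontab -l                          # 查看当前用户的定时任务列表
grep mysqlbackup /var/log/cron      # CentOS默认会把cron的调度记录写到/var/log/cron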

 
八.ELK日志管理系统部署
        k8s集群环境下service将有多份副本,log的查看如果需要一台一台登录查看的话效率会很低,且难以管理。所以采用ELK搭建一套日志管理系统.

系统环境

System:centos 7(查询版本语句:lsb_release -a)
Java : jdk 1.8.0_144
Elasticsearch: 6.4.3
Kibana: 6.4.3
Logstash: 6.4.3
Filebeat: 6.4.3

 
“ELK”是三个开源项目的首字母缩写:Elasticsearch,Logstash和Kibana。Elasticsearch是一个搜索和分析引擎。Logstash是一个服务器端数据处理管道,它同时从多个源中提取数据,对其进行转换,然后将其发送到像Elasticsearch这样的“存储”。Kibana允许用户使用Elasticsearch中的图表和图形来可视化数据。
通过浏览器访问:http://hostshujia01.abc.com:8000/ 可以拿到以上组件,或访问elastic官网:https://www.elastic.co/products 获取组件
先确保centos 有java8环境:
# 查看java版本
java -version
#output:
# openjdk version "1.8.0_161"
# OpenJDK Runtime Environment (build 1.8.0_161-b14)
# OpenJDK 64-Bit Server VM (build 25.161-b14, mixed mode)
# 查看javac
javac -version
#output
javac 1.8.0_161


如果输出以上内容,说明有java环境,否则,安装下载的提供的rpm包,或从java 官方下载: https://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html
yum -y localinstall jdk-8u73-linux-x64.rpm


我们这里采用源码方式安装:
    1.部署Elasticsearch集群:
        node节点:centos-k8s003、centos-k8s005,centos-k8s006,centos-k8s007
        分别创建目录:
mkdir /elk/


        将elasticsearch-6.4.3.tar.gz解压后放入/elk/
tar -zxvf elasticsearch-6.4.3.tar.gz
mv elasticsearch-6.4.3 /elk/


         修改配置文件:
cd /elk/elasticsearch-6.4.3/config
vim elasticsearch.yml

#修改一下几个参数:
cluster.name: oss-application #自定义,所有节点cluster.name一致
node.name: centos-k8s003 #自定义,建议用机器名 容易区分
network.host: centos-k8s003 #本机机器名
http.port: 9200
node.master: true   #当前节点是否可以被选举为master节点;也可以不参与选主,只做数据节点

node.data: true #当前节点是否存储数据,也可以不存储数据,只做master
discovery.zen.ping.unicast.hosts: [centos-k8s003, centos-k8s005, centos-k8s006, centos-k8s007] #elasticsearch集群其他节点


修改系统参数以确保系统有足够资源启动ES
 
设置内核参数 

vi /etc/sysctl.conf
# 增加以下参数
vm.max_map_count=655360
# 执行以下命令,确保生效配置生效:
sysctl -p

设置资源参数 

vi /etc/security/limits.conf
# 修改
* soft nofile 65536
* hard nofile 131072
* soft nproc 65536
* hard nproc 131072
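limits.conf的修改需要重新登录(或重开会话)后才会生效,可以用ulimit确认:

ulimit -n     # 应输出65536
ulimit -u     # 应输出65536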


添加启动用户,设置权限
ElasticSearch 5以后的版本不能用root用户直接启动,需要新建一个普通用户来启动ElasticSearch

groupadd elk                 #创建组elk
useradd elk -g elk           #创建用户elk并加入elk组
chown -R elk:elk /elk/       #将/elk/目录属主改为elk


         使用elk账户启动ElasticSearch
su elk
cd /elk/elasticsearch-6.4.3/bin/
./elasticsearch -d       #在后台启动
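四个节点都启动后,可以通过ES自带的REST接口确认集群状态,number_of_nodes应为4,status为green或yellow:

curl http://centos-k8s003:9200/_cluster/health?pretty
curl http://centos-k8s003:9200/_cat/nodes?v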


      
  2.部署logstash-6.4.3 在centos-k8s003
     解压压缩包并移动到/elk/
unzip -d /elk/ logstash-6.4.3.zip


      修改配置文件logstash-sample.conf
input {
  beats {
    type => "oss-server-log"
    port => 5044
  }
}

output {
  elasticsearch {
    hosts => ["http://centos-k8s003:9200","http://centos-k8s005:9200","http://centos-k8s006:9200","http://centos-k8s007:9200"]      #elasticsearch服务地址
    index => "%{[fields][source]}-%{[fields][alilogtype]}-%{+YYYY.MM.dd}"        #索引格式
  }
}


        启动服务:
cd /elk/logstash-6.4.3/
nohup bin/logstash -f config/logstash-sample.conf &
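环境清单里列出了Filebeat 6.4.3,但上文没有给出它的配置。日志采集链路是:各web service机器上的Filebeat把日志发往logstash的5044端口,再由logstash写入elasticsearch。下面是一个最小的filebeat.yml草稿,仅供参考:日志路径取自前面uwsgi.ini的daemonize目录,fields.source/fields.alilogtype两个取值是假设的,只要与上面logstash中index的格式对应即可(假设filebeat用rpm方式安装在/etc/filebeat/):

# 在需要收集日志的机器上配置并启动filebeat
cat > /etc/filebeat/filebeat.yml <<'EOF'
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /root/www2/oss2/log/*.log
  fields:
    source: oss-server
    alilogtype: applog
output.logstash:
  hosts: ["centos-k8s003:5044"]
EOF
systemctl restart filebeat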


    3.部署kibana 在centos-k8s003  
       解压压缩包到/elk/
tar -zxvf kibana-6.4.3-linux-x86_64.tar.gz
mv kibana-6.4.3-linux-x86_64 /elk/


      修改配置:
      /elk/kibana-6.4.3-linux-x86_64/config/kibana.yml
server.port: 5601 # 开启默认端口5601
server.host: centos-k8s003 #站点地址
elasticsearch.url: "http://centos-k8s003:9200"         #指向elasticsearch服务的地址


      启动kibana
cd /elk/kibana-6.4.3-linux-x86_64/bin/
nohup ./kibana &
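kibana起来之后,可以先直接访问5601端口确认服务正常,再通过前面haproxy暴露的5602端口(或keepalived的VIP)访问,确认整条链路已打通:

# 直接访问kibana本体
curl -s -o /dev/null -w "%{http_code}\n" http://centos-k8s003:5601/app/kibana
# 经haproxy(5602端口)/keepalived VIP转发访问
curl -s -o /dev/null -w "%{http_code}\n" http://10.239.219.157:5602/app/kibana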
