Configuring a MongoDB sharded cluster

2022-12-02

Test environment

Operating system: CentOS 7.2 minimal install

Primary server IP: 192.168.197.21 mongo01

Secondary server IP: 192.168.197.22 mongo02

Secondary server IP: 192.168.197.23 mongo03

Disable SELinux and disable the firewall.
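On CentOS 7 this can be done roughly as follows (setenforce 0 only lasts until the next reboot, so the SELinux config file is edited as well):

[root@mongo01 ~]# setenforce 0
[root@mongo01 ~]# sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
[root@mongo01 ~]# systemctl stop firewalld
[root@mongo01 ~]# systemctl disable firewalld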

MongoDB version: mongodb-linux-x86_64-3.4.10.tgz

Role planning

Server 197.21              Server 197.22              Server 197.23
Mongos                     Mongos                     Mongos
Config server              Config server              Config server
Shard server1 primary      Shard server1 secondary    Shard server1 arbiter
Shard server2 arbiter      Shard server2 primary      Shard server2 secondary
Shard server3 secondary    Shard server3 arbiter      Shard server3 primary

Port allocation: mongos 20000, config server 21000, shard1 27001, shard2 27002, shard3 27003.

Install MongoDB (run on all three nodes)

After uploading the MongoDB tarball, extract it and then rename the directory:

[root@mongo01 software]# tar -zxvf mongodb-linux-x86_64-3.4.10.tgz -C /usr/local/

mongodb-linux-x86_64-3.4.10/README

mongodb-linux-x86_64-3.4.10/THIRD-PARTY-NOTICES

mongodb-linux-x86_64-3.4.10/MPL-2

mongodb-linux-x86_64-3.4.10/GNU-AGPL-3.0

mongodb-linux-x86_64-3.4.10/bin/mongodump

mongodb-linux-x86_64-3.4.10/bin/mongorestore

mongodb-linux-x86_64-3.4.10/bin/mongoexport

mongodb-linux-x86_64-3.4.10/bin/mongoimport

mongodb-linux-x86_64-3.4.10/bin/mongostat

mongodb-linux-x86_64-3.4.10/bin/mongotop

mongodb-linux-x86_64-3.4.10/bin/bsondump

mongodb-linux-x86_64-3.4.10/bin/mongofiles

mongodb-linux-x86_64-3.4.10/bin/mongooplog

mongodb-linux-x86_64-3.4.10/bin/mongoreplay

mongodb-linux-x86_64-3.4.10/bin/mongoperf

mongodb-linux-x86_64-3.4.10/bin/mongod

mongodb-linux-x86_64-3.4.10/bin/mongos

mongodb-linux-x86_64-3.4.10/bin/mongo

[root@mongo01 software]# cd /usr/local/

[root@mongo01 local]# mv mongodb-linux-x86_64-3.4.10 mongodb

Create six directories on each machine: conf, mongos, config, shard1, shard2 and shard3. Because mongos does not store data, it only needs a log directory. (A one-line equivalent is shown after the individual commands below.)

[root@mongo01 local]# mkdir -p /usr/local/mongodb/conf

[root@mongo01 local]# mkdir -p /usr/local/mongodb/mongos/log

[root@mongo01 local]# mkdir -p /usr/local/mongodb/config/data

[root@mongo01 local]# mkdir -p /usr/local/mongodb/config/log

[root@mongo01 local]# mkdir -p /usr/local/mongodb/shard1/data

[root@mongo01 local]# mkdir -p /usr/local/mongodb/shard1/log

[root@mongo01 local]# mkdir -p /usr/local/mongodb/shard2/data

[root@mongo01 local]# mkdir -p /usr/local/mongodb/shard2/log

[root@mongo01 local]# mkdir -p /usr/local/mongodb/shard3/data

[root@mongo01 local]# mkdir -p /usr/local/mongodb/shard3/log
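For reference, the same directory tree can be created with a single command using bash brace expansion (equivalent to the mkdir commands above):

[root@mongo01 local]# mkdir -p /usr/local/mongodb/{conf,mongos/log,config/{data,log},shard1/{data,log},shard2/{data,log},shard3/{data,log}}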

Configure the environment variables, then make them take effect immediately:

[root@mongo01 local]# vi /etc/profile

export MONGODB_HOME=/usr/local/mongodb

export PATH=$MONGODB_HOME/bin:$PATH

[root@mongo01 local]# source /etc/profile

Config servers (perform on all three nodes)

MongoDB 3.4 and later require the config servers to form a replica set as well; otherwise the cluster cannot be built.

Add the configuration file:

[root@mongo01 local]# vi /usr/local/mongodb/conf/config.conf

## configuration file contents

pidfilepath = /usr/local/mongodb/config/log/configsrv.pid

dbpath = /usr/local/mongodb/config/data

logpath = /usr/local/mongodb/config/log/configsrv.log

logappend = true

bind_ip = 0.0.0.0

port = 21000

fork = true

#declare this is a config db of a cluster;

configsvr = true

#replica set name

replSet=configs

#maximum number of connections

maxConns=20000

Start the config server on all three servers:

[root@mongo01 local]# mongod -f /usr/local/mongodb/conf/config.conf

about to fork child process, waiting until server is ready for connections.

forked process: 2266

child process started successfully, parent exiting

Log in to any one of the config servers and initialize the config server replica set:

[root@mongo01 local]# mongo --port 21000

MongoDB shell version v3.4.10

connecting to: mongodb://127.0.0.1:21000/

MongoDB server version: 3.4.10

Welcome to the MongoDB shell.

For interactive help, type "help".

For more comprehensive documentation, see

http://docs.mongodb.org/

Questions? Try the support group

http://groups.google.com/group/mongodb-user

Server has startup warnings:

2019-02-14T00:57:26.863-0500 I CONTROL [initandlisten]

2019-02-14T00:57:26.864-0500 I CONTROL [initandlisten] ** WARNING: Access control is not enabled for the database.

2019-02-14T00:57:26.864-0500 I CONTROL [initandlisten] ** Read and write access to data and configuration is unrestricted.

2019-02-14T00:57:26.864-0500 I CONTROL [initandlisten] ** WARNING: You are running this process as the root user, which is not recommended.

2019-02-14T00:57:26.864-0500 I CONTROL [initandlisten]

2019-02-14T00:57:26.864-0500 I CONTROL [initandlisten]

2019-02-14T00:57:26.864-0500 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.

2019-02-14T00:57:26.864-0500 I CONTROL [initandlisten] ** We suggest setting it to 'never'

2019-02-14T00:57:26.864-0500 I CONTROL [initandlisten]

2019-02-14T00:57:26.864-0500 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.

2019-02-14T00:57:26.864-0500 I CONTROL [initandlisten] ** We suggest setting it to 'never'

2019-02-14T00:57:26.864-0500 I CONTROL [initandlisten]

> config = {"_id" : "configs","members" : [{"_id" : 0,"host" : "192.168.197.21:21000"},{"_id" : 1,"host" : "192.168.197.22:21000"},{"_id" : 2,"host" : "192.168.197.23:21000"}]}

{

"_id" : "configs",

"members" : [

{

"_id" : 0,

"host" : "192.168.197.21:21000"

},

{

"_id" : 1,

"host" : "192.168.197.22:21000"

},

{

"_id" : 2,

"host" : "192.168.197.23:21000"

}

]

}

> rs.initiate(config)

{ "ok" : 1 }

Configure the first shard replica set

Create the shard1.conf file on all three servers, write in the configuration, and start the shard1 server:

[root@mongo01 local]# vi /usr/local/mongodb/conf/shard1.conf

#configuration file contents

#--------------------------------------------------

pidfilepath = /usr/local/mongodb/shard1/log/shard1.pid

dbpath = /usr/local/mongodb/shard1/data

logpath = /usr/local/mongodb/shard1/log/shard1.log

logappend = true

bind_ip = 0.0.0.0

port = 27001

fork = true

#enable the web monitoring interface

httpinterface=true

rest=true

#replica set name

replSet=shard1

#declare this is a shard db of a cluster;

shardsvr = true

#maximum number of connections

maxConns=20000

Start the shard1 server on all three servers:

[root@mongo01 local]# mongod -f /usr/local/mongodb/conf/shard1.conf

about to fork child process, waiting until server is ready for connections.

forked process: 2384

child process started successfully, parent exiting

Log in to any one of the servers and initialize the replica set:

[root@mongo01 local]# mongo --port 27001

MongoDB shell version v3.4.10

connecting to: mongodb://127.0.0.1:27001/

MongoDB server version: 3.4.10

Server has startup warnings:

2019-02-14T01:22:22.273-0500 I CONTROL [initandlisten]

2019-02-14T01:22:22.273-0500 I CONTROL [initandlisten] ** WARNING: Access control is not enabled for the database.

2019-02-14T01:22:22.273-0500 I CONTROL [initandlisten] ** Read and write access to data and configuration is unrestricted.

2019-02-14T01:22:22.273-0500 I CONTROL [initandlisten] ** WARNING: You are running this process as the root user, which is not recommended.

2019-02-14T01:22:22.273-0500 I CONTROL [initandlisten]

2019-02-14T01:22:22.274-0500 I CONTROL [initandlisten]

2019-02-14T01:22:22.274-0500 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.

2019-02-14T01:22:22.274-0500 I CONTROL [initandlisten] ** We suggest setting it to 'never'

2019-02-14T01:22:22.274-0500 I CONTROL [initandlisten]

2019-02-14T01:22:22.274-0500 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.

2019-02-14T01:22:22.274-0500 I CONTROL [initandlisten] ** We suggest setting it to 'never'

2019-02-14T01:22:22.274-0500 I CONTROL [initandlisten]

> use admin

switched to db admin

> config = {"_id" : "shard1","members" : [{"_id" : 0,"host" : "192.168.197.21:27001"},{"_id" : 1,"host" : "192.168.197.22:27001"},{"_id" : 2,"host" : "192.168.197.23:27001",arbiterOnly: true}]}

{

"_id" : "shard1",

"members" : [

{

"_id" : 0,

"host" : "192.168.197.21:27001"

},

{

"_id" : 1,

"host" : "192.168.197.22:27001"

},

{

"_id" : 2,

"host" : "192.168.197.23:27001",

"arbiterOnly" : true

}

]

}

> rs.initiate(config);

{ "ok" : 1 }

Configure the second shard replica set

Create the shard2.conf file on all three servers, write in the configuration, and start the shard2 server:

[root@mongo01 local]# vi /usr/local/mongodb/conf/shard2.conf

#configuration file contents

#--------------------------------------------------

pidfilepath = /usr/local/mongodb/shard2/log/shard2.pid

dbpath = /usr/local/mongodb/shard2/data

logpath = /usr/local/mongodb/shard2/log/shard2.log

logappend = true

bind_ip = 0.0.0.0

port = 27002

fork = true

#enable the web monitoring interface

httpinterface=true

rest=true

#replica set name

replSet=shard2

#declare this is a shard db of a cluster;

shardsvr = true

#maximum number of connections

maxConns=20000

Start the shard2 server on all three servers:

[root@mongo01 local]# mongod -f /usr/local/mongodb/conf/shard2.conf

about to fork child process, waiting until server is ready for connections.

forked process: 2474

child process started successfully, parent exiting

Log in to any one of the servers and initialize the replica set:

[root@mongo02 local]# mongo --port 27002

MongoDB shell version v3.4.10

connecting to: mongodb://127.0.0.1:27002/

MongoDB server version: 3.4.10

Welcome to the MongoDB shell.

For interactive help, type "help".

For more comprehensive documentation, see

http://docs.mongodb.org/

Questions? Try the support group

http://groups.google.com/group/mongodb-user

Server has startup warnings:

2019-02-14T01:40:03.566-0500 I CONTROL [initandlisten]

2019-02-14T01:40:03.566-0500 I CONTROL [initandlisten] ** WARNING: Access control is not enabled for the database.

2019-02-14T01:40:03.566-0500 I CONTROL [initandlisten] ** Read and write access to data and configuration is unrestricted.

2019-02-14T01:40:03.566-0500 I CONTROL [initandlisten] ** WARNING: You are running this process as the root user, which is not recommended.

2019-02-14T01:40:03.566-0500 I CONTROL [initandlisten]

2019-02-14T01:40:03.566-0500 I CONTROL [initandlisten]

2019-02-14T01:40:03.566-0500 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.

2019-02-14T01:40:03.566-0500 I CONTROL [initandlisten] ** We suggest setting it to 'never'

2019-02-14T01:40:03.566-0500 I CONTROL [initandlisten]

2019-02-14T01:40:03.566-0500 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.

2019-02-14T01:40:03.566-0500 I CONTROL [initandlisten] ** We suggest setting it to 'never'

2019-02-14T01:40:03.566-0500 I CONTROL [initandlisten]

> use admin

switched to db admin

> config = {"_id" : "shard2","members" : [{"_id" : 0,"host" : "192.168.197.21:27002",arbiterOnly: true},{"_id" : 1,"host" : "192.168.197.22:27002"},{"_id" : 2,"host" : "192.168.197.23:27002"}]}

{

"_id" : "shard2",

"members" : [

{

"_id" : 0,

"host" : "192.168.197.21:27002",

"arbiterOnly" : true

},

{

"_id" : 1,

"host" : "192.168.197.22:27002"

},

{

"_id" : 2,

"host" : "192.168.197.23:27002"

}

]

}

> rs.initiate(config);

{ "ok" : 1 }

Configure the third shard replica set

Create the shard3.conf file on all three servers, write in the configuration, and start the shard3 server:

[root@mongo01 local]# vi /usr/local/mongodb/conf/shard3.conf

#configuration file contents

#--------------------------------------------------

pidfilepath = /usr/local/mongodb/shard3/log/shard3.pid

dbpath = /usr/local/mongodb/shard3/data

logpath = /usr/local/mongodb/shard3/log/shard3.log

logappend = true

bind_ip = 0.0.0.0

port = 27003

fork = true

#enable the web monitoring interface

httpinterface=true

rest=true

#replica set name

replSet=shard3

#declare this is a shard db of a cluster;

shardsvr = true

#maximum number of connections

maxConns=20000

Start the shard3 server on all three servers:

[root@mongo01 local]# mongod -f /usr/local/mongodb/conf/shard3.conf

about to fork child process, waiting until server is ready for connections.

forked process: 2537

child process started successfully, parent exiting

Log in to any one of the servers and initialize the replica set:

[root@mongo01 local]# mongo --port 27003

MongoDB shell version v3.4.10

connecting to: mongodb://127.0.0.1:27003/

MongoDB server version: 3.4.10

Server has startup warnings:

2019-02-14T02:09:13.007-0500 I CONTROL [initandlisten]

2019-02-14T02:09:13.007-0500 I CONTROL [initandlisten] ** WARNING: Access control is not enabled for the database.

2019-02-14T02:09:13.007-0500 I CONTROL [initandlisten] ** Read and write access to data and configuration is unrestricted.

2019-02-14T02:09:13.007-0500 I CONTROL [initandlisten] ** WARNING: You are running this process as the root user, which is not recommended.

2019-02-14T02:09:13.007-0500 I CONTROL [initandlisten]

2019-02-14T02:09:13.008-0500 I CONTROL [initandlisten]

2019-02-14T02:09:13.008-0500 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.

2019-02-14T02:09:13.008-0500 I CONTROL [initandlisten] ** We suggest setting it to 'never'

2019-02-14T02:09:13.008-0500 I CONTROL [initandlisten]

2019-02-14T02:09:13.008-0500 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.

2019-02-14T02:09:13.008-0500 I CONTROL [initandlisten] ** We suggest setting it to 'never'

2019-02-14T02:09:13.008-0500 I CONTROL [initandlisten]

> use admin

switched to db admin

> config = {"_id" : "shard3","members" : [{"_id" : 0,"host" : "192.168.197.21:27003"},{"_id" : 1,"host" : "192.168.197.22:27003",arbiterOnly: true},{"_id" : 2,"host" : "192.168.197.23:27003"}]}

{

"_id" : "shard3",

"members" : [

{

"_id" : 0,

"host" : "192.168.197.21:27003"

},

{

"_id" : 1,

"host" : "192.168.197.22:27003",

"arbiterOnly" : true

},

{

"_id" : 2,

"host" : "192.168.197.23:27003"

}

]

}

> rs.initiate(config);

{ "ok" : 1 }

Configure the mongos routing servers (configure on all three servers)

[root@mongo01 local]# vi /usr/local/mongodb/conf/mongos.conf

#contents

pidfilepath = /usr/local/mongodb/mongos/log/mongos.pid

logpath = /usr/local/mongodb/mongos/log/mongos.log

logappend = true

bind_ip = 0.0.0.0

port = 20000

fork = true

#config servers to monitor; there can only be 1 or 3; configs is the replica set name of the config servers

configdb = configs/192.168.197.21:21000,192.168.197.22:21000,192.168.197.23:21000

#maximum number of connections

maxConns=20000

Start the mongos server on all three servers:

[root@mongo01 local]# mongos -f /usr/local/mongodb/conf/mongos.conf

about to fork child process, waiting until server is ready for connections.

forked process: 2627

child process started successfully, parent exiting

Enable sharding

At this point the MongoDB config servers, routing servers and shard servers have all been set up, but an application connecting to the mongos routers cannot use sharding yet: the shards still have to be added and sharding enabled on mongos before it takes effect.

Log in to any one of the mongos instances:

[root@mongo01 local]# mongo --port 20000

MongoDB shell version v3.4.10

connecting to: mongodb://127.0.0.1:20000/

MongoDB server version: 3.4.10

Server has startup warnings:

2019-02-14T02:19:38.619-0500 I CONTROL [main]

2019-02-14T02:19:38.619-0500 I CONTROL [main] ** WARNING: Access control is not enabled for the database.

2019-02-14T02:19:38.619-0500 I CONTROL [main] ** Read and write access to data and configuration is unrestricted.

2019-02-14T02:19:38.619-0500 I CONTROL [main] ** WARNING: You are running this process as the root user, which is not recommended.

2019-02-14T02:19:38.619-0500 I CONTROL [main]

mongos> use admin

switched to db admin

mongos> sh.addShard("shard1/192.168.197.21:27001,192.168.197.22:27001,192.168.197.23:27001")

{ "shardAdded" : "shard1", "ok" : 1 }

mongos> sh.addShard("shard2/192.168.197.21:27002,192.168.197.22:27002,192.168.197.23:27002")

{ "shardAdded" : "shard2", "ok" : 1 }

mongos> sh.addShard("shard3/192.168.197.21:27003,192.168.197.22:27003,192.168.197.23:27003")

{ "shardAdded" : "shard3", "ok" : 1 }

mongos> sh.status()

--- Sharding Status ---

sharding version: {

"_id" : 1,

"minCompatibleVersion" : 5,

"currentVersion" : 6,

"clusterId" : ObjectId("5c6504e0b49b12008ede6ffe")

}

shards:

{ "_id" : "shard1", "host" : "shard1/192.168.197.21:27001,192.168.197.22:27001", "state" : 1 }

{ "_id" : "shard2", "host" : "shard2/192.168.197.22:27002,192.168.197.23:27002", "state" : 1 }

{ "_id" : "shard3", "host" : "shard3/192.168.197.21:27003,192.168.197.23:27003", "state" : 1 }

active mongoses:

"3.4.10" : 3

autosplit:

Currently enabled: yes

balancer:

Currently enabled: yes

Currently running: no

NaN

Failed balancer rounds in last 5 attempts: 0

Migration Results for the last 24 hours:

No recent migrations

databases:

Enable automatic sharding for the target database

#enable sharding for the testdb database

mongos> db.runCommand( { enablesharding :"testdb"});

{ "ok" : 1 }

#shard the collection, distributing documents evenly across the shards with a hashed key

mongos> db.runCommand( { shardcollection : "testdb.table1",key : {id: "hashed"} } )

{ "collectionsharded" : "testdb.table1", "ok" : 1 }

Testing

Connect to the database, switch to the testdb database, insert 100,000 documents, and then check how they are distributed across the shards:

[root@mongo01 local]# mongo 127.0.0.1:20000

MongoDB shell version v3.4.10

connecting to: 127.0.0.1:20000

MongoDB server version: 3.4.10

Server has startup warnings:

2019-02-14T02:19:38.619-0500 I CONTROL [main]

2019-02-14T02:19:38.619-0500 I CONTROL [main] ** WARNING: Access control is not enabled for the database.

2019-02-14T02:19:38.619-0500 I CONTROL [main] ** Read and write access to data and configuration is unrestricted.

2019-02-14T02:19:38.619-0500 I CONTROL [main] ** WARNING: You are running this process as the root user, which is not recommended.

2019-02-14T02:19:38.619-0500 I CONTROL [main]

#use the testdb database

mongos> use testdb

switched to db testdb

#insert 100,000 test documents

mongos> for (var i = 1; i <= 100000; i++)db.table1.save({id:i,"test1":"testval1"});

WriteResult({ "nInserted" : 1 })

#check the collection statistics

mongos> db.table1.stats();

{

"sharded" : true,

"capped" : false,

"ns" : "testdb.table1",

"count" : 100000,

"size" : 5400000,

"storageSize" : 1798144,

"totalIndexSize" : 4202496,

"indexSizes" : {

"_id_" : 1032192,

"id_hashed" : 3170304

},

"avgObjSize" : 54,

"nindexes" : 2,

"nchunks" : 6,

"shards" : {

"shard1" : {

"ns" : "testdb.table1",

"size" : 1822770,

"count" : 33755,

"avgObjSize" : 54,

"storageSize" : 606208,

"capped" : false,

"wiredTiger" : {

"metadata" : {

"formatVersion" : 1

},

"creationString" : "access_pattern_hint=none,allocation_size=4KB,app_metadata=(formatVersion=1),block_allocation=best,block_compressor=snappy,cache_resident=false,checksum=on,colgroups=,collator=,columns=,dictionary=0,encryption=(keyid=,name=),exclusive=false,extractor=,format=btree,huffman_key=,huffman_value=,ignore_in_memory_cache_size=false,immutable=false,internal_item_max=0,internal_key_max=0,internal_key_truncate=true,internal_page_max=4KB,key_format=q,key_gap=10,leaf_item_max=0,leaf_key_max=0,leaf_page_max=32KB,leaf_value_max=64MB,log=(enabled=true),lsm=(auto_throttle=true,bloom=true,bloom_bit_count=16,bloom_config=,bloom_hash_count=8,bloom_oldest=false,chunk_count_limit=0,chunk_max=5GB,chunk_size=10MB,merge_max=15,merge_min=0),memory_page_max=10m,os_cache_dirty_max=0,os_cache_max=0,prefix_compression=false,prefix_compression_min=4,source=,split_deepen_min_child=0,split_deepen_per_child=0,split_pct=90,type=file,value_format=u",

"type" : "file",

"uri" : "statistics:table:collection-14-7145832333757066760",

"LSM" : {

"bloom filter false positives" : 0,

"bloom filter hits" : 0,

"bloom filter misses" : 0,

"bloom filter pages evicted from cache" : 0,

"bloom filter pages read into cache" : 0,

"bloom filters in the LSM tree" : 0,

"chunks in the LSM tree" : 0,

"highest merge generation in the LSM tree" : 0,

"queries that could have benefited from a Bloom filter that did not exist" : 0,

"sleep for LSM checkpoint throttle" : 0,

"sleep for LSM merge throttle" : 0,

"total size of bloom filters" : 0

},

"block-manager" : {

"allocations requiring file extension" : 80,

"blocks allocated" : 87,

"blocks freed" : 6,

"checkpoint size" : 569344,

"file allocation unit size" : 4096,

"file bytes available for reuse" : 20480,

"file magic number" : 120897,

"file major version number" : 1,

"file size in bytes" : 606208,

"minor version number" : 0

},

"btree" : {

"btree checkpoint generation" : 68,

"column-store fixed-size leaf pages" : 0,

"column-store internal pages" : 0,

"column-store variable-size RLE encoded values" : 0,

"column-store variable-size deleted values" : 0,

"column-store variable-size leaf pages" : 0,

"fixed-record size" : 0,

"maximum internal page key size" : 368,

"maximum internal page size" : 4096,

"maximum leaf page key size" : 2867,

"maximum leaf page size" : 32768,

"maximum leaf page value size" : 67108864,

"maximum tree depth" : 3,

"number of key/value pairs" : 0,

"overflow pages" : 0,

"pages rewritten by compaction" : 0,

"row-store internal pages" : 0,

"row-store leaf pages" : 0

},

"cache" : {

"bytes currently in the cache" : 4570737,

"bytes read into cache" : 0,

"bytes written from cache" : 2125514,

"checkpoint blocked page eviction" : 0,

"data source pages selected for eviction unable to be evicted" : 0,

"hazard pointer blocked page eviction" : 0,

"in-memory page passed criteria to be split" : 0,

"in-memory page splits" : 0,

"internal pages evicted" : 0,

"internal pages split during eviction" : 0,

"leaf pages split during eviction" : 0,

"modified pages evicted" : 0,

"overflow pages read into cache" : 0,

"overflow values cached in memory" : 0,

"page split during eviction deepened the tree" : 0,

"page written requiring lookaside records" : 0,

"pages read into cache" : 0,

"pages read into cache requiring lookaside entries" : 0,

"pages requested from the cache" : 33755,

"pages written from cache" : 80,

"pages written requiring in-memory restoration" : 0,

"tracked dirty bytes in the cache" : 0,

"unmodified pages evicted" : 0

},

"cache_walk" : {

"Average difference between current eviction generation when the page was last considered" : 0,

"Average on-disk page image size seen" : 0,

"Clean pages currently in cache" : 0,

"Current eviction generation" : 0,

"Dirty pages currently in cache" : 0,

"Entries in the root page" : 0,

"Internal pages currently in cache" : 0,

"Leaf pages currently in cache" : 0,

"Maximum difference between current eviction generation when the page was last considered" : 0,

"Maximum page size seen" : 0,

"Minimum on-disk page image size seen" : 0,

"On-disk page image sizes smaller than a single allocation unit" : 0,

"Pages created in memory and never written" : 0,

"Pages currently queued for eviction" : 0,

"Pages that could not be queued for eviction" : 0,

"Refs skipped during cache traversal" : 0,

"Size of the root page" : 0,

"Total number of pages currently in cache" : 0

},

"compression" : {

"compressed pages read" : 0,

"compressed pages written" : 76,

"page written failed to compress" : 0,

"page written was too small to compress" : 4,

"raw compression call failed, additional data available" : 0,

"raw compression call failed, no additional data available" : 0,

"raw compression call succeeded" : 0

},

"cursor" : {

"bulk-loaded cursor-insert calls" : 0,

"create calls" : 4,

"cursor-insert key and value bytes inserted" : 1915461,

"cursor-remove key bytes removed" : 0,

"cursor-update value bytes updated" : 0,

"insert calls" : 33755,

"next calls" : 1,

"prev calls" : 1,

"remove calls" : 0,

"reset calls" : 33757,

"restarted searches" : 0,

"search calls" : 0,

"search near calls" : 0,

"truncate calls" : 0,

"update calls" : 0

},

"reconciliation" : {

"dictionary matches" : 0,

"fast-path pages deleted" : 0,

"internal page key bytes discarded using suffix compression" : 326,

"internal page multi-block writes" : 0,

"internal-page overflow keys" : 0,

"leaf page key bytes discarded using prefix compression" : 0,

"leaf page multi-block writes" : 4,

"leaf-page overflow keys" : 0,

"maximum blocks required for a page" : 70,

"overflow values written" : 0,

"page checksum matches" : 89,

"page reconciliation calls" : 8,

"page reconciliation calls for eviction" : 0,

"pages deleted" : 0

},

"session" : {

"object compaction" : 0,

"open cursor count" : 4

},

"transaction" : {

"update conflicts" : 0

}

},

"nindexes" : 2,

"totalIndexSize" : 1437696,

"indexSizes" : {

"_id_" : 348160,

"id_hashed" : 1089536

},

"ok" : 1

},

"shard2" : {

"ns" : "testdb.table1",

"size" : 1789722,

"count" : 33143,

"avgObjSize" : 54,

"storageSize" : 598016,

"capped" : false,

"wiredTiger" : {

"metadata" : {

"formatVersion" : 1

},

"creationString" : "access_pattern_hint=none,allocation_size=4KB,app_metadata=(formatVersion=1),block_allocation=best,block_compressor=snappy,cache_resident=false,checksum=on,colgroups=,collator=,columns=,dictionary=0,encryption=(keyid=,name=),exclusive=false,extractor=,format=btree,huffman_key=,huffman_value=,ignore_in_memory_cache_size=false,immutable=false,internal_item_max=0,internal_key_max=0,internal_key_truncate=true,internal_page_max=4KB,key_format=q,key_gap=10,leaf_item_max=0,leaf_key_max=0,leaf_page_max=32KB,leaf_value_max=64MB,log=(enabled=true),lsm=(auto_throttle=true,bloom=true,bloom_bit_count=16,bloom_config=,bloom_hash_count=8,bloom_oldest=false,chunk_count_limit=0,chunk_max=5GB,chunk_size=10MB,merge_max=15,merge_min=0),memory_page_max=10m,os_cache_dirty_max=0,os_cache_max=0,prefix_compression=false,prefix_compression_min=4,source=,split_deepen_min_child=0,split_deepen_per_child=0,split_pct=90,type=file,value_format=u",

"type" : "file",

"uri" : "statistics:table:collection-14-449387002177215808",

"LSM" : {

"bloom filter false positives" : 0,

"bloom filter hits" : 0,

"bloom filter misses" : 0,

"bloom filter pages evicted from cache" : 0,

"bloom filter pages read into cache" : 0,

"bloom filters in the LSM tree" : 0,

"chunks in the LSM tree" : 0,

"highest merge generation in the LSM tree" : 0,

"queries that could have benefited from a Bloom filter that did not exist" : 0,

"sleep for LSM checkpoint throttle" : 0,

"sleep for LSM merge throttle" : 0,

"total size of bloom filters" : 0

},

"block-manager" : {

"allocations requiring file extension" : 78,

"blocks allocated" : 85,

"blocks freed" : 6,

"checkpoint size" : 557056,

"file allocation unit size" : 4096,

"file bytes available for reuse" : 24576,

"file magic number" : 120897,

"file major version number" : 1,

"file size in bytes" : 598016,

"minor version number" : 0

},

"btree" : {

"btree checkpoint generation" : 38,

"column-store fixed-size leaf pages" : 0,

"column-store internal pages" : 0,

"column-store variable-size RLE encoded values" : 0,

"column-store variable-size deleted values" : 0,

"column-store variable-size leaf pages" : 0,

"fixed-record size" : 0,

"maximum internal page key size" : 368,

"maximum internal page size" : 4096,

"maximum leaf page key size" : 2867,

"maximum leaf page size" : 32768,

"maximum leaf page value size" : 67108864,

"maximum tree depth" : 3,

"number of key/value pairs" : 0,

"overflow pages" : 0,

"pages rewritten by compaction" : 0,

"row-store internal pages" : 0,

"row-store leaf pages" : 0

},

"cache" : {

"bytes currently in the cache" : 4488480,

"bytes read into cache" : 0,

"bytes written from cache" : 2083980,

"checkpoint blocked page eviction" : 0,

"data source pages selected for eviction unable to be evicted" : 0,

"hazard pointer blocked page eviction" : 0,

"in-memory page passed criteria to be split" : 0,

"in-memory page splits" : 0,

"internal pages evicted" : 0,

"internal pages split during eviction" : 0,

"leaf pages split during eviction" : 0,

"modified pages evicted" : 0,

"overflow pages read into cache" : 0,

"overflow values cached in memory" : 0,

"page split during eviction deepened the tree" : 0,

"page written requiring lookaside records" : 0,

"pages read into cache" : 0,

"pages read into cache requiring lookaside entries" : 0,

"pages requested from the cache" : 33143,

"pages written from cache" : 78,

"pages written requiring in-memory restoration" : 0,

"tracked dirty bytes in the cache" : 0,

"unmodified pages evicted" : 0

},

"cache_walk" : {

"Average difference between current eviction generation when the page was last considered" : 0,

"Average on-disk page image size seen" : 0,

"Clean pages currently in cache" : 0,

"Current eviction generation" : 0,

"Dirty pages currently in cache" : 0,

"Entries in the root page" : 0,

"Internal pages currently in cache" : 0,

"Leaf pages currently in cache" : 0,

"Maximum difference between current eviction generation when the page was last considered" : 0,

"Maximum page size seen" : 0,

"Minimum on-disk page image size seen" : 0,

"On-disk page image sizes smaller than a single allocation unit" : 0,

"Pages created in memory and never written" : 0,

"Pages currently queued for eviction" : 0,

"Pages that could not be queued for eviction" : 0,

"Refs skipped during cache traversal" : 0,

"Size of the root page" : 0,

"Total number of pages currently in cache" : 0

},

"compression" : {

"compressed pages read" : 0,

"compressed pages written" : 73,

"page written failed to compress" : 0,

"page written was too small to compress" : 5,

"raw compression call failed, additional data available" : 0,

"raw compression call failed, no additional data available" : 0,

"raw compression call succeeded" : 0

},

"cursor" : {

"bulk-loaded cursor-insert calls" : 0,

"create calls" : 3,

"cursor-insert key and value bytes inserted" : 1880577,

"cursor-remove key bytes removed" : 0,

"cursor-update value bytes updated" : 0,

"insert calls" : 33143,

"next calls" : 1,

"prev calls" : 1,

"remove calls" : 0,

"reset calls" : 33145,

"restarted searches" : 0,

"search calls" : 0,

"search near calls" : 0,

"truncate calls" : 0,

"update calls" : 0

},

"reconciliation" : {

"dictionary matches" : 0,

"fast-path pages deleted" : 0,

"internal page key bytes discarded using suffix compression" : 279,

"internal page multi-block writes" : 0,

"internal-page overflow keys" : 0,

"leaf page key bytes discarded using prefix compression" : 0,

"leaf page multi-block writes" : 3,

"leaf-page overflow keys" : 0,

"maximum blocks required for a page" : 68,

"overflow values written" : 0,

"page checksum matches" : 67,

"page reconciliation calls" : 8,

"page reconciliation calls for eviction" : 0,

"pages deleted" : 0

},

"session" : {

"object compaction" : 0,

"open cursor count" : 3

},

"transaction" : {

"update conflicts" : 0

}

},

"nindexes" : 2,

"totalIndexSize" : 1368064,

"indexSizes" : {

"_id_" : 339968,

"id_hashed" : 1028096

},

"ok" : 1

},

"shard3" : {

"ns" : "testdb.table1",

"size" : 1787508,

"count" : 33102,

"avgObjSize" : 54,

"storageSize" : 593920,

"capped" : false,

"wiredTiger" : {

"metadata" : {

"formatVersion" : 1

},

"creationString" : "access_pattern_hint=none,allocation_size=4KB,app_metadata=(formatVersion=1),block_allocation=best,block_compressor=snappy,cache_resident=false,checksum=on,colgroups=,collator=,columns=,dictionary=0,encryption=(keyid=,name=),exclusive=false,extractor=,format=btree,huffman_key=,huffman_value=,ignore_in_memory_cache_size=false,immutable=false,internal_item_max=0,internal_key_max=0,internal_key_truncate=true,internal_page_max=4KB,key_format=q,key_gap=10,leaf_item_max=0,leaf_key_max=0,leaf_page_max=32KB,leaf_value_max=64MB,log=(enabled=true),lsm=(auto_throttle=true,bloom=true,bloom_bit_count=16,bloom_config=,bloom_hash_count=8,bloom_oldest=false,chunk_count_limit=0,chunk_max=5GB,chunk_size=10MB,merge_max=15,merge_min=0),memory_page_max=10m,os_cache_dirty_max=0,os_cache_max=0,prefix_compression=false,prefix_compression_min=4,source=,split_deepen_min_child=0,split_deepen_per_child=0,split_pct=90,type=file,value_format=u",

"type" : "file",

"uri" : "statistics:table:collection-14--7663219281092140169",

"LSM" : {

"bloom filter false positives" : 0,

"bloom filter hits" : 0,

"bloom filter misses" : 0,

"bloom filter pages evicted from cache" : 0,

"bloom filter pages read into cache" : 0,

"bloom filters in the LSM tree" : 0,

"chunks in the LSM tree" : 0,

"highest merge generation in the LSM tree" : 0,

"queries that could have benefited from a Bloom filter that did not exist" : 0,

"sleep for LSM checkpoint throttle" : 0,

"sleep for LSM merge throttle" : 0,

"total size of bloom filters" : 0

},

"block-manager" : {

"allocations requiring file extension" : 79,

"blocks allocated" : 86,

"blocks freed" : 7,

"checkpoint size" : 557056,

"file allocation unit size" : 4096,

"file bytes available for reuse" : 20480,

"file magic number" : 120897,

"file major version number" : 1,

"file size in bytes" : 593920,

"minor version number" : 0

},

"btree" : {

"btree checkpoint generation" : 27,

"column-store fixed-size leaf pages" : 0,

"column-store internal pages" : 0,

"column-store variable-size RLE encoded values" : 0,

"column-store variable-size deleted values" : 0,

"column-store variable-size leaf pages" : 0,

"fixed-record size" : 0,

"maximum internal page key size" : 368,

"maximum internal page size" : 4096,

"maximum leaf page key size" : 2867,

"maximum leaf page size" : 32768,

"maximum leaf page value size" : 67108864,

"maximum tree depth" : 3,

"number of key/value pairs" : 0,

"overflow pages" : 0,

"pages rewritten by compaction" : 0,

"row-store internal pages" : 0,

"row-store leaf pages" : 0

},

"cache" : {

"bytes currently in the cache" : 4483534,

"bytes read into cache" : 0,

"bytes written from cache" : 2091988,

"checkpoint blocked page eviction" : 0,

"data source pages selected for eviction unable to be evicted" : 0,

"hazard pointer blocked page eviction" : 0,

"in-memory page passed criteria to be split" : 0,

"in-memory page splits" : 0,

"internal pages evicted" : 0,

"internal pages split during eviction" : 0,

"leaf pages split during eviction" : 0,

"modified pages evicted" : 0,

"overflow pages read into cache" : 0,

"overflow values cached in memory" : 0,

"page split during eviction deepened the tree" : 0,

"page written requiring lookaside records" : 0,

"pages read into cache" : 0,

"pages read into cache requiring lookaside entries" : 0,

"pages requested from the cache" : 33102,

"pages written from cache" : 79,

"pages written requiring in-memory restoration" : 0,

"tracked dirty bytes in the cache" : 0,

"unmodified pages evicted" : 0

},

"cache_walk" : {

"Average difference between current eviction generation when the page was last considered" : 0,

"Average on-disk page image size seen" : 0,

"Clean pages currently in cache" : 0,

"Current eviction generation" : 0,

"Dirty pages currently in cache" : 0,

"Entries in the root page" : 0,

"Internal pages currently in cache" : 0,

"Leaf pages currently in cache" : 0,

"Maximum difference between current eviction generation when the page was last considered" : 0,

"Maximum page size seen" : 0,

"Minimum on-disk page image size seen" : 0,

"On-disk page image sizes smaller than a single allocation unit" : 0,

"Pages created in memory and never written" : 0,

"Pages currently queued for eviction" : 0,

"Pages that could not be queued for eviction" : 0,

"Refs skipped during cache traversal" : 0,

"Size of the root page" : 0,

"Total number of pages currently in cache" : 0

},

"compression" : {

"compressed pages read" : 0,

"compressed pages written" : 75,

"page written failed to compress" : 0,

"page written was too small to compress" : 4,

"raw compression call failed, additional data available" : 0,

"raw compression call failed, no additional data available" : 0,

"raw compression call succeeded" : 0

},

"cursor" : {

"bulk-loaded cursor-insert calls" : 0,

"create calls" : 4,

"cursor-insert key and value bytes inserted" : 1878240,

"cursor-remove key bytes removed" : 0,

"cursor-update value bytes updated" : 0,

"insert calls" : 33102,

"next calls" : 1,

"prev calls" : 1,

"remove calls" : 0,

"reset calls" : 33104,

"restarted searches" : 0,

"search calls" : 0,

"search near calls" : 0,

"truncate calls" : 0,

"update calls" : 0

},

"reconciliation" : {

"dictionary matches" : 0,

"fast-path pages deleted" : 0,

"internal page key bytes discarded using suffix compression" : 292,

"internal page multi-block writes" : 0,

"internal-page overflow keys" : 0,

"leaf page key bytes discarded using prefix compression" : 0,

"leaf page multi-block writes" : 4,

"leaf-page overflow keys" : 0,

"maximum blocks required for a page" : 68,

"overflow values written" : 0,

"page checksum matches" : 74,

"page reconciliation calls" : 8,

"page reconciliation calls for eviction" : 0,

"pages deleted" : 0

},

"session" : {

"object compaction" : 0,

"open cursor count" : 4

},

"transaction" : {

"update conflicts" : 0

}

},

"nindexes" : 2,

"totalIndexSize" : 1396736,

"indexSizes" : {

"_id_" : 344064,

"id_hashed" : 1052672

},

"ok" : 1

}

},

"ok" : 1

}

The documents are spread across the 3 shards, with shard1 "count" : 33755, shard2 "count" : 33143 and shard3 "count" : 33102. Sharding is working.
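A more compact way to check the per-shard distribution, instead of reading the full stats output, is the getShardDistribution() helper run against the collection from mongos:

mongos> db.table1.getShardDistribution()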

Day-to-day operations

The startup order is: start the config servers first, then the shards, and finally mongos.

mongod -f /usr/local/mongodb/conf/config.conf

mongod -f /usr/local/mongodb/conf/shard1.conf

mongod -f /usr/local/mongodb/conf/shard2.conf

mongod -f /usr/local/mongodb/conf/shard3.conf

mongos -f /usr/local/mongodb/conf/mongos.conf

To shut down, simply kill all the processes with killall (a gentler alternative is sketched after the commands below):

killall mongod

killall mongos
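killall sends SIGTERM, which mongod and mongos handle as a clean shutdown. If you prefer to stop each process explicitly, a rough sketch using db.shutdownServer() is shown below, run on each server, shutting down mongos first, then the shards, then the config servers; note that shutting down a replica set primary may require { force: true } if no secondary is caught up:

mongo --port 20000 admin --eval "db.shutdownServer()"
mongo --port 27001 admin --eval "db.shutdownServer()"
mongo --port 27002 admin --eval "db.shutdownServer()"
mongo --port 27003 admin --eval "db.shutdownServer()"
mongo --port 21000 admin --eval "db.shutdownServer()"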

Errors encountered during deployment

Replica set initialization errors:

The replica set configuration was wrong, and rs.initiate(config); had already been run with it. Running the initialization command again then failed as follows:

shard1:SECONDARY> rs.initiate(config);

{

"info" : "try querying local.system.replset to see current configuration",

"ok" : 0,

"errmsg" : "already initialized",

"code" : 23,

"codeName" : "AlreadyInitialized"

}

It reports that the set is already initialized. In that case you can force a reconfiguration with the following command:

rs.reconfig(config, {force: true})

The forced reconfiguration still failed, with the following error:

shard1:SECONDARY> rs.reconfig(config, {force: true})

{

"ok" : 0,

"errmsg" : "New and old configurations differ in the setting of the arbiterOnly field for member 192.168.197.23:27001; to make this change, remove then re-add the member",

"code" : 103,

"codeName" : "NewReplicaSetConfigurationIncompatible"

}

Find out which node is the primary and run the remove operation on it (if the member to be removed is the primary itself, it cannot remove itself):

rs.remove('192.168.197.23:27001')

After removing the member, add the node back with the following command, again on the primary:

rs.add("192.168.197.23:27001");

If the node being added is an arbiter, use the following command instead:

rs.addArb("192.168.197.23:27001");

After adding the node, redefine the configuration with the following command (adjust the IP addresses, ports and shard name to your actual setup):

config = {"_id" : "shard3","members" : [{"_id" : 0,"host" : "192.168.197.21:27003"},{"_id" : 1,"host" : "192.168.197.22:27003",arbiterOnly: true},{"_id" : 2,"host" : "192.168.197.23:27003"}]}

After redefining the configuration, force the reconfiguration with:

rs.reconfig(config, {force: true})

Custom sharding parameters

#shard the collection, distributing documents evenly across the shards with a hashed key

db.runCommand( { shardcollection : "testdb.table1",key : {id: "hashed"} } )
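For comparison, a ranged (non-hashed) shard key would be declared with an ascending key pattern instead; testdb.table2 below is only a hypothetical collection name used for illustration:

db.runCommand( { shardcollection : "testdb.table2", key : { id : 1 } } )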

This concludes the tutorial on configuring a MongoDB sharded cluster.
