Setting up a MongoDB distributed cluster (sharded cluster + keyfile security authentication and user permissions)
2020-01-02 12:56:37
Introduction:
Sharding is the process of splitting a database and spreading its data across multiple machines. By distributing the data, you can store more of it and handle a heavier load without relying on a single powerful server. The basic idea is to split a collection into small chunks and distribute those chunks across several shards, so each shard holds only part of the total data; a balancer keeps the shards even by migrating chunks between them. All access goes through a routing process called mongos, which knows (via the config servers) which data lives on which shard. Most deployments use sharding to solve disk-space problems; write performance may get worse, and cross-shard queries should be avoided whenever possible. A sharded cluster consists of the following components:
Shard Server: a mongod instance that stores the actual data chunks. In production, each shard is usually a replica set made up of several machines, to protect against single-host failures.
Config Server: a mongod instance that stores the cluster metadata, including chunk information.
Route Server: a mongos instance that acts as the front-end router. Clients connect through it, so the whole cluster looks like a single database and applications can use it transparently.
Host plan:
IP address | Instance (port) | Instance (port) | Instance (port) | Instance (port) | Instance (port) |
192.168.2.3 | mongos(27017) | configsvr(20000) | shard1(27018) | shard2(27019) | shard3(27020) |
192.168.2.4 | mongos(27017) | configsvr(20000) | shard1(27018) | shard2(27019) | shard3(27020) |
192.168.2.5 | mongos(27017) | configsvr(20000) | shard1(27018) | shard2(27019) | shard3(27020) |
Directory creation:
Create the corresponding directories on every server:
mkdir -p /data/mongodb/{shard1,shard2,shard3}/db
mkdir -p /data/mongodb/mongos/db
mkdir -p /data/mongodb/configsvr/db
mkdir -p /data/mongodb/{conf,logs}
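If passwordless SSH between the three machines is already set up, the same directories can be created on all hosts in one pass from any one of them. This is only a sketch; the root user and bash as the remote shell are assumptions:

for host in 192.168.2.3 192.168.2.4 192.168.2.5; do
  ssh root@$host 'mkdir -p /data/mongodb/{shard1,shard2,shard3}/db /data/mongodb/mongos/db /data/mongodb/configsvr/db /data/mongodb/{conf,logs}'
done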
Create the configuration files:
touch /data/mongodb/conf/configsvr.conf
touch /data/mongodb/conf/mongos.conf
touch /data/mongodb/conf/shard1.conf
touch /data/mongodb/conf/shard2.conf
touch /data/mongodb/conf/shard3.conf
Configuration file details:
Official documentation: https://docs.mongodb.com/manual/reference/configuration-options/
configsvr.conf
systemLog:
  destination: file
  logAppend: true
  path: /data/mongodb/logs/configsvr.log
storage:
  dbPath: /data/mongodb/configsvr/db
  journal:
    enabled: true
processManagement:
  fork: true
  pidFilePath: /data/mongodb/configsvr/configsvr.pid
net:
  port: 20000
  bindIp: 0.0.0.0
replication:
  replSetName: config
sharding:
  clusterRole: configsvr
shard1.conf
systemLog:
  destination: file
  logAppend: true
  path: /data/mongodb/logs/shard1.log
storage:
  dbPath: /data/mongodb/shard1/db
  journal:
    enabled: true
processManagement:
  fork: true
  pidFilePath: /data/mongodb/shard1/shard1.pid
net:
  port: 27018
  bindIp: 0.0.0.0
replication:
  replSetName: shard1
sharding:
  clusterRole: shardsvr
shard2.conf
systemLog:
  destination: file
  logAppend: true
  path: /data/mongodb/logs/shard2.log
storage:
  dbPath: /data/mongodb/shard2/db
  journal:
    enabled: true
processManagement:
  fork: true
  pidFilePath: /data/mongodb/shard2/shard2.pid
net:
  port: 27019
  bindIp: 0.0.0.0
replication:
  replSetName: shard2
sharding:
  clusterRole: shardsvr
shard3.conf
systemLog:
  destination: file
  logAppend: true
  path: /data/mongodb/logs/shard3.log
storage:
  dbPath: /data/mongodb/shard3/db
  journal:
    enabled: true
processManagement:
  fork: true
  pidFilePath: /data/mongodb/shard3/shard3.pid
net:
  port: 27020
  bindIp: 0.0.0.0
replication:
  replSetName: shard3
sharding:
  clusterRole: shardsvr
mongos.conf
systemLog:
  destination: file
  logAppend: true
  path: /data/mongodb/logs/mongos.log
processManagement:
  fork: true
  pidFilePath: /data/mongodb/mongos/mongos.pid
net:
  port: 27017
  bindIp: 0.0.0.0
sharding:
  configDB: config/192.168.2.3:20000,192.168.2.4:20000,192.168.2.5:20000
Startup commands:
/data/mongodb/bin/mongod -f /data/mongodb/conf/configsvr.conf
/data/mongodb/bin/mongod -f /data/mongodb/conf/shard1.conf
/data/mongodb/bin/mongod -f /data/mongodb/conf/shard2.conf
/data/mongodb/bin/mongod -f /data/mongodb/conf/shard3.conf
/data/mongodb/bin/mongos -f /data/mongodb/conf/mongos.conf
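Running these five commands on each of the three servers gets tedious, so a small helper loop can start everything in the right order (config server and shards first, mongos last). This is only a sketch and assumes the binaries and config files live under the paths used above:

for f in configsvr shard1 shard2 shard3; do
  /data/mongodb/bin/mongod -f /data/mongodb/conf/$f.conf
done
/data/mongodb/bin/mongos -f /data/mongodb/conf/mongos.conf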
Stop commands:
killall mongod   # stops the config server and shards
killall mongos   # stops mongos
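killall sends SIGTERM, which mongod treats as a clean shutdown request. If you prefer to stop a single node from the shell instead, a minimal sketch (run against the node you want to stop, here shard1 on 192.168.2.3) is:

/data/mongodb/bin/mongo 192.168.2.3:27018/admin --eval 'db.shutdownServer()'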
Connect to shard1
/data/mongodb/bin/mongo 192.168.2.3:27018
Run:
rs.initiate({
  "_id": "shard1",
  "members": [
    { "_id": 0, "host": "192.168.2.3:27018" },
    { "_id": 1, "host": "192.168.2.4:27018" },
    { "_id": 2, "host": "192.168.2.5:27018" }
  ]
})
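After each rs.initiate() it is worth waiting until the set has elected a primary before moving on; the same check applies to shard2, shard3 and the config replica set below. A quick way to see each member's state from the shell:

rs.status().members.forEach(function(m) { print(m.name + " " + m.stateStr) })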
Connect to shard2
/data/mongodb/bin/mongo 192.168.2.3:27019
Run:
rs.initiate({
  "_id": "shard2",
  "members": [
    { "_id": 0, "host": "192.168.2.3:27019" },
    { "_id": 1, "host": "192.168.2.4:27019" },
    { "_id": 2, "host": "192.168.2.5:27019" }
  ]
})
Connect to shard3
/data/mongodb/bin/mongo 192.168.2.3:27020
Run:
rs.initiate({
  "_id": "shard3",
  "members": [
    { "_id": 0, "host": "192.168.2.3:27020" },
    { "_id": 1, "host": "192.168.2.4:27020" },
    { "_id": 2, "host": "192.168.2.5:27020" }
  ]
})
Connect to the config server replica set
/data/mongodb/bin/mongo 192.168.2.3:20000
rs.initiate({
  "_id": "config",
  "members": [
    { "_id": 0, "host": "192.168.2.3:20000" },
    { "_id": 1, "host": "192.168.2.4:20000" },
    { "_id": 2, "host": "192.168.2.5:20000" }
  ]
})
Connect to mongos and add the shards
/data/mongodb/bin/mongo 192.168.2.3:27017
sh.addShard("shard1/192.168.2.3:27018,192.168.2.4:27018,192.168.2.5:27018")
sh.addShard("shard2/192.168.2.3:27019,192.168.2.4:27019,192.168.2.5:27019")
sh.addShard("shard3/192.168.2.3:27020,192.168.2.4:27020,192.168.2.5:27020")
Check the status:
mongos> sh.status()
--- Sharding Status ---
  sharding version: {
    "_id" : 1,
    "minCompatibleVersion" : 5,
    "currentVersion" : 6,
    "clusterId" : ObjectId("5e0995613452582938deff4f")
  }
  shards:
    { "_id" : "shard1", "host" : "shard1/192.168.2.3:27018,192.168.2.4:27018,192.168.2.5:27018", "state" : 1 }
    { "_id" : "shard2", "host" : "shard2/192.168.2.3:27019,192.168.2.4:27019,192.168.2.5:27019", "state" : 1 }
    { "_id" : "shard3", "host" : "shard3/192.168.2.3:27020,192.168.2.4:27020,192.168.2.5:27020", "state" : 1 }
  active mongoses:
    "3.6.3" : 1
  autosplit:
    Currently enabled: yes
  balancer:
    Currently enabled: yes
    Currently running: no
    Failed balancer rounds in last 5 attempts: 5
    Last reported error: Could not find host matching read preference { mode: "primary" } for set shard1
    Time of Reported error: Mon Dec 30 2019 14:30:41 GMT+0800 (CST)
    Migration Results for the last 24 hours:
      No recent migrations
  databases:
    { "_id" : "config", "primary" : "config", "partitioned" : true }
      config.system.sessions
        shard key: { "_id" : 1 }
        unique: false
        balancing: true
        chunks:
          shard1 1
        { "_id" : { "$minKey" : 1 } } -->> { "_id" : { "$maxKey" : 1 } } on : shard1 Timestamp(1, 0)
    { "_id" : "dfcx_test", "primary" : "shard2", "partitioned" : false }
    { "_id" : "test", "primary" : "shard1", "partitioned" : false }
Initialize users
Connect to one of the mongos instances and add an administrator user.
use admin
db.createUser({user:'admin',pwd:'admin',roles:['clusterAdmin','dbAdminAnyDatabase','userAdminAnyDatabase','readWriteAnyDatabase']})
# View users (they are stored in the admin database)
db.system.users.find().pretty()

# Create an account scoped to a single database
use df_test
db.createUser({user:'df_test',pwd:'admin',roles:['readWrite']})
# Modify a user's roles
db.updateUser("usertest",{roles:[ {role:"read",db:"testDB"} ]})
Note: updateUser completely replaces the previous values. To add roles instead of replacing them, use db.grantRolesToUser().
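For example, to add a role on another database while keeping the roles the user already has (the user and database names here are only illustrative):

db.grantRolesToUser("usertest", [ { role: "readWrite", db: "testDB" } ])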
# Change the password
db.updateUser("usertest",{pwd:"changepass1"});
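The dedicated password helper does the same thing and makes the intent clearer:

db.changeUserPassword("usertest", "changepass1")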
Roles:
Database user roles: read, readWrite
Database administration roles: dbAdmin, dbOwner, userAdmin
Cluster administration roles: clusterAdmin, clusterManager, clusterMonitor, hostManager
Backup and restore roles: backup, restore
All-database roles: readAnyDatabase, readWriteAnyDatabase, userAdminAnyDatabase, dbAdminAnyDatabase
Superuser role: root
Internal role: __system
Role descriptions:
read: allows the user to read the specified database
readWrite: allows the user to read and write the specified database
dbAdmin: allows the user to perform administrative functions in the specified database, such as creating and dropping indexes, viewing statistics, or accessing system.profile
userAdmin: allows the user to write to the system.users collection, i.e. to create, delete and manage users in the specified database
clusterAdmin: only available in the admin database; grants administrative permissions over all sharding and replica set functions
readAnyDatabase: only available in the admin database; grants read access to all databases
readWriteAnyDatabase: only available in the admin database; grants read and write access to all databases
userAdminAnyDatabase: only available in the admin database; grants userAdmin permissions on all databases
dbAdminAnyDatabase: only available in the admin database; grants dbAdmin permissions on all databases
root: only available in the admin database; the superuser account with full permissions
dbOwner: readWrite + dbAdmin + userAdmin
Data operations
In this example, create the appuser user and enable sharding for the dfcx_test database.
use dfcx_test
db.createUser({user:'appuser',pwd:'AppUser@01',roles:[{role:'dbOwner',db:'dfcx_test'}]})
sh.enableSharding("dfcx_test")   # enable sharding for the database
Create the users collection and initialize sharding on it, using userid as the shard key.
use dfcx_test
db.createCollection("users")
db.users.ensureIndex({userid:1})                   # create the index on the shard key
sh.shardCollection("dfcx_test.users",{userid:1})   # shard the collection with a ranged key on userid
sh.shardCollection("dfcx_test.users", {userid:"hashed"}, false, { numInitialChunks: 4 })   # alternative: hashed shard key with pre-split chunks (run either this or the previous line, not both)
Insert test data
mongos> for(var i=1;i<1000000;i++) db.users.insert({userid:i,username:"HSJ"+i,city:"beijing"})
mongos> for(var i=1;i<1000000;i++) db.users.insert({userid:i,username:"HSJ"+i,city:"tianjing"})
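Once the inserts finish, the distribution of documents across the shards can be checked directly on the collection, without reading through the full sh.status() output:

mongos> db.users.getShardDistribution()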
Check the status again:
sh.status()
--- Sharding Status ---
  sharding version: {
    "_id" : 1,
    "minCompatibleVersion" : 5,
    "currentVersion" : 6,
    "clusterId" : ObjectId("5e0995613452582938deff4f")
  }
  shards:
    { "_id" : "shard1", "host" : "shard1/192.168.2.3:27018,192.168.2.4:27018,192.168.2.5:27018", "state" : 1 }
    { "_id" : "shard2", "host" : "shard2/192.168.2.3:27019,192.168.2.4:27019,192.168.2.5:27019", "state" : 1 }
    { "_id" : "shard3", "host" : "shard3/192.168.2.3:27020,192.168.2.4:27020,192.168.2.5:27020", "state" : 1 }
  active mongoses:
    "3.6.3" : 1
  autosplit:
    Currently enabled: yes
  balancer:
    Currently enabled: yes
    Currently running: yes
    Collections with active migrations:
      dfcx_test.users started at Thu Jan 02 2020 14:35:34 GMT+0800 (CST)
    Failed balancer rounds in last 5 attempts: 4
    Last reported error: Could not find host matching read preference { mode: "primary" } for set shard1
    Time of Reported error: Mon Dec 30 2019 14:32:41 GMT+0800 (CST)
    Migration Results for the last 24 hours:
      1 : Success
  databases:
    { "_id" : "config", "primary" : "config", "partitioned" : true }
      config.system.sessions
        shard key: { "_id" : 1 }
        unique: false
        balancing: true
        chunks:
          shard1 1
        { "_id" : { "$minKey" : 1 } } -->> { "_id" : { "$maxKey" : 1 } } on : shard1 Timestamp(1, 0)
    { "_id" : "dfcx_test", "primary" : "shard2", "partitioned" : true }
      dfcx_test.users
        shard key: { "userid" : 1 }
        unique: false
        balancing: true
        chunks:
          shard1 1
          shard2 3
        { "userid" : { "$minKey" : 1 } } -->> { "userid" : 2 } on : shard1 Timestamp(2, 0)
        { "userid" : 2 } -->> { "userid" : 500002 } on : shard2 Timestamp(2, 1)
        { "userid" : 500002 } -->> { "userid" : 750003 } on : shard2 Timestamp(1, 3)
        { "userid" : 750003 } -->> { "userid" : { "$maxKey" : 1 } } on : shard2 Timestamp(1, 4)
    { "_id" : "test", "primary" : "shard1", "partitioned" : false }
Enabling keyfile security authentication and user permissions for MongoDB in production
Create the replica set authentication key file
1. Create the key file. Note: all three nodes must use exactly the same keyfile. Generate it on one machine, copy it to the other two, and change the file permissions to 600.
openssl rand -base64 90 -out ./keyfile
cp keyfile /data/mongodb/conf/
chmod 600 /data/mongodb/conf/keyfile
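To push the same file to the other two nodes (root SSH access is assumed here; adjust the user and paths to your environment), and remember to set the 600 permissions on those hosts as well:

scp /data/mongodb/conf/keyfile root@192.168.2.4:/data/mongodb/conf/
scp /data/mongodb/conf/keyfile root@192.168.2.5:/data/mongodb/conf/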
2. Add the following to each configuration file.
For the config server and shards, add:
security:
  keyFile: /data/mongodb/conf/keyfile
  authorization: enabled
For mongos, add:
security:
  keyFile: /data/mongodb/conf/keyfile
3. Restart the cluster.
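After the restart every connection must authenticate. A quick check that the keyfile and the admin user created earlier both work (credentials as created above):

/data/mongodb/bin/mongo 192.168.2.3:27017 -u admin -p admin --authenticationDatabase admin

Then run sh.status() from that session to confirm the shards are still visible.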
Hidden members / delayed members
I have not deployed or tested this part myself; I came across a good write-up after finishing the setup and am keeping the link here for reference.
See: https://www.cnblogs.com/kevingrace/p/8178549.html
Reference:
https://www.jianshu.com/p/f021f1f3c60b