• MongoDB Sharding


    MongoDB Sharding

    Why use sharding when we already have replica sets?
    Sharding: one data set is stored across several shards (three in this post) that together act as a single logical whole, similar in spirit to how RAID 0 stripes data.
    1. A replica set alone makes poor use of resources (the secondaries mostly sit idle).
    2. All read and write pressure falls on the primary.
    Advantages:
    Better resource utilization
    Read/write load is spread across the shards
    Horizontal scale-out
    Disadvantages:
    Ideally it needs quite a few machines
    Configuration and operations become far more complex
    It must be planned carefully up front; once the cluster is built, changing the architecture is hard


    How sharding works
    1. Clients talk to mongos (the access router), which provides a single entry point. mongos has no data directory and stores no data; it only performs logical aggregation (similar to MyCat). A single mongos is a single point of failure, so the fix is to run several mongos instances.
    2. mongos forwards requests according to the config servers, which store the metadata describing which shard holds which data and how to reach the backend shard clusters. A single config server is also a single point of failure, so multiple config servers are needed as well.
    3. The backend MongoDB shards are cross-distributed replica sets; even if one machine goes down, a secondary takes over automatically, so the cluster stays highly available.
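
    From the client's point of view this is transparent: an application or shell connects only to mongos and uses it exactly like a single mongod, never talking to the shards directly. A minimal sketch (host and port taken from the cluster planning later in this post):

    # connect to any one of the mongos routers
    mongo --host 10.0.0.51 --port 60000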

    Sharding terminology

    1. Router service - mongos
    No replica set is required; every mongos is independent and they all use the same configuration
    mongos has no data directory and stores no data
    It is a routing/proxy service that requests data from the shards on behalf of the client

    2. Configuration servers - config
    Config servers must run as a replica set (mandatory since MongoDB 3.4, so it applies to the 4.x series used here)
    They record which shard each piece of data lives on
    They store the configuration of every shard
    They answer metadata queries from mongos

    3. Shard key - shard-key
    The rule that decides which shard a document is placed on
    The shard key is essentially an index (the shard key field must be indexed)

    4. Data nodes - shard
    The nodes that actually hold and process the data; every shard is one part of the sharded cluster
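
    For reference, once the cluster described below is running, most of this metadata can be inspected from any mongos. A minimal sketch (port 60000 matches the planning later in this post):

    mongo --port 60000
    sh.status()                                // shards, databases and chunk distribution, read from the config servers
    db.getSiblingDB("config").shards.find()    // the raw shard documents stored by the config servers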
    

    Types of shard keys
    1. Hashed shard key
    Sample data

    id  name   host  sex
    1   zhang  SH    boy
    2   ya     BJ    boy
    3   yaya   SZ    girl
    

    Using id as the shard key
    Index: id

    1   hash -> shard1
    2   hash -> shard2
    3   hash -> shard3
    

    Characteristics of a hashed shard key

    Sufficiently random, sufficiently even
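
    A minimal sketch of declaring a hashed shard key through mongos; the namespace test.users and the field id are placeholder names used only for illustration:

    mongo --port 60000
    use admin
    sh.enableSharding("test")                              // enable sharding for the database first
    sh.shardCollection("test.users", { id: "hashed" })     // documents are distributed by the hash of id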
    

    2. Range sharding
    Sample data

    id  name   host  sex
    1   zhang  SH    boy
    2   ya     BJ    boy
    3   yaya   SZ    girl
    

    If id is used as the shard key

    id
    1-100            shard1
    101-200          shard2
    201-300          shard3
    300 to +infinity shard
    

    Using host as the shard key

    SH shard1 
    BJ shard2 
    SZ shard3
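
    A minimal sketch of a ranged shard key, again with placeholder names (test.users, field id) and offered as an alternative to the hashed example above (a collection can only be sharded once). With a ranged key the data is split into contiguous chunks and the balancer moves whole chunks between shards:

    mongo --port 60000
    use admin
    sh.enableSharding("test")
    sh.shardCollection("test.users", { id: 1 })    // 1 = ascending ranged key
    sh.splitAt("test.users", { id: 100 })          // optional: manually split a chunk at id = 100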
    

    Cluster planning
    IP address and port planning

    db01		10.0.0.51	
    			Shard1_Master	  28100
    			Shard3_Slave	  28200
    			Shard2_Arbiter	  28300
    			Config Server	  40000
    			mongos Server	  60000
    		
    db02		10.0.0.52	
    			Shard2_Master	  28100
    			Shard1_Slave	  28200
    			Shard3_Arbiter	  28300
    			Config Server	  40000
    			mongos Server	  60000
    		
    db03		10.0.0.53	
    			Shard3_Master	  28100
    			Shard2_Slave	  28200
    			Shard1_Arbiter	  28300
    			Config Server	  40000
    			mongos Server	  60000
    

    Directory planning

    /opt/master/{conf,log,pid} 
    /opt/slave/{conf,log,pid} 
    /opt/arbiter/{conf,log,pid} 
    /opt/config/{conf,log,pid} 
    /opt/mongos/{conf,log,pid}
    
    

    Data directories

    /data/master 
    /data/slave 
    /data/arbiter 
    /data/config
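
    For reference, the whole layout above can be created in one pass with shell brace expansion (the walkthrough below creates the same directories step by step):

    mkdir -p /opt/{master,slave,arbiter,config,mongos}/{conf,log,pid}
    mkdir -p /data/{master,slave,arbiter,config}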
    
    


    Setup steps

    1. Build the shard replica sets
    2. Build the config server replica set
    3. Set up mongos
    4. Add the shards to the cluster
    5. Enable sharding on a database
    6. Shard a collection
    7. Insert test data
    8. Verify the sharding result
    9. Install and use a GUI tool
    
    

    1. Build the shard replica sets
    Environment preparation

    1) On db01
    pkill mongo
    rm -rf /opt/mongo_2*
    rm -rf /data/mongo_2*
    rsync -avz /opt/mongodb* 10.0.0.52:/opt/
    rsync -avz /opt/mongodb* 10.0.0.53:/opt/
    
    

    2) On db02 and db03

    echo 'export PATH=$PATH:/opt/mongodb/bin' >> /etc/profile
    source /etc/profile
    
    

    3) Create the directories (on db01, db02, and db03)

    mkdir -p /opt/master/{conf,log,pid}
    mkdir -p /opt/slave/{conf,log,pid}
    mkdir -p /opt/arbiter/{conf,log,pid}
    
    mkdir -p /data/master
    mkdir -p /data/slave
    mkdir -p /data/arbiter
    
    

    4) Replica set configuration on db01

    ### master config
    cat >/opt/master/conf/mongod.conf<<EOF   
    systemLog:
      destination: file 
      logAppend: true 
      path: /opt/master/log/mongodb.log
    
    storage:
      journal:
        enabled: true
      dbPath: /data/master/
      directoryPerDB: true
    
      wiredTiger:
        engineConfig:
          cacheSizeGB: 0.5
          directoryForIndexes: true
        collectionConfig:
          blockCompressor: zlib
        indexConfig:
          prefixCompression: true
    
    processManagement:
      fork: true
      pidFilePath: /opt/master/pid/mongodb.pid
      timeZoneInfo: /usr/share/zoneinfo
    
    net:
      port: 28100
      bindIp: 127.0.0.1,$(ifconfig eth0|awk 'NR==2{print $2}')
    
    replication:
      oplogSizeMB: 1024 
      replSetName: shard1
    
    sharding:
      clusterRole: shardsvr
    EOF
    
    ### slave config
    cat >/opt/slave/conf/mongod.conf<<EOF   
    systemLog:
      destination: file 
      logAppend: true 
      path: /opt/slave/log/mongodb.log
    
    storage:
      journal:
        enabled: true
      dbPath: /data/slave/
      directoryPerDB: true
    
      wiredTiger:
        engineConfig:
          cacheSizeGB: 0.5
          directoryForIndexes: true
        collectionConfig:
          blockCompressor: zlib
        indexConfig:
          prefixCompression: true
    
    processManagement:
      fork: true
      pidFilePath: /opt/slave/pid/mongodb.pid
      timeZoneInfo: /usr/share/zoneinfo
    
    net:
      port: 28200
      bindIp: 127.0.0.1,$(ifconfig eth0|awk 'NR==2{print $2}')
    
    replication:
      oplogSizeMB: 1024 
      replSetName: shard3
    
    sharding:
      clusterRole: shardsvr
    EOF
    
    ### arbiter config
    cat >/opt/arbiter/conf/mongod.conf<<EOF   
    systemLog:
      destination: file 
      logAppend: true 
      path: /opt/arbiter/log/mongodb.log
    
    storage:
      journal:
        enabled: true
      dbPath: /data/arbiter/
      directoryPerDB: true
    
      wiredTiger:
        engineConfig:
          cacheSizeGB: 0.5
          directoryForIndexes: true
        collectionConfig:
          blockCompressor: zlib
        indexConfig:
          prefixCompression: true
    
    processManagement:
      fork: true
      pidFilePath: /opt/arbiter/pid/mongodb.pid
      timeZoneInfo: /usr/share/zoneinfo
    
    net:
      port: 28300
      bindIp: 127.0.0.1,$(ifconfig eth0|awk 'NR==2{print $2}')
    
    replication:
      oplogSizeMB: 1024 
      replSetName: shard2
    
    sharding:
      clusterRole: shardsvr
    EOF
    
    cd /opt
    tree master slave arbiter
    
    

    Replica set configuration on db02

    ### master config
    cat >/opt/master/conf/mongod.conf<<EOF   
    systemLog:
      destination: file 
      logAppend: true 
      path: /opt/master/log/mongodb.log
    
    storage:
      journal:
        enabled: true
      dbPath: /data/master/
      directoryPerDB: true
    
      wiredTiger:
        engineConfig:
          cacheSizeGB: 0.5
          directoryForIndexes: true
        collectionConfig:
          blockCompressor: zlib
        indexConfig:
          prefixCompression: true
    
    processManagement:
      fork: true
      pidFilePath: /opt/master/pid/mongodb.pid
      timeZoneInfo: /usr/share/zoneinfo
    
    net:
      port: 28100
      bindIp: 127.0.0.1,$(ifconfig eth0|awk 'NR==2{print $2}')
    
    replication:
      oplogSizeMB: 1024 
      replSetName: shard2
    
    sharding:
      clusterRole: shardsvr
    EOF
    
    ### slave config
    cat >/opt/slave/conf/mongod.conf<<EOF   
    systemLog:
      destination: file 
      logAppend: true 
      path: /opt/slave/log/mongodb.log
    
    storage:
      journal:
        enabled: true
      dbPath: /data/slave/
      directoryPerDB: true
    
      wiredTiger:
        engineConfig:
          cacheSizeGB: 0.5
          directoryForIndexes: true
        collectionConfig:
          blockCompressor: zlib
        indexConfig:
          prefixCompression: true
    
    processManagement:
      fork: true
      pidFilePath: /opt/slave/pid/mongodb.pid
      timeZoneInfo: /usr/share/zoneinfo
    
    net:
      port: 28200
      bindIp: 127.0.0.1,$(ifconfig eth0|awk 'NR==2{print $2}')
    
    replication:
      oplogSizeMB: 1024 
      replSetName: shard1
    
    sharding:
      clusterRole: shardsvr
    EOF
    
    ### arbiter config
    cat >/opt/arbiter/conf/mongod.conf<<EOF   
    systemLog:
      destination: file 
      logAppend: true 
      path: /opt/arbiter/log/mongodb.log
    
    storage:
      journal:
        enabled: true
      dbPath: /data/arbiter/
      directoryPerDB: true
    
      wiredTiger:
        engineConfig:
          cacheSizeGB: 0.5
          directoryForIndexes: true
        collectionConfig:
          blockCompressor: zlib
        indexConfig:
          prefixCompression: true
    
    processManagement:
      fork: true
      pidFilePath: /opt/arbiter/pid/mongodb.pid
      timeZoneInfo: /usr/share/zoneinfo
    
    net:
      port: 28300
      bindIp: 127.0.0.1,$(ifconfig eth0|awk 'NR==2{print $2}')
    
    replication:
      oplogSizeMB: 1024 
      replSetName: shard3
    
    sharding:
      clusterRole: shardsvr
    EOF
    
    

    Replica set configuration on db03

    ### master config
    cat >/opt/master/conf/mongod.conf<<EOF   
    systemLog:
      destination: file 
      logAppend: true 
      path: /opt/master/log/mongodb.log
    
    storage:
      journal:
        enabled: true
      dbPath: /data/master/
      directoryPerDB: true
    
      wiredTiger:
        engineConfig:
          cacheSizeGB: 0.5
          directoryForIndexes: true
        collectionConfig:
          blockCompressor: zlib
        indexConfig:
          prefixCompression: true
    
    processManagement:
      fork: true
      pidFilePath: /opt/master/pid/mongodb.pid
      timeZoneInfo: /usr/share/zoneinfo
    
    net:
      port: 28100
      bindIp: 127.0.0.1,$(ifconfig eth0|awk 'NR==2{print $2}')
    
    replication:
      oplogSizeMB: 1024 
      replSetName: shard3
    
    sharding:
      clusterRole: shardsvr
    EOF
    
    ### slave config
    cat >/opt/slave/conf/mongod.conf<<EOF   
    systemLog:
      destination: file 
      logAppend: true 
      path: /opt/slave/log/mongodb.log
    
    storage:
      journal:
        enabled: true
      dbPath: /data/slave/
      directoryPerDB: true
    
      wiredTiger:
        engineConfig:
          cacheSizeGB: 0.5
          directoryForIndexes: true
        collectionConfig:
          blockCompressor: zlib
        indexConfig:
          prefixCompression: true
    
    processManagement:
      fork: true
      pidFilePath: /opt/slave/pid/mongodb.pid
      timeZoneInfo: /usr/share/zoneinfo
    
    net:
      port: 28200
      bindIp: 127.0.0.1,$(ifconfig eth0|awk 'NR==2{print $2}')
    
    replication:
      oplogSizeMB: 1024 
      replSetName: shard2
    
    sharding:
      clusterRole: shardsvr
    EOF
    
    ### arbiter config
    cat >/opt/arbiter/conf/mongod.conf<<EOF   
    systemLog:
      destination: file 
      logAppend: true 
      path: /opt/arbiter/log/mongodb.log
    
    storage:
      journal:
        enabled: true
      dbPath: /data/arbiter/
      directoryPerDB: true
    
      wiredTiger:
        engineConfig:
          cacheSizeGB: 0.5
          directoryForIndexes: true
        collectionConfig:
          blockCompressor: zlib
        indexConfig:
          prefixCompression: true
    
    processManagement:
      fork: true
      pidFilePath: /opt/arbiter/pid/mongodb.pid
      timeZoneInfo: /usr/share/zoneinfo
    
    net:
      port: 28300
      bindIp: 127.0.0.1,$(ifconfig eth0|awk 'NR==2{print $2}')
    
    replication:
      oplogSizeMB: 1024 
      replSetName: shard1
    
    sharding:
      clusterRole: shardsvr
    EOF
    
    

    Kernel tuning on all three machines (disable transparent huge pages, as recommended for MongoDB)

    echo "never"  > /sys/kernel/mm/transparent_hugepage/enabled
    echo "never"  > /sys/kernel/mm/transparent_hugepage/defrag
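
    These echo commands do not survive a reboot. A hedged sketch of one common way to make the setting persistent, assuming a CentOS 7 style system where /etc/rc.local is executed at boot:

    cat >> /etc/rc.local <<'EOF'
    echo never > /sys/kernel/mm/transparent_hugepage/enabled
    echo never > /sys/kernel/mm/transparent_hugepage/defrag
    EOF
    chmod +x /etc/rc.local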
    
    

    Start the mongod instances on all three machines

    mongod -f /opt/master/conf/mongod.conf 
    mongod -f /opt/slave/conf/mongod.conf 
    mongod -f /opt/arbiter/conf/mongod.conf
    ps -ef|grep mongod
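
    A quick check that all three instances are listening on the expected ports (a sketch; ss comes from iproute2):

    ss -lntp | grep -E '28100|28200|28300'    # expect one mongod listener per port on every machine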
    
    

    Initialize the replica sets

    ## initialize the shard1 replica set from the master node on db01
    mongo --port 28100
    rs.initiate()
    wait a moment until the prompt changes to PRIMARY
    rs.add("10.0.0.52:28200")
    rs.addArb("10.0.0.53:28300")
    rs.status()
    
    
    ### initialize the shard2 replica set from the master node on db02
    mongo --port 28100
    rs.initiate()
    wait a moment until the prompt changes to PRIMARY
    rs.add("10.0.0.53:28200")
    rs.addArb("10.0.0.51:28300")
    rs.status()
    
    ## initialize the shard3 replica set from the master node on db03
    mongo --port 28100
    rs.initiate()
    wait a moment until the prompt changes to PRIMARY
    rs.add("10.0.0.51:28200")
    rs.addArb("10.0.0.52:28300")
    rs.status()
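
    As a sanity check, the member states of each shard replica set can be printed non-interactively (a sketch; run on each of the three machines against port 28100):

    mongo --port 28100 --quiet --eval 'rs.status().members.forEach(function(m){ print(m.name, m.stateStr) })'
    # expected: one PRIMARY, one SECONDARY and one ARBITER per shard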
    
    

    2. Build the config server replica set
    Run on all three machines

    ## 1. create the directories - on all three machines
    mkdir -p /opt/config/{conf,log,pid}
    mkdir -p /data/config/
    
    
    ## 2. create the config file - on all three machines
    cat >/opt/config/conf/mongod.conf<<EOF   
    systemLog:
      destination: file 
      logAppend: true 
      path: /opt/config/log/mongodb.log
    
    storage:
      journal:
        enabled: true
      dbPath: /data/config/
      directoryPerDB: true
    
      wiredTiger:
        engineConfig:
          cacheSizeGB: 0.5
          directoryForIndexes: true
        collectionConfig:
          blockCompressor: zlib
        indexConfig:
          prefixCompression: true
    
    processManagement:
      fork: true
      pidFilePath: /opt/config/pid/mongod.pid
      timeZoneInfo: /usr/share/zoneinfo
    
    net:
      port: 40000
      bindIp: 127.0.0.1,$(ifconfig eth0|awk 'NR==2{print $2}')
    
    replication:
      replSetName: configset
    
    sharding:
      clusterRole: configsvr
    EOF
    
    
    ## 3. start - on all three machines
    mongod -f /opt/config/conf/mongod.conf
    
    
    ## 4. initialize the replica set - on db01 only
    mongo --port 40000
    rs.initiate()
    wait a moment until the prompt changes to PRIMARY
    rs.add("10.0.0.52:40000")
    rs.add("10.0.0.53:40000")
    rs.status()
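
    A hedged verification sketch for the config replica set (run from db01):

    mongo --port 40000 --quiet --eval 'print(rs.isMaster().setName); rs.status().members.forEach(function(m){ print(m.name, m.stateStr) })'
    # expected: set name configset, with one PRIMARY and two SECONDARY members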
    
    

    3. Set up mongos
    Run on all three machines

    ## 1. create the directories - on all three machines
    mkdir -p /opt/mongos/{conf,log,pid}
    
    ## 2. create the config file - on all three machines
    cat >/opt/mongos/conf/mongos.conf<<EOF   
    systemLog:
      destination: file 
      logAppend: true 
      path: /opt/mongos/log/mongos.log
    
    processManagement:
      fork: true
      pidFilePath: /opt/mongos/pid/mongos.pid
      timeZoneInfo: /usr/share/zoneinfo
    
    net:
      port: 60000
      bindIp: 127.0.0.1,$(ifconfig eth0|awk 'NR==2{print $2}')
    
    sharding:
      configDB: configset/10.0.0.51:40000,10.0.0.52:40000,10.0.0.53:40000
    EOF
    
    ## 3. start - on all three machines
    mongos -f /opt/mongos/conf/mongos.conf
    
    ## 4. log in to mongos - on db01
    mongo --port 60000
    
    

    4. Add the shards to the cluster

    ## 1. log in to mongos and add the shard members - on db01 (run the commands one at a time)
    mongo --port 60000
    use admin
    db.runCommand({addShard:'shard1/10.0.0.51:28100,10.0.0.52:28200,10.0.0.53:28300'})
    db.runCommand({addShard:'shard2/10.0.0.52:28100,10.0.0.53:28200,10.0.0.51:28300'})
    db.runCommand({addShard:'shard3/10.0.0.53:28100,10.0.0.51:28200,10.0.0.52:28300'})
    
    ## 2. list the shard members
    db.runCommand({ listshards : 1 })
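
    ## 3. (optional) sh.status() prints a more readable overview of the same information; after the three addShard commands it should list shard1, shard2 and shard3 with their replica set hosts
    sh.status()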
    
    

    5. Enable sharding on a database
    (run on db01 only)
    1) Enable sharding for the database

    mongo --port 60000
    use admin
    db.runCommand( { enablesharding : "oldboy" } )
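
    // a hedged check: on this 4.x series the database should now be marked as partitioned in the config metadata
    db.getSiblingDB("config").databases.find({ _id: "oldboy" })
    // expected (roughly): { "_id" : "oldboy", ..., "partitioned" : true }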
    
    

    2) Create the shard key index and shard the collection

    use oldboy
    db.hash.ensureIndex( { id: "hashed" } )    // if this returns an error, run it a few more times until it returns "ok" : 1
    use admin
    sh.shardCollection( "oldboy.hash",{ id: "hashed" } )
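
    For contrast with the range sharding described earlier, a hypothetical second collection (oldboy.range is only an illustrative name) would be sharded with an ascending key instead of a hashed one:

    use oldboy
    db.range.ensureIndex( { id: 1 } )
    use admin
    sh.shardCollection( "oldboy.range", { id: 1 } )    // ranged key: each chunk holds a contiguous id range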
    
    

    3) Verify the sharding of the collection
    Test (run on every node)

    mongo --port 28100
    use oldboy
    db.hash.count()
    
    
    [root@db01 ~]# mongo  --port 60000
    mongos> use oldboy
    switched to db oldboy
    mongos> for(i=1;i<10000;i++){db.hash.insert({"id":i,"name":"BJ","age":18});}
    WriteResult({ "nInserted" : 1 })
    
    Likewise on the other nodes (db02 and db03):
    [root@db01 ~]# mongo --port 28100
    shard1:PRIMARY> use oldboy
    switched to db oldboy
    shard1:PRIMARY> db.hash.count()
    0
    
    
    After the loop has finished,
    run on db01:
    mongos> use oldboy
    switched to db oldboy
    mongos> db.hash.count()
    9999
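
    To see how those 9999 documents were distributed across the three shards, a quick sketch using the standard shell helper from the mongos session:

    mongos> use oldboy
    mongos> db.hash.getShardDistribution()
    // prints, for each shard, the data size, document count and estimated docs per chunk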
    
    

    Commands

    mongod -f /opt/config/conf/mongod.conf
    mongod -f /opt/arbiter/conf/mongod.conf
    mongod -f /opt/master/conf/mongod.conf
    mongod -f /opt/slave/conf/mongod.conf
    mongos -f /opt/mongos/conf/mongos.conf
    
    

    Key points:
    Know the setup steps
    Know what config, mongos, master, arbiter and slave each do
    There is no need to dig into the individual commands

    Master the concepts
    Be familiar with the steps
    Know the overall build flow
    The configuration files do not need to be memorized
    The operating commands do not need to be memorized

  • Original article: https://www.cnblogs.com/liushiya/p/13532413.html