• MyCat Study Notes, Part 11. Data Sharding: Querying Sharded Data (select * from table_name limit 100000,100)


    1 Environment

    Three MySQL 5.6 servers simulated in VMs:

       VM1 192.168.31.187:3307

       VM2 192.168.31.212:3307

       VM3 192.168.31.150:3307

    MyCat 1.5 is deployed on the host machine:

      MYCAT 192.168.31.207:8806 (SQL port) / 9066 (management port)

    2 Use Case

    2.0 MyCat Configuration

    schema.xml

    <schema name="TESTDB" checkSQLschema="false" sqlMaxLimit="100">
      <table name="t_demo_travel_record" dataNode="dn1,dn2,dn3" rule="auto-sharding-long" />
      <table name="t_demo_travel_record_child" dataNode="dn1,dn2,dn3" rule="auto-sharding-long" />
    </schema>

    <dataNode name="dn1" dataHost="vm1" database="test" />
    <dataNode name="dn2" dataHost="vm2" database="test" />
    <dataNode name="dn3" dataHost="vm3" database="test" />

    <dataHost name="vm1" maxCon="1000" minCon="10" balance="0"
      writeType="0" dbType="mysql" dbDriver="native" switchType="1" slaveThreshold="100">
      <heartbeat>select user()</heartbeat>
      <writeHost host="vm1M1" url="192.168.31.187:3307" user="root" password="root123"></writeHost>
    </dataHost>

    <dataHost name="vm2" maxCon="1000" minCon="10" balance="0"
      writeType="0" dbType="mysql" dbDriver="native" switchType="1" slaveThreshold="100">
      <heartbeat>select user()</heartbeat>
      <writeHost host="vm2M1" url="192.168.31.212:3307" user="root" password="root123"></writeHost>

    </dataHost>

    <dataHost name="vm3" maxCon="1000" minCon="10" balance="0"
      writeType="0" dbType="mysql" dbDriver="native" switchType="1" slaveThreshold="100">
      <heartbeat>select user()</heartbeat>
      <writeHost host="vm3M1" url="192.168.31.150:3307" user="root" password="root123"></writeHost>

    </dataHost>

    rule.xml

    <tableRule name="auto-sharding-long">
      <rule>
        <columns>id</columns>
        <algorithm>rang-long</algorithm>
      </rule>
    </tableRule>

    <function name="rang-long"
      class="org.opencloudb.route.function.AutoPartitionByLong">
      <property name="mapFile">autopartition-long.txt</property>
      <property name="defaultNode">0</property>
    </function>

    autopartition-long.txt

    # range start-end ,data node index
    # K=1000,M=10000.
    0-500M=0
    500M-1000M=1
    1000M-1500M=2
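
    In autopartition-long.txt, K=1000 and M=10000, so 500M resolves to 5,000,000. The three ranges therefore map ids of roughly 0 to 5,000,000 to node index 0 (dn1), 5,000,000 to 10,000,000 to dn2, and 10,000,000 to 15,000,000 to dn3, which matches the per-node min/max values shown in section 2.1 below; ids outside every range fall back to defaultNode 0.

    The post does not show the table definition. A minimal sketch matching the (id, context) columns that appear in the query output later (the exact column types are an assumption, they are not given in the post); it has to exist in the test database of all three physical nodes:

    -- Hypothetical DDL; id is the sharding column named in rule.xml,
    -- context holds values such as 'context_13341025' seen in the output below.
    CREATE TABLE t_demo_travel_record (
      id      BIGINT      NOT NULL PRIMARY KEY,
      context VARCHAR(64) NOT NULL
    );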

    2.1 Simulate over 2 million rows stored across the three databases and check the query response.

    Data on the physical nodes:

    VM1 192.168.31.187:3307 holds 748,002 rows

    mysql> SELECT min(id),max(id),count(1) FROM test.t_demo_travel_record;

    +---------+---------+----------+

    | min(id) | max(id) | count(1) |

    +---------+---------+----------+

    |   10000 | 5000000 |   748002 |

    +---------+---------+----------+

    1 row in set (0.16 sec)


    VM2 192.168.31.212:3307 holds 749,500 rows

    mysql> SELECT min(id),max(id),count(1) FROM test.t_demo_travel_record;

    +---------+----------+----------+

    | min(id) | max(id)  | count(1) |

    +---------+----------+----------+

    | 5000001 | 10000000 |   749500 |

    +---------+----------+----------+

    1 row in set (0.17 sec)


     VM3 192.168.31.150:3307 holds one row fewer than VM2 (749,499 rows)

    mysql> SELECT min(id),max(id),count(1) FROM test.t_demo_travel_record;

    +----------+----------+----------+

    | min(id)  | max(id)  | count(1) |

    +----------+----------+----------+

    | 10000001 | 14991498 |   749499 |

    +----------+----------+----------+

    1 row in set (0.17 sec)

     MYCAT 192.168.31.207:8806 reports 2,247,001 rows in total; the minimum id comes from the VM1 node and the maximum id from the VM3 node

    mysql> SELECT min(id),max(id),count(1) FROM t_demo_travel_record;

    +-------+----------+---------+

    | MIN0  | MAX1     | COUNT2  |

    +-------+----------+---------+

    | 10000 | 14991498 | 2247001 |

    +-------+----------+---------+

    1 row in set (0.31 sec)
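
    Note how MyCat renames the merged aggregate columns to MIN0 / MAX1 / COUNT2. Conceptually (this is an illustration, not MyCat's actual code) each data node returns its own MIN/MAX/COUNT, exactly the three per-node results above, and the merge layer takes the minimum of the mins, the maximum of the maxes, and the sum of the counts. The merge step can be reproduced by hand on any plain MySQL server:

    -- Manual re-run of the merge using the literal per-node values reported above
    SELECT MIN(min_id) AS min_id, MAX(max_id) AS max_id, SUM(cnt) AS cnt
    FROM (
          SELECT    10000 AS min_id,  5000000 AS max_id, 748002 AS cnt  -- dn1 (VM1)
          UNION ALL
          SELECT  5000001,           10000000,           749500         -- dn2 (VM2)
          UNION ALL
          SELECT 10000001,           14991498,           749499         -- dn3 (VM3)
    ) AS per_node;
    -- returns 10000 | 14991498 | 2247001, matching the merged MyCat result above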

    Run a paged query through MyCat with no ORDER BY, fetching 100 rows starting from the 1,000,000th record: it takes 1.5 seconds, which is slow.

    mysql> select * from t_demo_travel_record where id between 4999980 and 14999980 limit 1000000,100;

    +----------+------------------+

    | id       | context          |

    +----------+------------------+

    | 13341025 | context_13341025 |

    | 13341026 | context_13341026 |

    | 13341027 | context_13341027 |

    | 13341028 | context_13341028 |

    | 13341029 | context_13341029 |

    ....

    |  8320686 | context_8320686  |

    |  8320687 | context_8320687  |

    |  8320688 | context_8320688  |

    |  8320689 | context_8320689  |

    +----------+------------------+

    100 rows in set (1.50 sec)


    Looking at the log, MyCat rewrites limit 1000000,100 into limit 0, 1000100 before sending the query to each physical node; no wonder it is slow.

    02/02 23:41:47.958  DEBUG [$_NIOREACTOR-2-RW] (ServerQueryHandler.java:56) -ServerConnection [id=12, schema=TESTDB, host=192.168.31.207, user=test,txIsolation=3, autocommit=true, schema=TESTDB]select * from t_demo_travel_record where id between 4999980 and 14999980 limit 1000000,100

    02/02 23:41:47.958  DEBUG [$_NIOREACTOR-2-RW] (EnchachePool.java:70) -SQLRouteCache hit cache ,key:TESTDBselect * from t_demo_travel_record where id between 4999980 and 14999980 limit 1000000,100

    02/02 23:41:47.958  DEBUG [$_NIOREACTOR-2-RW] (NonBlockingSession.java:113) -ServerConnection [id=12, schema=TESTDB, host=192.168.31.207, user=test,txIsolation=3, autocommit=true, schema=TESTDB]select * from t_demo_travel_record where id between 4999980 and 14999980 limit 1000000,100, route={

       1 -> dn1{SELECT *

    FROM t_demo_travel_record

    WHERE id BETWEEN 4999980 AND 14999980

    LIMIT 0, 1000100}

       2 -> dn2{SELECT *

    FROM t_demo_travel_record

    WHERE id BETWEEN 4999980 AND 14999980

    LIMIT 0, 1000100}

       3 -> dn3{SELECT *

    FROM t_demo_travel_record

    WHERE id BETWEEN 4999980 AND 14999980

    LIMIT 0, 1000100}

    } rrs 

    02/02 23:41:47.958  DEBUG [$_NIOREACTOR-2-RW] (MultiNodeQueryHandler.java:82) -execute mutinode query select * from t_demo_travel_record where id between 4999980 and 14999980 limit 1000000,100

    02/02 23:41:47.958  DEBUG [$_NIOREACTOR-2-RW] (MultiNodeQueryHandler.java:97) -has data merge logic 


    02/02 23:41:47.961  DEBUG [$_NIOREACTOR-3-RW] (MultiNodeQueryHandler.java:241) -on row end reseponse MySQLConnection [id=19, lastTime=1454427707947, user=root, schema=test, old shema=test, borrowed=true, fromSlaveDB=false, threadId=36, charset=utf8, txIsolation=3, autocommit=true, attachment=dn1{SELECT *

    FROM t_demo_travel_record

    WHERE id BETWEEN 4999980 AND 14999980

    LIMIT 0, 1000100}, respHandler=org.opencloudb.mysql.nio.handler.MultiNodeQueryHandler@24e67429, host=192.168.31.187, port=3307, statusSync=null, writeQueue=0, modifiedSQLExecuted=false]


    This is where the data merge happens. Fortunately the auto-sharding-long rule is in use; with a sharding-by-mod rule the rows would be scattered non-contiguously across the data nodes, and merging them would be even slower.

    For paged queries over large sharded data sets, MyCat can therefore run into both efficiency and memory-usage problems (a common workaround is sketched after the log excerpt below).

    02/02 23:41:49.392  DEBUG [BusinessExecutor6] (DataMergeService.java:296) -prepare mpp merge result for select * from t_demo_travel_record where id between 4999980 and 14999980 limit 1000000,100

    02/02 23:41:49.393  DEBUG [BusinessExecutor6] (MultiNodeQueryHandler.java:324) -last packet id:105
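
    A common workaround for deep pagination on sharded tables is keyset ("seek") pagination: sort by the sharding column and have the application remember the largest id of the previous page, so the next page is addressed by a WHERE condition instead of a huge OFFSET. This is a general technique, not something MyCat applies automatically, and the boundary value 8320589 below is only a hypothetical example:

    -- Keyset pagination sketch: each node only needs to scan forward from the
    -- boundary id and return at most 100 rows, instead of offset + 100 rows.
    SELECT id, context
    FROM t_demo_travel_record
    WHERE id > 8320589            -- hypothetical: largest id returned by the previous page
      AND id <= 14999980
    ORDER BY id
    LIMIT 100;

    The explicit ORDER BY also makes paging across shards deterministic; the unsorted result earlier jumps from ids around 13,341,029 straight down to 8,320,686 because without a sort the rows from different nodes come back in no particular order.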


  • Original post: https://www.cnblogs.com/kaye0110/p/5178779.html