• 005_Elasticsearch data import and export


    1. Full backup and import

    Installation:
    # npm is required first
    sudo yum install npm

    # then either install elasticdump globally:
    npm install elasticdump -g

    # or install from source:
    git clone https://github.com/taskrabbit/elasticsearch-dump.git
    cd elasticsearch-dump
    npm install

    (1) Create the backup directory
    mkdir /data/es_data_backup
    (2) Migrate all indices from the source machine to the target machine
    # Export the mapping structure and the data of the original indices
    elasticdump --input=http://10.200.57.118:9200/ --output=/data/es_data_backup/cmdb_dump-mapping.json --all=true --type=mapping
    elasticdump --input=http://10.200.57.118:9200/ --output=/data/es_data_backup/cmdb_dump.json --all=true --type=data
    
    # Import the mapping structure and the data into the new cluster node
    elasticdump --input=/data/es_data_backup/cmdb_dump-mapping.json --output=http://10.200.57.118:9200/ --bulk=true
    elasticdump --input=/data/es_data_backup/cmdb_dump.json --output=http://10.200.57.118:9200/ --bulk=true
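The four commands above can be wrapped in a small script. The sketch below defaults to a dry run that only prints the commands it would execute; SRC_ES, DEST_ES and BACKUP_DIR are placeholder values taken from this note, not a definitive setup.

```shell
#!/bin/sh
# Hedged sketch of the full-cluster backup/restore flow above.
# SRC_ES, DEST_ES and BACKUP_DIR are placeholders -- adjust for your clusters.
SRC_ES="http://10.200.57.118:9200/"
DEST_ES="http://10.200.57.118:9200/"
BACKUP_DIR="/data/es_data_backup"
DRY_RUN=${DRY_RUN:-1}   # default to printing commands; set DRY_RUN=0 to execute

run() {
  if [ "$DRY_RUN" = "1" ]; then
    echo "would run: $*"
  else
    "$@"
  fi
}

# Export mappings first, then data (same order as above)
run elasticdump --input="$SRC_ES" --output="$BACKUP_DIR/cmdb_dump-mapping.json" --all=true --type=mapping
run elasticdump --input="$SRC_ES" --output="$BACKUP_DIR/cmdb_dump.json" --all=true --type=data

# Import mappings, then data, into the target cluster
run elasticdump --input="$BACKUP_DIR/cmdb_dump-mapping.json" --output="$DEST_ES" --bulk=true
run elasticdump --input="$BACKUP_DIR/cmdb_dump.json" --output="$DEST_ES" --bulk=true
```

Importing the mapping before the data matters: otherwise the target cluster creates the indices with auto-detected mappings.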
    

    2. Backing up and importing a specific index

    curl -XGET '192.168.11.10:9200/_cat/indices?v&pretty'   # list the available indices
    health status index pri rep docs.count docs.deleted store.size pri.store.size
    green open jyall-test 5 1 18908740 2077368 25gb 12.5gb
    
    
    # Backup index data to a file:
    elasticdump --input=http://10.200.57.118:9200/ele_nginx_clusters --output=/data/es_data_backup/ele_nginx_clusters_mapping.json --type=mapping
    
    elasticdump --input=http://10.200.57.118:9200/ele_nginx_clusters --output=/data/es_data_backup/ele_nginx_clusters.json --type=data
    # Alternatively, pipe through gzip -- in my testing this saved more than 10x the disk space.
    # When importing, first gunzip ele_nginx_clusters.json.gz, then import as usual.
    # Back up an index to a gzip file using stdout:
    elasticdump --input=http://10.200.57.118:9200/ele_nginx_clusters --output=$ | gzip > /data/es_data_backup/ele_nginx_clusters.json.gz
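Before relying on a gzip'd dump, it is worth checking that it round-trips intact. A minimal sketch, using a tiny stand-in file in place of a real elasticdump output file:

```shell
# Sketch: verify a gzip'd dump decompresses back to the original bytes.
# The stand-in document below is illustrative; use your real dump path in practice.
SRC=$(mktemp)
printf '{"_index":"ele_nginx_clusters","_source":{"status":200}}\n' > "$SRC"

gzip -c "$SRC" > "$SRC.gz"        # what the "--output=$ | gzip" pipeline produces
gunzip -c "$SRC.gz" > "$SRC.out"  # the step needed before importing

cmp -s "$SRC" "$SRC.out" && echo "round-trip OK"
```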
    
    
    Import:
    elasticdump --input=/data/es_data_backup/ele_nginx_clusters_mapping.json --output=http://10.200.57.118:9200/ --bulk=true
    elasticdump --input=/data/es_data_backup/ele_nginx_clusters.json --output=http://10.200.57.118:9200/ --bulk=true
    

    3. Errors encountered during export

    (1) The export fails with:
    Thu, 26 Apr 2018 09:14:49 GMT | Error Emitted => read ECONNRESET
    Thu, 26 Apr 2018 09:14:49 GMT | Total Writes: 19800
    Thu, 26 Apr 2018 09:14:49 GMT | dump ended with error (get phase) => Error: read ECONNRESET
    (2) Cause and fix:
    This error is typically caused by elasticdump opening too many sockets to the Elasticsearch cluster. Use the --maxSockets option to limit the number of sockets opened.
    elasticdump --input http://192.168.2.222:9200/index1 --output http://192.168.2.222:9200/index2 --type=data --maxSockets=5
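Since ECONNRESET is usually transient, simply retrying the dump can also help. A hedged sketch of a retry wrapper (the flag values above are illustrative, not tuned):

```shell
# Sketch: retry a command a fixed number of times, pausing between attempts.
retry() {
  max=$1; shift
  n=0
  until "$@"; do
    n=$((n + 1))
    [ "$n" -ge "$max" ] && return 1
    echo "attempt $n failed, retrying..." >&2
    sleep 1
  done
}

# Usage (requires a live cluster, so commented out here):
# retry 3 elasticdump --input http://192.168.2.222:9200/index1 \
#   --output http://192.168.2.222:9200/index2 --type=data --maxSockets=5
```

Note that a retried full dump restarts from the beginning; for very large indices, lowering --maxSockets is the more reliable fix.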
    
    Reference:
    https://stackoverflow.com/questions/33248267/dump-ended-with-error-set-phase-error-read-econnreset
    https://github.com/nodejs/node/issues/10563
    

    Reference:

    https://www.zhangluya.com/?p=543

    https://github.com/taskrabbit/elasticsearch-dump

  • Original article: https://www.cnblogs.com/itcomputer/p/8945322.html