Hadoop HA and HBase HA



    Make sure the clocks on all servers are kept in sync; HBase in particular refuses to register RegionServers whose clock skew is too large.
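    A minimal way to do this, assuming every node has access to an NTP server (ntp.aliyun.com below is only an example; substitute one reachable from your network):

    # run on every node
    ntpdate ntp.aliyun.com
    date    # spot-check that all hosts now agree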

    I. Hadoop HA

    HDFS HA

    All of the Hadoop configuration files live under /root/hd/hadoop-2.8.4/etc/hadoop.

    1. core-site.xml
    <configuration>
      <!-- Clients address the cluster by its logical name, not a single NameNode host -->
      <property>
        <name>fs.defaultFS</name>
        <value>hdfs://mycluster</value>
      </property>
      <!-- ZooKeeper quorum used for automatic NameNode failover -->
      <property>
        <name>ha.zookeeper.quorum</name>
        <value>hsiehchou123:2181,hsiehchou124:2181</value>
      </property>
      <!-- Base directory for Hadoop's working files -->
      <property>
        <name>hadoop.tmp.dir</name>
        <value>/root/hd/hadoop-2.8.4/tmp</value>
      </property>
    </configuration>
    2. hdfs-site.xml
    <configuration>
      <!-- Logical name of the nameservice; must match fs.defaultFS -->
      <property>
        <name>dfs.nameservices</name>
        <value>mycluster</value>
      </property>
      <!-- IDs of the two NameNodes in this nameservice -->
      <property>
        <name>dfs.ha.namenodes.mycluster</name>
        <value>nn1,nn2</value>
      </property>
      <!-- RPC endpoint of each NameNode -->
      <property>
        <name>dfs.namenode.rpc-address.mycluster.nn1</name>
        <value>hsiehchou121:8020</value>
      </property>
      <property>
        <name>dfs.namenode.rpc-address.mycluster.nn2</name>
        <value>hsiehchou122:8020</value>
      </property>
      <!-- Web UI endpoint of each NameNode -->
      <property>
        <name>dfs.namenode.http-address.mycluster.nn1</name>
        <value>hsiehchou121:50070</value>
      </property>
      <property>
        <name>dfs.namenode.http-address.mycluster.nn2</name>
        <value>hsiehchou122:50070</value>
      </property>
      <!-- JournalNode quorum that stores the shared edit log -->
      <property>
        <name>dfs.namenode.shared.edits.dir</name>
        <value>qjournal://hsiehchou123:8485;hsiehchou124:8485/mycluster</value>
      </property>
      <!-- Class HDFS clients use to locate the currently active NameNode -->
      <property>
        <name>dfs.client.failover.proxy.provider.mycluster</name>
        <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
      </property>
      <!-- Fence the old active NameNode over SSH during failover -->
      <property>
        <name>dfs.ha.fencing.methods</name>
        <value>sshfence</value>
      </property>
      <property>
        <name>dfs.ha.fencing.ssh.private-key-files</name>
        <value>/root/.ssh/id_dsa</value>
      </property>
      <!-- Let the ZKFC daemons fail over automatically -->
      <property>
        <name>dfs.ha.automatic-failover.enabled</name>
        <value>true</value>
      </property>
    </configuration>
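    sshfence relies on passwordless SSH between the two NameNodes using the key configured above; a sketch of setting that up, assuming your OpenSSH build still accepts DSA keys (run on both hsiehchou121 and hsiehchou122):

    ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
    ssh-copy-id -i ~/.ssh/id_dsa.pub root@hsiehchou121
    ssh-copy-id -i ~/.ssh/id_dsa.pub root@hsiehchou122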

    NameNodes are normally deployed as a pair, and the qjournal (JournalNode) quorum normally has 3 members.
    I only had four machines, so I assigned just two JournalNodes; note that a two-node quorum cannot tolerate losing either JournalNode, since edit-log writes need a majority.
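    On a fresh cluster the HA pieces have to be initialized in order before the first full start; a sketch using the standard Hadoop 2.x scripts (each comment names the host to run on):

    # 1. start the JournalNodes (hsiehchou123 and hsiehchou124)
    hadoop-daemon.sh start journalnode
    # 2. format and start the first NameNode (hsiehchou121)
    hdfs namenode -format
    hadoop-daemon.sh start namenode
    # 3. copy its metadata onto the standby NameNode (hsiehchou122)
    hdfs namenode -bootstrapStandby
    # 4. initialize the failover znode in ZooKeeper (once, on either NameNode)
    hdfs zkfc -formatZK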

    3. yarn-site.xml
    <configuration>
    <!-- Site specific YARN configuration properties -->
      <!-- Recover running applications after a ResourceManager restart -->
      <property>
        <name>yarn.resourcemanager.recovery.enabled</name>
        <value>true</value>
      </property>
      <!-- Persist ResourceManager state in ZooKeeper -->
      <property>
        <name>yarn.resourcemanager.store.class</name>
        <value>org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore</value>
      </property>
      <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
      </property>
      <!-- ResourceManager HA: two RMs identified as rm1 and rm2 -->
      <property>
        <name>yarn.resourcemanager.ha.enabled</name>
        <value>true</value>
      </property>
      <property>
        <name>yarn.resourcemanager.cluster-id</name>
        <value>yarncluster</value>
      </property>
      <property>
        <name>yarn.resourcemanager.ha.rm-ids</name>
        <value>rm1,rm2</value>
      </property>
      <property>
        <name>yarn.resourcemanager.hostname.rm1</name>
        <value>hsiehchou121</value>
      </property>
      <property>
        <name>yarn.resourcemanager.hostname.rm2</name>
        <value>hsiehchou122</value>
      </property>
      <!-- ZooKeeper ensemble for RM HA (client port defaults to 2181 when omitted) -->
      <property>
        <name>yarn.resourcemanager.zk-address</name>
        <value>hsiehchou123,hsiehchou124</value>
      </property>
      <!-- Memory: containers between 4096 MB and 32768 MB; 32768 MB offered per NodeManager -->
      <property>
        <name>yarn.scheduler.maximum-allocation-mb</name>
        <value>32768</value>
      </property>
      <property>
        <name>yarn.nodemanager.resource.memory-mb</name>
        <value>32768</value>
      </property>
      <property>
        <name>yarn.scheduler.minimum-allocation-mb</name>
        <value>4096</value>
      </property>
      <!-- 24 virtual cores offered by each NodeManager -->
      <property>
        <name>yarn.nodemanager.resource.cpu-vcores</name>
        <value>24</value>
      </property>
      <!-- Aggregate container logs into HDFS under /tmp/yarn-logs -->
      <property>
        <name>yarn.log-aggregation-enable</name>
        <value>true</value>
      </property>
      <property>
        <name>yarn.nodemanager.remote-app-log-dir</name>
        <value>/tmp/yarn-logs</value>
      </property>
    </configuration>
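    With these values each NodeManager advertises 32768 MB and 24 vcores, and container requests are normalized to multiples of the 4096 MB minimum, so a node can host at most 32768 / 4096 = 8 minimum-size containers.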

    scp -r hadoop/ hsiehchou122:/root/hd/hadoop-2.8.4/etc
    scp -r hadoop/ hsiehchou123:/root/hd/hadoop-2.8.4/etc
    scp -r hadoop/ hsiehchou124:/root/hd/hadoop-2.8.4/etc

    Once everything is configured, distribute it to all nodes and start ZooKeeper; then
    start-all.sh brings up all of HDFS and YARN.
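    With everything up, a quick check that automatic failover is wired in:

    # which NameNode is active, which is standby
    hdfs haadmin -getServiceState nn1
    hdfs haadmin -getServiceState nn2
    # same check for the ResourceManager pair
    yarn rmadmin -getServiceState rm1
    yarn rmadmin -getServiceState rm2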

    II. HBase HA

    Edit the configuration files, distribute them to every node, and start HBase.
    Note: two HMasters are needed, and one of them has to be started by hand (see below).

    Note: the HBase release must match the Hadoop version.

    Here, HBase 2.1.4 is paired with Hadoop 2.8.4.

    As usual, copy Hadoop's core-site.xml and hdfs-site.xml into HBase's conf directory; HBase needs them to resolve the hdfs://mycluster nameservice.
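    For example, assuming HBase is unpacked at /root/hd/hbase-2.1.4 (adjust to your layout):

    cp /root/hd/hadoop-2.8.4/etc/hadoop/core-site.xml /root/hd/hbase-2.1.4/conf/
    cp /root/hd/hadoop-2.8.4/etc/hadoop/hdfs-site.xml /root/hd/hbase-2.1.4/conf/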

    Edit hbase-env.sh
    Edit hbase-site.xml

    1. hbase-env.sh

    export JAVA_HOME=/root/hd/jdk1.8.0_192

    export HBASE_MANAGES_ZK=false
    This disables HBase's bundled ZooKeeper so that the external cluster quorum is used instead.
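    Before starting HBase, it is worth confirming the external quorum is reachable; a quick probe with ZooKeeper's "ruok" four-letter command (assuming nc is installed and the command is not disabled on your ZooKeeper version):

    echo ruok | nc hsiehchou123 2181    # a healthy server replies "imok"
    echo ruok | nc hsiehchou124 2181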

    2. hbase-site.xml
    <configuration>
      <!-- Run in fully distributed mode -->
      <property>
        <name>hbase.cluster.distributed</name>
        <value>true</value>
      </property>
      <!-- HBase data lives under the HDFS nameservice; resolving "mycluster"
           is why core-site.xml and hdfs-site.xml were copied into conf/ -->
      <property>
        <name>hbase.rootdir</name>
        <value>hdfs://mycluster/hbase</value>
      </property>
      <!-- External ZooKeeper quorum (HBASE_MANAGES_ZK=false) -->
      <property>
        <name>hbase.zookeeper.quorum</name>
        <value>hsiehchou123,hsiehchou124</value>
      </property>
      <property>
        <name>hbase.zookeeper.property.clientPort</name>
        <value>2181</value>
      </property>
      <!-- 120000 ms (2 min) session timeout before a RegionServer is declared dead -->
      <property>
        <name>zookeeper.session.timeout</name>
        <value>120000</value>
      </property>
      <property>
        <name>hbase.zookeeper.property.tickTime</name>
        <value>6000</value>
      </property>
    </configuration>

    Start HBase.
    The second HMaster has to be started separately on the other server:
    ./hbase-daemon.sh start master
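    Once both masters are up, the status command in the HBase shell reports which master is active and how many backups are registered (output will look roughly like the comment below):

    hbase shell
    status
    # e.g. "1 active master, 1 backup masters, 2 servers, 0 dead, ..."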

    The master status page of the web UI shows the same information:
    http://192.168.116.122:16010/master-status
