• Deploying a Hadoop Cluster Environment on CentOS


    Switch to the root superuser account, then edit the hosts file with the following command:

    vim /etc/hosts
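
    For a single-node setup, a minimal entry might look like the following (the hostname hadoop-master is an assumption; substitute your own):

    192.168.123.8   hadoop-master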

    Save the file, then reboot the server. At this point, the server's network configuration is complete.

    ************************************************************************************************

    https://www.apache.org/dyn/closer.cgi/hadoop/common/hadoop-2.9.2/hadoop-2.9.2.tar.gz

    Server IP: 192.168.123.8


    Create a hadoop user with password hadoop, create the install directory, and open its permissions:

    useradd hadoop
    passwd hadoop
    mkdir /usr/hadoop
    chmod 777 /usr/hadoop
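
    A sketch of fetching and unpacking the release into the install directory (the archive.apache.org URL is an assumption; any mirror from the closer.cgi page above also works):

    cd /usr/hadoop
    wget https://archive.apache.org/dist/hadoop/common/hadoop-2.9.2/hadoop-2.9.2.tar.gz
    tar -zxvf hadoop-2.9.2.tar.gz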


    ************************************************

    ####JDK path: /usr/java/jdk1.8.0_191-amd64

    vi /home/hadoop/.bashrc
    export JAVA_HOME=/usr/java/jdk1.8.0_191-amd64

    export HADOOP_INSTALL=/usr/hadoop/hadoop-2.9.2
    export PATH=$PATH:$HADOOP_INSTALL/bin
    export PATH=$PATH:$HADOOP_INSTALL/sbin
    export HADOOP_MAPRED_HOME=$HADOOP_INSTALL
    export HADOOP_COMMON_HOME=$HADOOP_INSTALL
    export HADOOP_HDFS_HOME=$HADOOP_INSTALL
    export YARN_HOME=$HADOOP_INSTALL
    export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_INSTALL/lib/native
    export HADOOP_OPTS="-Djava.library.path=$HADOOP_INSTALL/lib/native"
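
    Reload the file so the variables take effect in the current shell, then verify the PATH works:

    source /home/hadoop/.bashrc
    hadoop version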

    ***********************************************************************************************
    /usr/hadoop/hadoop-2.9.2/etc/hadoop/core-site.xml

    Create the tmp directory that this config references, under the install root:

    cd /usr/hadoop/hadoop-2.9.2
    mkdir tmp

    Add the following inside the <configuration> element:


    <!-- HDFS file path -->
    <property>
      <name>fs.defaultFS</name>
      <value>hdfs://192.168.123.8:9000</value>
    </property>

    <property>
      <name>io.file.buffer.size</name>
      <value>131072</value>
    </property>

    <property>
      <name>hadoop.tmp.dir</name>
      <value>file:/usr/hadoop/hadoop-2.9.2/tmp</value>
      <description>A base for other temporary directories.</description>
    </property>
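
    As an optional sanity check, the configured value can be read back once Hadoop is on the PATH:

    bin/hdfs getconf -confKey fs.defaultFS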

    /usr/hadoop/hadoop-2.9.2/etc/hadoop/hdfs-site.xml

    Create the NameNode and DataNode directories under the install root:

    cd /usr/hadoop/hadoop-2.9.2
    mkdir -p dfs/name dfs/data

    Add the following inside the <configuration> element (dfs.replication is 1 because this is a single-node deployment):


    <property>
      <name>dfs.namenode.secondary.http-address</name>
      <value>192.168.123.8:9001</value>
    </property>

    <property>
      <name>dfs.namenode.name.dir</name>
      <value>file:/usr/hadoop/hadoop-2.9.2/dfs/name</value>
    </property>

    <property>
      <name>dfs.datanode.data.dir</name>
      <value>file:/usr/hadoop/hadoop-2.9.2/dfs/data</value>
    </property>

    <property>
      <name>dfs.replication</name>
      <value>1</value>
    </property>

    <property>
      <name>dfs.webhdfs.enabled</name>
      <value>true</value>
    </property>


    cp etc/hadoop/mapred-site.xml.template etc/hadoop/mapred-site.xml

    vim etc/hadoop/mapred-site.xml


    <configuration>
      <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
      </property>
      <property>
        <name>mapreduce.jobhistory.address</name>
        <value>192.168.123.8:10020</value>
      </property>
      <property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>192.168.123.8:19888</value>
      </property>
    </configuration>
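
    Since the job history addresses are configured above, the history server can be started later with the stock 2.x script (run from the install root):

    sbin/mr-jobhistory-daemon.sh start historyserver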

    vim etc/hadoop/yarn-site.xml

    Add the following inside the <configuration> element:


    <property>
      <name>yarn.nodemanager.aux-services</name>
      <value>mapreduce_shuffle</value>
    </property>
    <property>
      <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
      <value>org.apache.hadoop.mapred.ShuffleHandler</value>
    </property>
    <property>
      <name>yarn.resourcemanager.address</name>
      <value>192.168.123.8:8032</value>
    </property>
    <property>
      <name>yarn.resourcemanager.scheduler.address</name>
      <value>192.168.123.8:8030</value>
    </property>
    <property>
      <name>yarn.resourcemanager.resource-tracker.address</name>
      <value>192.168.123.8:8035</value>
    </property>
    <property>
      <name>yarn.resourcemanager.admin.address</name>
      <value>192.168.123.8:8033</value>
    </property>
    <property>
      <name>yarn.resourcemanager.webapp.address</name>
      <value>192.168.123.8:8088</value>
    </property>

    **********************************************************************
    At the top of both hadoop-env.sh and yarn-env.sh, add the following Java environment variable:

    export JAVA_HOME=/usr/java/jdk1.8.0_191-amd64

    vim etc/hadoop/hadoop-env.sh
    vim etc/hadoop/yarn-env.sh

    Give the hadoop user ownership of the install tree:

    chown -R hadoop:hadoop /usr/hadoop/hadoop-2.9.2/

    bin/hadoop namenode -format ####deprecated alias on 2.x; bin/hdfs namenode -format is the current form

    ************************************************
    First, start HDFS:

    sbin/start-dfs.sh

    Then check the cluster status:

    bin/hadoop dfsadmin -report

    Verify that the native libraries loaded successfully: hadoop checknative


    ********************************************************
    ####/usr/hadoop/hadoop-2.9.2/sbin


    sbin/start-dfs.sh ####HDFS
    http://192.168.123.8:50070/dfshealth.html#tab-overview

    sbin/start-yarn.sh ###start the YARN cluster
    http://192.168.123.8:8088/cluster/nodes
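
    Once YARN is up, the registered NodeManagers can also be listed from the command line:

    bin/yarn node -list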

    bin/hadoop dfsadmin -report ###check status

    sbin/start-all.sh ###starts both; deprecated in 2.x in favor of start-dfs.sh + start-yarn.sh
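
    To confirm the daemons are running, jps should list the Hadoop processes (on a single node: NameNode, DataNode, SecondaryNameNode, ResourceManager, NodeManager):

    jps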
