Hadoop 2.3 startup fixes, based on the YARN compute framework and a highly available DFS (complete)


    192.168.81.132 -> hadoop1 (namenode)

    192.168.81.130 -> hadoop2 (datanode1)

    192.168.81.129 -> hadoop3 (datanode2)

    192.168.81.131 -> hadoop4 (datanode3)

    I. Create accounts

    1. Create the hadoop user on all nodes

    useradd hadoop   

    passwd hadoop

    2. Create working directories on all nodes

    mkdir -p /home/hadoop/source  

    mkdir -p /home/hadoop/tools

    3. Create data directories on the slave nodes

    mkdir -p /hadoop/hdfs  

    mkdir -p /hadoop/tmp  

    mkdir -p /hadoop/log  

    chmod -R 777 /hadoop
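
    For convenience, the whole section can be scripted from one machine. A minimal sketch, assuming root SSH access to every node; it creates the slave directories on all nodes for simplicity, and "PASSWORD" is a placeholder, not from the original:

    for host in 192.168.81.132 192.168.81.130 192.168.81.129 192.168.81.131; do
        ssh root@$host '
            useradd hadoop && echo "hadoop:PASSWORD" | chpasswd
            mkdir -p /home/hadoop/source /home/hadoop/tools
            mkdir -p /hadoop/hdfs /hadoop/tmp /hadoop/log
            chmod -R 777 /hadoop
            chown -R hadoop:hadoop /home/hadoop /hadoop
        '
    done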

    II. Change hostnames

    Do this on every node.

    1. vim /etc/sysconfig/network

    Set HOSTNAME=hadoopx (hadoop1 through hadoop4, per the table above)

    2. vim /etc/hosts and add the following entries:

    192.168.81.132   hadoop1

    192.168.81.130   hadoop2

    192.168.81.129   hadoop3

    192.168.81.131   hadoop4

    3. Run hostname hadoopx to apply the name immediately

    4. Log in again and the new hostname takes effect
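
    For scripted setups, the same change can be made without an editor. A sketch for hadoop1 (this guide clearly targets a RHEL/CentOS-style system, given /etc/sysconfig/network):

    sed -i 's/^HOSTNAME=.*/HOSTNAME=hadoop1/' /etc/sysconfig/network
    hostname hadoop1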

    III. Passwordless SSH login

    Note: for a non-root user, passwordless login also requires chmod 700 ~/.ssh and chmod 600 ~/.ssh/authorized_keys; without these permissions, key-based login for non-root users is refused.
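
    The original omits the key setup itself. A minimal sketch of one common way to do it, run as the hadoop user on each node:

    # Generate a passphrase-less key pair, then push it to every node.
    ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
    for host in hadoop1 hadoop2 hadoop3 hadoop4; do
        ssh-copy-id hadoop@$host    # appends the public key to authorized_keys
    done
    # The permission fix from the note above:
    chmod 700 ~/.ssh
    chmod 600 ~/.ssh/authorized_keys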

    IV. Install the JDK (omitted)

    V. Configure environment variables

    1. /etc/profile

    export JAVA_HOME=/usr/java/jdk1.6.0_27
    export CLASSPATH=.:$JAVA_HOME/lib/tools.jar:$JAVA_HOME/lib/dt.jar
    export PATH=$JAVA_HOME/bin:$PATH
    export HADOOP_HOME=/home/hadoop/hadoop
    export PATH=$PATH:$HADOOP_HOME/bin
    export PATH=$PATH:$HADOOP_HOME/sbin
    export HADOOP_MAPRED_HOME=${HADOOP_HOME}
    export HADOOP_COMMON_HOME=${HADOOP_HOME}
    export HADOOP_HDFS_HOME=${HADOOP_HOME}
    export YARN_HOME=${HADOOP_HOME}
    export HADOOP_CONF_DIR=${HADOOP_HOME}/etc/hadoop
    export HDFS_CONF_DIR=${HADOOP_HOME}/etc/hadoop
    export YARN_CONF_DIR=${HADOOP_HOME}/etc/hadoop
    export HADOOP_COMMON_LIB_NATIVE_DIR=${HADOOP_HOME}/lib/native
    export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib"

    2. hadoop-env.sh  

    Append at the end: export JAVA_HOME=/usr/java/jdk1.6.0_27
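
    The new variables only take effect in a fresh login shell. To apply and sanity-check them immediately (a quick check, not part of the original steps):

    source /etc/profile
    echo $HADOOP_HOME    # should print /home/hadoop/hadoop
    java -version        # should report 1.6.0_27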

    VI. Install Hadoop 2.3

    1. Configure core-site.xml

    <?xml version="1.0" encoding="UTF-8"?>
    <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

    <!-- Put site-specific property overrides in this file. -->

    <configuration>
      <property>
        <name>hadoop.tmp.dir</name>
        <!-- matches the /hadoop/tmp directory created in section I -->
        <value>/hadoop/tmp</value>
        <description>A base for other temporary directories.</description>
      </property>
      <property>
        <name>fs.defaultFS</name>
        <value>hdfs://192.168.81.132:9000</value>
      </property>
    </configuration>
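
    Once Hadoop is installed, the effective value can be checked without starting anything; hdfs getconf only reads the configuration:

    hdfs getconf -confKey fs.defaultFS    # expect hdfs://192.168.81.132:9000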
    

    2. Configure hdfs-site.xml

    <?xml version="1.0" encoding="UTF-8"?>
    <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

    <!-- Put site-specific property overrides in this file. -->

    <configuration>
      <property>
        <name>dfs.replication</name>
        <value>3</value>
      </property>
      <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:/hadoop/hdfs/name</value>
        <final>true</final>
      </property>
      <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:/hadoop/hdfs/data</value>
        <final>true</final>
      </property>
      <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>192.168.81.132:9001</value>
      </property>
      <property>
        <name>dfs.webhdfs.enabled</name>
        <value>true</value>
      </property>
    </configuration>
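
    Once HDFS is running (section VII), dfs.webhdfs.enabled can be verified over the NameNode's HTTP port; 50070 is the default, an assumption here since this guide does not set dfs.namenode.http-address:

    curl -i "http://192.168.81.132:50070/webhdfs/v1/?op=LISTSTATUS"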
    

    3. Configure mapred-site.xml (the Hadoop 2.3 tarball ships only mapred-site.xml.template; copy it to mapred-site.xml first)

    <configuration>
      <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
      </property>
      <property>
        <name>mapreduce.jobhistory.address</name>
        <value>192.168.81.132:10020</value>
      </property>
      <property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>192.168.81.132:19888</value>
      </property>
    </configuration>
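
    Note that start-yarn.sh does not start the JobHistory server configured above; it has its own daemon script. Run it on the namenode once the cluster is up:

    mr-jobhistory-daemon.sh start historyserver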
    

    4. Configure yarn-site.xml

    <?xml version="1.0"?>
    <configuration>

    <!-- Site specific YARN configuration properties -->

      <property>
        <name>yarn.resourcemanager.address</name>
        <value>192.168.81.132:18040</value>
      </property>
      <property>
        <name>yarn.resourcemanager.scheduler.address</name>
        <value>192.168.81.132:18030</value>
      </property>
      <property>
        <name>yarn.resourcemanager.webapp.address</name>
        <value>192.168.81.132:18088</value>
      </property>
      <property>
        <name>yarn.resourcemanager.resource-tracker.address</name>
        <value>192.168.81.132:18025</value>
      </property>
      <property>
        <name>yarn.resourcemanager.admin.address</name>
        <value>192.168.81.132:18141</value>
      </property>
      <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
      </property>
      <property>
        <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
        <value>org.apache.hadoop.mapred.ShuffleHandler</value>
      </property>
    </configuration>
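
    Two steps the walkthrough leaves implicit: the etc/hadoop/slaves file tells start-dfs.sh and start-yarn.sh where the worker daemons run, and every node needs the same configuration. A sketch of both; the slaves entries are an assumption based on the node table at the top:

    # List the datanodes, one per line (assumed; not shown in the original):
    cat > $HADOOP_CONF_DIR/slaves <<'EOF'
    hadoop2
    hadoop3
    hadoop4
    EOF

    # Push the finished configuration to the other nodes:
    for host in hadoop2 hadoop3 hadoop4; do
        scp $HADOOP_CONF_DIR/*.xml $HADOOP_CONF_DIR/slaves \
            $HADOOP_CONF_DIR/hadoop-env.sh hadoop@$host:$HADOOP_CONF_DIR/
    done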
    

    VII. Verification

    1. Format HDFS (on the namenode):
       hdfs namenode -format
       (the older "hadoop namenode -format" still works in 2.3 but is deprecated)
    2. Start HDFS:
       start-dfs.sh
    3. Start YARN:
       start-yarn.sh
    4. Start httpfs:
       httpfs.sh start
    5. Verify the processes on the NameNode (with jps):
       NameNode
       Bootstrap (the httpfs process)
       SecondaryNameNode
       ResourceManager
    6. Verify the processes on each DataNode (with jps):
       DataNode
       NodeManager
    7. Test HDFS reads/writes and job execution (the examples jar ships under $HADOOP_HOME/share/hadoop/mapreduce/):
    hadoop jar hadoop-mapreduce-examples-2.3.0.jar wordcount hdfs://192.168.81.132:9000/input hdfs://192.168.81.132:9000/output
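
    The job fails if /input is empty or /output already exists. A minimal sketch of preparing input and checking the result; the input file choice is illustrative:

    hadoop fs -mkdir -p /input
    hadoop fs -put /etc/hosts /input/
    # ...after the wordcount job above finishes, inspect the output:
    hadoop fs -cat /output/part-r-00000 | head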

    8. Browse the cluster from a web browser: the NameNode UI listens on port 50070 by default, and the ResourceManager web UI is at 192.168.81.132:18088 as configured above.

    Next step: verify a dual-master (HA) NameNode setup.

Original post: https://www.cnblogs.com/bobsoft/p/3628469.html