• Hadoop 2.0 Installation (Non-HA Version)


    The overall steps are the same as for Hadoop 1.0 (see the Hadoop 1.0 installation guide); the changes are mainly in the configuration files.

    Installation

    • Edit ./etc/hadoop/hadoop-env.sh
    # Set JAVA_HOME
    export JAVA_HOME="/usr/local/src/jdk1.8.0_181/"
    
    • Edit ./etc/hadoop/yarn-env.sh
    # Set JAVA_HOME
    JAVA_HOME="/usr/local/src/jdk1.8.0_181/"
    
    • Edit ./etc/hadoop/slaves
    slave1
    slave2
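
    The slaves file lists the hosts that will run the DataNode and NodeManager daemons; start-all.sh reaches them over SSH. A quick sanity check, assuming the hostnames resolve and passwordless SSH from master is already set up (as for Hadoop 1.0):
    # Each command should print the slave's hostname without prompting for a password
    ssh slave1 hostname
    ssh slave2 hostname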
    
    • Edit ./etc/hadoop/core-site.xml
    <configuration>
        <property>
            <name>fs.defaultFS</name>
            <value>hdfs://master:9000</value>
        </property>
        <property>
            <name>hadoop.tmp.dir</name>
            <value>file:/usr/local/src/hadoop-2.6.5/tmp</value>
        </property>
    </configuration>
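
    fs.defaultFS points HDFS clients at the NameNode on master:9000, and hadoop.tmp.dir moves Hadoop's working files out of /tmp. A small sketch for confirming the values are actually picked up (hdfs getconf ships with the 2.x distribution):
    # Print the effective configuration values as Hadoop resolves them
    ./bin/hdfs getconf -confKey fs.defaultFS      # expect hdfs://master:9000
    ./bin/hdfs getconf -confKey hadoop.tmp.dir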
    
    • Edit ./etc/hadoop/hdfs-site.xml
    <configuration>
        <property>
            <name>dfs.namenode.secondary.http-address</name>
            <value>master:9001</value>
        </property>
        <property>
            <name>dfs.namenode.name.dir</name>
            <value>file:/usr/local/src/hadoop-2.6.5/dfs/name</value>
        </property>
        <property>
            <name>dfs.datanode.data.dir</name>
            <value>file:/usr/local/src/hadoop-2.6.5/dfs/data</value>
        </property>
        <property>
            <name>dfs.replication</name>
            <value>2</value>
        </property>
    </configuration>
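
    With dfs.replication set to 2, every block is stored on both DataNodes (slave1 and slave2). Once the cluster is running, a quick way to confirm that both DataNodes registered and the replication factor took effect:
    # Cluster summary: should report 2 live datanodes
    ./bin/hdfs dfsadmin -report
    ./bin/hdfs getconf -confKey dfs.replication   # expect 2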
    
    • Edit ./etc/hadoop/mapred-site.xml
    <configuration>
        <property>
            <name>mapreduce.framework.name</name>
            <value>yarn</value>
        </property>
        <property>
            <name>mapreduce.jobhistory.address</name>
            <value>slave1:10020</value>
        </property>
        <property>
            <name>mapreduce.jobhistory.webapp.address</name>
            <value>slave1:19888</value>
        </property>
    </configuration>
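
    Note that start-all.sh does not start the JobHistory Server. Since mapreduce.jobhistory.address points at slave1, it has to be started there by hand; the daemon script below ships with Hadoop 2.6.5:
    # Run on slave1 after the cluster is up
    ./sbin/mr-jobhistory-daemon.sh start historyserver
    # The web UI is then reachable at http://slave1:19888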
    
    • Edit ./etc/hadoop/yarn-site.xml
    <configuration>
        <property>
            <name>yarn.nodemanager.aux-services</name>
            <value>mapreduce_shuffle</value>
        </property>
        <property>
            <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
            <value>org.apache.hadoop.mapred.ShuffleHandler</value>
        </property>
        <property>
            <name>yarn.resourcemanager.address</name>
            <value>master:8032</value>
        </property>
        <property>
            <name>yarn.resourcemanager.scheduler.address</name>
            <value>master:8030</value>
        </property>
        <property>
            <name>yarn.resourcemanager.resource-tracker.address</name>
            <value>master:8035</value>
        </property>
        <property>
            <name>yarn.resourcemanager.admin.address</name>
            <value>master:8033</value>
        </property>
        <property>
            <name>yarn.resourcemanager.webapp.address</name>
            <value>master:8088</value>
        </property>
        <property>
            <name>yarn.log-aggregation-enable</name>
            <value>true</value>
        </property>
        <property>
            <name>yarn.log-aggregation.retain-seconds</name>
            <value>259200</value>
        </property>
        <property>
            <name>yarn.log.server.url</name>
            <value>http://slave1:19888/jobhistory/logs</value>
        </property>
        <property>
            <name>yarn.nodemanager.vmem-pmem-ratio</name>
            <value>4.0</value>
        </property>
    </configuration>
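
    yarn.log-aggregation-enable makes the NodeManagers upload container logs to HDFS when an application finishes, and yarn.log.server.url lets the ResourceManager web UI link through to the JobHistory Server for them. A sketch of pulling a finished job's logs from the command line (the application id below is a placeholder; use whatever yarn application -list reports):
    # List recent applications, then fetch the aggregated logs for one of them
    ./bin/yarn application -list -appStates FINISHED
    ./bin/yarn logs -applicationId application_XXXXXXXXXXXXX_XXXX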
    
    • As with Hadoop 1.0, HDFS must be formatted before the first start: ./bin/hadoop namenode -format (in 2.x this is a deprecated alias for ./bin/hdfs namenode -format)

    • Start: ./sbin/start-all.sh (see the process check sketched after this list)

    • Usage: same as Hadoop 1.0, via the ./bin/hadoop command

    • Stop: ./sbin/stop-all.sh
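
    A minimal check that the daemons came up after start-all.sh, assuming the master/slave1/slave2 layout configured above: jps on master should list NameNode, SecondaryNameNode and ResourceManager, and each slave should list DataNode and NodeManager.
    # On master
    jps
    # On the slaves (run over SSH from master for convenience)
    ssh slave1 jps
    ssh slave2 jps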

    Submitting a MapReduce Job

    Essentially nothing changes, except that the Hadoop Streaming jar now lives under share/hadoop/tools/lib

    [wadeyu@master mr_count]$ cat run.sh
    HADOOP_CMD=/usr/local/src/hadoop-2.6.5/bin/hadoop
    HADOOP_STREAMING_JAR=/usr/local/src/hadoop-2.6.5/share/hadoop/tools/lib/hadoop-streaming-2.6.5.jar
    
    INPUT_FILE=/data/The_Man_of_Property.txt
    OUTPUT_DIR=/output/wc
    
    $HADOOP_CMD fs -rmr -skipTrash $OUTPUT_DIR
    
    $HADOOP_CMD jar $HADOOP_STREAMING_JAR \
        -input $INPUT_FILE \
        -output $OUTPUT_DIR \
        -mapper "python map.py" \
        -reducer "python red.py" \
        -file ./map.py \
        -file ./red.py
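
    A sketch of running the job and inspecting the result; map.py and red.py are assumed to be the same wordcount mapper/reducer used with Hadoop 1.0, and the output path matches OUTPUT_DIR in run.sh above:
    # Submit the streaming job, then look at the reducer output on HDFS
    bash run.sh
    /usr/local/src/hadoop-2.6.5/bin/hadoop fs -ls /output/wc
    /usr/local/src/hadoop-2.6.5/bin/hadoop fs -cat /output/wc/part-* | head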
    

