Hadoop Cluster Installation on Ubuntu [reposted]


    Setting up a hadoop-2.6.0 cluster environment

    I. Host planning

    1. Prepare four Ubuntu 14.04 64-bit virtual machines: one acts as the resourcemanager and namenode, the other three as nodemanagers and datanodes. Because the hosts need passwordless SSH access to one another, static IPs are used. The plan:

    namenode   ip:192.168.1.110

    datanode1  ip:192.168.1.111

    datanode2  ip:192.168.1.112

    datanode3  ip:192.168.1.113

    Modify the hostname and the hosts file on each machine:

    $ sudo vim /etc/hostname

    $ sudo vim /etc/hosts
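    For reference, /etc/hosts on every node would contain entries like these (the host names and IPs come from the plan above). Ubuntu's default "127.0.1.1 <hostname>" line is best removed, since it can make hadoop daemons bind to the loopback address:

    127.0.0.1      localhost
    192.168.1.110  namenode
    192.168.1.111  datanode1
    192.168.1.112  datanode2
    192.168.1.113  datanode3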

    2. Create a user group and user

    $ sudo groupadd cluster

    $ sudo useradd -m -s /bin/bash -g cluster -G sudo hadoop

    $ sudo passwd hadoop

    Log out and log back in as the hadoop user; this mainly makes it convenient to edit the configuration files later with gedit.

    3. Install SSH and configure passwordless access by running the following commands in order:

    $ sudo apt-get install ssh

    $ sudo apt-get install rsync

    $ ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa

    $ cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
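    If key-based login still prompts for a password later, it is usually a permissions problem: sshd ignores an authorized_keys file that is group- or world-writable. A standard fix (not part of the original steps):

    $ chmod 700 ~/.ssh
    $ chmod 600 ~/.ssh/authorized_keys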


    4. Configure passwordless access from namenode to datanode1

    On datanode1, change into /home/hadoop/.ssh and run:

    $ scp hadoop@namenode:/home/hadoop/.ssh/id_dsa.pub ./namenode_dsa.pub

    $ cat namenode_dsa.pub >> authorized_keys

    On namenode, run:

    $ ssh hadoop@datanode1 (the first connection only asks you to confirm the host key fingerprint; after that, login is password-free)

    Repeat the same steps for datanode2 and datanode3 (or use the ssh-copy-id sketch below).
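    Alternatively, ssh-copy-id (shipped with Ubuntu's openssh-client) automates the scp/cat steps above; a sketch, run from namenode:

    $ for host in datanode1 datanode2 datanode3; do ssh-copy-id -i ~/.ssh/id_dsa.pub hadoop@$host; done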

    II. Install the JDK and hadoop-2.6.0

    1. Install the JDK

    Download jdk-8u25-linux-x64.tar.gz from the Oracle website and copy it to /usr, then run: $ sudo tar -zxf /usr/jdk-8u25-linux-x64.tar.gz -C /usr

    2. Install hadoop-2.6.0

    Download hadoop-2.6.0.tar.gz from http://hadoop.apache.org/ and copy it to /home/hadoop, then run: $ tar -zxf /home/hadoop/hadoop-2.6.0.tar.gz -C /home/hadoop

    3. Configure environment variables

    Run $ sudo vim /etc/profile and append the following at the end of the file:

    export JAVA_HOME=/usr/jdk1.8.0_25
    export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
    export PATH=$PATH:$JAVA_HOME/bin
    export HADOOP_PREFIX=/home/hadoop/hadoop-2.6.0
    export HADOOP_CONF_DIR=/home/hadoop/hadoop-2.6.0/etc/hadoop
    export HADOOP_YARN_HOME=/home/hadoop/hadoop-2.6.0
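
    Reload the profile and verify both installs (a quick sanity check, not in the original):

    $ source /etc/profile
    $ java -version
    $ $HADOOP_PREFIX/bin/hadoop version

    Note that the hadoop start scripts read JAVA_HOME from etc/hadoop/hadoop-env.sh rather than from the login environment; if a daemon later fails with "JAVA_HOME is not set", set it there explicitly:

    $ echo 'export JAVA_HOME=/usr/jdk1.8.0_25' >> /home/hadoop/hadoop-2.6.0/etc/hadoop/hadoop-env.sh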


    III. Configure hadoop

    1. Configure /home/hadoop/hadoop-2.6.0/etc/hadoop/core-site.xml

    <configuration>
        <property>  
            <name>hadoop.tmp.dir</name>  
            <value>/home/hadoop/hadoop-2.6.0/tmp</value>  
            <description>Abase for other temporary directories.</description>  
        </property>  
        <property>  
            <name>fs.defaultFS</name>  
            <value>hdfs://namenode:9000</value>  
        </property>  
        <property>  
            <name>io.file.buffer.size</name>  
            <value>131072</value>  
        </property>  
    </configuration>

    2. Configure /home/hadoop/hadoop-2.6.0/etc/hadoop/hdfs-site.xml

    <configuration>
     
        <property>  
            <name>dfs.namenode.name.dir</name>  
            <value>/home/hadoop/hadoop-2.6.0/dfs/name</value>  
        </property>  
        <property>  
            <name>dfs.datanode.data.dir</name>  
            <value>/home/hadoop/hadoop-2.6.0/dfs/data</value>  
        </property>  
    
        <property>  
            <name>dfs.replication</name>  
            <value>2</value>  
        </property>  
        <property>  
            <name>dfs.blocksize</name>  
            <value>268435456</value>  
        </property>
        <property>  
            <name>dfs.namenode.handler.count</name>  
            <value>100</value>  
        </property>  
    
    </configuration>
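
    The directories referenced above (and hadoop.tmp.dir from core-site.xml) do not exist yet. Hadoop generally creates them on demand, so this is optional, but they can be created up front to rule out permission surprises:

    $ mkdir -p /home/hadoop/hadoop-2.6.0/tmp
    $ mkdir -p /home/hadoop/hadoop-2.6.0/dfs/name    # needed on namenode
    $ mkdir -p /home/hadoop/hadoop-2.6.0/dfs/data    # needed on each datanode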

    3. Configure /home/hadoop/hadoop-2.6.0/etc/hadoop/yarn-site.xml

    <configuration>
    
    <!-- Site specific YARN configuration properties -->
        <property>  
            <name>yarn.acl.enable</name>  
            <value>true</value>  
        </property> 
        <property>  
            <name>yarn.admin.acl</name>  
            <value>*</value>  
        </property> 
        <property>  
            <name>yarn.log-aggregation-enable</name>  
            <value>false</value>  
        </property> 
    
        <property>  
            <name>yarn.resourcemanager.address</name>  
            <value>namenode:8032</value>  
        </property> 
        <property>  
            <name>yarn.resourcemanager.scheduler.address</name>  
            <value>namenode:8030</value>  
        </property>
        <property>  
            <name>yarn.resourcemanager.resource-tracker.address</name>  
            <value>namenode:8031</value>  
        </property>
        <property>  
            <name>yarn.resourcemanager.admin.address</name>  
            <value>namenode:8033</value>  
        </property> 
        <property>  
            <name>yarn.resourcemanager.webapp.address</name>  
            <value>namenode:8088</value>  
        </property>  
         <property>  
            <name>yarn.resourcemanager.hostname</name>  
            <value>namenode</value>  
        </property> 
        <property>  
            <name>yarn.resourcemanager.scheduler.class</name>  
            <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler</value>  
        </property>
    
        <property>  
            <name>yarn.scheduler.minimum-allocation-mb</name>  
            <value>1024</value>  
        </property>  
    
        <property>  
            <name>yarn.scheduler.maximum-allocation-mb</name>  
            <value>8192</value>  
        </property>  
    
    
        <property>  
            <name>yarn.nodemanager.resource.memory-mb</name>  
            <value>8192</value>  
        </property>  
    
        <property>  
            <name>yarn.nodemanager.log.retain-seconds</name>  
            <value>10800</value>  
        </property>  
        <property>  
            <name>yarn.nodemanager.aux-services</name>  
            <value>mapreduce_shuffle</value>  
        </property>  
        <property>  
            <name>yarn.nodemanager.remote-app-log-dir</name>  
            <value>/logs</value>  
        </property>  
        <property>  
            <name>yarn.nodemanager.remote-app-log-dir-suffix</name>  
            <value>logs</value>  
        </property>  
    
        <property>  
            <name>yarn.log-aggregation.retain-seconds</name>  
            <value>-1</value>  
        </property>
        <property>  
            <name>yarn.log-aggregation.retain-check-interval-seconds</name>  
            <value>-1</value>  
        </property>
    
        <!-- To run a node health check, point
             yarn.nodemanager.health-checker.script.path at an actual script on
             disk; the property is omitted here since no such script is used. -->
    </configuration>

    4. Configure /home/hadoop/hadoop-2.6.0/etc/hadoop/mapred-site.xml
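    In the stock 2.6.0 tarball this file does not exist; only mapred-site.xml.template ships, so copy the template first:

    $ cp /home/hadoop/hadoop-2.6.0/etc/hadoop/mapred-site.xml.template /home/hadoop/hadoop-2.6.0/etc/hadoop/mapred-site.xml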

        <configuration>  
            <property>  
                <name>mapreduce.framework.name</name>  
                <value>yarn</value>  
            </property>  
            <property>  
                <name>mapreduce.map.memory.mb</name>  
                <value>1536</value>  
            </property>  
            <property>  
                <name>mapreduce.map.java.opts</name>  
                <value>-Xmx1024M</value>  
            </property>  
            <property>  
                <name>mapreduce.reduce.memory.mb</name>  
                <value>3072</value>  
            </property>  
            <property>  
                <name>mapreduce.reduce.java.opts</name>  
                <value>-Xmx2560M</value>  
            </property> 
    
            <property>  
                <name>mapreduce.task.io.sort.mb</name>  
                <value>512</value>  
            </property> 
            <property>  
                <name>mapreduce.task.io.sort.factor</name>  
                <value>100</value>  
            </property> 
            <property>  
                <name>mapreduce.reduce.shuffle.parallelcopies</name>  
                <value>50</value>  
            </property>  
    
    
            <property>  
                <name>mapreduce.jobhistory.address</name>  
                <value>namenode:10020</value>  
            </property>  
            <property>  
                <name>mapreduce.jobhistory.webapp.address</name>  
                <value>namenode:19888</value>  
            </property>  
            <property>  
                <name>mapreduce.jobhistory.intermediate-done-dir</name>  
                <value>/mr-history/tmp</value>  
            </property>  
            <property>  
                <name>mapreduce.jobhistory.done-dir</name>  
                <value>/mr-history/done</value>  
            </property>  
        </configuration>  

    5. Configure /home/hadoop/hadoop-2.6.0/etc/hadoop/slaves by writing each datanode into the file, one per line, as either a hostname or an IP:

    datanode1 (or 192.168.1.111)

    datanode2 (or 192.168.1.112)

    datanode3 (or 192.168.1.113)
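
    The same JDK, hadoop tree, and configuration must be present on every node, a step the walkthrough otherwise leaves implicit. One way to push everything from namenode, using the rsync installed earlier (a sketch, assuming identical paths on all hosts):

    $ for host in datanode1 datanode2 datanode3; do rsync -a /home/hadoop/hadoop-2.6.0/ hadoop@$host:/home/hadoop/hadoop-2.6.0/; done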


    6. Start hadoop

    Start the following processes on the namenode host.

    Format the HDFS filesystem:

    $ $HADOOP_PREFIX/bin/hdfs namenode -format

    Start the namenode:

    $ $HADOOP_PREFIX/sbin/hadoop-daemon.sh --config $HADOOP_CONF_DIR --script hdfs start namenode

    Start the resourcemanager:

    $ $HADOOP_YARN_HOME/sbin/yarn-daemon.sh --config $HADOOP_CONF_DIR start resourcemanager

    Start the MapReduce JobHistory Server:

    $ $HADOOP_PREFIX/sbin/mr-jobhistory-daemon.sh start historyserver --config $HADOOP_CONF_DIR

    Use the jps command to check what is running:
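
    On namenode, after the steps above, the output should list roughly the following daemons (the PIDs will differ):

    $ jps
    2481 NameNode
    2735 ResourceManager
    2962 JobHistoryServer
    3120 Jps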


    Then start the following on each datanode.

    Start the datanode:

    $ $HADOOP_PREFIX/sbin/hadoop-daemon.sh --config $HADOOP_CONF_DIR --script hdfs start datanode

    Start the nodemanager:

    $ $HADOOP_YARN_HOME/sbin/yarn-daemon.sh --config $HADOOP_CONF_DIR start nodemanager
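
    jps on a datanode should then show something like (again, PIDs will differ):

    $ jps
    1843 DataNode
    2067 NodeManager
    2190 Jps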


    7. Stopping services

    To stop a service, change start to stop in the corresponding command above.

    Hadoop's running state can be checked in a browser at the following addresses:

    NameNode http://namenode:50070

    ResourceManager http://namenode:8088

    MapReduce JobHistory Server http://namenode:19888

    IV. Run a test

    1. Create input and output directories on the HDFS filesystem:

    $ $HADOOP_PREFIX/bin/hdfs dfs -mkdir /input

    $ $HADOOP_PREFIX/bin/hdfs dfs -mkdir /output

    This test uses hadoop's bundled wordcount example. Create a test file test.txt in the current directory:

    $ touch test.txt (then write some word text into the file)
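
    For instance (sample content invented for this walkthrough):

    $ echo "hello hadoop hello world" > test.txt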

    $ $HADOOP_PREFIX/bin/hdfs dfs -copyFromLocal test.txt /input

    Run the program:

    $ $HADOOP_PREFIX/bin/hadoop jar $HADOOP_PREFIX/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.0.jar wordcount /input/ /output/result

    View the results:

    $ $HADOOP_PREFIX/bin/hdfs dfs -cat /output/result/*
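
    With the sample line above, the output would be:

    hadoop  1
    hello   2
    world   1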

    Corrections are welcome!

    Source: http://blog.csdn.net/fteworld/article/details/41944597
