CentOS 6.4: Installing Hadoop 2.0.5 alpha


    1. On the second node, repeat steps 1-5 of http://www.cnblogs.com/littlesuccess/p/3361497.html

    2. On the first node, change the replication factor (dfs.replication) in hdfs-site.xml to 3

    [root@server-305 ~]# vim /opt/hadoop/etc/hadoop/hdfs-site.xml
      <property>
        <name>dfs.replication</name>
        <value>3</value>
      </property>
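    Note that dfs.replication is only a client-side default applied when a file is written; files that already exist keep their old factor. Once the cluster is up (after step 9), the effective value can be checked and an existing path re-replicated, roughly as follows (a sketch; hdfs getconf -confKey may not be present in every 2.x build):

    [hdfs@server-305 ~]$ hdfs getconf -confKey dfs.replication    # prints the configured default
    [hdfs@server-305 ~]$ hadoop fs -setrep -w 3 /user/shaochen    # re-replicate an existing path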

    3. On the first node, set the YARN resourcemanager addresses in yarn-site.xml

    [root@server-305 hadoop]# vi yarn-site.xml
      <property>
        <name>yarn.resourcemanager.address</name>
        <value>10.10.96.32:8080</value>
      </property>
      <property>
        <name>yarn.resourcemanager.scheduler.address</name>
        <value>10.10.96.32:8081</value>
      </property>
      <property>
        <name>yarn.resourcemanager.resource-tracker.address</name>
        <value>10.10.96.32:8082</value>
      </property>
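    These three settings override the stock Hadoop 2 defaults (8032 for the resourcemanager address, 8030 for the scheduler, 8031 for the resource tracker). Once the resourcemanager is running (step 9), a quick sketch to confirm it is listening on the custom ports:

    [root@server-305 ~]# netstat -ltnp | grep -E ':(8080|8081|8082)'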

    4. Copy core-site.xml, hdfs-site.xml, mapred-site.xml and yarn-site.xml from the first node to the second node

    [root@server-305 ~]# scp /opt/hadoop/etc/hadoop/core-site.xml 192.168.32.33:/opt/hadoop/etc/hadoop/core-site.xml
    root@192.168.32.33's password:
    core-site.xml                                                                                                              100%  975     1.0KB/s   00:00
    [root@server-305 ~]# scp /opt/hadoop/etc/hadoop/hdfs-site.xml 192.168.32.33:/opt/hadoop/etc/hadoop/hdfs-site.xml
    root@192.168.32.33's password:
    hdfs-site.xml                                                                                                              100% 1406     1.4KB/s   00:00
    [root@server-305 ~]# scp /opt/hadoop/etc/hadoop/mapred-site.xml 192.168.32.33:/opt/hadoop/etc/hadoop/mapred-site.xml
    root@192.168.32.33's password:
    mapred-site.xml                                                                                                            100%  854     0.8KB/s   00:00
    [root@server-305 ~]# scp /opt/hadoop/etc/hadoop/yarn-site.xml 192.168.32.33:/opt/hadoop/etc/hadoop/yarn-site.xml
    root@192.168.32.33's password:
    yarn-site.xml                                                                                                              100%  964     0.9KB/s   00:00
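    The four copies can also be done in one pass with a small loop (a sketch, assuming the same directory layout on both machines):

    [root@server-305 ~]# for f in core-site.xml hdfs-site.xml mapred-site.xml yarn-site.xml; do
    >     scp /opt/hadoop/etc/hadoop/$f 192.168.32.33:/opt/hadoop/etc/hadoop/$f
    > done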

    5. On the first node, stop the namenode, secondarynamenode and datanode

    [root@server-305 ~]# su - hdfs
    [hdfs@server-305 ~]$ cd /opt/hadoop/sbin
    [hdfs@server-305 sbin]$ ./hadoop-daemon.sh stop namenode
    stopping namenode
    [hdfs@server-305 sbin]$ ./hadoop-daemon.sh stop secondarynamenode
    stopping secondarynamenode
    [hdfs@server-305 sbin]$ ./hadoop-daemon.sh stop datanode
    stopping datanode

    6. On the first node, stop the resourcemanager and nodemanager

    [yarn@server-305 sbin]$ ./yarn-daemon.sh stop resourcemanager
    stopping resourcemanager
    [yarn@server-305 sbin]$ ./yarn-daemon.sh stop nodemanager
    stopping nodemanager
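    Before reformatting, it is worth confirming that nothing is left running; jps under each user should now list only Jps itself:

    [hdfs@server-305 sbin]$ jps    # expect only Jps
    [yarn@server-305 sbin]$ jps    # expect only Jps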

    7. On the first node, format the cluster and restart HDFS

    [root@server-305 ~]# su - hdfs
    [hdfs@server-305 ~]$ cd /opt/hadoop/bin
    [hdfs@server-305 bin]$ ./hadoop namenode -format
    [hdfs@server-305 bin]$ cd ../sbin
    [hdfs@server-305 sbin]$ ./hadoop-daemon.sh start namenode
    [hdfs@server-305 sbin]$ ./hadoop-daemon.sh start datanode
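    (In Hadoop 2 the preferred form is hdfs namenode -format; hadoop namenode -format still works but prints a deprecation warning.) To verify that the two daemons came up, jps should show NameNode and DataNode, and the namenode web UI should answer on the default port 50070, e.g.:

    [hdfs@server-305 sbin]$ jps
    [hdfs@server-305 sbin]$ curl -s -o /dev/null -w '%{http_code}\n' http://localhost:50070/    # expect 200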

    8. On the second node, start the secondarynamenode and datanode

    [hdfs@server-305 ~]# ssh 192.168.32.33
    [hdfs@server-308 ~]# cd /opt/hadoop/sbin
    [hdfs@server-308 sbin]# ./hadoop-daemon.sh start secondarynamenode
    starting secondarynamenode, logging to /opt/hadoop/logs/hadoop-root-secondarynamenode-server-308.out
    [hdfs@server-308 sbin]# jps
    23485 SecondaryNameNode
    23525 Jps
    [hdfs@server-308 sbin]# ./hadoop-daemon.sh start datanode
    starting datanode, logging to /opt/hadoop/logs/hadoop-root-datanode-server-308.out
    Java HotSpot(TM) 64-Bit Server VM warning: You have loaded library /opt/hadoop-2.1.0-beta/lib/native/libhadoop.so.1.0.0 which might have disabled stack guard. The VM will try to fix the stack guard now.
    It's highly recommended that you fix the library with 'execstack -c <libfile>', or link it with '-z noexecstack'.
    [hdfs@server-308 sbin]# jps
    23549 DataNode
    23485 SecondaryNameNode
    23617 Jps
    [hdfs@server-308 sbin]#
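    At this point the namenode should see both datanodes as live. A quick check from the first node (a sketch; the exact wording of the report varies between 2.x releases):

    [hdfs@server-305 sbin]$ cd ../bin
    [hdfs@server-305 bin]$ ./hdfs dfsadmin -report | grep -i datanodes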

    9. On the first node, start the resourcemanager and nodemanager as the yarn user, as sketched below
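    The start commands mirror the stop commands from step 6 (a sketch, using the same paths as above):

    [root@server-305 ~]# su - yarn
    [yarn@server-305 ~]$ cd /opt/hadoop/sbin
    [yarn@server-305 sbin]$ ./yarn-daemon.sh start resourcemanager
    [yarn@server-305 sbin]$ ./yarn-daemon.sh start nodemanager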

    10. On the second node, start the nodemanager as the yarn user
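    Likewise on the second node (a sketch):

    [yarn@server-308 sbin]$ ./yarn-daemon.sh start nodemanager
    [yarn@server-308 sbin]$ jps    # should now show NodeManager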

    11. Check the resourcemanager web UI at http://192.168.32.31:8088
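    Besides the web UI, the registered nodemanagers can be listed from the command line (a sketch):

    [yarn@server-305 ~]$ yarn node -list    # both nodemanagers should appear in RUNNING state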

    Troubleshooting common problems:

    1. Only one datanode is visible. The datanode logs on the second and third nodes show that these nodes cannot connect to the first node. The cause: the firewall had not been turned off.
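    On CentOS 6 the firewall can be stopped immediately, and disabled across reboots, on every node with:

    [root@server-305 ~]# service iptables stop
    [root@server-305 ~]# chkconfig iptables off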

    2. JAVA_HOME not set

    Error: JAVA_HOME is not set and could not be found.

    This error occurs while libexec/hadoop-config.sh is being executed. It can be fixed by setting the JAVA_HOME environment variable at the top of that file:
    JAVA_HOME=/usr/java/latest
    This resolves the problem.
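    Alternatively, set it in the place Hadoop reserves for exactly this, etc/hadoop/hadoop-env.sh, which hadoop-config.sh sources on startup:

    export JAVA_HOME=/usr/java/latest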

    3. After starting Hadoop, testing an HDFS write:

    hadoop fs -put testfile /user/shaochen/testfile
    fails with the error: /user/shaochen/testfile file or directory does not exist.
    Checking the second and third nodes reveals that the ownership of /var/data/hadoop/hdfs is wrong. Fix it on each node:

    chown hdfs:hadoop /var/data/hadoop/hdfs -R
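    After fixing the ownership on each affected node, create the target directory and retry the write (a sketch):

    [hdfs@server-305 ~]$ hadoop fs -mkdir -p /user/shaochen
    [hdfs@server-305 ~]$ hadoop fs -put testfile /user/shaochen/testfile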
