• Hadoop Environment Setup


    Preliminaries:
    First, prepare three virtual machine nodes.
    Configure the hosts file; edit it the same way on every node:
    vi /etc/hosts
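    The file maps each hostname to its IP address. A minimal sketch using the master and node1 addresses that appear later in this post (node2's address is an assumption):
    30.96.76.220  master
    30.96.76.221  node1
    30.96.76.222  node2    # assumed address for node2; substitute the real one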

    1. Generate a public/private key pair on each node

    ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
                
    The command above generates the key pair in the .ssh directory under the user's home directory:
    id_dsa.pub is the public key and id_dsa is the private key. Next, copy the public key into an authorized_keys file; this step is required. The process is as follows:
    [root@master .ssh]# cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
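    One precaution not in the original steps: sshd ignores keys whose files are too permissive, so if passwordless login fails later it can help to tighten the permissions:
    chmod 700 ~/.ssh
    chmod 600 ~/.ssh/authorized_keys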
     
    Test passwordless loopback SSH login on the single node.
    If ssh localhost fails (for example because the SSH client is not installed), install it with yum:
    yum -y install openssh-clients
     
    If logging in to and out of the local node over SSH works without a password prompt, the single-node loopback setup succeeded; this lays the groundwork for passwordless SSH logins across the child nodes later.
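    A sketch of that loopback test; on success no password prompt appears:
    ssh localhost    # should log straight in without asking for a password
    exit             # log back out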
     
    2. Perform the same steps on the other nodes.
    3. Enable the master node to log in to the two slave nodes over SSH without a password
         For this to work, the authorized_keys file on each slave node must contain the master node's public key; the master can then access both slave nodes securely and without a prompt. The procedure is as follows:

    [root@node1 ~]# scp root@master:~/.ssh/id_dsa.pub ./master_dsa.pub
    The authenticity of host 'master (30.96.76.220)' can't be established.
    RSA key fingerprint is ae:8c:7f:00:df:40:b8:ec:20:4b:53:78:98:46:8a:c5.
    Are you sure you want to continue connecting (yes/no)? yes
    Warning: Permanently added 'master,30.96.76.220' (RSA) to the list of known hosts.
    root@master's password:
    id_dsa.pub 100% 601 0.6KB/s 00:00
    [root@node1 .ssh]# cat master_dsa.pub >> authorized_keys
    The transcript above shows node1 remotely fetching the master node's public key file into its current directory with scp, a step that still requires password authentication. The master's public key is then appended to the authorized_keys file. With that done, barring problems, the master node can connect to node1 over SSH without a password. Verify from the master node as follows:
    [root@master .ssh]# ssh node1
    The authenticity of host 'node1 (30.96.76.221)' can't be established.
    RSA key fingerprint is ae:8c:7f:00:df:40:b8:ec:20:4b:53:78:98:46:8a:c5.
    Are you sure you want to continue connecting (yes/no)? yes
    Warning: Permanently added 'node1,30.96.76.221' (RSA) to the list of known hosts.
    root@node1's password:
    Last login: Fri Sep 9 17:42:15 2016 from localhost
    [root@node1 ~]# ll
    total 24
    -rw-------. 1 root root 1144 Sep 9 18:11 anaconda-ks.cfg
    -rw-r--r--. 1 root root 13231 Sep 9 18:10 install.log
    -rw-r--r--. 1 root root 3482 Sep 9 18:09 install.log.syslog
    [root@node1 ~]# pwd
    /root
    [root@node1 ~]# exit
    logout
    Connection to node1 closed.
    [root@master .ssh]#
    On the surface, passwordless SSH login for these two nodes is now configured, but the same work must also be done on the master node itself. This step looks puzzling, but there is a reason for it: reportedly it is needed on real physical clusters because the JobTracker may be placed on another node; there is no guarantee it runs on the master. (In practice the Hadoop start-up scripts also use SSH to launch daemons on the local machine, so the master needs passwordless SSH to itself in any case.)
    [root@master .ssh]# scp root@master:~/.ssh/id_dsa.pub ./master_dsa.pub
    The authenticity of host 'master (30.96.76.220)' can't be established.
    RSA key fingerprint is ae:8c:7f:00:df:40:b8:ec:20:4b:53:78:98:46:8a:c5.
    Are you sure you want to continue connecting (yes/no)? yes
    Warning: Permanently added 'master,30.96.76.220' (RSA) to the list of known hosts.
    id_dsa.pub 100% 601 0.6KB/s 00:00
    [root@master .ssh]# ssh master
    Last login: Fri Sep 9 17:42:39 2016 from localhost
    [root@master ~]# hostname
    master
    [root@master ~]# exit
    logout
    Connection to master closed.
    Download the JDK and the Hadoop installation package, and configure the environment variables (search online for the details).
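    A minimal sketch of those environment variables, assuming the JDK and Hadoop paths used elsewhere in this post (append to /etc/profile, then reload with source /etc/profile):
    export JAVA_HOME=/usr/jdk/jdk1.7.0_76
    export HADOOP_HOME=/usr/hadoop-1.2.1
    export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin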
     
    4. Configure the Hadoop files:
         4.1. core-site.xml
    <?xml version="1.0"?>
    <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
    <!-- Put site-specific property overrides in this file. -->
    <configuration>
      <property>
        <name>fs.default.name</name>
        <value>hdfs://master:9000</value>
        <final>true</final>
      </property>
      <property>
        <name>hadoop.tmp.dir</name>
        <value>/usr/hadoop/tmp</value>
        <description>A base for other temporary directories</description>
      </property>
    </configuration>
      4.2. hdfs-site.xml is configured as follows:
    <?xml version="1.0"?>
    <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
    <!-- Put site-specific property overrides in this file. -->
    <configuration>
      <property>
        <name>dfs.name.dir</name>
        <value>/usr/hadoop-1.2.1/name</value>
        <final>true</final>
      </property>
      <property>
        <name>dfs.data.dir</name>
        <value>/usr/hadoop-1.2.1/data</value>
        <final>true</final>
      </property>
      <property>
        <name>dfs.replication</name>
        <value>2</value>
        <final>true</final>
      </property>
    </configuration>
      4.3. mapred-site.xml:
    <?xml version="1.0"?>
    <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
    <!-- Put site-specific property overrides in this file. -->
    <configuration>
      <property>
        <name>mapred.job.tracker</name>
        <value>30.96.76.220:9001</value>
      </property>
    </configuration>
    (Note: if accessing http://30.96.76.220:50070/ fails later on, try turning the firewall off, as sketched below; checking the logs is still the clearest way to pin down the actual problem.)
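    Assuming a CentOS 6-era system managed with iptables (a guess consistent with the yum-based setup above), the firewall can be disabled like this:
    service iptables stop       # stop the firewall immediately
    chkconfig iptables off      # keep it from starting again on reboot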
       4.4. hadoop-env.sh:
         export JAVA_HOME=/usr/jdk/jdk1.7.0_76
      4.5. The masters and slaves files
           The masters file names the host that runs the SecondaryNameNode, and the slaves file lists the worker nodes, as sketched below.
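    Based on the node names used throughout this post, the two files would plausibly read:
    # conf/masters: the host that runs the SecondaryNameNode
    master

    # conf/slaves: the hosts that run the DataNode and TaskTracker
    node1
    node2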
    Note: copy the Hadoop directory to node1 and node2.
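    A minimal sketch using scp, with the install path taken from the configuration files above:
    scp -r /usr/hadoop-1.2.1 root@node1:/usr/
    scp -r /usr/hadoop-1.2.1 root@node2:/usr/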
    5. Format the NameNode
    [root@master ~]# hadoop namenode -format
    16/09/10 17:03:49 INFO namenode.NameNode: STARTUP_MSG:
    /************************************************************
    STARTUP_MSG: Starting NameNode
    STARTUP_MSG: host = master/30.96.76.220
    STARTUP_MSG: args = [-format]
    STARTUP_MSG: version = 1.2.1
    STARTUP_MSG: build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.2 -r 1503152; compiled by 'mattf' on Mon Jul 22 15:23:09 PDT 2013
    STARTUP_MSG: java = 1.7.0_76
    ************************************************************/
    16/09/10 17:03:49 INFO util.GSet: Computing capacity for map BlocksMap
    16/09/10 17:03:49 INFO util.GSet: VM type = 64-bit
    16/09/10 17:03:49 INFO util.GSet: 2.0% max memory = 1013645312
    16/09/10 17:03:49 INFO util.GSet: capacity = 2^21 = 2097152 entries
    16/09/10 17:03:49 INFO util.GSet: recommended=2097152, actual=2097152
    16/09/10 17:03:50 INFO namenode.FSNamesystem: fsOwner=root
    16/09/10 17:03:50 INFO namenode.FSNamesystem: supergroup=supergroup
    16/09/10 17:03:50 INFO namenode.FSNamesystem: isPermissionEnabled=true
    16/09/10 17:03:50 INFO namenode.FSNamesystem: dfs.block.invalidate.limit=100
    16/09/10 17:03:50 INFO namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
    16/09/10 17:03:50 INFO namenode.FSEditLog: dfs.namenode.edits.toleration.length = 0
    16/09/10 17:03:50 INFO namenode.NameNode: Caching file names occuring more than 10 times
    16/09/10 17:03:50 INFO common.Storage: Image file /usr/hadoop-1.2.1/name/current/fsimage of size 110 bytes saved in 0 seconds.
    16/09/10 17:03:51 INFO namenode.FSEditLog: closing edit log: position=4, editlog=/usr/hadoop-1.2.1/name/current/edits
    16/09/10 17:03:51 INFO namenode.FSEditLog: close success: truncate to 4, editlog=/usr/hadoop-1.2.1/name/current/edits
    16/09/10 17:03:51 INFO common.Storage: Storage directory /usr/hadoop-1.2.1/name has been successfully formatted.
    16/09/10 17:03:51 INFO namenode.NameNode: SHUTDOWN_MSG:
    /************************************************************
    SHUTDOWN_MSG: Shutting down NameNode at master/30.96.76.220
    ************************************************************/
    Note: as long as "successfully formatted" appears in the output above, the format succeeded.
     6. Start Hadoop
    If the four expected processes are running on the master, the start succeeded; the corresponding processes on node1 and node2 indicate that the slaves came up as well. A sketch follows.
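    A sketch of the Hadoop 1.x start-up and check; the four processes reported by jps on the master would typically be Jps, NameNode, SecondaryNameNode, and JobTracker, while the slaves run DataNode and TaskTracker:
    [root@master ~]# start-all.sh    # launches the HDFS and MapReduce daemons across the cluster
    [root@master ~]# jps             # on the master: NameNode, SecondaryNameNode, JobTracker, Jps
    [root@master ~]# ssh node1 jps   # on a slave: DataNode, TaskTracker (jps must be on node1's PATH)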
    7. Check the cluster status through the web interface
         The NameNode status page at http://30.96.76.220:50070/ lists the live nodes; in Hadoop 1.x the JobTracker page is served on port 50030 of the master.
     Everything works. The Saturday overtime was not in vain.
     
    Finally, the link to the original article:
     
     
• Original post: https://www.cnblogs.com/shaoyu19900421/p/5860063.html