• Hadoop-2.7.3 Deployment


    Hadoop Deployment
    1. Prepare the environment
    Hardware environment:
    (1) Distributed computing needs many machines, so you must provide several for the deployment; how many depends on the deployment plan. The standard deployment plan calls for six machines.
    In fact, a fully distributed Hadoop deployment needs as few as two machines (one master node and one slave node); using more machines (one master, several slaves) makes the fully distributed mode more representative. (Hardware requirements: each machine needs at least 1 GB of RAM and 20 GB of disk space.)
    (2) Environment setup
    1. Download and install VMware Workstation.
    2. Download CentOS.
    3. Create a CentOS virtual machine: open VMware Workstation → File → New Virtual Machine Wizard
    → Typical → Installer disc image file (iso) → enter a username and password (joe is recommended for both) → enter the machine name cMaster → continue until Finish.
    4. Repeat step 3 to install CentOS machines with the hostnames cMaster, cSlave0, and cSlave1.
    Software environment:
    (1) We deploy Hadoop on CentOS, a mature Linux distribution. Note that a freshly installed CentOS machine cannot run Hadoop directly; some basic configuration is required first (set the hostname, add hostname-to-IP mappings, disable the firewall, and install the JDK).
    a. Set the hostname
    [joe@localhost~]$ su - root                    # switch to root to change the hostname
    [root@localhost~]# vim /etc/sysconfig/network  # edit the file that stores the hostname
    Change HOSTNAME=localhost.localdomain to the desired hostname (cMaster/cSlave0/cSlave1):
    HOSTNAME=cMaster
    b. Add hostname-to-IP mappings
    Check the IP address:
    [root@cMaster~]# ifconfig
    My IP address is 192.168.64.139 and this machine's hostname is cMaster, so the three mappings are:
    192.168.64.139 cMaster
    192.168.64.140 cSlave0
    192.168.64.141 cSlave1
    Append these mappings to the file /etc/hosts:
    [root@cMaster~]# vim /etc/hosts
    Once the mappings are in place, you can ping the other two machines from cMaster by hostname, for example:
    [root@cMaster~]# ping cSlave0    # ping machine cSlave0 from cMaster
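    The hosts edit above can be rehearsed safely before touching /etc/hosts as root: build the mappings in a scratch file first and inspect it. A minimal sketch (the file name hosts.new is illustrative):

    ```shell
    # Build the mappings in a scratch file first, then copy its content
    # into /etc/hosts as root once it looks right (hosts.new is illustrative).
    cat > hosts.new <<'EOF'
    192.168.64.139 cMaster
    192.168.64.140 cSlave0
    192.168.64.141 cSlave1
    EOF
    grep -c 'cSlave' hosts.new    # prints 2: both slave entries are present
    ```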
    c. Disable the firewall
    By default, the CentOS firewall (iptables) blocks communication between the machines. The command below disables it permanently; note that it only takes effect after a reboot (to stop the firewall immediately you can also run service iptables stop):
    [root@cMaster~]# chkconfig --level 35 iptables off
    # permanently disable the firewall; effective after reboot
    d. Install the JDK
    The version used here is jdk1.8.0_101.
    Environment variables to configure:
    [root@cMaster~]# vim /etc/profile
    Add the following lines (at the top of the file):
    JAVA_HOME=/home/joe/jdk1.8.0_101
    PATH=$JAVA_HOME/bin:$PATH
    CLASSPATH=$JAVA_HOME/jre/lib/ext:$JAVA_HOME/lib/tools.jar
    export PATH JAVA_HOME CLASSPATH
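    Before logging out or sourcing /etc/profile everywhere, the same exports can be tried in the current shell to confirm that PATH now leads with the JDK's bin directory; a sketch assuming the install path used above:

    ```shell
    # Set the variables in this shell only and verify that PATH picks up
    # the JDK bin directory (assumes the path /home/joe/jdk1.8.0_101).
    JAVA_HOME=/home/joe/jdk1.8.0_101
    PATH=$JAVA_HOME/bin:$PATH
    CLASSPATH=$JAVA_HOME/jre/lib/ext:$JAVA_HOME/lib/tools.jar
    export PATH JAVA_HOME CLASSPATH
    case ":$PATH:" in
      *":$JAVA_HOME/bin:"*) echo "PATH updated" ;;
      *)                    echo "PATH missing JDK" ;;
    esac
    ```

    After editing /etc/profile itself, running source /etc/profile applies the change to the current session without a reboot.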

    Verify the installation:
    [root@cMaster~]# java
    If the JDK is installed correctly, this prints the java usage message.

    Make a deployment plan
    cMaster is the master node; cSlave0 and cSlave1 are slave nodes.
    (5) Unpack Hadoop.
    Log in to each of the three machines as joe and run the following command to unpack the Hadoop archive:

    [joe@cMaster~]$ tar -zxvf /home/joe/hadoop-2.7.3.tar.gz
    [joe@cSlave0~]$ tar -zxvf /home/joe/hadoop-2.7.3.tar.gz
    [joe@cSlave1~]$ tar -zxvf /home/joe/hadoop-2.7.3.tar.gz


    (6) Configure Hadoop.

    First edit hadoop-env.sh to set the JDK path (do this on all three machines):
    vim /home/joe/hadoop-2.7.3/etc/hadoop/hadoop-env.sh
    Find:
    export JAVA_HOME=${JAVA_HOME}
    and change it to:
    export JAVA_HOME=/home/joe/jdk1.8.0_101
    1. Edit the file vim /home/joe/hadoop-2.7.3/etc/hadoop/core-site.xml and place the following properties inside its configuration tag. As with the previous step, do this on all three machines.
    <configuration>
    <property><name>hadoop.tmp.dir</name><value>/home/joe/cloudData</value></property>
    <property><name>fs.defaultFS</name><value>hdfs://cMaster:8020</value></property>
    </configuration>
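    A quick sanity check on the fragment above is to write it to a scratch file and pull out the fs.defaultFS value, which must match the hostname mapped earlier; a sketch (the scratch file name is illustrative):

    ```shell
    # Write the same core-site fragment to a scratch file and extract
    # fs.defaultFS; it must read hdfs://cMaster:8020.
    cat > core-site-check.xml <<'EOF'
    <configuration>
    <property><name>hadoop.tmp.dir</name><value>/home/joe/cloudData</value></property>
    <property><name>fs.defaultFS</name><value>hdfs://cMaster:8020</value></property>
    </configuration>
    EOF
    grep -o 'hdfs://[^<]*' core-site-check.xml    # prints hdfs://cMaster:8020
    ```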
    2. Edit the file vim /home/joe/hadoop-2.7.3/etc/hadoop/yarn-site.xml and place the following properties inside its configuration tag. As with the previous step, do this on all three machines.
    <configuration>
    <property><name>yarn.resourcemanager.hostname</name><value>cMaster</value></property>
    <property><name>yarn.nodemanager.aux-services</name><value>mapreduce_shuffle</value></property>
    </configuration>
    3. Rename the file "/home/joe/hadoop-2.7.3/etc/hadoop/mapred-site.xml.template" to "/home/joe/hadoop-2.7.3/etc/hadoop/mapred-site.xml".
    Then edit the file vim /home/joe/hadoop-2.7.3/etc/hadoop/mapred-site.xml and place the following property inside its configuration tag. As before, do this on all three machines.
    <configuration>
    <property><name>mapreduce.framework.name</name><value>yarn</value></property>
    </configuration>
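    The rename in step 3 can be rehearsed in a scratch directory first; a sketch (the scratch path is illustrative — the real files live under /home/joe/hadoop-2.7.3/etc/hadoop):

    ```shell
    # Rehearse the template rename in a scratch directory before doing it
    # for real under /home/joe/hadoop-2.7.3/etc/hadoop.
    mkdir -p mapred-scratch
    printf '<configuration>\n</configuration>\n' > mapred-scratch/mapred-site.xml.template
    mv mapred-scratch/mapred-site.xml.template mapred-scratch/mapred-site.xml
    ls mapred-scratch    # prints mapred-site.xml
    ```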

    (7) Start Hadoop.
    First, format the NameNode's namespace on the master node cMaster:
    [joe@cMaster ~]$ hadoop-2.7.3/bin/hdfs namenode -format

    Then start the master-side daemons on cMaster:
    [joe@cMaster ~]$ hadoop-2.7.3/sbin/hadoop-daemon.sh start namenode
    [joe@cMaster ~]$ hadoop-2.7.3/sbin/yarn-daemon.sh start resourcemanager

    Start the slave daemons:
    [joe@cSlave0 ~]$ hadoop-2.7.3/sbin/hadoop-daemon.sh start datanode

    [joe@cSlave0 ~]$ hadoop-2.7.3/sbin/yarn-daemon.sh start nodemanager

    [joe@cSlave1 ~]$ hadoop-2.7.3/sbin/hadoop-daemon.sh start datanode
    [joe@cSlave1 ~]$ hadoop-2.7.3/sbin/yarn-daemon.sh start nodemanager

    [joe@cSlave1 ~]$ /home/joe/jdk1.8.0_101/bin/jps
    2916 NodeManager
    2848 DataNode
    3001 Jps
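    jps output like the above can be checked mechanically. The sketch below stubs the jps output so the logic runs anywhere; on a real slave node, set the variable from the jps command shown above instead:

    ```shell
    # Check that the daemons expected on a slave node appear in jps output.
    # jps_out is stubbed here; on a real node use:
    #   jps_out="$(/home/joe/jdk1.8.0_101/bin/jps)"
    jps_out='2916 NodeManager
    2848 DataNode
    3001 Jps'
    for d in DataNode NodeManager; do
      if echo "$jps_out" | grep -q "$d"; then
        echo "$d running"
      else
        echo "$d MISSING"
      fi
    done
    ```

    On the master node the same loop would check for NameNode and ResourceManager instead.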
    Example 5.6. In this run, the HDFS commands below all failed with "Connection refused" errors:
    [joe@cMaster hadoop-2.7.3]$ bin/hdfs dfs -mkdir /in
    mkdir: Call From cMaster/192.168.64.139 to cMaster:8020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused


    [joe@cMaster hadoop-2.7.3]$ bin/hdfs dfs -put /home/joe/hadoop-2.7.3/etc/hadoop/* /in
    put: Call From cMaster/192.168.64.139 to cMaster:8020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused


    [joe@cMaster hadoop-2.7.3]$ bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.3.jar wordcount /in /out/wc-01
    16/10/02 14:32:54 INFO client.RMProxy: Connecting to ResourceManager at cMaster/192.168.64.139:8032
    16/10/02 14:32:54 ERROR security.UserGroupInformation: PriviledgedActionException as:joe (auth:SIMPLE) cause:java.net.ConnectException: Call From cMaster/192.168.64.139 to cMaster:8020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
    java.net.ConnectException: Call From cMaster/192.168.64.139 to cMaster:8020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
    at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:783)
    at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:730)
    at org.apache.hadoop.ipc.Client.call(Client.java:1351)
    at org.apache.hadoop.ipc.Client.call(Client.java:1300)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
    at com.sun.proxy.$Proxy9.getFileInfo(Unknown Source)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:186)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
    at com.sun.proxy.$Proxy9.getFileInfo(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:651)
    at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1679)
    at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1106)
    at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1102)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1102)
    at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1397)
    at org.apache.hadoop.mapreduce.lib.output.FileOutputFormat.checkOutputSpecs(FileOutputFormat.java:145)
    at org.apache.hadoop.mapreduce.JobSubmitter.checkSpecs(JobSubmitter.java:456)
    at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:342)
    at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1268)
    at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1265)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
    at org.apache.hadoop.mapreduce.Job.submit(Job.java:1265)
    at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1286)
    at org.apache.hadoop.examples.WordCount.main(WordCount.java:84)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:72)
    at org.apache.hadoop.util.ProgramDriver.run(ProgramDriver.java:144)
    at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:74)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:212)
    Caused by: java.net.ConnectException: Connection refused
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:735)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:529)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:493)
    at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:547)
    at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:642)
    at org.apache.hadoop.ipc.Client$Connection.access$2600(Client.java:314)
    at org.apache.hadoop.ipc.Client.getConnection(Client.java:1399)
    at org.apache.hadoop.ipc.Client.call(Client.java:1318)
    ... 40 more
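    The "Connection refused ... cMaster:8020" failures above usually mean nothing is listening at the fs.defaultFS address — most often because the NameNode is not running on cMaster (a failed format, or the daemon exited after start). A triage sketch; the jps output is stubbed so the logic runs anywhere, and on cMaster you would substitute the real jps command:

    ```shell
    # Triage for "Connection refused" to cMaster:8020: check whether a
    # NameNode process exists. jps_out is stubbed; on cMaster use:
    #   jps_out="$(/home/joe/jdk1.8.0_101/bin/jps)"
    jps_out='3001 Jps'
    if echo "$jps_out" | grep -q 'NameNode'; then
      echo "NameNode is up: check port 8020 and the hosts mapping instead"
    else
      echo "NameNode is down: re-run the format and start steps, then check its log"
    fi
    ```

    If the NameNode process is up, the next things to check are whether port 8020 is actually listening (e.g. with netstat -lnt) and whether the /etc/hosts mapping for cMaster resolves to the right address.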

  • Original article: https://www.cnblogs.com/myhzb/p/7646212.html