Setting up a single-node Hadoop test environment:

1. Install the JDK, set its environment variables, and configure passwordless SSH login (a sketch of this preparation follows the list below).

2. Download the release tarball hadoop-2.7.3.tar.gz.

3. Add the host mapping to /etc/hosts:

   127.0.0.1 YARN001

4. Extract hadoop-2.7.3.tar.gz into /home/zhangzhenghai/cluster.

5. In etc/hadoop/hadoop-env.sh, set the JAVA_HOME environment variable:

   export JAVA_HOME=/home/zhangzhenghai/soft/jdk1.8.0_121

6. Edit the four configuration files. (fs.default.name is the deprecated alias of fs.defaultFS and still works in 2.7.3.)

   etc/hadoop/mapred-site.xml
   <?xml version="1.0"?>
   <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
   <configuration>
     <property>
       <name>mapreduce.framework.name</name>
       <value>yarn</value>
     </property>
   </configuration>

   etc/hadoop/core-site.xml
   <?xml version="1.0" encoding="UTF-8"?>
   <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
   <configuration>
     <property>
       <name>fs.default.name</name>
       <value>hdfs://YARN001:8020</value>
     </property>
   </configuration>

   etc/hadoop/hdfs-site.xml
   <?xml version="1.0" encoding="UTF-8"?>
   <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
   <configuration>
     <property>
       <name>dfs.replication</name>
       <value>1</value>
     </property>
     <property>
       <name>dfs.namenode.name.dir</name>
       <value>/home/zhangzhenghai/cluster/hadoop-2.7.3/dfs/name</value>
     </property>
     <property>
       <name>dfs.datanode.data.dir</name>
       <value>/home/zhangzhenghai/cluster/hadoop-2.7.3/dfs/data</value>
     </property>
   </configuration>

   etc/hadoop/yarn-site.xml
   <?xml version="1.0"?>
   <configuration>
     <property>
       <name>yarn.nodemanager.aux-services</name>
       <value>mapreduce_shuffle</value>
     </property>
   </configuration>

7. Configure the etc/hadoop/slaves file:

   $ more etc/hadoop/slaves
   YARN001

8. Format the NameNode (only when first setting up the cluster; skip this step on regular restarts; bin/hdfs namenode -format is the preferred form in 2.x):

   bin/hadoop namenode -format

9. There are three ways to start the daemons (one by one, per module with start-dfs.sh / start-yarn.sh, or all at once with start-all.sh); starting them one by one is recommended:

   sbin/hadoop-daemon.sh start namenode
   sbin/hadoop-daemon.sh start datanode
   sbin/yarn-daemon.sh start resourcemanager
   sbin/yarn-daemon.sh start nodemanager

10. Check with jps that every daemon is running:

    28819 DataNode
    29348 NodeManager
    29065 ResourceManager
    28269 NameNode
    29487 Jps

11. Check the web UIs:

    NameNode:        http://localhost:50070
    ResourceManager: http://localhost:8088

12. Validate the cluster by running a test job (a fuller smoke test is sketched after this list):

    bin/yarn jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.3.jar pi 2 10
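Step 1 glosses over the JDK and SSH preparation. A minimal sketch follows; the JAVA_HOME value is the one used in step 5 and HADOOP_HOME is the extraction directory from step 4, but appending the exports to ~/.bashrc is an assumption about this particular machine.

   # Generate an SSH key pair and authorize it locally, so the start scripts
   # can ssh to the node without a password (the first connection may still
   # ask to confirm the host key).
   ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
   cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
   chmod 600 ~/.ssh/authorized_keys
   ssh YARN001 exit    # should return without prompting for a password

   # Export the JDK and Hadoop paths; writing them to ~/.bashrc is an
   # assumption, the paths themselves come from steps 4 and 5.
   cat >> ~/.bashrc <<'EOF'
   export JAVA_HOME=/home/zhangzhenghai/soft/jdk1.8.0_121
   export HADOOP_HOME=/home/zhangzhenghai/cluster/hadoop-2.7.3
   export PATH=$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH
   EOF
   source ~/.bashrc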
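Beyond the pi example in step 12, a slightly fuller smoke test exercises HDFS as well as YARN. This is a sketch only: the /user/zhangzhenghai paths and the choice of core-site.xml as input data are illustrative assumptions, and the commands are run from the Hadoop installation directory.

   # Create an input directory in HDFS and upload a small local file.
   bin/hdfs dfs -mkdir -p /user/zhangzhenghai/input
   bin/hdfs dfs -put etc/hadoop/core-site.xml /user/zhangzhenghai/input

   # Run the bundled wordcount example against the uploaded file
   # (the output directory must not already exist).
   bin/yarn jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.3.jar \
       wordcount /user/zhangzhenghai/input /user/zhangzhenghai/output

   # Inspect the reducer output; the job should also appear as SUCCEEDED
   # in the ResourceManager UI from step 11.
   bin/hdfs dfs -cat /user/zhangzhenghai/output/part-r-00000

If the upload or the job fails, check the daemon logs under the logs/ directory of the installation.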