Hadoop 2.4.1 Pseudo-Distributed Mode Deployment
(Continuing from the previous post on compiling and installing hadoop-2.4.1-src: http://www.cnblogs.com/wrencai/p/3897438.html)
Thanks to: http://blog.sina.com.cn/s/blog_5252f6ca0101kb3s.html
Thanks to: http://blog.csdn.net/coolwzjcool/article/details/32072157
Special thanks to: http://www.ituring.com.cn/article/63927#
Fully distributed setup: http://www.cnblogs.com/scotoma/archive/2012/09/18/2689902.html
1. Configure the Hadoop environment variables
Append the Hadoop installation directory to the PATH at the end of /etc/profile:
export HADOOP_PREFIX=/opt/hadoop-2.4.1-src/hadoop-dist/target/hadoop-2.4.1
export PATH=$PATH:$HADOOP_PREFIX/bin
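After editing /etc/profile, reload it and confirm the variables are visible; a quick sanity check (the hadoop version call assumes the build from the previous post is in place):
source /etc/profile      # reload the profile in the current shell
echo $HADOOP_PREFIX      # should print the installation directory above
hadoop version           # should report Hadoop 2.4.1 if PATH was extended correctly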
2. Change the hostname to master:
a. Edit /etc/sysconfig/network to change the hostname, then run hostname master so the change takes effect immediately:
vim /etc/sysconfig/network
hostname master
b. Edit /etc/hosts and add:
127.0.0.1 master
Note: changing the hostname here is important; otherwise the DataNode process may fail to start later on. In the configuration-file changes below, the modified hostname master is used wherever the local IP address would otherwise be needed.
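To confirm both changes took effect, check the hostname and make sure master resolves to the local address; a minimal check:
hostname                 # should print: master
ping -c 1 master         # should reach 127.0.0.1 as mapped in /etc/hosts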
3. Edit the Hadoop configuration files
Go to the Hadoop installation directory, which here is /opt/hadoop-2.4.1-src/hadoop-dist/target/hadoop-2.4.1,
and configure the files under etc/hadoop (the relevant files are hadoop-env.sh, core-site.xml, hdfs-site.xml, mapred-site.xml and yarn-site.xml; a note on hadoop-env.sh follows after item e).
a. Configure core-site.xml
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://master:8010</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/hadoop/hadoop-2.4.1/tmp/hadoop-${user.name}</value>
  </property>
</configuration>
Note: the hadoop in the path above is the account name I created for setting up Hadoop 2.4.1; its home directory was created automatically under /home, and you can change it as needed.
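Hadoop normally creates hadoop.tmp.dir on first use, but creating the parent directory ahead of time avoids permission surprises; a sketch assuming the path above:
mkdir -p /home/hadoop/hadoop-2.4.1/tmp   # parent of the hadoop-${user.name} directory in hadoop.tmp.dir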
b. Configure hdfs-site.xml
<configuration>
  <property>
    <name>dfs.replication</name>
    <!-- HDFS keeps 3 replicas by default; use 1 for pseudo-distributed mode -->
    <value>1</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>/home/hadoop/hadoop-2.4.0/dfs/name</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>/home/hadoop/hadoop-2.4.0/dfs/data</value>
  </property>
</configuration>
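The NameNode format step and the DataNode normally create these directories themselves, but they can also be created up front; a sketch using the paths above:
mkdir -p /home/hadoop/hadoop-2.4.0/dfs/name   # dfs.namenode.name.dir
mkdir -p /home/hadoop/hadoop-2.4.0/dfs/data   # dfs.datanode.data.dir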
c. Configure mapred-site.xml
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>master:54311</value>
  </property>
  <property>
    <name>mapred.map.tasks</name>
    <value>10</value>
  </property>
  <property>
    <name>mapred.reduce.tasks</name>
    <value>2</value>
  </property>
</configuration>
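If etc/hadoop contains only mapred-site.xml.template (as the 2.4.1 distribution usually does) and no mapred-site.xml, copy the template first and then apply the settings above:
cd /opt/hadoop-2.4.1-src/hadoop-dist/target/hadoop-2.4.1
cp etc/hadoop/mapred-site.xml.template etc/hadoop/mapred-site.xml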
d. Configure yarn-site.xml
<configuration>
  <!-- Site specific YARN configuration properties -->
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
</configuration>
e. Edit the slaves file so that, after the change, it contains:
localhost
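hadoop-env.sh is listed among the relevant files above but no change is shown for it; the one setting that usually matters is JAVA_HOME. A minimal sketch, where the JDK path is only an assumption and should point at your own installation:
# in etc/hadoop/hadoop-env.sh
export JAVA_HOME=/usr/java/jdk1.7.0_55   # hypothetical JDK location; replace with your own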
4. Passwordless SSH login setup (reference: http://lhflinux.blog.51cto.com/1961662/526122)
SSH connections normally require password authentication. By adding key-based authentication (a public/private key pair), you can move between systems without typing a password or going through the interactive SSH prompt.
a. Edit the file vi /etc/ssh/sshd_config and set:
RSAAuthentication yes                    # enable RSA authentication
PubkeyAuthentication yes                 # enable public-key authentication
AuthorizedKeysFile .ssh/authorized_keys  # where public keys are stored
PasswordAuthentication yes               # still allow password logins
GSSAPIAuthentication no                  # avoids slow logins and related errors
ClientAliveInterval 300                  # disconnect idle sessions after 300 seconds
ClientAliveCountMax 10                   # keep-alive probes allowed without a response before disconnecting
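Restart sshd so the changes take effect; assuming a CentOS/RHEL-style system (which the /etc/sysconfig/network file above suggests):
service sshd restart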
b. In root's home directory, run:
ssh-keygen -t rsa -P ''
Press Enter to accept the default key file location; when generation finishes, run the command below. (This machine also acts as a node of the pseudo-cluster, so its own public key must be appended to authorized_keys; skipping the next command can cause the error "agent admitted failure to sign using the key", see http://blog.chinaunix.net/uid-28228356-id-3510267.html.)
cat /root/.ssh/id_rsa.pub >> /root/.ssh/authorized_keys
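If ssh still asks for a password afterwards, the permissions on .ssh are a common culprit; a minimal check for the root account used here:
chmod 700 /root/.ssh
chmod 600 /root/.ssh/authorized_keys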
c. Run the following command; if you get a shell without being prompted for a password, the setup succeeded:
[root@localhost]#ssh localhost
Last login:Fri Aug 8 13:44:42 2014 from localhost
5. Run and test Hadoop
a. In the hadoop-2.4.1 directory, run the command below to format the NameNode. If the output ends with a "shutting down..." message and there are no WARN or FATAL errors before it, the format should have succeeded. You may see a message like STARTUP_MSG: host = java.net.UnknownHostException: localhost.localdomain: localhost.localdomain; refer to http://lxy2330.iteye.com/blog/1112806 to fix it, or temporarily change the hostname to localhost with the hostname localhost command.
./bin/hadoop namenode -format
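The hadoop namenode command is reported as deprecated in Hadoop 2.x; the equivalent newer form is:
./bin/hdfs namenode -format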
b. Run sbin/start-all.sh to start Hadoop. The first attempt may fail; if so, run sbin/stop-all.sh once and then run sbin/start-all.sh again. Finally, check the running processes with jps:
[root@master hadoop-2.4.1]# ./sbin/start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
Starting namenodes on [localhost]
localhost: starting namenode, logging to
/opt/hadoop-2.4.1-src/hadoop-dist/target/hadoop-2.4.1/logs/hadoop-root-namenode-localhost.out
localhost: starting datanode, logging to
/opt/hadoop-2.4.1-src/hadoop-dist/target/hadoop-2.4.1/logs/hadoop-root-datanode-localhost.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to
/opt/hadoop-2.4.1-src/hadoop-dist/target/hadoop-2.4.1/logs/hadoop-root-secondarynamenode-localhost.out
starting yarn daemons
starting resourcemanager, logging to
/opt/hadoop-2.4.1-src/hadoop-dist/target/hadoop-2.4.1/logs/yarn-root-resourcemanager-localhost.out
localhost: starting nodemanager, logging to
/opt/hadoop-2.4.1-src/hadoop-dist/target/hadoop-2.4.1/logs/yarn-root-nodemanager-localhost.out
[root@localhost hadoop-2.4.1]# ssh localhost
Last login: Fri Aug 8 13:44:41 2014 from localhost
[root@master ~]# jps
6173 ResourceManager
6005 SecondaryNameNode
5712 NameNode
6270 NodeManager
5821 DataNode
6958 Jps
[root@master~]#
c. Open http://localhost:50070 in a browser to view the HDFS status page.
d. Open http://localhost:8088 to view the Hadoop (YARN) application management page.
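As a final smoke test, assuming all of the daemons above are running, create a directory in HDFS, upload a file and list it:
cd /opt/hadoop-2.4.1-src/hadoop-dist/target/hadoop-2.4.1
./bin/hdfs dfs -mkdir -p /user/root               # HDFS home directory for the root user
./bin/hdfs dfs -put etc/hadoop/core-site.xml /user/root/
./bin/hdfs dfs -ls /user/root                     # the uploaded file should appear here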