Test environment and system information:
$uname -a
Linux 10.**.**.15 2.6.32-220.17.1.tb619.el6.x86_64 #1 SMP Fri Jun 8 13:48:13 CST 2012 x86_64 x86_64 x86_64 GNU/Linux
Hadoop and HBase versions:
hadoop-0.20.2-cdh3u4
hbase-0.90-adh1u7.1
10.**.**.12: NFS server, exporting the shared directory
10.**.**.15: HDFS NameNode host, which mounts the NFS share from 10.**.**.12
The file used for the test operations is ganglia-5.rpm, roughly 3 MB in size.
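The source does not show how the share is mounted on the NameNode host. A hedged sketch follows (the export path on 10.**.**.12 is an assumption), together with a quick check of the effective mount options; NFS defaults to a "hard" mount, which retries a dead server forever, consistent with the indefinite hangs observed in the tests below.

```shell
# Hypothetical mount for this layout; the server-side export path is an
# assumption, since the source only shows the client-side directory.
#   sudo mount -t nfs 10.**.**.12:/u01/hbase/nndata/share /u01/hbase/nndata/nfs

# Inspect what is actually mounted on this host, and with which options
# (hard vs soft is what matters for the hang behaviour tested below):
grep nfs /proc/mounts || echo "no nfs mounts on this host"
```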
The NFS-related configuration in hadoop/conf/hdfs-site.xml is:
<property>
  <name>dfs.name.dir</name>
  <value>/u01/hbase/nndata/local,/u01/hbase/nndata/nfs</value>
</property>
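With two name directories configured, the NameNode writes its metadata to both. A minimal sketch for confirming the local and NFS copies match, using the paths from the config above (the `current` subdirectory holds fsimage and edits in this Hadoop version):

```shell
# Sketch: check that the local and NFS copies of the NameNode metadata
# are identical. Paths come from the dfs.name.dir value above.
name_dirs_in_sync() {
    # Exit 0 if the two directory trees have identical contents.
    diff -rq "$1" "$2" >/dev/null 2>&1
}

if name_dirs_in_sync /u01/hbase/nndata/local/current /u01/hbase/nndata/nfs/current; then
    echo "name dirs in sync"
else
    echo "name dirs differ"
fi
```

Note that while NFS is hung, even reading the NFS copy from the NameNode host may block, so a check like this is best run after the outage.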
Scenario 1: the NFS service is stopped on the server
Stop the NFS service on the server side, then verify:
$sudo service nfs status
rpc.svcgssd is stopped
rpc.mountd is stopped
nfsd is stopped
rpc.rquotad is stopped
At this point, an HDFS put hangs indefinitely and never exits.
After the NFS service was restarted, the in-flight put remained hung. Re-running the put also hung; once the timeout elapsed the operation continued, but reported that the file already exists. Running:
$sh hadoop/bin/hadoop fs -ls hdfs://10.**.**.15:9516/
showed a zero-length file with the same name in the directory.
While watching $tail -f hadoop-**-namenode-10.**.**.15.log, no log output appeared until the put resumed; the NameNode then emitted, in one burst, all logs for the interval, including the exception for the failed put:
2012-10-23 11:22:38,956 WARN org.apache.hadoop.ipc.Server: IPC Server Responder, call create(/ganglia-4.rpm, rwxr-xr-x, DFSClient_-621134164, false, 3, 67108864) from 10.**.**.15:47771: output error
2012-10-23 11:22:38,957 INFO org.apache.hadoop.ipc.Server: IPC Server handler 7 on 9516 caught: java.nio.channels.ClosedChannelException
at sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:133)
at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:324)
at org.apache.hadoop.ipc.Server.channelWrite(Server.java:1763)
at org.apache.hadoop.ipc.Server.access$2000(Server.java:95)
at org.apache.hadoop.ipc.Server$Responder.processResponse(Server.java:773)
at org.apache.hadoop.ipc.Server$Responder.doRespond(Server.java:837)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1462)
……
2012-10-23 11:22:38,963 ERROR org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:** (auth:SIMPLE) cause:org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException: failed to create file /ganglia-5.rpm for DFSClient_382171631 on client 10.**.**.15, because this file is already being created by DFSClient_-1964937422 on 10.**.**.15
......
2012-10-23 14:40:11,672 WARN org.apache.hadoop.ipc.Server: IPC Server Responder, call getDatanodeReport(LIVE) from 10.**.**.15:54929: output error
2012-10-23 14:40:11,672 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Roll Edit Log from 10.**.**.12
2012-10-23 14:40:11,672 INFO org.apache.hadoop.ipc.Server: IPC Server handler 0 on 9516 caught: java.nio.channels.ClosedChannelException
at sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:133)
at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:324)
at org.apache.hadoop.ipc.Server.channelWrite(Server.java:1763)
at org.apache.hadoop.ipc.Server.access$2000(Server.java:95)
at org.apache.hadoop.ipc.Server$Responder.processResponse(Server.java:773)
at org.apache.hadoop.ipc.Server$Responder.doRespond(Server.java:837)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1462)
……
2012-10-23 14:40:11,672 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of transactions: 8 Total time for transactions(ms): 1 Number of transactions batched in Syncs: 0 Number of syncs: 4 SyncTimes(ms): 4 1007521
2012-10-23 14:40:12,152 INFO org.apache.hadoop.hdfs.server.namenode.GetImageServlet: Downloaded new fsimage with checksum: 444a843721bd52a951673a1ba7aecb37
2012-10-23 14:40:12,154 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Roll FSImage from 10.**.**.12
2012-10-23 14:40:12,154 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of transactions: 0 Total time for transactions(ms): 0 Number of transactions batched in Syncs: 0 Number of syncs: 1 SyncTimes(ms): 4 16
At this point, the modification time of hbase_home/nndata/share/current/edits on the NFS server is only updated again after the NFS service recovers.
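The timestamp can be checked directly on the NFS server; a small sketch (the path follows the text above and only exists on that host):

```shell
# On the NFS server: print the last-modified time of the shared edits log.
stat -c '%y %n' hbase_home/nndata/share/current/edits 2>/dev/null \
    || echo "edits file not present on this host"
```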
After restoring NFS and re-running the complete put:
$sh hadoop/bin/hadoop fs -put ~/dba-ganglia-gmetad-3.1.7-2.x86_64.rpm hdfs://10.**.**.15:9516/ganglia-5.rpm
the NameNode log shows:
2012-10-23 11:31:08,794 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of transactions: 25 Total time for transactions(ms): 3 Number of transactions batched in Syncs: 2 Number of syncs: 15 SyncTimes(ms): 10 676853
2012-10-23 11:31:08,804 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* NameSystem.allocateBlock: /ganglia-5.rpm. blk_2675602071792190621_3890
2012-10-23 11:31:08,855 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* NameSystem.addStoredBlock: blockMap updated: 10.**.**.13:50010 is added to blk_2675602071792190621_3890 size 38020
……
2012-10-23 11:31:08,860 INFO org.apache.hadoop.hdfs.StateChange: Removing lease on file /ganglia-5.rpm from client DFSClient_-19034129
2012-10-23 11:31:08,861 INFO org.apache.hadoop.hdfs.StateChange: DIR* NameSystem.completeFile: file /ganglia-5.rpm is closed by DFSClient_-19034129
When the NFS service is shut down with $sudo service nfs stop, the NameNode prints the line below. It is not triggered by any notification of the NFS shutdown; it is produced by the periodic sync:
2012-10-23 11:33:54,815 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of transactions: 2 Total time for transactions(ms): 0 Number of transactions batched in Syncs: 0 Number of syncs: 0 SyncTimes(ms): 0 0
Query the HDFS safemode status:
$sh hadoop/bin/hadoop dfsadmin -safemode get
Safe mode is OFF
So HDFS did not switch to safe mode automatically.
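For scripted monitoring, the dfsadmin output can be parsed rather than read by eye. A sketch follows; only the dfsadmin call itself needs a live NameNode, the parsing is plain shell:

```shell
# Map the output of "hadoop dfsadmin -safemode get" to a short token.
safemode_state() {
    case "$1" in
        *ON*)  echo on ;;
        *OFF*) echo off ;;
        *)     echo unknown ;;
    esac
}

# In the live cluster this would be:
#   safemode_state "$(sh hadoop/bin/hadoop dfsadmin -safemode get)"
safemode_state "Safe mode is OFF"   # prints "off"
```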
Scenario 2: the NFS server process is killed
Kill the nfsd process on the server:
$sudo killall -9 nfsd
Then check the NFS status:
$sudo service nfs status
rpc.svcgssd is stopped
rpc.mountd (pid 10677) is running...
nfsd is stopped
rpc.rquotad (pid 10645) is running...
Running
$sh hadoop/bin/hadoop dfsadmin -report
and
$sh hadoop/bin/hadoop fs -put ~/dba-ganglia.rpm hdfs://10.**.**.15:9516/ganglia-13.rpm
both operations hang, exactly as in the first scenario: the report and put sessions block indefinitely and never time out on their own.
If the NFS service is then restarted, they recover automatically after a timeout interval.
Conclusions from the tests
1. After NFS goes down, HDFS reads are unaffected as long as every file the client needs is served by the local DataNode (e.g. $sh hadoop/bin/hadoop fs -cat hdfs://10.**.**.15:9516/11.txt still returns the file content).
2. After NFS goes down, any HDFS write issued by a client hangs indefinitely and does not time out.
3. After NFS goes down (whether via service nfs stop or killall nfsd), HDFS writes stay hung until the NFS service recovers; they then resume and complete normally, and the detailed NameNode logs for the outage window are flushed to hadoop_namenode.log in one batch once NFS is back. Configuring this timeout is discussed in a follow-up test.
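The hang duration here is governed by the NFS client mount options, not by Hadoop. As a hedged sketch of what the follow-up timeout tuning could look at, an /etc/fstab entry with bounded retries might be (export path and values are assumptions):

```
# /etc/fstab (illustrative). "soft" with timeo (tenths of a second) and
# retrans bounds the wait, so a dead server returns an I/O error instead
# of retrying forever as the default "hard" mount does. Soft mounts can
# drop writes on timeout, which is risky for NameNode metadata.
10.**.**.12:/u01/hbase/nndata/share  /u01/hbase/nndata/nfs  nfs  soft,timeo=100,retrans=3  0 0
```

Whether a bounded I/O error or an indefinite wait is preferable for edit-log writes is exactly the trade-off such a follow-up test would need to weigh.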
Reposted from http://hi.baidu.com/richarwu/item/0c900469d48e9f2069105b9f