• Hadoop 2.2.0 Learning Notes (2013-12-10)


    Running the pi example on a pseudo-distributed single-node installation fails:

    [root@server-518 ~]# ./bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.2.0.jar pi 5 10

    Error output:

    Number of Maps  = 5
    Samples per Map = 10
    13/12/10 11:04:26 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
    Wrote input for Map #0
    Wrote input for Map #1
    Wrote input for Map #2
    Wrote input for Map #3
    Wrote input for Map #4
    Starting Job
    13/12/10 11:04:27 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
    13/12/10 11:04:27 INFO mapreduce.JobSubmitter: Cleaning up the staging area /tmp/hadoop-yarn/staging/root/.staging/job_1386598961500_0004
    13/12/10 11:04:27 ERROR security.UserGroupInformation: PriviledgedActionException as:root (auth:SIMPLE) cause:org.apache.hadoop.mapreduce.lib.input.InvalidInputException: Input path does not exist: file:/root/hadoop-2.2.0/QuasiMonteCarlo_1386644665974_1643821138/in
    org.apache.hadoop.mapreduce.lib.input.InvalidInputException: Input path does not exist: file:/root/hadoop-2.2.0/QuasiMonteCarlo_1386644665974_1643821138/in
            at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.listStatus(FileInputFormat.java:285)
            at org.apache.hadoop.mapreduce.lib.input.SequenceFileInputFormat.listStatus(SequenceFileInputFormat.java:59)
            at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.getSplits(FileInputFormat.java:340)
            at org.apache.hadoop.mapreduce.JobSubmitter.writeNewSplits(JobSubmitter.java:491)
            at org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:508)
            at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:392)
            at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1268)
            at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1265)
            at java.security.AccessController.doPrivileged(Native Method)
            at javax.security.auth.Subject.doAs(Subject.java:415)
            at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
            at org.apache.hadoop.mapreduce.Job.submit(Job.java:1265)
            at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1286)
            at org.apache.hadoop.examples.QuasiMonteCarlo.estimatePi(QuasiMonteCarlo.java:306)
            at org.apache.hadoop.examples.QuasiMonteCarlo.run(QuasiMonteCarlo.java:354)
            at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
            at org.apache.hadoop.examples.QuasiMonteCarlo.main(QuasiMonteCarlo.java:363)
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
            at java.lang.reflect.Method.invoke(Method.java:606)
            at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:72)
            at org.apache.hadoop.util.ProgramDriver.run(ProgramDriver.java:144)
            at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:74)
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
            at java.lang.reflect.Method.invoke(Method.java:606)
            at org.apache.hadoop.util.RunJar.main(RunJar.java:212)
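
    Two things stand out in the trace: the input path is resolved against the local filesystem (the file: scheme), and it sits under the Hadoop install directory (/root/hadoop-2.2.0) rather than under an HDFS home such as /user/root. A quick way to inspect both sides is to look for the per-run QuasiMonteCarlo_* directory locally and in HDFS (the directory name is generated for each run and the example may clean it up after a failure, so the listings may well be empty):

    [root@server-518 hadoop-2.2.0]# ls -d QuasiMonteCarlo_*/in 2>/dev/null
    [root@server-518 hadoop-2.2.0]# ./bin/hdfs dfs -ls /user/root

    To see which filesystem each client call actually goes to, re-run the job with client-side debug logging.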

    Enable debug logging on the console:

    [root@server-518 hadoop-2.2.0]#  export HADOOP_ROOT_LOGGER=DEBUG,console
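
    The export keeps DEBUG output on for every subsequent hadoop command in this shell. If it is only wanted for a single run, the variable can instead be set inline for one invocation (standard shell behavior; shown here with the smaller 3x3 pi job used below):

    [root@server-518 hadoop-2.2.0]# HADOOP_ROOT_LOGGER=DEBUG,console ./bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.2.0.jar pi 3 3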

    Detailed error output with debug logging enabled:

    [root@server-518 hadoop-2.2.0]# ./bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.2.0.jar pi 3 3
    Number of Maps  = 3
    Samples per Map = 3
    13/12/10 11:36:54 DEBUG lib.MutableMetricsFactory: field org.apache.hadoop.metrics2.lib.MutableRate org.apache.hadoop.security.UserGroupInformation$UgiMetrics.loginSuccess with annotation @org.apache.hadoop.metrics2.annotation.Metric(valueName=Time, value=[Rate of successful kerberos logins and latency (milliseconds)], about=, type=DEFAULT, always=false, sampleName=Ops)
    13/12/10 11:36:54 DEBUG lib.MutableMetricsFactory: field org.apache.hadoop.metrics2.lib.MutableRate org.apache.hadoop.security.UserGroupInformation$UgiMetrics.loginFailure with annotation @org.apache.hadoop.metrics2.annotation.Metric(valueName=Time, value=[Rate of failed kerberos logins and latency (milliseconds)], about=, type=DEFAULT, always=false, sampleName=Ops)
    13/12/10 11:36:54 DEBUG impl.MetricsSystemImpl: UgiMetrics, User and group related metrics
    13/12/10 11:36:54 DEBUG security.Groups:  Creating new Groups object
    13/12/10 11:36:54 DEBUG util.NativeCodeLoader: Trying to load the custom-built native-hadoop library...
    13/12/10 11:36:54 DEBUG util.NativeCodeLoader: Failed to load native-hadoop with error: java.lang.UnsatisfiedLinkError: no hadoop in java.library.path
    13/12/10 11:36:54 DEBUG util.NativeCodeLoader: java.library.path=/root/hadoop-2.2.0/lib
    13/12/10 11:36:54 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
    13/12/10 11:36:54 DEBUG security.JniBasedUnixGroupsMappingWithFallback: Falling back to shell based
    13/12/10 11:36:54 DEBUG security.JniBasedUnixGroupsMappingWithFallback: Group mapping impl=org.apache.hadoop.security.ShellBasedUnixGroupsMapping
    13/12/10 11:36:54 DEBUG security.Groups: Group mapping impl=org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback; cacheTimeout=300000
    13/12/10 11:36:54 DEBUG security.UserGroupInformation: hadoop login
    13/12/10 11:36:54 DEBUG security.UserGroupInformation: hadoop login commit
    13/12/10 11:36:54 DEBUG security.UserGroupInformation: using local user:UnixPrincipal: root
    13/12/10 11:36:54 DEBUG security.UserGroupInformation: UGI loginUser:root (auth:SIMPLE)
    13/12/10 11:36:54 DEBUG util.Shell: setsid exited with exit code 0
    13/12/10 11:36:54 DEBUG hdfs.BlockReaderLocal: dfs.client.use.legacy.blockreader.local = false
    13/12/10 11:36:54 DEBUG hdfs.BlockReaderLocal: dfs.client.read.shortcircuit = false
    13/12/10 11:36:54 DEBUG hdfs.BlockReaderLocal: dfs.client.domain.socket.data.traffic = false
    13/12/10 11:36:54 DEBUG hdfs.BlockReaderLocal: dfs.domain.socket.path =
    13/12/10 11:36:54 DEBUG impl.MetricsSystemImpl: StartupProgress, NameNode startup progress
    13/12/10 11:36:54 DEBUG retry.RetryUtils: multipleLinearRandomRetry = null
    13/12/10 11:36:54 DEBUG ipc.Server: rpcKind=RPC_PROTOCOL_BUFFER, rpcRequestWrapperClass=class org.apache.hadoop.ipc.ProtobufRpcEngine$RpcRequestWrapper, rpcInvoker=org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker@46ea3050
    13/12/10 11:36:54 DEBUG hdfs.BlockReaderLocal: Both short-circuit local reads and UNIX domain socket are disabled.
    13/12/10 11:36:54 DEBUG ipc.Client: The ping interval is 60000 ms.
    13/12/10 11:36:54 DEBUG ipc.Client: Connecting to /10.10.96.33:8020
    13/12/10 11:36:54 DEBUG ipc.Client: IPC Client (666537607) connection to /10.10.96.33:8020 from root: starting, having connections 1
    13/12/10 11:36:54 DEBUG ipc.Client: IPC Client (666537607) connection to /10.10.96.33:8020 from root sending #0
    13/12/10 11:36:54 DEBUG ipc.Client: IPC Client (666537607) connection to /10.10.96.33:8020 from root got value #0
    13/12/10 11:36:54 DEBUG ipc.ProtobufRpcEngine: Call: getFileInfo took 33ms
    13/12/10 11:36:54 DEBUG hdfs.DFSClient: /user/root/QuasiMonteCarlo_1386646614155_1445162438/in: masked=rwxr-xr-x
    13/12/10 11:36:54 DEBUG ipc.Client: IPC Client (666537607) connection to /10.10.96.33:8020 from root sending #1
    13/12/10 11:36:54 DEBUG ipc.Client: IPC Client (666537607) connection to /10.10.96.33:8020 from root got value #1
    13/12/10 11:36:54 DEBUG ipc.ProtobufRpcEngine: Call: mkdirs took 25ms
    13/12/10 11:36:54 DEBUG hdfs.DFSClient: /user/root/QuasiMonteCarlo_1386646614155_1445162438/in/part0: masked=rw-r--r--
    13/12/10 11:36:54 DEBUG ipc.Client: IPC Client (666537607) connection to /10.10.96.33:8020 from root sending #2
    13/12/10 11:36:54 DEBUG ipc.Client: IPC Client (666537607) connection to /10.10.96.33:8020 from root got value #2
    13/12/10 11:36:54 DEBUG ipc.ProtobufRpcEngine: Call: create took 5ms
    13/12/10 11:36:54 DEBUG hdfs.DFSClient: computePacketChunkSize: src=/user/root/QuasiMonteCarlo_1386646614155_1445162438/in/part0, chunkSize=516, chunksPerPacket=127, packetSize=65532
    13/12/10 11:36:54 DEBUG hdfs.LeaseRenewer: Lease renewer daemon for [DFSClient_NONMAPREDUCE_-379311577_1] with renew id 1 started
    13/12/10 11:36:54 DEBUG hdfs.DFSClient: DFSClient writeChunk allocating new packet seqno=0, src=/user/root/QuasiMonteCarlo_1386646614155_1445162438/in/part0, packetSize=65532, chunksPerPacket=127, bytesCurBlock=0
    13/12/10 11:36:54 DEBUG hdfs.DFSClient: Queued packet 0
    13/12/10 11:36:54 DEBUG hdfs.DFSClient: Queued packet 1
    13/12/10 11:36:54 DEBUG hdfs.DFSClient: Waiting for ack for: 1
    13/12/10 11:36:54 DEBUG hdfs.DFSClient: Allocating new block
    13/12/10 11:36:54 DEBUG ipc.Client: IPC Client (666537607) connection to /10.10.96.33:8020 from root sending #3
    13/12/10 11:36:54 DEBUG ipc.Client: IPC Client (666537607) connection to /10.10.96.33:8020 from root got value #3
    13/12/10 11:36:54 DEBUG ipc.ProtobufRpcEngine: Call: addBlock took 2ms
    13/12/10 11:36:54 DEBUG hdfs.DFSClient: pipeline = 10.10.96.33:50010
    13/12/10 11:36:54 DEBUG hdfs.DFSClient: Connecting to datanode 10.10.96.33:50010
    13/12/10 11:36:54 DEBUG hdfs.DFSClient: Send buf size 131071
    13/12/10 11:36:54 DEBUG ipc.Client: IPC Client (666537607) connection to /10.10.96.33:8020 from root sending #4
    13/12/10 11:36:54 DEBUG ipc.Client: IPC Client (666537607) connection to /10.10.96.33:8020 from root got value #4
    13/12/10 11:36:54 DEBUG ipc.ProtobufRpcEngine: Call: getServerDefaults took 1ms
    13/12/10 11:36:55 DEBUG hdfs.DFSClient: DataStreamer block BP-1542097938-10.10.96.33-1386589677395:blk_1073741859_1035 sending packet packet seqno:0 offsetInBlock:0 lastPacketInBlock:false lastByteOffsetInBlock: 118
    13/12/10 11:36:55 DEBUG hdfs.DFSClient: DFSClient seqno: 0 status: SUCCESS downstreamAckTimeNanos: 0
    13/12/10 11:36:55 DEBUG hdfs.DFSClient: DataStreamer block BP-1542097938-10.10.96.33-1386589677395:blk_1073741859_1035 sending packet packet seqno:1 offsetInBlock:118 lastPacketInBlock:true lastByteOffsetInBlock: 118
    13/12/10 11:36:55 DEBUG hdfs.DFSClient: DFSClient seqno: 1 status: SUCCESS downstreamAckTimeNanos: 0
    13/12/10 11:36:55 DEBUG hdfs.DFSClient: Closing old block BP-1542097938-10.10.96.33-1386589677395:blk_1073741859_1035
    13/12/10 11:36:55 DEBUG ipc.Client: IPC Client (666537607) connection to /10.10.96.33:8020 from root sending #5
    13/12/10 11:36:55 DEBUG ipc.Client: IPC Client (666537607) connection to /10.10.96.33:8020 from root got value #5
    13/12/10 11:36:55 DEBUG ipc.ProtobufRpcEngine: Call: complete took 13ms
    Wrote input for Map #0
    13/12/10 11:36:55 DEBUG hdfs.DFSClient: /user/root/QuasiMonteCarlo_1386646614155_1445162438/in/part1: masked=rw-r--r--
    13/12/10 11:36:55 DEBUG ipc.Client: IPC Client (666537607) connection to /10.10.96.33:8020 from root sending #6
    13/12/10 11:36:55 DEBUG ipc.Client: IPC Client (666537607) connection to /10.10.96.33:8020 from root got value #6
    13/12/10 11:36:55 DEBUG ipc.ProtobufRpcEngine: Call: create took 12ms
    13/12/10 11:36:55 DEBUG hdfs.DFSClient: computePacketChunkSize: src=/user/root/QuasiMonteCarlo_1386646614155_1445162438/in/part1, chunkSize=516, chunksPerPacket=127, packetSize=65532
    13/12/10 11:36:55 DEBUG hdfs.DFSClient: DFSClient writeChunk allocating new packet seqno=0, src=/user/root/QuasiMonteCarlo_1386646614155_1445162438/in/part1, packetSize=65532, chunksPerPacket=127, bytesCurBlock=0
    13/12/10 11:36:55 DEBUG hdfs.DFSClient: Queued packet 0
    13/12/10 11:36:55 DEBUG hdfs.DFSClient: Queued packet 1
    13/12/10 11:36:55 DEBUG hdfs.DFSClient: Waiting for ack for: 1
    13/12/10 11:36:55 DEBUG hdfs.DFSClient: Allocating new block
    13/12/10 11:36:55 DEBUG ipc.Client: IPC Client (666537607) connection to /10.10.96.33:8020 from root sending #7
    13/12/10 11:36:55 DEBUG ipc.Client: IPC Client (666537607) connection to /10.10.96.33:8020 from root got value #7
    13/12/10 11:36:55 DEBUG ipc.ProtobufRpcEngine: Call: addBlock took 3ms
    13/12/10 11:36:55 DEBUG hdfs.DFSClient: pipeline = 10.10.96.33:50010
    13/12/10 11:36:55 DEBUG hdfs.DFSClient: Connecting to datanode 10.10.96.33:50010
    13/12/10 11:36:55 DEBUG hdfs.DFSClient: Send buf size 131071
    13/12/10 11:36:55 DEBUG hdfs.DFSClient: DataStreamer block BP-1542097938-10.10.96.33-1386589677395:blk_1073741860_1036 sending packet packet seqno:0 offsetInBlock:0 lastPacketInBlock:false lastByteOffsetInBlock: 118
    13/12/10 11:36:55 DEBUG hdfs.DFSClient: DFSClient seqno: 0 status: SUCCESS downstreamAckTimeNanos: 0
    13/12/10 11:36:55 DEBUG hdfs.DFSClient: DataStreamer block BP-1542097938-10.10.96.33-1386589677395:blk_1073741860_1036 sending packet packet seqno:1 offsetInBlock:118 lastPacketInBlock:true lastByteOffsetInBlock: 118
    13/12/10 11:36:55 DEBUG hdfs.DFSClient: DFSClient seqno: 1 status: SUCCESS downstreamAckTimeNanos: 0
    13/12/10 11:36:55 DEBUG hdfs.DFSClient: Closing old block BP-1542097938-10.10.96.33-1386589677395:blk_1073741860_1036
    13/12/10 11:36:55 DEBUG ipc.Client: IPC Client (666537607) connection to /10.10.96.33:8020 from root sending #8
    13/12/10 11:36:55 DEBUG ipc.Client: IPC Client (666537607) connection to /10.10.96.33:8020 from root got value #8
    13/12/10 11:36:55 DEBUG ipc.ProtobufRpcEngine: Call: complete took 8ms
    Wrote input for Map #1
    13/12/10 11:36:55 DEBUG hdfs.DFSClient: /user/root/QuasiMonteCarlo_1386646614155_1445162438/in/part2: masked=rw-r--r--
    13/12/10 11:36:55 DEBUG ipc.Client: IPC Client (666537607) connection to /10.10.96.33:8020 from root sending #9
    13/12/10 11:36:55 DEBUG ipc.Client: IPC Client (666537607) connection to /10.10.96.33:8020 from root got value #9
    13/12/10 11:36:55 DEBUG ipc.ProtobufRpcEngine: Call: create took 4ms
    13/12/10 11:36:55 DEBUG hdfs.DFSClient: computePacketChunkSize: src=/user/root/QuasiMonteCarlo_1386646614155_1445162438/in/part2, chunkSize=516, chunksPerPacket=127, packetSize=65532
    13/12/10 11:36:55 DEBUG hdfs.DFSClient: DFSClient writeChunk allocating new packet seqno=0, src=/user/root/QuasiMonteCarlo_1386646614155_1445162438/in/part2, packetSize=65532, chunksPerPacket=127, bytesCurBlock=0
    13/12/10 11:36:55 DEBUG hdfs.DFSClient: Queued packet 0
    13/12/10 11:36:55 DEBUG hdfs.DFSClient: Queued packet 1
    13/12/10 11:36:55 DEBUG hdfs.DFSClient: Waiting for ack for: 1
    13/12/10 11:36:55 DEBUG hdfs.DFSClient: Allocating new block
    13/12/10 11:36:55 DEBUG ipc.Client: IPC Client (666537607) connection to /10.10.96.33:8020 from root sending #10
    13/12/10 11:36:55 DEBUG ipc.Client: IPC Client (666537607) connection to /10.10.96.33:8020 from root got value #10
    13/12/10 11:36:55 DEBUG ipc.ProtobufRpcEngine: Call: addBlock took 0ms
    13/12/10 11:36:55 DEBUG hdfs.DFSClient: pipeline = 10.10.96.33:50010
    13/12/10 11:36:55 DEBUG hdfs.DFSClient: Connecting to datanode 10.10.96.33:50010
    13/12/10 11:36:55 DEBUG hdfs.DFSClient: Send buf size 131071
    13/12/10 11:36:55 DEBUG hdfs.DFSClient: DataStreamer block BP-1542097938-10.10.96.33-1386589677395:blk_1073741861_1037 sending packet packet seqno:0 offsetInBlock:0 lastPacketInBlock:false lastByteOffsetInBlock: 118
    13/12/10 11:36:55 DEBUG hdfs.DFSClient: DFSClient seqno: 0 status: SUCCESS downstreamAckTimeNanos: 0
    13/12/10 11:36:55 DEBUG hdfs.DFSClient: DataStreamer block BP-1542097938-10.10.96.33-1386589677395:blk_1073741861_1037 sending packet packet seqno:1 offsetInBlock:118 lastPacketInBlock:true lastByteOffsetInBlock: 118
    13/12/10 11:36:55 DEBUG hdfs.DFSClient: DFSClient seqno: 1 status: SUCCESS downstreamAckTimeNanos: 0
    13/12/10 11:36:55 DEBUG hdfs.DFSClient: Closing old block BP-1542097938-10.10.96.33-1386589677395:blk_1073741861_1037
    13/12/10 11:36:55 DEBUG ipc.Client: IPC Client (666537607) connection to /10.10.96.33:8020 from root sending #11
    13/12/10 11:36:55 DEBUG ipc.Client: IPC Client (666537607) connection to /10.10.96.33:8020 from root got value #11
    13/12/10 11:36:55 DEBUG ipc.ProtobufRpcEngine: Call: complete took 12ms
    Wrote input for Map #2
    Starting Job
    13/12/10 11:36:55 DEBUG security.UserGroupInformation: PrivilegedAction as:root (auth:SIMPLE) from:org.apache.hadoop.mapreduce.Job.connect(Job.java:1233)
    13/12/10 11:36:55 DEBUG mapreduce.Cluster: Trying ClientProtocolProvider : org.apache.hadoop.mapred.YarnClientProtocolProvider
    13/12/10 11:36:55 DEBUG service.AbstractService: Service: org.apache.hadoop.mapred.ResourceMgrDelegate entered state INITED
    13/12/10 11:36:55 DEBUG service.AbstractService: Service: org.apache.hadoop.yarn.client.api.impl.YarnClientImpl entered state INITED
    13/12/10 11:36:55 DEBUG security.UserGroupInformation: PrivilegedAction as:root (auth:SIMPLE) from:org.apache.hadoop.yarn.client.RMProxy.getProxy(RMProxy.java:63)
    13/12/10 11:36:55 DEBUG ipc.YarnRPC: Creating YarnRPC for org.apache.hadoop.yarn.ipc.HadoopYarnProtoRPC
    13/12/10 11:36:55 DEBUG ipc.HadoopYarnProtoRPC: Creating a HadoopYarnProtoRpc proxy for protocol interface org.apache.hadoop.yarn.api.ApplicationClientProtocol
    13/12/10 11:36:55 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
    13/12/10 11:36:55 DEBUG service.AbstractService: Service org.apache.hadoop.yarn.client.api.impl.YarnClientImpl is started
    13/12/10 11:36:55 DEBUG service.AbstractService: Service org.apache.hadoop.mapred.ResourceMgrDelegate is started
    13/12/10 11:36:55 DEBUG security.UserGroupInformation: PrivilegedAction as:root (auth:SIMPLE) from:org.apache.hadoop.fs.FileContext.getAbstractFileSystem(FileContext.java:329)
    13/12/10 11:36:55 DEBUG hdfs.BlockReaderLocal: dfs.client.use.legacy.blockreader.local = false
    13/12/10 11:36:55 DEBUG hdfs.BlockReaderLocal: dfs.client.read.shortcircuit = false
    13/12/10 11:36:55 DEBUG hdfs.BlockReaderLocal: dfs.client.domain.socket.data.traffic = false
    13/12/10 11:36:55 DEBUG hdfs.BlockReaderLocal: dfs.domain.socket.path =
    13/12/10 11:36:55 DEBUG retry.RetryUtils: multipleLinearRandomRetry = null
    13/12/10 11:36:55 DEBUG hdfs.BlockReaderLocal: Both short-circuit local reads and UNIX domain socket are disabled.
    13/12/10 11:36:55 DEBUG mapreduce.Cluster: Picked org.apache.hadoop.mapred.YarnClientProtocolProvider as the ClientProtocolProvider
    13/12/10 11:36:55 DEBUG security.UserGroupInformation: PrivilegedAction as:root (auth:SIMPLE) from:org.apache.hadoop.mapreduce.Cluster.getFileSystem(Cluster.java:161)
    13/12/10 11:36:55 DEBUG security.UserGroupInformation: PrivilegedAction as:root (auth:SIMPLE) from:org.apache.hadoop.mapreduce.Job.submit(Job.java:1265)
    13/12/10 11:36:55 DEBUG ipc.Client: IPC Client (666537607) connection to /10.10.96.33:8020 from root sending #12
    13/12/10 11:36:55 DEBUG ipc.Client: IPC Client (666537607) connection to /10.10.96.33:8020 from root got value #12
    13/12/10 11:36:55 DEBUG ipc.ProtobufRpcEngine: Call: getFileInfo took 1ms
    13/12/10 11:36:55 DEBUG mapred.ResourceMgrDelegate: getStagingAreaDir: dir=/tmp/hadoop-yarn/staging/root/.staging
    13/12/10 11:36:55 DEBUG ipc.Client: IPC Client (666537607) connection to /10.10.96.33:8020 from root sending #13
    13/12/10 11:36:55 DEBUG ipc.Client: IPC Client (666537607) connection to /10.10.96.33:8020 from root got value #13
    13/12/10 11:36:55 DEBUG ipc.ProtobufRpcEngine: Call: getFileInfo took 1ms
    13/12/10 11:36:55 DEBUG ipc.Client: IPC Client (666537607) connection to /10.10.96.33:8020 from root sending #14
    13/12/10 11:36:55 DEBUG ipc.Client: IPC Client (666537607) connection to /10.10.96.33:8020 from root got value #14
    13/12/10 11:36:55 DEBUG ipc.ProtobufRpcEngine: Call: getFileInfo took 1ms
    13/12/10 11:36:55 DEBUG ipc.Client: The ping interval is 60000 ms.
    13/12/10 11:36:55 DEBUG ipc.Client: Connecting to /0.0.0.0:8032
    13/12/10 11:36:55 DEBUG ipc.Client: IPC Client (666537607) connection to /0.0.0.0:8032 from root: starting, having connections 2
    13/12/10 11:36:55 DEBUG ipc.Client: IPC Client (666537607) connection to /0.0.0.0:8032 from root sending #15
    13/12/10 11:36:55 DEBUG ipc.Client: IPC Client (666537607) connection to /0.0.0.0:8032 from root got value #15
    13/12/10 11:36:55 DEBUG ipc.ProtobufRpcEngine: Call: getNewApplication took 7ms
    13/12/10 11:36:55 DEBUG mapreduce.JobSubmitter: Configuring job job_1386598961500_0007 with /tmp/hadoop-yarn/staging/root/.staging/job_1386598961500_0007 as the submit dir
    13/12/10 11:36:55 DEBUG mapreduce.JobSubmitter: adding the following namenodes' delegation tokens:[hdfs://10.10.96.33:8020]
    13/12/10 11:36:55 DEBUG mapreduce.JobSubmitter: default FileSystem: hdfs://10.10.96.33:8020
    13/12/10 11:36:55 DEBUG ipc.Client: IPC Client (666537607) connection to /10.10.96.33:8020 from root sending #16
    13/12/10 11:36:55 DEBUG ipc.Client: IPC Client (666537607) connection to /10.10.96.33:8020 from root got value #16
    13/12/10 11:36:55 DEBUG ipc.ProtobufRpcEngine: Call: getFileInfo took 2ms
    13/12/10 11:36:55 DEBUG hdfs.DFSClient: /tmp/hadoop-yarn/staging/root/.staging/job_1386598961500_0007: masked=rwxr-xr-x
    13/12/10 11:36:55 DEBUG ipc.Client: IPC Client (666537607) connection to /10.10.96.33:8020 from root sending #17
    13/12/10 11:36:55 DEBUG ipc.Client: IPC Client (666537607) connection to /10.10.96.33:8020 from root got value #17
    13/12/10 11:36:55 DEBUG ipc.ProtobufRpcEngine: Call: mkdirs took 7ms
    13/12/10 11:36:55 DEBUG ipc.Client: IPC Client (666537607) connection to /10.10.96.33:8020 from root sending #18
    13/12/10 11:36:55 DEBUG ipc.Client: IPC Client (666537607) connection to /10.10.96.33:8020 from root got value #18
    13/12/10 11:36:55 DEBUG ipc.ProtobufRpcEngine: Call: setPermission took 6ms
    13/12/10 11:36:55 DEBUG ipc.Client: IPC Client (666537607) connection to /10.10.96.33:8020 from root sending #19
    13/12/10 11:36:55 DEBUG ipc.Client: IPC Client (666537607) connection to /10.10.96.33:8020 from root got value #19
    13/12/10 11:36:55 DEBUG ipc.ProtobufRpcEngine: Call: getFileInfo took 1ms
    13/12/10 11:36:55 DEBUG hdfs.DFSClient: /tmp/hadoop-yarn/staging/root/.staging/job_1386598961500_0007/job.jar: masked=rw-r--r--
    13/12/10 11:36:55 DEBUG ipc.Client: IPC Client (666537607) connection to /10.10.96.33:8020 from root sending #20
    13/12/10 11:36:55 DEBUG ipc.Client: IPC Client (666537607) connection to /10.10.96.33:8020 from root got value #20
    13/12/10 11:36:55 DEBUG ipc.ProtobufRpcEngine: Call: create took 13ms
    13/12/10 11:36:55 DEBUG hdfs.DFSClient: computePacketChunkSize: src=/tmp/hadoop-yarn/staging/root/.staging/job_1386598961500_0007/job.jar, chunkSize=516, chunksPerPacket=127, packetSize=65532
    13/12/10 11:36:55 DEBUG hdfs.DFSClient: DFSClient writeChunk allocating new packet seqno=0, src=/tmp/hadoop-yarn/staging/root/.staging/job_1386598961500_0007/job.jar, packetSize=65532, chunksPerPacket=127, bytesCurBlock=0
    13/12/10 11:36:55 DEBUG hdfs.DFSClient: DFSClient writeChunk packet full seqno=0, src=/tmp/hadoop-yarn/staging/root/.staging/job_1386598961500_0007/job.jar, bytesCurBlock=65024, blockSize=134217728, appendChunk=false
    13/12/10 11:36:55 DEBUG hdfs.DFSClient: Queued packet 0
    13/12/10 11:36:55 DEBUG hdfs.DFSClient: computePacketChunkSize: src=/tmp/hadoop-yarn/staging/root/.staging/job_1386598961500_0007/job.jar, chunkSize=516, chunksPerPacket=127, packetSize=65532
    13/12/10 11:36:55 DEBUG hdfs.DFSClient: Allocating new block
    13/12/10 11:36:55 DEBUG hdfs.DFSClient: DFSClient writeChunk allocating new packet seqno=1, src=/tmp/hadoop-yarn/staging/root/.staging/job_1386598961500_0007/job.jar, packetSize=65532, chunksPerPacket=127, bytesCurBlock=65024
    13/12/10 11:36:55 DEBUG ipc.Client: IPC Client (666537607) connection to /10.10.96.33:8020 from root sending #21
    13/12/10 11:36:55 DEBUG ipc.Client: IPC Client (666537607) connection to /10.10.96.33:8020 from root got value #21
    13/12/10 11:36:55 DEBUG ipc.ProtobufRpcEngine: Call: addBlock took 1ms
    13/12/10 11:36:55 DEBUG hdfs.DFSClient: pipeline = 10.10.96.33:50010
    13/12/10 11:36:55 DEBUG hdfs.DFSClient: Connecting to datanode 10.10.96.33:50010
    13/12/10 11:36:55 DEBUG hdfs.DFSClient: Send buf size 131071
    13/12/10 11:36:55 DEBUG hdfs.DFSClient: DataStreamer block BP-1542097938-10.10.96.33-1386589677395:blk_1073741862_1038 sending packet packet seqno:0 offsetInBlock:0 lastPacketInBlock:false lastByteOffsetInBlock: 65024
    13/12/10 11:36:55 DEBUG hdfs.DFSClient: DFSClient writeChunk packet full seqno=1, src=/tmp/hadoop-yarn/staging/root/.staging/job_1386598961500_0007/job.jar, bytesCurBlock=130048, blockSize=134217728, appendChunk=false
    13/12/10 11:36:55 DEBUG hdfs.DFSClient: Queued packet 1
    13/12/10 11:36:55 DEBUG hdfs.DFSClient: computePacketChunkSize: src=/tmp/hadoop-yarn/staging/root/.staging/job_1386598961500_0007/job.jar, chunkSize=516, chunksPerPacket=127, packetSize=65532
    13/12/10 11:36:55 DEBUG hdfs.DFSClient: DataStreamer block BP-1542097938-10.10.96.33-1386589677395:blk_1073741862_1038 sending packet packet seqno:1 offsetInBlock:65024 lastPacketInBlock:false lastByteOffsetInBlock: 130048
    13/12/10 11:36:55 DEBUG hdfs.DFSClient: DFSClient writeChunk allocating new packet seqno=2, src=/tmp/hadoop-yarn/staging/root/.staging/job_1386598961500_0007/job.jar, packetSize=65532, chunksPerPacket=127, bytesCurBlock=130048
    13/12/10 11:36:55 DEBUG hdfs.DFSClient: DFSClient seqno: 0 status: SUCCESS downstreamAckTimeNanos: 0
    13/12/10 11:36:55 DEBUG hdfs.DFSClient: DFSClient seqno: 1 status: SUCCESS downstreamAckTimeNanos: 0
    13/12/10 11:36:55 DEBUG hdfs.DFSClient: DFSClient writeChunk packet full seqno=2, src=/tmp/hadoop-yarn/staging/root/.staging/job_1386598961500_0007/job.jar, bytesCurBlock=195072, blockSize=134217728, appendChunk=false
    13/12/10 11:36:55 DEBUG hdfs.DFSClient: Queued packet 2
    13/12/10 11:36:55 DEBUG hdfs.DFSClient: computePacketChunkSize: src=/tmp/hadoop-yarn/staging/root/.staging/job_1386598961500_0007/job.jar, chunkSize=516, chunksPerPacket=127, packetSize=65532
    13/12/10 11:36:55 DEBUG hdfs.DFSClient: DataStreamer block BP-1542097938-10.10.96.33-1386589677395:blk_1073741862_1038 sending packet packet seqno:2 offsetInBlock:130048 lastPacketInBlock:false lastByteOffsetInBlock: 195072
    13/12/10 11:36:55 DEBUG hdfs.DFSClient: DFSClient writeChunk allocating new packet seqno=3, src=/tmp/hadoop-yarn/staging/root/.staging/job_1386598961500_0007/job.jar, packetSize=65532, chunksPerPacket=127, bytesCurBlock=195072
    13/12/10 11:36:55 DEBUG hdfs.DFSClient: DFSClient seqno: 2 status: SUCCESS downstreamAckTimeNanos: 0
    13/12/10 11:36:55 DEBUG hdfs.DFSClient: DFSClient writeChunk packet full seqno=3, src=/tmp/hadoop-yarn/staging/root/.staging/job_1386598961500_0007/job.jar, bytesCurBlock=260096, blockSize=134217728, appendChunk=false
    13/12/10 11:36:55 DEBUG hdfs.DFSClient: Queued packet 3
    13/12/10 11:36:55 DEBUG hdfs.DFSClient: computePacketChunkSize: src=/tmp/hadoop-yarn/staging/root/.staging/job_1386598961500_0007/job.jar, chunkSize=516, chunksPerPacket=127, packetSize=65532
    13/12/10 11:36:55 DEBUG hdfs.DFSClient: DataStreamer block BP-1542097938-10.10.96.33-1386589677395:blk_1073741862_1038 sending packet packet seqno:3 offsetInBlock:195072 lastPacketInBlock:false lastByteOffsetInBlock: 260096
    13/12/10 11:36:55 DEBUG hdfs.DFSClient: DFSClient writeChunk allocating new packet seqno=4, src=/tmp/hadoop-yarn/staging/root/.staging/job_1386598961500_0007/job.jar, packetSize=65532, chunksPerPacket=127, bytesCurBlock=260096
    13/12/10 11:36:55 DEBUG hdfs.DFSClient: Queued packet 4
    13/12/10 11:36:55 DEBUG hdfs.DFSClient: Queued packet 5
    13/12/10 11:36:55 DEBUG hdfs.DFSClient: Waiting for ack for: 5
    13/12/10 11:36:55 DEBUG hdfs.DFSClient: DataStreamer block BP-1542097938-10.10.96.33-1386589677395:blk_1073741862_1038 sending packet packet seqno:4 offsetInBlock:260096 lastPacketInBlock:false lastByteOffsetInBlock: 270227
    13/12/10 11:36:55 DEBUG hdfs.DFSClient: DFSClient seqno: 3 status: SUCCESS downstreamAckTimeNanos: 0
    13/12/10 11:36:55 DEBUG hdfs.DFSClient: DFSClient seqno: 4 status: SUCCESS downstreamAckTimeNanos: 0
    13/12/10 11:36:55 DEBUG hdfs.DFSClient: DataStreamer block BP-1542097938-10.10.96.33-1386589677395:blk_1073741862_1038 sending packet packet seqno:5 offsetInBlock:270227 lastPacketInBlock:true lastByteOffsetInBlock: 270227
    13/12/10 11:36:55 DEBUG hdfs.DFSClient: DFSClient seqno: 5 status: SUCCESS downstreamAckTimeNanos: 0
    13/12/10 11:36:55 DEBUG hdfs.DFSClient: Closing old block BP-1542097938-10.10.96.33-1386589677395:blk_1073741862_1038
    13/12/10 11:36:55 DEBUG ipc.Client: IPC Client (666537607) connection to /10.10.96.33:8020 from root sending #22
    13/12/10 11:36:55 DEBUG ipc.Client: IPC Client (666537607) connection to /10.10.96.33:8020 from root got value #22
    13/12/10 11:36:55 DEBUG ipc.ProtobufRpcEngine: Call: complete took 6ms
    13/12/10 11:36:55 DEBUG ipc.Client: IPC Client (666537607) connection to /10.10.96.33:8020 from root sending #23
    13/12/10 11:36:55 DEBUG ipc.Client: IPC Client (666537607) connection to /10.10.96.33:8020 from root got value #23
    13/12/10 11:36:55 DEBUG ipc.ProtobufRpcEngine: Call: setReplication took 6ms
    13/12/10 11:36:55 DEBUG ipc.Client: IPC Client (666537607) connection to /10.10.96.33:8020 from root sending #24
    13/12/10 11:36:55 DEBUG ipc.Client: IPC Client (666537607) connection to /10.10.96.33:8020 from root got value #24
    13/12/10 11:36:55 DEBUG ipc.ProtobufRpcEngine: Call: setPermission took 12ms
    13/12/10 11:36:55 DEBUG mapreduce.JobSubmitter: Creating splits at hdfs://10.10.96.33:8020/tmp/hadoop-yarn/staging/root/.staging/job_1386598961500_0007
    13/12/10 11:36:55 INFO mapreduce.JobSubmitter: Cleaning up the staging area /tmp/hadoop-yarn/staging/root/.staging/job_1386598961500_0007
    13/12/10 11:36:55 DEBUG ipc.Client: IPC Client (666537607) connection to /10.10.96.33:8020 from root sending #25
    13/12/10 11:36:55 DEBUG ipc.Client: IPC Client (666537607) connection to /10.10.96.33:8020 from root got value #25
    13/12/10 11:36:55 DEBUG ipc.ProtobufRpcEngine: Call: delete took 11ms
    13/12/10 11:36:55 ERROR security.UserGroupInformation: PriviledgedActionException as:root (auth:SIMPLE) cause:org.apache.hadoop.mapreduce.lib.input.InvalidInputException: Input path does not exist: file:/root/hadoop-2.2.0/QuasiMonteCarlo_1386646614155_1445162438/in
    13/12/10 11:36:55 DEBUG ipc.Client: IPC Client (666537607) connection to /10.10.96.33:8020 from root sending #26
    13/12/10 11:36:55 DEBUG ipc.Client: IPC Client (666537607) connection to /10.10.96.33:8020 from root got value #26
    13/12/10 11:36:55 DEBUG ipc.ProtobufRpcEngine: Call: delete took 12ms
    org.apache.hadoop.mapreduce.lib.input.InvalidInputException: Input path does not exist: file:/root/hadoop-2.2.0/QuasiMonteCarlo_1386646614155_1445162438/in
            at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.listStatus(FileInputFormat.java:285)
            at org.apache.hadoop.mapreduce.lib.input.SequenceFileInputFormat.listStatus(SequenceFileInputFormat.java:59)
            at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.getSplits(FileInputFormat.java:340)
            at org.apache.hadoop.mapreduce.JobSubmitter.writeNewSplits(JobSubmitter.java:491)
            at org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:508)
            at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:392)
            at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1268)
            at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1265)
            at java.security.AccessController.doPrivileged(Native Method)
            at javax.security.auth.Subject.doAs(Subject.java:415)
            at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
            at org.apache.hadoop.mapreduce.Job.submit(Job.java:1265)
            at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1286)
            at org.apache.hadoop.examples.QuasiMonteCarlo.estimatePi(QuasiMonteCarlo.java:306)
            at org.apache.hadoop.examples.QuasiMonteCarlo.run(QuasiMonteCarlo.java:354)
            at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
            at org.apache.hadoop.examples.QuasiMonteCarlo.main(QuasiMonteCarlo.java:363)
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
            at java.lang.reflect.Method.invoke(Method.java:606)
            at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:72)
            at org.apache.hadoop.util.ProgramDriver.run(ProgramDriver.java:144)
            at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:74)
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
            at java.lang.reflect.Method.invoke(Method.java:606)
            at org.apache.hadoop.util.RunJar.main(RunJar.java:212)
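
    The debug output narrows the problem down. The input files part0 through part2 are written through the DFSClient to hdfs://10.10.96.33:8020 under /user/root/QuasiMonteCarlo_1386646614155_1445162438/in, and the JobSubmitter itself reports "default FileSystem: hdfs://10.10.96.33:8020". Yet when the input splits are computed, FileInputFormat.listStatus looks for the same relative input path on the local filesystem (file:/root/hadoop-2.2.0/QuasiMonteCarlo_1386646614155_1445162438/in), so the write path and the input-path resolution are not seeing the same default filesystem. The log does not show the root cause; with this symptom a common place to start is the client-side configuration under etc/hadoop, for example fs.defaultFS missing from core-site.xml or set in a file that not every code path loads, which lets part of the client fall back to the built-in file:/// default. A minimal sanity check, assuming the stock directory layout of the 2.2.0 tarball:

    [root@server-518 hadoop-2.2.0]# grep -B1 -A1 defaultFS etc/hadoop/core-site.xml
    [root@server-518 hadoop-2.2.0]# grep -B1 -A1 framework.name etc/hadoop/mapred-site.xml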
  • Original post: https://www.cnblogs.com/littlesuccess/p/3467078.html