• Failed to rename HdfsNamedFileStatus


    Error message:

    21/03/29 14:05:08 WARN TaskSetManager: Lost task 53.0 in stage 6.0 (TID 580, test-bdp06, executor 7): org.apache.spark.SparkException: Task failed while writing rows
        at org.apache.spark.internal.io.SparkHadoopWriter$.org$apache$spark$internal$io$SparkHadoopWriter$$executeTask(SparkHadoopWriter.scala:155)
        at org.apache.spark.internal.io.SparkHadoopWriter$$anonfun$3.apply(SparkHadoopWriter.scala:83)
        at org.apache.spark.internal.io.SparkHadoopWriter$$anonfun$3.apply(SparkHadoopWriter.scala:78)
        at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
        at org.apache.spark.scheduler.Task.run(Task.scala:109)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
    Caused by: java.io.IOException: Failed to rename HdfsNamedFileStatus{path=hdfs://mycluster/data/autohome-log/1614524400000.tmp/_temporary/0/_temporary/attempt_20210329140504_0016_m_000053_0/5; isDirectory=false; length=2859909; replication=3; blocksize=134217728; modification_time=1616997908781; access_time=1616997908596; owner=root; group=hdfs; permission=rw-r--r--; isSymlink=false; hasAcl=false; isEncrypted=false; isErasureCoded=false} to hdfs://mycluster/data/autohome-log/1614524400000.tmp/5
        at org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter.mergePaths(FileOutputCommitter.java:473)
        at org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter.mergePaths(FileOutputCommitter.java:486)
        at org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter.commitTask(FileOutputCommitter.java:597)
        at org.apache.hadoop.mapred.FileOutputCommitter.commitTask(FileOutputCommitter.java:172)
        at org.apache.hadoop.mapred.OutputCommitter.commitTask(OutputCommitter.java:343)
        at org.apache.spark.mapred.SparkHadoopMapRedUtil$.performCommit$1(SparkHadoopMapRedUtil.scala:50)
        at org.apache.spark.mapred.SparkHadoopMapRedUtil$.commitTask(SparkHadoopMapRedUtil.scala:77)
        at org.apache.spark.internal.io.HadoopMapReduceCommitProtocol.commitTask(HadoopMapReduceCommitProtocol.scala:225)
        at org.apache.spark.internal.io.SparkHadoopWriter$$anonfun$4.apply(SparkHadoopWriter.scala:138)
        at org.apache.spark.internal.io.SparkHadoopWriter$$anonfun$4.apply(SparkHadoopWriter.scala:127)
        at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1415)
        at org.apache.spark.internal.io.SparkHadoopWriter$.org$apache$spark$internal$io$SparkHadoopWriter$$executeTask(SparkHadoopWriter.scala:139)
        ... 8 more

    Buggy code:

    JavaPairRDD<String, String> domainLogRdd = logRdd.mapToPair(log -> {
        // BUG: "|" is a regex metacharacter, so this splits on every character
        return new Tuple2<>(log.split("|")[1], log);
    });

    JavaPairRDD<String, String> domainRdd = domainLogRdd.reduceByKey(
            (log1, log2) -> log1.concat("\n").concat(log2));

    domainLogRdd.saveAsHadoopFile(path, String.class, String.class, RDDMultipleTextOutputFormat.class);
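
    RDDMultipleTextOutputFormat is not defined in the post; a typical implementation (assumed here, not the author's actual class) extends Hadoop's MultipleTextOutputFormat and uses the record's key as the output file name, which is what makes colliding keys fatal at task-commit time:

    import org.apache.hadoop.mapred.lib.MultipleTextOutputFormat;

    public class RDDMultipleTextOutputFormat extends MultipleTextOutputFormat<String, String> {
        @Override
        protected String generateFileNameForKeyValue(String key, String value, String name) {
            // Route each record to an output file named after its key (the domain)
            return key;
        }

        @Override
        protected String generateActualKey(String key, String value) {
            // Returning null makes the underlying writer emit only the value
            return null;
        }
    }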

    Solution:

    The delimiter was not escaped. String.split takes a regex, so the pipe must be escaped: change log.split("|") to log.split("\\|") (the backslash itself also needs escaping inside a Java string literal) and the error goes away.
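
    For reference, a minimal sketch of the corrected mapToPair:

    JavaPairRDD<String, String> domainLogRdd = logRdd.mapToPair(log -> {
        // "\\|" in the string literal is the regex \|, which matches a literal pipe
        return new Tuple2<>(log.split("\\|")[1], log);
    });

    Pattern.quote("|") achieves the same thing without hand-escaping the regex.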

    Root cause:

    String.split takes a regular expression, and | is the regex alternation operator. Left unescaped, the pattern "|" matches the empty string, so split("|") breaks the log line into single characters, and log.split("|")[1] returns the line's second character rather than its second pipe-delimited field. Because RDDMultipleTextOutputFormat turns the key into the output file name, tasks in different partitions end up committing files with the same one-character name (the "5" in the stack trace above), and FileOutputCommitter's rename fails because the destination file already exists.
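
    A standalone snippet (not from the original post) that shows the difference:

    public class SplitDemo {
        public static void main(String[] args) {
            String log = "2021-03-29|autohome.com|GET /index";
            // Unescaped: "|" matches the empty string, so the line splits per character
            System.out.println(log.split("|")[1]);    // prints "0" on Java 8+ (the second character)
            // Escaped: the regex \| matches a literal pipe
            System.out.println(log.split("\\|")[1]);  // prints "autohome.com"
        }
    }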

  • Original post: https://www.cnblogs.com/zcqkk/p/14593875.html