SparkStreaming "Could not read data from write ahead log record": error analysis and fix


    # error raised when the WAL (write ahead log) is enabled
    org.apache.spark.SparkException: Could not read data from write ahead log record FileBasedWriteAheadLogSegment

    When Spark Streaming runs with checkpointing and the WAL enabled, the error above occasionally appears. It does not take down the whole application; only the data of the failing job is lost. The root cause is that the WAL file was deleted, by Spark Streaming's own cleanup mechanism. This usually means the streaming application has some degree of rate mismatch or backlog.

    Looking at the driver log, entries like the following show up:

    2017-03-23 13:55:00 INFO  [Logging.scala:58] Attempting to clear 0 old log files in hdfs://alps-cluster/tmp/banyan/checkpoint/RhinoWechatConsumer/receivedBlockMetadata older than 1490248380000:
    2017-03-23 13:55:05 INFO  [Logging.scala:58] Attempting to clear 1 old log files in hdfs://alps-cluster/tmp/banyan/checkpoint/RhinoWechatConsumer/receivedBlockMetadata older than 1490248470000: hdfs://alps-cluster/tmp/banyan/checkpoint/RhinoWechatConsumer/receivedBlockMetadata/log-1490248404471-1490248464471
    2017-03-23 13:55:05 INFO  [Logging.scala:58] Cleared log files in hdfs://alps-cluster/tmp/banyan/checkpoint/RhinoWechatConsumer/receivedBlockMetadata older than 1490248470000
    2017-03-23 13:55:05 ERROR [Logging.scala:74] Task 41 in stage 35.0 failed 4 times; aborting job
    2017-03-23 13:55:05 ERROR [Logging.scala:95] Error running job streaming job 1490248470000 ms.0
    org.apache.spark.SparkException: Job aborted due to stage failure: Task 41 in stage 35.0 failed 4 times, most recent failure: Lost task 41.3 in stage 35.0 (TID 4273, alps60): org.apache.spark.SparkException: Could not read data from write ahead log record FileBasedWriteAheadLogSegment(hdfs://alps-cluster/tmp/banyan/checkpoint/RhinoWechatConsumer/receivedData/0/log-1490248403649-1490248463649,44333482,118014)
            at org.apache.spark.streaming.rdd.WriteAheadLogBackedBlockRDD.org$apache$spark$streaming$rdd$WriteAheadLogBackedBlockRDD$$getBlockFromWriteAheadLog$1(WriteAheadLogBackedBlockRDD.scala:143)

    You can see that the 1490248403649 log file was removed by the cleanup ("Cleared log files ... older than 1490248470000"), after which reading that WAL segment failed.

    The official Spark documentation has no configuration for this, so we go straight to the source code. (Spark has plenty of pitfalls like this; only the source reveals the workarounds and hidden settings.)

    1. Searching the code by the log message leads to the clean method in FileBasedWriteAheadLog (the logic that follows it is the concrete deletion, which we can ignore for now). The core question is how this threshTime can be adjusted.

     /**
       * Delete the log files that are older than the threshold time.
       *
       * Its important to note that the threshold time is based on the time stamps used in the log
       * files, which is usually based on the local system time. So if there is coordination necessary
       * between the node calculating the threshTime (say, driver node), and the local system time
       * (say, worker node), the caller has to take account of possible time skew.
       *
       * If waitForCompletion is set to true, this method will return only after old logs have been
       * deleted. This should be set to true only for testing. Else the files will be deleted
       * asynchronously.
       */
      def clean(threshTime: Long, waitForCompletion: Boolean): Unit = {
        val oldLogFiles = synchronized {
          val expiredLogs = pastLogs.filter { _.endTime < threshTime }
          pastLogs --= expiredLogs
          expiredLogs
        }
    logInfo(s"Attempting to clear ${oldLogFiles.size} old log files in $logDirectory " +
      s"older than $threshTime: ${oldLogFiles.map { _.path }.mkString("\n")}")
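    The retention rule is easy to reproduce outside Spark. The sketch below mimics the filter in clean with plain Java; the first end time is taken from the failing WAL file name in the log above (log-1490248403649-1490248463649), the second segment is hypothetical. A segment whose endTime falls below threshTime is dropped even if a lagging job still needs it.

```java
import java.util.ArrayList;
import java.util.List;

public class WalCleanDemo {
    // Stand-in for FileBasedWriteAheadLog.clean: a WAL file named
    // log-<startTime>-<endTime> is expired as soon as endTime < threshTime,
    // with no check that running jobs have finished reading it.
    static List<Long> clean(List<Long> pastLogEndTimes, long threshTime) {
        List<Long> expired = new ArrayList<>();
        for (long endTime : pastLogEndTimes) {
            if (endTime < threshTime) expired.add(endTime);
        }
        pastLogEndTimes.removeAll(expired);
        return expired;
    }

    public static void main(String[] args) {
        List<Long> pastLogs = new ArrayList<>(List.of(1490248463649L, 1490248523649L));
        List<Long> expired = clean(pastLogs, 1490248470000L);
        // 1490248463649 < 1490248470000: the segment that task 41 was still
        // reading is deleted, producing the "Could not read data" failure.
        System.out.println(expired);   // [1490248463649]
        System.out.println(pastLogs);  // [1490248523649]
    }
}
```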

    2. Tracing the call chain outward step by step: ReceivedBlockHandler -> ReceiverSupervisorImpl -> CleanUpOldBlocks. At this point there is an RPC to ReceiverTracker, so search directly for CleanUpOldBlocks: CleanUpOldBlocks -> ReceiverTracker -> JobGenerator.

    In JobGenerator.clearCheckpointData there is this piece of logic:

     /** Clear DStream checkpoint data for the given `time`. */
      private def clearCheckpointData(time: Time) {
        ssc.graph.clearCheckpointData(time)
    
        // All the checkpoint information about which batches have been processed, etc have
        // been saved to checkpoints, so its safe to delete block metadata and data WAL files
        val maxRememberDuration = graph.getMaxInputStreamRememberDuration()
        jobScheduler.receiverTracker.cleanupOldBlocksAndBatches(time - maxRememberDuration)
        jobScheduler.inputInfoTracker.cleanup(time - maxRememberDuration)
        markBatchFullyProcessed(time)
      }
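    Plugging the post's timestamps into time - maxRememberDuration shows exactly where the 1490248470000 threshold in the driver log comes from, assuming rememberDuration resolved to its 30 s default (worked out at the end of this post) and that the triggering batch was the 13:55:00 one:

```java
public class CleanupThreshold {
    // JobGenerator.clearCheckpointData hands (time - maxRememberDuration)
    // to the receiver tracker as the WAL cleanup threshold.
    static long threshold(long batchTimeMs, long maxRememberMs) {
        return batchTimeMs - maxRememberMs;
    }

    public static void main(String[] args) {
        long batchTime = 1490248500000L;  // the 13:55:00 batch (assumed)
        long remember  = 30_000L;         // default rememberDuration for 15 s batches
        System.out.println(threshold(batchTime, remember));  // 1490248470000
        // Matches "Cleared log files ... older than 1490248470000" in the log,
        // and the failing segment log-...-1490248463649 ends before that threshold.
    }
}
```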

    So ssc.graph has a maxRememberDuration member! That means there is a chance to modify it through ssc.

    A quick search of the code turns up the corresponding method:

    jssc.remember(new Duration(2 * 3600 * 1000));
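    The 2 hours here is not special. A rough sizing heuristic (my assumption, not anything from Spark) is to keep WAL segments for at least twice the worst job delay you expect, and never below the minimum that DStream.initialize enforces (shown further down):

```java
public class RememberSizing {
    // Hypothetical helper (not part of Spark): choose a remember duration of
    // twice the worst observed batch delay, floored at the default minimum
    // (2 x checkpointDuration) that DStream.initialize would pick anyway.
    static long rememberMs(long batchMs, long worstDelayMs) {
        long checkpointMs = batchMs * (long) Math.ceil(10_000.0 / batchMs);
        long minRememberMs = 2 * checkpointMs;
        return Math.max(minRememberMs, 2 * worstDelayMs);
    }

    public static void main(String[] args) {
        // 15 s batches with jobs lagging up to an hour -> 2 h, the value used above.
        System.out.println(rememberMs(15_000L, 3_600_000L));  // 7200000
    }
}
```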

    Reflection:

    From the earlier logs we saw that the default cleanup horizon is only a few tens of seconds, yet the code shows this parameter can effectively be set only once (each assignment takes effect only if the current value is null, and the initial value is null). So where do those tens of seconds come from? Unable to find it right away, I searched the project for remember and found the initialization code in DStream (where slideDuration comes from InputDStream). Working it through: our batchInterval is 15s and the other two durations are unset, so checkpointDuration comes out to 15s and rememberDuration to 30s.

    override def slideDuration: Duration = {
        if (ssc == null) throw new Exception("ssc is null")
        if (ssc.graph.batchDuration == null) throw new Exception("batchDuration is null")
        ssc.graph.batchDuration
      }
    
     /**
       * Initialize the DStream by setting the "zero" time, based on which
       * the validity of future times is calculated. This method also recursively initializes
       * its parent DStreams.
       */
      private[streaming] def initialize(time: Time) {
        if (zeroTime != null && zeroTime != time) {
          throw new SparkException("ZeroTime is already initialized to " + zeroTime
            + ", cannot initialize it again to " + time)
        }
        zeroTime = time
    
        // Set the checkpoint interval to be slideDuration or 10 seconds, which ever is larger
        if (mustCheckpoint && checkpointDuration == null) {
          checkpointDuration = slideDuration * math.ceil(Seconds(10) / slideDuration).toInt
          logInfo("Checkpoint interval automatically set to " + checkpointDuration)
        }
    
        // Set the minimum value of the rememberDuration if not already set
        var minRememberDuration = slideDuration
        if (checkpointDuration != null && minRememberDuration <= checkpointDuration) {
          // times 2 just to be sure that the latest checkpoint is not forgotten (#paranoia)
          minRememberDuration = checkpointDuration * 2
        }
        if (rememberDuration == null || rememberDuration < minRememberDuration) {
          rememberDuration = minRememberDuration
        }
    
        // Initialize the dependencies
        dependencies.foreach(_.initialize(zeroTime))
      }
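    The 15 s / 30 s defaults claimed above can be verified by replaying the arithmetic of initialize in isolation (a standalone sketch; in Spark these values live on each DStream):

```java
public class DefaultDurations {
    // Mirrors DStream.initialize for a stream with mustCheckpoint = true and
    // no explicit checkpointDuration / rememberDuration.
    static long[] defaults(long slideMs) {
        // checkpointDuration = slideDuration * ceil(10 s / slideDuration)
        long checkpointMs = slideMs * (long) Math.ceil(10_000.0 / slideMs);
        long minRememberMs = slideMs;
        if (minRememberMs <= checkpointMs) {
            minRememberMs = 2 * checkpointMs;  // the "#paranoia" doubling
        }
        return new long[] { checkpointMs, minRememberMs };
    }

    public static void main(String[] args) {
        long[] d = defaults(15_000L);  // batchInterval = 15 s, as in this post
        System.out.println(d[0]);      // 15000 -> checkpointDuration = 15 s
        System.out.println(d[1]);      // 30000 -> rememberDuration   = 30 s
    }
}
```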
Original post: https://www.cnblogs.com/lhfcws/p/6605085.html