A Case of Slow DataNode Startup


    Preface

    Recently, because of a data-center relocation, our Hadoop cluster had more machines affected than any other system we run. Our goal was to move machines from one server room to another with as little impact as possible, ideally none, on the cluster's data and the jobs running on it. While carrying out the migration, however, we ran into some interesting behavior, which is what the title describes. Why "a case of"? Because the problem does not occur on every restart, which makes it less than obvious at first sight. Hopefully you will take something away from this article.


    Slow DataNode Startup: A Replay of the Scenario

    Seeing this heading, some readers may wonder: can a DataNode really start slowly? Isn't running sbin/hadoop-daemon.sh start datanode a matter of a few seconds? True, in the vast majority of cases DataNode startup really is that quick. But have you ever tried the following sequence?

    1. Stop the DataNode service on a machine.

    2. Move the machine to another server room; after the move it gets a new hostname and IP.

    3. The move in step 2 takes 20 to 30 minutes, or even several hours.

    4. Restart the DataNode on the machine with its new hostname and IP.

    I hit exactly this scenario during a recent DataNode migration (if you are interested, see my earlier article on the DataNode migration plan). After I started the new DataNode, the slow startup appeared: the NameNode web UI showed no block-count report from the new node for a long time, even though jps confirmed the process was alive. Only after four or five minutes did the new DN finally show up on the page, and block receive/delete records started appearing in the DataNode log. Clearly the DataNode was stuck on some operation during those first four or five minutes; nothing else would explain a delay that large.

    Do not underestimate those few minutes: when you need to restore DN service on short notice, every extra second of downtime that affects users is held against you. With the target narrowed to the first few minutes of startup, the obvious next move is to read the DataNode log and see what it is doing. After several attempts I found that the DN paused for a long time right after printing the following lines.

    2016-01-06 16:05:08,118 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Added new volume: DS-70097061-42f8-4c33-ac27-2a6ca21e60d4
    2016-01-06 16:05:08,118 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Added volume - /home/data/data/hadoop/dfs/data/data12/current, StorageType: DISK
    2016-01-06 16:05:08,176 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Registered FSDatasetState MBean
    2016-01-06 16:05:08,177 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Adding block pool BP-1942012336-xx.xx.xx.xx-1406726500544
    2016-01-06 16:05:08,178 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Scanning block pool BP-1942012336-xx.xx.xx.xx-1406726500544 on volume /home/data/data/hadoop/dfs/data/data2/current...
    2016-01-06 16:05:08,179 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Scanning block pool BP-1942012336-xx.xx.xx.xx-1406726500544 on volume /home/data/data/hadoop/dfs/data/data3/current...
    2016-01-06 16:05:08,179 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Scanning block pool BP-1942012336-xx.xx.xx.xx-1406726500544 on volume /home/data/data/hadoop/dfs/data/data4/current...
    2016-01-06 16:05:08,179 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Scanning block pool BP-1942012336-xx.xx.xx.xx-1406726500544 on volume /home/data/data/hadoop/dfs/data/data5/current...
    2016-01-06 16:05:08,180 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Scanning block pool BP-1942012336-xx.xx.xx.xx-1406726500544 on volume /home/data/data/hadoop/dfs/data/data6/current...
    2016-01-06 16:05:08,180 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Scanning block pool BP-1942012336-xx.xx.xx.xx-1406726500544 on volume /home/data/data/hadoop/dfs/data/data7/current...
    2016-01-06 16:05:08,180 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Scanning block pool BP-1942012336-xx.xx.xx.xx-1406726500544 on volume /home/data/data/hadoop/dfs/data/data8/current...
    2016-01-06 16:05:08,180 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Scanning block pool BP-1942012336-xx.xx.xx.xx-1406726500544 on volume /home/data/data/hadoop/dfs/data/data9/current...
    2016-01-06 16:05:08,181 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Scanning block pool BP-1942012336-xx.xx.xx.xx-1406726500544 on volume /home/data/data/hadoop/dfs/data/data10/current...
    2016-01-06 16:05:08,181 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Scanning block pool BP-1942012336-xx.xx.xx.xx-1406726500544 on volume /home/data/data/hadoop/dfs/data/data11/current...
    2016-01-06 16:05:08,181 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Scanning block pool BP-1942012336-xx.xx.xx.xx-1406726500544 on volume /home/data/data/hadoop/dfs/data/data12/current...
    2016-01-06 16:09:49,646 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Time taken to scan block pool BP-1942012336-xx.xx.xx.xx-1406726500544 on /home/data/data/hadoop/dfs/data/data7/current: 281466ms
    2016-01-06 16:09:54,235 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Time taken to scan block pool BP-1942012336-xx.xx.xx.xx-1406726500544 on /home/data/data/hadoop/dfs/data/data9/current: 286054ms
    2016-01-06 16:09:57,859 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Time taken to scan block pool BP-1942012336-xx.xx.xx.xx-1406726500544 on /home/data/data/hadoop/dfs/data/data2/current: 289680ms

    The log shows that after the DN finished adding its volumes and began the scanning phase, the timestamps jump straight from 16:05 to 16:09, right after the last "Scanning block pool" line. That gap is clearly what makes the DN start slowly. With this as the key clue, an effective next step is to trace the code behind these log messages.
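    As a sanity check, the stall can be computed directly from the two data7 timestamps in the log. A minimal standalone sketch (class and method names are mine):

```java
import java.time.Duration;
import java.time.LocalTime;

public class LogGap {
    // Milliseconds between two same-day log timestamps in HH:mm:ss.SSS form.
    static long gapMs(String start, String end) {
        return Duration.between(LocalTime.parse(start), LocalTime.parse(end))
                .toMillis();
    }

    public static void main(String[] args) {
        // data7: "Scanning block pool ..." logged at 16:05:08.180,
        //        "Time taken to scan ... 281466ms" logged at 16:09:49.646.
        System.out.println(gapMs("16:05:08.180", "16:09:49.646")); // 281466
    }
}
```

    The result, 281466 ms, matches the scan time the DataNode itself reports for data7, confirming the whole stall is spent inside the per-volume scan.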


    Code Tracing and Analysis

    The messages in the previous section are printed through the LOG object of FsDatasetImpl, but the call site actually lives in another class, FsVolumeList:

      void addBlockPool(final String bpid, final Configuration conf) throws IOException {
        long totalStartTime = Time.monotonicNow();
        
        final List<IOException> exceptions = Collections.synchronizedList(
            new ArrayList<IOException>());
        List<Thread> blockPoolAddingThreads = new ArrayList<Thread>();
        for (final FsVolumeImpl v : volumes.get()) {
          Thread t = new Thread() {
            public void run() {
              try (FsVolumeReference ref = v.obtainReference()) {
                FsDatasetImpl.LOG.info("Scanning block pool " + bpid +
                    " on volume " + v + "...");
                long startTime = Time.monotonicNow();
                v.addBlockPool(bpid, conf);
                long timeTaken = Time.monotonicNow() - startTime;
                FsDatasetImpl.LOG.info("Time taken to scan block pool " + bpid +
                    " on " + v + ": " + timeTaken + "ms");
              } catch (ClosedChannelException e) {
                // ignore.
              } catch (IOException ioe) {
                FsDatasetImpl.LOG.info("Caught exception while scanning " + v +
                    ". Will throw later.", ioe);
                exceptions.add(ioe);
              }
            }
          };
          blockPoolAddingThreads.add(t);
          t.start();
        }
        for (Thread t : blockPoolAddingThreads) {
          try {
            t.join();
          } catch (InterruptedException ie) {
            throw new IOException(ie);
          }
        }
        if (!exceptions.isEmpty()) {
          throw exceptions.get(0);
        }
        
        long totalTimeTaken = Time.monotonicNow() - totalStartTime;
        FsDatasetImpl.LOG.info("Total time to scan all replicas for block pool " +
            bpid + ": " + totalTimeTaken + "ms");
      }
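    The pattern above, one thread per volume followed by a join on all of them, means the total scan time is roughly that of the slowest volume rather than the sum. A toy standalone sketch, with made-up sleep times standing in for per-volume scans:

```java
public class ParallelScan {
    // One "scan" thread per volume, then join on all of them, as
    // FsVolumeList#addBlockPool does. Returns wall-clock milliseconds.
    static long scanAll(long[] perVolumeMs) {
        long start = System.nanoTime();
        Thread[] threads = new Thread[perVolumeMs.length];
        for (int i = 0; i < threads.length; i++) {
            final long ms = perVolumeMs[i];
            threads[i] = new Thread(() -> {
                try {
                    Thread.sleep(ms); // stand-in for scanning one volume
                } catch (InterruptedException ignored) {
                }
            });
            threads[i].start();
        }
        for (Thread t : threads) {
            try {
                t.join(); // startup waits here for every volume
            } catch (InterruptedException ie) {
                throw new RuntimeException(ie);
            }
        }
        return (System.nanoTime() - start) / 1_000_000;
    }

    public static void main(String[] args) {
        // Elapsed time is about max(150, 90, 300) = 300ms,
        // well under the 540ms a serial scan would take.
        System.out.println(scanAll(new long[] {150, 90, 300}));
    }
}
```

    This is also why one slow disk is enough to delay the whole registration with the NameNode: the block-pool report is only sent after every volume thread has joined.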
    Each volume's block pool is added in its own thread, and join() is called to wait for all of them to finish. Let's drill into what each thread is actually doing; it ends up in the following method:

    void addBlockPool(String bpid, Configuration conf) throws IOException {
        File bpdir = new File(currentDir, bpid);
        BlockPoolSlice bp = new BlockPoolSlice(bpid, this, bpdir, conf);
        bpSlices.put(bpid, bp);
      }
    So what is BlockPoolSlice? The standard Javadoc in the source explains:

    /**
     * A block pool slice represents a portion of a block pool stored on a volume.  
     * Taken together, all BlockPoolSlices sharing a block pool ID across a 
     * cluster represent a single block pool.
     * 
     * This class is synchronized by {@link FsVolumeImpl}.
     */
    class BlockPoolSlice {
    ...

    In other words, a BlockPoolSlice is the portion of a block pool that lives on one volume. If you have three disks A, B, and C, each of them has a directory holding that disk's share of the data for the same block pool id. Where does the block pool id itself come from? That deserves a short digression.

    How DataNode FsVolumes Relate to Block Pools

    First, the block pool id. It is fixed when the cluster is first created, that is, when the NameNode is formatted, and it never changes afterwards, neither during cluster upgrades nor during machine moves. It is stored in the NamespaceInfo class, described as follows:

    /**
     * NamespaceInfo is returned by the name-node in reply 
     * to a data-node handshake.
     * 
     */
    @InterfaceAudience.Private
    @InterfaceStability.Evolving
    public class NamespaceInfo extends StorageInfo {
      final String  buildVersion;
      String blockPoolID = "";    // id of the block pool
      String softwareVersion;
      long capabilities;
    ...
    I drew a structure diagram to illustrate these relationships (not reproduced here).


    Back to the original problem: none of the operations so far looks time-consuming at all, so let's keep digging, into the BlockPoolSlice constructor:

    /**
       * Create a blook pool slice 
       * @param bpid Block pool Id
       * @param volume {@link FsVolumeImpl} to which this BlockPool belongs to
       * @param bpDir directory corresponding to the BlockPool
       * @param conf configuration
       * @throws IOException
       */
      BlockPoolSlice(String bpid, FsVolumeImpl volume, File bpDir,
          Configuration conf) throws IOException {
        ...
        // Use cached value initially if available. Or the following call will
        // block until the initial du command completes.
        this.dfsUsage = new DU(bpDir, conf, loadDfsUsed());
        this.dfsUsage.start();
    
        // Make the dfs usage to be saved during shutdown.
        ShutdownHookManager.get().addShutdownHook(
          new Runnable() {
            @Override
            public void run() {
              if (!dfsUsedSaved) {
                saveDfsUsed();
              }
            }
          }, SHUTDOWN_HOOK_PRIORITY);
      }
    Here, at last, is the key step: the constructor performs a DU operation, and the comment explicitly says that if no cached value is available, the call blocks until the initial du command completes. So we can hypothesize that the "slow startup" scenario is a cache-miss case. The cached dfsUsed value is loaded by loadDfsUsed; let's step into that method.

    /**
       * Read in the cached DU value and return it if it is less than 600 seconds
       * old (DU update interval). Slight imprecision of dfsUsed is not critical and
       * skipping DU can significantly shorten the startup time. If the cached value
       * is not available or too old, -1 is returned.
       */
      long loadDfsUsed() {
        long cachedDfsUsed;
        long mtime;
        Scanner sc;
    
        try {
          sc = new Scanner(new File(currentDir, DU_CACHE_FILE), "UTF-8");
        } catch (FileNotFoundException fnfe) {
          return -1;
        }
    
        try {
          // Get the recorded dfsUsed from the file.
          if (sc.hasNextLong()) {
            cachedDfsUsed = sc.nextLong();
          } else {
            return -1;
          }
          // Get the recorded mtime from the file.
          if (sc.hasNextLong()) {
            mtime = sc.nextLong();
          } else {
            return -1;
          }
    
          // Return the cached value if mtime is okay.
          if (mtime > 0 && (Time.now() - mtime < 600000L)) {
            FsDatasetImpl.LOG.info("Cached dfsUsed found for " + currentDir + ": " +
                cachedDfsUsed);
            return cachedDfsUsed;
          }
          return -1;
        } finally {
          sc.close();
        }
      }
    The method first reads the cache file from the given directory. If the recorded mtime is within 600 seconds it returns the cached value; otherwise it returns -1 and a fresh du must recompute the dfs usage. In my scenario the DataNode had been down for 30 minutes or more, so the cache was well past 600 seconds and always invalid, making the expensive du unavoidable. The problem is now pinned down: a hard-coded 600s. For a tunable like this, hard-coding is unreasonable; it should be configurable to fit the operator's needs. In my situation there is no need to recompute dfsUsed at all, since no data on the DataNode changed during the move.
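    The decision loadDfsUsed makes boils down to a few lines. A standalone sketch of just the cache-file logic (the real file holds two longs, the cached value and its write time; class and method names here are mine, and 600000 is the hard-coded window):

```java
import java.util.Scanner;

public class DfsUsedCache {
    // Parse "cachedValue mtime" and accept the cache only while the entry is
    // younger than staleMs, mirroring BlockPoolSlice#loadDfsUsed.
    // -1 means "cache unusable, fall back to a full du".
    static long loadCached(String cacheFileContent, long nowMs, long staleMs) {
        try (Scanner sc = new Scanner(cacheFileContent)) {
            if (!sc.hasNextLong()) return -1;
            long cachedDfsUsed = sc.nextLong();
            if (!sc.hasNextLong()) return -1;
            long mtime = sc.nextLong();
            return (mtime > 0 && nowMs - mtime < staleMs) ? cachedDfsUsed : -1;
        }
    }

    public static void main(String[] args) {
        long now = System.currentTimeMillis();
        long staleMs = 600_000L; // the hard-coded 600s window
        // Written 5 minutes ago: cache accepted, no du needed.
        System.out.println(loadCached("40487047190 " + (now - 300_000L), now, staleMs));
        // Node was down 30 minutes for a machine move: cache rejected.
        System.out.println(loadCached("40487047190 " + (now - 1_800_000L), now, staleMs));
    }
}
```

    With a 30-minute outage the second call returns -1, which is exactly the path that triggers a blocking du on every volume at startup.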


    Making the Interval Configurable

    The goal is clear: turn the hard-coded value into a new hdfs-site.xml configuration option. The steps are straightforward. First, add the key name and default value to DFSConfigKeys:

    public static final String DFS_DATANODE_CACHED_DFSUSED_CHECK_INTERVAL_MS =
          "dfs.datanode.cached-dfsused.check.interval.ms";
      public static final long DFS_DATANODE_CACHED_DFSUSED_CHECK_INTERVAL_DEFAULT_MS =
          600000;
    Next, add a field to BlockPoolSlice and read the new property, falling back to the default when it is not set:

    /**
     * A block pool slice represents a portion of a block pool stored on a volume.  
     * Taken together, all BlockPoolSlices sharing a block pool ID across a 
     * cluster represent a single block pool.
     * 
     * This class is synchronized by {@link FsVolumeImpl}.
     */
    class BlockPoolSlice {
      ...
      private final long cachedDfsUsedCheckTime;
      
      // TODO:FEDERATION scalability issue - a thread per DU is needed
      private final DU dfsUsage;
    
      /**
       * Create a blook pool slice 
       * @param bpid Block pool Id
       * @param volume {@link FsVolumeImpl} to which this BlockPool belongs to
       * @param bpDir directory corresponding to the BlockPool
       * @param conf configuration
       * @throws IOException
       */
      BlockPoolSlice(String bpid, FsVolumeImpl volume, File bpDir,
          Configuration conf) throws IOException {
        ...
        this.cachedDfsUsedCheckTime =
            conf.getLong(
                DFSConfigKeys.DFS_DATANODE_CACHED_DFSUSED_CHECK_INTERVAL_MS,
                DFSConfigKeys.DFS_DATANODE_CACHED_DFSUSED_CHECK_INTERVAL_DEFAULT_MS);
    Then replace the hard-coded 600000L in loadDfsUsed and update the method's Javadoc:

    /**
       * Read in the cached DU value and return it if it is less than
       * cachedDfsUsedCheckTime which is set by
       * dfs.datanode.cached-dfsused.check.interval.ms parameter. Slight imprecision
       * of dfsUsed is not critical and skipping DU can significantly shorten the
       * startup time. If the cached value is not available or too old, -1 is
       * returned.
       */
      long loadDfsUsed() {
        ...
          // Return the cached value if mtime is okay.
          if (mtime > 0 && (Time.now() - mtime < cachedDfsUsedCheckTime)) {
            FsDatasetImpl.LOG.info("Cached dfsUsed found for " + currentDir + ": " +
                cachedDfsUsed);
            return cachedDfsUsed;
          }
          return -1;
        } finally {
          sc.close();
        }
      }
    最后在res的hdfs-default.xml加上新的配置数以及描述信息:

    <property>
      <name>dfs.datanode.cached-dfsused.check.interval.ms</name>
      <value>600000</value>
      <description>The interval check time of loading the cached
        dfsUsed value.
      </description>
    </property>
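    With the property in place, BlockPoolSlice picks it up through Configuration.getLong, which falls back to the compiled-in default when the key is absent. A hypothetical sketch of that lookup, using plain java.util.Properties in place of Hadoop's Configuration:

```java
import java.util.Properties;

public class ConfLookup {
    static final String KEY = "dfs.datanode.cached-dfsused.check.interval.ms";
    static final long DEFAULT_MS = 600_000L;

    // Mimics Configuration.getLong(key, default): the configured value wins,
    // otherwise the default applies, so existing clusters see no change.
    static long intervalMs(Properties conf) {
        String v = conf.getProperty(KEY);
        return (v == null) ? DEFAULT_MS : Long.parseLong(v.trim());
    }

    public static void main(String[] args) {
        Properties conf = new Properties();
        System.out.println(intervalMs(conf)); // 600000: default, same as before

        // For a planned multi-hour machine move, widen the window so the
        // cached dfsUsed survives the downtime and no du runs on restart.
        conf.setProperty(KEY, "21600000"); // 6 hours
        System.out.println(intervalMs(conf)); // 21600000
    }
}
```

    The default of 600000 ms preserves the old behavior exactly, so the change is safe for clusters that never touch the new key.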

    I then wrote a few test cases (the unit tests are included in the patch linked below), deployed the new jar to a DataNode, and restarted it. Log records like the following show that the cached dfsUsed values were used and startup completed very quickly:

    2016-01-13 09:13:20,639  INFO [Thread-68] (FsVolumeList.java:402) - Scanning block pool BP-1543590671-xx.xx.xx.xx-1449897014835 on volume /home/data/data/hadoop/dfs/data/data9/current...
    2016-01-13 09:13:20,639  INFO [Thread-69] (FsVolumeList.java:402) - Scanning block pool BP-1543590671-xx.xx.xx.xx-1449897014835 on volume /home/data/data/hadoop/dfs/data/data10/current...
    2016-01-13 09:13:20,639  INFO [Thread-71] (FsVolumeList.java:402) - Scanning block pool BP-1543590671-xx.xx.xx.xx-1449897014835 on volume /home/data/data/hadoop/dfs/data/data12/current...
    2016-01-13 09:13:20,639  INFO [Thread-70] (FsVolumeList.java:402) - Scanning block pool BP-1543590671-xx.xx.xx.xx-1449897014835 on volume /home/data/data/hadoop/dfs/data/data11/current...
    2016-01-13 09:13:20,822  INFO [Thread-62] (BlockPoolSlice.java:221) - Cached dfsUsed found for /home/data/data/hadoop/dfs/data/data3/current/BP-1543590671-xx.xx.xx.xx-1449897014835/current: 40487047190
    2016-01-13 09:13:20,825  INFO [Thread-63] (BlockPoolSlice.java:221) - Cached dfsUsed found for /home/data/data/hadoop/dfs/data/data4/current/BP-1543590671-xx.xx.xx.xx-1449897014835/current: 39336478590
    2016-01-13 09:13:20,825  INFO [Thread-62] (FsVolumeList.java:407) - Time taken to scan block pool BP-1543590671-xx.xx.xx.xx-1449897014835 on /home/data/data/hadoop/dfs/data/data3/current: 187ms
    2016-01-13 09:13:20,826  INFO [Thread-63] (FsVolumeList.java:407) - Time taken to scan block pool BP-1543590671-xx.xx.xx.xx-1449897014835 on /home/data/data/hadoop/dfs/data/data4/current: 189ms

    I have contributed this improvement to the open-source community as HDFS-9624.


    Related Links

    Issue: https://issues.apache.org/jira/browse/HDFS-9624

    GitHub patch: https://github.com/linyiqun/open-source-patch/tree/master/hdfs/HDFS-9624

    Original post: https://www.cnblogs.com/bianqi/p/12183804.html