• Learning MapReduce's InputFormat


    Yesterday, after several hours of study, I worked through the first stage of MapReduce: mapping the raw data at the front of the input files into key-value pairs, in other words the InputFormat stage. The process is not especially hard, but it does hide plenty of details.

    Few people study this part of the code in much depth, so today let me dissect how it works for you.

    I also spent a little time drawing a few structure diagrams to make it easier to follow.

    One note up front: the MapReduce I studied is mainly the old API, i.e., the classes under the mapred package.

              In its most primitive form, InputFormat is just an interface, and all the later Format variants derive from it. Its structure is as follows, containing only the two most important methods:

    public interface InputFormat<K, V> {
    
      /** 
       * Logically split the set of input files for the job.  
       * 
       * <p>Each {@link InputSplit} is then assigned to an individual {@link Mapper}
       * for processing.</p>
       *
       * <p><i>Note</i>: The split is a <i>logical</i> split of the inputs and the
       * input files are not physically split into chunks. For e.g. a split could
       * be <i><input-file-path, start, offset></i> tuple.
       * 
       * @param job job configuration.
       * @param numSplits the desired number of splits, a hint.
       * @return an array of {@link InputSplit}s for the job.
       */
      InputSplit[] getSplits(JobConf job, int numSplits) throws IOException;
    
      /** 
       * Get the {@link RecordReader} for the given {@link InputSplit}.
       *
       * <p>It is the responsibility of the <code>RecordReader</code> to respect
       * record boundaries while processing the logical split to present a 
       * record-oriented view to the individual task.</p>
       * 
       * @param split the {@link InputSplit}
       * @param job the job that this split belongs to
       * @return a {@link RecordReader}
       */
      RecordReader<K, V> getRecordReader(InputSplit split,
                                         JobConf job, 
                                         Reporter reporter) throws IOException;
    }
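
    To make the division of labor concrete, here is a rough sketch of how the old-API runtime consumes these two methods. This is my own simplification, not actual framework code: in reality each split is dispatched to its own map task rather than looped over locally, and the class and method names (InputFormatDriverSketch, runMapPhase) are made up.

    import java.io.IOException;
    import org.apache.hadoop.mapred.*;

    public class InputFormatDriverSketch {
      static <K, V> void runMapPhase(InputFormat<K, V> format, JobConf job,
                                     Mapper<K, V, K, V> mapper,
                                     OutputCollector<K, V> out,
                                     Reporter reporter) throws IOException {
        // step 1: carve the input into logical splits
        InputSplit[] splits = format.getSplits(job, job.getNumMapTasks());
        for (InputSplit split : splits) {
          // step 2: get a record-oriented view of one split
          RecordReader<K, V> reader = format.getRecordReader(split, job, reporter);
          K key = reader.createKey();
          V value = reader.createValue();
          while (reader.next(key, value)) {
            mapper.map(key, value, out, reporter);  // each record drives one map() call
          }
          reader.close();
        }
      }
    }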
    So the explanation that follows revolves around these two methods. Of course, the most common case is taking input data from files, which is handled by the FileInputFormat class. Its declaration is as follows:

    public abstract class FileInputFormat<K, V> implements InputFormat<K, V>
    Let's look at its main method:

    public InputSplit[] getSplits(JobConf job, int numSplits)
    The return type is an array of InputSplit objects. InputSplit is the abstract notion of an input split; its structure is as follows:

    public interface InputSplit extends Writable {
    
      /**
       * Get the total number of bytes in the data of the <code>InputSplit</code>.
       * 
       * @return the number of bytes in the input split.
       * @throws IOException
       */
      long getLength() throws IOException;
      
      /**
       * Get the list of hostnames where the input split is located.
       * 
       * @return list of hostnames where data of the <code>InputSplit</code> is
       *         located as an array of <code>String</code>s.
       * @throws IOException
       */
      String[] getLocations() throws IOException;
    }
    It exposes two data-related methods, whose return values are later handed to the RecordReader. Before digging into getSplits, one more class needs to be understood: FileStatus, which wraps a set of basic file metadata:

    public class FileStatus implements Writable, Comparable {
    
      private Path path;
      private long length;
      private boolean isdir;
      private short block_replication;
      private long blocksize;
      private long modification_time;
      private long access_time;
      private FsPermission permission;
      private String owner;
      private String group;
    .....
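
    Incidentally, the same metadata is available outside MapReduce through the FileSystem API. A quick hedged example (the class name FileStatusPeek and the path /tmp/demo.txt are assumptions of mine):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class FileStatusPeek {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        // /tmp/demo.txt is a made-up path for illustration
        FileStatus st = fs.getFileStatus(new Path("/tmp/demo.txt"));
        System.out.println(st.getLen() + " bytes, block size " + st.getBlockSize()
            + ", isDir=" + st.isDir());
      }
    }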

    If you're feeling a little dizzy by now, here is a small class diagram I made to help:

    [class diagram]

    As the diagram shows, for compatibility between the old and new versions, FileSplit extends the new abstract class InputSplit while also implementing the old InputSplit interface. Now let's walk through the core getSplits logic:

    /** Splits files returned by {@link #listStatus(JobConf)} when
       * they're too big.*/ 
      @SuppressWarnings("deprecation")
      public InputSplit[] getSplits(JobConf job, int numSplits)
        throws IOException {
    // fetch the FileStatus of every input file
    FileStatus[] files = listStatus(job);
    
    // save the number of input files in the job-conf
    job.setLong(NUM_INPUT_FILES, files.length);
    long totalSize = 0;                           // compute total size
    for (FileStatus file: files) {                // check we have valid files
      if (file.isDir()) {
        // a directory rather than a plain file: fail fast
        throw new IOException("Not a file: "+ file.getPath());
      }
      totalSize += file.getLen();
    }

    // the split size the user is aiming for: total size divided by the
    // requested number of splits
    long goalSize = totalSize / (numSplits == 0 ? 1 : numSplits);
    // the system-wide lower bound on the split size
    long minSize = Math.max(job.getLong("mapred.min.split.size", 1),
                            minSplitSize);

    // generate splits: pre-size the list for numSplits FileSplits
    ArrayList<FileSplit> splits = new ArrayList<FileSplit>(numSplits);
    NetworkTopology clusterMap = new NetworkTopology();
    for (FileStatus file: files) {
      Path path = file.getPath();
      FileSystem fs = path.getFileSystem(job);
      long length = file.getLen();
      // the locations of this file's blocks
      BlockLocation[] blkLocations = fs.getFileBlockLocations(file, 0, length);
      // if the file is non-empty and splittable
      if ((length != 0) && isSplitable(fs, path)) {
        // the block size of this file
        long blockSize = file.getBlockSize();
        // derive the final split size from the goal size, the minimum size,
        // and the block size
        long splitSize = computeSplitSize(goalSize, minSize, blockSize);

        long bytesRemaining = length;
        // keep splitting while the remaining bytes exceed the split size by
        // more than the SPLIT_SLOP ratio (1.1)
        while (((double) bytesRemaining)/splitSize > SPLIT_SLOP) {
          // the hosts that serve this split's data
          String[] splitHosts = getSplitHosts(blkLocations,
              length-bytesRemaining, splitSize, clusterMap);
          // add one FileSplit
          splits.add(new FileSplit(path, length-bytesRemaining, splitSize,
                                   splitHosts));
          // consume splitSize bytes
          bytesRemaining -= splitSize;
        }

        if (bytesRemaining != 0) {
          // add the leftover tail; bytesRemaining is now at most 1.1 times
          // splitSize
          splits.add(new FileSplit(path, length-bytesRemaining, bytesRemaining,
                     blkLocations[blkLocations.length-1].getHosts()));
        }
      } else if (length != 0) {
        // not splittable: add the whole file as a single split
        String[] splitHosts = getSplitHosts(blkLocations, 0, length, clusterMap);
        splits.add(new FileSplit(path, 0, length, splitHosts));
      } else { 
        // create empty hosts array for zero length files
        splits.add(new FileSplit(path, 0, length, new String[0]));
      }
    }
    // finally return the FileSplit array
    LOG.debug("Total # of splits: " + splits.size());
    return splits.toArray(new FileSplit[splits.size()]);
  }
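
    To see what the SPLIT_SLOP check buys, here is a small trace I wrote mimicking the loop above; the 260 MB file size and 128 MB split size are assumed numbers, not from the original post:

    public class SplitSlopTrace {
      public static void main(String[] args) {
        long mb = 1024L * 1024L;
        long length = 260 * mb;        // assumed file size
        long splitSize = 128 * mb;     // assumed computed split size
        final double SPLIT_SLOP = 1.1; // same slop factor FileInputFormat uses
        long bytesRemaining = length;
        while (((double) bytesRemaining) / splitSize > SPLIT_SLOP) {
          // first pass: 260/128 = 2.03 > 1.1, so emit split [0 MB, 128 MB)
          System.out.println("split at " + (length - bytesRemaining) / mb
              + " MB, length 128 MB");
          bytesRemaining -= splitSize;
        }
        // the loop stops with 132 MB left, because 132/128 = 1.03 <= 1.1;
        // the tail becomes one 132 MB split instead of 128 MB plus a tiny 4 MB one
        System.out.println("tail at " + (length - bytesRemaining) / mb
            + " MB, length " + bytesRemaining / mb + " MB");
      }
    }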

    The computeSplitSize method used above is worth a closer look: it reconciles the goal size, the block size, and the system-allowed minimum. In short, the split size is the smaller of the goal size and the block size, but never below the configured minimum:

    protected long computeSplitSize(long goalSize, long minSize,
                                           long blockSize) {
        return Math.max(minSize, Math.min(goalSize, blockSize));
      }
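
    As a quick sanity check on the formula, here is a worked example with numbers I picked myself:

    public class SplitSizeExample {
      // same formula as FileInputFormat.computeSplitSize above
      static long computeSplitSize(long goalSize, long minSize, long blockSize) {
        return Math.max(minSize, Math.min(goalSize, blockSize));
      }

      public static void main(String[] args) {
        long mb = 1024L * 1024L;
        // goal (250 MB) larger than the block (128 MB): the block size wins,
        // so splits line up with HDFS blocks
        System.out.println(computeSplitSize(250 * mb, 1, 128 * mb) / mb);      // 128
        // goal (32 MB) smaller than the block: the goal wins, yielding more,
        // smaller splits
        System.out.println(computeSplitSize(32 * mb, 1, 128 * mb) / mb);       // 32
        // a configured minimum (64 MB) overrides the goal when it is larger
        System.out.println(computeSplitSize(32 * mb, 64 * mb, 128 * mb) / mb); // 64
      }
    }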
    The flow chart for the above process is as follows:

    [flow chart]

    There are three cases, and each takes its own execution path: a splittable file, an unsplittable non-empty file, and an empty file.

          Once getSplits has finished, the data in the files has been carved into InputSplits, and the next step is for a RecordReader to pull the individual records out of each split. getRecordReader is an abstract method in FileInputFormat that subclasses must implement; here I've picked two typical subclasses, SequenceFileInputFormat and TextInputFormat.

    Their getRecordReader implementations are as follows (SequenceFileInputFormat's first, then TextInputFormat's):

    public RecordReader<K, V> getRecordReader(InputSplit split,
                                          JobConf job, Reporter reporter)
        throws IOException {
    
        reporter.setStatus(split.toString());
    
        return new SequenceFileRecordReader<K, V>(job, (FileSplit) split);
      }

    public RecordReader<LongWritable, Text> getRecordReader(
                                              InputSplit genericSplit, JobConf job,
                                              Reporter reporter)
        throws IOException {
        
        reporter.setStatus(genericSplit.toString());
        return new LineRecordReader(job, (FileSplit) genericSplit);
      }

    You can see that the difference lies entirely in LineRecordReader versus SequenceFileRecordReader, which means the two formats read their underlying data differently. Let's dig deeper:

    /** An {@link RecordReader} for {@link SequenceFile}s. */
    public class SequenceFileRecordReader<K, V> implements RecordReader<K, V> {
      
      private SequenceFile.Reader in;
      private long start;
      private long end;
      private boolean more = true;
      protected Configuration conf;
    
      public SequenceFileRecordReader(Configuration conf, FileSplit split)
        throws IOException {
        Path path = split.getPath();
        FileSystem fs = path.getFileSystem(conf);
    // open a SequenceFile reader on this split's file
        this.in = new SequenceFile.Reader(fs, path, conf);
        this.end = split.getStart() + split.getLength();
        this.conf = conf;
    
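    // sync() seeks to the next sync marker at or after split.getStart(), so a
    // split that begins mid-record starts reading at a real record boundary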
        if (split.getStart() > in.getPosition())
          in.sync(split.getStart());                  // sync to start
    
        this.start = in.getPosition();
        more = start < end;
      }
    
      ......
      
  /**
   * Get the next key-value pair.
   */
  public synchronized boolean next(K key, V value) throws IOException {
    // check whether more records remain
        if (!more) return false;
        long pos = in.getPosition();
        boolean remaining = (in.next(key) != null);
        if (remaining) {
          getCurrentValue(value);
        }
        if (pos >= end && in.syncSeen()) {
          more = false;
        } else {
          more = remaining;
        }
        return more;
      }
    We can see that SequenceFileRecordReader reads from the input stream in, one key-value pair at a time. The other implementation looks like this:

    /**
     * Treats keys as offset in file and value as line. 
     */
    public class LineRecordReader implements RecordReader<LongWritable, Text> {
      private static final Log LOG
        = LogFactory.getLog(LineRecordReader.class.getName());
    
      private CompressionCodecFactory compressionCodecs = null;
      private long start;
      private long pos;
      private long end;
      private LineReader in;
      int maxLineLength;
    
      ....
      
      /** Read a line. */
      public synchronized boolean next(LongWritable key, Text value)
        throws IOException {
    
        while (pos < end) {
      // use the current byte position in the file as the key
      key.set(pos);

      // read one line at the current position into value, bounded by maxLineLength
          int newSize = in.readLine(value, maxLineLength,
                                    Math.max((int)Math.min(Integer.MAX_VALUE, end-pos),
                                             maxLineLength));
          if (newSize == 0) {
            return false;
          }
          pos += newSize;
          if (newSize < maxLineLength) {
            return true;
          }
    
          // line too long. try again
          LOG.info("Skipped line of size " + newSize + " at pos " + (pos - newSize));
        }
    
        return false;
      }
    
    This implementation tracks the read position and pulls key-value pairs out of the input stream line by line: the byte offset becomes the key and the line contents become the value.
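
    As a concrete illustration, take a file whose bytes are "foo\nhello\n" (my own example input). The reader would then produce:

    // next(key, value) yields, in order:
    //   key = 0, value = "foo"    // the first line starts at byte offset 0
    //   key = 4, value = "hello"  // "foo\n" occupies 4 bytes, so the next line starts at 4
    // each newline is consumed to advance pos, but is not included in the value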

    Through these two readers we obtain the key-value pairs that feed the subsequent map operation.
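
    Putting it together, here is a minimal old-API driver sketch showing where the InputFormat gets chosen; the class name, job name, and paths are assumptions of mine:

    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.mapred.*;

    public class DemoDriver {                        // hypothetical driver class
      public static void main(String[] args) throws Exception {
        JobConf conf = new JobConf(DemoDriver.class);
        conf.setJobName("inputformat-demo");         // assumed job name
        conf.setInputFormat(TextInputFormat.class);  // -> LineRecordReader under the hood
        FileInputFormat.setInputPaths(conf, new Path("/tmp/in"));    // assumed input dir
        FileOutputFormat.setOutputPath(conf, new Path("/tmp/out"));  // assumed output dir
        conf.setNumMapTasks(4);  // only a hint: it arrives as numSplits in getSplits()
        JobClient.runJob(conf);  // defaults to IdentityMapper/IdentityReducer
      }
    }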

    I have glossed over a few details of the InputFormat stage, but the overall process is as described above.

    Copyright notice: this is an original post by the blog author; please do not reproduce it without permission.

  • Original post: https://www.cnblogs.com/mengfanrong/p/4824199.html