• My First Hadoop Program: Hello Hadoop


    This is original content. Please credit the source when reposting: http://blog.csdn.net/panjunbiao/article/details/12773163


    Download the Hadoop distribution from: http://hadoop.apache.org/releases.html#Download

    To install on a CentOS server, run:
    yum install hadoop-1.2.1-1.x86_64.rpm

    On a Linux or Mac OS X development machine, you can download the binary or source package and simply extract it.
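
    For example, assuming the 1.2.1 binary tarball has been downloaded to the current directory (the exact archive name depends on the release you pick), extracting it could look like this:
    tar -xzf hadoop-1.2.1-bin.tar.gz -C ~/Developments/toolkits/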

    Verify the hadoop executable (assuming the distribution lives in ~/Developments/toolkits/hadoop-1.2.1):
    cd ~/Developments/toolkits/hadoop-1.2.1

    Run the hadoop command:
    bin/hadoop

    Usage: hadoop [--config confdir] COMMAND
    where COMMAND is one of:
      namenode -format     format the DFS filesystem
      secondarynamenode    run the DFS secondary namenode
      namenode             run the DFS namenode
      datanode             run a DFS datanode...

    If the usage help above appears, the binary runs correctly.

    Create the Hello Hadoop Java project:
    Following the example in "Hadoop: The Definitive Guide", create the following three source files.
    MaxTemperature.java

    /**
     * Driver for the MaxTemperature job: sets the input/output paths, the
     * mapper and reducer classes, and the output key/value types.
     */
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
    public class MaxTemperature {
        public static void main(String[] args) throws Exception {
            if (args.length != 2) {
                System.err.println("Usage: MaxTemperature <input path> <output path>");
                System.exit(-1);
            }
    
            Job job = new Job();
            job.setJarByClass(MaxTemperature.class);
            job.setJobName("Max temperature");
            FileInputFormat.addInputPath(job, new Path(args[0]));
            FileOutputFormat.setOutputPath(job, new Path(args[1]));
    
            job.setMapperClass(MaxTemperatureMapper.class);
            job.setReducerClass(MaxTemperatureReducer.class);
            job.setOutputKeyClass(Text.class);
            job.setOutputValueClass(IntWritable.class);
    
            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }

     MaxTemperatureMapper.java

    /**
     * Mapper: parses one fixed-width NCDC weather record and emits a
     * (year, air temperature) pair for each valid reading.
     */
    import java.io.IOException;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;
    
    public class MaxTemperatureMapper
            extends Mapper<LongWritable, Text, Text, IntWritable> {
        private static final int MISSING = 9999;
    
        @Override
        public void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
    
            String line = value.toString();
            String year = line.substring(15, 19);
            int airTemperature;
            if (line.charAt(87) == '+') { // parseInt doesn't like leading plus signs
                airTemperature = Integer.parseInt(line.substring(88, 92));
            } else {
                airTemperature = Integer.parseInt(line.substring(87, 92));
            }
            String quality = line.substring(92, 93);
            if (airTemperature != MISSING && quality.matches("[01459]")) {
                context.write(new Text(year), new IntWritable(airTemperature));
            }
        }
    }
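
    To see concretely which character positions the mapper relies on, here is a small standalone sketch (my own illustration, not part of the book's code) that builds a synthetic record containing only the fields the mapper reads and then parses it the same way:

    public class ParseSketch {
        public static void main(String[] args) {
            // Build a synthetic, zero-padded record; real NCDC lines carry many more fields.
            StringBuilder sb = new StringBuilder();
            while (sb.length() < 15) sb.append('0');
            sb.append("1901");                  // characters 15-18: year
            while (sb.length() < 87) sb.append('0');
            sb.append("+0317");                 // character 87: sign, 88-91: temperature (tenths of a degree Celsius)
            sb.append("1");                     // character 92: quality code
            String line = sb.toString();

            String year = line.substring(15, 19);                            // "1901"
            int airTemperature = Integer.parseInt(line.substring(88, 92));   // 317
            String quality = line.substring(92, 93);                         // "1", accepted by "[01459]"
            System.out.println(year + "\t" + airTemperature + "\t" + quality);
        }
    }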


    MaxTemperatureReducer.java

    /**
     * Reducer: receives (year, list of temperatures) and emits the maximum
     * temperature observed for that year.
     */
    import java.io.IOException;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Reducer;
    public class MaxTemperatureReducer
            extends Reducer<Text, IntWritable, Text, IntWritable> {
    
        @Override
        public void reduce(Text key, Iterable<IntWritable> values,
                           Context context)
                throws IOException, InterruptedException {
    
            int maxValue = Integer.MIN_VALUE;
            for (IntWritable value : values) {
                maxValue = Math.max(maxValue, value.get());
            }
            context.write(key, new IntWritable(maxValue));
        }
    }
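
    Because taking a maximum is associative and commutative, the reducer class can optionally be reused as a combiner, so each map task pre-aggregates its own output before the shuffle. That is a single extra line in the driver; it is not part of the listing above, just a possible refinement:

            job.setMapperClass(MaxTemperatureMapper.class);
            job.setCombinerClass(MaxTemperatureReducer.class);   // optional: combine on the map side
            job.setReducerClass(MaxTemperatureReducer.class);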


    Add hadoop-core-1.2.1.jar to the project's libraries; this jar is found in the extracted Hadoop folder.

    Compile the project. Assuming the classes end up in ~/Developments/hello-hadoop/out/production/hello-hadoop/, export that folder as HADOOP_CLASSPATH:
    export HADOOP_CLASSPATH=~/Developments/hello-hadoop/out/production/hello-hadoop/
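
    If you prefer compiling from the command line instead of the IDE, something like this should work (the paths follow the layout assumed above; depending on your setup you may also need jars from Hadoop's lib/ directory on the classpath):
    javac -classpath ~/Developments/toolkits/hadoop-1.2.1/hadoop-core-1.2.1.jar \
          -d ~/Developments/hello-hadoop/out/production/hello-hadoop/ \
          MaxTemperature.java MaxTemperatureMapper.java MaxTemperatureReducer.java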

    Also make sure JAVA_HOME is defined; on Mac OS X, for example:
    export JAVA_HOME=/Library/Java/JavaVirtualMachines/jdk1.7.0_21.jdk/Contents/Home/

    Download the sample weather data ( http://hadoopbook.com/code.html ), which includes example records for 1901 and 1902.

    Go back to the Hadoop folder:
    cd ~/Developments/toolkits/hadoop-1.2.1

    Run the example program (hadoop finds the MaxTemperature class through HADOOP_CLASSPATH; 1901 is the weather data file downloaded above and output is the result directory):

    bin/hadoop MaxTemperature 1901 output

    2013-10-15 17:56:40.412 java[5522:1703] Unable to load realm info from SCDynamicStore
    13/10/15 17:56:41 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
    13/10/15 17:56:41 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
    13/10/15 17:56:41 WARN mapred.JobClient: No job jar file set.  User classes may not be found. See JobConf(Class) or JobConf#setJar(String).
    13/10/15 17:56:41 INFO input.FileInputFormat: Total input paths to process : 1
    13/10/15 17:56:41 WARN snappy.LoadSnappy: Snappy native library not loaded
    13/10/15 17:56:42 INFO mapred.JobClient: Running job: job_local1783370164_0001
    13/10/15 17:56:42 INFO mapred.LocalJobRunner: Waiting for map tasks
    13/10/15 17:56:42 INFO mapred.LocalJobRunner: Starting task: attempt_local1783370164_0001_m_000000_0
    13/10/15 17:56:42 INFO mapred.Task:  Using ResourceCalculatorPlugin : null
    13/10/15 17:56:42 INFO mapred.MapTask: Processing split: file:/Users/james/Developments/hello-hadoop/out/production/hello-hadoop/1901:0+888190
    13/10/15 17:56:42 INFO mapred.MapTask: io.sort.mb = 100
    13/10/15 17:56:42 INFO mapred.MapTask: data buffer = 79691776/99614720
    13/10/15 17:56:42 INFO mapred.MapTask: record buffer = 262144/327680
    13/10/15 17:56:42 INFO mapred.MapTask: Starting flush of map output
    13/10/15 17:56:42 INFO mapred.MapTask: Finished spill 0
    13/10/15 17:56:42 INFO mapred.Task: Task:attempt_local1783370164_0001_m_000000_0 is done. And is in the process of commiting
    13/10/15 17:56:42 INFO mapred.LocalJobRunner:
    13/10/15 17:56:42 INFO mapred.Task: Task 'attempt_local1783370164_0001_m_000000_0' done.
    13/10/15 17:56:42 INFO mapred.LocalJobRunner: Finishing task: attempt_local1783370164_0001_m_000000_0
    13/10/15 17:56:42 INFO mapred.LocalJobRunner: Map task executor complete.
    13/10/15 17:56:42 INFO mapred.Task:  Using ResourceCalculatorPlugin : null
    13/10/15 17:56:42 INFO mapred.LocalJobRunner:
    13/10/15 17:56:42 INFO mapred.Merger: Merging 1 sorted segments
    13/10/15 17:56:42 INFO mapred.Merger: Down to the last merge-pass, with 1 segments left of total size: 72206 bytes
    13/10/15 17:56:42 INFO mapred.LocalJobRunner:
    13/10/15 17:56:42 INFO mapred.Task: Task:attempt_local1783370164_0001_r_000000_0 is done. And is in the process of commiting
    13/10/15 17:56:42 INFO mapred.LocalJobRunner:
    13/10/15 17:56:42 INFO mapred.Task: Task attempt_local1783370164_0001_r_000000_0 is allowed to commit now
    13/10/15 17:56:42 INFO output.FileOutputCommitter: Saved output of task 'attempt_local1783370164_0001_r_000000_0' to output
    13/10/15 17:56:42 INFO mapred.LocalJobRunner: reduce > reduce
    13/10/15 17:56:42 INFO mapred.Task: Task 'attempt_local1783370164_0001_r_000000_0' done.
    13/10/15 17:56:43 INFO mapred.JobClient:  map 100% reduce 100%
    13/10/15 17:56:43 INFO mapred.JobClient: Job complete: job_local1783370164_0001
    13/10/15 17:56:43 INFO mapred.JobClient: Counters: 17
    13/10/15 17:56:43 INFO mapred.JobClient:   File Output Format Counters
    13/10/15 17:56:43 INFO mapred.JobClient:     Bytes Written=21
    13/10/15 17:56:43 INFO mapred.JobClient:   File Input Format Counters
    13/10/15 17:56:43 INFO mapred.JobClient:     Bytes Read=888190
    13/10/15 17:56:43 INFO mapred.JobClient:   FileSystemCounters
    13/10/15 17:56:43 INFO mapred.JobClient:     FILE_BYTES_READ=1848986
    13/10/15 17:56:43 INFO mapred.JobClient:     FILE_BYTES_WRITTEN=245951
    13/10/15 17:56:43 INFO mapred.JobClient:   Map-Reduce Framework
    13/10/15 17:56:43 INFO mapred.JobClient:     Reduce input groups=1
    13/10/15 17:56:43 INFO mapred.JobClient:     Map output materialized bytes=72210
    13/10/15 17:56:43 INFO mapred.JobClient:     Combine output records=0
    13/10/15 17:56:43 INFO mapred.JobClient:     Map input records=6565
    13/10/15 17:56:43 INFO mapred.JobClient:     Reduce shuffle bytes=0
    13/10/15 17:56:43 INFO mapred.JobClient:     Reduce output records=1
    13/10/15 17:56:43 INFO mapred.JobClient:     Spilled Records=13128
    13/10/15 17:56:43 INFO mapred.JobClient:     Map output bytes=59076
    13/10/15 17:56:43 INFO mapred.JobClient:     Total committed heap usage (bytes)=331350016
    13/10/15 17:56:43 INFO mapred.JobClient:     SPLIT_RAW_BYTES=141
    13/10/15 17:56:43 INFO mapred.JobClient:     Map output records=6564
    13/10/15 17:56:43 INFO mapred.JobClient:     Combine input records=0
    13/10/15 17:56:43 INFO mapred.JobClient:     Reduce input records=6564
    


    View the output:
    ls output/

    _SUCCESS     part-r-00000


    vi output/part-r-00000

    1901    317

    That is, the highest temperature in the 1901 sample is 317, i.e. 31.7°C (the NCDC data stores temperatures in tenths of a degree Celsius).
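
    The "No job jar file set" warning in the log above appears because the classes were picked up from HADOOP_CLASSPATH rather than from a job jar. For a run on a real cluster you would normally package the compiled classes into a jar (the jar name below is just an example) and submit it with the jar subcommand:
    jar cf max-temperature.jar -C ~/Developments/hello-hadoop/out/production/hello-hadoop/ .
    bin/hadoop jar max-temperature.jar MaxTemperature 1901 output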