• Running a Hadoop Program in Eclipse




    1. Download hadoop-eclipse-plugin-1.2.1.jar and copy it into eclipse/plugins.


    2. Open the Map/Reduce perspective

    In Eclipse, go to Window -> Open Perspective -> Other and select Map/Reduce.


    3. In the Map/Reduce Locations tab, create a new Location.



    4. In Project Explorer, you can now browse the file system of the location just defined.





    5. Prepare test data and upload it to HDFS.

    liaoliuqingdeMacBook-Air:Downloads liaoliuqing$ hadoop fs -mkdir in

    liaoliuqingdeMacBook-Air:Downloads liaoliuqing$ hadoop fs -copyFromLocal maxTemp.txt in

    liaoliuqingdeMacBook-Air:Downloads liaoliuqing$ hadoop fs -ls in

    Found 1 items

    -rw-r--r--   1 liaoliuqing supergroup        953 2014-12-14 09:47 /user/liaoliuqing/in/maxTemp.txt


    The contents of maxTemp.txt are as follows:

    123456798676231190101234567986762311901012345679867623119010123456798676231190101234561+00121534567890356

    123456798676231190101234567986762311901012345679867623119010123456798676231190101234562+01122934567890456

    123456798676231190201234567986762311901012345679867623119010123456798676231190101234562+02120234567893456

    123456798676231190401234567986762311901012345679867623119010123456798676231190101234561+00321234567803456

    123456798676231190101234567986762311902012345679867623119010123456798676231190101234561+00429234567903456

    123456798676231190501234567986762311902012345679867623119010123456798676231190101234561+01021134568903456

    123456798676231190201234567986762311902012345679867623119010123456798676231190101234561+01124234578903456

    123456798676231190301234567986762311905012345679867623119010123456798676231190101234561+04121234678903456

    123456798676231190301234567986762311905012345679867623119010123456798676231190101234561+00821235678903456


    6. Prepare the map-reduce program

    The program is available at http://blog.csdn.net/jediael_lu/article/details/37596469
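    The linked program follows the classic NCDC max-temperature example. As a rough sketch of its map and reduce logic in Python (the field offsets are assumptions taken from that example, not stated in this post: year at columns 15:19, signed temperature at columns 87:92 in tenths of a degree, and a quality code at column 92):

```python
def map_record(line):
    """Mapper logic: emit (year, temperature) for records passing the quality check."""
    year = line[15:19]
    if line[87] == '+':              # positive temperatures carry an explicit '+'
        temp = int(line[88:92])
    else:
        temp = int(line[87:92])
    quality = line[92]
    if temp != 9999 and quality in "01459":   # drop missing/suspect readings
        yield year, temp

def reduce_max(pairs):
    """Reducer logic: keep the maximum temperature per year."""
    out = {}
    for year, temp in pairs:
        out[year] = max(temp, out.get(year, temp))
    return out

# The first three records from maxTemp.txt above.
sample = [
    "123456798676231190101234567986762311901012345679867623119010123456798676231190101234561+00121534567890356",
    "123456798676231190101234567986762311901012345679867623119010123456798676231190101234562+01122934567890456",
    "123456798676231190201234567986762311901012345679867623119010123456798676231190101234562+02120234567893456",
]

pairs = [kv for line in sample for kv in map_record(line)]
print(reduce_max(pairs))   # the second record is dropped by its quality code
```

    This is only a sketch of what the job computes; the real implementation is the Java Mapper/Reducer at the URL above.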


    7. Run the program

    MaxTemperature.java -> Run As -> Run Configurations

    Enter the input and output directories in Arguments, then run.



    The job above reads from HDFS, but the program can just as well run against the local file system, which is a convenient way to debug it. Simply pass arguments such as:

    /Users/liaoliuqing/in   /Users/liaoliuqing/out



    8. The following is the output in the Eclipse console:

    14/12/14 10:52:05 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

    14/12/14 10:52:05 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.

    14/12/14 10:52:05 WARN mapred.JobClient: No job jar file set.  User classes may not be found. See JobConf(Class) or JobConf#setJar(String).

    14/12/14 10:52:05 INFO input.FileInputFormat: Total input paths to process : 1

    14/12/14 10:52:05 WARN snappy.LoadSnappy: Snappy native library not loaded

    14/12/14 10:52:06 INFO mapred.JobClient: Running job: job_local1815770300_0001

    14/12/14 10:52:06 INFO mapred.LocalJobRunner: Waiting for map tasks

    14/12/14 10:52:06 INFO mapred.LocalJobRunner: Starting task: attempt_local1815770300_0001_m_000000_0

    14/12/14 10:52:06 INFO mapred.Task:  Using ResourceCalculatorPlugin : null

    14/12/14 10:52:06 INFO mapred.MapTask: Processing split: hdfs://localhost:9000/user/liaoliuqing/in/maxTemp.txt:0+953

    14/12/14 10:52:06 INFO mapred.MapTask: io.sort.mb = 100

    14/12/14 10:52:06 INFO mapred.MapTask: data buffer = 79691776/99614720

    14/12/14 10:52:06 INFO mapred.MapTask: record buffer = 262144/327680

    14/12/14 10:52:06 INFO mapred.MapTask: Starting flush of map output

    14/12/14 10:52:06 INFO mapred.MapTask: Finished spill 0

    14/12/14 10:52:06 INFO mapred.Task: Task:attempt_local1815770300_0001_m_000000_0 is done. And is in the process of commiting

    14/12/14 10:52:06 INFO mapred.LocalJobRunner: 

    14/12/14 10:52:06 INFO mapred.Task: Task 'attempt_local1815770300_0001_m_000000_0' done.

    14/12/14 10:52:06 INFO mapred.LocalJobRunner: Finishing task: attempt_local1815770300_0001_m_000000_0

    14/12/14 10:52:06 INFO mapred.LocalJobRunner: Map task executor complete.

    14/12/14 10:52:06 INFO mapred.Task:  Using ResourceCalculatorPlugin : null

    14/12/14 10:52:06 INFO mapred.LocalJobRunner: 

    14/12/14 10:52:06 INFO mapred.Merger: Merging 1 sorted segments

    14/12/14 10:52:06 INFO mapred.Merger: Down to the last merge-pass, with 1 segments left of total size: 90 bytes

    14/12/14 10:52:06 INFO mapred.LocalJobRunner: 

    14/12/14 10:52:06 INFO mapred.Task: Task:attempt_local1815770300_0001_r_000000_0 is done. And is in the process of commiting

    14/12/14 10:52:06 INFO mapred.LocalJobRunner: 

    14/12/14 10:52:06 INFO mapred.Task: Task attempt_local1815770300_0001_r_000000_0 is allowed to commit now

    14/12/14 10:52:06 INFO output.FileOutputCommitter: Saved output of task 'attempt_local1815770300_0001_r_000000_0' to hdfs://localhost:9000/user/liaoliuqing/out

    14/12/14 10:52:06 INFO mapred.LocalJobRunner: reduce > reduce

    14/12/14 10:52:06 INFO mapred.Task: Task 'attempt_local1815770300_0001_r_000000_0' done.

    14/12/14 10:52:07 INFO mapred.JobClient:  map 100% reduce 100%

    14/12/14 10:52:07 INFO mapred.JobClient: Job complete: job_local1815770300_0001

    14/12/14 10:52:07 INFO mapred.JobClient: Counters: 19

    14/12/14 10:52:07 INFO mapred.JobClient:   File Output Format Counters 

    14/12/14 10:52:07 INFO mapred.JobClient:     Bytes Written=43

    14/12/14 10:52:07 INFO mapred.JobClient:   File Input Format Counters 

    14/12/14 10:52:07 INFO mapred.JobClient:     Bytes Read=953

    14/12/14 10:52:07 INFO mapred.JobClient:   FileSystemCounters

    14/12/14 10:52:07 INFO mapred.JobClient:     FILE_BYTES_READ=450

    14/12/14 10:52:07 INFO mapred.JobClient:     HDFS_BYTES_READ=1906

    14/12/14 10:52:07 INFO mapred.JobClient:     FILE_BYTES_WRITTEN=135618

    14/12/14 10:52:07 INFO mapred.JobClient:     HDFS_BYTES_WRITTEN=43

    14/12/14 10:52:07 INFO mapred.JobClient:   Map-Reduce Framework

    14/12/14 10:52:07 INFO mapred.JobClient:     Reduce input groups=5

    14/12/14 10:52:07 INFO mapred.JobClient:     Map output materialized bytes=94

    14/12/14 10:52:07 INFO mapred.JobClient:     Combine output records=0

    14/12/14 10:52:07 INFO mapred.JobClient:     Map input records=9

    14/12/14 10:52:07 INFO mapred.JobClient:     Reduce shuffle bytes=0

    14/12/14 10:52:07 INFO mapred.JobClient:     Reduce output records=5

    14/12/14 10:52:07 INFO mapred.JobClient:     Spilled Records=16

    14/12/14 10:52:07 INFO mapred.JobClient:     Map output bytes=72

    14/12/14 10:52:07 INFO mapred.JobClient:     Total committed heap usage (bytes)=329252864

    14/12/14 10:52:07 INFO mapred.JobClient:     SPLIT_RAW_BYTES=118

    14/12/14 10:52:07 INFO mapred.JobClient:     Map output records=8

    14/12/14 10:52:07 INFO mapred.JobClient:     Combine input records=0


    14/12/14 10:52:07 INFO mapred.JobClient:     Reduce input records=8
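    The counters are consistent with the nine sample records: 9 map input records, 8 map output records (one record fails the quality check), and 5 reduce groups, one per distinct year. As a sanity check, assuming the per-year maxima implied by the sample data and the default TextOutputFormat (one "key<TAB>value" line per record), the 43 output bytes can be reproduced:

```python
# Hypothetical reconstruction of the job's output file, assuming the
# classic NCDC field offsets and TextOutputFormat ("year\tmax\n" lines).
expected = {"1901": 42, "1902": 212, "1903": 412, "1904": 32, "1905": 102}
output = "".join(f"{year}\t{temp}\n" for year, temp in sorted(expected.items()))
print(len(output.encode()))   # compare with HDFS_BYTES_WRITTEN=43 in the log
```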







  • Original post: https://www.cnblogs.com/eaglegeek/p/4557846.html