Error message:
[xiaoqiu@s150 /home/xiaoqiu]$ hadoop jar wordcounter.jar com.cr.wordcount.WordcountApp hdfs://s150/user/xiaoqiu/data/wc.txt hdfs://s150/user/xiaoqiu/data/out
18/01/05 09:12:52 INFO client.RMProxy: Connecting to ResourceManager at s150/192.168.109.150:8032
18/01/05 09:12:55 WARN mapreduce.JobResourceUploader: Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
18/01/05 09:13:13 INFO input.FileInputFormat: Total input paths to process : 1
18/01/05 09:13:18 INFO mapreduce.JobSubmitter: number of splits:1
18/01/05 09:13:21 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1515157734609_0002
18/01/05 09:14:14 INFO impl.YarnClientImpl: Submitted application application_1515157734609_0002
18/01/05 09:14:16 INFO mapreduce.Job: The url to track the job: http://s150:8088/proxy/application_1515157734609_0002/
18/01/05 09:14:16 INFO mapreduce.Job: Running job: job_1515157734609_0002
18/01/05 09:20:09 INFO mapreduce.Job: Job job_1515157734609_0002 running in uber mode : false
18/01/05 09:20:16 INFO mapreduce.Job: map 0% reduce 0%
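The WARN line above ("Implement the Tool interface and execute your application with ToolRunner") points at how the driver was written. The original source of com.cr.wordcount.WordcountApp is not shown, so the following is only a sketch of the standard Tool/ToolRunner driver pattern; the mapper and reducer class names (TokenizerMapper, IntSumReducer) are illustrative assumptions, and compiling it requires the Hadoop client libraries on the classpath:

```java
// Sketch of a WordCount driver implementing Tool, launched via ToolRunner.
// Package/class names mirror the command line above; mapper/reducer
// classes are illustrative assumptions, not the original code.
package com.cr.wordcount;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class WordcountApp extends Configured implements Tool {

    @Override
    public int run(String[] args) throws Exception {
        // getConf() carries any generic options (-D key=value, etc.)
        // that ToolRunner parsed off the command line.
        Job job = Job.getInstance(getConf(), "wordcount");
        job.setJarByClass(WordcountApp.class);
        job.setMapperClass(TokenizerMapper.class);  // illustrative mapper
        job.setCombinerClass(IntSumReducer.class);  // illustrative reducer
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        return job.waitForCompletion(true) ? 0 : 1;
    }

    public static void main(String[] args) throws Exception {
        // ToolRunner strips generic options before handing args to run(),
        // which is exactly what the WARN message asks for.
        System.exit(ToolRunner.run(new Configuration(), new WordcountApp(), args));
    }
}
```

With this pattern the same `hadoop jar` command works unchanged, but options such as `-D mapreduce.job.reduces=2` are parsed for you instead of triggering the warning.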
Solution:
I increased the memory of the virtual machines. Each VM had originally been configured with 512 MB; after raising it to 1 GB, the job ran to completion.
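A job stuck at "map 0% reduce 0%" like this is typically YARN failing to schedule containers on memory-starved nodes. Besides giving each VM more RAM, the YARN memory settings can be aligned with what is actually available. This is only an illustrative sketch for nodes with about 1 GB of RAM; the values are assumptions, not taken from the cluster above:

```xml
<!-- yarn-site.xml: illustrative values for ~1 GB NodeManager hosts -->
<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>1024</value> <!-- RAM this NodeManager may hand out to containers -->
</property>
<property>
  <name>yarn.scheduler.minimum-allocation-mb</name>
  <value>256</value>  <!-- smallest container the scheduler will grant -->
</property>
<property>
  <name>yarn.nodemanager.vmem-check-enabled</name>
  <value>false</value> <!-- avoid container kills from strict virtual-memory checks -->
</property>
```

On small test clusters the defaults (8 GB per NodeManager, 1 GB minimum allocation) can exceed physical memory, so requests sit pending forever; shrinking them, or enlarging the VMs as done here, lets containers actually start.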