Experiment Principle
The purpose of the "data deduplication" exercise is to grasp and apply the idea of parallelization in order to filter data in a meaningful way. Seemingly complex tasks such as counting the number of distinct values in a large data set, or extracting the set of visiting locations from website logs, all come down to data deduplication.
The goal of deduplication is that any value appearing more than once in the raw data shows up exactly once in the output file. In the MapReduce flow, the <key, value> pairs emitted by map are grouped by the shuffle phase into <key, value-list> pairs and handed to reduce. This naturally suggests sending all records of the same value to the same reduce task: no matter how many times the value occurs, it only needs to be written once in the final result. Concretely, the reduce input should use the data value itself as the key, while the value-list carries no information (it can simply be empty). When reduce receives a <key, value-list> pair, it copies the input key to the output key, sets the value to empty, and emits <key, value>.
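For instance, a minimal, generic sketch of this pattern in Java could look like the following (the class names here are illustrative only; the full program actually used in this experiment is listed later in the source code section):

import java.io.IOException;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

// Generic deduplication sketch: the record itself is the map output key, the value carries no information.
public class DedupSketch {

    public static class DedupMapper extends Mapper<Object, Text, Text, NullWritable> {
        @Override
        protected void map(Object key, Text value, Context context)
                throws IOException, InterruptedException {
            // Emit the whole input record as the key.
            context.write(value, NullWritable.get());
        }
    }

    public static class DedupReducer extends Reducer<Text, NullWritable, Text, NullWritable> {
        @Override
        protected void reduce(Text key, Iterable<NullWritable> values, Context context)
                throws IOException, InterruptedException {
            // All duplicates of a record arrive grouped under one key; write the key exactly once.
            context.write(key, NullWritable.get());
        }
    }
}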
Experiment Data
We are given a data file from an e-commerce website, named buyer_favorite1, which records the items users have favorited and the dates they were favorited. Each record in buyer_favorite1 has three fields (user id, item id, favorite date), separated by whitespace. Since the full data set is very large, only a small excerpt is used here for convenience:
user id    item id    favorite date
10181 1000481 2010-04-04 16:54:31
20001 1001597 2010-04-07 15:07:52
20001 1001560 2010-04-07 15:08:27
20042 1001368 2010-04-08 08:20:30
20067 1002061 2010-04-08 16:45:33
20056 1003289 2010-04-12 10:50:55
20056 1003290 2010-04-12 11:57:35
20056 1003292 2010-04-12 12:05:29
20054 1002420 2010-04-14 15:24:12
20055 1001679 2010-04-14 19:46:04
20054 1010675 2010-04-14 15:23:53
20054 1002429 2010-04-14 17:52:45
20076 1002427 2010-04-14 19:35:39
20054 1003326 2010-04-20 12:54:44
20056 1002420 2010-04-15 11:24:49
20064 1002422 2010-04-15 11:35:54
20056 1003066 2010-04-15 11:43:01
20056 1003055 2010-04-15 11:43:06
20056 1010183 2010-04-15 11:45:24
20056 1002422 2010-04-15 11:45:49
20056 1003100 2010-04-15 11:45:54
20056 1003094 2010-04-15 11:45:57
20056 1003064 2010-04-15 11:46:04
20056 1010178 2010-04-15 16:15:20
20076 1003101 2010-04-15 16:37:27
20076 1003103 2010-04-15 16:37:05
20076 1003100 2010-04-15 16:37:18
20076 1003066 2010-04-15 16:37:31
20054 1003103 2010-04-15 16:40:14
20054 1003100 2010-04-15 16:40:16
The task is to write a MapReduce program in Java that deduplicates by item id, i.e., produces the list of distinct items that appear in users' favorites. The expected result is:
- item id
- 1000481
- 1001368
- 1001560
- 1001597
- 1001679
- 1002061
- 1002420
- 1002422
- 1002427
- 1002429
- 1003055
- 1003064
- 1003066
- 1003094
- 1003100
- 1003101
- 1003103
- 1003289
- 1003290
- 1003292
- 1003326
- 1010178
- 1010183
- 1010675
Experiment source code:
package mapreduce;

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;

public class Filter {

    // Mapper: extract the item id (second field) from each record and emit it as the key,
    // with NullWritable as a placeholder value.
    public static class Map extends Mapper<Object, Text, Text, NullWritable> {
        private final Text newKey = new Text();

        @Override
        public void map(Object key, Text value, Context context)
                throws IOException, InterruptedException {
            String line = value.toString();
            if (!line.isEmpty()) {
                // The delimiter here must match the one actually used in the input file.
                String[] arr = line.split(" ");
                newKey.set(arr[1]);
                context.write(newKey, NullWritable.get());
            }
        }
    }

    // Reducer: identical item ids are grouped together by the shuffle phase,
    // so writing each key exactly once yields the deduplicated result.
    public static class Reduce extends Reducer<Text, NullWritable, Text, NullWritable> {
        @Override
        public void reduce(Text key, Iterable<NullWritable> values, Context context)
                throws IOException, InterruptedException {
            context.write(key, NullWritable.get());
        }
    }

    public static void main(String[] args)
            throws IOException, ClassNotFoundException, InterruptedException {
        Configuration conf = new Configuration();
        conf.set("dfs.client.use.datanode.hostname", "true");

        Job job = Job.getInstance(conf, "filter");
        job.setJarByClass(Filter.class);
        job.setMapperClass(Map.class);
        job.setReducerClass(Reduce.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(NullWritable.class);
        job.setInputFormatClass(TextInputFormat.class);
        job.setOutputFormatClass(TextOutputFormat.class);

        Path in = new Path("hdfs://**:9000/user/hadoop/input/b.txt");
        Path out = new Path("hdfs://**:9000/user/hadoop/output");

        // Delete the output directory if it already exists, so the job can be rerun.
        FileSystem fileSystem = out.getFileSystem(conf);
        if (fileSystem.exists(out)) {
            fileSystem.delete(out, true); // true: delete recursively even if the directory is non-empty
        }

        FileInputFormat.addInputPath(job, in);
        FileOutputFormat.setOutputPath(job, out);
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
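Once the input file has been uploaded to HDFS (the NameNode address above is masked and must be replaced with your own), the job is typically packaged into a jar and launched with the standard Hadoop launcher, e.g. hadoop jar filter.jar mapreduce.Filter, after which the deduplicated item ids can be inspected with hdfs dfs -cat /user/hadoop/output/part-r-00000 (the exact jar name and output file name depend on your environment).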
Errors encountered
Array index out of bounds:
java.lang.Exception: java.lang.ArrayIndexOutOfBoundsException: 1
    at org.apache.hadoop.mapred.LocalJobRunner$Job.runTasks(LocalJobRunner.java:462)
    at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:522)
Caused by: java.lang.ArrayIndexOutOfBoundsException: 1
    at mapreduce.Filter$Map.map(Filter.java:29)
    at mapreduce.Filter$Map.map(Filter.java:1)
    at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:146)
    at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:787)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
    at org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:243)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Cause: the array indexing in the code looks correct, yet an out-of-bounds exception is still thrown. The root cause is a character-encoding mismatch: HDFS text is read as UTF-8, so the input file must be saved in UTF-8 and re-uploaded, after which the error disappears. Also double-check which delimiter was actually used when splitting the fields, for example a tab (\t) versus a space.
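In addition to re-saving the file as UTF-8, the mapper can also be written defensively so that a blank line or an unexpected delimiter does not crash the whole job. The following is a sketch of such a variant (not the version used in the program above): it splits on any run of whitespace and skips records with fewer than two fields:

// Defensive variant of the map() method: tolerant of blank lines and of tab- or space-separated input.
public void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
    String line = value.toString().trim();
    if (line.isEmpty()) {
        return; // skip blank lines
    }
    String[] arr = line.split("\\s+"); // matches tabs and spaces alike
    if (arr.length < 2) {
        return; // skip malformed records instead of throwing ArrayIndexOutOfBoundsException
    }
    context.write(new Text(arr[1]), NullWritable.get());
}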