• Setting up a Hadoop Development Environment on Windows with IntelliJ IDEA


    Preface: a problem like this really shouldn't need its own blog post, but it tormented me for far too long. Now that it is finally fixed, I have to write it down; otherwise I would owe an apology to my own time and to this blog.


    Introduction

    Environment: Windows 10 + JDK 1.8 + IntelliJ IDEA. No manual Hadoop installation is required.

    Maven

    Maven is a project management and build automation tool provided by the Apache Software Foundation. It is built around the Project Object Model (POM): a single pom.xml file describes how a project is built, reported on, and documented. To use Hadoop, you only need to declare the Hadoop artifacts and their version numbers in this file, and Maven resolves and downloads the dependencies automatically.

    IntelliJ IDEA

    The reason for this post is that switching from Eclipse to IDEA caused me quite a few problems. I previously used Eclipse with a pseudo-distributed Hadoop setup, but it never felt comfortable to work with.

    WordCount

    1. Create a new project


    2. Configure the Hadoop dependencies with Maven
    After completing the step above, the newly created Maven project should open normally.

    The file to edit is the project's pom.xml, which Maven reads. Fill in the dependencies there; the completed file looks like this:

    <?xml version="1.0" encoding="UTF-8"?>
    <project xmlns="http://maven.apache.org/POM/4.0.0"
    		 xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    		 xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    	<modelVersion>4.0.0</modelVersion>
    
    	<groupId>com.cmn</groupId>
    	<artifactId>test</artifactId>
    	<version>1.0-SNAPSHOT</version>
    
    	<repositories>
    		<repository>
    			<id>apache</id>
    			<!-- Maven Central; declaring it here is optional since it is Maven's default repository -->
    			<url>https://repo.maven.apache.org/maven2</url>
    		</repository>
    	</repositories>
    
    	<dependencies>
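    		<!-- All Hadoop artifacts are pinned to the same version (2.6.1) so their transitive dependencies stay consistent -->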
    		<dependency>
    			<groupId>junit</groupId>
    			<artifactId>junit</artifactId>
    			<version>4.12</version>
    			<scope>test</scope>
    		</dependency>
    
    		<dependency>
    			<groupId>org.apache.hadoop</groupId>
    			<artifactId>hadoop-common</artifactId>
    			<version>2.6.1</version>
    		</dependency>
    		<dependency>
    			<groupId>org.apache.hadoop</groupId>
    			<artifactId>hadoop-hdfs</artifactId>
    			<version>2.6.1</version>
    		</dependency>
    		<dependency>
    			<groupId>commons-cli</groupId>
    			<artifactId>commons-cli</artifactId>
    			<version>1.2</version>
    		</dependency>
    		<dependency>
    			<groupId>org.apache.hadoop</groupId>
    			<artifactId>hadoop-mapreduce-client-core</artifactId>
    			<version>2.6.1</version>
    		</dependency>
    		<dependency>
    			<groupId>org.apache.hadoop</groupId>
    			<artifactId>hadoop-mapreduce-client-jobclient</artifactId>
    			<version>2.6.1</version>
    		</dependency>
    		<dependency>
    			<groupId>log4j</groupId>
    			<artifactId>log4j</artifactId>
    			<version>1.2.17</version>
    		</dependency>
    		<dependency>
    			<groupId>org.apache.hadoop</groupId>
    			<artifactId>hadoop-mapreduce-examples</artifactId>
    			<version>2.6.1</version>
    		</dependency>
    		<dependency>
    			<groupId>org.projectlombok</groupId>
    			<artifactId>lombok</artifactId>
    			<version>1.16.6</version>
    		</dependency>
    		<dependency>
    			<groupId>org.apache.zookeeper</groupId>
    			<artifactId>zookeeper</artifactId>
    			<version>3.4.8</version>
    		</dependency>
    	</dependencies>
    
    </project>
    

    Compared with the generated skeleton, the <repositories></repositories> and <dependencies></dependencies> sections have been added. IDEA will ask whether to import the changes; choose Import Changes.

    3. Expand the project tree to src->main->java and create a new Java class in that directory named WordCount. The contents of WordCount.java are as follows:

    import java.io.IOException;
    import java.util.StringTokenizer;
    
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
    
    public class WordCount {
    
    	public static class TokenizerMapper
    			extends Mapper<Object, Text, Text, IntWritable> {
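    		// Splits each input line into whitespace-separated tokens and emits (token, 1).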
    
    		private final static IntWritable one = new IntWritable(1);
    		private Text word = new Text();
    
    		public void map(Object key, Text value, Context context
    		) throws IOException, InterruptedException {
    			StringTokenizer itr = new StringTokenizer(value.toString());
    			while (itr.hasMoreTokens()) {
    				word.set(itr.nextToken());
    				context.write(word, one);
    			}
    		}
    	}
    
    	public static class IntSumReducer
    			extends Reducer<Text, IntWritable, Text, IntWritable> {
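    		// Receives all counts emitted for one word and writes (word, sum).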
    		private IntWritable result = new IntWritable();
    
    		public void reduce(Text key, Iterable<IntWritable> values,
    						   Context context
    		) throws IOException, InterruptedException {
    			int sum = 0;
    			for (IntWritable val : values) {
    				sum += val.get();
    			}
    			result.set(sum);
    			context.write(key, result);
    		}
    	}
    
    	public static void main(String[] args) throws Exception {
    		Configuration conf = new Configuration();
    		Job job = Job.getInstance(conf, "word count");
    		job.setJarByClass(WordCount.class);
    		job.setMapperClass(TokenizerMapper.class);
    		job.setCombinerClass(IntSumReducer.class);
    		job.setReducerClass(IntSumReducer.class);
    		job.setOutputKeyClass(Text.class);
    		job.setOutputValueClass(IntWritable.class);
    		FileInputFormat.addInputPath(job, new Path(args[0]));
    		FileOutputFormat.setOutputPath(job, new Path(args[1]));
    		System.exit(job.waitForCompletion(true) ? 0 : 1);
    	}
    }
    

    4. After entering the code, create a new directory named input at the same level as src, and put a text file inside it; the file name and contents can be anything. When finished, the project tree should show the input directory alongside src.
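
    To make the expected result concrete, here is a small example of my own (not taken from the original post). Suppose the text file in input contains the single line:

    hello hadoop hello world

    After a successful run, the output directory should contain a file named part-r-00000 with one tab-separated count per word, roughly:

    hadoop	1
    hello	2
    world	1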

    5. Choose File->Project Structure, select Modules under Project Settings in the dialog, and set the Language Level to 8 - Lambdas...

    6. Open Run->Edit Configurations... (the Run/Debug Configurations dialog), click the green plus in the top-left corner, and add an Application. Set Name to WordCount, Main class to WordCount, and Program arguments to input/ output/; these two arguments become args[0] and args[1] in main.

    7. Run the job via Run->Run 'WordCount'. After a few seconds the results will appear in the output directory.
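
    One pitfall when re-running the job: FileOutputFormat refuses to write into an output directory that already exists, so a second run fails until output/ is deleted. As a rough sketch of a workaround (my own addition, not part of the original post; the class and method names are made up), the directory can be cleared programmatically before the job is submitted:

    import java.io.IOException;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    // Hypothetical helper: deletes the job's output directory if it already exists,
    // so WordCount can be re-run without an "Output directory ... already exists" error.
    public class OutputCleaner {

    	public static void deleteIfExists(Configuration conf, String dir) throws IOException {
    		Path outputPath = new Path(dir);
    		// FileSystem.get returns the default file system (the local one in this setup)
    		FileSystem fs = FileSystem.get(conf);
    		if (fs.exists(outputPath)) {
    			fs.delete(outputPath, true);   // true = delete recursively
    		}
    	}
    }

    Calling OutputCleaner.deleteIfExists(conf, args[1]) at the start of main() makes repeated runs from IDEA painless; just make sure args[1] really points at a throwaway directory.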

    Possible errors


    Solution: if an unexplained error appears at this point, simply restart IDEA; that was enough to fix it for me.
