• Notes on Lesson 10 of the Spark 3000 Disciples course: Spark development in Java


    Tonight I attended Lesson 10 of Wang Jialin's course, Spark development in Java. The homework: build Spark's WordCount in Java using Maven and run it on a cluster.

    First, configure pom.xml:

    <groupId>com.dt.spark</groupId>
    <artifactId>SparkApps</artifactId>
    <version>0.0.1-SNAPSHOT</version>
    <packaging>jar</packaging>

    <dependencies>
      <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-core_2.10</artifactId>
        <version>1.6.0</version>
      </dependency>
      <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-sql_2.10</artifactId>
        <version>1.6.0</version>
      </dependency>
      <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-client</artifactId>
        <version>2.6.0</version>
      </dependency>
    </dependencies>
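    To compile against the right JDK and produce a jar that spark-submit can run, a build section along these lines is commonly added to the pom. This is a sketch, not from the course itself; the plugin version and the 1.7 source level are assumptions:

```xml
<build>
  <plugins>
    <!-- Compile for the JDK available on the cluster (assumed 1.7 here) -->
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-compiler-plugin</artifactId>
      <version>3.3</version>
      <configuration>
        <source>1.7</source>
        <target>1.7</target>
      </configuration>
    </plugin>
  </plugins>
</build>
```

    Running `mvn clean package` then produces `target/SparkApps-0.0.1-SNAPSHOT.jar`, which can be copied to the server (renamed to wc.jar below).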

    Then write the program:

    import java.util.Arrays;

    import org.apache.spark.SparkConf;
    import org.apache.spark.api.java.JavaPairRDD;
    import org.apache.spark.api.java.JavaRDD;
    import org.apache.spark.api.java.JavaSparkContext;
    import org.apache.spark.api.java.function.FlatMapFunction;
    import org.apache.spark.api.java.function.Function2;
    import org.apache.spark.api.java.function.PairFunction;
    import org.apache.spark.api.java.function.VoidFunction;

    import scala.Tuple2;

    public class WordCount {

        public static void main(String[] args) {

            SparkConf conf = new SparkConf().setAppName("Spark WordCount by java");
            JavaSparkContext sc = new JavaSparkContext(conf);
            // Java indexes command-line arguments with brackets, not parentheses
            JavaRDD<String> lines = sc.textFile(args[0]);

            // Split each line into words
            JavaRDD<String> words = lines.flatMap(new FlatMapFunction<String, String>() {
                public Iterable<String> call(String line) throws Exception {
                    return Arrays.asList(line.split(" "));
                }
            });

            // Map each word to a (word, 1) pair
            JavaPairRDD<String, Integer> pairs = words.mapToPair(new PairFunction<String, String, Integer>() {
                public Tuple2<String, Integer> call(String word) throws Exception {
                    return new Tuple2<String, Integer>(word, 1);
                }
            });

            // Sum the counts for each word
            JavaPairRDD<String, Integer> wordsCount = pairs.reduceByKey(new Function2<Integer, Integer, Integer>() {
                public Integer call(Integer v1, Integer v2) {
                    return v1 + v2;
                }
            });

            // Note: on a cluster this prints to the executors' stdout, not the driver's
            wordsCount.foreach(new VoidFunction<Tuple2<String, Integer>>() {
                public void call(Tuple2<String, Integer> pair) throws Exception {
                    System.out.println(pair._1 + ":" + pair._2);
                }
            });

            sc.close();
        }
    }
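    The flatMap → mapToPair → reduceByKey pipeline has a plain-Java analogue. As a sketch of the same counting logic (my own illustration, no Spark required; the class and method names here are hypothetical), Java 8 streams express it as a flatMap followed by a grouping collector:

```java
import java.util.Arrays;
import java.util.Map;
import java.util.function.Function;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class LocalWordCount {

    // Same shape as the Spark job, but on an in-memory stream of lines:
    // split lines into words (flatMap), then count occurrences per word
    // (the local analogue of mapToPair + reduceByKey)
    public static Map<String, Long> count(Stream<String> lines) {
        return lines
                .flatMap(line -> Arrays.stream(line.split(" ")))
                .collect(Collectors.groupingBy(Function.identity(), Collectors.counting()));
    }

    public static void main(String[] args) {
        Map<String, Long> counts = count(Stream.of("a b a", "b c"));
        counts.forEach((word, n) -> System.out.println(word + ":" + n));
    }
}
```

    This runs in a single JVM, so it is only a way to check the counting logic; the Spark version distributes the same steps across the cluster.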
    

      

    Package the project into a jar, copy it to the server, and run:

    /usr/lib/spark/bin/spark-submit --master  yarn-client  --class com.dt.spark.WordCount --executor-memory 2G --executor-cores  4 ~/spark/wc.jar  ./mydir/tmp.txt

    The result matches the output of the Scala version.

    For follow-up lessons, see Wang Jialin (王家林_DT大数据梦工厂) on Sina Weibo: http://weibo.com/ilovepains

    Wang Jialin (王家林), billed as China's foremost Spark expert. WeChat public account: DT_Spark

    Blog: http://bolg.sina.com.cn/ilovepains

     

    Please credit the source when reposting!

  • Original post: https://www.cnblogs.com/haitianS/p/5123048.html