Both sortByKey and sortBy are transformation operators.
The sortByKey source code is as follows:
def sortByKey(ascending: Boolean = true, numPartitions: Int = self.partitions.length)
    : RDD[(K, V)] = self.withScope {
  val part = new RangePartitioner(numPartitions, self, ascending)
  new ShuffledRDD[K, V, V](self, part)
    .setKeyOrdering(if (ascending) ordering else ordering.reverse)
}
Parameter 1: ascending, which defaults to true (ascending order)
Parameter 2: numPartitions, the number of partitions
Note that sortByKey is defined only on pair RDDs (RDD[(K, V)]) and sorts records by their key; to sort a plain RDD by an arbitrary criterion, use sortBy, shown below.
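A minimal sketch of sortByKey on a small pair RDD (assuming a SparkSession named spark is already in scope; the variable names and data are hypothetical):

import org.apache.spark.rdd.RDD

val pairs: RDD[(String, Int)] = spark.sparkContext.parallelize(
  Seq(("b", 2), ("c", 3), ("a", 1)))
// ascending order by key (the default)
pairs.sortByKey().collect()                   // Array((a,1), (b,2), (c,3))
// descending order by key
pairs.sortByKey(ascending = false).collect()  // Array((c,3), (b,2), (a,1))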
The sortBy source code is as follows:
def sortBy[K](
    f: (T) => K,
    ascending: Boolean = true,
    numPartitions: Int = this.partitions.length)
    (implicit ord: Ordering[K], ctag: ClassTag[K]): RDD[T] = withScope {
  this.keyBy[K](f)
    .sortByKey(ascending, numPartitions)
    .values
}
Parameter 1: f, a function whose return value is used as the sort key
Parameter 2: ascending, which defaults to true (ascending order)
Parameter 3: numPartitions, the number of partitions
As the source shows, sortBy simply keys each record with f via keyBy, delegates the actual sorting to sortByKey, and then strips the keys off again with values.
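Because sortBy takes an arbitrary key function, it is available on any RDD, not just pair RDDs; a minimal sketch (again assuming an existing SparkSession named spark, with hypothetical data):

// sort a plain RDD of numbers
val nums = spark.sparkContext.parallelize(Seq(3, 1, 2))
nums.sortBy(identity).collect()   // Array(1, 2, 3)

// sort a pair RDD by its value rather than its key
val kv = spark.sparkContext.parallelize(Seq(("a", 2), ("b", 1)))
kv.sortBy(_._2).collect()         // Array((b,1), (a,2))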
Example: sorting WordCount results
Data:
hadoop hadoop hadoop
spark spark spark spark spark
hadoop hadoop hadoop
spark spark spark spark spark
hive hive hive hive
hadoop hadoop hadoop
spark spark spark spark spark
hadoop hadoop hadoop
spark spark spark spark spark
apache apache apache apache
hadoop hadoop hadoop
spark spark spark spark spark
hive hive hive hive
hadoop hadoop hadoop
spark spark spark spark spark
apache apache apache apache
Using sortByKey:
import org.apache.spark.rdd.RDD

val data: RDD[String] = spark.sparkContext.textFile(datapath, 2)
data.flatMap(x => x.split(" ")).map((_, 1)).reduceByKey(_ + _)
  .sortByKey(false)   // descending order by key
  .collect.foreach(println(_))
Using sortBy:
val data: RDD[String] = spark.sparkContext.textFile(datapath, 2)
data.flatMap(x => x.split(" ")).map((_, 1)).reduceByKey(_ + _)
  .sortBy(x => x._1, false)   // descending order by key
  .collect.foreach(println(_))
Result (the same for both jobs):
(spark,30)
(hive,8)
(hadoop,18)
(apache,8)
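Both jobs above sort by the word itself; to rank words by frequency instead, pass sortBy a key function that selects the count (a sketch under the same assumptions as the examples above):

data.flatMap(_.split(" ")).map((_, 1)).reduceByKey(_ + _)
  .sortBy(_._2, ascending = false)   // descending order by count
  .collect.foreach(println)
// prints (spark,30), (hadoop,18), then (hive,8) and (apache,8);
// the two words tied at 8 may appear in either order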