• Spark: group by key, then compute the max, min, and average for each key. Two approaches: groupByKey or reduceByKey


    What you're getting back is an object which allows you to iterate over the results. You can turn the results of groupByKey into a list by calling list() on the values, e.g.
    
    example = sc.parallelize([(0, u'D'), (0, u'D'), (1, u'E'), (2, u'F')])
    
    example.groupByKey().collect()
    # Gives [(0, <pyspark.resultiterable.ResultIterable object ......]
    
    example.groupByKey().map(lambda x : (x[0], list(x[1]))).collect()
    # Gives [(0, [u'D', u'D']), (1, [u'E']), (2, [u'F'])]

    # OR:
    example.groupByKey().mapValues(list)
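
    Building on that, the per-key statistics from the title can be computed with a
    plain mapValues over the grouped values. A minimal sketch, using a made-up
    numeric RDD (the names nums and stats are only for illustration):

    nums = sc.parallelize([(0, 1.0), (0, 3.0), (1, 2.0), (2, 5.0), (2, 7.0)])

    # Materialize each group as a list, then compute max / min / mean over it.
    stats = nums.groupByKey().mapValues(list).mapValues(
        lambda vals: (max(vals), min(vals), sum(vals) / len(vals)))

    stats.collect()
    # e.g. [(0, (3.0, 1.0, 2.0)), (1, (2.0, 2.0, 2.0)), (2, (7.0, 5.0, 6.0))]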
     
    Hey Ron, 
    
    It was pretty much exactly as Sean had depicted. I just needed to provide
    count with an anonymous function telling it which elements to count. Since I
    wanted to count them all, the function is simply "true".
    
            val grouped = rdd.groupByKey().mapValues { mcs =>
              val values = mcs.map(_.foo.toDouble)
              // count(x => true) counts every element, i.e. the group size n
              val n = values.count(x => true)
              val sum = values.sum
              val sumSquares = values.map(x => x * x).sum
              // population standard deviation: sqrt(n * sum(x^2) - (sum x)^2) / n
              val stddev = math.sqrt(n * sumSquares - sum * sum) / n
              print("stddev: " + stddev)
              stddev
            }
    
    
    I hope that helps
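
    For readers following along in PySpark rather than Scala, a rough equivalent of
    the same per-key standard deviation (the pairs RDD and its (key, float) records
    are hypothetical stand-ins for rdd and _.foo above):

    import math

    pairs = sc.parallelize([("a", 1.0), ("a", 2.0), ("a", 4.0), ("b", 3.0)])

    def pop_stddev(values):
        # Same formula as the Scala snippet: sqrt(n * sum(x^2) - (sum x)^2) / n
        vals = list(values)
        n = len(vals)
        s = sum(vals)
        sq = sum(v * v for v in vals)
        return math.sqrt(n * sq - s * s) / n

    grouped = pairs.groupByKey().mapValues(pop_stddev)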
    Just don't. Use reduceByKey:
    
    # Duplicate the value so one reduceByKey call can track both the min and the max.
    lines.map(lambda x: (x[1][0:4], (x[0], float(x[3])))) \
        .map(lambda kv: (kv[0], (kv[1], kv[1]))) \
        .reduceByKey(lambda x, y: (
            min(x[0], y[0], key=lambda p: p[1]),
            max(x[1], y[1], key=lambda p: p[1])))
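
    The snippet above tracks the min and max; to also get the average from the title
    without collecting all the values, one option is to carry (min, max, sum, count)
    through reduceByKey. A rough sketch with a made-up data RDD (data and agg are
    illustrative names):

    data = sc.parallelize([("a", 1.0), ("a", 3.0), ("b", 2.0)])

    agg = (data
        .mapValues(lambda v: (v, v, v, 1))           # (min, max, sum, count)
        .reduceByKey(lambda x, y: (min(x[0], y[0]),
                                   max(x[1], y[1]),
                                   x[2] + y[2],
                                   x[3] + y[3]))
        .mapValues(lambda t: (t[0], t[1], t[2] / t[3])))   # (min, max, avg)

    agg.collect()
    # e.g. [('a', (1.0, 3.0, 2.0)), ('b', (2.0, 2.0, 2.0))]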
  • Original article: https://www.cnblogs.com/bonelee/p/7156188.html