• Spark Streaming Basics


    Preface

    Starting with Spark 2.2.x, Spark offers a similar real-time computing framework alongside Spark Streaming: Structured Streaming.

    To quote a blog post on the differences between Spark Streaming and Structured Streaming (recommended further reading): Structured Streaming provides a set of high-level declarative APIs that make writing streaming computations considerably easier than with Spark Streaming, and it also provides end-to-end exactly-once semantics.

    Its core advantages: streaming computation replaces batch-oriented computation, the declarative API lowers the difficulty of writing code, and exactly-once semantics can be guaranteed.
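    As a quick illustration of the declarative API, here is a minimal Structured Streaming sketch in Java (a line-count query over a socket source, adapted from the pattern in the Structured Streaming guide; the host and port are placeholder values, and this code is not part of the original example):

    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Row;
    import org.apache.spark.sql.SparkSession;
    import org.apache.spark.sql.streaming.StreamingQuery;

    SparkSession spark = SparkSession.builder()
            .appName("StructuredStreamingSketch")
            .master("local[2]")
            .getOrCreate();

    // Read lines from a socket source (host and port are placeholders).
    Dataset<Row> lines = spark.readStream()
            .format("socket")
            .option("host", "localhost")
            .option("port", 9999)
            .load();

    // Declarative query: count occurrences of each distinct line.
    Dataset<Row> counts = lines.groupBy("value").count();

    // Stream the running counts to the console.
    StreamingQuery query = counts.writeStream()
            .outputMode("complete")
            .format("console")
            .start();
    query.awaitTermination();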

    1. StreamingContext in detail

    There are two ways to create one:

    Method 1: from a SparkConf
    val conf = new SparkConf().setAppName(appName).setMaster(master)
    val ssc = new StreamingContext(conf, Seconds(1))

    Method 2: from an existing SparkContext
    val sc = new SparkContext(conf)
    val ssc = new StreamingContext(sc, Seconds(1))

    Once a StreamingContext has been defined, you must do the following:
    1. Create input DStreams to define the input data sources.
    2. Define the real-time computation logic by applying transformation and output operations to the DStreams.
    3. Call the StreamingContext's start() method to begin processing data.
    4. Call the StreamingContext's awaitTermination() method to wait for the application to terminate. You can stop it manually with CTRL+C, or simply let it keep running and computing.
    5. You can also stop the application by calling the StreamingContext's stop() method.
    Points to keep in mind:
    1. Once a StreamingContext has been started, no computation logic can be added to it; for example, you cannot apply another operator to a DStream after start() has been called.
    2. Once a StreamingContext has been stopped it cannot be restarted: after calling stop(), start() cannot be called again.
    3. Only one StreamingContext can be active in a JVM at the same time; do not create two StreamingContexts in one application.
    4. Calling stop() also stops the internal SparkContext. If you do not want that, for example because you still need the SparkContext to create other contexts such as a SQLContext, call stop(false).
    5. A SparkContext can be used to create several StreamingContexts, as long as the previous one is stopped with stop(false) before the next one is created (see the sketch below).
    6. The executors of a Spark Streaming application run long-lived tasks, so they hold on to the CPU cores allocated to the application; once Spark Streaming is running, those cores on a node are no longer available to other applications. A single thread therefore cannot both receive and process data properly: at least one core must be dedicated to receiving data (the receiver) and at least one core to processing it. An HDFS-based file stream, however, needs no receiver, so this restriction does not apply to it.
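    To illustrate notes 4 and 5, a minimal sketch in Java (the variable names are hypothetical; a real application would define DStreams on each context and start it before stopping it):

    import org.apache.spark.api.java.JavaSparkContext;
    import org.apache.spark.streaming.Durations;
    import org.apache.spark.streaming.api.java.JavaStreamingContext;

    JavaSparkContext sc = new JavaSparkContext(conf);
    JavaStreamingContext ssc1 = new JavaStreamingContext(sc, Durations.seconds(5));
    // ... define DStreams on ssc1, call start(), awaitTerminationOrTimeout(...) ...
    ssc1.stop(false);   // stop streaming only; the underlying SparkContext stays alive
    // The same SparkContext can now back a new StreamingContext.
    JavaStreamingContext ssc2 = new JavaStreamingContext(sc, Durations.seconds(10));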
    The full example code (word count over files appearing in an HDFS directory) is as follows:

    import java.util.Arrays;
    import java.util.Iterator;

    import org.apache.spark.SparkConf;
    import org.apache.spark.api.java.function.FlatMapFunction;
    import org.apache.spark.api.java.function.Function2;
    import org.apache.spark.api.java.function.PairFunction;
    import org.apache.spark.streaming.Durations;
    import org.apache.spark.streaming.api.java.JavaDStream;
    import org.apache.spark.streaming.api.java.JavaPairDStream;
    import org.apache.spark.streaming.api.java.JavaStreamingContext;

    import scala.Tuple2;

    public class HDFSWordCount {
        public static void main(String[] args) throws Exception {
            SparkConf conf = new SparkConf().setAppName("HDFSWordCountJava").setMaster("local[2]");
            JavaStreamingContext javaStreamingContext = new JavaStreamingContext(conf, Durations.seconds(10));

            // Monitor an HDFS directory; each new file becomes input for the next batch.
            JavaDStream<String> lines = javaStreamingContext.textFileStream("hdfs://hadoop-100:9000/stream/");

            // Split each line into words. Note: in Spark 2.x, FlatMapFunction#call returns an Iterator
            // (in Spark 1.x it returned an Iterable).
            JavaDStream<String> words = lines.flatMap(new FlatMapFunction<String, String>() {
                @Override
                public Iterator<String> call(String s) throws Exception {
                    return Arrays.asList(s.split(" ")).iterator();
                }
            });

            // Map each word to a (word, 1) pair.
            JavaPairDStream<String, Integer> wordsNumber = words.mapToPair(new PairFunction<String, String, Integer>() {
                @Override
                public Tuple2<String, Integer> call(String s) throws Exception {
                    return new Tuple2<>(s, 1);
                }
            });

            // Sum the counts for each word within the batch.
            JavaPairDStream<String, Integer> result = wordsNumber.reduceByKey(new Function2<Integer, Integer, Integer>() {
                @Override
                public Integer call(Integer v1, Integer v2) throws Exception {
                    return v1 + v2;
                }
            });

            result.print();
            javaStreamingContext.start();
            javaStreamingContext.awaitTermination();
            javaStreamingContext.close();
        }
    }

    2. Kafka: differences between the direct and receiver-based approaches to receiving data

    The receiver-based approach is implemented with Kafka's high-level consumer API. Data fetched from Kafka by the receiver is stored in the memory of the Spark executors, and the jobs launched by Spark Streaming then process that data. With the default configuration, however, this approach can lose data when the underlying infrastructure fails. To guarantee zero data loss you must enable Spark Streaming's Write Ahead Log (WAL), which synchronously writes the received Kafka data to a write-ahead log on a distributed file system such as HDFS. Even if a node fails, the data can then be recovered from the write-ahead log, at the cost of lower throughput.
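    A minimal sketch of the receiver-based approach, assuming the spark-streaming-kafka-0-8 integration and reusing the javaStreamingContext from the example above (the ZooKeeper address, consumer group, and topic name are placeholders):

    import java.util.HashMap;
    import java.util.Map;

    import org.apache.spark.streaming.api.java.JavaPairReceiverInputDStream;
    import org.apache.spark.streaming.kafka.KafkaUtils;

    Map<String, Integer> topics = new HashMap<>();
    topics.put("wordcount", 1);   // topic -> number of consumer threads for that topic

    // High-level consumer API: consumed offsets are tracked in ZooKeeper.
    JavaPairReceiverInputDStream<String, String> kafkaStream = KafkaUtils.createStream(
            javaStreamingContext,
            "hadoop-100:2181",        // ZooKeeper quorum (placeholder)
            "wordcount-group",        // consumer group id (placeholder)
            topics);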

    The direct approach periodically queries Kafka for the latest offset of each topic+partition and uses that to define the offset range of each batch. When the job that processes the data starts, it uses Kafka's simple consumer API to read the data within the specified offset ranges. This approach has the following advantages:
    1. Simplified parallelism: to read multiple partitions you do not need to create multiple input DStreams and union them. Spark creates as many RDD partitions as there are Kafka partitions and reads from Kafka in parallel, so there is a one-to-one mapping between Kafka partitions and RDD partitions.
    2. Efficiency: to guarantee zero data loss with the receiver-based approach, the WAL has to be enabled, which is inefficient because the data is effectively copied twice: Kafka already replicates it, and it is copied again into the WAL. The direct approach does not rely on a receiver and does not need a WAL; as long as the data is replicated in Kafka, it can be recovered from Kafka's replicas.

    3. Exactly-once semantics:
        The receiver-based approach uses Kafka's high-level API to store consumed offsets in ZooKeeper, the traditional way of consuming Kafka data. Combined with the WAL it can guarantee zero data loss, but it cannot guarantee that each record is processed exactly once; records may be processed twice, because Spark and ZooKeeper can get out of sync.
        The direct approach uses Kafka's simple API, and Spark Streaming itself tracks the consumed offsets and saves them in its checkpoint. Since Spark's offset tracking is consistent with its own processing, exactly-once consumption can be guaranteed.
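    A minimal sketch of the direct approach, assuming the spark-streaming-kafka-0-10 integration and reusing the javaStreamingContext from the example above (the broker address, group id, and topic name are placeholders):

    import java.util.Arrays;
    import java.util.Collection;
    import java.util.HashMap;
    import java.util.Map;

    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.common.serialization.StringDeserializer;
    import org.apache.spark.streaming.api.java.JavaInputDStream;
    import org.apache.spark.streaming.kafka010.ConsumerStrategies;
    import org.apache.spark.streaming.kafka010.KafkaUtils;
    import org.apache.spark.streaming.kafka010.LocationStrategies;

    Map<String, Object> kafkaParams = new HashMap<>();
    kafkaParams.put("bootstrap.servers", "hadoop-100:9092");   // placeholder broker list
    kafkaParams.put("key.deserializer", StringDeserializer.class);
    kafkaParams.put("value.deserializer", StringDeserializer.class);
    kafkaParams.put("group.id", "wordcount-group");            // placeholder group id
    kafkaParams.put("auto.offset.reset", "latest");
    kafkaParams.put("enable.auto.commit", false);

    Collection<String> topics = Arrays.asList("wordcount");    // placeholder topic

    // No receiver: each batch reads its offset ranges directly from the Kafka brokers,
    // and Spark Streaming itself tracks the consumed offsets.
    JavaInputDStream<ConsumerRecord<String, String>> directStream = KafkaUtils.createDirectStream(
            javaStreamingContext,
            LocationStrategies.PreferConsistent(),
            ConsumerStrategies.<String, String>Subscribe(topics, kafkaParams));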

    3. DStream operations

    Transformations:

    map(func): Return a new DStream by passing each element of the source DStream through a function func.
    flatMap(func): Similar to map, but each input item can be mapped to 0 or more output items.
    filter(func): Return a new DStream by selecting only the records of the source DStream on which func returns true.
    repartition(numPartitions): Changes the level of parallelism in this DStream by creating more or fewer partitions.
    union(otherStream): Return a new DStream that contains the union of the elements in the source DStream and otherDStream.
    count(): Return a new DStream of single-element RDDs by counting the number of elements in each RDD of the source DStream.
    reduce(func): Return a new DStream of single-element RDDs by aggregating the elements in each RDD of the source DStream using a function func (which takes two arguments and returns one). The function should be associative and commutative so that it can be computed in parallel.
    countByValue(): When called on a DStream of elements of type K, return a new DStream of (K, Long) pairs where the value of each key is its frequency in each RDD of the source DStream.
    reduceByKey(func, [numTasks]): When called on a DStream of (K, V) pairs, return a new DStream of (K, V) pairs where the values for each key are aggregated using the given reduce function. Note: by default, this uses Spark's default number of parallel tasks (2 for local mode; in cluster mode the number is determined by the config property spark.default.parallelism) to do the grouping. You can pass an optional numTasks argument to set a different number of tasks.
    join(otherStream, [numTasks]): When called on two DStreams of (K, V) and (K, W) pairs, return a new DStream of (K, (V, W)) pairs with all pairs of elements for each key.
    cogroup(otherStream, [numTasks]): When called on a DStream of (K, V) and (K, W) pairs, return a new DStream of (K, Seq[V], Seq[W]) tuples.
    transform(func): Return a new DStream by applying an RDD-to-RDD function to every RDD of the source DStream. This can be used to do arbitrary RDD operations on the DStream.
    updateStateByKey(func): Return a new "state" DStream where the state for each key is updated by applying the given function on the previous state of the key and the new values for the key. This can be used to maintain arbitrary state data for each key.
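    For example, updateStateByKey can keep a running word count across batches. A minimal sketch building on the wordsNumber stream from the earlier example (the checkpoint path is a placeholder; Optional here is org.apache.spark.api.java.Optional from Spark 2.x):

    import java.util.List;
    import org.apache.spark.api.java.Optional;

    // Checkpointing is required for stateful transformations (placeholder path).
    javaStreamingContext.checkpoint("hdfs://hadoop-100:9000/checkpoint/");

    JavaPairDStream<String, Integer> runningCounts = wordsNumber.updateStateByKey(
            new Function2<List<Integer>, Optional<Integer>, Optional<Integer>>() {
                @Override
                public Optional<Integer> call(List<Integer> newValues, Optional<Integer> state) {
                    // Add this batch's counts to the previous state (0 if the key is new).
                    int sum = state.orElse(0);
                    for (Integer v : newValues) {
                        sum += v;
                    }
                    return Optional.of(sum);
                }
            });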
    Window operations:
    window(windowLength, slideInterval): Return a new DStream which is computed based on windowed batches of the source DStream.
    countByWindow(windowLength, slideInterval): Return a sliding window count of elements in the stream.
    reduceByWindow(func, windowLength, slideInterval): Return a new single-element stream, created by aggregating elements in the stream over a sliding interval using func. The function should be associative and commutative so that it can be computed correctly in parallel.
    reduceByKeyAndWindow(func, windowLength, slideInterval, [numTasks]): When called on a DStream of (K, V) pairs, returns a new DStream of (K, V) pairs where the values for each key are aggregated using the given reduce function func over batches in a sliding window. Note: by default, this uses Spark's default number of parallel tasks (2 for local mode; in cluster mode the number is determined by the config property spark.default.parallelism) to do the grouping. You can pass an optional numTasks argument to set a different number of tasks.
    reduceByKeyAndWindow(func, invFunc, windowLength, slideInterval, [numTasks]): A more efficient version of the above reduceByKeyAndWindow() where the reduce value of each window is calculated incrementally using the reduce values of the previous window. This is done by reducing the new data that enters the sliding window, and "inverse reducing" the old data that leaves the window. An example would be "adding" and "subtracting" counts of keys as the window slides. However, it is applicable only to "invertible reduce functions", that is, reduce functions which have a corresponding "inverse reduce" function (taken as parameter invFunc). As in reduceByKeyAndWindow, the number of reduce tasks is configurable through an optional argument. Note that checkpointing must be enabled to use this operation.
    countByValueAndWindow(windowLength, slideInterval, [numTasks]): When called on a DStream of (K, V) pairs, returns a new DStream of (K, Long) pairs where the value of each key is its frequency within a sliding window. As in reduceByKeyAndWindow, the number of reduce tasks is configurable through an optional argument.
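    A minimal sketch of a windowed count over the same wordsNumber stream: word counts over the last 60 seconds, recomputed every 20 seconds (both durations are arbitrary; the window length and slide interval must be multiples of the batch interval):

    JavaPairDStream<String, Integer> windowedCounts = wordsNumber.reduceByKeyAndWindow(
            new Function2<Integer, Integer, Integer>() {
                @Override
                public Integer call(Integer v1, Integer v2) throws Exception {
                    return v1 + v2;
                }
            },
            Durations.seconds(60),    // window length
            Durations.seconds(20));   // slide interval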

    Output operations:

    print(): Prints the first ten elements of every batch of data in a DStream on the driver node running the streaming application. This is useful for development and debugging. (In the Python API this is called pprint().)
    saveAsTextFiles(prefix, [suffix]): Save this DStream's contents as text files. The file name at each batch interval is generated based on prefix and suffix: "prefix-TIME_IN_MS[.suffix]".
    saveAsObjectFiles(prefix, [suffix]): Save this DStream's contents as SequenceFiles of serialized Java objects. The file name at each batch interval is generated based on prefix and suffix: "prefix-TIME_IN_MS[.suffix]". (Not available in the Python API.)
    saveAsHadoopFiles(prefix, [suffix]): Save this DStream's contents as Hadoop files. The file name at each batch interval is generated based on prefix and suffix: "prefix-TIME_IN_MS[.suffix]". (Not available in the Python API.)
    foreachRDD(func): The most generic output operator; it applies a function, func, to each RDD generated from the stream. This function should push the data in each RDD to an external system, such as saving the RDD to files, or writing it over the network to a database. Note that the function func is executed in the driver process running the streaming application, and will usually have RDD actions in it that force the computation of the streaming RDDs.
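    A common pattern with foreachRDD is to create the connection to the external system once per partition rather than once per record. A minimal sketch building on the result stream from the word-count example (assumes Java 8 lambdas; the sink itself is left as a placeholder):

    result.foreachRDD(rdd -> {
        rdd.foreachPartition(partition -> {
            // Open one connection to the external store here (placeholder), not per record.
            while (partition.hasNext()) {
                Tuple2<String, Integer> record = partition.next();
                // write record._1() / record._2() to the external store
            }
            // Close the connection here.
        });
    });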
  • Original article: https://www.cnblogs.com/parent-absent-son/p/11805712.html