• Getting Started with Apache Sedona Stream Processing


    Introduction to Apache Flink

        Apache Flink is an open-source stream-processing framework developed by the Apache Software Foundation. Its core is a distributed streaming dataflow engine written in Java and Scala. Flink executes arbitrary dataflow programs in a data-parallel and pipelined manner, and its pipelined runtime system can run both batch-processing and stream-processing programs.
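
        To make that execution model concrete, here is a minimal self-contained Flink 1.14 DataStream sketch (the class name is illustrative): it splits lines into words and keeps a running count per word, while Flink parallelizes and pipelines the operators.

    import org.apache.flink.api.common.typeinfo.Types;
    import org.apache.flink.api.java.tuple.Tuple2;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
    import org.apache.flink.util.Collector;

    public class WordCountSketch {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
            env.fromElements("apache flink", "apache sedona")
               // split each line into (word, 1) pairs
               .flatMap((String line, Collector<Tuple2<String, Integer>> out) -> {
                   for (String w : line.split(" ")) out.collect(Tuple2.of(w, 1));
               })
               .returns(Types.TUPLE(Types.STRING, Types.INT))
               .keyBy(t -> t.f0)   // partition by word
               .sum(1)             // running count per word
               .print();
            env.execute("wordcount-sketch");
        }
    }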

    Introduction to Apache Sedona

        Apache Sedona (incubating) is a cluster computing system for processing large-scale spatial data. Sedona extends Apache Spark and Apache Flink with a set of out-of-the-box distributed Spatial Datasets and Spatial SQL, so that large-scale spatial data can be loaded, processed, and analyzed efficiently across machines.
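
        Concretely, once Sedona's functions are registered (see the code below), geometry constructors and predicates such as ST_GeomFromWKT and ST_Contains become available in Flink SQL. A minimal sketch, assuming a tableEnv set up as in the job below:

    // Does the 10x10 square contain the point (5 5)?  Prints a single row: true.
    Table contains = tableEnv.sqlQuery(
            "SELECT ST_Contains("
            + "ST_GeomFromWKT('POLYGON ((0 0, 10 0, 10 10, 0 10, 0 0))'), "
            + "ST_GeomFromWKT('POINT (5 5)'))");
    contains.execute().print();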

    Processing workflow

    Java code

    import org.apache.flink.api.common.functions.MapFunction;
    import org.apache.flink.api.common.typeinfo.Types;
    import org.apache.flink.streaming.api.datastream.DataStream;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
    import org.apache.flink.table.api.EnvironmentSettings;
    import org.apache.flink.table.api.Table;
    import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;
    import org.apache.flink.types.Row;
    import org.apache.sedona.flink.SedonaFlinkRegistrator;

    // Set up the streaming environment and its Table API counterpart.
    StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
    EnvironmentSettings settings = EnvironmentSettings.newInstance().inStreamingMode().build();
    StreamTableEnvironment tableEnv = StreamTableEnvironment.create(env, settings);

    // Register Sedona's geometry types and spatial SQL functions (ST_*).
    SedonaFlinkRegistrator.registerType(env);
    SedonaFlinkRegistrator.registerFunc(tableEnv);

    // Read lines of "id,WKT" from a local netcat socket (test data below).
    DataStream<String> socketTextStream = env.socketTextStream("localhost", 8888);

    // Parse each line into a Row of (WKT string, id).
    DataStream<Row> rowDataStream = socketTextStream.map(new MapFunction<String, Row>() {
        private static final long serialVersionUID = -3351062125994879777L;

        @Override
        public Row map(String line) throws Exception {
            String[] fields = line.split(",");
            int id = Integer.parseInt(fields[0]);
            String pointWkt = fields[1];
            return Row.of(pointWkt, id);
        }
    }).returns(Types.ROW(Types.STRING, Types.INT));

    // Expose the stream as a table; the columns get the default names f0 (WKT) and f1 (id).
    Table pointTable = tableEnv.fromDataStream(rowDataStream);
    tableEnv.createTemporaryView("myTable", pointTable);

    // Convert the WKT strings into geometries.
    Table geomTbl = tableEnv.sqlQuery("SELECT ST_GeomFromWKT(f0) AS geom_polygon, f1 FROM myTable");
    tableEnv.createTemporaryView("geoTable", geomTbl);

    // Keep only the points contained in a hard-coded query polygon.
    geomTbl = tableEnv.sqlQuery("SELECT f1, geom_polygon FROM geoTable WHERE ST_Contains(ST_GeomFromWKT('MultiPolygon (((110.499997 20.010307, 110.499995 20.010759, 110.500473 20.01076, 110.500475 20.010308, 110.499997 20.010307)))'), geom_polygon)");
    geomTbl.execute().print();
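
        The last query keeps only the points contained in the hard-coded MultiPolygon (a small rectangle near 110.5°E, 20.01°N), and execute().print() streams the matches to the console. In a real pipeline you would usually convert the result table back into a DataStream and hand it to a sink instead; a minimal sketch, reusing the env, tableEnv, and geomTbl from above (print() is a stand-in for a real sink such as Kafka):

    // Hedged sketch: turn the filtered table back into a stream of Rows.
    // toDataStream works here because the query is insert-only.
    DataStream<Row> matches = tableEnv.toDataStream(geomTbl);
    matches.print();   // stand-in sink; swap in e.g. a Kafka producer
    env.execute("sedona-contains-filter");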
    

    pom.xml configuration

    <?xml version="1.0" encoding="UTF-8"?>
    <project xmlns="http://maven.apache.org/POM/4.0.0"
             xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
             xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
        <modelVersion>4.0.0</modelVersion>
    
        <groupId>cn.hwang</groupId>
        <artifactId>geospark-dev</artifactId>
        <version>1.0</version>
    
        <properties>
            <scala.version>2.12</scala.version>
            <scala.compat.version>2.12</scala.compat.version>
            <geospark.version>1.2.0</geospark.version>
        <spark.compatible.version>3.0</spark.compatible.version>
            <spark.version>3.1.2</spark.version>
            <hadoop.version>3.2.0</hadoop.version>
            <geotools.version>24.0</geotools.version>
            <flink.version>1.14.3</flink.version>
            <kafka.version>2.8.1</kafka.version>
            <dependency.scope>compile</dependency.scope>
            <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
    
        </properties>
    
        <dependencies>
    
            <dependency>
                <groupId>org.apache.sedona</groupId>
                <artifactId>sedona-core-3.0_2.12</artifactId>
                <version>1.2.0-incubating</version>
            </dependency>
            <dependency>
                <groupId>org.apache.sedona</groupId>
                <artifactId>sedona-viz-3.0_2.12</artifactId>
                <version>1.2.0-incubating</version>
            </dependency>
            <dependency>
                <groupId>org.apache.sedona</groupId>
                <artifactId>sedona-sql-3.0_2.12</artifactId>
                <version>1.2.0-incubating</version>
            </dependency>
            <dependency>
                <groupId>org.apache.sedona</groupId>
                <artifactId>sedona-flink_2.12</artifactId>
                <version>1.2.0-incubating</version>
            </dependency>
            <dependency>
                <groupId>org.scala-lang</groupId>
                <artifactId>scala-library</artifactId>
                <version>2.12.13</version>
            </dependency>
    
            <dependency>
                <groupId>org.apache.spark</groupId>
                <artifactId>spark-core_2.12</artifactId>
                <version>${spark.version}</version>
                <scope>${dependency.scope}</scope>
                <exclusions>
                    <exclusion>
                        <groupId>org.apache.hadoop</groupId>
                        <artifactId>*</artifactId>
                    </exclusion>
                </exclusions>
            </dependency>
            <dependency>
                <groupId>org.apache.spark</groupId>
                <artifactId>spark-sql_2.12</artifactId>
                <version>${spark.version}</version>
                <scope>${dependency.scope}</scope>
            </dependency>
            <dependency>
                <groupId>org.apache.hadoop</groupId>
                <artifactId>hadoop-mapreduce-client-core</artifactId>
                <version>${hadoop.version}</version>
                <scope>${dependency.scope}</scope>
            </dependency>
            <dependency>
                <groupId>org.apache.hadoop</groupId>
                <artifactId>hadoop-common</artifactId>
                <version>${hadoop.version}</version>
                <scope>${dependency.scope}</scope>
            </dependency>
            <!-- https://mvnrepository.com/artifact/org.postgresql/postgresql -->
            <dependency>
                <groupId>org.postgresql</groupId>
                <artifactId>postgresql</artifactId>
                <version>42.2.5</version>
            </dependency>
    
    <!--        <dependency>-->
    <!--            <groupId>org.geotools</groupId>-->
    <!--            <artifactId>gt-shapefile</artifactId>-->
    <!--            <version>22-RC</version>-->
    <!--        </dependency>-->
            <dependency>
                <groupId>org.datasyslab</groupId>
                <artifactId>geotools-wrapper</artifactId>
                <version>1.1.0-25.2</version>
            </dependency>
            <dependency>
                <groupId>org.locationtech.jts</groupId>
                <artifactId>jts-core</artifactId>
                <version>1.18.0</version>
            </dependency>
    <!--        <dependency>-->
    <!--            <groupId>org.wololo</groupId>-->
    <!--            <artifactId>jts2geojson</artifactId>-->
    <!--            <version>0.16.1</version>-->
    <!--        </dependency>-->
            <dependency>
                <groupId>org.wololo</groupId>
                <artifactId>jts2geojson</artifactId>
                <version>0.16.1</version>
                <exclusions>
                    <exclusion>
                        <groupId>org.locationtech.jts</groupId>
                        <artifactId>jts-core</artifactId>
                    </exclusion>
                    <exclusion>
                        <groupId>com.fasterxml.jackson.core</groupId>
                        <artifactId>*</artifactId>
                    </exclusion>
                </exclusions>
            </dependency>
            <dependency>
                <groupId>org.geotools</groupId>
                <artifactId>gt-main</artifactId>
                <version>24.0</version>
            </dependency>
            <!-- https://mvnrepository.com/artifact/org.geotools/gt-referencing -->
            <dependency>
                <groupId>org.geotools</groupId>
                <artifactId>gt-referencing</artifactId>
                <version>24.0</version>
            </dependency>
            <!-- https://mvnrepository.com/artifact/org.geotools/gt-epsg-hsql -->
            <dependency>
                <groupId>org.geotools</groupId>
                <artifactId>gt-epsg-hsql</artifactId>
                <version>24.0</version>
            </dependency>
    
            <dependency>
                <groupId>org.apache.flink</groupId>
                <artifactId>flink-core</artifactId>
                <version>${flink.version}</version>
                <scope>${dependency.scope}</scope>
            </dependency>
            <!--        For Flink DataStream API-->
            <dependency>
                <groupId>org.apache.flink</groupId>
                <artifactId>flink-streaming-java_${scala.compat.version}</artifactId>
                <version>${flink.version}</version>
                <scope>${dependency.scope}</scope>
            </dependency>
            <!-- Kafka  -->
            <dependency>
                <groupId>org.apache.kafka</groupId>
                <artifactId>kafka-clients</artifactId>
                <version>${kafka.version}</version>
            </dependency>
            <!--        Flink Kafka connector-->
            <dependency>
                <groupId>org.apache.flink</groupId>
                <artifactId>flink-connector-kafka_${scala.compat.version}</artifactId>
                <version>${flink.version}</version>
                <scope>${dependency.scope}</scope>
            </dependency>
            <!--        For playing flink in IDE-->
            <dependency>
                <groupId>org.apache.flink</groupId>
                <artifactId>flink-clients_${scala.compat.version}</artifactId>
                <version>${flink.version}</version>
                <scope>${dependency.scope}</scope>
            </dependency>
        <!--        For Flink table api, planner, udf/udt, csv-->
            <dependency>
                <groupId>org.apache.flink</groupId>
                <artifactId>flink-table-api-java-bridge_${scala.compat.version}</artifactId>
                <version>${flink.version}</version>
                <scope>${dependency.scope}</scope>
            </dependency>
        <!--        Starting with Flink 1.14, the Blink planner has been renamed to the official Flink planner-->
            <dependency>
                <groupId>org.apache.flink</groupId>
                <artifactId>flink-table-planner_${scala.compat.version}</artifactId>
                <version>${flink.version}</version>
                <scope>${dependency.scope}</scope>
            </dependency>
            <dependency>
                <groupId>org.apache.flink</groupId>
                <artifactId>flink-table-common</artifactId>
                <version>${flink.version}</version>
                <scope>${dependency.scope}</scope>
            </dependency>
            <dependency>
                <groupId>org.apache.flink</groupId>
                <artifactId>flink-csv</artifactId>
                <version>${flink.version}</version>
                <scope>${dependency.scope}</scope>
            </dependency>
            <!--        For Flink Web Ui in test-->
            <dependency>
                <groupId>org.apache.flink</groupId>
                <artifactId>flink-runtime-web_${scala.compat.version}</artifactId>
                <version>${flink.version}</version>
                <scope>test</scope>
            </dependency>
        </dependencies>
    
        <build>
            <plugins>
                <plugin>
                    <groupId>org.apache.maven.plugins</groupId>
                    <artifactId>maven-compiler-plugin</artifactId>
                    <version>3.8.0</version>
                    <configuration>
                        <source>1.8</source>
                        <target>1.8</target>
                    </configuration>
                </plugin>
            </plugins>
        </build>
    
        <repositories>
            <repository>
                <id>central</id>
                <name>maven.aliyun.com</name>
                <url>https://maven.aliyun.com/repository/public</url>
            </repository>
            <repository>
                <id>maven2-repository.dev.java.net</id>
                <name>Java.net repository</name>
                <url>https://download.java.net/maven/2</url>
            </repository>
            <repository>
                <id>osgeo</id>
                <name>OSGeo Release Repository</name>
                <url>https://repo.osgeo.org/repository/release/</url>
                <snapshots>
                    <enabled>false</enabled>
                </snapshots>
                <releases>
                    <enabled>true</enabled>
                </releases>
            </repository>
            <repository>
                <id>Central</id>
                <name>Central Repository</name>
                <url>https://repo1.maven.org/maven2/</url>
            </repository>
    
        </repositories>
    
    </project>
    

        Download netcat, extract it locally, and run the nc program from cmd to simulate streaming input. Below is the test data template.

    1,Point (110.500235 20.0105335)
    2,Point (110.18832409 20.06088375)
    3,Point (110.18784591 20.06088125)
    4,Point (109.4116775 18.2997045)
    5,Point (109.55539791 18.30762275)
    6,Point (109.5483405 19.778523)
    

        Paste the test data into the nc program and run the code.
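
        If you only want to smoke-test the job without netcat, a hedged alternative is to swap the socket source for an in-memory one and leave the rest of the pipeline unchanged (the job then finishes once the bounded source is drained):

    // Hypothetical stand-in for env.socketTextStream("localhost", 8888):
    DataStream<String> socketTextStream = env.fromElements(
            "1,Point (110.500235 20.0105335)",
            "2,Point (110.18832409 20.06088375)");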

        Apache Sedona can now successfully process spatial stream data. Many thanks to JupiterChow for the help: providing the sample code and configuration, and patiently explaining everything.

    Appendix (deploying Flink 1.14 on Ubuntu; this took several days to install but ended up unused)

    Install the JDK

    sudo apt update 
    sudo apt install openjdk-11-jdk
    
    wget https://dlcdn.apache.org/flink/flink-1.14.4/flink-1.14.4-bin-scala_2.11.tgz
    tar -xzf flink-1.14.4-bin-scala_2.11.tgz
    cd flink-1.14.4
    

    Start the cluster

    ./bin/start-cluster.sh
    

    Run the bundled example

    ./bin/flink run ./examples/batch/WordCount.jar
    

    Open the built-in web UI


    wget https://github.com/glink-incubator/glink/releases/download/release-1.0.0/glink-1.0.0-bin.tar.gz
    tar -zxvf glink-1.0.0-bin.tar.gz

    ./flink-1.14.4/bin/flink run ./examples/batch/WordCount.jar

    References

    https://zhuanlan.zhihu.com/p/447743903

    https://www.cnblogs.com/liufei1983/p/15661322.html

    https://blog.csdn.net/weixin_46684578/article/details/122803180

    https://juejin.cn/post/7023210394894204936

    https://cloud.tencent.com/developer/article/1626610

    https://baike.baidu.com/item/Apache%20Flink/59924858

    https://eternallybored.org/misc/netcat/

    https://sedona.apache.org/
