• Learning Flink from 0 to 1 in Practice (18): A simple real-time example of reading Kafka data with Flink SQL and writing it to ClickHouse via JDBC


    Overview

    Read data from Kafka, run it through a simple ETL step, and write the results into ClickHouse via JDBC.

    Code

    Define the POJO class:

    public class Student {
        private int id;
        private String name;
        private String password;
        private int age;
        private String date;
        // constructor, setters, and getters omitted
    }
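
    For reference, each Kafka message on the student topic should be a flat JSON object whose fields match this POJO and the table schema registered below. A hypothetical sample record (values invented for illustration):

    {"id": 1, "name": "zhangsan", "password": "123456", "age": 18, "date": "2020-09-08"}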

    Full code:

    // Required imports (Flink 1.9-era APIs):
    // import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
    // import org.apache.flink.table.api.java.StreamTableEnvironment;
    // import org.apache.flink.table.api.Table;
    // import org.apache.flink.table.api.Types;
    // import org.apache.flink.table.descriptors.Kafka;
    // import org.apache.flink.table.descriptors.Json;
    // import org.apache.flink.table.descriptors.Schema;
    // import org.apache.flink.api.common.typeinfo.BasicTypeInfo;
    // import org.apache.flink.api.common.typeinfo.TypeInformation;
    // import org.apache.flink.api.java.io.jdbc.JDBCAppendTableSink;
    // import org.apache.flink.table.sinks.TableSink;
    // import java.util.Properties;

    final StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
    env.setParallelism(1);
    final StreamTableEnvironment tableEnv = StreamTableEnvironment.create(env);

    // ############ Define the Kafka consumer source ############
    Properties props = new Properties();
    props.put("bootstrap.servers", "localhost:9092");
    props.put("zookeeper.connect", "localhost:2181"); // only needed by the legacy 0.8 consumer; harmless here
    props.put("group.id", "metric-group");
    props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
    props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
    props.put("auto.offset.reset", "latest");

    tableEnv.connect(new Kafka().version("0.10")
            .topic("student").properties(props).startFromLatest())
        .withFormat(new Json().deriveSchema())
        .withSchema(new Schema().field("id", Types.INT())
            .field("name", Types.STRING())
            .field("password", Types.STRING())
            .field("age", Types.INT())
            .field("date", Types.STRING()))
        .inAppendMode()
        .registerTableSource("kafkaTable");
    Table result = tableEnv.sqlQuery("SELECT * FROM kafkaTable");

    // ############ Define the ClickHouse JDBC sink ############
    String targetTable = "clickhouse";
    TypeInformation[] fieldTypes = {BasicTypeInfo.INT_TYPE_INFO, BasicTypeInfo.STRING_TYPE_INFO, BasicTypeInfo.STRING_TYPE_INFO, BasicTypeInfo.INT_TYPE_INFO, BasicTypeInfo.STRING_TYPE_INFO};
    TableSink jdbcSink = JDBCAppendTableSink.builder()
        .setDrivername("ru.yandex.clickhouse.ClickHouseDriver")
        .setDBUrl("jdbc:clickhouse://localhost:8123")
        .setQuery("insert into student_local(id, name, password, age, date) values(?, ?, ?, ?, ?)")
        .setParameterTypes(fieldTypes)
        .setBatchSize(15) // rows are buffered and flushed to ClickHouse in batches of 15
        .build();

    tableEnv.registerTableSink(targetTable,
        new String[]{"id", "name", "password", "age", "date"},
        new TypeInformation[]{Types.INT(), Types.STRING(), Types.STRING(), Types.INT(), Types.STRING()},
        jdbcSink);

    result.insertInto(targetTable);
    env.execute("Flink add sink");
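
    The sink assumes a ClickHouse table named student_local already exists; its DDL is not shown in the post. A minimal sketch, assuming a single-node MergeTree table with column types matching the POJO:

    CREATE TABLE student_local
    (
        id       Int32,
        name     String,
        password String,
        age      Int32,
        date     String  -- kept as String to match the POJO; a Date column would also work
    ) ENGINE = MergeTree()
    ORDER BY id;

    The _local suffix conventionally names the per-shard table in a ClickHouse cluster; in a clustered setup writes usually target the local table while queries go through a Distributed table. Once the job is running, rows should arrive in batches of 15 (the sink's batch size), which you can verify with select count() from student_local in clickhouse-client.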

    POM:

    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-java</artifactId>
        <version>${flink.version}</version>
        <!--<scope>provided</scope>-->
    </dependency>
    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-streaming-java_${scala.binary.version}</artifactId>
        <version>${flink.version}</version>
        <!--<scope>provided</scope>-->
    </dependency>
    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-streaming-scala_${scala.binary.version}</artifactId>
        <version>${flink.version}</version>
        <!--<scope>provided</scope>-->
    </dependency>
    <dependency>
        <groupId>ru.yandex.clickhouse</groupId>
        <artifactId>clickhouse-jdbc</artifactId>
        <version>0.2</version>
    </dependency>
    <dependency>
        <groupId>org.apache.httpcomponents</groupId>
        <artifactId>httpcore</artifactId>
        <version>4.4.4</version>
    </dependency>
    <dependency>
        <groupId>com.google.guava</groupId>
        <artifactId>guava</artifactId>
        <version>19.0</version>
    </dependency>
    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-jdbc_${scala.binary.version}</artifactId>
        <version>${flink.version}</version>
    </dependency>
    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-json</artifactId>
        <version>${flink.version}</version>
    </dependency>

    <!-- Table API (Java bridge) -->
    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-table-api-java-bridge_${scala.binary.version}</artifactId>
        <version>${flink.version}</version>
    </dependency>

    <!-- Connector dependencies must be in the default scope (compile). -->
    <!-- Kafka consumer -->
    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-connector-kafka-0.10_${scala.binary.version}</artifactId>
        <version>${flink.version}</version>
    </dependency>
    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-table-planner_${scala.binary.version}</artifactId>
        <version>${flink.version}</version>
    </dependency>
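
    The POM references flink.version and scala.binary.version without showing their values. The APIs used here (the connect() descriptor, JDBCAppendTableSink, the Kafka 0.10 connector) date from around Flink 1.8/1.9, so a plausible properties block would be (hypothetical versions, adjust to your environment):

    <properties>
        <flink.version>1.9.0</flink.version>
        <scala.binary.version>2.11</scala.binary.version>
    </properties>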
    Author: 大码王

  • Original article: https://www.cnblogs.com/huanghanyu/p/13632754.html