• Flink output to Kafka (two methods)


    Method 1: read from a file and write to Kafka

       1. Code

    import org.apache.flink.api.common.serialization.SimpleStringSchema
    import org.apache.flink.streaming.api.scala.StreamExecutionEnvironment
    import org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer011

    // Case class for temperature sensor readings
    case class SensorReading(id: String, timestamp: Long, temperature: Double)

    object KafkaSinkTest {
      def main(args: Array[String]): Unit = {
        val env = StreamExecutionEnvironment.getExecutionEnvironment
        env.setParallelism(1)

        import org.apache.flink.api.scala._
        // Source: read sensor records from a local file (a sample file is sketched after these steps)
        val inputStream = env.readTextFile("sensor.txt")
        val dataStream = inputStream.map(x => {
          val arr = x.split(",")
          // Convert back to String so SimpleStringSchema can serialize it for the sink
          SensorReading(arr(0).trim, arr(1).trim.toLong, arr(2).trim.toDouble).toString
        })

        // Sink: write each record to the Kafka topic "sinkTest"
        dataStream.addSink(new FlinkKafkaProducer011[String]("localhost:9092", "sinkTest", new SimpleStringSchema()))
        dataStream.print()

        env.execute("kafka sink test")
      }
    }

    2. Start ZooKeeper: see https://www.cnblogs.com/wddqy/p/12156527.html
    3. Start Kafka: see https://www.cnblogs.com/wddqy/p/12156527.html
    4. Create a Kafka consumer and observe the results (a command sketch follows below)
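
    The original post does not show sensor.txt. A minimal file matching the comma-separated parsing above (id, timestamp, temperature; the values here are hypothetical) might look like:

    sensor_1,1547718199,35.8
    sensor_2,1547718201,15.4
    sensor_3,1547718202,6.7

    For step 4, assuming a local Kafka installation is on the PATH of its own bin directory, a console consumer watching the sink topic would be along these lines:

    bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic sinkTest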

    Method 2: Kafka to Kafka

       1. Code

    import java.util.Properties
    import org.apache.flink.api.common.serialization.SimpleStringSchema
    import org.apache.flink.streaming.api.scala.StreamExecutionEnvironment
    import org.apache.flink.streaming.connectors.kafka.{FlinkKafkaConsumer011, FlinkKafkaProducer011}

    // Case class for temperature sensor readings
    case class SensorReading(id: String, timestamp: Long, temperature: Double)

    object KafkaSinkTest1 {
      def main(args: Array[String]): Unit = {
        val env = StreamExecutionEnvironment.getExecutionEnvironment
        env.setParallelism(1)

        import org.apache.flink.api.scala._
        // Source: consume from the Kafka topic "sensor"
        val properties = new Properties()
        properties.setProperty("bootstrap.servers", "localhost:9092")
        properties.setProperty("group.id", "consumer-group")
        properties.setProperty("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer")
        properties.setProperty("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer")
        properties.setProperty("auto.offset.reset", "latest")

        val inputStream = env.addSource(new FlinkKafkaConsumer011[String]("sensor", new SimpleStringSchema(), properties))
        val dataStream = inputStream.map(x => {
          val arr = x.split(",")
          // Convert back to String for SimpleStringSchema (a typed-schema alternative is sketched after these steps)
          SensorReading(arr(0).trim, arr(1).trim.toLong, arr(2).trim.toDouble).toString
        })

        // Sink: write each record to the Kafka topic "sinkTest"
        dataStream.addSink(new FlinkKafkaProducer011[String]("localhost:9092", "sinkTest", new SimpleStringSchema()))
        dataStream.print()

        env.execute("kafka sink test")
      }
    }
    2. Start ZooKeeper: see https://www.cnblogs.com/wddqy/p/12156527.html
    3. Start Kafka: see https://www.cnblogs.com/wddqy/p/12156527.html
    4. Create a Kafka producer and a Kafka consumer, run the job, and observe the results (see the sketches below)
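
    For step 4, assuming the same local broker, the source topic "sensor" can be fed from a console producer; the consumer command from Method 1 works unchanged for watching the sink topic:

    bin/kafka-console-producer.sh --broker-list localhost:9092 --topic sensor

    Mapping each SensorReading to a String is the simplest way to reuse SimpleStringSchema, but it throws away the typed stream. As an alternative sketch (not from the original post, and assuming the same SensorReading fields and topic names), a custom SerializationSchema lets the sink consume the case class directly:

    import java.nio.charset.StandardCharsets
    import org.apache.flink.api.common.serialization.SerializationSchema

    // Hypothetical schema: serialize a SensorReading as a UTF-8 CSV line
    class SensorReadingSchema extends SerializationSchema[SensorReading] {
      override def serialize(element: SensorReading): Array[Byte] =
        s"${element.id},${element.timestamp},${element.temperature}".getBytes(StandardCharsets.UTF_8)
    }

    // Usage sketch: skip the .toString map and pass the schema to the producer
    // typedStream.addSink(new FlinkKafkaProducer011[SensorReading]("localhost:9092", "sinkTest", new SensorReadingSchema))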

    If this helped, comments and tips are welcome. Thanks!

  • Original post: https://www.cnblogs.com/wddqy/p/12172801.html