Creating an Execution Environment
getExecutionEnvironment
Creates an execution environment that represents the context of the current program. If the program is invoked standalone, this method returns a local execution environment; if the program is submitted to a cluster from a command-line client, it returns that cluster's execution environment. In other words, getExecutionEnvironment decides which environment to return based on how the program is run, and it is the most common way to create one.
// Batch API
ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
// Streaming API
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
If no parallelism is set explicitly, the value configured in flink-conf.yaml applies; the default is 1.
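The default can also be overridden per job in code (the value 4 here is only an illustration):
env.setParallelism(4);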
createLocalEnvironment
Returns a local execution environment. The default parallelism can be specified at the call site; if omitted, it defaults to the number of CPU cores.
LocalStreamEnvironment env = StreamExecutionEnvironment.createLocalEnvironment(1);
createRemoteEnvironment
Returns a cluster execution environment and submits the JAR to a remote server. You must specify the JobManager's host and port, plus the JAR file(s) to run on the cluster.
public static StreamExecutionEnvironment createRemoteEnvironment(String host, int port, String... jarFiles)
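A usage sketch; the host, port, and JAR path below are placeholders for your own setup (6123 is the default JobManager RPC port):
StreamExecutionEnvironment env = StreamExecutionEnvironment.createRemoteEnvironment("jobmanager-host", 6123, "D:\\project\\flink-demo\\target\\flink-demo.jar");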
Source
Reading data from a collection
Java entity class:
/**
* @author wen.jie
* @date 2021/9/1 16:45
* Data type for sensor temperature readings
*/
public class SensorReading {
// sensor id
private String id;
// timestamp
private Long timestamp;
// temperature
private Double temperature;
// toString, getters/setters, and constructors omitted
}
Test:
public static void main(String[] args) throws Exception {
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
DataStream<SensorReading> sensorDataStream = env.fromCollection(Arrays.asList(
new SensorReading("sensor_1", 1547718199L, 35.8),
new SensorReading("sensor_6", 1547718201L, 15.4),
new SensorReading("sensor_7", 1547718202L, 6.7),
new SensorReading("sensor_10", 1547718205L, 38.1)
));
DataStream<Integer> integerDataStream = env.fromElements(1, 2, 5);
sensorDataStream.print();
integerDataStream.print().setParallelism(1);
env.execute();
}
Reading data from a file
sensor.txt:
sensor_1 1547718199 35.8
sensor_6 1547718201 15.4
sensor_7 1547718202 6.7
sensor_10 1547718205 38.1
Test code:
public static void main(String[] args) throws Exception {
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
DataStream<String> sensorDataStream = env.readTextFile("D:\\project\\flink-demo\\src\\main\\resources\\sensor.txt");
sensorDataStream.print();
env.execute();
}
Reading data from Kafka
Add the Flink Kafka connector dependency:
<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-connector-kafka-0.11_2.12</artifactId>
<version>1.10.1</version>
</dependency>
Kafka installation package (the installation process itself is not shown here): https://archive.apache.org/dist/kafka/2.1.0/kafka_2.11-2.1.0.tgz
Code:
public static void main(String[] args) throws Exception {
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
Properties properties = new Properties();
properties.setProperty("bootstrap.servers", "192.168.1.77:9092");
properties.setProperty("group.id", "consumer-group");
properties.setProperty("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
properties.setProperty("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
properties.setProperty("auto.offset.reset", "latest");
DataStream<String> sensorDataStream = env.addSource(new FlinkKafkaConsumer011<>("sensor", new SimpleStringSchema(), properties));
sensorDataStream.print();
env.execute();
}
./bin/kafka-console-producer.sh --broker-list 192.168.1.77:9092 --topic sensor
Messages typed into the console producer appear in the Flink job's output (screenshot omitted).
Custom source
Besides the sources above, we can also define our own. All that is needed is to pass in a SourceFunction, invoked as follows:
DataStream<SensorReading> dataStream = env.addSource(new MySensor());
We want to generate random sensor readings; the concrete implementation of MySensor looks like this:
public static class MySensor implements SourceFunction<SensorReading> {
// volatile so that cancel(), called from another thread, is seen by run()
private volatile boolean running = true;
@Override
public void run(SourceContext<SensorReading> ctx) throws Exception {
Random random = new Random();
// initialize 10 sensors with Gaussian-distributed starting temperatures
HashMap<String, Double> sensorTempMap = new HashMap<>();
for (int i = 0; i < 10; i++) {
sensorTempMap.put("sensor_" + (i + 1), 60 + random.nextGaussian() * 20);
}
while (running) {
// random-walk each sensor's temperature and emit a fresh reading
for (String sensorId : sensorTempMap.keySet()) {
Double newTemp = sensorTempMap.get(sensorId) + random.nextGaussian();
sensorTempMap.put(sensorId, newTemp);
ctx.collect(new SensorReading(sensorId, System.currentTimeMillis(), newTemp));
}
// emit one batch of readings per second
Thread.sleep(1000L);
}
}
@Override
public void cancel() {
this.running = false;
}
}
Test method:
public static void main(String[] args) throws Exception {
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
// add the custom source
DataStreamSource<SensorReading> dataStreamSource = env.addSource(new MySensor());
dataStreamSource.print();
env.execute();
}
Sample output (screenshot omitted).
Transform
Basic transformation operators
public static void main(String[] args) throws Exception {
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
env.setParallelism(1);
DataStream<String> inputStream = env.readTextFile("D:\\project\\flink-demo\\src\\main\\resources\\sensor.txt");
//map
DataStream<Integer> mapStream = inputStream.map(String::length);
//flatmap
DataStream<String> flatMapStream = inputStream.flatMap(new FlatMapFunction<String, String>() {
@Override
public void flatMap(String value, Collector<String> out) throws Exception {
String[] strs = value.split(" ");
for (String field : strs) {
out.collect(field);
}
}
});
//filter
DataStream<String> filterStream = inputStream.filter((str) -> str.startsWith("sensor_1"));
mapStream.print("map");
flatMapStream.print("flatMap");
filterStream.print("filter");
env.execute();
}
KeyBy
DataStream → KeyedStream: logically splits a stream into disjoint partitions, each containing the elements with the same key; internally this is implemented with hash partitioning.
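Besides keying by field name, as the examples below do, a type-safe key selector can be used. A sketch, assuming a DataStream<SensorReading> named mapStream as in those examples:
KeyedStream<SensorReading, String> keyedById = mapStream.keyBy(SensorReading::getId);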
Rolling aggregation operators
These operators aggregate over each substream of a KeyedStream.
sum(), min(), max(), minBy(), maxBy(). Note that min()/max() only update the aggregated field of the record, while minBy()/maxBy() return the complete record that holds the minimum/maximum value.
sensor.txt
sensor_1 1547718199 35.8
sensor_6 1547718201 15.4
sensor_7 1547718202 6.7
sensor_10 1547718205 38.1
sensor_10 1547718204 39.1
sensor_1 1547748199 32.8
sensor_7 1547718234 6.1
public static void main(String[] args) throws Exception {
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
env.setParallelism(1);
DataStream<String> inputStream = env.readTextFile("D:\\project\\flink-demo\\src\\main\\resources\\sensor.txt");
DataStream<SensorReading> mapStream = inputStream.map((str) -> {
String[] split = str.split(" ");
return new SensorReading(split[0], Long.parseLong(split[1]), Double.parseDouble(split[2]));
});
// key by sensor id
KeyedStream<SensorReading, Tuple> keyByStream = mapStream.keyBy("id");
// rolling aggregation: take the current maximum (one result emitted per incoming record)
DataStream<SensorReading> maxStream = keyByStream.maxBy("temperature");
keyByStream.print("keyByStream");
maxStream.print("maxStream");
env.execute();
}
Sample output (screenshot omitted).
Reduce aggregation
KeyedStream → DataStream: an aggregation over a keyed stream that combines the current element with the previous aggregation result to produce a new value. The returned stream contains the result of every aggregation step, not just the final one.
Requirement: keep the maximum temperature seen so far, together with the latest timestamp.
public static void main(String[] args) throws Exception {
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
env.setParallelism(1);
DataStream<String> inputStream = env.readTextFile("D:\\project\\flink-demo\\src\\main\\resources\\sensor.txt");
DataStream<SensorReading> mapStream = inputStream.map((str) -> {
String[] split = str.split(" ");
return new SensorReading(split[0], Long.parseLong(split[1]), Double.parseDouble(split[2]));
});
KeyedStream<SensorReading, Tuple> keyedStream = mapStream.keyBy("id");
keyedStream.reduce((v1, v2) -> new SensorReading(v1.getId(), v2.getTimestamp(), Math.max(v1.getTemperature(), v2.getTemperature())))
.print();
env.execute();
}
Splitting a stream: Split and Select
Split:
DataStream → SplitStream: splits one DataStream into two or more DataStreams according to some characteristic of the data.
Select:
SplitStream → DataStream: selects one or more DataStreams from a SplitStream.
Requirement: split the sensor data into two streams by temperature (with 30 degrees as the boundary) and extract the high-temperature stream.
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
env.setParallelism(1);
DataStream<String> inputStream = env.readTextFile("D:\\project\\flink-demo\\src\\main\\resources\\sensor.txt");
DataStream<SensorReading> mapStream = inputStream.map((str) -> {
String[] split = str.split(" ");
return new SensorReading(split[0], Long.parseLong(split[1]), Double.parseDouble(split[2]));
});
SplitStream<SensorReading> splitStream = mapStream.split((value -> value.getTemperature() > 30 ? Collections.singletonList("high") : Collections.singletonList("low")));
splitStream.select("high").print();
env.execute();
Merging streams: Connect and CoMap
Connect:
DataStream, DataStream → ConnectedStreams: connects two data streams while keeping their types. After connect, the two streams are merely placed inside one stream; internally each keeps its own data and form unchanged, and the two streams stay independent of each other.
CoMap, CoFlatMap:
ConnectedStreams → DataStream: operates on a ConnectedStreams with the same semantics as map and flatMap, applying a separate map/flatMap to each of the contained streams.
Test code:
// the code before this point is the same as in the split example above
SplitStream<SensorReading> splitStream = mapStream.split((value -> value.getTemperature() > 30 ? Collections.singletonList("high") : Collections.singletonList("low")));
DataStream<SensorReading> highStream = splitStream.select("high");
DataStream<SensorReading> lowStream = splitStream.select("low");
DataStream<SensorReading> allStream = splitStream.select("high", "low");
// merge the streams
// first convert the high-temperature stream to tuples; after connecting it with the low-temperature stream, emit status information
DataStream<Tuple2<String, Double>> warningStream = highStream.map(new MapFunction<SensorReading, Tuple2<String, Double>>() {
@Override
public Tuple2<String, Double> map(SensorReading value) throws Exception {
return new Tuple2<>(value.getId(), value.getTemperature());
}
});
// connect the two streams
ConnectedStreams<Tuple2<String, Double>, SensorReading> connectedStreams
= warningStream.connect(lowStream);
SingleOutputStreamOperator<Object> resultStream = connectedStreams.map(new CoMapFunction<Tuple2<String, Double>, SensorReading, Object>() {
@Override
public Object map1(Tuple2<String, Double> value) throws Exception {
return new Tuple3<>(value.f0, value.f1, "warning");
}
@Override
public Object map2(SensorReading value) throws Exception {
return new Tuple2<>(value.getId(), "normal");
}
});
resultStream.print();
env.execute();
Merging with union
union:
DataStream → DataStream: performs a union of two or more DataStreams, producing a new DataStream that contains all elements of all the input streams.
- Before a union, the streams must have the same type; connect allows different types, which can be unified later in the coMap.
- Connect only works on two streams; union can combine more than two.
DataStream<SensorReading> unionStream = highStream.union(lowStream);
unionStream.print();
Rich Functions
"Rich functions" are function interfaces provided by the DataStream API; every Flink function class has a Rich version. They differ from regular functions in that they can access the context of the runtime environment and have lifecycle methods, making it possible to implement more complex functionality.
- RichMapFunction
- RichFlatMapFunction
- RichFilterFunction
- ...
A rich function has a lifecycle. Typical lifecycle methods are:
- open() is the initialization method of a rich function; it is called before an operator such as map or filter is invoked.
- close() is the last method called in the lifecycle and performs cleanup work.
- getRuntimeContext() provides information from the function's RuntimeContext, such as the parallelism the function runs with, the task name, and its state.
Test:
public static void main(String[] args) throws Exception {
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
env.setParallelism(4);
DataStream<String> inputStream = env.readTextFile("D:\\project\\flink-demo\\src\\main\\resources\\sensor.txt");
DataStream<SensorReading> mapStream = inputStream.map((str) -> {
String[] split = str.split(" ");
return new SensorReading(split[0], Long.parseLong(split[1]), Double.parseDouble(split[2]));
});
DataStream<Tuple2<String, Integer>> resultStream = mapStream.map(new MyMapper());
resultStream.print();
env.execute();
}
// a custom rich function implementation
public static class MyMapper extends RichMapFunction<SensorReading, Tuple2<String, Integer>>{
@Override
public Tuple2<String, Integer> map(SensorReading value) throws Exception {
// getRuntimeContext().getState();
return new Tuple2<>(value.getId(), getRuntimeContext().getIndexOfThisSubtask());
}
@Override
public void open(Configuration parameters) throws Exception {
// initialization work, typically defining state or opening a database connection
System.out.println("open");
}
@Override
public void close() throws Exception {
// teardown work, typically closing connections and clearing state
System.out.println("close");
}
}
Sink
Flink has no equivalent of Spark's foreach method that lets users act on each record directly; all output to external systems must go through a sink. The final output step of a job usually looks like this:
stream.addSink(new MySink(xxxx))
The framework ships with ready-made sinks for a number of systems; for anything else, users implement their own sink.
Kafka
Test code:
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
env.setParallelism(4);
DataStream<String> inputStream = env.readTextFile("D:\\project\\flink-demo\\src\\main\\resources\\sensor.txt");
DataStream<String> mapStream = inputStream.map((str) -> {
String[] split = str.split(" ");
return new SensorReading(split[0], Long.parseLong(split[1]), Double.parseDouble(split[2])).toString();
});
mapStream.addSink(new FlinkKafkaProducer011<>("192.168.1.77:9092", "sinktest", new SimpleStringSchema()));
env.execute();
Kafka consumer:
./bin/kafka-console-consumer.sh --bootstrap-server 192.168.1.77:9092 --topic sinktest
The records written by the job appear in the console consumer (screenshot omitted).
Redis
Add the dependency:
<dependency>
<groupId>org.apache.bahir</groupId>
<artifactId>flink-connector-redis_2.11</artifactId>
<version>1.0</version>
</dependency>
Code:
public static void main(String[] args) throws Exception {
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
env.setParallelism(1);
DataStream<String> inputStream = env.readTextFile("D:\\project\\flink-demo\\src\\main\\resources\\sensor.txt");
DataStream<SensorReading> mapStream = inputStream.map((str) -> {
String[] split = str.split(" ");
return new SensorReading(split[0], Long.parseLong(split[1]), Double.parseDouble(split[2]));
});
FlinkJedisPoolConfig config = new FlinkJedisPoolConfig.Builder()
.setHost("192.168.1.77")
.setPort(6379).build();
mapStream.addSink(new RedisSink<>(config, new MyRedisMapper()));
env.execute();
}
public static class MyRedisMapper implements RedisMapper<SensorReading> {
// describes which Redis command to use for writing
@Override
public RedisCommandDescription getCommandDescription() {
return new RedisCommandDescription(RedisCommand.HSET, "sensor");
}
@Override
public String getKeyFromData(SensorReading data) {
return data.getId();
}
@Override
public String getValueFromData(SensorReading data) {
return data.getTemperature().toString();
}
}
After the job runs, the latest temperature of each sensor is stored in Redis (screenshot omitted).
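To inspect what was written, assuming redis-cli is available (the sink stores every reading in a single hash named sensor):
redis-cli -h 192.168.1.77 HGETALL sensor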
ElasticSearch
Add the dependency:
<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-connector-elasticsearch6_2.12</artifactId>
<version>1.10.1</version>
</dependency>
Code:
public static void main(String[] args) throws Exception {
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
env.setParallelism(1);
DataStream<String> inputStream = env.readTextFile("D:\\project\\flink-demo\\src\\main\\resources\\sensor.txt");
DataStream<SensorReading> mapStream = inputStream.map((str) -> {
String[] split = str.split(" ");
return new SensorReading(split[0], Long.parseLong(split[1]), Double.parseDouble(split[2]));
});
List<HttpHost> httpHosts = Collections.singletonList(new HttpHost("192.168.1.77", 9200));
ElasticsearchSink<SensorReading> elasticsearchSink = new ElasticsearchSink.Builder<SensorReading>(httpHosts, new MyEsSinkFunction()).build();
mapStream.addSink(elasticsearchSink);
env.execute();
}
public static class MyEsSinkFunction implements ElasticsearchSinkFunction<SensorReading> {
@Override
public void process(SensorReading element, RuntimeContext ctx, RequestIndexer indexer) {
// build the document source to write
HashMap<String, String> dataSource = new HashMap<>();
dataSource.put("id", element.getId());
dataSource.put("temp", element.getTemperature().toString());
dataSource.put("timestamp", element.getTimestamp().toString());
// create the index request, i.e. the write command sent to ES
IndexRequest indexRequest = Requests.indexRequest().index("sensors")
.type("sensors").source(dataSource);
// hand the request to the indexer
indexer.add(indexRequest);
}
}
Visit http://192.168.1.77:9200/sensors/_search?pretty and you can see that the data has been written to Elasticsearch.
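The same query from the command line, assuming curl is installed:
curl "http://192.168.1.77:9200/sensors/_search?pretty"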
MySQL
Add the dependency:
<dependency>
<groupId>mysql</groupId>
<artifactId>mysql-connector-java</artifactId>
<version>5.1.44</version>
</dependency>
Table DDL:
DROP TABLE IF EXISTS `sensor_temp`;
CREATE TABLE `sensor_temp` (
`id` varchar(20) CHARACTER SET utf8 COLLATE utf8_general_ci NOT NULL,
`temp` double NOT NULL,
PRIMARY KEY (`id`) USING BTREE
) ENGINE = InnoDB CHARACTER SET = utf8 COLLATE = utf8_general_ci ROW_FORMAT = Dynamic;
SET FOREIGN_KEY_CHECKS = 1;
Java code:
public static void main(String[] args) throws Exception {
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
env.setParallelism(1);
DataStream<String> inputStream = env.readTextFile("D:\\project\\flink-demo\\src\\main\\resources\\sensor.txt");
DataStream<SensorReading> mapStream = inputStream.map((str) -> {
String[] split = str.split(" ");
return new SensorReading(split[0], Long.parseLong(split[1]), Double.parseDouble(split[2]));
});
mapStream.addSink(new MysqlRichSinkFunction());
env.execute();
}
public static class MysqlRichSinkFunction extends RichSinkFunction<SensorReading> {
Connection conn = null;
PreparedStatement insertStmt = null;
PreparedStatement updateStmt = null;
// open() mainly creates the database connection
@Override
public void open(Configuration parameters) throws Exception {
conn = DriverManager.getConnection("jdbc:mysql://192.168.1.77:3306/sensor", "root", "1234");
// create prepared statements with placeholders for the parameters
insertStmt = conn.prepareStatement("INSERT INTO sensor_temp (id, temp) VALUES (?, ?)");
updateStmt = conn.prepareStatement("UPDATE sensor_temp SET temp = ? WHERE id = ?");
}
@Override
public void invoke(SensorReading value, Context context) throws Exception {
// run the update statement (make sure no leftover super.invoke(...) call remains)
updateStmt.setDouble(1, value.getTemperature());
updateStmt.setString(2, value.getId());
updateStmt.execute();
// if the update did not change any row, insert instead
if (updateStmt.getUpdateCount() == 0) {
insertStmt.setString(1, value.getId());
insertStmt.setDouble(2, value.getTemperature());
insertStmt.execute();
}
}
@Override
public void close() throws Exception {
insertStmt.close();
updateStmt.close();
conn.close();
}
}
After the job runs, sensor_temp contains one row per sensor holding its latest temperature (screenshot omitted).
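A quick check from the MySQL client:
SELECT id, temp FROM sensor_temp;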