• Flink 1.10 Installation Guide


    Flink's memory model changed in version 1.10, so pay close attention to how memory is configured when setting up the environment.

    The memory model is described in the official docs: https://ci.apache.org/projects/flink/flink-docs-release-1.10/ops/memory/mem_setup.html

    Standalone mode docs: https://ci.apache.org/projects/flink/flink-docs-release-1.10/ops/deployment/cluster_setup.html

    YARN mode docs: https://ci.apache.org/projects/flink/flink-docs-release-1.10/ops/deployment/yarn_setup.html

    1. Environment preparation

    1.1 Java environment

    Download the JDK package, extract it with tar -zxvf, then configure the Java environment variables:

    vim /etc/profile
    ########## append at the end ##########
    export JAVA_HOME=/usr/java/default
    export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
    export PATH=$JAVA_HOME/bin:$PATH
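
    A quick sanity check (a minimal sketch, assuming the JDK sits at /usr/java/default as above):

    source /etc/profile
    java -version    # should print the JDK version
    echo $JAVA_HOME  # should print /usr/java/default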

    1.2 SSH environment

    Prepare several machines:

    # machine list in /etc/hosts
    192.168.88.130 lgh
    192.168.88.131 lgh1
    192.168.88.132 lgh2

    Add the flink group and flink user on every machine (useradd creates the matching group by default):

    useradd flink -d /home/flink
    echo "flink123" | passwd flink --stdin
    

    Then set up passwordless SSH for this user (run on one designated machine, 192.168.88.130):

    su - flink
    ssh-keygen -t rsa
    ssh-copy-id 192.168.88.130 # the master also needs passwordless SSH to itself, since it appears in conf/slaves
    ssh-copy-id 192.168.88.131
    ssh-copy-id 192.168.88.132
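
    To verify that the keys landed correctly (hostnames per the /etc/hosts above):

    ssh 192.168.88.131 hostname  # should print lgh1 with no password prompt
    ssh 192.168.88.132 hostname  # should print lgh2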

    1.3 ZooKeeper environment

    Download the ZooKeeper package and install it:

    ## extract (xxx is your target directory)
    tar -zxvf zookeeper-3.4.8.tar.gz -C xxx
    ## create a symlink (on every machine)
    ln -s zookeeper-3.4.8 zookeeper
    

    Configure the environment variables:

    vim ~/.bashrc
    export ZOOKEEPER_HOME=/home/flink/zookeeper
    export PATH=$ZOOKEEPER_HOME/bin:$ZOOKEEPER_HOME/conf:$PATH
    

    Edit the configuration:

    cd /home/flink/zookeeper/conf
    cp zoo_sample.cfg zoo.cfg
    vim zoo.cfg
    ########### edit the configuration as follows ##################
    tickTime=2000 # basic time unit (ms) used between servers and clients
    initLimit=10 # time allowed for followers to connect and sync to the leader initially, in multiples of tickTime; the connection fails beyond this
    syncLimit=5 # maximum interval, in multiples of tickTime, allowed for leader-follower synchronization; a follower is disconnected if it is exceeded
    dataDir=/home/flink/zookeeper/data # ZooKeeper data directory
    dataLogDir=/home/flink/zookeeper/dataLog # ZooKeeper transaction log directory; defaults to dataDir when unset
    clientPort=2181 # port clients use to connect to the server
    server.1=lgh:2888:3888 # identifies each ZooKeeper server in the ensemble; every server must know about the others
    server.2=lgh1:2888:3888
    server.3=lgh2:2888:3888
    maxClientCnxns=60 # limit on the number of client connections to the server
    

    Create the myid file:

    cd /home/flink/zookeeper/data
    vim myid # put a single 1 in this file
    

    Copy to the other machines and start:

    ## copy to the other machines
    scp -r zookeeper-3.4.8 flink@lgh1:/home/flink/
    scp -r zookeeper-3.4.8 flink@lgh2:/home/flink/
    
    # edit the myid file on each machine: 2 on lgh1 and 3 on lgh2
    
    ## start (on every machine)
    zkServer.sh start
    
    # check status
    zkServer.sh status
    
    # check the process
    jps
    QuorumPeerMain
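
    With all three nodes up, zkServer.sh status should report Mode: leader on one node and Mode: follower on the other two. A quick client-side check (a minimal sketch using the zkCli.sh shipped with ZooKeeper):

    zkCli.sh -server lgh:2181
    # inside the shell, 'ls /' should list the root znodes without errors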

    2. Installing Flink

    2.1 Basic installation and configuration

    Download: https://flink.apache.org/downloads.html#apache-flink-1101

     

    From that page, download two packages: the Flink binary release and the pre-bundled Hadoop jar matching your Hadoop version. Extract the Flink tarball with tar, then put the Hadoop jar into Flink's lib directory, as shown below:

    [flink@lgh01 lib]$ pwd
    /home/flink/flink10/lib
    [flink@lgh01 lib]$ ll | grep hadoop
    -rwxrwxrwx 1 mstream hive     80331 Apr 16 14:20 flink-hadoop-compatibility_2.11-1.10.0.jar
    -rwxrwxrwx 1 mstream hive  36433393 Apr 16 14:20 flink-shaded-hadoop-2-uber-2.6.5-8.0.jar
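
    For reference, a download sketch; the URLs below assume the standard Apache archive and Maven Central layout for Flink 1.10.0 with Scala 2.11 and Hadoop 2.6.5, so adjust versions to your cluster:

    wget https://archive.apache.org/dist/flink/flink-1.10.0/flink-1.10.0-bin-scala_2.11.tgz
    wget https://repo.maven.apache.org/maven2/org/apache/flink/flink-shaded-hadoop-2-uber/2.6.5-8.0/flink-shaded-hadoop-2-uber-2.6.5-8.0.jar
    tar -zxvf flink-1.10.0-bin-scala_2.11.tgz
    mv flink-shaded-hadoop-2-uber-2.6.5-8.0.jar flink-1.10.0/lib/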

    Then configure the Hadoop-related environment variables and the Flink environment variables:

    vim /etc/profile
    ######### append the following #########
    export PATH=/apps/opt/cloudera/parcels/CDH/bin:$PATH
    export HADOOP_HOME=/apps/opt/cloudera/parcels/CDH/lib/hadoop
    export HADOOP_CONF_DIR=/etc/hadoop/conf
    export YARN_CONF_DIR=/etc/hadoop/conf
    export CLASSPATH=$CLASSPATH:/apps/opt/cloudera/parcels/CDH/jars/:/utf/
    
    #export FLINK_HOME=/apps/flink/flink
    #export PATH=$FLINK_HOME/bin:$PATH
    export FLINK_HOME=/apps/mstream/install/flink10
    export PATH=$FLINK_HOME/bin:$PATH
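
    A quick check after sourcing the profile (assuming the CDH-style paths above):

    source /etc/profile
    hadoop version   # should print the CDH Hadoop version
    echo $FLINK_HOME # should print the Flink install path
    which flink      # should resolve to $FLINK_HOME/bin/flink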

    Then edit the flinkxxx/conf/flink-conf.yaml configuration file:

    jobmanager.rpc.address: 192.168.88.130
    jobmanager.rpc.port: 6124
    jobmanager.heap.size: 1024m
    taskmanager.heap.size: 1024m
    taskmanager.numberOfTaskSlots: 2 # set according to your CPU cores; lscpu shows the core count
    cluster.evenly-spread-out-slots: true
    env.java.home: /usr/java/default # optional
    parallelism.default: 2
    high-availability: zookeeper
    high-availability.zookeeper.path.root: /flink
    high-availability.storageDir: hdfs:///user/flink/ha/
    high-availability.zookeeper.quorum: lgh:2181,lgh1:2181,lgh2:2181
    #high-availability.cluster-id: /cluster_one # important: customize per cluster; must NOT be set in YARN mode
    rest.port: 8081
    rest.bind-port: 8080-8180 # when a standalone cluster starts, a port is picked from this range and written to the log, so the web UI is not necessarily on 8081; check the log for the actual address
    
    taskmanager.memory.process.size: 2048m # after the Flink 1.10 memory model change, one of the three memory options must be set; size it to your cluster, see the official docs
    jobmanager.execution.failover-strategy: region
    
    #checkpoint
    state.checkpoints.dir: hdfs:///user/flink/checkpoint
    state.checkpoints.num-retained: 20
    
    #savepoint
    state.savepoints.dir: hdfs:///user/flink/savepoints
    
    #stateful
    state.backend: filesystem # FsStateBackend (RocksDB is recommended in production)
    state.backend.fs.checkpointdir: hdfs:///user/flink/pointsdata/
    state.backend.incremental: true # note: incremental checkpoints only take effect with the RocksDB backend
    
    jobmanager.archive.fs.dir: hdfs:///user/flink/flink-jobs/
    historyserver.web.address: 192.168.88.130
    historyserver.web.port: 8083
    historyserver.archive.fs.dir: hdfs:///user/flink/historyserver
    historyserver.archive.fs.refresh-interval: 10000
    
    blob.storage.directory: /tmp/
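
    Flink must be able to write to the HDFS paths referenced above. A minimal sketch to create them up front (run as a user with HDFS write permission; paths as configured above):

    hdfs dfs -mkdir -p /user/flink/ha /user/flink/checkpoint /user/flink/savepoints \
        /user/flink/pointsdata /user/flink/flink-jobs /user/flink/historyserver
    hdfs dfs -chown -R flink /user/flink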

    Configure conf/masters:

    192.168.88.130:8081
    192.168.88.131:8081
    

    Configure conf/slaves:

    192.168.88.130 
    192.168.88.131 
    192.168.88.132 
    

    Then copy the Flink installation directory to the other nodes with scp -r.
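
    For example (assuming the /home/flink/flink10 install path used earlier):

    scp -r /home/flink/flink10 flink@lgh1:/home/flink/
    scp -r /home/flink/flink10 flink@lgh2:/home/flink/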

    2.2 Logging configuration (optional)

    The JobManager and TaskManager startup logs can be found in the log subdirectory of the Flink binary directory:

    -rw-r--r-- 1 flink flink          0 Mar 25 11:43 flink-flink-standalonesession-8-lgh01.out.1
    -rw-r--r-- 1 flink flink          0 Mar 23 16:31 flink-flink-standalonesession-8-lgh01.out.2
    -rw-r--r-- 1 flink flink          0 Mar 23 15:18 flink-flink-standalonesession-8-lgh01.out.3
    -rw-r--r-- 1 flink flink          0 Mar 23 14:55 flink-flink-standalonesession-8-lgh01.out.4
    -rw-r--r-- 1 flink flink     216520 May  5 16:38 flink-flink-taskexecutor-0-lgh01.log
    -rw-r--r-- 1 flink flink   14191242 Apr 19 00:05 flink-flink-taskexecutor-0-lgh01.log.1
    -rw-r--r-- 1 flink flink     821762 Apr 16 12:24 flink-flink-taskexecutor-0-lgh01.log.2
    -rw-r--r-- 1 flink flink          0 Apr 28 11:42 flink-flink-taskexecutor-0-lgh01.out
    

      

    Files with the prefix "flink-${user}-standalonesession-${id}-${hostname}" are the JobManager's output; there are three of them:

    • flink-${user}-standalonesession-${id}-${hostname}.log: log output from the code
    • flink-${user}-standalonesession-${id}-${hostname}.out: the process's stdout
    • flink-${user}-standalonesession-${id}-${hostname}-gc.log: JVM GC logs

    Files with the prefix "flink-${user}-taskexecutor-${id}-${hostname}" are the TaskManager's output; it has the same three kinds of files as the JobManager.

    The log configuration files live in the conf directory of the Flink installation:

    [flink@lgh01 conf]$ pwd
    /home/flink/flink10/conf
    [flink@lgh01 conf]$ ll
    total 56
    -rw-r--r-- 1 flink flink 1187 Mar 25 11:43 flink-conf.yaml
    -rw-r--r-- 1 flink flink 2138 Jan 24 17:01 log4j-cli.properties
    -rw-r--r-- 1 flink flink 1884 Jan 24 17:01 log4j-console.properties
    -rw-r--r-- 1 flink flink 1939 Jan 24 17:01 log4j.properties
    -rw-r--r-- 1 flink flink 1709 Jan 24 17:01 log4j-yarn-session.properties
    -rw-r--r-- 1 flink flink 2294 Jan 24 17:01 logback-console.xml
    -rw-r--r-- 1 flink flink 2331 Jan 24 17:01 logback.xml
    -rw-r--r-- 1 flink flink 1550 Jan 24 17:01 logback-yarn.xml
    -rw-r--r-- 1 flink flink   36 Mar 16 15:15 masters
    -rw-r--r-- 1 flink flink   74 Feb 13 09:51 slaves
    -rw-r--r-- 1 flink flink 5484 Apr 28 14:08 sql-client-defaults.yaml
    -rw-r--r-- 1 flink flink 1541 Feb 28 16:38 sql-client-hive.yaml
    -rw-r--r-- 1 flink flink 1434 Jan 24 17:01 zoo.cfg 

    Of these:

    • log4j-cli.properties: log config used by the Flink command line, e.g. when running "flink run"
    • log4j-yarn-session.properties: log config used by the command line when starting with yarn-session.sh
    • log4j.properties: log config used by the JobManager and TaskManager in both standalone and YARN modes

    Each of these three "log4j.*properties" files has a corresponding "logback.*xml" file. If you want to use logback instead, just delete the corresponding "log4j.*properties" file. The mapping is:

    • log4j-cli.properties -> logback-console.xml
    • log4j-yarn-session.properties -> logback-yarn.xml
    • log4j.properties -> logback.xml

    Note that both "flink-${user}-standalonesession-${id}-${hostname}" and "flink-${user}-taskexecutor-${id}-${hostname}" contain "${id}", which is the startup index of that process among all processes of its role (JobManager or TaskManager) on the machine, counting from 0 by default.

    2.3 Starting the cluster

    Standalone cluster: on the node where passwordless SSH was set up, simply run start-cluster.sh, then check the processes with jps.
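
    A sketch of what to expect (process names as of Flink 1.10; the history server is a separate, optional daemon):

    start-cluster.sh
    jps
    # master nodes: StandaloneSessionClusterEntrypoint
    # worker nodes: TaskManagerRunner
    
    # optional: start the history server configured in flink-conf.yaml
    historyserver.sh start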

    yarn-session mode: just start yarn-session.sh in the background. It accepts many options, which you can list with yarn-session.sh --help, for example:

    ./bin/yarn-session.sh -jm 1024m -tm 4096m
    [flink@lgh01 bin]$ ./yarn-session.sh --help
    Usage:
       Optional
         -at,--applicationType <arg>     Set a custom application type for the application on YARN
         -D <property=value>             use value for given property
         -d,--detached                   If present, runs the job in detached mode
         -h,--help                       Help for the Yarn session CLI.
         -id,--applicationId <arg>       Attach to running YARN session
         -j,--jar <arg>                  Path to Flink jar file
         -jm,--jobManagerMemory <arg>    Memory for JobManager Container with optional unit (default: MB)
         -m,--jobmanager <arg>           Address of the JobManager (master) to which to connect. 
         -nl,--nodeLabel <arg>           Specify YARN node label for the YARN application
         -nm,--name <arg>                Set a custom name for the application on YARN
         -q,--query                      Display available YARN resources (memory, cores)
         -qu,--queue <arg>               Specify YARN queue.
         -s,--slots <arg>                Number of slots per TaskManager
         -t,--ship <arg>                 Ship files in the specified directory (t for transfer)
         -tm,--taskManagerMemory <arg>   Memory per TaskManager Container with optional unit (default: MB)
         -yd,--yarndetached              If present, runs the job in detached mode (deprecated; use non-YARN specific option instead)
         -z,--zookeeperNamespace <arg>   Namespace to create the Zookeeper sub-paths for high availability mode
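
    Once a session is running, flink run attaches to it automatically (Flink records the session's YARN application id in /tmp/.yarn-properties-${user}). A minimal sketch:

    # start a detached session: 1 GB JobManager, 4 GB TaskManagers with 2 slots each
    ./bin/yarn-session.sh -d -jm 1024m -tm 4096m -s 2 -nm flink-session
    # submit a job to the running session
    ./bin/flink run ./examples/streaming/WordCount.jar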
    

    yarn per-job mode: nothing needs to be started beforehand; submit the job directly with flink run -m yarn-cluster, for example:

    ./bin/flink run -m yarn-cluster -p 4 -yjm 1024m -ytm 4096m ./examples/batch/WordCount.jar
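
    Common follow-up commands once a job is running (job ids come from flink list or the web UI; on YARN you may also need -yid <applicationId> to address the right cluster; the savepoint path is the one configured earlier):

    ./bin/flink list                                              # list running jobs and their ids
    ./bin/flink savepoint <jobId> hdfs:///user/flink/savepoints   # trigger a savepoint
    ./bin/flink cancel <jobId>                                    # cancel the job
    ./bin/flink run -s <savepointPath> ./examples/batch/WordCount.jar  # resume from a savepoint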

     

    If you are also learning or researching Flink, feel free to follow this blog; Flink fundamentals and source-code analysis will be covered in future posts.

    References:

    https://flink.apache.org/

    https://files.alicdn.com/tpsservice/4824447b829149c86bedd19424d05915.pdf
