• Flume to Kafka integration


    Flume version: 1.6 (CDH 5.13). Note: when starting Flume, the --name argument must match the property prefix used in the configuration file, otherwise startup fails.

    The Flume installation directory is:

     /opt/cloudera/parcels/CDH/lib/flume-ng
    

    Flume configuration file exec-memory-avro.conf:

    vim /opt/test/exec-memory-avro.conf
    

    a1.sources = r1
    a1.sinks = k1
    a1.channels = c1

    a1.sources.r1.type = exec
    a1.sources.r1.command = tail -F /opt/test/data.log
    a1.sources.r1.shell = /bin/sh -c

    a1.sinks.k1.type = avro
    a1.sinks.k1.hostname = udap69a165
    a1.sinks.k1.port = 44444

    a1.channels.c1.type = memory
    a1.sources.r1.channels = c1
    a1.sinks.k1.channel = c1
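The memory channel above runs with its defaults; under sustained load the default capacity can overflow and drop events. A hedged sketch of the two commonly tuned MemoryChannel properties (the values below are illustrative, not from the original setup):

```properties
# Illustrative tuning only - adjust to your event rate and heap size
a1.channels.c1.capacity = 10000
a1.channels.c1.transactionCapacity = 1000
```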

    Flume configuration file avro-memory-kafka.conf:

    vim /opt/test/avro-memory-kafka.conf
    

    avro-memory-kafka.sources = avro-source
    avro-memory-kafka.sinks = kafka-sink
    avro-memory-kafka.channels = memory-channel

    avro-memory-kafka.sources.avro-source.type = avro
    avro-memory-kafka.sources.avro-source.bind = udap69a165
    avro-memory-kafka.sources.avro-source.port = 44444

    avro-memory-kafka.sinks.kafka-sink.type = org.apache.flume.sink.kafka.KafkaSink
    avro-memory-kafka.sinks.kafka-sink.kafka.bootstrap.servers = udap69a166:9092
    avro-memory-kafka.sinks.kafka-sink.topic = hh_test
    avro-memory-kafka.sinks.kafka-sink.batchSize = 5
    avro-memory-kafka.sinks.kafka-sink.requiredAcks = 1

    avro-memory-kafka.channels.memory-channel.type = memory

    avro-memory-kafka.sources.avro-source.channels = memory-channel
    avro-memory-kafka.sinks.kafka-sink.channel = memory-channel
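Kafka sink property names changed across Flume versions (the config above mixes the newer kafka.bootstrap.servers with older-style topic/requiredAcks, which CDH's backported build accepts). If the agent log shows the topic or batch size not being picked up, the Flume 1.7+ spellings are (an assumption to verify against your Flume version, not part of the original setup):

```properties
# Flume 1.7+ KafkaSink property names (illustrative alternative)
avro-memory-kafka.sinks.kafka-sink.kafka.topic = hh_test
avro-memory-kafka.sinks.kafka-sink.flumeBatchSize = 5
avro-memory-kafka.sinks.kafka-sink.kafka.producer.acks = 1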

    Go to Flume's bin directory and start the two Flume agents:

    ./flume-ng agent --name a1 --conf-file /opt/test/exec-memory-avro.conf -Dflume.root.logger=INFO,console
    
    ./flume-ng agent --name avro-memory-kafka --conf-file /opt/test/avro-memory-kafka.conf -Dflume.root.logger=INFO,console
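If an agent complains about missing log4j configuration, the --conf flag pointing at Flume's conf directory is commonly added as well (the path below assumes the CDH layout shown earlier):

```shell
./flume-ng agent --conf /opt/cloudera/parcels/CDH/lib/flume-ng/conf \
  --name avro-memory-kafka \
  --conf-file /opt/test/avro-memory-kafka.conf \
  -Dflume.root.logger=INFO,console
```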
    

    Check the logs to confirm both agents started successfully, then kill the processes and restart them in the background:

    jps -m
    
    nohup sh flume-ng agent --name a1 --conf-file /opt/test/exec-memory-avro.conf -Dflume.root.logger=INFO,console &
    
    nohup sh flume-ng agent --name avro-memory-kafka --conf-file /opt/test/avro-memory-kafka.conf -Dflume.root.logger=INFO,console &
    

    Start a Kafka console consumer:

    kafka-console-consumer --zookeeper udap69a166:2181/kafka --topic hh_test
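The --zookeeper flag belongs to the old consumer and was removed in newer Kafka releases. If the command above fails on your version, the bootstrap-server form is the likely replacement (broker address assumed to match the sink config above):

```shell
kafka-console-consumer --bootstrap-server udap69a166:9092 --topic hh_test --from-beginning
```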
    

    Write some test data:

    echo "lisi" >> /opt/test/data.log
    
    echo "wangwu" >> /opt/test/data.log
    
    echo "zhangdan" >> /opt/test/data.log
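The append step above can be rehearsed locally before the agents are running. This sketch writes the same three lines to a scratch file (assumption: /tmp path for illustration, not the real /opt/test/data.log) and shows what tail -F would pick up:

```shell
#!/bin/sh
# Append test lines to a scratch log, mimicking the echo >> pattern above
LOG=/tmp/flume_demo_data.log
: > "$LOG"                        # create / truncate the scratch file
for name in lisi wangwu zhangdan; do
  echo "$name" >> "$LOG"          # same append that the exec source tails
done
cat "$LOG"
```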
    

    Check that the Kafka consumer receives the data. Problem solved!

  • Original post: https://www.cnblogs.com/erlou96/p/16878489.html