• Flume: collecting a directory and a file to HDFS (examples)


    Collecting a directory to HDFS

      Collecting a directory with Flume requires a running HDFS cluster.
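      A minimal sketch of bringing HDFS up and confirming it is healthy, assuming a standard Hadoop installation with the start scripts on the PATH:

    start-dfs.sh
    # confirm the DataNodes have registered with the NameNode
    hdfs dfsadmin -report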

    vi spool-hdfs.conf
    # Name the components on this agent
    a1.sources = r1
    a1.sinks = k1
    a1.channels = c1
    
    # Describe/configure the source
    ## Note: never drop two files with the same name into the monitored directory
    a1.sources.r1.type = spooldir
    a1.sources.r1.spoolDir = /root/logs2
    a1.sources.r1.fileHeader = true
    
    # Describe the sink
    a1.sinks.k1.type = hdfs
    a1.sinks.k1.channel = c1
    a1.sinks.k1.hdfs.path = /flume/events/%y-%m-%d/%H%M/
    a1.sinks.k1.hdfs.filePrefix = events-
    # Control how often the bucketed output directory rolls over
    a1.sinks.k1.hdfs.round = true
    a1.sinks.k1.hdfs.roundValue = 10
    a1.sinks.k1.hdfs.roundUnit = minute
    # Control how often files roll over: rollInterval is time-based (seconds),
    # rollSize is size-based (bytes), rollCount is event-count-based.
    # These demo values roll very aggressively; whichever limit is hit first wins.
    a1.sinks.k1.hdfs.rollInterval = 3
    a1.sinks.k1.hdfs.rollSize = 20
    a1.sinks.k1.hdfs.rollCount = 5
    a1.sinks.k1.hdfs.batchSize = 1
    a1.sinks.k1.hdfs.useLocalTimeStamp = true
    # Generated file type; the default is SequenceFile. DataStream writes plain text
    a1.sinks.k1.hdfs.fileType = DataStream
    
    # Use a channel which buffers events in memory
    a1.channels.c1.type = memory
    a1.channels.c1.capacity = 1000
    a1.channels.c1.transactionCapacity = 100
    
    # Bind the source and sink to the channel
    a1.sources.r1.channels = c1
    a1.sinks.k1.channel = c1
    mkdir /root/logs2   # create the spool directory before starting the agent

        The spooldir source monitors the specified directory; whenever a new file appears there, it is collected.

      • Note!!! The monitored directory must never receive two files with the same name. As soon as a duplicate name appears, the source throws an error and stops working.

      Start command:

    bin/flume-ng agent -c ./conf -f ./conf/spool-hdfs.conf -n a1 -Dflume.root.logger=INFO,console
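      One way to smoke-test the pipeline: write a file outside the spool directory, then mv it in (write-then-move keeps the source from picking up a half-written file, and the timestamp in the name sidesteps the duplicate-name problem above; the file name is just an illustrative choice):

    f=/tmp/test-$(date +%s).log
    echo "hello flume" > "$f"
    mv "$f" /root/logs2/
    # list what the HDFS sink produced
    hdfs dfs -ls -R /flume/events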

    Collecting a file to HDFS

    vi tail-hdfs.conf
    # Name the components on this agent
    a1.sources = r1
    a1.sinks = k1
    a1.channels = c1
    
    # Describe/configure the source
    a1.sources.r1.type = exec
    a1.sources.r1.command = tail -F /root/logs/test.log
    a1.sources.r1.channels = c1
    
    # Describe the sink
    a1.sinks.k1.type = hdfs
    a1.sinks.k1.channel = c1
    a1.sinks.k1.hdfs.path = /flume/tailout/%y-%m-%d/%H-%M/
    a1.sinks.k1.hdfs.filePrefix = events-
    a1.sinks.k1.hdfs.round = true
    a1.sinks.k1.hdfs.roundValue = 10
    a1.sinks.k1.hdfs.roundUnit = minute
    a1.sinks.k1.hdfs.rollInterval = 3
    a1.sinks.k1.hdfs.rollSize = 20
    a1.sinks.k1.hdfs.rollCount = 5
    a1.sinks.k1.hdfs.batchSize = 1
    a1.sinks.k1.hdfs.useLocalTimeStamp = true
    # Generated file type; the default is SequenceFile. DataStream writes plain text
    a1.sinks.k1.hdfs.fileType = DataStream
    
    
    
    # Use a channel which buffers events in memory
    a1.channels.c1.type = memory
    a1.channels.c1.capacity = 1000
    a1.channels.c1.transactionCapacity = 100
    
    # Bind the source and sink to the channel
    a1.sources.r1.channels = c1
    a1.sinks.k1.channel = c1
    mkdir /root/logs   # create the directory holding the tailed log

    Start command:

    bin/flume-ng agent -c conf -f conf/tail-hdfs.conf -n a1

    The exec source runs a shell command (e.g. tail -F sx.log) and collects its output in real time as the file grows; tail -F keeps following the file even across rotation or recreation.

    Script to simulate data generation:

    while true;do date >> /root/logs/test.log;sleep 0.5;done
    
    or
    
    #!/bin/bash
    while true
    do
      date >> /root/logs/test.log
      sleep 1
    done
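
    To exercise the whole flow, run the generator in the background, start the agent, and watch rolled files appear under the configured hdfs.path (the script name makelog.sh is just an illustrative assumption):

    chmod +x makelog.sh
    nohup ./makelog.sh > /dev/null 2>&1 &
    hdfs dfs -ls -R /flume/tailout
    # print the collected lines from the rolled files
    hdfs dfs -cat "/flume/tailout/*/*/events-*"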
  • Original article: https://www.cnblogs.com/jifengblog/p/9277860.html