• ELK: Filebeat


    1. Overview

    Filebeat is written in Go and is lightweight and efficient. It consists of two main components: prospectors and harvesters.

    A harvester is responsible for collecting the content of a single file. While running, each harvester reads its file line by line and sends what it reads to the configured output.

    A prospector is responsible for managing harvesters and finding all the data sources that need to be read. If the input type is log, the prospector searches the configured paths for every matching file and starts a harvester for each one.
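    As a rough illustration (not Filebeat's actual Go implementation), the prospector/harvester split can be sketched in a few lines of Python: the prospector globs the configured paths, then one harvester reads each discovered file line by line. The file names and contents below are made up for the demo.

    ```python
    import glob
    import os
    import tempfile

    def prospect(path_pattern):
        """Prospector: find every file matching the configured path pattern."""
        return sorted(glob.glob(path_pattern))

    def harvest(path, output):
        """Harvester: read one file line by line and send each line to the output."""
        with open(path, encoding="utf-8") as f:
            for line in f:
                output.append((os.path.basename(path), line.rstrip("\n")))

    # Demo with temporary files standing in for /var/log/*.log
    tmpdir = tempfile.mkdtemp()
    for name, text in [("a.log", "line1\nline2\n"), ("b.log", "line3\n")]:
        with open(os.path.join(tmpdir, name), "w", encoding="utf-8") as f:
            f.write(text)

    events = []
    for path in prospect(os.path.join(tmpdir, "*.log")):  # the prospector finds the files
        harvest(path, events)                             # one harvester per file
    ```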

    2. Download

    https://www.elastic.co/cn/downloads/beats/filebeat

    3. Installation

    Just extract the archive and it is ready to use. The tar package is recommended because it makes upgrades easier.

    See the official docs: https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-installation.html

    4. Configuration file

    ############################# input #########################################
    filebeat.prospectors:
    # collect system logs
    - input_type: log
      paths:
        - /var/log/messages
        - /var/log/cron
      document_type: "messages"
    
    # collect the MySQL slow log; multiline mode is used here
    - input_type: log
      paths:
        - /data/mysql/data/slow.log
      document_type: mysql_slow_log
      # note: filebeat takes the character set as a prospector-level
      # `encoding` option, and uses `match` (not the logstash-style `what`)
      encoding: "ISO-8859-1"
      multiline:
        pattern: "^# User@Host: "
        negate: true
        match: after
    
    # collect web logs
    - input_type: log
      paths:
        - /data/wwwlogs/access_*.log
      document_type: "web_access_log"
    
    # tuning parameters that help avoid data hotspots:
    # publish once 1024 events have accumulated
    spool_size: 1024
    # or after 5s of idle time
    idle_timeout: "5s"
    
    # set the name field to this host's IP, so logstash can assign it to
    # events that carry no host IP of their own, such as slow-log entries
    name: 10.x.x.x
    
    ############################# Kafka #########################################
    output.kafka:
      # initial brokers for reading cluster metadata
      hosts: ["x.x.x.1:9092","x.x.x.2:9092","x.x.x.3:9092"]
      # message topic selection + partitioning
      topic: '%{[type]}'
      flush_interval: 1s
      partition.round_robin:
        reachable_only: false
      required_acks: 1
      compression: gzip
      max_message_bytes: 1000000
    
    ############################# Logging #########################################
    logging.level: info
    logging.to_files: true
    logging.to_syslog: false
    logging.files:
      path: /data/filebeat/logs
      name: filebeat.log
      keepfiles: 7
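    The multiline behaviour configured above (negate: true, match: after) can be sketched outside Filebeat: a line matching the pattern starts a new event, and every non-matching line is appended to the event that precedes it. A minimal Python sketch, with made-up slow-log lines:

    ```python
    import re

    PATTERN = re.compile(r"^# User@Host: ")

    def group_multiline(lines):
        """Mimic filebeat multiline with negate: true, match: after:
        a line matching the pattern starts a new event; non-matching
        lines are appended to the current event."""
        events, current = [], []
        for line in lines:
            if PATTERN.match(line):      # start of a new slow-log entry
                if current:
                    events.append("\n".join(current))
                current = [line]
            else:
                current.append(line)
        if current:
            events.append("\n".join(current))
        return events

    slow_log = [
        "# User@Host: app[app] @ host [10.0.0.5]",
        "# Query_time: 3.2  Lock_time: 0.0",
        "SELECT * FROM t;",
        "# User@Host: web[web] @ host [10.0.0.6]",
        "SELECT 1;",
    ]
    events = group_multiline(slow_log)
    ```

    Each multi-line slow-log entry arrives at the output as a single event, which is what makes the downstream parsing in logstash possible.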


    5. Output types
    This example uses kafka; filebeat can also send output to logstash, redis, elasticsearch, and others.
    1) logstash
    output.logstash:
      # The Logstash hosts
      hosts: ["ip:5044"]

    2)redis
    Reference: https://www.elastic.co/guide/en/beats/filebeat/current/redis-output.html
    output.redis:
      hosts: ["localhost"]
      password: "my_password"
      key: "filebeat"
      db: 0
      timeout: 5

    3)elasticsearch
    Reference: https://www.elastic.co/guide/en/beats/filebeat/current/elasticsearch-output.html
    output.elasticsearch:
      hosts: ["http://localhost:9200"]
      username: "admin"
      password: "s3cr3t"

    6. Loading the filebeat template into elasticsearch
    #curl -XPUT -H 'Content-Type: application/json' http://localhost:9200/_template/filebeat-6.1.0 -d@/etc/filebeat/filebeat.template.json


    Filebeat and docker logs: handling a JSON decode error
    Problem:
    Filebeat was used to collect docker logs. Docker logs are JSON by default, and every line starts with {"log":", so json.message_key: '{"log":"' was added to the filebeat config. Filebeat then logs the error:
    ERR Error decoding JSON: invalid character 'J' looking for beginning of value
    Solution: json.message_key must name the JSON field that holds the message (log), not the literal prefix string. A working configuration:

    filebeat.prospectors:
    - paths:
        - /var/log/containers/*.log
      document_type: kube-logs
      ignore_older: 24h
      symlinks: true
      json:
        keys_under_root: true
        add_error_key: true
        message_key: log
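    The error arises because each docker log line is already a complete JSON object, and json.message_key refers to a field inside that object (log), not a literal text prefix. A quick Python check of both cases (the sample log line is illustrative):

    ```python
    import json

    # A typical docker json-file log line: the whole line is one JSON object
    raw = '{"log":"hello from container\\n","stream":"stdout","time":"2017-12-20T12:00:00Z"}'

    event = json.loads(raw)      # decodes cleanly
    message = event["log"]       # "log" is the field name message_key should refer to

    # The literal prefix '{"log":"' on its own is not valid JSON,
    # which is why treating it as data makes the decoder fail:
    try:
        json.loads('{"log":"')
        decode_failed = False
    except json.JSONDecodeError:
        decode_failed = True
    ```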
  • Original article: https://www.cnblogs.com/cuishuai/p/8066358.html