• Analyzing nginx JSON logs with ELK 7.4.0


    ELK 7.4.0 Single-Node Deployment

    Environment Preparation

    • Install the operating system, with the data disk mounted at /srv

    • Apply kernel tuning; one setting es 7.x depends on is shown below
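
    For example, es 7.x's production bootstrap checks require a higher mmap limit than most distributions ship with; as root (persist it in /etc/sysctl.conf):

    sysctl -w vm.max_map_count=262144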

    • Create a dedicated elk account, then create the required directories and grant it ownership:

    useradd elk;
    mkdir /srv/{app,data,logs}/elk
    chown -Rf elk:elk /srv/{app,data,logs}/elk
    
    • Modify /etc/security/limits.conf
    *  soft  nofile 65536
    *  hard  nofile 65536
    *  soft  nproc  65536
    *  hard  nproc  65536
    
    elk  soft  nofile 65536
    elk  hard  nofile 65536
    elk  soft  nproc  65536
    elk  hard  nproc  65536
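
    After logging in again, a quick sanity check that the limits took effect:

    su - elk -c 'ulimit -n; ulimit -u'    # both should print 65536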
    

    All operations during the ELK installation must be performed as the elk user!

    su - elk
    

    Elasticsearch

    This setup uses a single-node ES rather than a cluster; cluster deployment will be covered in a later update.

    First, download the es installation package (a local mirror is used here); installing from the tar package is recommended:

    cd /srv/app/elk;
    wget http://172.19.30.116/mirror/elk/elasticsearch/7.4.0/elasticsearch-7.4.0-linux-x86_64.tar.gz
    tar -zxvf elasticsearch-7.4.0-linux-x86_64.tar.gz
    mv elasticsearch-7.4.0 elasticsearch
    
    • Edit the es configuration file /srv/app/elk/elasticsearch/config/elasticsearch.yml
    cluster.name: es-cluster
    node.name: es-1
    node.master: true     # eligible to act as a master node
    node.data: true       # holds data
    path.data: /srv/data/elk/elasticsearch   # data directory
    path.logs: /srv/logs/elk/elasticsearch   # log directory
    
    network.host: 127.0.0.1     # local access only; use a LAN address or 0.0.0.0 to allow other hosts
    http.port: 9200             # HTTP port, defaults to 9200
    
    http.cors.enabled: true
    http.cors.allow-origin: "*"
    xpack.security.enabled: false
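    # Note: binding network.host to a non-loopback address (e.g. 0.0.0.0) turns on
    # ES 7.x production bootstrap checks, which then also require a discovery setting:
    #cluster.initial_master_nodes: ["es-1"]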
    
    • I gave the JVM 4G in /srv/app/elk/elasticsearch/config/jvm.options; the query volume here is small, so a smaller heap is also fine. In practice, es 7.4.0 is fairly memory-hungry.
    -Xms4g
    -Xmx4g
    
    /srv/app/elk/elasticsearch/bin/elasticsearch -d
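
    Once es is up (the -d flag runs it as a daemon), a quick health check against the defaults above:

    curl 'http://127.0.0.1:9200/_cluster/health?pretty'    # expect "status": "green" or "yellow"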
    

    Kibana

    cd /srv/app/elk;
    wget http://172.19.30.116/mirror/elk/kibana/7.4.0/kibana-7.4.0-linux-x86_64.tar.gz
    tar -zxvf kibana-7.4.0-linux-x86_64.tar.gz
    mv kibana-7.4.0-linux-x86_64 kibana
    
    • Edit the kibana configuration file /srv/app/elk/kibana/config/kibana.yml
    server.port: 5601
    server.host: "localhost"   # or 0.0.0.0 to listen on all interfaces
    server.name: "kibana"
    elasticsearch.hosts: ["http://127.0.0.1:9200"]
    i18n.locale: "en"     # set to zh-CN for the Chinese UI
    
    /srv/app/elk/kibana/bin/kibana
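
    The command above runs kibana in the foreground; to keep it running after logout, one option (the log path just follows this guide's directory convention):

    nohup /srv/app/elk/kibana/bin/kibana > /srv/logs/elk/kibana.log 2>&1 &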
    

    Logstash

    cd /srv/app/elk;
    wget http://172.19.30.116/mirror/elk/logstash/7.4.0/logstash-7.4.0.tar.gz
    tar -zxvf logstash-7.4.0.tar.gz
    mv logstash-7.4.0 logstash
    
    • Adjust the JVM settings in /srv/app/elk/logstash/config/jvm.options to your workload; the default is 1G, and with a large log volume you can raise it to 2G or more.
    -Xms1g
    -Xmx1g
    

    At this point the ELK stack is deployed. Next we need Redis and filebeat: redis acts as a staging queue for log entries, while filebeat collects the nginx (or other application) logs.

    Redis

    yum install epel-release -y
    yum install redis -y
    chkconfig redis on
    service redis start
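
    The filebeat config below points at 192.168.1.1:7000, while the packaged redis listens on 127.0.0.1:6379 by default. If filebeat runs on another host, /etc/redis.conf needs roughly the following (the address and port are just this guide's example values):

    bind 0.0.0.0          # or the server's LAN address
    port 7000
    # with no password set, newer redis versions also need: protected-mode no

    service redis restart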
    

    Filebeat

    Install filebeat on the nginx node, add a new log_format named nginxjson to the nginx configuration, and point the access log at this format:

    log_format nginxjson '{"@timestamp":"$time_iso8601",'
                      '"host":"$server_addr",'
                      '"service":"nginx",'
                      '"trace":"$upstream_http_ctx_transaction_id",'
                      '"clientip":"$remote_addr",'
                      '"remote_user":"$remote_user",'
                      '"request":"$request",'
                      '"url":"$scheme://$http_host$request_uri",'
                      '"http_user_agent":"$http_user_agent",'
                      '"server_protocol":"$server_protocol",'
                      '"size":$body_bytes_sent,'
                      '"responsetime":$request_time,'
                      '"upstreamtime":"$upstream_response_time",'
                      '"upstreamhost":"$upstream_addr",'
                      '"http_host":"$host",'
                      '"domain":"$host",'
                      '"xff":"$http_x_forwarded_for",'
                      '"x_clientOs":"$http_x_clientOs",'
                      '"x_access_token":"$http_x_access_token",'
                      '"referer":"$http_referer",'
                      '"status":"$status"}';
    
    rpm -ivh http://172.19.30.116/mirror/elk/filebeat/7.4.0/filebeat-7.4.0-x86_64.rpm
    chkconfig filebeat on
    

    Edit the filebeat configuration /etc/filebeat/filebeat.yml

    filebeat.inputs:
    - type: log
      enabled: true
      paths:
        - /var/log/nginx/nginx_access.log
      tags: ["nginx-access"]
      tail_files: true
      
    filebeat.config.modules:
      path: ${path.config}/modules.d/*.yml
      reload.enabled: false
    setup.template.settings:
      index.number_of_shards: 1
    setup.kibana:
    
    output.redis:
      enabled: true
      hosts: ["192.168.1.1:7000"]    # example redis address; redis defaults to 6379, adjust to your setup
      key: nginx
      db: 0
      datatype: list
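
    After starting filebeat, confirm events are being queued in redis (key and address as configured above):

    service filebeat start
    redis-cli -h 192.168.1.1 -p 7000 llen nginx    # the list length should grow as requests arrive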
    

    Now we go back and configure logstash:

    mkdir /srv/app/elk/logstash/config/conf.d
    vim /srv/app/elk/logstash/config/conf.d/nginx-logs.conf
    

    Write the following content:

    input {
        redis {
            host => "192.168.1.1"
            port => "7000"
            key => "nginx"
            data_type => "list"
            threads => "5"
            db => "0"
        }
    }
    
    filter {
        json {
            source => "message"
            remove_field => ["beat"]
        }
    
        geoip {
            source => "clientip"
            target => "geoip"
            add_field => [ "[geoip][coordinates]", "%{[geoip][longitude]}" ]
            add_field => [ "[geoip][coordinates]", "%{[geoip][latitude]}" ]
        }
    
        # pull the ISO8601 timestamp out of the raw line and use it as @timestamp
        grok {
            match => ["message", "%{TIMESTAMP_ISO8601:isotime}"]
        }
        date {
            locale => "en"
            match => ["isotime", "ISO8601"]
            target => "@timestamp"
        }
    
        mutate {
            convert => [ "[geoip][coordinates]", "float" ]
            # remove_field => ["message"]
        }
    }
    
    output {
        if "nginx-access" in [tags] {
            elasticsearch {
                hosts => ["127.0.0.1:9200"]
                index => "logstash-nginx-logs-%{+YYYY.MM.dd}"
            }
        }
    }
    
    /srv/app/elk/logstash/bin/logstash -f  /srv/app/elk/logstash/config/conf.d/nginx-logs.conf
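
    The pipeline can be validated before leaving it running, and once events flow the daily index should appear in es (both the flag and the _cat API are standard):

    /srv/app/elk/logstash/bin/logstash -f /srv/app/elk/logstash/config/conf.d/nginx-logs.conf --config.test_and_exit
    curl 'http://127.0.0.1:9200/_cat/indices/logstash-nginx-logs-*?v'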
    

    Postscript

    • Proxy kibana through Nginx, which makes it easy to add HTTP basic authentication

    The main configuration is as follows:

    server {
             listen       80;
             server_name  kibana;
             access_log   off;
             error_log    off;
    
             location / {
                 auth_basic         "Kibana";
                 auth_basic_user_file  /srv/app/tengine/conf/conf.d/passwd; 
                 proxy_pass         http://127.0.0.1:5601;
                 proxy_set_header   Host             $host;
                 proxy_set_header   X-Real-IP        $remote_addr;
                 proxy_set_header   X-Forwarded-For  $proxy_add_x_forwarded_for;
             }
    }
    
    • Account-creation script /srv/app/tengine/conf/conf.d/adduser.sh
    #!/bin/bash
    read -p "Enter username: " USERNAME
    read -s -p "Enter password: " PASSWD; echo
    # note: openssl's -crypt scheme only uses the first 8 characters of the password
    printf '%s:%s\n' "$USERNAME" "$(openssl passwd -crypt "$PASSWD")" >> passwd
    