ELK Part 3: Kibana Usage and Tomcat/Nginx Log Format Handling


    Section 1: Kibana installation:

      Kibana is mainly used to search the data in elasticsearch and present it visually; newer versions run on nodejs.

    1. Download address:

    https://www.elastic.co/downloads/kibana

    2. Extract and install:

    [root@node6 local]# tar xvf kibana-4.1.1-linux-x64.tar.gz 
    [root@node6 local]# mv kibana-4.1.1-linux-x64 kibana
    [root@node6 ~]# cd /usr/local/kibana/
    [root@node6 kibana]# ls
    bin config LICENSE.txt node plugins README.txt src

    3. Edit the configuration file:

    [root@node6 kibana]# cd config/
    [root@node6 config]# ls
    kibana.yml
    [root@node6 config]# vim kibana.yml
    elasticsearch_url: "http://192.168.10.206:9200"

    4. Start it directly (foreground):

    [root@node6 kibana]# bin/kibana 
    {"name":"Kibana","hostname":"node6.a.com","pid":3942,"level":30,"msg":"No existing kibana index found","time":"2016-04-12T12:20:50.069Z","v":0}
    {"name":"Kibana","hostname":"node6.a.com","pid":3942,"level":30,"msg":"Listening on 0.0.0.0:5601","time":"2016-04-12T12:20:50.096Z","v":0}

    5. Verify that it is running:

    [root@node6 ~]# ps -ef | grep  kibana
    root       3942   3745  3 20:20 pts/2    00:00:01 bin/../node/bin/node bin/../src/bin/kibana.js
    root       3968   3947  0 20:21 pts/3    00:00:00 grep kibana
    [root@node6 ~]# ss -tnl | grep 5601
    LISTEN     0      128                       *:5601                     *:*   

    6. Start it in the background:

    [root@node6 kibana]# nohup  bin/kibana &
    [1] 3975

    7. Access test: the default listening port is 5601
    http://192.168.10.206:5601

    8. Configure an index pattern: the pattern name must match the index names generated by the logstash output (see the sketch below)
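
    For example, if the logstash output writes date-stamped indices, the Kibana index pattern only needs a matching wildcard. A minimal sketch, assuming the elasticsearch address from the kibana.yml above and an example index name:

    output {
      elasticsearch {
        hosts => ["192.168.10.206:9200"]
        index => "logstash-nginx-access-%{+YYYY.MM.dd}"  # e.g. logstash-nginx-access-2016.04.12
      }
    }

    In Kibana the index pattern would then be entered as logstash-nginx-access-* (or simply logstash-*).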

    9. View the data: by default the most recent 500 documents are displayed

    10. Exact searches:

    11. Advanced search syntax:

    status:404 OR status:500   # matches documents whose status is either 404 or 500
    status:301 AND status:200  # matches only documents where both conditions are true
    status:[200 TO 300]        # matches a range of status values

    12. Save frequently used searches:

    Section 2: Other commonly used modules:

    1. System log collection ---> syslog: configure a syslog input whose results are written to elasticsearch; listen on port 514 and point the servers whose logs you want to collect at this host's IP address (a sketch follows)
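
    A minimal sketch of such a collector; the elasticsearch address and index name below are only examples, and binding to port 514 normally requires running logstash as root:

    input {
      syslog {
        port => 514              # accept syslog messages over TCP and UDP on port 514
        type => "system-syslog"
      }
    }
    output {
      elasticsearch {
        hosts => ["192.168.10.206:9200"]
        index => "system-syslog-%{+YYYY.MM.dd}"
      }
    }

    Remote hosts are then pointed at this server in their rsyslog/syslog configuration.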

    2. Access logs: convert nginx logs to JSON format

    3. Error logs: use a codec plugin:

    https://www.elastic.co/guide/en/logstash/1.5/codec-plugins.html
    input {
      stdin {
        codec => multiline {  # multi-line logs, such as java stack traces
          pattern => "^\s"    # alternative: pattern => ".*\t.*"; lines matching the pattern are merged into the previous line, so one multi-line entry becomes a single event
          what => "previous"
        }
      }
    }

    4. Application logs: use codec => json; if the log is not JSON it has to be matched with grok, which is more troublesome (a sketch follows below). If logs are being dropped, check logstash.log, and also verify that the log lines are valid JSON:

    JSON validation address: http://www.bejson.com/
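
    A minimal grok sketch for a non-JSON log, assuming an Apache/Nginx combined-format access line in the default message field:

    filter {
      grok {
        # parse the raw line into named fields using the built-in pattern
        match => [ "message", "%{COMBINEDAPACHELOG}" ]
      }
    }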

    5. Kibana time zone and time issues: Kibana automatically shifts times by +8 hours according to the browser's time zone. Events written through logstash are handled automatically, but events written by python scripts and the like can end up with incorrect times (see the sketch below)
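
    One way to avoid the offset when a script writes local timestamps is to let a logstash date filter convert them into the UTC @timestamp that Kibana expects; a sketch assuming a hypothetical field named logtime:

    filter {
      date {
        # interpret "logtime" as Asia/Shanghai local time and store it as UTC in @timestamp
        match    => [ "logtime", "yyyy-MM-dd HH:mm:ss" ]
        timezone => "Asia/Shanghai"
      }
    }

    Kibana then converts @timestamp back to the browser's time zone, so the displayed time is correct.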

    6. Show the geographic origin of an IP on a map:

    https://www.elastic.co/guide/en/logstash/1.5/filter-plugins.html

    7. Conditionals:

    input {
      file {
        type => "apache"
        path => "/var/log/apache.log"
      }
      file {
        type => "tomcat"
        path => "/var/log/tomcat.log"
      }
    }
    output {
      if [type] == "apache" {    # if the event type is apache, do the following
        redis {
          data_type => "list"
          key => "system-message-jack"
          host => "192.168.10.205"
          port => "6379"
          db => "0"
        }
      }
      if [type] == "tomcat" {    # if the event type is tomcat, do the following
        redis {
          data_type => "list"
          key => "system-message-tomcat"
          host => "192.168.10.205"
          port => "6379"
          db => "1"              # write to a different redis database
        }
      }
    }
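
    On the indexer side the same [type] field (set by the file inputs above) can be used to route each redis list into its own elasticsearch index. A sketch; the elasticsearch address and index names are only examples:

    input {
      redis {
        data_type => "list"
        key => "system-message-jack"    # the list the apache events were written to
        host => "192.168.10.205"
        port => "6379"
        db => "0"
      }
      redis {
        data_type => "list"
        key => "system-message-tomcat"  # the list the tomcat events were written to
        host => "192.168.10.205"
        port => "6379"
        db => "1"
      }
    }
    output {
      if [type] == "apache" {
        elasticsearch {
          hosts => ["192.168.10.206:9200"]
          index => "apache-accesslog-%{+YYYY.MM.dd}"
        }
      }
      if [type] == "tomcat" {
        elasticsearch {
          hosts => ["192.168.10.206:9200"]
          index => "tomcat-accesslog-%{+YYYY.MM.dd}"
        }
      }
    }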

     For nginx it is best to set a buffer size for the access log, e.g. 64k

    In Kibana, the elasticsearch index (key) has to be added before its data can be searched

    Search syntax: search key:value pairs directly, e.g. a:b, and combine conditions with AND / OR / NOT; ranges are written like [200 TO 299]

    8. Test whether the logstash configuration file syntax is correct:

    8.1: Result when the configuration is correct:

    [root@elk-server2 conf.d]# /etc/init.d/logstash configtest
    Configuration OK

    8.2: Result when there is a syntax error:

    [root@elk-server2 tianqi]# /etc/init.d/logstash configtest
    The given configuration is invalid. Reason: Expected one of #, {, } at line 17, column 53 (byte 355) after output {
        if  [type] == "nginx3"  {
            elasticsearch {
                    hosts => ["192.168.0.251:9200"]
                    index => "logstash-newsmart-nginx3-" {:level=>:fatal}  # the exact location of the syntax error is indicated

    Section 3: Tomcat logs:

    1. Tomcat access logs are not JSON by default, so when logstash parses them there are no keys and values; we can therefore define the access log format as JSON by adjusting the AccessLogValve pattern in conf/server.xml:

    <Valve className="org.apache.catalina.valves.AccessLogValve" directory="logs"
           prefix="localhost_access_log." suffix=".log"
           pattern="{&quot;client&quot;:&quot;%h&quot;,  &quot;client user&quot;:&quot;%l&quot;,   &quot;authenticated&quot;:&quot;%u&quot;,   &quot;access time&quot;:&quot;%t&quot;,     &quot;method&quot;:&quot;%r&quot;,   &quot;status&quot;:&quot;%s&quot;,  &quot;send bytes&quot;:&quot;%b&quot;,  &quot;Query string&quot;:&quot;%q&quot;,  &quot;partner&quot;:&quot;%{Referer}i&quot;,  &quot;Agent version&quot;:&quot;%{User-Agent}i&quot;}"/>

    2. The resulting log entries look like this:

    {"client":"180.95.129.206",  "client user":"-",   "authenticated":"-",   "access time":"[20/Apr/2016:03:47:40 +0000]",     "method":"GET /image/android_logo.png HTTP/1.1",   "status":"200",  "send bytes":"1915",  "Query string":"",  "partner":"http://mobile.weathercn.com/index.do?id=101160101&partner=1000001003",  "Agent version":"Mozilla/5.0 (Linux; U; Android 5.1.1; zh-cn; NX510J Build/LMY47V) AppleWebKit/537.36 (KHTML, like Gecko)Version/4.0 Chrome/37.0.0.0 MQQBrowser/6.6 Mobile Safari/537.36"}

    3. Validate online that the output is legal JSON:

    Address: http://www.bejson.com/ — paste one complete log line into the validation box and click validate; the result should confirm it is valid JSON.
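
    With the access log now written as JSON, a client-side logstash can ship it with a plain file input and the json codec. A minimal sketch; the log path is assumed from the Valve settings above, and the redis key/type are chosen to match the GeoIP example in Section 5:

    input {
      file {
        path => "/usr/local/tomcat/logs/localhost_access_log.*.log"  # assumed tomcat install path
        type => "mobile-tomcat"
        codec => "json"          # each line is already a complete JSON document
      }
    }
    output {
      redis {
        data_type => "list"
        key => "mobile-tomcat-access-log"
        host => "192.168.0.251"
        port => "6379"
        db => "0"
      }
    }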

    Section 4: Nginx log format handling:

    1. Edit the nginx.conf configuration file and define a custom log format:

    [root@node5 ~]# vim  /etc/nginx/nginx.conf

    2. Add the following content:

        log_format logstash_json '{"@timestamp":"$time_iso8601",'
            '"host":"$server_addr",'
            '"clientip":"$remote_addr",'
            '"size":$body_bytes_sent,'
            '"responsetime":$request_time,'
            '"upstreamtime":"$upstream_response_time",'
            '"upstreamhost":"$upstream_addr",'
            '"http_host":"$host",'
            '"url":"$uri",'
            '"domain":"$host",'
            '"xff":"$http_x_forwarded_for",'
            '"referer":"$http_referer",'
            '"agent":"$http_user_agent",'
            '"status":"$status"}';

    3. Edit the virtual host configuration:

    [root@node5 ~]# grep -v "#"  /etc/nginx/conf.d/locathost.conf  | grep -v "^$" 
    server {
        listen       9009;  # listening port
        server_name  www.a.com;  # server name

        access_log  /var/log/nginx/json.access.log  logstash_json;  # write the log to /var/log/nginx/json.access.log using the logstash_json format defined in the main nginx.conf
        include /etc/nginx/default.d/*.conf;
        location / {
            root   /usr/share/nginx/html;
            index  index.html index.htm;
        }
        error_page  404              /404.html;
        location = /404.html {
            root   /usr/share/nginx/html;
        }
        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   /usr/share/nginx/html;
        }
    }

    4. Restart nginx and check that the log is now in JSON format:

    [root@node5 ~]# tail /var/log/nginx/json.access.log 
    {"@timestamp":"2016-04-12T22:15:19+08:00","host":"192.168.10.205","clientip":"192.168.10.205","size":3698,"responsetime":0.000,"upstreamtime":"-","upstreamhost":"-","http_host":"192.168.10.205","url":"/index.html","domain":"192.168.10.205","xff":"-","referer":"-","agent":"ApacheBench/2.3","status":"200"}
    {"@timestamp":"2016-04-12T22:15:19+08:00","host":"192.168.10.205","clientip":"192.168.10.205","size":3698,"responsetime":0.000,"upstreamtime":"-","upstreamhost":"-","http_host":"192.168.10.205","url":"/index.html","domain":"192.168.10.205","xff":"-","referer":"-","agent":"ApacheBench/2.3","status":"200"}
    {"@timestamp":"2016-04-12T22:15:19+08:00","host":"192.168.10.205","clientip":"192.168.10.205","size":3698,"responsetime":0.000,"upstreamtime":"-","upstreamhost":"-","http_host":"192.168.10.205","url":"/index.html","domain":"192.168.10.205","xff":"-","referer":"-","agent":"ApacheBench/2.3","status":"200"}
    {"@timestamp":"2016-04-12T22:15:19+08:00","host":"192.168.10.205","clientip":"192.168.10.205","size":3698,"responsetime":0.000,"upstreamtime":"-","upstreamhost":"-","http_host":"192.168.10.205","url":"/index.html","domain":"192.168.10.205","xff":"-","referer":"-","agent":"ApacheBench/2.3","status":"200"}
    {"@timestamp":"2016-04-12T22:15:19+08:00","host":"192.168.10.205","clientip":"192.168.10.205","size":3698,"responsetime":0.001,"upstreamtime":"-","upstreamhost":"-","http_host":"192.168.10.205","url":"/index.html","domain":"192.168.10.205","xff":"-","referer":"-","agent":"ApacheBench/2.3","status":"200"}
    {"@timestamp":"2016-04-12T22:15:19+08:00","host":"192.168.10.205","clientip":"192.168.10.205","size":3698,"responsetime":0.000,"upstreamtime":"-","upstreamhost":"-","http_host":"192.168.10.205","url":"/index.html","domain":"192.168.10.205","xff":"-","referer":"-","agent":"ApacheBench/2.3","status":"200"}

    5. Validate the log format online:

    Validation address: http://www.bejson.com/

    Section 5: Charting

    Display per-IP access counts on a map:

    1. Download a Filebeat index template to the home directory on the elasticsearch server:

    cd ~
    curl -O https://gist.githubusercontent.com/thisismitch/3429023e8438cc25b86c/raw/d8c479e2a1adcea8b1fe86570e42abab0f10f364/filebeat-index-template.json # this is a template file

    2. Load the template:

    [root@elk-server1 ~]# curl -XPUT 'http://192.168.0.251:9200/_template/filebeat?pretty' -d@filebeat-index-template.json  # 192.168.0.251 is the IP address elasticsearch listens on
    {
      "acknowledged" : true  #一定要返回true才表示成功
    }

    3. Download the GeoIP database file:

    [root@elk-server1 ~]# cd /etc/logstash/
    [root@elk-server1 logstash]# curl -O "http://geolite.maxmind.com/download/geoip/database/GeoLiteCity.dat.gz"
    [root@elk-server1 logstash]# gunzip GeoLiteCity.dat.gz
    [root@elk-server1 logstash]# ls
    conf.d  GeoLiteCity.dat  # confirm the file exists

    4. Configure logstash to use GeoIP:

    [root@elk-server1 logstash]# vim /etc/logstash/conf.d/11-mobile-tomcat-access.conf  # logstash config files must end in .conf

    input {
            redis {
                    data_type => "list"
                    key => "mobile-tomcat-access-log"
                    host => "192.168.0.251"
                    port => "6379"
                    db => "0"
                    codec  => "json"
            }
    }
    
    # the input section reads the access logs that the client-side logstash has already processed and pushed into redis
    
    filter {
            if [type] == "mobile-tomcat" {
            geoip {
                    source => "client"  # "client" is the field name under which the client-side logstash stored the public IP; it must match the actual field name, since the IP to geo-locate is read from it
                    target => "geoip"
                    database => "/etc/logstash/GeoLiteCity.dat"
                    add_field => [ "[geoip][coordinates]", "%{[geoip][longitude]}" ]
                    add_field => [ "[geoip][coordinates]", "%{[geoip][latitude]}"  ]
            }
        mutate {
          convert => [ "[geoip][coordinates]", "float"]
            }
        }
    }
    
    
    output { 
            if [type] == "mobile-tomcat" {
            elasticsearch {
                    hosts => ["192.168.0.251"]
                    manage_template => true
                    index => "logstash-mobile-tomcat-access-log-%{+YYYY.MM.dd}" # the index name must begin with "logstash-", otherwise the map will fail with errors such as the geoip type not being found
                    flush_size => 2000
                    idle_flush_time => 10
                    }
            }
    }

    5. In the Kibana UI add the new index, then go to Visualize ----> Tile map ----> From a new search ----> Select an index pattern ---> choose the index created above ----> Geo coordinates, then click the green run button:
