ELK: Displaying the Geolocation of NGINX Access IPs on a Map


    1. Set the NGINX log format

    [root@zabbix_server ~]# vim /etc/nginx/nginx.conf 
        log_format access_json_log  '{"@timestamp":"$time_local",'
                                      '"http_host":"$http_host",'
                                      '"clinetip":"$remote_addr",'
                                      '"request":"$request",'
                                      '"status":"$status",'
                                      '"size":"$body_bytes_sent",'
                                      '"upstream_addr":"$upstream_addr",'
                                      '"upstream_status":"$upstream_status",'
                                      '"upstream_response_time":"$upstream_response_time",'
                                      '"request_time":"$request_time",'
                                      '"http_referer":"$http_referer",'
                                      '"http_user_agent":"$http_user_agent",'
                                      '"http_x_forwarded_for":"$http_x_forwarded_for"}';
        
        access_log  /var/log/nginx/access.log  access_json_log;
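
    After reloading NGINX (for example: nginx -t && systemctl reload nginx), each access log entry is written as a single JSON object. Note that the key holding the client address is spelled "clinetip" in this log format; the source field configured in the Logstash geoip filter later on must match that spelling exactly. An illustrative log line (the values below are made up) looks like this:

    {"@timestamp":"20/Nov/2019:17:20:01 +0800","http_host":"zabbix.9500.cn","clinetip":"219.239.8.14","request":"GET / HTTP/1.1","status":"200","size":"3502","upstream_addr":"127.0.0.1:9000","upstream_status":"200","upstream_response_time":"0.100","request_time":"0.120","http_referer":"-","http_user_agent":"curl/7.29.0","http_x_forwarded_for":"-"}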

    2. Download the GeoLite database into the Logstash directory

    geoip is a Logstash filter plugin that looks up an IP address in a GeoIP database to obtain its geographic location.

    [root@server-1 logstash]# wget http://geolite.maxmind.com/download/geoip/database/GeoLite2-City.tar.gz 
    --2019-11-20 10:23:55--  http://geolite.maxmind.com/download/geoip/database/GeoLite2-City.tar.gz
    Resolving geolite.maxmind.com (geolite.maxmind.com)... 104.17.200.89, 104.17.201.89, 2606:4700::6811:c859, ...
    Connecting to geolite.maxmind.com (geolite.maxmind.com)|104.17.200.89|:80... connected.
    HTTP request sent, awaiting response... 200 OK
    Length: 29963029 (29M) [application/gzip]
    Saving to: 'GeoLite2-City.tar.gz'

    35% [===========================>            ] 10,599,312  24.1KB/s   in 11m 30s

    2019-11-20 10:35:26 (15.0 KB/s) - Connection closed at byte 10599312. Retrying.

    --2019-11-20 10:35:27--  (try: 2)  http://geolite.maxmind.com/download/geoip/database/GeoLite2-City.tar.gz
    Connecting to geolite.maxmind.com (geolite.maxmind.com)|104.17.200.89|:80... connected.
    HTTP request sent, awaiting response... 206 Partial Content
    Length: 29963029 (29M), 19363717 (18M) remaining [application/gzip]
    Saving to: 'GeoLite2-City.tar.gz'

    100%[++++++++++++++++++++++++++++====================================================>] 29,963,029  15.2KB/s   in 9m 9s

    2019-11-20 10:44:37 (34.4 KB/s) - 'GeoLite2-City.tar.gz' saved [29963029/29963029]

    3. Extract the archive

    [root@server-1 logstash]# tar -zxvf  GeoLite2-City.tar.gz 
    GeoLite2-City_20191119/
    GeoLite2-City_20191119/LICENSE.txt
    GeoLite2-City_20191119/GeoLite2-City.mmdb
    GeoLite2-City_20191119/COPYRIGHT.txt
    GeoLite2-City_20191119/README.txt
    [root@server-1 logstash]# 
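
    The extracted GeoLite2-City.mmdb file is what the Logstash geoip filter reads; it is worth confirming its exact path now (assuming the archive was extracted under /etc/logstash as in the config paths below), since it must match the database setting used later:

    [root@server-1 logstash]# ls -l /etc/logstash/GeoLite2-City_20191119/GeoLite2-City.mmdb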

    4. Set up the Logstash configuration file

    Create a new configuration file named nginx.conf under /etc/logstash/conf.d:

    [root@server-1 conf.d]# vim /etc/logstash/conf.d/nginx.conf 
    input {
      beats {
        port => 10001
      }
    }
    
    filter {
      geoip {
        source => "clinetip"
        target => "geoip"
        database => "/etc/logstash/GeoLite2-City_20191119/GeoLite2-City.mmdb"
        add_field => ["[geoip][coordinates]","%{[geoip][longitude]}"]
        add_field => ["[geoip][coordinates]","%{[geoip][latitude]}"]
      }
    }
    
    output {
      stdout {
        codec => rubydebug
      }
    }

    source: the source field containing the IP address to look up (here, the "clinetip" key from the NGINX JSON log)

    target: the field the lookup results are stored in; defaults to geoip

    database: the path to the GeoIP database file

    add_field: adds the longitude and latitude as extra coordinate fields
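
    The geoip lookup itself can be sanity-checked in isolation before wiring it to Beats traffic. This is not part of the original walkthrough, just a minimal throwaway sketch (the file name geoip-test.conf is arbitrary) that looks up whatever IP address you type on stdin against the same database:

    input { stdin { } }

    filter {
      geoip {
        # the stdin input puts each typed line into the "message" field
        source => "message"
        database => "/etc/logstash/GeoLite2-City_20191119/GeoLite2-City.mmdb"
      }
    }

    output { stdout { codec => rubydebug } }

    Running logstash -f geoip-test.conf and typing a public IP such as 219.239.8.14 should print an event with a populated geoip block.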

    5. Test the configuration file

    [root@server-1 conf.d]# logstash -f /etc/logstash/conf.d/nginx.conf 
    WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
    Could not find log4j2 configuration at path /usr/share/logstash/config/log4j2.properties. Using default config which logs errors to the console
    [INFO ] 2019-11-20 17:17:04.916 [main] scaffold - Initializing module {:module_name=>"fb_apache", :directory=>"/usr/share/logstash/modules/fb_apache/configuration"}
    [INFO ] 2019-11-20 17:17:04.931 [main] scaffold - Initializing module {:module_name=>"netflow", :directory=>"/usr/share/logstash/modules/netflow/configuration"}
    [WARN ] 2019-11-20 17:17:05.931 [LogStash::Runner] multilocal - Ignoring the 'pipelines.yml' file because modules or command line options are specified
    [INFO ] 2019-11-20 17:17:06.292 [LogStash::Runner] runner - Starting Logstash {"logstash.version"=>"6.2.4"}
    [INFO ] 2019-11-20 17:17:06.542 [Api Webserver] agent - Successfully started Logstash API endpoint {:port=>9600}
    [INFO ] 2019-11-20 17:17:08.302 [Ruby-0-Thread-1: /usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/task.rb:22] pipeline - Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50}
    [INFO ] 2019-11-20 17:17:08.329 [[main]-pipeline-manager] geoip - Using geoip database {:path=>"/etc/logstash/GeoLite2-City_20191119/GeoLite2-City.mmdb"}
    [INFO ] 2019-11-20 17:17:09.704 [[main]-pipeline-manager] beats - Beats inputs: Starting input listener {:address=>"0.0.0.0:10001"}
    [INFO ] 2019-11-20 17:17:09.911 [Ruby-0-Thread-1: /usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/task.rb:22] pipeline - Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0x17715055@/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:247 run>"}
    [INFO ] 2019-11-20 17:17:09.936 [Ruby-0-Thread-1: /usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/task.rb:22] agent - Pipelines running {:count=>1, :pipelines=>["main"]}
    [INFO ] 2019-11-20 17:17:09.948 [[main]<beats] Server - Starting server on port: 10001
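
    The Beats input is now listening on 0.0.0.0:10001. For events to arrive, a Filebeat shipper on the NGINX host (172.28.18.75) has to be sending /var/log/nginx/access.log to this port. That configuration is not shown in the original post; a minimal filebeat.yml sketch reconstructed from the fields visible in the events below (Filebeat 6.x syntax, with JSON decoding enabled so the NGINX JSON keys end up at the top level of each event) could look like this:

    filebeat.prospectors:
    - type: log
      enabled: true
      paths:
        - /var/log/nginx/access.log
      # decode each JSON log line so its keys (clinetip, status, ...) become event fields
      json.keys_under_root: true
      json.overwrite_keys: true
      fields:
        log_topics: nginx-172.28.18.75

    output.logstash:
      hosts: ["172.28.18.69:10001"]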

    Open a new SSH session and check the listening ports of the Java processes:

    [root@server-1 conf.d]# netstat -tunlp|grep java
    tcp6       0      0 172.28.18.69:9200       :::*                    LISTEN      18608/java          
    tcp6       0      0 :::10001                :::*                    LISTEN      16856/java          
    tcp6       0      0 172.28.18.69:9300       :::*                    LISTEN      18608/java          
    tcp6       0      0 127.0.0.1:9600          :::*                    LISTEN      16856/java          
    tcp6       0      0 172.28.18.69:9600       :::*                    LISTEN      15599/java          
    tcp6       0      0 :::514                  :::*                    LISTEN      15599/java 

    Port 10001 is now being listened on, so the startup succeeded. After a moment the console prints the NGINX log events it receives, like the following:

                  "http_referer" => "http://zabbix.9500.cn/zabbix.php?action=dashboard.view&ddreset=1",
                 "upstream_addr" => "127.0.0.1:9000",
                      "clinetip" => "219.239.8.14",
                        "source" => "/var/log/nginx/access.log",
                          "beat" => {
                "name" => "zabbix_server.jinglong",
             "version" => "6.2.4",
            "hostname" => "zabbix_server.jinglong"
        },
                        "fields" => {
            "log_topics" => "nginx-172.28.18.75"
        },
                      "@version" => "1",
               "upstream_status" => "200",
               "http_user_agent" => "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/49.0.2623.221 Safari/537.36 SE 2.X MetaSr 1.0",
                        "offset" => 20686132,
                    "prospector" => {
            "type" => "log"
        },
                  "request_time" => "0.639",
                        "status" => "200",
                          "host" => "zabbix_server.jinglong"
    }
    {
        "upstream_response_time" => "0.828",
                    "@timestamp" => 2019-11-20T09:31:32.368Z,
                     "http_host" => "zabbix.9500.cn",
                          "tags" => [
            [0] "beats_input_raw_event"
        ],
                       "request" => "GET /map.php?sysmapid=8&severity_min=0&sid=126eba41a3be1fb9&curtime=1574242326679&uniqueid=BCYQV&used_in_widget=1 HTTP/1.1",
          "http_x_forwarded_for" => "-",
                          "size" => "3502",
                         "geoip" => {
                        "ip" => "219.239.8.14",
                 "longitude" => 116.3883,
             "country_code2" => "CN",
               "region_code" => "BJ",
             "country_code3" => "CN",
            "continent_code" => "AS",
                  "timezone" => "Asia/Shanghai",
                  "latitude" => 39.9289,
              "country_name" => "China",
               "region_name" => "Beijing",
                  "location" => {
                "lon" => 116.3883,
                "lat" => 39.9289
            }
        },

    The geoip data is now visible, including the longitude/latitude, country code, country name, and region name.

    Modify the configuration file to keep only the fields we need:

    [root@server-1 conf.d]# vim nginx.conf
    filter {
      geoip {
        source => "clinetip"
        database => "/etc/logstash/GeoLite2-City_20191119/GeoLite2-City.mmdb"
        fields => ["country_name","region_name","longitude","latitude"]
      }
    }

    fields: limits the geoip output to only the listed fields

    Save, quit, and restart Logstash with the configuration file. The output now looks like this:

                 "request" => "POST /elasticsearch/_msearch HTTP/1.1",
               "upstream_status" => "200",
                        "fields" => {
            "log_topics" => "nginx-172.28.18.75"
        },
                          "size" => "24668",
                          "beat" => {
                "name" => "zabbix_server.jinglong",
            "hostname" => "zabbix_server.jinglong",
             "version" => "6.2.4"
        },
                  "request_time" => "0.159",
                        "offset" => 20983233,
                      "@version" => "1",
                 "upstream_addr" => "172.28.18.69:5601",
                     "http_host" => "elk.9500.cn"
    }
    {
                         "geoip" => {
                "latitude" => 39.9289,
             "region_name" => "Beijing",
               "longitude" => 116.3883,
            "country_name" => "China"
        },
               "http_user_agent" => "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/49.0.2623.221 Safari/537.36 SE 2.X MetaSr 1.0",
                    "prospector" => {
            "type" => "log"
        },

    Now the geoip block contains only the fields we specified. Next, modify the configuration file so the data is sent to Elasticsearch:

    input {
      beats {
        port => 10001
      }
    }
    
    filter {
      geoip {
        source => "clinetip"
        database => "/etc/logstash/GeoLite2-City_20191119/GeoLite2-City.mmdb"
        fields => ["country_name","region_name","longitude","latitude"]
      }
    }
    
    output {
      elasticsearch {
        hosts => ["172.28.18.69:9200"]
        index => "nginx-172.28.18.75-%{+YYYY.MM.dd}"
      }
    }

    Start Logstash with the nginx.conf configuration file:

    [root@server-1 conf.d]# logstash -f /etc/logstash/conf.bak/nginx.conf 
    WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
    Could not find log4j2 configuration at path /usr/share/logstash/config/log4j2.properties. Using default config which logs errors to the console
    [INFO ] 2019-11-21 09:00:40.934 [main] scaffold - Initializing module {:module_name=>"fb_apache", :directory=>"/usr/share/logstash/modules/fb_apache/configuration"}
    [INFO ] 2019-11-21 09:00:40.965 [main] scaffold - Initializing module {:module_name=>"netflow", :directory=>"/usr/share/logstash/modules/netflow/configuration"}
    [WARN ] 2019-11-21 09:00:41.962 [LogStash::Runner] multilocal - Ignoring the 'pipelines.yml' file because modules or command line options are specified
    [INFO ] 2019-11-21 09:00:42.365 [LogStash::Runner] runner - Starting Logstash {"logstash.version"=>"6.2.4"}
    [INFO ] 2019-11-21 09:00:42.637 [Api Webserver] agent - Successfully started Logstash API endpoint {:port=>9600}
    [INFO ] 2019-11-21 09:00:44.436 [Ruby-0-Thread-1: /usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/task.rb:22] pipeline - Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50}
    [INFO ] 2019-11-21 09:00:45.078 [[main]-pipeline-manager] elasticsearch - Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://172.28.18.69:9200/]}}
    [INFO ] 2019-11-21 09:00:45.089 [[main]-pipeline-manager] elasticsearch - Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://172.28.18.69:9200/, :path=>"/"}
    [WARN ] 2019-11-21 09:00:45.337 [[main]-pipeline-manager] elasticsearch - Restored connection to ES instance {:url=>"http://172.28.18.69:9200/"}
    [INFO ] 2019-11-21 09:00:45.856 [[main]-pipeline-manager] elasticsearch - ES Output version determined {:es_version=>6}
    [WARN ] 2019-11-21 09:00:45.857 [[main]-pipeline-manager] elasticsearch - Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>6}
    [INFO ] 2019-11-21 09:00:45.874 [[main]-pipeline-manager] elasticsearch - Using mapping template from {:path=>nil}
    [INFO ] 2019-11-21 09:00:45.878 [[main]-pipeline-manager] elasticsearch - Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
    [INFO ] 2019-11-21 09:00:45.897 [[main]-pipeline-manager] elasticsearch - New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//172.28.18.69:9200"]}
    [INFO ] 2019-11-21 09:00:45.902 [[main]-pipeline-manager] geoip - Using geoip database {:path=>"/etc/logstash/GeoLite2-City_20191119/GeoLite2-City.mmdb"}
    [INFO ] 2019-11-21 09:00:46.712 [[main]-pipeline-manager] beats - Beats inputs: Starting input listener {:address=>"0.0.0.0:10001"}
    [INFO ] 2019-11-21 09:00:46.846 [Ruby-0-Thread-1: /usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/task.rb:22] pipeline - Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0x1b610349@/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:247 run>"}
    [INFO ] 2019-11-21 09:00:46.909 [Ruby-0-Thread-1: /usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/task.rb:22] agent - Pipelines running {:count=>1, :pipelines=>["main"]}
    [INFO ] 2019-11-21 09:00:46.911 [[main]<beats] Server - Starting server on port: 10001
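
    Once a few NGINX requests have come in, the new index should exist in Elasticsearch. A quick way to check (host and index name as configured above):

    [root@server-1 conf.d]# curl -s 'http://172.28.18.69:9200/_cat/indices?v' | grep nginx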

    6. Configure the Kibana visualization

    Open Kibana and create an index pattern.

    Click Next and create the index pattern. Once it is created, you can see the list of fields for the index, which includes the geoip fields.

    In "Discover", start a new search and select the index pattern just created; the geoip-related fields are now visible.

    Next, display the data on a map.

    In "Visualize", click "Create a visualization" and choose "Coordinate Map".

    Select the index pattern you created, then set the buckets aggregation type to "Geohash".

    At this point an error appears:

    It says no field of type geo_point was found. The Logstash configuration needs to be modified to include the location field:

    input {
      beats {
        port => 10001
      }
    }
    
    filter {
      geoip {
        source => "clinetip"
        database => "/etc/logstash/GeoLite2-City_20191119/GeoLite2-City.mmdb"
        fields => ["country_name","region_name","location"]
      }
    }
    
    output {
      elasticsearch {
        hosts => ["172.28.18.69:9200"]
        index => "nginx-172.28.18.75-%{+YYYY.MM.dd}"
      }
    }

    Restart Logstash with the configuration file and delete the Elasticsearch index:

    [root@server-1 conf.d]# curl -XDELETE http://172.28.18.69:9200/nginx-172.28.18.75-*

    Restart Kibana:

    [root@server-1 conf.d]# systemctl restart kibana

    Open Kibana and recreate the index pattern; the geoip.location field is now present.

    Rebuild the coordinate map, but the same error appears.

    After some searching, the cause turned out to be the output index name: with the default Logstash index template (which, as the startup log above shows, applies to "logstash-*"), the index name must start with logstash- for the location field to be mapped as geo_point. So modify the Logstash configuration again:

    input {
      beats {
        port => 10001
      }
    }
    
    filter {
      geoip {
        source => "clinetip"
        database => "/etc/logstash/GeoLite2-City_20191119/GeoLite2-City.mmdb"
        fields => ["country_name","region_name","location"]
        #add_field => [ "[geoip][coordinates]", "%{[geoip][longitude]}" ]
        #add_field => [ "[geoip][coordinates]", "%{[geoip][latitude]}" ]
      }
    }
    
    output {
      elasticsearch {
        hosts => ["172.28.18.69:9200"]
        index => "logstash-nginx-172.28.18.75-%{+YYYY.MM.dd}"
      }
    }

    Change the index name to logstash-nginx-172.28.18.75-%{+YYYY.MM.dd}, restart Logstash with the configuration file, and delete the previous index:

    [root@server-1 conf.d]# logstash -f /etc/logstash/conf.bak/nginx.conf 
    curl -XDELETE http://172.28.18.69:9200/nginx-172.28.18.75-2019.11.21

    Open Kibana, delete the previous index pattern, and create a new one.

    This time the geoip.location field has the type geo_point. Problem solved; rebuild the coordinate map.

    The data is now displayed on the map successfully.
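
    The mapping can also be verified outside Kibana, directly against Elasticsearch (host and index name as configured above; the date suffix will differ):

    [root@server-1 conf.d]# curl -s 'http://172.28.18.69:9200/logstash-nginx-172.28.18.75-*/_mapping?pretty' | grep -A 2 '"location"'

    The location property should now be reported with "type" : "geo_point".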

    7. Use AutoNavi (Amap) map tiles to display the map in Chinese

    Edit the Kibana configuration file and add the following line at the end:

    tilemap.url: 'http://webrd02.is.autonavi.com/appmaptile?lang=zh_cn&size=1&scale=1&style=7&x={x}&y={y}&z={z}'

    [root@server-1 conf.d]# vim /etc/kibana/kibana.yml
    # The default locale. This locale can be used in certain circumstances to substitute any missing
    # translations.
    #i18n.defaultLocale: "en"
    
    tilemap.url: 'http://webrd02.is.autonavi.com/appmaptile?lang=zh_cn&size=1&scale=1&style=7&x={x}&y={y}&z={z}'

    Restart Kibana:

    [root@server-1 conf.d]# systemctl restart kibana

    Refresh the Kibana page and the map is now displayed with Chinese tiles.

  • Original post: https://www.cnblogs.com/sky-cheng/p/11899316.html