    ELK Log Analysis System
    
                1.0 Introduction to ELK
                1.1 ELK installation prerequisites
                1.2 Installing es
                1.3 Configuring es
                1.4 Testing es
                1.5 Installing Kibana
                1.6 Installing logstash
                1.7 Configuring logstash to parse rsyslog
                1.8 Viewing logs in kibana
                1.9 Collecting nginx logs
                2.0 Collecting logs with beats
    
    1.0 Introduction to ELK
    
    Official site: https://www.elastic.co/cn/
    
    Chinese guide: https://www.gitbook.com/book/chenryn/elk-stack-guide-cn/details
    
    Since version 5.0 the ELK Stack has been rebranded as the Elastic Stack: Elastic Stack == ELK Stack + Beats
    
    The ELK Stack consists of ElasticSearch, Logstash, and Kibana.
    
    ElasticSearch is a search engine used to search, analyze, and store logs. It is distributed: it scales out horizontally, discovers nodes automatically, and shards indices automatically. In short, it is very powerful. Documentation: https://www.elastic.co/guide/cn/elasticsearch/guide/current/index.html
    
    Logstash collects logs, parses them into JSON, and hands them off to ElasticSearch.
    
    Kibana is a data visualization component that presents the processed results through a web interface.
    
    Beats serves here as a lightweight log shipper; the Beats family actually has five members.
    Early ELK architectures used Logstash to both collect and parse logs, but Logstash is comparatively heavy on memory, CPU, and I/O. Compared with Logstash, the CPU and memory footprint of Beats is almost negligible.
    
    X-Pack is a paid extension for the Elastic Stack that bundles security, alerting, monitoring, reporting, and graph features in a single package.
    
    1.1 ELK installation prerequisites
    
    Environment: 192.168.137.30, 192.168.137.40, 192.168.137.45
    // Install Elasticsearch (abbreviated as es below) and JDK 8 on all three machines, and set up hosts entries (see the sketch below)
    Master node:
    192.168.137.30
    Data nodes:
    192.168.137.40, 192.168.137.45
    Install the JDK on every node:
    yum install -y java-1.8.0-openjdk
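    
    The hosts entries are not spelled out above; a minimal /etc/hosts sketch, assuming the hostnames that appear in the shell prompts later in this guide (adjust to your own naming):
    
    # /etc/hosts -- identical on all three machines
    192.168.137.30  linux-node3  linux-node3.com
    192.168.137.40  linux-node4  linux-node4.com
    192.168.137.45  linux-05     linux-05.com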
    
    1.2 Installing es
    
    Official documentation: https://www.elastic.co/guide/en/elastic-stack/current/installing-elastic-stack.html
    Run the following on all three machines:
    [root@linux-node3 ~]# rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
    [root@linux-node3 ~]# cat /etc/yum.repos.d/elastic.repo
    
    [elasticsearch-6.x]
    name=Elasticsearch repository for 6.x packages
    baseurl=https://artifacts.elastic.co/packages/6.x/yum
    gpgcheck=1
    gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
    enabled=1
    autorefresh=1
    type=rpm-md
    
    [root@linux-node3 ~]# yum install -y elasticsearch
    Or install from the downloaded RPM instead:
    [root@linux-node3 ~]# wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.0.0.rpm
    [root@linux-node3 ~]# rpm -ivh elasticsearch-6.0.0.rpm
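    
    Either route leaves the same package installed; a quick sanity check (version string assumed from the RPM above):
    [root@linux-node3 ~]# rpm -q elasticsearch    # expect elasticsearch-6.0.0-1
    [root@linux-node3 ~]# systemctl status elasticsearch    # installed but not yet started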
    
    1.3 Configuring es
    
    The elasticsearch configuration files are /etc/elasticsearch/elasticsearch.yml and /etc/sysconfig/elasticsearch.
    Reference: https://www.elastic.co/guide/en/elasticsearch/reference/6.0/rpm.html
    On the master node (192.168.137.30), edit the configuration file:
    
    [root@linux-node3 ~]# cat /etc/elasticsearch/elasticsearch.yml 
    # ======================== Elasticsearch Configuration =========================
    #
    # NOTE: Elasticsearch comes with reasonable defaults for most settings.
    #       Before you set out to tweak and tune the configuration, make sure you
    #       understand what are you trying to accomplish and the consequences.
    #
    # The primary way of configuring a node is via this file. This template lists
    # the most important settings you may want to configure for a production cluster.
    #
    # Please consult the documentation for further information on configuration options:
    # https://www.elastic.co/guide/en/elasticsearch/reference/index.html
    #
    # ---------------------------------- Cluster -----------------------------------
    #
    # Use a descriptive name for your cluster:
    #
    #cluster.name: my-application
    cluster.name: linux-node3.com
    #
    # ------------------------------------ Node ------------------------------------
    #
    # Use a descriptive name for the node:
    #
    #node.name: node-1
    node.name: linux-node3.com
    #
    # Add custom attributes to the node:
    #
    #node.attr.rack: r1
    node.master: true
    node.data: false
    
    #
    # ----------------------------------- Paths ------------------------------------
    #
    # Path to directory where to store the data (separate multiple locations by comma):
    #
    path.data: /var/lib/elasticsearch
    #
    # Path to log files:
    #
    path.logs: /var/log/elasticsearch
    #
    # ----------------------------------- Memory -----------------------------------
    #
    # Lock the memory on startup:
    #
    #bootstrap.memory_lock: true
    #
    # Make sure that the heap size is set to about half the memory available
    # on the system and that the owner of the process is allowed to use this
    # limit.
    #
    # Elasticsearch performs poorly when the system is swapping the memory.
    #
    # ---------------------------------- Network -----------------------------------
    #
    # Set the bind address to a specific IP (IPv4 or IPv6):
    #
    network.host: 192.168.137.30
    #
    # Set a custom port for HTTP:
    #
    #http.port: 9200
    #
    # For more information, consult the network module documentation.
    #
    # --------------------------------- Discovery ----------------------------------
    #
    # Pass an initial list of hosts to perform discovery when new node is started:
    # The default list of hosts is ["127.0.0.1", "[::1]"]
    #
    #discovery.zen.ping.unicast.hosts: ["host1", "host2"]
    discovery.zen.ping.unicast.hosts: ["192.168.137.30", "192.168.137.40", "192.168.137.45"]
    #
    # Prevent the "split brain" by configuring the majority of nodes (total number of master-eligible nodes / 2 + 1):
    #
    #discovery.zen.minimum_master_nodes: 
    #
    # For more information, consult the zen discovery module documentation.
    #
    # ---------------------------------- Gateway -----------------------------------
    #
    # Block initial recovery after a full cluster restart until N nodes are started:
    #
    #gateway.recover_after_nodes: 3
    #
    # For more information, consult the gateway module documentation.
    #
    # ---------------------------------- Various -----------------------------------
    #
    # Require explicit names when deleting indices:
    #
    #action.destructive_requires_name: true
    [root@linux-node3 ~]#
    
    Modify the data-node configuration in the same way:
    
    [root@linux-node4 ~]# cat /etc/elasticsearch/elasticsearch.yml 
    # ======================== Elasticsearch Configuration =========================
    #
    # NOTE: Elasticsearch comes with reasonable defaults for most settings.
    #       Before you set out to tweak and tune the configuration, make sure you
    #       understand what are you trying to accomplish and the consequences.
    #
    # The primary way of configuring a node is via this file. This template lists
    # the most important settings you may want to configure for a production cluster.
    #
    # Please consult the documentation for further information on configuration options:
    # https://www.elastic.co/guide/en/elasticsearch/reference/index.html
    #
    # ---------------------------------- Cluster -----------------------------------
    #
    # Use a descriptive name for your cluster:
    #
    #cluster.name: my-application
    cluster.name: linux-node3.com
    #
    # ------------------------------------ Node ------------------------------------
    #
    # Use a descriptive name for the node:
    #
    #node.name: node-1
    node.name: linux-node4.com
    #
    # Add custom attributes to the node:
    #
    #node.attr.rack: r1
    node.master: false
    node.data: true
    
    #
    # ----------------------------------- Paths ------------------------------------
    #
    # Path to directory where to store the data (separate multiple locations by comma):
    #
    path.data: /var/lib/elasticsearch
    #
    # Path to log files:
    #
    path.logs: /var/log/elasticsearch
    #
    # ----------------------------------- Memory -----------------------------------
    #
    # Lock the memory on startup:
    #
    #bootstrap.memory_lock: true
    #
    # Make sure that the heap size is set to about half the memory available
    # on the system and that the owner of the process is allowed to use this
    # limit.
    #
    # Elasticsearch performs poorly when the system is swapping the memory.
    #
    # ---------------------------------- Network -----------------------------------
    #
    # Set the bind address to a specific IP (IPv4 or IPv6):
    #
    network.host: 192.168.137.40
    #
    # Set a custom port for HTTP:
    #
    #http.port: 9200
    #
    # For more information, consult the network module documentation.
    #
    # --------------------------------- Discovery ----------------------------------
    #
    # Pass an initial list of hosts to perform discovery when new node is started:
    # The default list of hosts is ["127.0.0.1", "[::1]"]
    #
    #discovery.zen.ping.unicast.hosts: ["host1", "host2"]
    discovery.zen.ping.unicast.hosts: ["192.168.137.30", "192.168.137.40", "192.168.137.45"]
    #
    # Prevent the "split brain" by configuring the majority of nodes (total number of master-eligible nodes / 2 + 1):
    #
    #discovery.zen.minimum_master_nodes: 
    #
    # For more information, consult the zen discovery module documentation.
    #
    # ---------------------------------- Gateway -----------------------------------
    #
    # Block initial recovery after a full cluster restart until N nodes are started:
    #
    #gateway.recover_after_nodes: 3
    #
    # For more information, consult the gateway module documentation.
    #
    # ---------------------------------- Various -----------------------------------
    #
    # Require explicit names when deleting indices:
    #
    #action.destructive_requires_name: true
    [root@linux-node4 ~]# 
    
    Start the service on all three machines:
    
    [root@linux-node3 ~]# systemctl start elasticsearch
    [root@linux-node3 ~]# ps -aux |grep elasticsearch
    elastic+   3140 23.7 45.9 1482312 459248 ?      Ssl  14:29   0:00 /bin/java -Xms1g -Xmx1g -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+AlwaysPreTouch -server -Xss1m -Djava.awt.headless=true -Dfile.encoding=UTF-8 -Djna.nosys=true -XX:-OmitStackTraceInFastThrow -Dio.netty.noUnsafe=true -Dio.netty.noKeySetOptimization=true -Dio.netty.recycler.maxCapacityPerThread=0 -Dlog4j.shutdownHookEnabled=false -Dlog4j2.disable.jmx=true -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/lib/elasticsearch -Des.path.home=/usr/share/elasticsearch -Des.path.conf=/etc/elasticsearch -cp /usr/share/elasticsearch/lib/* org.elasticsearch.bootstrap.Elasticsearch -p /var/run/elasticsearch/elasticsearch.pid --quiet
    root       3168  5.0  0.0 112680   716 pts/1    S+   14:29   0:00 grep --color=auto elasticsearch
    [root@linux-node3 ~]# 
    [root@linux-node3 ~]# netstat -lntnp
    Active Internet connections (only servers)
    Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name    
    tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      966/sshd            
    tcp        0      0 127.0.0.1:25            0.0.0.0:*               LISTEN      1095/master         
    tcp        0      0 192.168.137.30:27017    0.0.0.0:*               LISTEN      1006/mongod         
    tcp        0      0 127.0.0.1:27017         0.0.0.0:*               LISTEN      1006/mongod         
    tcp6       0      0 192.168.137.30:9200     :::*                    LISTEN      1422/java           
    tcp6       0      0 :::8080                 :::*                    LISTEN      1185/java           
    tcp6       0      0 :::80                   :::*                    LISTEN      961/httpd           
    tcp6       0      0 192.168.137.30:9300     :::*                    LISTEN      1422/java           
    tcp6       0      0 :::22                   :::*                    LISTEN      966/sshd            
    tcp6       0      0 ::1:25                  :::*                    LISTEN      1095/master         
    [root@linux-node3 ~]# 
    // Start the data nodes in turn; each listens on ports 9200 and 9300
    [root@linux-05 ~]# ss -ltnp |grep -E '9200|9300'
    LISTEN     0      128      ::ffff:192.168.137.45:9200                    :::*                   users:(("java",pid=2758,fd=118))
    LISTEN     0      128      ::ffff:192.168.137.45:9300                    :::*                   users:(("java",pid=2758,fd=108))
    [root@linux-05 ~]#
    [root@linux-node4 ~]# ss -ltnp |grep -E '9200|9300'
    LISTEN     0      128      ::ffff:192.168.137.40:9200                    :::*                   users:(("java",pid=3257,fd=119))
    LISTEN     0      128      ::ffff:192.168.137.40:9300                    :::*                   users:(("java",pid=3257,fd=110))
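    
    The -Xms1g -Xmx1g flags visible in the ps output above come from the JVM options file that the 6.x RPM installs; a sketch of the lines to adjust if the heap needs changing (keep both values equal, and at no more than about half of RAM, per the note in elasticsearch.yml):
    
    # /etc/elasticsearch/jvm.options
    -Xms1g
    -Xmx1g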
    
    1.4 Testing es
    
    Health check
    
    [root@linux-node3 ~]# curl '192.168.137.30:9200/_cluster/health?pretty'
    {
      "cluster_name" : "linux-node3.com",
      "status" : "green",   //健康状态
      "timed_out" : false,
      "number_of_nodes" : 3,     //3个节点
      "number_of_data_nodes" : 2,  //2个数据节点
      "active_primary_shards" : 0,
      "active_shards" : 0,
      "relocating_shards" : 0,
      "initializing_shards" : 0,
      "unassigned_shards" : 0,
      "delayed_unassigned_shards" : 0,
      "number_of_pending_tasks" : 0,
      "number_of_in_flight_fetch" : 0,
      "task_max_waiting_in_queue_millis" : 0,
      "active_shards_percent_as_number" : 100.0
    }
    [root@linux-node3 ~]#
    
    Detailed cluster state
    
    [root@linux-node3 ~]# curl '192.168.137.30:9200/_cluster/state?pretty' 
    {
      "cluster_name" : "linux-node3.com",
      "compressed_size_in_bytes" : 355,
      "version" : 5,
      "state_uuid" : "RBH5dvstTyqgHVVSdfNi_Q",
      "master_node" : "d1yLa9f9RfSPwUXPvm_lqQ",
      "blocks" : { },
      "nodes" : {
        "d1yLa9f9RfSPwUXPvm_lqQ" : {
          "name" : "linux-node3.com",
          "ephemeral_id" : "DGs6lBiaQvaJlmyasez-TA",
          "transport_address" : "192.168.137.30:9300",
          "attributes" : { }
        },
        "pyOddTkYRN6fRjWjb-ehBw" : {
          "name" : "linux-05.com",
          "ephemeral_id" : "X8oa-yozSxqVmb6Dp2fhAQ",
          "transport_address" : "192.168.137.45:9300",
          "attributes" : { }
        },
        "mf7rEM3oScqEOqNFniEJfA" : {
          "name" : "linux-node4.com",
          "ephemeral_id" : "eZ4jATDJRDyv3rmnup3zfg",
          "transport_address" : "192.168.137.40:9300",
          "attributes" : { }
        }
      },
      "metadata" : {
        "cluster_uuid" : "3_2FFY-XTPexeDEZ6MXR1Q",
        "templates" : { },
        "indices" : { },
        "index-graveyard" : {
          "tombstones" : [ ]
        }
      },
      "routing_table" : {
        "indices" : { }
      },
      "routing_nodes" : {
        "unassigned" : [ ],
        "nodes" : {
          "mf7rEM3oScqEOqNFniEJfA" : [ ],
          "pyOddTkYRN6fRjWjb-ehBw" : [ ]
        }
      },
      "restore" : {
        "snapshots" : [ ]
      },
      "snapshots" : {
        "snapshots" : [ ]
      },
      "snapshot_deletions" : {
        "snapshot_deletions" : [ ]
      }
    }
    [root@linux-node3 ~]#
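    
    For a more compact view of the same information, the _cat APIs are convenient (standard es endpoints; output varies with cluster state):
    
    [root@linux-node3 ~]# curl '192.168.137.30:9200/_cat/nodes?v'     # one row per node; the elected master is marked
    [root@linux-node3 ~]# curl '192.168.137.30:9200/_cat/health?v'    # one-line health summary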
    
    1.5 Installing Kibana
    
    Install kibana on the master node:
    [root@linux-node3 ~]#  yum install -y kibana // can be very slow
    Or grab the RPM directly instead:
    [root@linux-node ~]# wget https://artifacts.elastic.co/downloads/kibana/kibana-6.0.0-x86_64.rpm
    [root@linux-node3 ~]# rpm -ivh kibana-6.0.0-x86_64.rpm 
    Preparing...                          ################################# [100%]
    Updating / installing...
    1:kibana-6.0.0-1                   ################################# [100%]
    [root@linux-node3 ~]# 
    [root@linux-node3 ~]# grep -v "^#" /etc/kibana/kibana.yml 
    server.port: 5601  // listening port
    server.host: "192.168.137.30"
    
    elasticsearch.url: "http://192.168.137.30:9200" // es address
    
    logging.dest: /var/log/kibana.log
    
    [root@linux-node3 ~]#
    [root@linux-node3 log]# touch kibana.log
    [root@linux-node3 log]# chmod 777 kibana.log 
    [root@linux-node3 log]# systemctl restart kibana
    [root@linux-node3 log]# ps aux | grep kibana
    kibana     1626 39.5 11.6 1121852 116968 ?      Ssl  17:09   0:04 /usr/share/kibana/bin/../node/bin/node --no-warnings /usr/share/kibana/bin/../src/cli -c /etc/kibana/kibana.yml
    root       1638  0.0  0.0 112680   980 pts/0    R+   17:10   0:00 grep --color=auto kibana
    [root@linux-node3 log]# netstat -lntnp | grep nod
    tcp        0      0 192.168.137.30:5601     0.0.0.0:*               LISTEN      1626/node           
    [root@linux-node3 log]#
    Open http://192.168.137.30:5601 in a browser.
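    Readiness can also be probed from the shell; Kibana exposes a JSON status document (standard endpoint, exact payload varies by version):
    [root@linux-node3 ~]# curl -s http://192.168.137.30:5601/api/status | head -c 200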
    
    1.6 Installing logstash
    
    Install logstash on the data node:
    [root@linux-node4 ~]# yum install -y logstash
    Or use:
    wget https://artifacts.elastic.co/downloads/logstash/logstash-6.0.0.rpm
    rpm -ivh logstash-6.0.0.rpm 
    [root@linux-node4 ~]# rpm -ivh logstash-6.0.0.rpm 
    Preparing...                          ################################# [100%]
    Updating / installing...
    1:logstash-1:6.0.0-1               ################################# [100%]
    Using provided startup.options file: /etc/logstash/startup.options
    OpenJDK 64-Bit Server VM warning: If the number of processors is expected to increase from one, then you should configure the number of parallel GC threads appropriately using -XX:ParallelGCThreads=N
    Successfully created system startup script for Logstash
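    
    A quick way to confirm the install before wiring up any pipeline (standard logstash CLI flag):
    [root@linux-node4 ~]# /usr/share/logstash/bin/logstash --version    # should report logstash 6.0.0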
    
    1.7 Configuring logstash to parse rsyslog
    
    [root@linux-node4 ~]# cat /etc/logstash/conf.d/syslog.conf
    
    input {
      syslog {
        type => "system-syslog"
        port => 10514  
      }
    }
    output {
      stdout {
        codec => rubydebug
      }
    }
    
    [root@linux-node4 ~]# 
    Check the configuration file for errors:
    [root@linux-node4 ~]# cd /usr/share/logstash/bin
    [root@linux-node4 bin]# ./logstash --path.settings /etc/logstash/ -f /etc/logstash/conf.d/syslog.conf --config.test_and_exit
    OpenJDK 64-Bit Server VM warning: If the number of processors is expected to increase from one, then you should configure the number of parallel GC threads appropriately using -XX:ParallelGCThreads=N
    Sending Logstash's logs to /var/log/logstash which is now configured via log4j2.properties
    Configuration OK
    Next, point rsyslog at logstash, then run logstash in the foreground.
    vim /etc/rsyslog.conf // add the following line below the #### RULES section
    *.* @@192.168.137.40:10514
    [root@linux-node4 ~]# systemctl restart rsyslog
    [root@linux-node4 ~]# netstat -lnp |grep 10514
    tcp6       0      0 :::10514                :::*                    LISTEN      3708/java           
    udp        0      0 0.0.0.0:10514           0.0.0.0:*                           3708/java
    [root@linux-node4 bin]#  ./logstash --path.settings /etc/logstash/ -f /etc/logstash/conf.d/syslog.conf
    
    OpenJDK 64-Bit Server VM warning: If the number of processors is expected to increase from one, then you should configure the number of parallel GC threads appropriately using -XX:ParallelGCThreads=N
    Sending Logstash's logs to /var/log/logstash which is now configured via log4j2.properties
    {
              "severity" => 6,
               "program" => "rsyslogd",
               "message" => "[origin software="rsyslogd" swVersion="7.4.7" x-pid="3768" x-info="http://www.rsyslog.com"] start
    ",
                  "type" => "system-syslog",
              "priority" => 46,
             "logsource" => "linux-node4",
            "@timestamp" => 2017-12-14T10:07:03.000Z,
              "@version" => "1",
                  "host" => "192.168.137.40",
              "facility" => 5,
        "severity_label" => "Informational",
             "timestamp" => "Dec 14 18:07:03",
        "facility_label" => "syslogd"
    }
    {
              "severity" => 6,
               "program" => "systemd",
               "message" => "Stopping System Logging Service...
    ",
                  "type" => "system-syslog",
              "priority" => 30,
             "logsource" => "linux-node4",
            "@timestamp" => 2017-12-14T10:07:03.000Z,
              "@version" => "1",
                  "host" => "192.168.137.40",
              "facility" => 3,
        "severity_label" => "Informational",
             "timestamp" => "Dec 14 18:07:03",
        "facility_label" => "system"
    }
    {
              "severity" => 6,
               "program" => "systemd",
               "message" => "Starting System Logging Service...
    ",
                  "type" => "system-syslog",
              "priority" => 30,
             "logsource" => "linux-node4",
            "@timestamp" => 2017-12-14T10:07:03.000Z,
              "@version" => "1",
                  "host" => "192.168.137.40",
              "facility" => 3,
        "severity_label" => "Informational",
             "timestamp" => "Dec 14 18:07:03",
        "facility_label" => "system"
    }
    {
              "severity" => 6,
               "program" => "systemd",
               "message" => "Started System Logging Service.
    ",
                  "type" => "system-syslog",
              "priority" => 30,
             "logsource" => "linux-node4",
            "@timestamp" => 2017-12-14T10:07:03.000Z,
              "@version" => "1",
                  "host" => "192.168.137.40",
              "facility" => 3,
        "severity_label" => "Informational",
             "timestamp" => "Dec 14 18:07:03",
        "facility_label" => "system"
    }
    {
              "severity" => 5,
                   "pid" => "654",
               "program" => "polkitd",
               "message" => "Unregistered Authentication Agent for unix-process:3761:2437779 (system bus name :1.47, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale zh_CN.UTF-8) (disconnected from bus)
    ",
                  "type" => "system-syslog",
              "priority" => 85,
             "logsource" => "linux-node4",
            "@timestamp" => 2017-12-14T10:07:03.000Z,
              "@version" => "1",
                  "host" => "192.168.137.40",
              "facility" => 10,
        "severity_label" => "Notice",
             "timestamp" => "Dec 14 18:07:03",
        "facility_label" => "security/authorization"
    }
    
    // Note: logstash is running in the foreground, so the terminal accepts no further commands; the log output is shown on screen
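    
    To trigger a fresh entry instead of waiting for the system to log something on its own, the stock logger utility works from a second terminal; rsyslog forwards the message to port 10514 and it appears in the rubydebug output above:
    [root@linux-node4 ~]# logger "hello from the elk test"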
    
    1.8 Viewing logs in kibana
    
    Configure log shipping on the data node, switching the output from stdout to elasticsearch, then start logstash as a service:
    [root@linux-node4 ~]# cat /etc/logstash/conf.d/syslog.conf
    
    input {
      syslog {
        type => "system-syslog"
        port => 10514  
      }
    }
    output {
      elasticsearch {
        hosts => ["192.168.137.30:9200"]
        index => "system-syslog-%{+YYYY.MM}"
      }
    }
    
    [root@linux-node4 ~]# chown -R logstash /var/lib/logstash
    [root@linux-node4 ~]# systemctl start logstash
    [root@linux-node4 ~]#
    [root@linux-node4 ~]# netstat -lntp
    Active Internet connections (only servers)
    Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name    
    tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      963/sshd            
    tcp        0      0 127.0.0.1:25            0.0.0.0:*               LISTEN      1238/master         
    tcp6       0      0 192.168.137.40:9200     :::*                    LISTEN      13890/java          
    tcp6       0      0 :::10514                :::*                    LISTEN      14164/java          
    tcp6       0      0 192.168.137.40:9300     :::*                    LISTEN      13890/java          
    tcp6       0      0 :::22                   :::*                    LISTEN      963/sshd            
    tcp6       0      0 ::1:25                  :::*                    LISTEN      1238/master         
    tcp6       0      0 127.0.0.1:9600          :::*                    LISTEN      14164/java 
    // logstash binds 127.0.0.1:9600 by default; change it:
    [root@linux-node4 ~]# grep -v "^#" /etc/logstash/logstash.yml 
    path.data: /var/lib/logstash
    path.config: /etc/logstash/conf.d/*.conf
    
    
    http.host: "192.168.137.40"
    path.logs: /var/log/logstash
    
    [root@linux-node4 ~]#
    [root@linux-node4 ~]# netstat -lntp
    Active Internet connections (only servers)
    Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name    
    tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      965/sshd            
    tcp        0      0 127.0.0.1:25            0.0.0.0:*               LISTEN      1224/master         
    tcp6       0      0 192.168.137.40:9200     :::*                    LISTEN      2215/java           
    tcp6       0      0 :::10514                :::*                    LISTEN      5450/java           
    tcp6       0      0 192.168.137.40:9300     :::*                    LISTEN      2215/java           
    tcp6       0      0 :::22                   :::*                    LISTEN      965/sshd            
    tcp6       0      0 ::1:25                  :::*                    LISTEN      1224/master         
    tcp6       0      0 192.168.137.40:9600     :::*                    LISTEN      5450/java
    Check the index information on the master node:
    [root@linux-node3 ~]# curl '192.168.137.30:9200/_cat/indices?v' // lists the indices
    health status index                 uuid                   pri rep docs.count docs.deleted store.size pri.store.size
    green  open   .kibana               Z7JVUVlLSRSu5xqySplQ5w   1   1          1            0      6.9kb          3.4kb
    green  open   system-syslog-2017.12 c9ZmYijTTYSMLMIESb3N4Q   5   1          1            0     24.5kb         12.2kb
    [root@linux-node3 ~]#
    
    [root@linux-node3 ~]# curl -XGET '192.168.137.30:9200/indexname?pretty' 
    {  // fetches details for a named index; indexname does not exist here, hence the 404 below
      "error" : {
        "root_cause" : [
          {
            "type" : "index_not_found_exception",
            "reason" : "no such index",
            "resource.type" : "index_or_alias",
            "resource.id" : "indexname",
            "index_uuid" : "_na_",
            "index" : "indexname"
          }
        ],
        "type" : "index_not_found_exception",
        "reason" : "no such index",
        "resource.type" : "index_or_alias",
        "resource.id" : "indexname",
        "index_uuid" : "_na_",
        "index" : "indexname"
      },
      "status" : 404
    }
    
    [root@linux-node3 ~]#
    curl -XDELETE 'localhost:9200/logstash-xxx-*' deletes the matching indices
    Browse to http://192.168.137.30:5601 and configure the index in kibana:
    In the left-hand menu click "Management" -> "Index Patterns" -> "Create Index Pattern"
    The index pattern must be based on an index name returned by the curl query above, otherwise the button below it cannot be clicked.
    Enter: system-syslog-2017.12, or system-syslog-*
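    
    The documents kibana will display can also be pulled straight from es as a spot check (index name taken from the _cat/indices listing above):
    [root@linux-node3 ~]# curl '192.168.137.30:9200/system-syslog-2017.12/_search?pretty&size=1'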
    
    1.9 Collecting nginx logs
    
    [root@linux-node4 ~]# cat /etc/logstash/conf.d/nginx.conf
    
    input {
      file {
        path => "/tmp/elk_access.log"
        start_position => "beginning"
        type => "nginx"
      }
    }
    filter {
        grok {
            match => { "message" => "%{IPORHOST:http_host} %{IPORHOST:clientip} - %{USERNAME:remote_user} \[%{HTTPDATE:timestamp}\] \"(?:%{WORD:http_verb} %{NOTSPACE:http_request}(?: HTTP/%{NUMBER:http_version})?|%{DATA:raw_http_request})\" %{NUMBER:response} (?:%{NUMBER:bytes_read}|-) %{QS:referrer} %{QS:agent} %{QS:xforwardedfor} %{NUMBER:request_time:float}"}
        }
        geoip {
            source => "clientip"
        }
    }
    output {
        stdout { codec => rubydebug }
        elasticsearch {
            hosts => ["192.168.137.40:9200"]
            index => "nginx-test-%{+YYYY.MM.dd}"
        }
    }
    
    [root@linux-node4 ~]# cd /usr/share/logstash/bin
    [root@linux-node4 bin]# ./logstash --path.settings /etc/logstash/ -f /etc/logstash/conf.d/nginx.conf --config.test_and_exit
    OpenJDK 64-Bit Server VM warning: If the number of processors is expected to increase from one, then you should configure the number of parallel GC threads appropriately using -XX:ParallelGCThreads=N
    Sending Logstash's logs to /var/log/logstash which is now configured via log4j2.properties
    Configuration OK 
    // install nginx if it is not already present
    [root@linux-node4 ~]# yum -y install nginx
    [root@linux-node4 ~]# cat /etc/nginx/conf.d/elk.conf
    
    server {
                listen 80;
                server_name elk.linux.com;
    
                location / {
                    proxy_pass      http://192.168.137.30:5601;
                    proxy_set_header Host   $host;
                    proxy_set_header X-Real-IP      $remote_addr;
                    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                }
                access_log  /tmp/elk_access.log main2;
            }
    
    [root@linux-node4 ~]#
    Configure the log format:
    vim /etc/nginx/nginx.conf // add the following inside the http block
    
    log_format main2 '$http_host $remote_addr - $remote_user [$time_local] "$request" '
                          '$status $body_bytes_sent "$http_referer" '
                          '"$http_user_agent" "$upstream_addr" $request_time';
    
    [root@linux-node4 ~]# systemctl start nginx
    [root@linux-node4 ~]# ps -ef | grep nginx
    root       2916      1  0 14:48 ?        00:00:00 nginx: master process /usr/sbin/nginx
    nginx      2917   2916  0 14:48 ?        00:00:00 nginx: worker process
    root       2919   2732  0 14:48 pts/4    00:00:00 grep --color=auto nginx
    [root@linux-node4 ~]# netstat -lntp
    Active Internet connections (only servers)
    Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name    
    tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN      2916/nginx: master  
    tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      972/sshd            
    tcp        0      0 127.0.0.1:25            0.0.0.0:*               LISTEN      1247/master         
    tcp6       0      0 :::80                   :::*                    LISTEN      2916/nginx: master  
    tcp6       0      0 192.168.137.40:9200     :::*                    LISTEN      2214/java           
    tcp6       0      0 :::10514                :::*                    LISTEN      2304/java           
    tcp6       0      0 192.168.137.40:9300     :::*                    LISTEN      2214/java           
    tcp6       0      0 :::22                   :::*                    LISTEN      972/sshd            
    tcp6       0      0 ::1:25                  :::*                    LISTEN      1247/master         
    tcp6       0      0 192.168.137.40:9600     :::*                    LISTEN      2304/java           
    [root@linux-node4 ~]#
    On your client machine, add a hosts entry: 192.168.137.40 elk.linux.com
    Browse to the site and check that access-log entries are produced
    [root@linux-node4 ~]# systemctl restart logstash  
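    
    Traffic can also be generated without a browser; the Host header stands in for the hosts entry (elk.linux.com as configured in elk.conf above):
    [root@linux-node4 ~]# for i in $(seq 1 20); do curl -s -o /dev/null -H "Host: elk.linux.com" http://192.168.137.40/; done
    [root@linux-node4 ~]# tail -n 3 /tmp/elk_access.log    # confirm entries are landing in the access log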
    
    Check the resulting indices on the master node:
    [root@linux-node3 ~]# curl '192.168.137.30:9200/_cat/indices?v' 
    health status index                 uuid                   pri rep docs.count docs.deleted store.size pri.store.size
    green  open   .kibana               Z7JVUVlLSRSu5xqySplQ5w   1   1          2            0     20.4kb         10.2kb
    green  open   system-syslog-2017.12 c9ZmYijTTYSMLMIESb3N4Q   5   1        127            0    723.8kb        317.6kb
    green  open   nginx-test-2017.12.18 w3j3J-wXT6eXzaVf6ycmBg   5   1         20            0     42.7kb           466b
    [root@linux-node3 ~]#
    // check that an index whose name starts with nginx-test has been created
    Only once it exists can the index be configured in kibana:
    In the left-hand menu click "Management" -> "Index Patterns" -> "Create Index Pattern"
    Enter nginx-test-* as the index pattern,
    then click Discover in the left-hand menu
    
    2.0 Collecting logs with beats
    
    Official site: https://www.elastic.co/cn/products/beats
    Advantages: extensible, with support for custom builds
    Data node: linux-05.com
    [root@linux-05 ~]# wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-6.0.0-x86_64.rpm
    [root@linux-05 ~]# rpm -ivh filebeat-6.0.0-x86_64.rpm 
    Preparing...                          ################################# [100%]
    Updating / installing...
    1:filebeat-6.0.0-1                 ################################# [100%]
    [root@linux-05 ~]#
    Edit the configuration file:
    [root@linux-05 ~]# grep  -v "^#" /etc/filebeat/filebeat.yml  | grep -v "#" |grep -v "^$"
    
    filebeat.prospectors:
    - type: log
      paths:
        - /var/log/messages
    output.console:
      enabled: true
    
    /usr/share/filebeat/bin/filebeat -c /etc/filebeat/filebeat.yml // prints the matching log lines to the screen
    
    [root@linux-05 ~]# /usr/share/filebeat/bin/filebeat -c /etc/filebeat/filebeat.yml 
    ^C[root@linux-05 ~]# /usr/share/filebeat/bin/filebeat -c /etc/filebeat/filebeat.yml 
    {"@timestamp":"2017-12-18T07:32:01.785Z","@metadata":{"beat":"filebeat","type":"doc","version":"6.0.0"},"prospector":{"type":"log"},"beat":{"name":"linux-05.com","hostname":"linux-05.com","version":"6.0.0"},"message":"Dec 18 12:30:01 linux-05 systemd: Started Session 6 of user root.","source":"/var/log/messages","offset":66}
    {"@timestamp":"2017-12-18T07:32:01.785Z","@metadata":{"beat":"filebeat","type":"doc","version":"6.0.0"},"offset":133,"message":"Dec 18 12:30:01 linux-05 systemd: Starting Session 6 of user root.","prospector":{"type":"log"},"beat":{"version":"6.0.0","name":"linux-05.com","hostname":"linux-05.com"},"source":"/var/log/messages"}...........
    
    Now edit the configuration file again:
    vim /etc/filebeat/filebeat.yml // add or change the following
    
    filebeat.prospectors:
    - type: log 
      paths:
        - /var/log/messages
    output.elasticsearch:
      hosts: ["192.168.137.30:9200"]
    
    [root@linux-05 ~]# systemctl start  filebeat
    [root@linux-05 ~]# netstat -lntp
    Active Internet connections (only servers)
    Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name    
    tcp        0      0 0.0.0.0:111             0.0.0.0:*               LISTEN      1/systemd           
    tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      933/sshd            
    tcp        0      0 192.168.137.45:27017    0.0.0.0:*               LISTEN      1061/mongod         
    tcp        0      0 127.0.0.1:27017         0.0.0.0:*               LISTEN      1061/mongod         
    tcp6       0      0 :::3306                 :::*                    LISTEN      1454/mysqld         
    tcp6       0      0 :::111                  :::*                    LISTEN      1/systemd           
    tcp6       0      0 192.168.137.45:9200     :::*                    LISTEN      888/java            
    tcp6       0      0 192.168.137.45:9300     :::*                    LISTEN      888/java            
    tcp6       0      0 :::22                   :::*                    LISTEN      933/sshd            
    [root@linux-05 ~]# ps aux |grep filebeat
    root       5123  0.0  1.2 277436 12324 ?        Ssl  15:45   0:00 /usr/share/filebeat/bin/filebeat -c /etc/filebeat/filebeat.yml -path.home /usr/share/filebeat -path.config /etc/filebeat -path.data /var/lib/filebeat -path.logs /var/log/filebea
    root       5140  0.0  0.0 112652   964 pts/3    R+   15:46   0:00 grep --color=auto filebeat
    [root@linux-05 ~]#
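    
    Filebeat 6.x ships to an index named filebeat-<version>-<date> by default; whether documents are arriving can be confirmed from the master node (index name pattern assumed from the defaults):
    [root@linux-node3 ~]# curl '192.168.137.30:9200/_cat/indices?v' | grep filebeat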
    
    
    Further reading
    x-pack pricing (paid vs. free features): http://www.jianshu.com/p/a49d93212eca
    https://www.elastic.co/subscriptions
    The evolution of the Elastic stack: http://70data.net/1505.html
    LinkedIn's real-time log analysis system built on kafka and elasticsearch: http://t.cn/RYffDoE
    Using redis: http://blog.lishiming.net/?p=463
    Building a large-scale log analysis platform with ELK+Filebeat+Kafka+ZooKeeper: https://www.cnblogs.com/delgyd/p/elk.html
    http://www.jianshu.com/p/d65aed756587
    

      
