Introduction:
An ELK-Stack log collection system. This is the last full-length write-up of these notes; future ELK notes will be fragmentary and topic-specific.
Environment:
ELK-Stack:192.168.1.25 ( Redis、LogStash、Kibana )
ES-1: 192.168.1.26 ( ElasticSearch )
ES-2: 192.168.1.27 ( ElasticSearch )
Node: *.*.*.* ( LogStash )
Workflow:
1. Node: LogStash collects logs and ships them to --> ELK-Stack: Redis
2. ELK-Stack: LogStash pulls data out of the Redis message queue and sends it to --> ElasticSearch
3. ELK-Stack: Kibana reads data from ElasticSearch and displays it
# Next we will build and configure the environment following the ELK workflow above.
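The three-step flow above (shipper -> broker -> indexer) can be sketched with an in-memory queue standing in for Redis. This is only a simulation for illustration; the function names and the sample log line are made up, not part of any ELK API:

```python
from collections import deque

# A deque stands in for the Redis list "logstash-list" (the broker).
broker = deque()

def shipper(line):
    """Step 1: the node-side LogStash pushes each log line onto the queue (like RPUSH)."""
    broker.append({"type": "nginx_access", "message": line})

def indexer(store):
    """Step 2: the server-side LogStash pops events off the queue (like BLPOP)
    and writes them into storage."""
    while broker:
        event = broker.popleft()
        store.append(event)        # stands in for indexing into ElasticSearch

storage = []                       # stands in for ElasticSearch
shipper('1.2.3.4 - - [18/Sep/2016:14:06:58 +0800] "POST / HTTP/1.0" 200 8317')
indexer(storage)

# Step 3: Kibana queries the store and renders the events.
print(storage[0]["type"])          # nginx_access
```

The key property this models is decoupling: the shipper and indexer never talk to each other directly, so either side can go down briefly without losing data, as long as the broker holds the backlog.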
I. Node: the LogStash client collects logs
shell > yum -y install java
shell > cd /usr/local/src
shell > wget https://download.elastic.co/logstash/logstash/logstash-2.4.0.tar.gz
shell > tar zxf logstash-2.4.0.tar.gz
shell > mv logstash-2.4.0 ../; cd ~
shell > vim /usr/local/logstash-2.4.0/logstash.conf

# Logstash
input {
    file {
        type => "nginx_access"
        path => ["/tmp/access.log"]
        start_position => "beginning"
    }
}

output {
    stdout {
        codec => rubydebug
    }
}
# The input plugin defines where logs are read from; the output plugin defines where they are sent.
shell > /usr/local/logstash-2.4.0/bin/logstash -f /usr/local/logstash-2.4.0/logstash.conf -t    # verify the config file
Configuration OK
shell > /usr/local/logstash-2.4.0/bin/logstash -f /usr/local/logstash-2.4.0/logstash.conf &     # start
# Testing: import a section of the log file and check whether anything is printed.
{
       "message" => "125.118.134.85 - - [18/Sep/2016:14:06:58 +0800] \"POST / HTTP/1.0\" 200 8317 \"-\" \"-\" \"ptv.giv.tv\" \"-\" cid=3&tvtoken=78%3A0a%3Ac7%3A01%3A7b%3Ae1&year=0&tag=0&version=1.0&area=0&method=bftv.video.list&extend=isVip&sn=600000MW600T165J1764_5188&page=4&pageSize=20&apptoken=282340ce12c5e10fa84171660a2054f8&time=1474178818312&channel=0&hot=1&plateForm=bftv_android \"125.118.134.85\"",
      "@version" => "1",
    "@timestamp" => "2016-09-18T06:07:30.425Z",
          "path" => "/tmp/access.log",
          "host" => "king",
          "type" => "nginx_access"
}
{
       "message" => "59.172.199.235 - - [18/Sep/2016:14:06:58 +0800] \"POST / HTTP/1.0\" 200 822 \"-\" \"-\" \"ptv.giv.tv\" \"-\" tvtoken=78%3A0a%3Ac7%3A04%3Aa9%3Ac0&version=1.0&method=bftv.tv.topicpiclist&sn=600000MW301D167B3808_214E&apptoken=282340ce12c5e10fa84171660a2054f8&time=1474178817053&plateForm=bftv_android \"59.172.199.235\"",
      "@version" => "1",
    "@timestamp" => "2016-09-18T06:07:30.425Z",
          "path" => "/tmp/access.log",
          "host" => "king",
          "type" => "nginx_access"
}
# Data appears on screen immediately, so the client-side LogStash configuration is correct.
II. ELK-Stack: the client writes logs into Redis
shell > yum -y install gcc
shell > cd /usr/local/src
shell > wget http://download.redis.io/releases/redis-3.2.3.tar.gz
shell > tar zxf redis-3.2.3.tar.gz
shell > cd redis-3.2.3; make; make install
shell > mkdir /usr/local/redis-3.2.3; cp redis.conf ../../redis-3.2.3; cd ~
shell > sed -i 's/bind 127.0.0.1/bind 127.0.0.1 192.168.1.25/' /usr/local/redis-3.2.3/redis.conf
shell > sed -i '/daemonize/s/no/yes/' /usr/local/redis-3.2.3/redis.conf
shell > sed -i 's#dir ./#dir /data/redis_data#' /usr/local/redis-3.2.3/redis.conf
shell > mkdir -p /data/redis_data    # create the data dir set above, otherwise RDB saves will fail
shell > redis-server /usr/local/redis-3.2.3/redis.conf
shell > redis-cli ping
PONG
shell > redis-cli -h 192.168.1.25 ping
PONG
# Testing: modify the client-side LogStash config so that the logs it collects are written to this Redis instance.
shell > vim /usr/local/logstash-2.4.0/logstash.conf

# Logstash
input {
    file {
        type => "nginx_access"
        path => ["/tmp/access.log"]
        start_position => "beginning"
    }
}

output {
#    stdout {
#        codec => rubydebug
#    }
    redis {
        host => "192.168.1.25"
        port => 6379
        data_type => "list"
        key => "logstash-list"
    }
}
# Back on ELK-Stack, check Redis for data; the result is empty... Not a problem: the log file the client-side LogStash read last time went to the screen, not to Redis.
shell > redis-cli
127.0.0.1:6379> keys *
(empty list or set)
# Import another section of log to test.
shell > redis-cli
127.0.0.1:6379> keys *
1) "logstash-list"
# Checking again, Redis now holds data, which proves that step 1 of the workflow works.
127.0.0.1:6379> lrange logstash-list 0 0
1) "{"message":"58.39.181.33 - - [18/Sep/2016:14:06:45 +0800] \"POST / HTTP/1.0\" 200 352 \"-\" \"-\" \"ptv.giv.tv\" \"-\" tvtoken=78%3A0a%3Ac7%3A01%3A49%3A97&requestplatform=4&version=1.0&method=core.pic.list&sn=600000MW300D165J0484_8B59&apptoken=282340ce12c5e10fa84171660a2054f8&type=10&time=1474178780284&plateForm=bftv_android \"58.39.181.33\"","@version":"1","@timestamp":"2016-09-18T06:37:46.961Z","path":"/tmp/access.log","host":"king","type":"nginx_access"}"
# The first value in the key logstash-list is shown above; it is exactly the log entry.
# View every value in the key with: lrange logstash-list 0 -1. There are 110 entries from first to last, and wc -l < /tmp/access.log also reports 110 lines, so no data was lost.
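Each element of the Redis list is one event serialized as JSON, which is why `lrange` output looks like the rubydebug output from earlier. A sketch of decoding one element (the string below is a shortened stand-in for a real entry, not copied verbatim):

```python
import json

# A shortened stand-in for one element of the Redis list "logstash-list".
raw = ('{"message":"58.39.181.33 - - [18/Sep/2016:14:06:45 +0800] '
       '\\"POST / HTTP/1.0\\" 200 352","@version":"1",'
       '"@timestamp":"2016-09-18T06:37:46.961Z",'
       '"path":"/tmp/access.log","host":"king","type":"nginx_access"}')

event = json.loads(raw)
print(event["type"])        # nginx_access
print(event["@timestamp"])  # 2016-09-18T06:37:46.961Z
```

The `@version`, `@timestamp`, `path`, `host`, and `type` fields are added by LogStash itself; `message` is the raw log line.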
III. ELK-Stack: the server-side LogStash reads data from Redis and stores it in ElasticSearch
shell > vim /usr/local/logstash-2.4.0/logstash.conf

# Logstash.conf
input {
    redis {
        host => "127.0.0.1"
        port => 6379
        data_type => "list"
        key => "logstash-list"
    }
}

output {
    stdout {
        codec => rubydebug
    }
}

shell > /usr/local/logstash-2.4.0/bin/logstash -f /usr/local/logstash-2.4.0/logstash.conf -t
Configuration OK
shell > /usr/local/logstash-2.4.0/bin/logstash -f /usr/local/logstash-2.4.0/logstash.conf    # run in the foreground for debugging
# Append more log lines and check for output.
{
       "message" => "125.118.134.85 - - [18/Sep/2016:14:06:58 +0800] \"POST / HTTP/1.0\" 200 8317 \"-\" \"-\" \"ptv.giv.tv\" \"-\" cid=3&tvtoken=78%3A0a%3Ac7%3A01%3A7b%3Ae1&year=0&tag=0&version=1.0&area=ideo.list&extend=isVip&sn=600000MW600T165J1764_5188&page=4&pageSize=20&apptoken=282340ce12c5e10fa84171660a2054f8&time=1474178818312&channel=0&hot=1&plateForm=bftv_android \"125.",
      "@version" => "1",
    "@timestamp" => "2016-09-18T08:36:11.621Z",
          "path" => "/tmp/access.log",
          "host" => "king",
          "type" => "nginx_access"
}
{
       "message" => "59.172.199.235 - - [18/Sep/2016:14:06:58 +0800] \"POST / HTTP/1.0\" 200 822 \"-\" \"-\" \"ptv.gv.tv\" \"-\" tvtoken=78%3A0a%3Ac7%3A04%3Aa9%3Ac0&version=1.0&method=bftv.tv.topicpicli01D167B3808_214E&apptoken=282340ce12c5e10fa84171660a2054f8&time=1474178817053&plateForm=bftv_android \"59.172.199.235\"",
      "@version" => "1",
    "@timestamp" => "2016-09-18T08:36:11.622Z",
          "path" => "/tmp/access.log",
          "host" => "king",
          "type" => "nginx_access"
}
# This shows that ELK-Stack: LogStash can read data from ELK-Stack: Redis normally. Next, write that data into ElasticSearch.
# Note: this is only an example. In practice the log entries should also be split into fields (with a filter) so that Kibana can analyze the logs properly.
# http://kibana.logstash.es/content/
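As a preview of what such a filter does, here is a sketch that splits the leading, common part of one access-log line into named fields. This uses a Python regex rather than LogStash's grok syntax, and the field names are my own choice (loosely modeled on grok's `COMBINEDAPACHELOG` names), purely for illustration:

```python
import re

# Simplified pattern for the leading part of the nginx access log line.
LOG_RE = re.compile(
    r'(?P<clientip>\S+) \S+ \S+ \[(?P<timestamp>[^\]]+)\] '
    r'"(?P<verb>\S+) (?P<request>\S+) (?P<httpversion>[^"]+)" '
    r'(?P<response>\d{3}) (?P<bytes>\d+)'
)

line = ('125.118.134.85 - - [18/Sep/2016:14:06:58 +0800] '
        '"POST / HTTP/1.0" 200 8317 "-" "-"')

fields = LOG_RE.match(line).groupdict()
print(fields["clientip"])   # 125.118.134.85
print(fields["response"])   # 200
```

Once the message is split like this, Kibana can aggregate on individual fields (top client IPs, status-code distribution, bytes over time) instead of treating each line as an opaque string.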
shell > yum -y install java
shell > cd /usr/local/src
shell > wget https://download.elastic.co/elasticsearch/release/org/elasticsearch/distribution/tar/elasticsearch/2.4.0/elasticsearch-2.4.0.tar.gz
shell > tar zxf elasticsearch-2.4.0.tar.gz
shell > mv elasticsearch-2.4.0 ../; cd ~
shell > useradd elast
shell > chown -R elast.elast /usr/local/elasticsearch-2.4.0
shell > vim /usr/local/elasticsearch-2.4.0/config/elasticsearch.yml    # edit the config file

cluster.name: my-elk             # cluster name
node.name: node-1                # node name
path.data: /data/esearch/data    # data path
path.logs: /data/esearch/logs    # log path
network.host: 192.168.1.26       # listen address
http.port: 9200                  # listen port
discovery.zen.ping.unicast.hosts: ["192.168.1.26", "192.168.1.27"]    # specifies which hosts can become master; without it a node cannot join the cluster

shell > mkdir -p /data/esearch/{data,logs}
shell > chown -R elast.elast /data/esearch
shell > su - elast -c "/usr/local/elasticsearch-2.4.0/bin/elasticsearch -d"
shell > echo 'su - elast -c "/usr/local/elasticsearch-2.4.0/bin/elasticsearch -d"' >> /etc/rc.local
# A quick test of ElasticSearch
shell > curl -X GET 192.168.1.26:9200/_cat/nodes?v
host         ip           heap.percent ram.percent load node.role master name
192.168.1.26 192.168.1.26            7          43 0.10 d         *      node-1
# The cluster currently has only one node, named node-1, and it is the master.
# Now install the next node following the same steps.
# Note: the cluster name must be identical on every node, node names must be unique, and the firewall must allow TCP 9200/9300 (9300 is used for inter-node communication).
# node-2 installation omitted
shell > curl -X GET 192.168.1.27:9200/_cat/nodes?v
host         ip           heap.percent ram.percent load node.role master name
192.168.1.27 192.168.1.27            6          42 0.23 d         m      node-2
192.168.1.26 192.168.1.26            7          43 0.04 d         *      node-1
# With node-2 up, querying the node list again shows two nodes in the cluster: node-1 is the master (*), and node-2's "m" means it is master-eligible.
shell > curl -X GET 192.168.1.26:9200/_cat/nodes?v
host         ip           heap.percent ram.percent load node.role master name
192.168.1.26 192.168.1.26            7          43 0.32 d         *      node-1
192.168.1.27 192.168.1.27            6          43 0.06 d         m      node-2
# Querying from node-1 again gives a consistent view.
# At this point the ElasticSearch cluster is installed, with node-1 as the current master. Next, configure LogStash to store data in the ElasticSearch cluster.
shell > vim /usr/local/logstash-2.4.0/logstash.conf

output {
#    stdout {
#        codec => rubydebug
#    }
    elasticsearch {
        hosts => ["192.168.1.26:9200"]
        index => "logstash-%{type}-%{+YYYY.MM.dd}"
        document_type => "%{type}"
        workers => 1
        flush_size => 20000
        idle_flush_time => 10
        template_overwrite => true
    }
}
# Remember to restart the service.
# Append more log lines, then check the ElasticSearch index information.
shell > curl -X GET 192.168.1.26:9200/_cat/indices green open logstash-nginx_access-2016.09.18 5 1 79 0 65.3kb 39.8kb
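The index name above comes from the sprintf-style pattern `logstash-%{type}-%{+YYYY.MM.dd}`: the event's `type` field plus the day of its `@timestamp`, so a new index is created per type per day. A sketch of how the pattern expands (`index_name` is my own illustrative helper, not a LogStash API):

```python
from datetime import datetime, timezone

def index_name(event_type, ts):
    """Expand the pattern logstash-%{type}-%{+YYYY.MM.dd} for one event."""
    return "logstash-{}-{}".format(event_type, ts.strftime("%Y.%m.%d"))

# The event's @timestamp, matching the test run in this note.
ts = datetime(2016, 9, 18, 6, 37, 46, tzinfo=timezone.utc)
print(index_name("nginx_access", ts))  # logstash-nginx_access-2016.09.18
```

Daily indices keep each index a manageable size and make retention trivial: expiring old logs is just deleting whole indices by date.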
# Check how the index's shards are distributed.
shell > curl -X GET 192.168.1.26:9200/_cat/shards?v
index                            shard prirep state   docs  store ip           node
logstash-nginx_access-2016.09.18 1     r      STARTED   13 12.8kb 192.168.1.26 node-1
logstash-nginx_access-2016.09.18 1     p      STARTED   13 12.8kb 192.168.1.27 node-2
logstash-nginx_access-2016.09.18 2     p      STARTED   19 14.3kb 192.168.1.26 node-1
logstash-nginx_access-2016.09.18 2     r      STARTED   19 14.3kb 192.168.1.27 node-2
logstash-nginx_access-2016.09.18 3     r      STARTED   16 13.9kb 192.168.1.26 node-1
logstash-nginx_access-2016.09.18 3     p      STARTED   16 13.9kb 192.168.1.27 node-2
logstash-nginx_access-2016.09.18 4     p      STARTED   15 12.5kb 192.168.1.26 node-1
logstash-nginx_access-2016.09.18 4     r      STARTED   15 12.5kb 192.168.1.27 node-2
logstash-nginx_access-2016.09.18 0     p      STARTED   16 12.6kb 192.168.1.26 node-1
logstash-nginx_access-2016.09.18 0     r      STARTED   16 12.6kb 192.168.1.27 node-2
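The listing matches the index defaults of this ElasticSearch version (5 primary shards, 1 replica): every shard number appears twice, once as p on one node and once as r on the other, and the primary doc counts sum to the 79 docs shown by _cat/indices. A quick sketch of the arithmetic:

```python
# Doc count per primary shard, taken from the _cat/shards output above.
primaries = {0: 16, 1: 13, 2: 19, 3: 16, 4: 15}
replicas_per_primary = 1

total_docs = sum(primaries.values())
total_shard_copies = len(primaries) * (1 + replicas_per_primary)

print(total_docs)          # 79
print(total_shard_copies)  # 10
```

With one replica on a two-node cluster, either node can fail and every shard still has a surviving copy, which is also why the index health reports green.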
# At this point, ELK: LogStash reading from Redis and writing into ElasticSearch is complete.
IV. ELK: Kibana reads ElasticSearch data and displays it
shell > cd /usr/local/src
shell > wget https://download.elastic.co/kibana/kibana/kibana-4.6.1-linux-x86_64.tar.gz
shell > tar zxf kibana-4.6.1-linux-x86_64.tar.gz
shell > mv kibana-4.6.1-linux-x86_64 ../; cd
shell > vim /usr/local/kibana-4.6.1-linux-x86_64/config/kibana.yml

elasticsearch.url: "http://192.168.1.26:9200"

shell > mkdir -p /data/logs    # make sure the log dir exists
shell > /usr/local/kibana-4.6.1-linux-x86_64/bin/kibana > /data/logs/kibana.log &
# Open TCP port 5601 in the firewall, then visit http://192.168.1.25:5601
# What remains is business-specific: capture and split the logs, and build charts.
V. Keep a copy of the configuration files
1. Client-side LogStash
# Logstash
input {
    file {
        type => "nginx_access"
        path => ["/tmp/access.log"]
        start_position => "beginning"
    }
}

output {
#    stdout {
#        codec => rubydebug
#    }
    redis {
        host => "192.168.1.25"
        port => 6379
        data_type => "list"
        key => "logstash-list"
    }
}
2. Server-side LogStash
# Logstash.conf
input {
    redis {
        host => "127.0.0.1"
        port => 6379
        data_type => "list"
        key => "logstash-list"
    }
}

output {
#    stdout {
#        codec => rubydebug
#    }
    elasticsearch {
        hosts => ["192.168.1.26:9200"]
        index => "logstash-%{type}-%{+YYYY.MM.dd}"
        document_type => "%{type}"
        workers => 1
        flush_size => 20000
        idle_flush_time => 10
        template_overwrite => true
    }
}
3. ElasticSearch
shell > grep -vP '^#|^$' /usr/local/elasticsearch-2.4.0/config/elasticsearch.yml
cluster.name: my-elk
node.name: node-1
path.data: /data/esearch/data
path.logs: /data/esearch/logs
network.host: 192.168.1.26
http.port: 9200
discovery.zen.ping.unicast.hosts: ["192.168.1.26","192.168.1.27"]
# End