Introduction:
ELK is an open-source log management platform made up of three main components, covering log collection, analysis, storage, and visualization.
ELK members: Elasticsearch, Logstash, Kibana (K4)
ELK platform features:
1. Flexible processing: Elasticsearch performs real-time full-text indexing, with no need to program ahead of time as with Storm
2. Simple configuration, easy to pick up: Elasticsearch exposes an all-JSON interface, and Logstash uses a Ruby DSL; both are among the most widely used configuration styles in the industry
3. Efficient retrieval: although every query is computed in real time, the solid design and implementation generally deliver second-level responses even for full-day data queries
4. Linear cluster scaling: both Elasticsearch and Logstash clusters scale out linearly
5. Simple, attractive front end: searches and aggregations take only a few mouse clicks and produce good-looking dashboards
Responsibilities of each ELK member:
1. Logstash is an open-source log collection and management tool that can receive, process, and forward logs of many types
2. Elasticsearch is a distributed, near-real-time search engine that supports time-based and full-text indexing; it can be thought of as a text database
3. Kibana presents the data stored in Elasticsearch according to your needs
Additional ELK components:
Redis: usually deployed as a NoSQL database, but here it serves as a message queue: once the client pushes events onto the queue, the server pops them off, so there is no need to worry about filling up the machine's memory
Nginx: usually a web server, web cache, reverse proxy, or load balancer; here it reverse-proxies requests to the port Kibana listens on (Kibana can serve requests directly, but for performance, security, and other reasons it is better to put a proxy in front)
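As a sketch of the Nginx role described above: a minimal reverse-proxy server block that forwards requests to Kibana's default port 5601. The server name `elk.example.com` is a placeholder; adjust the listen port and upstream address to your environment.

```nginx
# Minimal sketch: reverse-proxy HTTP requests to the local Kibana instance.
# elk.example.com is a hypothetical name; 5601 is Kibana's default port.
server {
    listen       80;
    server_name  elk.example.com;

    location / {
        proxy_pass http://127.0.0.1:5601;
        proxy_set_header Host       $host;
        proxy_set_header X-Real-IP  $remote_addr;
    }
}
```

Access control (for example `allow`/`deny` rules or HTTP basic auth) can be added to the same `location` block, which is one of the main reasons to front Kibana with Nginx at all.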
Download page: https://www.elastic.co/downloads
Elasticsearch : https://download.elastic.co/elasticsearch/elasticsearch/elasticsearch-1.6.0.tar.gz
Logstash : https://download.elastic.co/logstash/logstash/logstash-1.5.2.tar.gz
Kibana : https://download.elastic.co/kibana/kibana/kibana-4.1.1-linux-x64.tar.gz
## These are the source tarballs; the corresponding RPM packages can be downloaded from the same page
Environment:
192.168.214.20 server
192.168.214.30 client
## OS: CentOS 6.6 x86_64, minimal installation
I. Client-side setup (install and configure Logstash)
1. Install Logstash
shell > yum -y install java
## You can also install a JDK from a tarball, but this quick-and-simple way is recommended
shell > cd /usr/local/src
shell > wget https://download.elastic.co/logstash/logstash/logstash-1.5.2.tar.gz
shell > tar zxf logstash-1.5.2.tar.gz
shell > mv logstash-1.5.2 ../logstash
shell > /usr/local/logstash/bin/logstash -e 'input{stdin{}} output{stdout{}}'
Hello World
2015-07-14T11:36:28.287Z localhost.localdomain Hello World
## The -e flag takes the pipeline definition straight from the command line; the braces mean: read events from standard input and write them to standard output
## When you type Hello World, the line below is printed: timestamp, hostname, message
## Press Ctrl+C to stop
shell > /usr/local/logstash/bin/logstash -e 'input{stdin{}} output{stdout{codec=>rubydebug}}'
Hello World
## The difference from the previous run is the added codec parameter, which changes the output format; input, codec, filter, and output can all be defined in the configuration file
{
       "message" => "Hello World",
      "@version" => "1",
    "@timestamp" => "2015-07-14T11:55:55.235Z",
          "host" => "localhost.localdomain"
}
## This is the output
2. Configure Logstash
shell > vim /usr/local/logstash/logstash.conf

# logstash.conf

input {
    file {
        path => '/tmp/access.log'
        start_position => 'beginning'
    }
}

output {
    redis {
        host => '192.168.214.20'
        port => '6379'
        data_type => "list"
        key => "logstash-list"
    }
}
## logstash.conf must contain at least one input and one output; otherwise stdin and stdout are used by default
II. Server-side setup (Redis, Elasticsearch, Logstash, Kibana)
1. Install, configure, and start Redis
shell > cd /usr/local/src
shell > wget http://download.redis.io/releases/redis-3.0.2.tar.gz
shell > tar zxf redis-3.0.2.tar.gz
shell > cd redis-3.0.2 ; make ; make install
shell > mkdir /usr/local/redis
shell > cp /usr/local/src/redis-3.0.2/redis.conf /usr/local/redis/
shell > sed -i '/daemonize/s/no/yes/' /usr/local/redis/redis.conf
shell > sed -i 's#dir ./#dir /usr/local/redis#' /usr/local/redis/redis.conf
shell > redis-server /usr/local/redis/redis.conf
shell > redis-cli ping
PONG
## The PONG reply proves Redis started correctly (if you want to dig deeper into Redis, see the official documentation; http://blog.chinaunix.net/uid/30272825/cid-211045-list-1.html also covers it!)
shell > iptables -I INPUT -p tcp --dport 6379 -j ACCEPT
shell > service iptables save
## TCP port 6379 must be open, otherwise the client cannot write data in
2. Test 1
## Start Logstash on the client, then check on the server whether data has arrived in Redis
shell > /usr/local/logstash/bin/logstash -f /usr/local/logstash/logstash.conf &    ## on the client

shell > redis-cli
127.0.0.1:6379> keys *
1) "logstash-list"
127.0.0.1:6379> LRANGE logstash-list 0 -1
 1) "{"message":"12.12.12.12 error","@version":"1","@timestamp":"2015-07-14T17:34:02.779Z","host":"localhost.localdomain","path":"/tmp/access.log"}"
 2) "{"message":" [02/Mar/2015:00:42:20 +0800] \"POST /include/dialog/select_soft_post.php HTTP/1.1\" 404 233","@version":"1","@timestamp":"2015-07-14T17:37:04.366Z","host":"localhost.localdomain","path":"/tmp/access.log"}"
 3) "{"message":"149.129.145.215 - - [02/Mar/2015:01:16:56 +0800] \"GET /tmUnblock.cgi HTTP/1.1\" 400 226","@version":"1","@timestamp":"2015-07-14T17:37:04.375Z","host":"localhost.localdomain","path":"/tmp/access.log"}"
 4) "{"message":"210.63.99.212 - - [02/Mar/2015:02:49:24 +0800] \"HEAD / HTTP/1.0\" 403 -","@version":"1","@timestamp":"2015-07-14T17:37:04.380Z","host":"localhost.localdomain","path":"/tmp/access.log"}"
 5) "{"message":"222.186.128.50 - - [02/Mar/2015:03:07:36 +0800] \"GET http://www.baidu.com/ HTTP/1.1\" 403 202","@version":"1","@timestamp":"2015-07-14T17:37:04.381Z","host":"localhost.localdomain","path":"/tmp/access.log"}"
 6) "{"message":"222.186.128.55 - - [02/Mar/2015:06:53:21 +0800] \"GET http://www.baidu.com/ HTTP/1.1\" 403 202","@version":"1","@timestamp":"2015-07-14T17:37:04.381Z","host":"localhost.localdomain","path":"/tmp/access.log"}"
 7) "{"message":"222.186.128.53 - - [02/Mar/2015:07:10:43 +0800] \"GET http://www.baidu.com/ HTTP/1.1\" 403 202","@version":"1","@timestamp":"2015-07-14T17:37:04.382Z","host":"localhost.localdomain","path":"/tmp/access.log"}"
 8) "{"message":"120.132.77.252 - - [02/Mar/2015:10:54:32 +0800] \"GET http://www.ly.com/ HTTP/1.1\" 403 202","@version":"1","@timestamp":"2015-07-14T17:37:04.383Z","host":"localhost.localdomain","path":"/tmp/access.log"}"
 9) "{"message":"123.59.33.27 - - [02/Mar/2015:11:15:36 +0800] \"GET http://www.ly.com/ HTTP/1.1\" 403 202","@version":"1","@timestamp":"2015-07-14T17:37:04.386Z","host":"localhost.localdomain","path":"/tmp/access.log"}"
10) "{"message":"1.161.59.46 - - [02/Mar/2015:14:19:19 +0800] \"CONNECT mx2.mail2000.com.tw:25 HTTP/1.0\" 405 225","@version":"1","@timestamp":"2015-07-14T17:37:04.387Z","host":"localhost.localdomain","path":"/tmp/access.log"}"
11) "{"message":"59.108.122.184 - - [29/Apr/2015:14:33:19 +0800] \"GET http://www.example.com/ HTTP/1.1\" 403 202","@version":"1","@timestamp":"2015-07-14T17:37:04.387Z","host":"localhost.localdomain","path":"/tmp/access.log"}"
12) "{"message":"","@version":"1","@timestamp":"2015-07-14T17:37:04.388Z","host":"localhost.localdomain","path":"/tmp/access.log"}"
## The data clearly arrived, which confirms the leg from the client to the server's Redis is working
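Each element in the Redis list above is a JSON-encoded event. A small Python sketch (standard library only, using one of the events shown) illustrates how such an event decodes back into its fields:

```python
import json

# One event exactly as it appeared in the LRANGE output above
raw = ('{"message":"12.12.12.12 error","@version":"1",'
       '"@timestamp":"2015-07-14T17:34:02.779Z",'
       '"host":"localhost.localdomain","path":"/tmp/access.log"}')

event = json.loads(raw)

print(event["message"])      # the raw log line: 12.12.12.12 error
print(event["@timestamp"])   # when Logstash read the line
print(event["path"])         # which file it came from: /tmp/access.log
```

This is the same structure shown by the rubydebug codec earlier: the original log line in `message`, plus the metadata (`@timestamp`, `host`, `path`) that Logstash attaches to every event.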
3. Install, configure, and start Elasticsearch
shell > cd /usr/local/src
shell > wget https://download.elastic.co/elasticsearch/elasticsearch/elasticsearch-1.6.0.tar.gz
shell > tar zxf elasticsearch-1.6.0.tar.gz
shell > mv elasticsearch-1.6.0 /usr/local/elasticsearch
shell > vim /usr/local/elasticsearch/config/elasticsearch.yml

cluster.name: my_es
node.name: "Master"
## cluster.name is the cluster name; nodes on the same LAN that share this name automatically form a cluster
## node.name is the node name
## Neither has to be changed; the defaults work fine
shell > vim /usr/local/elasticsearch/bin/elasticsearch.in.sh

if [ "x$ES_MIN_MEM" = "x" ]; then
    ES_MIN_MEM=64m
fi
if [ "x$ES_MAX_MEM" = "x" ]; then
    ES_MAX_MEM=128m
fi
## Pay attention here: set the minimum and maximum memory Elasticsearch may use according to your actual hardware
shell > /usr/local/elasticsearch/bin/elasticsearch -d
## -d starts it in the background; to shut it down: curl -X POST 127.0.0.1:9200/_shutdown
## The default HTTP port is 9200; you can reach it from a browser, with curl, and so on
shell > curl -X GET 127.0.0.1:9200
{
  "status" : 200,
  "name" : "master",
  "cluster_name" : "my_es",
  "version" : {
    "number" : "1.6.0",
    "build_hash" : "cdd3ac4dde4f69524ec0a14de3828cb95bbb86d0",
    "build_timestamp" : "2015-06-09T13:36:34Z",
    "build_snapshot" : false,
    "lucene_version" : "4.10.4"
  },
  "tagline" : "You Know, for Search"
}
## Some status information: status code, node name, cluster name, version details, and so on
shell > curl -X GET 127.0.0.1:9200/_cat/nodes?v
host                  ip        heap.percent ram.percent load node.role master name
localhost.localdomain 127.0.0.1           25          92 2.07 d         *      master
## List node information
4. Install, configure, and start Logstash
## Install it the same way as on the client (see the section above)
shell > vim /usr/local/logstash/logstash.conf

# logstash.conf

input {
    redis {
        host => '127.0.0.1'
        port => '6379'
        data_type => 'list'
        key => 'logstash-list'
    }
}

output {
    elasticsearch {
        host => '127.0.0.1'
        port => '9200'
        protocol => 'http'
    }
}
## On the server, Logstash is configured to pull data from the local Redis and store it in the local Elasticsearch
shell > /usr/local/logstash/bin/logstash -f /usr/local/logstash/logstash.conf &
5. Test 2
shell > curl -X GET 127.0.0.1:9200/_cat/indices?v
health status index               pri rep docs.count docs.deleted store.size pri.store.size
yellow open   logstash-2015.07.14   5   1          1            0      4.1kb          4.1kb
## You can see an index already exists, named logstash-2015.07.14
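The name comes from the elasticsearch output's default index pattern, logstash-%{+YYYY.MM.dd}: one index per day, named from the event's UTC date. A small Python sketch of the naming scheme, for illustration only:

```python
from datetime import datetime, timedelta

def logstash_index_name(ts):
    """Return the daily index name produced by Logstash's default pattern."""
    return ts.strftime("logstash-%Y.%m.%d")

# The events above were written on 2015-07-14, hence the index seen in _cat/indices
print(logstash_index_name(datetime(2015, 7, 14)))   # logstash-2015.07.14

# Daily indices make retention simple: deleting one index drops a whole day of logs
print(logstash_index_name(datetime(2015, 7, 14) - timedelta(days=1)))   # logstash-2015.07.13
```

This is why Kibana asks for an index pattern like logstash-* when you first open it: queries then span all the daily indices at once.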
shell > curl -X GET 127.0.0.1:9200/logstash-2015.07.14
## A quick way to look at the actual data
6. Displaying the data with Kibana (K4)
shell > cd /usr/local/src
shell > wget https://download.elastic.co/kibana/kibana/kibana-4.1.1-linux-x64.tar.gz
shell > tar zxf kibana-4.1.1-linux-x64.tar.gz
shell > mv kibana-4.1.1-linux-x64 /usr/local/kibana
shell > /usr/local/kibana/bin/kibana > /usr/local/kibana/kibana.log &
## That is all it takes to start it
## It listens on port 5601 by default; you can browse straight to http://192.168.214.20:5601
## There is actually a lot more to this; to really master it you need to study Logstash, Elasticsearch, and Kibana each on their own. Let's stop here for now! (It feels hard; so far I have barely scratched the surface, embarrassingly)
## References: http://kibana.logstash.es/content/index.html
## https://github.com/logstash-plugins/logstash-patterns-core/tree/master/patterns
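As a starting point for the further study mentioned above: the access-log lines collected earlier are in Apache common log format, so a filter block along these lines (a sketch using the stock COMMONAPACHELOG pattern from the grok patterns repository linked above) could be added to logstash.conf to split each message into structured fields before it reaches Elasticsearch:

```
filter {
    grok {
        # COMMONAPACHELOG ships with Logstash and matches lines such as:
        # 222.186.128.50 - - [02/Mar/2015:03:07:36 +0800] "GET http://www.baidu.com/ HTTP/1.1" 403 202
        # producing fields like clientip, verb, request, response, bytes
        match => { "message" => "%{COMMONAPACHELOG}" }
    }
    date {
        # Use the timestamp from the log line itself instead of the time Logstash read it
        match => [ "timestamp", "dd/MMM/yyyy:HH:mm:ss Z" ]
    }
}
```

With fields like response and clientip extracted, Kibana can aggregate on them directly (for example, a pie chart of status codes), which is where the dashboards mentioned in the feature list come from.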