1. ELK Components
2. Workflow
3. Environment Preparation
4. Installation
1. ELK Components
ELK is made up of three parts: ElasticSearch, Logstash, and Kibana. Briefly, ElasticSearch is the distributed full-text search and analytics engine that stores and indexes the data, Logstash collects, filters, and forwards log data, and Kibana provides the web interface for searching and visualizing the data held in ElasticSearch.
2. Workflow
Logstash is deployed on every server whose logs need to be collected. These instances act as Logstash agents (Logstash shippers) that monitor, filter, and collect the logs and send the filtered content to Redis. A Logstash indexer then gathers the logs from Redis and hands them over to the full-text search service ElasticSearch, where custom searches can be run; Kibana builds on those searches to present the results as web pages. The Logstash community conventionally uses the terms shipper, broker, and indexer to describe the roles the different processes play in this data flow, as in the sketch below.
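As a minimal sketch of that flow (the Redis host/port 192.168.96.128:6379 and the list key "logstash" are illustrative assumptions, not values taken from this tutorial), the shipper and indexer configurations might look like this:

logstash_shipper.conf (deployed on every server whose logs are collected):
input {
  file { path => ["/var/log/messages"] type => "messages" }
}
output {
  # Redis acts as the broker between shipper and indexer
  redis { host => "192.168.96.128" port => 6379 data_type => "list" key => "logstash" }
}

logstash_indexer.conf (deployed on the central log server):
input {
  # pull events back out of the Redis list
  redis { host => "192.168.96.128" port => 6379 data_type => "list" key => "logstash" }
}
output {
  elasticsearch { hosts => "192.168.96.128" }
}

In the single-host setup used in the rest of this article the broker is skipped and the agent writes straight to ElasticSearch (see section 4.3).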
3. Environment Preparation
Operating system: Red Hat Enterprise Linux Server release 7.2
Host IP address: 192.168.96.128
Required files and versions:
logstash-2.4.0.tar.gz
kibana-4.6.1-linux-x86_64.tar.gz
elasticsearch-2.4.1.tar.gz
4. Installation
Note: to avoid interfering with the tests, firewalld and SELinux are temporarily disabled.
# systemctl stop firewalld.service
# setenforce 0
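To confirm that both are actually off before continuing (the expected results below are assumptions based on standard RHEL 7 behaviour):
# systemctl is-active firewalld    \should print "inactive" once the service is stopped
# getenforce    \should print "Permissive" after setenforce 0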
4.1 Installing the JDK
Check whether the system already has a JDK. If a JDK is already installed by default, skip this step; if not, install one yourself.
[root@localhost ~]# java -version
openjdk version "1.8.0_101"
OpenJDK Runtime Environment (build 1.8.0_101-b13)
OpenJDK 64-Bit Server VM (build 25.101-b13, mixed mode)
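If java -version instead reports that the command is not found, OpenJDK 1.8 can be installed from the distribution repositories (assuming a configured yum repository; the package name below is the standard RHEL 7 one):
[root@localhost ~]# yum install -y java-1.8.0-openjdk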
4.2 Installing Logstash
Download the latest version of Logstash from the official site at https://www.elastic.co/downloads/logstash
[root@localhost ~]# wget https://download.elastic.co/logstash/logstash/logstash-2.4.0.tar.gz
Extract the downloaded archive into the /usr/local directory
[root@localhost ~]# tar zxf logstash-2.4.0.tar.gz -C /usr/local/
Configure the Logstash environment variable
[root@localhost ~]# echo 'export PATH=$PATH:/usr/local/logstash-2.4.0/bin/' > /etc/profile.d/logstash.sh
[root@localhost ~]# . /etc/profile
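A quick way to confirm that the PATH change took effect is to ask Logstash for its version (the exact output line here is assumed):
[root@localhost ~]# logstash --version
logstash 2.4.0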
Starting Logstash
Pass the Logstash configuration on the command line with the -e option for a quick test; the output is written straight to the screen.
[root@localhost ~]# logstash -e "input{stdin{}} output{stdout{}}"
hello world    \after typing "hello world" manually, wait a moment and the following result is returned
Settings: Default pipeline workers: 1
Pipeline main started
2016-10-18T06:06:08.873Z localhost.localdomain hello world
Pass the configuration with -e again for a quick test, this time printing the output to the screen in JSON-style format (the rubydebug codec)
[root@localhost ~]# logstash -e 'input{stdin{}} output{stdout{codec => rubydebug}}'
hello world    \after typing "hello world" manually, wait a moment and the following result is returned in that format
Settings: Default pipeline workers: 1
Pipeline main started
{
"message" => "hello world",
"@version" => "1",
"@timestamp" => "2016-10-18T06:10:02.765Z",
"host" => "localhost.localdomain"
}
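Instead of passing the pipeline on the command line with -e, the same configuration can be kept in a file and started with -f; the file name test.conf below is only an illustration:
[root@localhost ~]# cat > test.conf <<'EOF'
input { stdin {} }
output { stdout { codec => rubydebug } }
EOF
[root@localhost ~]# logstash -f test.conf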
4.3 Downloading and Installing ElasticSearch
[root@localhost ~]# wget https://download.elastic.co/elasticsearch/release/org/elasticsearch/distribution/tar/elasticsearch/2.4.1/elasticsearch-2.4.1.tar.gz
[root@localhost ~]# tar zxf elasticsearch-2.4.1.tar.gz -C /usr/local/
Modify the ElasticSearch configuration file elasticsearch.yml as follows
[root@localhost ~]# cd /usr/local/elasticsearch-2.4.1/
[root@localhost elasticsearch-2.4.1]# vim config/elasticsearch.yml
Add or edit the following line
network.host: 192.168.96.128
Starting elasticsearch fails with the error "don't run elasticsearch as root."
[root@localhost config]# /usr/local/elasticsearch-2.4.1/bin/elasticsearch
Exception in thread "main" java.lang.RuntimeException: don't run elasticsearch as root.
...
This restriction exists for system security: because ElasticSearch can accept and execute user-submitted scripts, it refuses to run as root. Creating a dedicated user to run ElasticSearch resolves the problem.
[root@localhost config]# groupadd elsearch
[root@localhost config]# useradd elsearch -g elsearch -p elasticsearch
[root@localhost elasticsearch-2.4.1]# chown -R elsearch:elsearch /usr/local/elasticsearch-2.4.1/
[root@localhost elasticsearch-2.4.1]# su - elsearch    \switch to the new user
[elsearch@localhost ~]$ cd /usr/local/elasticsearch-2.4.1/bin/
[elsearch@localhost bin]$ ./elasticsearch    \try starting elasticsearch again
[2016-10-18 16:08:32,546][INFO ][node ] [Alex] version[2.4.1], pid[71956], build[c67dc32/2016-09-27T18:57:55Z]
...
[2016-10-18 16:08:40,728][INFO ][node ] [Alex] started
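With the process started, a simple check that ElasticSearch is answering requests is to query its HTTP port; the reply is a small JSON document containing, among other things, the node name and version (the exact contents will vary):
[root@localhost ~]# curl http://192.168.96.128:9200/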
Note that in the 2.x releases the host option of the elasticsearch output was renamed to hosts; take care not to get this parameter wrong when editing the configuration file.
[root@localhost logstash]# cat logstash_agent.conf
input {
  file {
    type => "messages"
    path => ["/var/log/messages"]
  }
}
output {
  elasticsearch {
    hosts => "192.168.96.128"
  }
}
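To actually start shipping /var/log/messages with this file, the agent can be run against it; backgrounding it with nohup as below is just one simple option (a proper service script would be preferable for long-term use):
[root@localhost logstash]# nohup logstash -f logstash_agent.conf &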
4.4 Installing an ElasticSearch Plugin
The elasticsearch-kopf plugin can be used to query the data in Elasticsearch. To install elasticsearch-kopf, simply run the following command from the directory where Elasticsearch is installed:
[elsearch@localhost elk]$ cd /usr/local/elasticsearch-2.4.1/bin/
[elsearch@localhost bin]$ ./plugin install lmenezes/elasticsearch-kopf
...
Installed kopf into /usr/local/elasticsearch-2.4.1/plugins/kopf
Open the kopf page in a browser to view the data in elasticsearch.
URL: http://192.168.96.128:9200/_plugin/kopf/#!/cluster
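The same cluster information is also reachable from the command line through ElasticSearch's cat APIs, for example to list the indices that Logstash has created (the ?v flag adds column headers):
[root@localhost ~]# curl 'http://192.168.96.128:9200/_cat/indices?v'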
4.5 Installing Kibana
[root@localhost elk_file]# wget https://download.elastic.co/kibana/kibana/kibana-4.6.1-linux-x86_64.tar.gz
[root@localhost elk_file]# tar zxf kibana-4.6.1-linux-x86_64.tar.gz -C /usr/local
[root@localhost elk_file]# vim /usr/local/kibana-4.6.1-linux-x86_64/config/kibana.yml
Add the following line:
elasticsearch.url: "http://192.168.96.128:9200"
Start kibana
[root@localhost elk_file]# /usr/local/kibana-4.6.1-linux-x86_64/bin/kibana
log [18:30:42.720] [info][status][plugin:kibana@1.0.0] Status changed from uninitialized to green - Ready
log [18:30:42.757] [info][status][plugin:elasticsearch@1.0.0] Status changed from uninitialized to yellow - Waiting for Elasticsearch
log [18:30:42.778] [info][status][plugin:kbn_vislib_vis_types@1.0.0] Status changed from uninitialized to green - Ready
log [18:30:42.799] [info][status][plugin:markdown_vis@1.0.0] Status changed from uninitialized to green - Ready
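Started this way Kibana occupies the foreground terminal; a simple alternative for this test setup (only a sketch, the log path is arbitrary) is to background it:
[root@localhost elk_file]# nohup /usr/local/kibana-4.6.1-linux-x86_64/bin/kibana > /var/log/kibana.log 2>&1 &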
Open in a browser:
http://192.168.96.128:5601/#/settings/indices/?_g=()
Keep the default index name pattern logstash-*, which is time-based, and click "Create". When the following screen appears, the index pattern has been created.
Click "Discover" to search and browse the data in Elasticsearch.
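As an end-to-end check, a test line can be written to the local syslog with the standard logger command; because the agent is tailing /var/log/messages, the line should become searchable in Discover shortly afterwards (the message text is arbitrary):
[root@localhost ~]# logger "ELK pipeline test message"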