5.1 Scaling via a Redis Queue
The Redis server is the broker option officially recommended by Logstash. Acting as the broker means that both an input plugin and an output plugin exist for Redis.
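Here is a minimal sketch of the output side, assuming a shipper node that forwards its events into the same Redis instance (the channel name logstash-redis matches the input example below):
output {
    redis {
        host => "192.168.137.3"
        port => 6379
        data_type => "channel"    # the output plugin supports "list" or "channel"
        key => "logstash-redis"   # channel (or list key) to publish events to
    }
}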
Reading data from Redis:
LogStash::Inputs::Redis supports three data_type settings (in effect, a choice of how Redis is used), and each one makes the plugin issue a different Redis command:
- list: events are popped off a Redis list with BLPOP;
- channel: events are received via SUBSCRIBE on the key;
- pattern_channel: events are received via PSUBSCRIBE, with the key treated as a channel pattern.
The example below uses pattern_channel:
[elk@node01 conf]$ cat redis01.conf
input {
    redis {
        data_type => "pattern_channel"
        key => "logstash-redis"
        host => "192.168.137.3"
        port => 6379
    }
}
output {
    stdout {
        codec => rubydebug
    }
}
[elk@node01 conf]$ logstash -f redis01.conf
Settings: Default pipeline workers: 4
Pipeline main started
2. Verifying from the command line
First confirm that the redis-server service is running on the host you configured, then open a terminal and start the Logstash process so that it waits for data. Next, open another terminal, run redis-cli, and publish a test message (for example, PUBLISH logstash-redis "1111111111111"). The Logstash terminal then shows:
[elk@node01 conf]$ logstash -f redis01.conf
Settings: Default pipeline workers: 4
Pipeline main started
JSON codec is expecting array or object/map {:data=>"1111111111111", :level=>:error}
{
"message" => "1111111111111",
"tags" => [
[0] "_jsonparsefailure"
],
"@version" => "1",
"@timestamp" => "2018-02-09T02:39:42.500Z"
}
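The "JSON codec is expecting array or object/map" warning and the _jsonparsefailure tag appear because the redis input plugin defaults to codec => json, and the bare string we published is not valid JSON. You can either publish a JSON object instead, or set codec => plain on the input. A hypothetical example of the former:
127.0.0.1:6379> PUBLISH logstash-redis '{"user":"elk","msg":"hello"}'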
In other words, this data type is channel publish/subscribe. That becomes a problem when you scale Logstash out to a multi-node cluster: every node subscribes to the same channel. Start a second Logstash process with the same configuration, then publish another message:
127.0.0.1:6379>
127.0.0.1:6379> PUBLISH logstash-redis "abcdefg"
(integer) 2
The return value (integer) 2 shows that two subscribers received the message: every Logstash process subscribed to the channel receives it at the same time, and each one then outputs a duplicate copy of the event:
"message" => "abcdefg",
"tags" => [
[0] "_jsonparsefailure"
],
"@version" => "1",
"@timestamp" => "2018-02-09T02:47:43.499Z"
}
{
"message" => "abcdefg",
"tags" => [
[0] "_jsonparsefailure"
],
"@version" => "1",
"@timestamp" => "2018-02-09T02:47:43.918Z"
}
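To avoid this duplication when scaling out, the list data_type can be used instead: each node pops events with BLPOP, so every message is consumed by exactly one Logstash node. A minimal sketch, assuming the producer side pushes to a Redis list (the key name logstash-list is hypothetical and must match the producer):
input {
    redis {
        data_type => "list"      # BLPOP: each event is consumed by exactly one node
        key => "logstash-list"   # hypothetical list key; must match the producer side
        host => "192.168.137.3"
        port => 6379
    }
}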