Elasticsearch optimization & filebeat configuration tuning & logstash format configuration & grok in practice



    Character-encoding conversion (mainly garbled Chinese text)

    (1) Transcoding with codec => plain in the logstash input

    codec => plain {
             charset => "GB2312"
    }
    

    This converts text from GB2312 encoding to UTF-8.
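
    For context, a minimal sketch of a complete input block built around this codec; the file path is hypothetical:

    input {
      file {
        path => ["/var/log/app/*.log"]     # hypothetical GB2312-encoded source
        start_position => "beginning"
        codec => plain {
          charset => "GB2312"              # decode GB2312 bytes into UTF-8 events
        }
      }
    }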

    (2) The encoding conversion can also be done in filebeat (recommended)

    filebeat.prospectors:
    - input_type: log
      paths:
        - c:\Users\Administrator\Desktop\performanceTrace.txt
      encoding: GB2312
    

    Dropping unwanted lines from the logs

    (1) Deleting them with drop in the logstash filter

        if ([message] =~ "^20.*- task request,.*,start time.*") {   # regex matching the lines to drop
                drop {}
        } 
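
        The conditional above has to sit inside a filter block; a minimal sketch of the complete filter, under that assumption:

        filter {
          # drop the "task request" header lines entirely
          if [message] =~ "^20.*- task request,.*,start time.*" {
            drop {}
          }
        }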
    

    (2) Sample log

    2020-03-20 10:44:01,523 [33]DEBUG Debug - task request,task Id:1cbb72f1-a5ea-4e73-957c-6d20e9e12a7a,start time:2018-03-20 10:43:59   # the line to drop
    -- Request String : {"UserName":"15046699923","Pwd":"ZYjyh727","DeviceType":2,"DeviceId":"PC-20170525SADY","EquipmentNo":null,"SSID":"pc","RegisterPhones":null,"AppKey":"ab09d78e3b2c40b789ddfc81674bc24deac","Version":"2.0.5.3"} -- End
    -- Response String : {"ErrorCode":0,"Success":true,"ErrorMsg":null,"Result":null,"WaitInterval":30} -- End
    

    Handling multiple distinct log line types with grok (important)

    (1) Sample logs:

    2020-03-20 10:44:01,523 [33]DEBUG Debug - task request,task Id:1cbb72f1-a5ea-4e73-957c-6d20e9e12a7a,start time:2018-03-20 10:43:59
    -- Request String : {"UserName":"15046699923","Pwd":"ZYjyh727","DeviceType":2,"DeviceId":"PC-20170525SADY","EquipmentNo":null,"SSID":"pc","RegisterPhones":null,"AppKey":"ab09d78e3b2c40b789ddfc81674bc24deac","Version":"2.0.5.3"} -- End
    -- Response String : {"ErrorCode":0,"Success":true,"ErrorMsg":null,"Result":null,"WaitInterval":30} -- End
    

    In the logstash filter, grok handles the three line types separately

    match => {
        "message" => "^20.*- task request,.*,start time:%{TIMESTAMP_ISO8601:RequestTime}"
    }
    
    match => {
        "message" => "^-- Request String : {\"UserName\":\"%{NUMBER:UserName:int}\",\"Pwd\":\"(?<Pwd>.*)\",\"DeviceType\":%{NUMBER:DeviceType:int},\"DeviceId\":\"(?<DeviceId>.*)\",\"EquipmentNo\":(?<EquipmentNo>.*),\"SSID\":(?<SSID>.*),\"RegisterPhones\":(?<RegisterPhones>.*),\"AppKey\":\"(?<AppKey>.*)\",\"Version\":\"(?<Version>.*)\"} -- End.*"    
    }
    
    match => {
        "message" => "^-- Response String : {\"ErrorCode\":%{NUMBER:ErrorCode:int},\"Success\":(?<Success>[a-z]*),\"ErrorMsg\":(?<ErrorMsg>.*),\"Result\":(?<Result>.*),\"WaitInterval\":%{NUMBER:WaitInterval:int}} -- End.*"
    }
    
    ... and so on for any further line types; a combined sketch follows below
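
    A hedged sketch of the three patterns combined into one grok filter: given an array, grok tries each pattern in order and stops at the first match (break_on_match defaults to true). The Request/Response patterns are abbreviated here; the full versions above drop in unchanged:

    filter {
      grok {
        match => {
          "message" => [
            "^20.*- task request,.*,start time:%{TIMESTAMP_ISO8601:RequestTime}",
            "^-- Request String : .* -- End.*",
            "^-- Response String : .* -- End.*"
          ]
        }
      }
    }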
    

    (2) Sample logs:

    # An INFO log line
    2018-09-06 21:21:40.536 [490343b4207b39e5,490343b4207b39e5] [reactor-http-epoll-4] INFO  c.w.w.p.i.config.SecurityFilter - [filter,75] - skipFlag:false  uri:/report-server/daily/queryDailyReportChannel authorization:GbUzq6IElKkvRswreIHd8Xv/YMDd885jyINObc543vx2H+0lhdu0p5bOu0Vd9PT+jgxJpXHYyZiPgQmyio5Sfg==
    
    # An ERROR log line, followed by a multi-line stack trace
    2018-09-06 21:21:15.863 [548809be071dd887,548809be071dd887] [reactor-http-epoll-4] ERROR c.w.w.c.e.WebExceptionHandler - [handle,34] - 系统异常:/report-server/game/queryPartnerGameReport
    com.wbgg.wbcommon.core.base.exception.BusinessException: 您的账号未登录,请登录后再操作!
    	at com.wbgg.wbcommon.core.base.wrapper.Wrapper.check(Wrapper.java:155)
    	at com.wbgg.wbgateway.pc.infrastructure.config.SecurityFilter.filter(SecurityFilter.java:86)
    	at org.springframework.cloud.gateway.handler.FilteringWebHandler$GatewayFilterAdapter.filter(FilteringWebHandler.java:135)
    	at org.springframework.cloud.gateway.filter.OrderedGatewayFilter.filter(OrderedGatewayFilter.java:44)
    	at org.springframework.cloud.gateway.handler.FilteringWebHandler$DefaultGatewayFilterChain.lambda$filter$0(FilteringWebHandler.java:117)
    	at reactor.core.publisher.MonoDefer.subscribe(MonoDefer.java:44)
    	at reactor.core.publisher.MonoLift.subscribe(MonoLift.java:45)
    	at reactor.core.publisher.MonoDefer.subscribe(MonoDefer.java:52)
    	at reactor.core.publisher.MonoLift.subscribe(MonoLift.java:45)
    	at reactor.core.publisher.Mono.subscribe(Mono.java:3695)
    	at reactor.core.publisher.MonoIgnoreThen$ThenIgnoreMain.drain(MonoIgnoreThen.java:172)
    	at reactor.core.publisher.MonoIgnoreThen.subscribe(MonoIgnoreThen.java:56)
    	at reactor.core.publisher.MonoLiftFuseable.subscribe(MonoLiftFuseable.java:55)
    	at reactor.core.publisher.MonoFlatMap$FlatMapMain.onNext(MonoFlatMap.java:150)
    	at org.springframework.cloud.sleuth.instrument.reactor.ScopePassingSpanSubscriber.onNext(ScopePassingSpanSubscriber.java:96)
    	at reactor.core.publisher.FluxSwitchIfEmpty$SwitchIfEmptySubscriber.onNext(FluxSwitchIfEmpty.java:67)
    	at org.springframework.cloud.sleuth.instrument.reactor.ScopePassingSpanSubscriber.onNext(ScopePassingSpanSubscriber.java:96)
    	at reactor.core.publisher.MonoNext$NextSubscriber.onNext(MonoNext.java:76)
    	at org.springframework.cloud.sleuth.instrument.reactor.ScopePassingSpanSubscriber.onNext(ScopePassingSpanSubscriber.java:96)
    	at reactor.core.publisher.FluxConcatMap$ConcatMapImmediate.innerNext(FluxConcatMap.java:275)
    	at reactor.core.publisher.FluxConcatMap$ConcatMapInner.onNext(FluxConcatMap.java:849)
    	at org.springframework.cloud.sleuth.instrument.reactor.ScopePassingSpanSubscriber.onNext(ScopePassingSpanSubscriber.java:96)
    	at reactor.core.publisher.FluxMap$MapSubscriber.onNext(FluxMap.java:114)
    	at org.springframework.cloud.sleuth.instrument.reactor.ScopePassingSpanSubscriber.onNext(ScopePassingSpanSubscriber.java:96)
    	at reactor.core.publisher.FluxSwitchIfEmpty$SwitchIfEmptySubscriber.onNext(FluxSwitchIfEmpty.java:67)
    	at org.springframework.cloud.sleuth.instrument.reactor.ScopePassingSpanSubscriber.onNext(ScopePassingSpanSubscriber.java:96)
    	at reactor.core.publisher.Operators$MonoSubscriber.complete(Operators.java:1505)
    	at reactor.core.publisher.MonoFlatMap$FlatMapInner.onNext(MonoFlatMap.java:241)
    	at reactor.core.publisher.Operators$ScalarSubscription.request(Operators.java:2070)
    	at reactor.core.publisher.MonoFlatMap$FlatMapInner.onSubscribe(MonoFlatMap.java:230)
    	at reactor.core.publisher.MonoJust.subscribe(MonoJust.java:54)
    	at reactor.core.publisher.MonoLiftFuseable.subscribe(MonoLiftFuseable.java:55)
    	at reactor.core.publisher.MonoFlatMap$FlatMapMain.onNext(MonoFlatMap.java:150)
    	at org.springframework.cloud.sleuth.instrument.reactor.ScopePassingSpanSubscriber.onNext(ScopePassingSpanSubscriber.java:96)
    	at reactor.core.publisher.FluxMap$MapSubscriber.onNext(FluxMap.java:114)
    	at org.springframework.cloud.sleuth.instrument.reactor.ScopePassingSpanSubscriber.onNext(ScopePassingSpanSubscriber.java:96)
    	at reactor.core.publisher.MonoNext$NextSubscriber.onNext(MonoNext.java:76)
    	at org.springframework.cloud.sleuth.instrument.reactor.ScopePassingSpanSubscriber.onNext(ScopePassingSpanSubscriber.java:96)
    	at reactor.core.publisher.FluxConcatMap$ConcatMapImmediate.innerNext(FluxConcatMap.java:275)
    	at reactor.core.publisher.FluxConcatMap$ConcatMapInner.onNext(FluxConcatMap.java:849)
    	at org.springframework.cloud.sleuth.instrument.reactor.ScopePassingSpanSubscriber.onNext(ScopePassingSpanSubscriber.java:96)
    	at reactor.core.publisher.FluxOnErrorResume$ResumeSubscriber.onNext(FluxOnErrorResume.java:73)
    	at org.springframework.cloud.sleuth.instrument.reactor.ScopePassingSpanSubscriber.onNext(ScopePassingSpanSubscriber.java:96)
    	at reactor.core.publisher.FluxPeek$PeekSubscriber.onNext(FluxPeek.java:192)
    	at org.springframework.cloud.sleuth.instrument.reactor.ScopePassingSpanSubscriber.onNext(ScopePassingSpanSubscriber.java:96)
    	at reactor.core.publisher.Operators$MonoSubscriber.complete(Operators.java:1505)
    	at reactor.core.publisher.MonoFilterWhen$MonoFilterWhenMain.innerResult(MonoFilterWhen.java:193)
    	at reactor.core.publisher.MonoFilterWhen$FilterWhenInner.onNext(MonoFilterWhen.java:260)
    	at reactor.core.publisher.MonoFilterWhen$FilterWhenInner.onNext(MonoFilterWhen.java:228)
    	at reactor.core.publisher.Operators$ScalarSubscription.request(Operators.java:2070)
    	at reactor.core.publisher.MonoFilterWhen$FilterWhenInner.onSubscribe(MonoFilterWhen.java:249)
    	at reactor.core.publisher.MonoJust.subscribe(MonoJust.java:54)
    	at reactor.core.publisher.MonoLiftFuseable.subscribe(MonoLiftFuseable.java:55)
    	at reactor.core.publisher.Mono.subscribe(Mono.java:3695)
    	at reactor.core.publisher.MonoFilterWhen$MonoFilterWhenMain.onNext(MonoFilterWhen.java:150)
    	at reactor.core.publisher.Operators$ScalarSubscription.request(Operators.java:2070)
    	at reactor.core.publisher.MonoFilterWhen$MonoFilterWhenMain.onSubscribe(MonoFilterWhen.java:103)
    	at reactor.core.publisher.MonoJust.subscribe(MonoJust.java:54)
    	at reactor.core.publisher.MonoLiftFuseable.subscribe(MonoLiftFuseable.java:55)
    	at reactor.core.publisher.MonoFilterWhen.subscribe(MonoFilterWhen.java:56)
    	at reactor.core.publisher.MonoLift.subscribe(MonoLift.java:45)
    	at reactor.core.publisher.MonoPeek.subscribe(MonoPeek.java:71)
    	at reactor.core.publisher.MonoLift.subscribe(MonoLift.java:45)
    	at reactor.core.publisher.MonoOnErrorResume.subscribe(MonoOnErrorResume.java:44)
    	at reactor.core.publisher.MonoLift.subscribe(MonoLift.java:45)
    	at reactor.core.publisher.Mono.subscribe(Mono.java:3695)
    	at reactor.core.publisher.FluxConcatMap$ConcatMapImmediate.drain(FluxConcatMap.java:442)
    	at reactor.core.publisher.FluxConcatMap$ConcatMapImmediate.onNext(FluxConcatMap.java:244)
    	at org.springframework.cloud.sleuth.instrument.reactor.ScopePassingSpanSubscriber.onNext(ScopePassingSpanSubscriber.java:96)
    	at org.springframework.cloud.sleuth.instrument.reactor.ScopePassingSpanSubscriber.onNext(ScopePassingSpanSubscriber.java:96)
    	at reactor.core.publisher.FluxDematerialize$DematerializeSubscriber.onNext(FluxDematerialize.java:114)
    	at reactor.core.publisher.FluxDematerialize$DematerializeSubscriber.onNext(FluxDematerialize.java:42)
    	at org.springframework.cloud.sleuth.instrument.reactor.ScopePassingSpanSubscriber.onNext(ScopePassingSpanSubscriber.java:96)
    	at reactor.core.publisher.FluxFlattenIterable$FlattenIterableSubscriber.drainAsync(FluxFlattenIterable.java:395)
    	at reactor.core.publisher.FluxFlattenIterable$FlattenIterableSubscriber.drain(FluxFlattenIterable.java:638)
    	at reactor.core.publisher.FluxFlattenIterable$FlattenIterableSubscriber.onNext(FluxFlattenIterable.java:242)
    	at org.springframework.cloud.sleuth.instrument.reactor.ScopePassingSpanSubscriber.onNext(ScopePassingSpanSubscriber.java:96)
    	at reactor.core.publisher.FluxPeekFuseable$PeekFuseableSubscriber.onNext(FluxPeekFuseable.java:204)
    	at org.springframework.cloud.sleuth.instrument.reactor.ScopePassingSpanSubscriber.onNext(ScopePassingSpanSubscriber.java:96)
    	at reactor.core.publisher.Operators$MonoSubscriber.complete(Operators.java:1505)
    	at reactor.core.publisher.MonoCollectList$MonoBufferAllSubscriber.onComplete(MonoCollectList.java:118)
    	at org.springframework.cloud.sleuth.instrument.reactor.ScopePassingSpanSubscriber.onComplete(ScopePassingSpanSubscriber.java:112)
    	at reactor.core.publisher.DrainUtils.postCompleteDrain(DrainUtils.java:131)
    	at reactor.core.publisher.DrainUtils.postComplete(DrainUtils.java:186)
    	at reactor.core.publisher.FluxMaterialize$MaterializeSubscriber.onComplete(FluxMaterialize.java:134)
    	at org.springframework.cloud.sleuth.instrument.reactor.ScopePassingSpanSubscriber.onComplete(ScopePassingSpanSubscriber.java:112)
    	at reactor.core.publisher.FluxFlattenIterable$FlattenIterableSubscriber.drainAsync(FluxFlattenIterable.java:325)
    	at reactor.core.publisher.FluxFlattenIterable$FlattenIterableSubscriber.drain(FluxFlattenIterable.java:638)
    	at reactor.core.publisher.FluxFlattenIterable$FlattenIterableSubscriber.onComplete(FluxFlattenIterable.java:259)
    	at org.springframework.cloud.sleuth.instrument.reactor.ScopePassingSpanSubscriber.onComplete(ScopePassingSpanSubscriber.java:112)
    	at reactor.core.publisher.FluxMapFuseable$MapFuseableSubscriber.onComplete(FluxMapFuseable.java:144)
    	at org.springframework.cloud.sleuth.instrument.reactor.ScopePassingSpanSubscriber.onComplete(ScopePassingSpanSubscriber.java:112)
    	at reactor.core.publisher.Operators$MonoSubscriber.complete(Operators.java:1508)
    	at reactor.core.publisher.MonoCollectList$MonoBufferAllSubscriber.onComplete(MonoCollectList.java:118)
    	at org.springframework.cloud.sleuth.instrument.reactor.ScopePassingSpanSubscriber.onComplete(ScopePassingSpanSubscriber.java:112)
    	at reactor.core.publisher.FluxFlatMap$FlatMapMain.checkTerminated(FluxFlatMap.java:794)
    	at reactor.core.publisher.FluxFlatMap$FlatMapMain.drainLoop(FluxFlatMap.java:560)
    	at reactor.core.publisher.FluxFlatMap$FlatMapMain.drain(FluxFlatMap.java:540)
    	at reactor.core.publisher.FluxFlatMap$FlatMapMain.onComplete(FluxFlatMap.java:426)
    	at org.springframework.cloud.sleuth.instrument.reactor.ScopePassingSpanSubscriber.onComplete(ScopePassingSpanSubscriber.java:112)
    	at reactor.core.publisher.FluxIterable$IterableSubscription.slowPath(FluxIterable.java:265)
    	at reactor.core.publisher.FluxIterable$IterableSubscription.request(FluxIterable.java:201)
    	at org.springframework.cloud.sleuth.instrument.reactor.ScopePassingSpanSubscriber.request(ScopePassingSpanSubscriber.java:79)
    	at reactor.core.publisher.FluxFlatMap$FlatMapMain.onSubscribe(FluxFlatMap.java:335)
    	at org.springframework.cloud.sleuth.instrument.reactor.ScopePassingSpanSubscriber.onSubscribe(ScopePassingSpanSubscriber.java:71)
    	at reactor.core.publisher.FluxIterable.subscribe(FluxIterable.java:139)
    	at reactor.core.publisher.FluxIterable.subscribe(FluxIterable.java:63)
    	at reactor.core.publisher.FluxLiftFuseable.subscribe(FluxLiftFuseable.java:70)
    	at reactor.core.publisher.FluxFlatMap.subscribe(FluxFlatMap.java:97)
    	at reactor.core.publisher.FluxLift.subscribe(FluxLift.java:46)
    	at reactor.core.publisher.MonoCollectList.subscribe(MonoCollectList.java:59)
    	at reactor.core.publisher.MonoLiftFuseable.subscribe(MonoLiftFuseable.java:55)
    	at reactor.core.publisher.MonoMapFuseable.subscribe(MonoMapFuseable.java:59)
    	at reactor.core.publisher.MonoLiftFuseable.subscribe(MonoLiftFuseable.java:55)
    	at reactor.core.publisher.MonoFlattenIterable.subscribe(MonoFlattenIterable.java:101)
    	at reactor.core.publisher.FluxLiftFuseable.subscribe(FluxLiftFuseable.java:70)
    	at reactor.core.publisher.FluxMaterialize.subscribe(FluxMaterialize.java:40)
    	at reactor.core.publisher.FluxLift.subscribe(FluxLift.java:46)
    	at reactor.core.publisher.MonoCollectList.subscribe(MonoCollectList.java:59)
    	at reactor.core.publisher.MonoLiftFuseable.subscribe(MonoLiftFuseable.java:55)
    	at reactor.core.publisher.MonoPeekFuseable.subscribe(MonoPeekFuseable.java:74)
    	at reactor.core.publisher.MonoLiftFuseable.subscribe(MonoLiftFuseable.java:55)
    	at reactor.core.publisher.MonoFlattenIterable.subscribe(MonoFlattenIterable.java:101)
    	at reactor.core.publisher.FluxLiftFuseable.subscribe(FluxLiftFuseable.java:70)
    	at reactor.core.publisher.FluxDematerialize.subscribe(FluxDematerialize.java:39)
    	at reactor.core.publisher.FluxLift.subscribe(FluxLift.java:46)
    	at reactor.core.publisher.FluxDefer.subscribe(FluxDefer.java:54)
    	at reactor.core.publisher.FluxLift.subscribe(FluxLift.java:46)
    	at reactor.core.publisher.FluxConcatMap.subscribe(FluxConcatMap.java:121)
    	at reactor.core.publisher.FluxLift.subscribe(FluxLift.java:46)
    	at reactor.core.publisher.MonoNext.subscribe(MonoNext.java:40)
    	at reactor.core.publisher.MonoLift.subscribe(MonoLift.java:45)
    	at reactor.core.publisher.MonoMap.subscribe(MonoMap.java:55)
    	at reactor.core.publisher.MonoLift.subscribe(MonoLift.java:45)
    	at reactor.core.publisher.MonoFlatMap.subscribe(MonoFlatMap.java:60)
    	at reactor.core.publisher.MonoLiftFuseable.subscribe(MonoLiftFuseable.java:55)
    	at reactor.core.publisher.MonoSwitchIfEmpty.subscribe(MonoSwitchIfEmpty.java:44)
    	at reactor.core.publisher.MonoLift.subscribe(MonoLift.java:45)
    	at reactor.core.publisher.MonoMap.subscribe(MonoMap.java:55)
    	at reactor.core.publisher.MonoLift.subscribe(MonoLift.java:45)
    	at reactor.core.publisher.Mono.subscribe(Mono.java:3695)
    	at reactor.core.publisher.FluxConcatMap$ConcatMapImmediate.drain(FluxConcatMap.java:442)
    	at reactor.core.publisher.FluxConcatMap$ConcatMapImmediate.onNext(FluxConcatMap.java:244)
    	at org.springframework.cloud.sleuth.instrument.reactor.ScopePassingSpanSubscriber.onNext(ScopePassingSpanSubscriber.java:96)
    	at reactor.core.publisher.FluxIterable$IterableSubscription.slowPath(FluxIterable.java:243)
    	at reactor.core.publisher.FluxIterable$IterableSubscription.request(FluxIterable.java:201)
    	at org.springframework.cloud.sleuth.instrument.reactor.ScopePassingSpanSubscriber.request(ScopePassingSpanSubscriber.java:79)
    	at reactor.core.publisher.FluxConcatMap$ConcatMapImmediate.onSubscribe(FluxConcatMap.java:229)
    	at org.springframework.cloud.sleuth.instrument.reactor.ScopePassingSpanSubscriber.onSubscribe(ScopePassingSpanSubscriber.java:71)
    	at reactor.core.publisher.FluxIterable.subscribe(FluxIterable.java:139)
    	at reactor.core.publisher.FluxIterable.subscribe(FluxIterable.java:63)
    	at reactor.core.publisher.FluxLiftFuseable.subscribe(FluxLiftFuseable.java:70)
    	at reactor.core.publisher.FluxConcatMap.subscribe(FluxConcatMap.java:121)
    	at reactor.core.publisher.FluxLift.subscribe(FluxLift.java:46)
    	at reactor.core.publisher.MonoNext.subscribe(MonoNext.java:40)
    	at reactor.core.publisher.MonoLift.subscribe(MonoLift.java:45)
    	at reactor.core.publisher.MonoSwitchIfEmpty.subscribe(MonoSwitchIfEmpty.java:44)
    	at reactor.core.publisher.MonoLift.subscribe(MonoLift.java:45)
    	at reactor.core.publisher.MonoFlatMap.subscribe(MonoFlatMap.java:60)
    	at reactor.core.publisher.MonoLiftFuseable.subscribe(MonoLiftFuseable.java:55)
    	at reactor.core.publisher.MonoFlatMap.subscribe(MonoFlatMap.java:60)
    	at reactor.core.publisher.MonoLiftFuseable.subscribe(MonoLiftFuseable.java:55)
    	at reactor.core.publisher.MonoDefer.subscribe(MonoDefer.java:52)
    	at reactor.core.publisher.MonoLift.subscribe(MonoLift.java:45)
    	at reactor.core.publisher.MonoDefer.subscribe(MonoDefer.java:52)
    	at reactor.core.publisher.MonoLift.subscribe(MonoLift.java:45)
    	at reactor.core.publisher.MonoDefer.subscribe(MonoDefer.java:52)
    	at reactor.core.publisher.MonoLift.subscribe(MonoLift.java:45)
    	at org.springframework.cloud.sleuth.instrument.web.TraceWebFilter$MonoWebFilterTrace.subscribe(TraceWebFilter.java:180)
    	at reactor.core.publisher.MonoDefer.subscribe(MonoDefer.java:52)
    	at reactor.core.publisher.MonoLift.subscribe(MonoLift.java:45)
    	at reactor.core.publisher.MonoOnErrorResume.subscribe(MonoOnErrorResume.java:44)
    	at reactor.core.publisher.MonoLift.subscribe(MonoLift.java:45)
    	at reactor.core.publisher.MonoOnErrorResume.subscribe(MonoOnErrorResume.java:44)
    	at reactor.core.publisher.MonoLift.subscribe(MonoLift.java:45)
    	at reactor.core.publisher.MonoPeekTerminal.subscribe(MonoPeekTerminal.java:61)
    	at reactor.core.publisher.MonoLiftFuseable.subscribe(MonoLiftFuseable.java:55)
    	at reactor.core.publisher.MonoOnErrorResume.subscribe(MonoOnErrorResume.java:44)
    	at reactor.core.publisher.MonoLift.subscribe(MonoLift.java:45)
    	at reactor.core.publisher.Mono.subscribe(Mono.java:3695)
    	at reactor.core.publisher.MonoIgnoreThen$ThenIgnoreMain.drain(MonoIgnoreThen.java:172)
    	at reactor.core.publisher.MonoIgnoreThen.subscribe(MonoIgnoreThen.java:56)
    	at reactor.core.publisher.MonoLiftFuseable.subscribe(MonoLiftFuseable.java:55)
    	at reactor.core.publisher.MonoPeekFuseable.subscribe(MonoPeekFuseable.java:70)
    	at reactor.core.publisher.MonoLiftFuseable.subscribe(MonoLiftFuseable.java:55)
    	at reactor.core.publisher.MonoPeekTerminal.subscribe(MonoPeekTerminal.java:61)
    	at reactor.core.publisher.MonoLiftFuseable.subscribe(MonoLiftFuseable.java:55)
    	at reactor.netty.http.server.HttpServerHandle.onStateChange(HttpServerHandle.java:64)
    	at reactor.netty.tcp.TcpServerBind$ChildObserver.onStateChange(TcpServerBind.java:226)
    	at reactor.netty.http.server.HttpServerOperations.onInboundNext(HttpServerOperations.java:434)
    	at reactor.netty.channel.ChannelOperationsHandler.channelRead(ChannelOperationsHandler.java:141)
    	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:374)
    	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:360)
    	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:352)
    	at reactor.netty.http.server.HttpTrafficHandler.channelRead(HttpTrafficHandler.java:160)
    	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:374)
    	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:360)
    	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:352)
    	at io.netty.channel.CombinedChannelDuplexHandler$DelegatingChannelHandlerContext.fireChannelRead(CombinedChannelDuplexHandler.java:438)
    	at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:328)
    	at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:302)
    	at io.netty.channel.CombinedChannelDuplexHandler.channelRead(CombinedChannelDuplexHandler.java:253)
    	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:374)
    	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:360)
    	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:352)
    	at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1422)
    	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:374)
    	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:360)
    	at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:931)
    	at io.netty.channel.epoll.AbstractEpollStreamChannel$EpollStreamUnsafe.epollInReady(AbstractEpollStreamChannel.java:799)
    	at io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:433)
    	at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:330)
    	at io.netty.util.concurrent.SingleThreadEventExecutor$6.run(SingleThreadEventExecutor.java:1044)
    	at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    	at java.lang.Thread.run(Thread.java:748)
    

    Matching these logs with grok rules in the logstash filter (full pipeline below)

    input {
      kafka {
        id => "test-kafka-input"
        bootstrap_servers => ["192.168.0.250:9092"] # kafka broker address
        group_id => "logstash"						# kafka consumer group
        topics => ["test", "filebeat"]				# kafka topics to consume
        codec => json 								# decode the input as JSON
      }
    }
    
    filter {
    
    #    mutate {
    #        gsub => [ "message", "\n", "" ]		# strip the newline characters
    #    }
    
    
        grok {
            match => ["message","%{TIMESTAMP_ISO8601:timestamp}\s+%{SYSLOG5424SD:uid}\s+%{SYSLOG5424SD:threadid}\s+%{LOGLEVEL:loglevel}\s+%{JAVACLASS:javaclass}\s+.?\s+%{SYSLOG5424SD}\s+.?\s+%{GREEDYDATA:message}"]	 # regex pattern and capture labels matching the log format
            overwrite => ["message"]				# replace message with the %{GREEDYDATA:message} capture above
        }
    
        date {
            match => [ "timestamp", "yyyy-MM-dd HH:mm:ss.SSS" ] # format of the timestamp field (the sample logs use a dot before the milliseconds)
            target => "@timestamp"								# overwrite the default @timestamp (ingest time) with the grok-captured timestamp, so Kibana shows the real event time
        }
        
        # Elasticsearch stores @timestamp in UTC (zone 0), not UTC+8, so displayed times run 8 hours behind; as a workaround, use ruby to add 8 hours:
        ruby { 
            code => "event.set('timestamp', event.get('@timestamp').time.localtime + 8*60*60)" 
        }
     
        ruby {
            code => "event.set('@timestamp',event.get('timestamp'))"
        }
    
    	# remove the unneeded extra fields via the mutate plugin
        mutate {
            remove_field => ["timestamp","hostname","tags","stream","agent","ecs","input","[kubernetes][container][name]","[kubernetes][labels][pod-template-hash]","[kubernetes][pod][uid]","[kubernetes][replicaset]","@version","[log][offset]"]
        }
    
        json {
            source => "@fields"
        # remove unneeded filebeat metadata
            remove_field => [ "beat","@fields","fields","index_name","offset","source","message","time","tags"]
          }
    
    
    
    #    json {
    #        source => "message" 
    #        remove_field => [ "message" ]
    #  }
    
    #  multiline {
    #    pattern => "^\d{4}-\d{1,2}-\d{1,2}\s\d{1,2}:\d{1,2}:\d{1,2}" 
    #    negate => true   
    #    what => "previous" 
    #    }
    
    }
    
    
    output {
      elasticsearch {
        hosts => ["http://192.168.0.250:9200"]
        user => logstash_admin
        password => "YHkdypsPKqw5gaWKE"
        index => "game-filebeat-%{+YYYY.MM.dd}"
      }
      
      #file {
      #  path => "/test/bak/test.txt"
      #}
      
    }
    

    Merging multi-line log entries: the multiline plugin (important)

    (1) Example:

    ① The log

    2018-03-20 10:44:01,523 [33]DEBUG Debug - task request,task Id:1cbb72f1-a5ea-4e73-957c-6d20e9e12a7a,start time:2018-03-20 10:43:59
    -- Request String : {"UserName":"15046699923","Pwd":"ZYjyh727","DeviceType":2,"DeviceId":"PC-20170525SADY","EquipmentNo":null,"SSID":"pc","RegisterPhones":null,"AppKey":"ab09d78e3b2c40b789ddfc81674bc24deac","Version":"2.0.5.3"} -- End
    -- Response String : {"ErrorCode":0,"Success":true,"ErrorMsg":null,"Result":null,"WaitInterval":30} -- End
    

    ② grok handling of the merged multi-line event (after the merge, the processing is the same everywhere, as follows)

    filter {
      grok {
        match => {
          "message" => "^%{TIMESTAMP_ISO8601:InsertTime} .*- task request,.*,start time:%{TIMESTAMP_ISO8601:RequestTime}
    -- Request String : {\"UserName\":\"%{NUMBER:UserName:int}\",\"Pwd\":\"(?<Pwd>.*)\",\"DeviceType\":%{NUMBER:DeviceType:int},\"DeviceId\":\"(?<DeviceId>.*)\",\"EquipmentNo\":(?<EquipmentNo>.*),\"SSID\":(?<SSID>.*),\"RegisterPhones\":(?<RegisterPhones>.*),\"AppKey\":\"(?<AppKey>.*)\",\"Version\":\"(?<Version>.*)\"} -- End
    -- Response String : {\"ErrorCode\":%{NUMBER:ErrorCode:int},\"Success\":(?<Success>[a-z]*),\"ErrorMsg\":(?<ErrorMsg>.*),\"Result\":(?<Result>.*),\"WaitInterval\":%{NUMBER:WaitInterval:int}} -- End"
        }
      }
    }
    

    (2) Using multiline in filebeat (recommended)

    ① About multiline

    pattern: the regex that decides which lines get merged

    negate: true/false — merge the lines that match the pattern, or the lines that do not

    match: after/before (worth thinking through carefully)

      after: merge lines onto the preceding matched line. Note: in this mode the last log entry in a file is never flushed, because nothing arrives after it to trigger the merge

      before: merge lines onto the following matched line (recommended)

    ② Filebeat 5.5 and later (using before as the example)

    filebeat.prospectors:
    - input_type: log
      paths:
        - /root/performanceTrace*
      fields:
        type: zidonghualog
      multiline.pattern: '.*"WaitInterval":.*-- End'
      multiline.negate: true
      multiline.match: before
    

    ③ Before filebeat 5.5 (using after as the example)

    filebeat.prospectors:
    - input_type: log
      paths:
        - /root/performanceTrace*
      multiline:
        pattern: '^20.*'
        negate: true
        match: after
    

    (3) Using the multiline plugin in the logstash input (recommended when there is no filebeat)

    ① About multiline

    pattern: the regex that decides which lines get merged

    negate: true/false — merge the lines that match the pattern, or the lines that do not

    what: previous/next (worth thinking through carefully)

      previous: equivalent to filebeat's after

      next: equivalent to filebeat's before

    ② Usage

    input {
            file {
                    path => ["/root/logs/log2"]
                    start_position => "beginning"
                    codec => multiline {
                            pattern => "^20.*"
                            negate => true
                            what => "previous"
                    }
            }
    }
    

    (4) Using the multiline plugin in the logstash filter (not recommended)

    (a) Why it is not recommended:

      ① once multiline is set in the filter stage, the pipeline worker count automatically drops to 1

      ② version 5.5 removed the multiline filter from the official distribution; to use it, install it first:

      /usr/share/logstash/bin/logstash-plugin install logstash-filter-multiline

    (b) Example:

    filter {
      multiline {
        pattern => "^20.*"
        negate => true
        what => "previous"
      }
    } 
    

    Using date in the logstash filter

    (1) Sample log

    2018-03-20 10:44:01 [33]DEBUG Debug - task request,task Id:1cbb72f1-a5ea-4e73-957c-6d20e9e12a7a,start time:2018-03-20 10:43:59
    

    (2) Using date

            date {
                    match => ["InsertTime","yyyy-MM-dd HH:mm:ss"]
                    remove_field => "InsertTime"
            }
    

    Note:

    match => ["timestamp" ,"dd/MMM/YYYY H:m:s Z"]
    

    This matches a field whose format is day/month-name/year hour:minute:second timezone.

    It can also be written as match => ["timestamp","ISO8601"] (recommended).

    (3) What date does

      It replaces @timestamp with the time parsed from the matched field in the log; by default @timestamp is the time the event reached logstash, not the real time recorded in the log. A short sketch follows below.
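
      A minimal sketch putting match and target together, assuming an InsertTime field was already extracted by grok as in the examples above:

    filter {
      date {
        match  => ["InsertTime", "ISO8601"]   # parse the extracted time string
        target => "@timestamp"                # make it the event's canonical time (also the default target)
        remove_field => ["InsertTime"]
      }
    }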

    Handling multiple log categories separately (important)

    ① Add a type field in the filebeat configuration

    filebeat:
      prospectors:
        - paths:
            - /mnt/data_total/WebApiDebugLog.txt*
          fields:
            type: WebApiDebugLog_total
        - paths:
            - /mnt/data_request/WebApiDebugLog.txt*
          fields:
            type: WebApiDebugLog_request
        - paths:
            - /mnt/data_report/WebApiDebugLog.txt*
          fields:
            type: WebApiDebugLog_report
    

    ② Use if in the logstash filter to treat each type differently

    filter {
       if [fields][type] == "WebApiDebugLog_request" {   # request-type logs
            if [message] =~ "^20.*- task report,.*,start time.*" {   # drop the report lines
                    drop {}
            }
            grok {
                match => {"... ..."}
            }
        }
    }
    

    ③ Use if in the logstash output

    if [fields][type] == "WebApiDebugLog_total" {
        elasticsearch {
            hosts => ["6.6.6.6:9200"]
            index => "logstash-WebApiDebugLog_total-%{+YYYY.MM.dd}"
            document_type => "WebApiDebugLog_total_logs"
        }
    }
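
    A hedged sketch of how the other two types from the filebeat config above can be routed the same way (the index names are illustrative):

    output {
      if [fields][type] == "WebApiDebugLog_request" {
        elasticsearch {
          hosts => ["6.6.6.6:9200"]
          index => "logstash-WebApiDebugLog_request-%{+YYYY.MM.dd}"
        }
      } else if [fields][type] == "WebApiDebugLog_report" {
        elasticsearch {
          hosts => ["6.6.6.6:9200"]
          index => "logstash-WebApiDebugLog_report-%{+YYYY.MM.dd}"
        }
      }
    }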
    

    Optimizing overall ELK performance

    Performance analysis

    (1) Server hardware (Linux): 1 CPU, 4 GB RAM

    Assume each log entry is 250 bytes.

    (2) Analysis

    ① logstash on Linux with 1 CPU and 4 GB RAM:

    500 events per second
    
    660 events per second with the ruby filter removed
    
    1000 events per second with grok removed as well
    

    ② filebeat on Linux with 1 CPU and 4 GB RAM:

    2500-3500 events per second
    
    Per machine per day: 24h * 60min * 60sec * 3000 events * 250 bytes = 64,800,000,000 bytes, roughly 64 GB
    

    ③ The bottleneck is logstash moving data from redis into ES: one logstash instance handles about 6,000 events per second; two instances handle about 10,000 events per second, with the CPUs essentially saturated;

    ④ Logstash startup consumes a lot of system resources, because its startup script probes for java, ruby, and various environment variables; once running, resource usage settles back to normal.

    Choosing a log collector: logstash vs. filebeat

    (1) There is no rule that mandates either filebeat or logstash; as shippers they do the same job. The differences:

    ① logstash bundles many plugins, such as grok and ruby, so it is heavyweight compared with beats;

    ② logstash consumes more resources once running; if hardware is plentiful, the difference between the two can be ignored;

    ③ logstash runs on the JVM and is cross-platform; beats are written in Go and do not support AIX;

    ④ on the 64-bit AIX platform you must install a 32-bit JDK (JRE) 1.7; the 64-bit one is not supported;

    ⑤ filebeat can write straight to ES, but if some logstash instances also write straight to ES, the differing index types make searching complicated; it is best to feed ES from a single, unified source.

    (2) Summary

      Each has its strengths. My recommendation: run filebeat on every server whose logs need collecting, because it is lightweight; ship everything to a central logstash, which does the processing; and let logstash alone write to ES. A Kafka message queue can also be inserted in the middle as a buffer.

    Logstash tuning options

    (1) Parameters you can tune to match your hardware

    ① Number of pipeline worker threads; the official recommendation is the number of CPU cores

    Default ---> pipeline.workers: 2

    Tuned   ---> pipeline.workers: number of CPU cores (or a small multiple of it)

    ② Number of threads actually used for output

    Default ---> pipeline.output.workers: 1

    Tuned   ---> pipeline.output.workers: no more than pipeline.workers

    ③ Number of events sent per batch

    Default ---> pipeline.batch.size: 125

    Tuned   ---> pipeline.batch.size: 1000

    ④ Batch delay

    Default ---> pipeline.batch.delay: 5

    Tuned   ---> pipeline.batch.delay: 10
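
    Taken together, a hedged logstash.yml sketch for a hypothetical 8-core machine (the worker and batch values can equally be passed on the command line, e.g. -w 8 -b 1000):

    # logstash.yml -- illustrative values for an 8-core box
    pipeline.workers: 8            # one worker thread per CPU core
    pipeline.output.workers: 4     # must not exceed pipeline.workers
    pipeline.batch.size: 1000      # events per batch handed to the outputs
    pipeline.batch.delay: 10       # ms to wait before flushing an undersized batch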

    (2) Summary

      Set the pipeline worker count with the -w flag, or directly in logstash.yml. This raises the thread count for filters and outputs; if needed, setting it to a few times the number of CPU cores is safe, since the threads sit idle on I/O much of the time.

      By default each output runs on a single pipeline worker thread. You can raise this with the workers setting inside an output block, but never above the pipeline worker count.

      You can also tune the batch size, e.g. keep the ES output's batch size in line with pipeline.batch.size.

      Once multiline is set in the filter stage, the pipeline worker count automatically drops to 1. If you use filebeat, do the multiline merging in the beat; if logstash is the shipper, do it in the input stage; never set multiline in the filter stage.

    (3) Logstash's JVM configuration file

      Logstash is a Java program and runs in a JVM; jvm.options configures the JVM, e.g. minimum and maximum heap and the garbage collector. The heap must be neither too large nor too small: too large drags down the operating system, too small and logstash cannot start. The defaults:

    -Xms256m  # minimum heap
    -Xmx1g    # maximum heap
    

    Considerations when introducing Redis

    (1) filebeat can feed logstash (the indexer) directly, but logstash has no storage: a restart means stopping every connected beat first and then logstash, which is an operational headache, and a logstash crash loses data. With Redis as a data buffer pool, when logstash stops abnormally you can see the data still queued in Redis from any Redis client;

    (2) Redis can buffer with a list (up to 4,294,967,295 entries) or with the pub/sub model;

    (3) Tuning redis as the ELK buffer queue:

    ① bind 0.0.0.0 # listen on all interfaces, not only localhost

    ② requirepass ilinux.io # set a password so it runs safely

    ③ It is only a queue, so persistence is unnecessary; switch off both snapshots (RDB files) and the append-only file (AOF) for better performance

      save ""          # disable RDB snapshots
      appendonly no    # disable AOF
    

    ④ Disable the memory eviction policy and give it the maximum memory space

      maxmemory 0 # 0 means no limit on Redis memory usage
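
    On the consuming side, a hedged sketch of a logstash redis input reading from such a buffer (host, password, and key are illustrative):

    input {
      redis {
        host      => "192.168.0.250"   # illustrative Redis host
        password  => "ilinux.io"
        data_type => "list"            # consume from a Redis list
        key       => "filebeat"        # illustrative list key the shippers push to
      }
    }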
    

    Elasticsearch node tuning

    (1) Server hardware and OS parameters

    (a) /etc/sysctl.conf settings

    vim /etc/sysctl.conf
    
    vm.swappiness = 1                     # ES recommends 1: shrink swap usage to a minimum and force maximum use of RAM; do not set 0, which can easily end in OOM kills
    net.core.somaxconn = 65535            # maximum listen-queue length per port
    vm.max_map_count = 262144             # cap on the number of VMAs (virtual memory areas, i.e. contiguous virtual address ranges) a process may own; exceeding it means OOM
    fs.file-max = 518144                  # maximum number of file handles the kernel will allocate
    
    [root@elasticsearch]# sysctl -p       # apply the settings
    

    (b) limits.conf settings

    vim /etc/security/limits.conf
    elasticsearch    soft    nofile          65535
    elasticsearch    hard    nofile          65535
    elasticsearch    soft    memlock         unlimited
    elasticsearch    hard    memlock         unlimited
    

    (c) To make the above limits permanent, two more files must be edited

    vim /etc/pam.d/common-session-noninteractive
    vim /etc/pam.d/common-session
    
    Add this line:
    session required pam_limits.so
    A reboot may be needed before it takes effect
    

    (2) Elasticsearch's JVM configuration file

    -Xms2g
    -Xmx2g
    

    ① Set the minimum heap size (Xms) and the maximum heap size (Xmx) equal to each other.

    ② The more heap Elasticsearch has, the more memory is available for caching. Beware, though: an oversized heap can cause long garbage-collection pauses.

    ③ Set Xmx to no more than 50% of physical RAM, so enough memory is left for the kernel's filesystem cache.

    ④ Do not set Xmx above the threshold at which the JVM stops using compressed object pointers; the exact cutoff varies, but it is close to 32 GB. Never exceed 32 GB: if a machine has more memory, run several smaller instances rather than giving one instance a huge heap.

    (3) Elasticsearch configuration file tuning

    ① vim elasticsearch.yml

    bootstrap.memory_lock: true  # lock the process's memory, never swap
    # cache and thread-pool tuning below
    bootstrap.mlockall: true
    transport.tcp.compress: true
    indices.fielddata.cache.size: 40%
    indices.cache.filter.size: 30%
    indices.cache.filter.terms.size: 1024mb
    threadpool:
        search:
            type: cached
            size: 100
            queue_size: 2000
    

    ② Set the environment variable

    vim /etc/profile.d/elasticsearch.sh
    export ES_HEAP_SIZE=2g   # heap size: at most half of physical RAM, and under 32 GB

    (4) Cluster tuning (I do not run a cluster myself)

    ① ES is distributed storage; nodes configured with the same cluster.name discover one another and join the cluster automatically;

    ② the cluster elects a master automatically and re-elects one when the master goes down;

    ③ to guard against split-brain, keep the number of nodes in the cluster odd;

    ④ for controlled node management, disable broadcast discovery (discovery.zen.ping.multicast.enabled: false) and configure a unicast node list (discovery.zen.ping.unicast.hosts: ["ip1", "ip2", "ip3"]), as in the sketch below.
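
    A hedged elasticsearch.yml sketch for a hypothetical three-node cluster; minimum_master_nodes follows the usual quorum rule (master-eligible nodes / 2 + 1) to guard against split-brain:

    cluster.name: my-elk-cluster                      # illustrative; must be identical on every node
    discovery.zen.ping.multicast.enabled: false       # disable broadcast discovery
    discovery.zen.ping.unicast.hosts: ["ip1", "ip2", "ip3"]
    discovery.zen.minimum_master_nodes: 2             # quorum for 3 master-eligible nodes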

    Performance checks

    (1) Check input and output performance

    Logstash runs only as fast as the services it connects to; it can be no faster than its inputs and outputs.

    (2) Check system metrics

    ① CPU

    Watch whether the CPU is overloaded. On Linux/Unix, top -H shows per-thread usage along with the totals.

    If CPU usage is excessive, jump ahead to the JVM-heap section and review the Logstash worker settings.

    ② Memory

    Remember that Logstash runs inside a Java VM, so it will only ever use the maximum memory you allocate to it.

    Check whether other applications are consuming large amounts of memory; once combined usage exceeds physical memory, Logstash is pushed into disk swap.

    ③ I/O: watch disk I/O for saturation

    The disk can saturate when a Logstash plugin (the file output, for example) writes heavily,

    and also when Logstash produces a flood of error logs.

    On Linux, monitor disk I/O with iostat, dstat, or similar tools.

    ④ Monitor network I/O

    Inputs and outputs that do heavy network traffic can saturate the network.

    On Linux, monitor it with dstat or iftop.
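
    For reference, typical invocations of the tools mentioned above (iostat ships in the sysstat package; the interface name is illustrative):

    top -H -p $(pgrep -f logstash)   # per-thread CPU usage for the logstash process
    iostat -x 2                      # extended disk statistics every 2 seconds
    dstat 2                          # combined CPU/disk/network view
    iftop -i eth0                    # per-connection bandwidth on interface eth0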

    (3) Check the JVM heap

    A heap that is too small drives CPU usage up, because the JVM spends its time garbage-collecting.

    A quick sanity check: double the heap size and see whether performance improves. Never set the heap larger than physical memory; leave at least 1 GB for the operating system and other processes.

    For more precise measurement of the JVM heap, use tools such as the jmap command line or VisualVM.
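
    For example, a quick heap summary with jmap (JDK 8 syntax; JDK 9+ moved this under jhsdb):

    jmap -heap $(pgrep -f logstash)   # print heap configuration and current usage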
