Flume OutOfMemoryError
Shortly after starting Flume, it throws the following exception:
2016-08-24 17:35:58,927 (Flume Thrift IPC Thread 8) [ERROR - org.apache.flume.channel.ChannelProcessor.processEventBatch(ChannelProcessor.java:196)] Error while writing to required channel: org.apache.flume.channel.MemoryChannel{name: memoryChannel}
2016-08-24 17:35:59,332 (SinkRunner-PollingRunner-DefaultSinkProcessor) [ERROR - kafka.utils.Logging$class.error(Logging.scala:97)] Failed to collate messages by topic, partition due to: GC overhead limit exceeded
2016-08-24 17:35:59,332 (Flume Thrift IPC Thread 8) [ERROR - org.apache.thrift.ProcessFunction.process(ProcessFunction.java:41)] Internal error processing appendBatch
java.lang.OutOfMemoryError: GC overhead limit exceeded
at java.util.Arrays.copyOf(Arrays.java:3332)
at java.lang.AbstractStringBuilder.expandCapacity(AbstractStringBuilder.java:137)
at java.lang.AbstractStringBuilder.ensureCapacityInternal(AbstractStringBuilder.java:121)
at java.lang.AbstractStringBuilder.append(AbstractStringBuilder.java:421)
at java.lang.StringBuffer.append(StringBuffer.java:272)
at java.io.StringWriter.write(StringWriter.java:112)
at java.io.PrintWriter.write(PrintWriter.java:456)
at java.io.PrintWriter.write(PrintWriter.java:473)
at java.io.PrintWriter.print(PrintWriter.java:603)
at java.io.PrintWriter.println(PrintWriter.java:756)
at java.lang.Throwable$WrappedPrintWriter.println(Throwable.java:764)
at java.lang.Throwable.printStackTrace(Throwable.java:658)
at java.lang.Throwable.printStackTrace(Throwable.java:721)
at org.apache.log4j.DefaultThrowableRenderer.render(DefaultThrowableRenderer.java:60)
at org.apache.log4j.spi.ThrowableInformation.getThrowableStrRep(ThrowableInformation.java:87)
at org.apache.log4j.spi.LoggingEvent.getThrowableStrRep(LoggingEvent.java:413)
at org.apache.log4j.WriterAppender.subAppend(WriterAppender.java:313)
at org.apache.log4j.WriterAppender.append(WriterAppender.java:162)
at org.apache.log4j.AppenderSkeleton.doAppend(AppenderSkeleton.java:251)
at org.apache.log4j.helpers.AppenderAttachableImpl.appendLoopOnAppenders(AppenderAttachableImpl.java:66)
at org.apache.log4j.Category.callAppenders(Category.java:206)
at org.apache.log4j.Category.forcedLog(Category.java:391)
at org.apache.log4j.Category.log(Category.java:856)
at org.slf4j.impl.Log4jLoggerAdapter.error(Log4jLoggerAdapter.java:576)
at org.apache.flume.channel.ChannelProcessor.processEventBatch(ChannelProcessor.java:196)
at org.apache.flume.source.ThriftSource$ThriftSourceHandler.appendBatch(ThriftSource.java:457)
at org.apache.flume.thrift.ThriftSourceProtocol$Processor$appendBatch.getResult(ThriftSourceProtocol.java:259)
at org.apache.flume.thrift.ThriftSourceProtocol$Processor$appendBatch.getResult(ThriftSourceProtocol.java:247)
at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39)
at org.apache.thrift.server.AbstractNonblockingServer$FrameBuffer.invoke(AbstractNonblockingServer.java:478)
at org.apache.thrift.server.Invocation.run(Invocation.java:18)
The cause is clear: the volume of data being collected is too large, so the Flume JVM runs out of heap memory.
You can also locate the Flume process with ps aux | grep flume and see how much memory it is using.
Solutions:
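If a JDK is installed on the box, you can also confirm which heap options the Flume JVM was actually started with. A small sketch, assuming the standard JDK tools are on the PATH:

ps aux | grep [f]lume   # PID, resident memory (RSS) and command line of the Flume agent
jps -v                  # look for the "Application" line (Flume's main class) and its -Xmx setting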
1. vim bin/flume-ng
Find JAVA_OPTS="-Xmx20m" in the script. The default maximum heap is only 20 MB; increasing this value is enough (see the example after this list).
2. Alternatively, go to Flume's conf directory and create flume-env.sh from the template:
cp flume-env.sh.template flume-env.sh
vim flume-env.sh
Then simply uncomment the following line (adjust the sizes as needed; a worked example follows this list):
# export JAVA_OPTS="-Xms100m -Xmx2000m -Dcom.sun.management.jmxremote"
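For reference, a minimal sketch of conf/flume-env.sh after the change; the same JAVA_OPTS value also works if you edit bin/flume-ng directly as in option 1. The heap sizes (512 MB initial, 2 GB maximum), the agent name a1 and the config file path in the restart command are illustrative assumptions; choose values that match your data volume and your own agent configuration:

# conf/flume-env.sh -- JVM options picked up by bin/flume-ng (sizes are illustrative)
export JAVA_OPTS="-Xms512m -Xmx2048m -Dcom.sun.management.jmxremote"

# Restart the agent so the new heap settings take effect
# (agent name "a1" and config file path are assumptions)
bin/flume-ng agent --conf conf --conf-file conf/flume-conf.properties --name a1

After the restart, the jps -v check above should show the new -Xmx value.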