• How to fix Druid writing oversized data to ZooKeeper


    The error message is as follows:

    org.apache.zookeeper.ClientCnxn - Session 0x102c87b7f880003 for server cweb244/10.17.2.241:2181, unexpected error, closing socket connection and attempting reconnect
    java.io.IOException: Packet len 6429452 is out of range!

    This means the packet length exceeds the limit allowed by jute.maxbuffer.

    The relevant ZooKeeper client source (ClientCnxnSocket):

    // packetLen is the client-side packet limit, read from the jute.maxbuffer
    // property; CLIENT_MAX_PACKET_LENGTH_DEFAULT is 4096 * 1024 bytes (4 MB).
    private int packetLen = ZKClientConfig.CLIENT_MAX_PACKET_LENGTH_DEFAULT;
    
    protected void initProperties() throws IOException {
        try {
            packetLen = clientConfig.getInt(
                ZKConfig.JUTE_MAXBUFFER,
                ZKClientConfig.CLIENT_MAX_PACKET_LENGTH_DEFAULT);
            LOG.info("{} value is {} Bytes", ZKConfig.JUTE_MAXBUFFER, packetLen);
        } catch (NumberFormatException e) {
            String msg = MessageFormat.format(
                "Configured value {0} for property {1} can not be parsed to int",
                clientConfig.getProperty(ZKConfig.JUTE_MAXBUFFER),
                ZKConfig.JUTE_MAXBUFFER);
            LOG.error(msg);
            throw new IOException(msg);
        }
    }
    
    // Every incoming packet's length is checked against packetLen before the
    // receive buffer is allocated; anything at or above the limit closes the connection.
    void readLength() throws IOException {
        int len = incomingBuffer.getInt();
        if (len < 0 || len >= packetLen) {
            throw new IOException("Packet len " + len + " is out of range!");
        }
        incomingBuffer = ByteBuffer.allocate(len);
    }

    ZooKeeper's default maximum is 4 MB, so the error is thrown whenever Druid stores data larger than that default cap.
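
    To see why this particular session failed, compare the packet length from the log with the default limit. Below is a minimal standalone sketch (not from the ZooKeeper code base; the values are taken from the error message above) that repeats the same bounds check as readLength():

    public class JuteMaxBufferCheck {
        public static void main(String[] args) {
            int packetLen = 4096 * 1024;   // default client-side jute.maxbuffer: 4194304 bytes (4 MB)
            int incomingLen = 6429452;     // packet length reported in the error above (~6.1 MB)

            // Same condition as readLength(): anything at or above packetLen is rejected.
            if (incomingLen < 0 || incomingLen >= packetLen) {
                System.out.println("Packet len " + incomingLen + " is out of range!");
            }

            // With -Djute.maxbuffer=10485760 (10 MB) the same packet passes the check.
        }
    }

    Since 6429452 > 4194304 the check fails and the connection is dropped; a 10 MB limit leaves comfortable headroom above the observed ~6.1 MB packet.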

    Solution

      In ZooKeeper's conf directory, create a java.env file and set -Djute.maxbuffer to 10 MB (10485760 bytes):

      

    #!/bin/sh
    
    export JAVA_HOME=/...../
    
    # heap size MUST be modified according to cluster environment
    
    # raise jute.maxbuffer to 10 MB (10485760 bytes)
    export JVMFLAGS="-Xms2048m -Xmx4096m $JVMFLAGS -Djute.maxbuffer=10485760 "

    Apply the same change on every ZooKeeper node, then restart the cluster.
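
    To confirm the new limit end to end, here is a hypothetical verification sketch (the connect string, znode path, and payload size are assumptions, not part of the original post). It writes a ~6 MB znode and reads it back, which only succeeds once the servers have been restarted with the larger jute.maxbuffer; because the client code quoted above reads the same property, run this client JVM with -Djute.maxbuffer=10485760 as well:

    import java.util.Arrays;
    import java.util.concurrent.CountDownLatch;

    import org.apache.zookeeper.CreateMode;
    import org.apache.zookeeper.Watcher;
    import org.apache.zookeeper.ZooDefs;
    import org.apache.zookeeper.ZooKeeper;
    import org.apache.zookeeper.data.Stat;

    public class JuteMaxBufferVerify {
        public static void main(String[] args) throws Exception {
            // Wait until the session is actually established before issuing requests.
            CountDownLatch connected = new CountDownLatch(1);
            ZooKeeper zk = new ZooKeeper("cweb244:2181", 30000, event -> {   // hypothetical connect string
                if (event.getState() == Watcher.Event.KeeperState.SyncConnected) {
                    connected.countDown();
                }
            });
            connected.await();

            byte[] payload = new byte[6 * 1024 * 1024];   // ~6 MB, larger than the old limit
            Arrays.fill(payload, (byte) 'x');

            String path = "/jute-maxbuffer-test";          // hypothetical test znode
            if (zk.exists(path, false) == null) {
                zk.create(path, payload, ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
            }

            // With the old limit this request would fail with "Packet len ... is out of range!".
            Stat stat = new Stat();
            byte[] read = zk.getData(path, false, stat);
            System.out.println("read " + read.length + " bytes from " + path);

            zk.delete(path, stat.getVersion());             // clean up the test znode
            zk.close();
        }
    }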

    It is strongly recommended to deploy a dedicated ZooKeeper cluster for Druid.

  • Original post: https://www.cnblogs.com/successok/p/14203623.html