• Adding a thread pool to Netty for asynchronous processing


      Tomcat's asynchronous threading model can roughly be understood like this: the acceptor accepts new connections, initializes each connection and hands it to the poller for I/O, and the poller in turn hands the business logic to the exec thread pool for asynchronous processing.

      In Netty, if I/O and the handlers share one thread, any time-consuming handler logic (a database query, a call to another service, and so on) will severely hurt overall performance. In that case the time-consuming work should be handled asynchronously.

    There are two ways to add a thread pool in Netty:

    The first is to add a thread pool inside the handler.

    The second is to specify a thread pool on the Context, i.e. when adding the handler to the pipeline.

    1. Adding a thread pool in the handler

    The core code is as follows:

    1. Server-side code

    EchoServer

    package cn.xm.netty.example.echo;
    
    import io.netty.bootstrap.ServerBootstrap;
    import io.netty.channel.*;
    import io.netty.channel.nio.NioEventLoopGroup;
    import io.netty.channel.socket.SocketChannel;
    import io.netty.channel.socket.nio.NioServerSocketChannel;
    import io.netty.handler.logging.LogLevel;
    import io.netty.handler.logging.LoggingHandler;
    import io.netty.handler.ssl.SslContext;
    import io.netty.handler.ssl.SslContextBuilder;
    import io.netty.handler.ssl.util.SelfSignedCertificate;
    
    public final class EchoServer {
    
        static final boolean SSL = System.getProperty("ssl") != null;
        static final int PORT = Integer.parseInt(System.getProperty("port", "8007"));
    
        public static void main(String[] args) throws Exception {
            final SslContext sslCtx;
            if (SSL) {
                SelfSignedCertificate ssc = new SelfSignedCertificate();
                sslCtx = SslContextBuilder.forServer(ssc.certificate(), ssc.privateKey()).build();
            } else {
                sslCtx = null;
            }
    
            EventLoopGroup bossGroup = new NioEventLoopGroup(1);
            EventLoopGroup workerGroup = new NioEventLoopGroup();
            final EchoServerHandler serverHandler = new EchoServerHandler();
            try {
                ServerBootstrap b = new ServerBootstrap();
                b.group(bossGroup, workerGroup)
                        .channel(NioServerSocketChannel.class)
                        .option(ChannelOption.SO_BACKLOG, 100)
                        .handler(new LoggingHandler(LogLevel.INFO))
                        .childHandler(new ChannelInitializer<SocketChannel>() {
                            @Override
                            public void initChannel(SocketChannel ch) throws Exception {
                                ChannelPipeline p = ch.pipeline();
                                if (sslCtx != null) {
                                    p.addLast(sslCtx.newHandler(ch.alloc()));
                                }
                                p.addLast(new EchoServerHandler2());
                                p.addLast(serverHandler);
                            }
                        });
    
                ChannelFuture f = b.bind(PORT).sync();
    
                f.channel().closeFuture().sync();
            } finally {
                bossGroup.shutdownGracefully();
                workerGroup.shutdownGracefully();
            }
        }
    }
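
    Note (my addition, not in the original post): a single serverHandler instance is reused above and added to the pipeline of every accepted connection. Netty only allows adding one handler instance to multiple pipelines if its class is marked @Sharable; otherwise the second connection fails during initChannel. A minimal sketch, assuming the handler keeps no per-connection state:

    import io.netty.channel.ChannelHandler;
    import io.netty.channel.ChannelInboundHandlerAdapter;
    
    // Marking the handler @Sharable tells Netty the same instance may safely be added to
    // more than one pipeline; this one only uses a static executor group, so it qualifies.
    @ChannelHandler.Sharable
    public class EchoServerHandler extends ChannelInboundHandlerAdapter {
        // channelRead / exceptionCaught exactly as in the listing below
    }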

    EchoServerHandler

    package cn.xm.netty.example.echo;
    
    import io.netty.buffer.ByteBuf;
    import io.netty.buffer.Unpooled;
    import io.netty.channel.ChannelHandlerContext;
    import io.netty.channel.ChannelInboundHandlerAdapter;
    import io.netty.channel.DefaultEventLoopGroup;
    import io.netty.util.CharsetUtil;
    
    public class EchoServerHandler extends ChannelInboundHandlerAdapter {
    
        private static final DefaultEventLoopGroup eventExecutors = new DefaultEventLoopGroup(16);
    
        @Override
        public void channelRead(ChannelHandlerContext ctx, Object msg) {
            System.out.println("cn.xm.netty.example.echo.EchoServerHandler.channelRead thread: " + Thread.currentThread().getName());
            // Cast to Netty's ByteBuf (essentially a wrapper around a ByteBuffer)
            ByteBuf byteBuf = (ByteBuf) msg;
            System.out.println("客户端发送的消息是:" + byteBuf.toString(CharsetUtil.UTF_8));
            System.out.println("客户端地址:" + ctx.channel().remoteAddress());
            ctx.writeAndFlush(Unpooled.copiedBuffer("hello, 客户端!0!", CharsetUtil.UTF_8));
    
    //        ctx.channel().eventLoop().execute(new Runnable() {
            eventExecutors.execute(new Runnable() {
                @Override
                public void run() {
                    // Offload this particularly time-consuming task for asynchronous execution
                    // (here the task is submitted to the separate DefaultEventLoopGroup instead of the I/O thread)
                    System.out.println("java.lang.Runnable.run thread: " + Thread.currentThread().getName());
                    try {
                        Thread.sleep(10 * 1000);
                    } catch (InterruptedException e) {
                        e.printStackTrace();
                    }
                    ctx.writeAndFlush(Unpooled.copiedBuffer("hello, 客户端!1!", CharsetUtil.UTF_8));
                }
            });
        }
    
        @Override
        public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
            // Close the connection when an exception is raised.
            cause.printStackTrace();
            ctx.close();
        }
    }

    EchoServerHandler2

    package cn.xm.netty.example.echo;
    
    import io.netty.channel.ChannelHandlerContext;
    import io.netty.channel.ChannelOutboundHandlerAdapter;
    import io.netty.channel.ChannelPromise;
    import io.netty.channel.DefaultEventLoopGroup;
    
    public class EchoServerHandler2 extends ChannelOutboundHandlerAdapter {
    
        private static final DefaultEventLoopGroup eventExecutors = new DefaultEventLoopGroup(16);
    
        @Override
        public void write(ChannelHandlerContext ctx, Object msg, ChannelPromise promise) throws Exception {
            super.write(ctx, msg, promise);
            System.out.println("cn.xm.netty.example.echo.EchoServerHandler2.write called, threadName: " + Thread.currentThread().getName());
        }
    
        @Override
        public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
            // Close the connection when an exception is raised.
            cause.printStackTrace();
            ctx.close();
        }
    }

    2. Client code

    EchoClient

    package cn.xm.netty.example.echo;
    
    import io.netty.bootstrap.Bootstrap;
    import io.netty.channel.*;
    import io.netty.channel.nio.NioEventLoopGroup;
    import io.netty.channel.socket.SocketChannel;
    import io.netty.channel.socket.nio.NioSocketChannel;
    import io.netty.handler.ssl.SslContext;
    import io.netty.handler.ssl.SslContextBuilder;
    import io.netty.handler.ssl.util.InsecureTrustManagerFactory;
    
    public final class EchoClient {
    
        static final boolean SSL = System.getProperty("ssl") != null;
        static final String HOST = System.getProperty("host", "127.0.0.1");
        static final int PORT = Integer.parseInt(System.getProperty("port", "8007"));
    
        public static void main(String[] args) throws Exception {
            final SslContext sslCtx;
            if (SSL) {
                sslCtx = SslContextBuilder.forClient()
                        .trustManager(InsecureTrustManagerFactory.INSTANCE).build();
            } else {
                sslCtx = null;
            }
    
            // Configure the client.
            EventLoopGroup group = new NioEventLoopGroup();
            try {
                Bootstrap b = new Bootstrap();
                b.group(group)
                        .channel(NioSocketChannel.class)
                        .option(ChannelOption.TCP_NODELAY, true)
                        .handler(new ChannelInitializer<SocketChannel>() {
                            @Override
                            public void initChannel(SocketChannel ch) throws Exception {
                                ChannelPipeline p = ch.pipeline();
                                if (sslCtx != null) {
                                    p.addLast(sslCtx.newHandler(ch.alloc(), HOST, PORT));
                                }
                                p.addLast(new EchoClientHandler());
                            }
                        });
    
                // Start the client.
                ChannelFuture f = b.connect(HOST, PORT).sync();
    
                // Wait until the connection is closed.
                f.channel().closeFuture().sync();
            } finally {
                // Shut down the event loop to terminate all threads.
                group.shutdownGracefully();
            }
        }
    }

    EchoClientHandler

    package cn.xm.netty.example.echo;
    
    import io.netty.buffer.ByteBuf;
    import io.netty.buffer.Unpooled;
    import io.netty.channel.ChannelHandlerContext;
    import io.netty.channel.ChannelInboundHandlerAdapter;
    import io.netty.util.CharsetUtil;
    
    
    public class EchoClientHandler extends ChannelInboundHandlerAdapter {
    
        @Override
        public void channelActive(ChannelHandlerContext ctx) {
            System.out.println("ClientHandler ctx: " + ctx);
            ctx.writeAndFlush(Unpooled.copiedBuffer("hello, 服务器!", CharsetUtil.UTF_8));
        }
    
        @Override
        public void channelRead(ChannelHandlerContext ctx, Object msg) {
            // Cast to Netty's ByteBuf (essentially a wrapper around a ByteBuffer)
            ByteBuf byteBuf = (ByteBuf) msg;
            System.out.println("服务器会送的消息是:" + byteBuf.toString(CharsetUtil.UTF_8));
            System.out.println("服务器地址:" + ctx.channel().remoteAddress());
        }
    
        @Override
        public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
            cause.printStackTrace();
            ctx.close();
        }
    }

    3. Test

    Start the server first, then the client, and the server console shows:

    cn.xm.netty.example.echo.EchoServerHandler.channelRead thread: nioEventLoopGroup-3-1
    客户端发送的消息是:hello, 服务器!
    客户端地址:/127.0.0.1:54247
    cn.xm.netty.example.echo.EchoServerHandler2.write called, threadName: nioEventLoopGroup-3-1
    java.lang.Runnable.run thread: defaultEventLoopGroup-4-1
    cn.xm.netty.example.echo.EchoServerHandler2.write called, threadName: nioEventLoopGroup-3-1

    4. Analysis

    The logic above can be seen as:

    (1) When the I/O thread polls a socket event it starts processing it; when execution reaches the time-consuming part of EchoServerHandler, that work is handed to the thread pool.

    (2) When the time-consuming task finishes and calls ctx.writeAndFlush, the write is handed back to the I/O thread, as shown below (in other words, the final write always ends up on the I/O thread):

    1) io.netty.channel.AbstractChannelHandlerContext#write(java.lang.Object, boolean, io.netty.channel.ChannelPromise)

        private void write(Object msg, boolean flush, ChannelPromise promise) {
            AbstractChannelHandlerContext next = findContextOutbound();
            final Object m = pipeline.touch(msg, next);
            EventExecutor executor = next.executor();
            if (executor.inEventLoop()) {
                if (flush) {
                    next.invokeWriteAndFlush(m, promise);
                } else {
                    next.invokeWrite(m, promise);
                }
            } else {
                AbstractWriteTask task;
                if (flush) {
                    task = WriteAndFlushTask.newInstance(next, m, promise);
                }  else {
                    task = WriteTask.newInstance(next, m, promise);
                }
                safeExecute(executor, task, promise, m);
            }
        }

    Here the else branch is taken, because the current thread is not the I/O thread. The else branch creates a write task and then calls io.netty.channel.AbstractChannelHandlerContext#safeExecute:

        private static void safeExecute(EventExecutor executor, Runnable runnable, ChannelPromise promise, Object msg) {
            try {
                executor.execute(runnable);
            } catch (Throwable cause) {
                try {
                    promise.setFailure(cause);
                } finally {
                    if (msg != null) {
                        ReferenceCountUtil.release(msg);
                    }
                }
            }
        }

      As you can see, executor.execute is called, which adds the task to the executor's own task queue. See io.netty.util.concurrent.SingleThreadEventExecutor#execute:

        public void execute(Runnable task) {
            if (task == null) {
                throw new NullPointerException("task");
            }
    
            boolean inEventLoop = inEventLoop();
            if (inEventLoop) {
                addTask(task);
            } else {
                startThread();
                addTask(task);
                if (isShutdown() && removeTask(task)) {
                    reject();
                }
            }
    
            if (!addTaskWakesUp && wakesUpForTask(task)) {
                wakeup(inEventLoop);
            }
        }

    Note: another way to add asynchrony in a handler is to create a task and put it into the channel's own task queue; this actually still occupies the I/O thread.

    package cn.xm.netty.example.echo;
    
    import io.netty.buffer.ByteBuf;
    import io.netty.buffer.Unpooled;
    import io.netty.channel.ChannelHandlerContext;
    import io.netty.channel.ChannelInboundHandlerAdapter;
    import io.netty.util.CharsetUtil;
    
    public class EchoServerHandler extends ChannelInboundHandlerAdapter {
    
        @Override
        public void channelRead(ChannelHandlerContext ctx, Object msg) {
            System.out.println("cn.xm.netty.example.echo.EchoServerHandler.channelRead thread: " + Thread.currentThread().getName());
            // Cast to Netty's ByteBuf (essentially a wrapper around a ByteBuffer)
            ByteBuf byteBuf = (ByteBuf) msg;
            System.out.println("客户端发送的消息是:" + byteBuf.toString(CharsetUtil.UTF_8));
            System.out.println("客户端地址:" + ctx.channel().remoteAddress());
            ctx.writeAndFlush(Unpooled.copiedBuffer("hello, 客户端!0!", CharsetUtil.UTF_8));
    
            ctx.channel().eventLoop().execute(new Runnable() {
                @Override
                public void run() {
                    // Offload this particularly time-consuming task for asynchronous execution
                    // (the task is submitted to the NioEventLoop's taskQueue)
                    System.out.println("java.lang.Runnable.run thread: " + Thread.currentThread().getName());
                    try {
                        Thread.sleep(10 * 1000);
                    } catch (InterruptedException e) {
                        e.printStackTrace();
                    }
                    ctx.writeAndFlush(Unpooled.copiedBuffer("hello, 客户端!1!", CharsetUtil.UTF_8));
                }
            });
        }
    
        @Override
        public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
            // Close the connection when an exception is raised.
            cause.printStackTrace();
            ctx.close();
        }
    }

    Test: as you can see, this kind of asynchrony still uses the current I/O thread. A non-blocking alternative is sketched after the log below.

    cn.xm.netty.example.echo.EchoServerHandler.channelRead thread: nioEventLoopGroup-3-1
    客户端发送的消息是:hello, 服务器!
    客户端地址:/127.0.0.1:53721
    cn.xm.netty.example.echo.EchoServerHandler2.write called, threadName: nioEventLoopGroup-3-1
    java.lang.Runnable.run thread: nioEventLoopGroup-3-1
    cn.xm.netty.example.echo.EchoServerHandler2.write called, threadName: nioEventLoopGroup-3-1
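
    A non-blocking alternative (a sketch, my addition): instead of sleeping for 10 seconds inside a task that occupies the I/O thread, the delayed reply could be scheduled on the event loop, which leaves the loop free in the meantime:

    import java.util.concurrent.TimeUnit;
    
    // Inside channelRead: schedule the follow-up write 10 seconds later without blocking the NioEventLoop.
    ctx.channel().eventLoop().schedule(() -> {
        ctx.writeAndFlush(Unpooled.copiedBuffer("hello, 客户端!1!", CharsetUtil.UTF_8));
    }, 10, TimeUnit.SECONDS);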

    2. Adding an asynchronous thread pool via the Context

    1. Code changes

    EchoServer changes

    package cn.xm.netty.example.echo;
    
    import io.netty.bootstrap.ServerBootstrap;
    import io.netty.channel.*;
    import io.netty.channel.nio.NioEventLoopGroup;
    import io.netty.channel.socket.SocketChannel;
    import io.netty.channel.socket.nio.NioServerSocketChannel;
    import io.netty.handler.logging.LogLevel;
    import io.netty.handler.logging.LoggingHandler;
    import io.netty.handler.ssl.SslContext;
    import io.netty.handler.ssl.SslContextBuilder;
    import io.netty.handler.ssl.util.SelfSignedCertificate;
    
    public final class EchoServer {
    
        static final boolean SSL = System.getProperty("ssl") != null;
        static final int PORT = Integer.parseInt(System.getProperty("port", "8007"));
    
        public static void main(String[] args) throws Exception {
            final SslContext sslCtx;
            if (SSL) {
                SelfSignedCertificate ssc = new SelfSignedCertificate();
                sslCtx = SslContextBuilder.forServer(ssc.certificate(), ssc.privateKey()).build();
            } else {
                sslCtx = null;
            }
    
            EventLoopGroup bossGroup = new NioEventLoopGroup(1);
            EventLoopGroup workerGroup = new NioEventLoopGroup();
            DefaultEventLoopGroup group = new DefaultEventLoopGroup(16);
            final EchoServerHandler serverHandler = new EchoServerHandler();
            try {
                ServerBootstrap b = new ServerBootstrap();
                b.group(bossGroup, workerGroup)
                        .channel(NioServerSocketChannel.class)
                        .option(ChannelOption.SO_BACKLOG, 100)
                        .handler(new LoggingHandler(LogLevel.INFO))
                        .childHandler(new ChannelInitializer<SocketChannel>() {
                            @Override
                            public void initChannel(SocketChannel ch) throws Exception {
                                ChannelPipeline p = ch.pipeline();
                                if (sslCtx != null) {
                                    p.addLast(sslCtx.newHandler(ch.alloc()));
                                }
                                p.addLast(group, new EchoServerHandler2());
                                p.addLast(group, serverHandler);
                            }
                        });
    
                ChannelFuture f = b.bind(PORT).sync();
    
                f.channel().closeFuture().sync();
            } finally {
                bossGroup.shutdownGracefully();
                workerGroup.shutdownGracefully();
                group.shutdownGracefully(); // also release the business thread pool
            }
        }
    }

      When calling p.addLast you can specify the thread group to use; if none is specified the I/O thread group is used by default, otherwise the specified group is used. This resembles Tomcat 8's threading model: accept request -> I/O -> processing, each in a different thread.
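
    A side note (my addition, not from the original post): besides DefaultEventLoopGroup, Netty also ships io.netty.util.concurrent.DefaultEventExecutorGroup, which is often used as the business executor passed to addLast because it only needs to run handler callbacks, not register channels. A sketch of the substitution in EchoServer:

    import io.netty.util.concurrent.DefaultEventExecutorGroup;
    import io.netty.util.concurrent.EventExecutorGroup;
    
    // Business executor group with 16 threads; handlers bound to it run off the I/O threads.
    EventExecutorGroup businessGroup = new DefaultEventExecutorGroup(16);
    
    // In initChannel, bind the handlers to the business group instead:
    p.addLast(businessGroup, new EchoServerHandler2());
    p.addLast(businessGroup, serverHandler);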

    EchoServerHandler changes: process normally, no need to spawn an asynchronous thread inside the handler

    package cn.xm.netty.example.echo;
    
    import io.netty.buffer.ByteBuf;
    import io.netty.buffer.Unpooled;
    import io.netty.channel.ChannelHandlerContext;
    import io.netty.channel.ChannelInboundHandlerAdapter;
    import io.netty.util.CharsetUtil;
    
    public class EchoServerHandler extends ChannelInboundHandlerAdapter {
    
        @Override
        public void channelRead(ChannelHandlerContext ctx, Object msg) {
            System.out.println("cn.xm.netty.example.echo.EchoServerHandler.channelRead thread: " + Thread.currentThread().getName());
            // Cast to Netty's ByteBuf (essentially a wrapper around a ByteBuffer)
            ByteBuf byteBuf = (ByteBuf) msg;
            System.out.println("客户端发送的消息是:" + byteBuf.toString(CharsetUtil.UTF_8));
            System.out.println("客户端地址:" + ctx.channel().remoteAddress());
            ctx.writeAndFlush(Unpooled.copiedBuffer("hello, 客户端!0!", CharsetUtil.UTF_8));
    
            // Simulate a particularly time-consuming operation; it runs inline here because the whole
            // handler already executes on the custom thread pool (the log label below is kept for comparison)
            System.out.println("java.lang.Runnable.run thread: " + Thread.currentThread().getName());
            try {
                Thread.sleep(10 * 1000);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
            ctx.writeAndFlush(Unpooled.copiedBuffer("hello, 客户端!1!", CharsetUtil.UTF_8));
        }
    
        @Override
        public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
            // Close the connection when an exception is raised.
            cause.printStackTrace();
            ctx.close();
        }
    }

    2. Test result:

    cn.xm.netty.example.echo.EchoServerHandler.channelRead thread: defaultEventLoopGroup-4-1
    客户端发送的消息是:hello, 服务器!
    客户端地址:/127.0.0.1:52966
    cn.xm.netty.example.echo.EchoServerHandler2.write called, threadName: defaultEventLoopGroup-4-1
    java.lang.Runnable.run thread: defaultEventLoopGroup-4-1
    cn.xm.netty.example.echo.EchoServerHandler2.write called, threadName: defaultEventLoopGroup-4-1

      As you can see, all the work is done on the thread group we created ourselves.

    3. Source walkthrough

    (1) From the earlier source reading we know a context wraps the handler, the pipeline, the executor and related information. When calling p.addLast we specified our own thread group; look at the source:

    io.netty.channel.DefaultChannelPipeline#addLast(io.netty.util.concurrent.EventExecutorGroup, io.netty.channel.ChannelHandler...)

        @Override
        public final ChannelPipeline addLast(EventExecutorGroup executor, ChannelHandler... handlers) {
            if (handlers == null) {
                throw new NullPointerException("handlers");
            }
    
            for (ChannelHandler h: handlers) {
                if (h == null) {
                    break;
                }
                addLast(executor, null, h);
            }
    
            return this;
        }
    
        @Override
        public final ChannelPipeline addLast(EventExecutorGroup group, String name, ChannelHandler handler) {
            final AbstractChannelHandlerContext newCtx;
            synchronized (this) {
                checkMultiplicity(handler);
    
                newCtx = newContext(group, filterName(name, handler), handler);
    
                addLast0(newCtx);
    
                // If the registered is false it means that the channel was not registered on an eventloop yet.
                // In this case we add the context to the pipeline and add a task that will call
                // ChannelHandler.handlerAdded(...) once the channel is registered.
                if (!registered) {
                    newCtx.setAddPending();
                    callHandlerCallbackLater(newCtx, true);
                    return this;
                }
    
                EventExecutor executor = newCtx.executor();
                if (!executor.inEventLoop()) {
                    newCtx.setAddPending();
                    executor.execute(new Runnable() {
                        @Override
                        public void run() {
                            callHandlerAdded0(newCtx);
                        }
                    });
                    return this;
                }
            }
            callHandlerAdded0(newCtx);
            return this;
        }

    io.netty.channel.DefaultChannelPipeline#newContext

        private AbstractChannelHandlerContext newContext(EventExecutorGroup group, String name, ChannelHandler handler) {
            return new DefaultChannelHandlerContext(this, childExecutor(group), name, handler);
        }

      As you can see, the custom thread group is used, and the chosen executor is recorded in a field of the DefaultChannelHandlerContext. A paraphrased sketch of childExecutor follows.
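
      For reference, here is a paraphrased sketch of DefaultChannelPipeline#childExecutor (reconstructed from memory, my addition; details differ across Netty versions). Unless ChannelOption.SINGLE_EVENTEXECUTOR_PER_GROUP is explicitly set to false, the pipeline picks one executor from the group and caches it, so every event of a given channel runs on the same thread; this is why the log above always shows defaultEventLoopGroup-4-1:

        // Paraphrased sketch, not the exact Netty source:
        private EventExecutor childExecutor(EventExecutorGroup group) {
            if (group == null) {
                return null;                      // no group given -> the channel's event loop is used
            }
            Boolean pin = channel.config().getOption(ChannelOption.SINGLE_EVENTEXECUTOR_PER_GROUP);
            if (pin != null && !pin) {
                return group.next();              // pinning disabled: any executor from the group
            }
            // Default: remember one executor per group for this pipeline so that all events
            // of this channel stay on the same thread and their order is preserved.
            EventExecutor childExecutor = childExecutors.get(group);
            if (childExecutor == null) {
                childExecutor = group.next();
                childExecutors.put(group, childExecutor);
            }
            return childExecutor;
        }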

    (2) If no thread group is specified, null is passed by default:

    io.netty.channel.DefaultChannelPipeline#addLast(io.netty.channel.ChannelHandler...)

        public final ChannelPipeline addLast(ChannelHandler... handlers) {
            return addLast(null, handlers);
        }

    (3) io.netty.channel.AbstractChannelHandlerContext#invokeChannelRead(io.netty.channel.AbstractChannelHandlerContext, java.lang.Object)

        static void invokeChannelRead(final AbstractChannelHandlerContext next, Object msg) {
            final Object m = next.pipeline.touch(ObjectUtil.checkNotNull(msg, "msg"), next);
            EventExecutor executor = next.executor();
            if (executor.inEventLoop()) {
                next.invokeChannelRead(m);
            } else {
                executor.execute(new Runnable() {
                    @Override
                    public void run() {
                        next.invokeChannelRead(m);
                    }
                });
            }
        }

    Looking at the next context:

    1) The method io.netty.channel.AbstractChannelHandlerContext#executor that obtains the executor is as follows:

        @Override
        public EventExecutor executor() {
            if (executor == null) {
                return channel().eventLoop();
            } else {
                return executor;
            }
        }

      As you can see, if an executor was specified that one is returned; otherwise the channel's executor, i.e. the I/O thread, is returned.

    2) Next, executor.inEventLoop() returns false, so the else branch runs and the channelRead event is dispatched asynchronously to the custom executor.

    Summary:

    The first approach, adding asynchrony inside the handler, is more flexible: only the time-consuming code block needs to be made asynchronous. Note that asynchrony still lengthens the overall response time, because the task has to be queued first.

    The second approach is the standard Netty way and effectively makes the whole handler asynchronous: everything, time-consuming or not, is queued and processed asynchronously. This is easier to reason about, but may be less flexible.
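
    One last note (my addition, not from the original post): with either approach the actual flush still completes asynchronously on the I/O thread, so if the business code needs to know when the response has really been written out, a listener can be attached to the ChannelFuture returned by writeAndFlush:

    import io.netty.channel.ChannelFutureListener;
    
    // The listener is invoked on the I/O thread once the flush has completed (or failed).
    ctx.writeAndFlush(Unpooled.copiedBuffer("hello, 客户端!1!", CharsetUtil.UTF_8))
            .addListener((ChannelFutureListener) future -> {
                if (!future.isSuccess()) {
                    future.cause().printStackTrace();
                }
            });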
