• Ice FAQ: Why do I not get concurrent invocations in a server?


    By default, the Ice server-side run time uses a thread pool to dispatch incoming requests. The number of requests that can execute concurrently in a server is limited to the number of threads in the pool. If more clients attempt to concurrently call operations than there are threads in the pool, the corresponding requests are not dispatched until a currently executing invocation completes and returns its thread to the pool; that thread then picks up the next pending request.

    By default, the server-side thread pool has a size of one, meaning that only one operation can execute in the server at a time. If you don’t see concurrent invocations in a server, it is likely that the server is running with a thread pool containing only a single thread, thereby serializing all incoming invocations.
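
    For example, consider a servant whose operation takes several seconds to complete. The sketch below is a minimal illustration in Java, assuming a hypothetical Slice interface Demo::Worker with a single void doWork() operation and the classic (pre-3.7) Ice for Java mapping; with the default thread pool of one, a second doWork() invocation is not dispatched until the first one returns.

        // Hypothetical servant for a Slice interface such as:
        //   module Demo { interface Worker { void doWork(); }; };
        // (classic Ice for Java mapping, where slice2java generates Demo._WorkerDisp)
        public class WorkerI extends Demo._WorkerDisp
        {
            @Override
            public void doWork(Ice.Current current)
            {
                // Simulate a long-running request. With the default server-side
                // thread pool of one, a second incoming doWork() call waits until
                // this sleep finishes and the dispatch thread returns to the pool.
                try
                {
                    Thread.sleep(5000);
                }
                catch(InterruptedException ex)
                {
                    Thread.currentThread().interrupt();
                }
            }
        }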

    The size of the server-side thread pool is controlled by a number of properties:

    • Ice.ThreadPool.Server.Size
    • Ice.ThreadPool.Server.SizeMax
    • Ice.ThreadPool.Server.SizeWarn
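
    As a sketch (the values below are illustrative, not recommendations), these properties are typically set in the server's Ice configuration file:

        # Illustrative entries in an Ice configuration file; the values are
        # examples only and should be tuned for the application.
        Ice.ThreadPool.Server.Size=5
        Ice.ThreadPool.Server.SizeMax=20
        Ice.ThreadPool.Server.SizeWarn=15

    Each property can also be supplied as a command-line option, for example --Ice.ThreadPool.Server.Size=5, which the run time picks up when the server's arguments are passed to communicator initialization.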

    The Ice.ThreadPool.Server.Size property controls the number of threads in the pool. When you create a communicator, the specified number of threads are created and added to the pool; the size of the pool never drops below this number of threads.
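
    The following sketch shows one way to set the property programmatically, before the communicator (and therefore the thread pool) is created; it assumes the classic (pre-3.7) Ice for Java API and omits the rest of the server setup.

        // Minimal sketch: configure the server-side thread pool before
        // initializing the communicator (classic Ice for Java API).
        public class Server
        {
            public static void main(String[] args)
            {
                Ice.InitializationData initData = new Ice.InitializationData();
                initData.properties = Ice.Util.createProperties(args);
                initData.properties.setProperty("Ice.ThreadPool.Server.Size", "5");
                Ice.Communicator communicator = Ice.Util.initialize(args, initData);
                // ... create an object adapter, add servants, activate it,
                //     then communicator.waitForShutdown() ...
                communicator.destroy();
            }
        }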

    The Ice.ThreadPool.Server.SizeMax property has a default value that equals the size of the thread pool. However, you can set this property to a value that is larger than Ice.ThreadPool.Server.Size. If you do, the server-side run time will allow the thread pool to grow in size up to this value if enough requests arrive concurrently. The run time also dynamically shrinks the thread pool back to its initial size as demand on threads is reduced (with some hysteresis to avoid continuously creating and destroying threads).

    Finally, the Ice.ThreadPool.Server.SizeWarn property sets a threshold. If the number of threads in use exceeds this value, the run time emits a warning via the communicator’s logger. The default value of this property is 80% of the value specified by Ice.ThreadPool.Server.SizeMax.
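
    For example, if Ice.ThreadPool.Server.SizeMax is set to 20 and Ice.ThreadPool.Server.SizeWarn is not set explicitly, the warning is emitted once the number of threads in use exceeds 16 (80% of 20).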
