• epoll


    Lately I keep running into the keyword epoll, and I had to admit I knew nothing about it beyond the fact that it beats select and that most concurrent servers nowadays are built on it. Some background first.

    Broadly speaking, I/O models can be divided into four kinds: synchronous blocking, synchronous non-blocking, asynchronous blocking, and asynchronous non-blocking I/O.

    • Synchronous blocking I/O: after issuing an I/O operation, the user process must wait for the operation to complete; only when the I/O has truly finished can the process continue running. Java's traditional I/O model belongs here.
    • Synchronous non-blocking I/O: after issuing an I/O operation, the user process can go off and do other things, but it must keep asking whether the I/O is ready. This constant polling burns CPU cycles for nothing. Java NIO currently falls into this category (see the sketch after this list).
    • Asynchronous blocking I/O: the application issues an I/O operation and does not wait for the kernel to complete it; the kernel notifies the application once the I/O is done. That is precisely the key difference between synchronous and asynchronous: synchronous must wait, or actively ask, whether the I/O has completed. So why call this mode blocking? Because it is realized through the select system call, and select itself blocks. The upside is that select can monitor many file descriptors at once, which raises the system's concurrency.
    • Asynchronous non-blocking I/O: the user process issues an I/O operation and returns immediately; once the I/O has truly completed, the application is notified and only has to process the data, with no actual read or write left to do, since the kernel has already performed the real I/O. At the time of writing, Java had no support for this model.
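
    To make the synchronous non-blocking case concrete, here is a minimal C sketch (set_nonblocking and poll_read are hypothetical names): the descriptor is switched to non-blocking mode with fcntl, and EAGAIN/EWOULDBLOCK is treated as "not ready yet", which is exactly the polling the bullet above describes.

        #include <errno.h>
        #include <fcntl.h>
        #include <unistd.h>

        /* Put fd into non-blocking mode; returns 0 on success, -1 on error. */
        static int set_nonblocking(int fd)
        {
            int flags = fcntl(fd, F_GETFL, 0);
            if (flags == -1)
                return -1;
            return fcntl(fd, F_SETFL, flags | O_NONBLOCK);
        }

        /* Synchronous non-blocking read: poll until data arrives.
           Returns bytes read, 0 on EOF, -1 on a real error. */
        static ssize_t poll_read(int fd, char *buf, size_t len)
        {
            for (;;) {
                ssize_t n = read(fd, buf, len);
                if (n >= 0)
                    return n;                     /* data arrived (or EOF) */
                if (errno != EAGAIN && errno != EWOULDBLOCK)
                    return -1;                    /* real error */
                /* not ready yet: the process could do other work here */
            }
        }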

    Two I/O multiplexing patterns: Reactor and Proactor

    Generally, an I/O multiplexing mechanism relies on an event demultiplexer. The demultiplexer separates the I/O events arriving from event sources and dispatches them to the corresponding read/write event handlers (Event Handler). Developers register the events they care about, together with their handlers (or callback functions), in advance; the demultiplexer then delivers each requested event to its handler. The two patterns built around such a demultiplexer are Reactor and Proactor: Reactor uses synchronous I/O, while Proactor uses asynchronous I/O.

    In a Reactor framework, the user-defined operation is invoked before the actual I/O takes place. For example, if your operation is to write data to a socket, it is called when that socket becomes ready to accept data. In a Proactor framework, the user-defined operation is invoked after the actual I/O. For example, if your operation is to display data read from a socket, it is called only once the read has completed.

    Proactor and Reactor are both design patterns in concurrent programming. As I see it, both exist to dispatch/demultiplex I/O events, where an "I/O event" means an operation such as read or write, and "dispatch/demultiplex" means notifying the upper layer of each individual event. Where the two differ is that Proactor is used with asynchronous I/O and Reactor with synchronous I/O.
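
    A minimal Reactor-style sketch in C may help (the handler table and the register_read/reactor_step names are hypothetical; select stands in as the event demultiplexer for brevity). Note that the callback runs before the actual I/O: the handler itself performs the read.

        #include <sys/select.h>

        #define MAX_FDS 64

        typedef void (*handler_fn)(int fd);         /* user-registered callback */
        static handler_fn read_handlers[MAX_FDS];   /* indexed by fd */

        /* Register interest in readability of fd, with its handler. */
        static void register_read(int fd, handler_fn h) { read_handlers[fd] = h; }

        /* One reactor iteration: demultiplex, then dispatch to handlers. */
        static void reactor_step(void)
        {
            fd_set rset;
            int maxfd = -1;
            FD_ZERO(&rset);
            for (int fd = 0; fd < MAX_FDS; fd++)
                if (read_handlers[fd]) {
                    FD_SET(fd, &rset);
                    if (fd > maxfd) maxfd = fd;
                }
            if (maxfd < 0)
                return;                             /* nothing registered */
            if (select(maxfd + 1, &rset, NULL, NULL, NULL) <= 0)
                return;
            for (int fd = 0; fd <= maxfd; fd++)
                if (read_handlers[fd] && FD_ISSET(fd, &rset))
                    read_handlers[fd](fd);          /* handler does the read itself */
        }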

    I/O models under Linux

    The PPC (Process Per Connection) / TPC (Thread Per Connection) models

    1. The two models share one idea: send every incoming connection off to mind its own business and stop bothering me. PPC forks a process for it; TPC spawns a thread. But "stop bothering me" has a price in time and space: once connections pile up, all those process/thread switches add up, so the maximum connection count these models can sustain stays low, generally a few hundred (a minimal PPC sketch follows).
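
    A minimal PPC sketch, assuming listen_fd is an already-listening socket and handle_conn() is a hypothetical per-connection routine:

        #include <sys/socket.h>
        #include <unistd.h>

        void handle_conn(int fd);                /* hypothetical: serve one client */

        void serve_ppc(int listen_fd)
        {
            for (;;) {
                int conn = accept(listen_fd, NULL, NULL);
                if (conn == -1)
                    continue;
                pid_t pid = fork();
                if (pid == 0) {                  /* child: owns this connection */
                    close(listen_fd);
                    handle_conn(conn);
                    close(conn);
                    _exit(0);
                }
                close(conn);                     /* parent: hand off and move on */
            }
        }

    (A real server would also reap its children, e.g. with signal(SIGCHLD, SIG_IGN); TPC looks the same with pthread_create in place of fork.)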

    The select model

    1. A hard limit on concurrency: the number of FDs (file descriptors) a process can watch is bounded by FD_SETSIZE, whose default is 1024/2048, so the select model's maximum concurrency is capped accordingly. Tempted to raise FD_SETSIZE yourself? Nice idea, but read on first…
    2. Efficiency: every call to select scans the entire FD set linearly, so efficiency falls off linearly too. Enlarge FD_SETSIZE and the consequence is that everybody just gets slower, until, what, everything is timing out?! (See the sketch after this list.)
    3. Kernel/user-space memory copies: how does the kernel deliver FD readiness to user space? select's answer is memory copying, the fd_set is copied in and out on every call.
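
    For reference, the shape of a select loop, a sketch with a hypothetical fds array, showing all three problems at once: the FD_SETSIZE bound, the linear scan, and the set being copied in and out of the kernel on each call:

        #include <sys/select.h>

        /* One select iteration over nfds descriptors stored in fds[]. */
        void select_step(const int *fds, int nfds)
        {
            fd_set rset;
            int maxfd = -1;
            FD_ZERO(&rset);                      /* rebuilt from scratch every time */
            for (int i = 0; i < nfds; i++) {     /* every fds[i] must be < FD_SETSIZE */
                FD_SET(fds[i], &rset);
                if (fds[i] > maxfd) maxfd = fds[i];
            }
            if (select(maxfd + 1, &rset, NULL, NULL, NULL) <= 0)
                return;                          /* rset is copied in AND out of the kernel */
            for (int i = 0; i < nfds; i++)       /* linear scan to find the ready ones */
                if (FD_ISSET(fds[i], &rset)) {
                    /* fds[i] is readable: handle it here */
                }
        }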

    What epoll improves

    1. epoll has no hard limit on concurrent connections; the ceiling is the maximum number of files that can be opened, which is generally far above 2048 and depends mostly on system memory. The exact figure can be checked with cat /proc/sys/fs/file-max (98977 on my virtual machine).
    2. Efficiency: epoll's greatest strength is that it only concerns itself with "active" connections, its cost does not grow with the total connection count, so under real-world network conditions epoll ends up far more efficient than select and poll.
    3. Memory copies: on this point epoll is usually said to rely on "shared memory", sparing that copy as well; at any rate, it does not re-copy the whole interest set across the kernel boundary on every call the way select does.

    Epoll

    Level-triggered (LT, also called condition-triggered): an event fires as long as the condition holds (as long as data remains unconsumed, the kernel keeps notifying you).

    Edge-triggered (ET): an event fires whenever the state changes.

    • Under ET, when the application layer writes into the TCP send buffer, it may well finish writing all of its own data without ever filling the buffer far enough to hit EAGAIN. In that case the application must keep its own flag recording that the buffer is still writable; since ET fires only once per edge, it would otherwise never be told the buffer is writable again (see the sketch after this list).
    • Under LT, the application really is told about writability every time. The problem is the opposite one: if the application has no data to write into the TCP buffer, epoll still keeps announcing that the descriptor is writable, so you have to take the descriptor out of epoll (in practice, drop EPOLLOUT from its event mask) to avoid a stream of useless notifications.
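
    A sketch of this write-side bookkeeping (struct conn, lt_update and the et_* helpers are hypothetical; epfd is the epoll instance). Under LT you subscribe to EPOLLOUT only while data is pending; under ET the mask stays put and you track writability yourself:

        #include <sys/epoll.h>

        struct conn {                /* hypothetical per-connection state */
            int fd;
            int want_write;          /* is there outgoing data queued? */
            int writable;            /* ET only: last known send-buffer state */
        };

        /* LT style: only listen for EPOLLOUT while we have data to write. */
        static void lt_update(int epfd, struct conn *c)
        {
            struct epoll_event ev;
            ev.events = EPOLLIN | (c->want_write ? EPOLLOUT : 0);
            ev.data.ptr = c;
            epoll_ctl(epfd, EPOLL_CTL_MOD, c->fd, &ev);
        }

        /* ET style: the mask never changes; keep the flag yourself. */
        static void et_on_writable(struct conn *c) { c->writable = 1; }  /* EPOLLOUT edge */
        static void et_on_eagain(struct conn *c)   { c->writable = 0; }  /* write hit EAGAIN */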

    Many of the popular event-handling libraries chose LT mode, including the well-known libevent and boost::asio. Why LT? To answer that, we have to start from ET's drawbacks.

    In ET mode, when an event occurs the system notifies you exactly once: after epoll_wait returns the fd, later calls to epoll_wait will not return it again, no matter whether, or how completely, you handled the event. The programmer must therefore guarantee prompt and thorough handling. Say the fd raises an EPOLLIN event and epoll_wait reports it: you must read the fd within this very round, and keep calling recv in a loop until recv returns fewer bytes than requested or fails with EAGAIN; otherwise, if the fd never produces a fresh edge, you get no further chance to learn that it still needs attention. This quietly shifts burden, and opportunity for error, onto the programmer. A sketch of the drain loop follows.
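
    A sketch of that drain loop (on_data() is a hypothetical consumer; the fd must be non-blocking):

        #include <errno.h>
        #include <sys/socket.h>

        void on_data(const char *buf, size_t n);   /* hypothetical consumer */

        /* ET read: drain until the kernel buffer is empty (EAGAIN) or EOF. */
        void et_drain(int fd)
        {
            char buf[4096];
            for (;;) {
                ssize_t n = recv(fd, buf, sizeof buf, 0);
                if (n > 0) {
                    on_data(buf, (size_t)n);
                    continue;
                }
                if (n == 0)
                    break;                         /* peer closed the connection */
                if (errno == EAGAIN || errno == EWOULDBLOCK)
                    break;                         /* drained: safe to wait for the next edge */
                break;                             /* real error: handle/close elsewhere */
            }
        }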

    ET's weakness is precisely LT's strength: whether or not new events occur on an fd, and whether or not earlier ones were fully handled, every epoll_wait hands the fd back to you for processing while the condition persists. Clearly the ready list the OS maintains under LT is longer than under ET, and you also iterate over more fds yourself; weighed against ET's cost of looping over recv/send plus the bookkeeping to decide whether handling is complete, LT should in theory cost a little more, though I suspect the difference is not large. But the convenience and error-resistance LT lends the handling logic give us every reason to make it the first choice, and I suspect that is also why epoll later added LT on top of ET and made it the default mode.

     

    epoll's data structure has room to carry plenty of information:

     

        struct epoll_event {
            __uint32_t   events;    // epoll events
            epoll_data_t data;      // user data variable
        };

        typedef union epoll_data {
            void       *ptr;
            int         fd;
            __uint32_t  u32;
            __uint64_t  u64;
        } epoll_data_t;
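
    The union is the part worth noticing: each registration carries exactly one of ptr/fd/u32/u64, and servers typically stash a pointer to their own per-connection object there. A sketch (struct conn and watch_conn are hypothetical):

        #include <sys/epoll.h>

        struct conn { int fd; /* ... other per-connection state ... */ };

        /* Register c->fd so that epoll hands our conn object straight back. */
        static int watch_conn(int epfd, struct conn *c)
        {
            struct epoll_event ev;
            ev.events = EPOLLIN;
            ev.data.ptr = c;         /* not ev.data.fd: the union holds one member */
            return epoll_ctl(epfd, EPOLL_CTL_ADD, c->fd, &ev);
        }

        /* Later, in the event loop:
               struct conn *c = events[i].data.ptr;
           and no fd-to-state lookup table is needed. */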

     

    As for the epoll API itself, man epoll already provides plenty of useful information:

    EPOLL(7)                                                  Linux Programmer's Manual                                                 EPOLL(7)
    
    NAME
           epoll - I/O event notification facility
    
    SYNOPSIS
           #include <sys/epoll.h>
    
    DESCRIPTION
           The epoll API performs a similar task to poll(2): monitoring multiple file descriptors to see if I/O is possible on any of them.  The
           epoll API can be used either as an edge-triggered or a level-triggered interface and scales well to large  numbers  of  watched  file
           descriptors.  The following system calls are provided to create and manage an epoll instance:
    
           *  epoll_create(2)  creates  an epoll instance and returns a file descriptor referring to that instance.  (The more recent
              epoll_create1(2) extends the functionality of epoll_create(2).)
    
           *  Interest in particular file descriptors is then registered via epoll_ctl(2).  The set of file descriptors currently registered  on
              an epoll instance is sometimes called an epoll set.
    
           *  epoll_wait(2) waits for I/O events, blocking the calling thread if no events are currently available.
    
       Level-triggered and edge-triggered
           The  epoll  event  distribution  interface is able to behave both as edge-triggered (ET) and as level-triggered (LT).  The difference
           between the two mechanisms can be described as follows.  Suppose that this scenario happens:
    
           1. The file descriptor that represents the read side of a pipe (rfd) is registered on the epoll instance.
    
           2. A pipe writer writes 2 kB of data on the write side of the pipe.
    
           3. A call to epoll_wait(2) is done that will return rfd as a ready file descriptor.
    
           4. The pipe reader reads 1 kB of data from rfd.
    
           5. A call to epoll_wait(2) is done.
           If the rfd file descriptor has been added to the epoll interface using the EPOLLET (edge-triggered) flag, the call  to  epoll_wait(2)
           done  in step 5 will probably hang despite the available data still present in the file input buffer; meanwhile the remote peer might
           be expecting a response based on the data it already sent.  The reason for this is that edge-triggered mode delivers events only when
           changes  occur on the monitored file descriptor.  So, in step 5 the caller might end up waiting for some data that is already present
           inside the input buffer.  In the above example, an event on rfd will be generated because of the write done in 2  and  the  event  is
           consumed  in  3.  Since the read operation done in 4 does not consume the whole buffer data, the call to epoll_wait(2) done in step 5
           might block indefinitely.
    
           An application that employs the EPOLLET flag should use nonblocking file descriptors to avoid having a blocking read or write  starve
           a  task  that  is  handling multiple file descriptors.  The suggested way to use epoll as an edge-triggered (EPOLLET) interface is as
           follows:
    
                  i   with nonblocking file descriptors; and
    
                  ii  by waiting for an event only after read(2) or write(2) return EAGAIN.
    
           By contrast, when used as a level-triggered interface (the default, when EPOLLET is not specified), epoll is simply a faster poll(2),
           and can be used wherever the latter is used since it shares the same semantics.
    
           Since  even  with  edge-triggered epoll, multiple events can be generated upon receipt of multiple chunks of data, the caller has the
           option to specify the EPOLLONESHOT flag, to tell epoll to disable the associated file descriptor after the receipt of an  event  with
           epoll_wait(2).   When  the  EPOLLONESHOT  flag  is  specified,  it  is the caller's responsibility to rearm the file descriptor using
           epoll_ctl(2) with EPOLL_CTL_MOD.
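
    A sketch of the EPOLLONESHOT cycle just described (arm_once is a hypothetical helper; the EPOLLIN mask is illustrative):

        #include <sys/epoll.h>

        /* Arm fd on epfd for exactly one readable event.
           op is EPOLL_CTL_ADD the first time, EPOLL_CTL_MOD to rearm. */
        static int arm_once(int epfd, int fd, int op)
        {
            struct epoll_event ev;
            ev.events = EPOLLIN | EPOLLONESHOT;
            ev.data.fd = fd;
            return epoll_ctl(epfd, op, fd, &ev);
        }

        /* Usage: arm_once(epfd, fd, EPOLL_CTL_ADD) to start; after the event
           has been fully handled, rearm with arm_once(epfd, fd, EPOLL_CTL_MOD). */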
    
       /proc interfaces
           The following interfaces can be used to limit the amount of kernel memory consumed by epoll:
    
           /proc/sys/fs/epoll/max_user_watches (since Linux 2.6.28)
                   This specifies a limit on the total number of file descriptors that a user can register across all epoll instances
                   on the system.  The limit is per real user ID.  Each registered file descriptor costs roughly 90 bytes on a 32-bit
                   kernel, and roughly 160 bytes on a 64-bit kernel.  Currently, the default value for max_user_watches is 1/25 (4%)
                   of the available low memory, divided by the registration cost in bytes.
       Example for suggested usage
           While  the  usage  of  epoll when employed as a level-triggered interface does have the same semantics as poll(2), the edge-triggered
           usage requires more clarification to avoid stalls in the application event loop.  In this example, listener is a  nonblocking  socket
           on  which  listen(2) has been called.  The function do_use_fd() uses the new ready file descriptor until EAGAIN is returned by either
           read(2) or write(2).  An event-driven state machine application should, after having received EAGAIN, record  its  current  state  so
           that at the next call to do_use_fd() it will continue to read(2) or write(2) from where it stopped before.
    
               #define MAX_EVENTS 10
                struct epoll_event ev, events[MAX_EVENTS];
                struct sockaddr_in local;            /* address of connecting peer */
                socklen_t addrlen = sizeof(local);
                int listen_sock, conn_sock, nfds, epollfd, n;
    
               /* Set up listening socket, 'listen_sock' (socket(),
                  bind(), listen()) */
    
               epollfd = epoll_create(10);
               if (epollfd == -1) {
                   perror("epoll_create");
                   exit(EXIT_FAILURE);
               }
    
               ev.events = EPOLLIN;
               ev.data.fd = listen_sock;
               if (epoll_ctl(epollfd, EPOLL_CTL_ADD, listen_sock, &ev) == -1) {
                   perror("epoll_ctl: listen_sock");
                   exit(EXIT_FAILURE);
               }
    
               for (;;) {
                   nfds = epoll_wait(epollfd, events, MAX_EVENTS, -1);
                   if (nfds == -1) {
                       perror("epoll_pwait");
                       exit(EXIT_FAILURE);
                   }
                   for (n = 0; n < nfds; ++n) {
                       if (events[n].data.fd == listen_sock) {
                           conn_sock = accept(listen_sock,
                                           (struct sockaddr *) &local, &addrlen);
                           if (conn_sock == -1) {
                               perror("accept");
                               exit(EXIT_FAILURE);
                           }
                           setnonblocking(conn_sock);
                           ev.events = EPOLLIN | EPOLLET;
                           ev.data.fd = conn_sock;
                           if (epoll_ctl(epollfd, EPOLL_CTL_ADD, conn_sock,
                                       &ev) == -1) {
                               perror("epoll_ctl: conn_sock");
                               exit(EXIT_FAILURE);
                           }
                       } else {
                           do_use_fd(events[n].data.fd);
                       }
                   }
               }
    
            When used as an edge-triggered interface, for performance reasons, it is possible to add the file descriptor inside the
            epoll interface (EPOLL_CTL_ADD) once by specifying (EPOLLIN|EPOLLOUT).  This allows you to avoid continuously switching
            between EPOLLIN and EPOLLOUT calling epoll_ctl(2) with EPOLL_CTL_MOD.
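
    That is, as a sketch (watch_both, handle_readable and handle_writable are hypothetical), register both directions once and branch on the returned events mask instead of toggling the registration with epoll_ctl(2):

        #include <sys/epoll.h>

        /* One edge-triggered registration covering both directions. */
        static int watch_both(int epfd, int fd)
        {
            struct epoll_event ev;
            ev.events = EPOLLIN | EPOLLOUT | EPOLLET;
            ev.data.fd = fd;
            return epoll_ctl(epfd, EPOLL_CTL_ADD, fd, &ev);
        }

        /* In the event loop, branch on the mask instead of re-registering:
               if (events[n].events & EPOLLIN)  handle_readable(events[n].data.fd);
               if (events[n].events & EPOLLOUT) handle_writable(events[n].data.fd);
        */
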
       Questions and answers
           Q0  What is the key used to distinguish the file descriptors registered in an epoll set?
    
           A0  The  key is the combination of the file descriptor number and the open file description (also known as an "open file handle", the
               kernel's internal representation of an open file).
    
           Q1  What happens if you register the same file descriptor on an epoll instance twice?
    
           A1  You will probably get EEXIST.  However, it is possible to add a duplicate (dup(2), dup2(2), fcntl(2) F_DUPFD) descriptor  to  the
               same  epoll instance.  This can be a useful technique for filtering events, if the duplicate file descriptors are registered with
               different events masks.
    
           Q2  Can two epoll instances wait for the same file descriptor?  If so, are events reported to both epoll file descriptors?
    
           A2  Yes, and events would be reported to both.  However, careful programming may be needed to do this correctly.
    
           Q3  Is the epoll file descriptor itself poll/epoll/selectable?
    
           A3  Yes.  If an epoll file descriptor has events waiting then it will indicate as being readable.
    
           Q4  What happens if one attempts to put an epoll file descriptor into its own file descriptor set?
    
           A4  The epoll_ctl(2) call will fail (EINVAL).  However, you can add an epoll file descriptor inside  another  epoll  file  descriptor
               set.
    
           Q5  Can I send an epoll file descriptor over a UNIX domain socket to another process?
    
           A5  Yes,  but  it  does  not  make sense to do this, since the receiving process would not have copies of the file descriptors in the
               epoll set.
    
           Q6  Will closing a file descriptor cause it to be removed from all epoll sets automatically?
           A6  Yes, but be aware of the following point.  A file descriptor is a reference to an open file description (see open(2)).   Whenever
               a  descriptor  is  duplicated via dup(2), dup2(2), fcntl(2) F_DUPFD, or fork(2), a new file descriptor referring to the same open
               file description is created.  An open file description continues to exist until all file descriptors referring to  it  have  been
               closed.  A file descriptor is removed from an epoll set only after all the file descriptors referring to the underlying open file
               description have been closed (or before if the descriptor is explicitly removed using epoll_ctl(2)  EPOLL_CTL_DEL).   This  means
               that  even  after a file descriptor that is part of an epoll set has been closed, events may be reported for that file descriptor
               if other file descriptors referring to the same underlying file description remain open.
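
    Given A6, a safe teardown order, sketched here with a hypothetical drop_fd helper, is to deregister explicitly before closing, so that lingering duplicates cannot keep events flowing:

        #include <sys/epoll.h>
        #include <unistd.h>

        /* Tear down a watched fd: deregister first, then close. */
        static void drop_fd(int epfd, int fd)
        {
            /* Explicit EPOLL_CTL_DEL works even if dup()ed copies of the
               open file description are still alive somewhere else. */
            epoll_ctl(epfd, EPOLL_CTL_DEL, fd, NULL);
            close(fd);
        }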
    
           Q7  If more than one event occurs between epoll_wait(2) calls, are they combined or reported separately?
    
           A7  They will be combined.
    
           Q8  Does an operation on a file descriptor affect the already collected but not yet reported events?
    
            A8  You can do two operations on an existing file descriptor.  Remove would be meaningless for this case.  Modify will
                reread available I/O.
    
           Q9  Do I need to continuously read/write a file descriptor until EAGAIN when using the EPOLLET flag (edge-triggered behavior) ?
    
           A9  Receiving  an  event from epoll_wait(2) should suggest to you that such file descriptor is ready for the requested I/O operation.
               You must consider it ready until the next (nonblocking) read/write yields EAGAIN.  When and how you will use the file  descriptor
               is entirely up to you.
    
               For  packet/token-oriented  files  (e.g.,  datagram  socket,  terminal  in canonical mode), the only way to detect the end of the
               read/write I/O space is to continue to read/write until EAGAIN.
    
               For stream-oriented files (e.g., pipe, FIFO, stream socket), the condition that the read/write I/O space is exhausted can also be
               detected  by  checking the amount of data read from / written to the target file descriptor.  For example, if you call read(2) by
               asking to read a certain amount of data and read(2) returns a lower number of bytes, you can be sure of having exhausted the read
               I/O  space  for  the  file descriptor.  The same is true when writing using write(2).  (Avoid this latter technique if you cannot
               guarantee that the monitored file descriptor always refers to a stream-oriented file.)
       Possible pitfalls and ways to avoid them
           o Starvation (edge-triggered)
    
           If there is a large amount of I/O space, it is possible that by trying to drain it the other files will  not  get  processed  causing
           starvation.  (This problem is not specific to epoll.)
    
           The solution is to maintain a ready list and mark the file descriptor as ready in its associated data structure, thereby allowing the
           application to remember which files need to be processed but still round robin amongst all  the  ready  files.   This  also  supports
           ignoring subsequent events you receive for file descriptors that are already ready.
    
           o If using an event cache...
    
           If  you use an event cache or store all the file descriptors returned from epoll_wait(2), then make sure to provide a way to mark its
           closure dynamically (i.e., caused by a previous event's processing).  Suppose you receive 100 events from epoll_wait(2), and in event
           #47 a condition causes event #13 to be closed.  If you remove the structure and close(2) the file descriptor for event #13, then your
           event cache might still say there are events waiting for that file descriptor causing confusion.
    
           One solution for this is to call, during the processing of event 47,  epoll_ctl(EPOLL_CTL_DEL)  to  delete  file  descriptor  13  and
           close(2),  then  mark  its  associated  data  structure as removed and link it to a cleanup list.  If you find another event for file
            descriptor 13 in your batch processing, you will discover the file descriptor had been previously removed and there will
            be no confusion.

    Partly excerpted from: http://blog.csdn.net/sparkliang/article/details/4770655

    http://www.cppblog.com/wgcno7/archive/2010/05/20/115910.html (an implementation of netecho)

    http://blog.csdn.net/caiwenfeng_for_23/article/details/8458299

    http://www.cnblogs.com/dawen/archive/2011/05/18/2050358.html

    http://www.cnblogs.com/egametang/archive/2012/07/30/2615808.html

    http://www.cppblog.com/peakflys/archive/2012/08/26/188344.html

  • Original post: https://www.cnblogs.com/linyx/p/4000421.html