LKD: Chapter 8 Bottom Halves and Deferring Work


      In the 2.6.x kernel, there are three mechanisms for implementing a bottom half: softirqs, tasklets, and work queues. Let's look at each in turn:


    Softirqs:

      Softirqs are statically allocated at compile time. Each softirq is represented by the softirq_action structure, which is defined in <linux/interrupt.h>:

    struct softirq_action {
        void (*action)(struct softirq_action *);
    };

       A 32-entry array of this structure is declared in kernel/softirq.c:

    static struct softirq_action softirq_vec[NR_SOFTIRQS];

      But in the current kernel, only nine of the 32 possible entries are used. (As we will discuss later, tasklets are built on top of softirqs and account for two of the nine.)

      The prototype of a softirq handler looks like

    void softirq_handler(struct softirq_action *)

      A softirq never preempts another softirq. The only event that can preempt a softirq is an interrupt handler.

    Executing Softirqs:

      A registered softirq must be marked before it will execute. This is called raising the softirq.

      Softirq execution occurs in __do_softirq(), which is invoked by do_softirq(). If there are pending softirqs, __do_softirq() loops over each one, invoking its handler. Let's look at a simplified variant of the important part of __do_softirq():

    u32 pending;
    
    pending = local_softirq_pending();
    if (pending) {
        struct softirq_action *h;
    
        /* reset the pending bitmask */
        set_softirq_pending(0);
    
        h = softirq_vec;
        do {
            if (pending & 1)
                h->action(h);
            h++;
            pending >>= 1;
        } while (pending);
    }

    Using Softirqs:

      Softirqs are reserved for the most timing-critical and important bottom-half processing on the system.  Currently, only two subsystems - networking and block devices - directly use softirqs.

      Registering Your Handler:

      The softirq handler is registered at run-time via open_softirq():

    /* in net/core/dev.c */
    /* two parameters: the softirq's index and its handler function */
    open_softirq(NET_TX_SOFTIRQ, net_tx_action);

      Raising Your Softirq:

      To mark it pending, call raise_softirq():    

    raise_softirq(NET_TX_SOFTIRQ);

      Then it is run at the next invocation of do_softirq().

    asmlinkage void do_softirq(void)
    {
        __u32 pending;
        unsigned long flags;
    
        if (in_interrupt())
            return;
    
        local_irq_save(flags);
        
        pending = local_softirq_pending();
    
        if (pending)
            __do_softirq();
    
        local_irq_restore(flags);
    }

      I had a look at the full __do_softirq(), but it's too long to show here, so I'll skip it :)

      In general, pending softirqs are checked for and executed in the following places:

        In the return from hardware interrupt code path;

        In the ksoftirqd kernel thread;

        In any code that explicitly checks for and executes pending softirqs, such as the networking subsystem.


    Tasklets:

      Tasklets are built on top of softirqs and are the more popular choice. The difference is that two tasklets of the same type cannot run simultaneously on different processors, whereas two softirqs of the same type can.

      As discussed, tasklets are represented by two softirqs: HI_SOFTIRQ and TASKLET_SOFTIRQ.

      The tasklet structure is declared in <linux/interrupt.h>:

    struct tasklet_struct {
        struct tasklet_struct *next;    
        unsigned long state;
        atomic_t count;
        void (*func)(unsigned long);    /* tasklet handler function */
        unsigned long data;                /* argument to the tasklet function */
    };

    Scheduling Tasklets:

      Tasklets are scheduled via tasklet_schedule() and tasklet_hi_schedule() (the latter for high-priority tasklets):

    static inline void tasklet_schedule(struct tasklet_struct *t)
    {
        if (!test_and_set_bit(TASKLET_STATE_SCHED, &t->state))
            __tasklet_schedule(t);
    }

      Here's the __tasklet_schedule():

    void __tasklet_schedule(struct tasklet_struct *t)
    {
        unsigned long flags;
        
         /* save the state of interrupt system, and then disable local interrupts. */
        local_irq_save(flags);
        t->next = NULL;
        /* add the tasklet to be scheduled to the tail of the tasklet_vec linked list */
        *__get_cpu_var(tasklet_vec).tail = t;
        __get_cpu_var(tasklet_vec).tail = &(t->next);
        /* raise the TASKLET_SOFTIRQ, so do_softirq() executes this tasklet in the near future */
        raise_softirq_irqoff(TASKLET_SOFTIRQ);
        local_irq_restore(flags);
    }

      Then do_softirq() will soon execute the associated handler, tasklet_action() (or tasklet_hi_action() for high-priority tasklets).

     ksoftirqd:

      Softirq (and thus tasklet) processing is aided by a set of per-processor kernel threads. The kernel processes softirqs most commonly on return from handling an interrupt.

      There is one thread per processor. The threads are each named ksoftirqd/n where n is the processor number.  


    Work Queue:

      Work queues defer work into a kernel thread - this bottom half always runs in process context. Work queues are therefore schedulable and can sleep.

       In its most basic form, the work queue subsystem is an interface for creating kernel threads, called worker threads, to handle work queued from elsewhere.

      The default worker threads are called events/n where n is the processor number.

    Original article: https://www.cnblogs.com/justforfun12/p/5071664.html