• Linux Process Scheduling and Switching


    2016-04-15

    Zhang Chao, "Linux Kernel Analysis" MOOC course: http://mooc.study.163.com/course/USTC-1000029000

    I. Analysis

    Timing of process scheduling, and process switching

      Operating-system theory introduces a large number of process scheduling algorithms. From an implementation point of view, these algorithms merely select a new process from the run queue; they differ only in the policy used to make that choice. For understanding how the operating system actually works, the timing of scheduling and the mechanism of process switching are the more important things.

    When scheduling happens:

    schedule() is a kernel function, not a system call, so a user-mode process cannot call it directly; it can only be invoked indirectly. A kernel thread is a special process that runs only in kernel mode and has no user mode.

    1. During interrupt handling (including timer interrupts, I/O interrupts, system calls and exceptions), schedule() is called directly, or it is called on return to user mode according to the need_resched flag;

    2. A kernel thread can call schedule() directly to switch processes, and it can also be scheduled during interrupt handling; in other words, kernel threads, as a special class of process, can be scheduled both actively and passively (see the sketch after this list);

    3. A user-mode process cannot schedule actively; it can only be scheduled at certain points after trapping into kernel mode, i.e. during interrupt handling.
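
    As an illustration of "active" scheduling by a kernel thread (point 2 above), here is a minimal sketch of a kernel-thread body. The function name my_kthread_fn and the work it does are hypothetical; kthread_should_stop(), set_current_state() and schedule() are the real kernel interfaces involved.

    #include <linux/kthread.h>
    #include <linux/sched.h>

    /* Hypothetical kernel-thread body: a kernel thread runs only in kernel
     * mode and may call schedule() directly (voluntary scheduling). */
    static int my_kthread_fn(void *data)
    {
        while (!kthread_should_stop()) {
            /* ... do some work ... */

            /* Mark ourselves as sleeping, then let the scheduler pick the
             * next task; execution resumes here once we are woken up. */
            set_current_state(TASK_INTERRUPTIBLE);
            schedule();
        }
        return 0;
    }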

    Process switching:

    1. To control process execution, the kernel must be able to suspend the process currently running on the CPU and resume some previously suspended process. This is called a process switch, task switch, or context switch;

    2. Suspending the process running on the CPU is different from saving state at an interrupt: before and after an interrupt, execution stays within the same process context, it merely moves from user mode to kernel mode;

    3. The process context contains all the information the process needs to run:

       I   User address space: program code, data, user stack, etc.
       II  Control information: process descriptor, kernel stack, etc.
       III Hardware context (note that interrupts also save a hardware context, just by a different method).

    4. The schedule() function selects a new process to run and calls context_switch() to perform the context switch; context_switch() in turn uses the switch_to macro to do the key hardware-context switch.

    schedule() is in /linux-3.18.6/kernel/sched/core.c; its core is __schedule():

    2733/*
    2734 * __schedule() is the main scheduler function.
    2735 *
    2736 * The main means of driving the scheduler and thus entering this function are:
    2737 *
    2738 *   1. Explicit blocking: mutex, semaphore, waitqueue, etc.
    2739 *
    2740 *   2. TIF_NEED_RESCHED flag is checked on interrupt and userspace return
    2741 *      paths. For example, see arch/x86/entry_64.S.
    2742 *
    2743 *      To drive preemption between tasks, the scheduler sets the flag in timer
    2744 *      interrupt handler scheduler_tick().
    2745 *
    2746 *   3. Wakeups don't really cause entry into schedule(). They add a
    2747 *      task to the run-queue and that's it.
    2748 *
    2749 *      Now, if the new task added to the run-queue preempts the current
    2750 *      task, then the wakeup sets TIF_NEED_RESCHED and schedule() gets
    2751 *      called on the nearest possible occasion:
    2752 *
    2753 *       - If the kernel is preemptible (CONFIG_PREEMPT=y):
    2754 *
    2755 *         - in syscall or exception context, at the next outmost
    2756 *           preempt_enable(). (this might be as soon as the wake_up()'s
    2757 *           spin_unlock()!)
    2758 *
    2759 *         - in IRQ context, return from interrupt-handler to
    2760 *           preemptible context
    2761 *
    2762 *       - If the kernel is not preemptible (CONFIG_PREEMPT is not set)
    2763 *         then at the next:
    2764 *
    2765 *          - cond_resched() call
    2766 *          - explicit schedule() call
    2767 *          - return from syscall or exception to user-space
    2768 *          - return from interrupt-handler to user-space
    2769 */
    2770static void __sched __schedule(void)
    2771{
    2772    struct task_struct *prev, *next;
    2773    unsigned long *switch_count;
    2774    struct rq *rq;
    2775    int cpu;
    2776
    2777need_resched:
    2778    preempt_disable();
    2779    cpu = smp_processor_id();
    2780    rq = cpu_rq(cpu);
    2781    rcu_note_context_switch(cpu);
    2782    prev = rq->curr;
    2783
    2784    schedule_debug(prev);
    2785
    2786    if (sched_feat(HRTICK))
    2787        hrtick_clear(rq);
    2788
    2789    /*
    2790     * Make sure that signal_pending_state()->signal_pending() below
    2791     * can't be reordered with __set_current_state(TASK_INTERRUPTIBLE)
    2792     * done by the caller to avoid the race with signal_wake_up().
    2793     */
    2794    smp_mb__before_spinlock();
    2795    raw_spin_lock_irq(&rq->lock);
    2796
    2797    switch_count = &prev->nivcsw;
    2798    if (prev->state && !(preempt_count() & PREEMPT_ACTIVE)) {
    2799        if (unlikely(signal_pending_state(prev->state, prev))) {
    2800            prev->state = TASK_RUNNING;
    2801        } else {
    2802            deactivate_task(rq, prev, DEQUEUE_SLEEP);
    2803            prev->on_rq = 0;
    2804
    2805            /*
    2806             * If a worker went to sleep, notify and ask workqueue
    2807             * whether it wants to wake up a task to maintain
    2808             * concurrency.
    2809             */
    2810            if (prev->flags & PF_WQ_WORKER) {
    2811                struct task_struct *to_wakeup;
    2812
    2813                to_wakeup = wq_worker_sleeping(prev, cpu);
    2814                if (to_wakeup)
    2815                    try_to_wake_up_local(to_wakeup);
    2816            }
    2817        }
    2818        switch_count = &prev->nvcsw;
    2819    }
    2820
    2821    if (task_on_rq_queued(prev) || rq->skip_clock_update < 0)
    2822        update_rq_clock(rq);
    2823
    2824    next = pick_next_task(rq, prev);
    2825    clear_tsk_need_resched(prev);
    2826    clear_preempt_need_resched();
    2827    rq->skip_clock_update = 0;
    2828
    2829    if (likely(prev != next)) {
    2830        rq->nr_switches++;
    2831        rq->curr = next;
    2832        ++*switch_count;
    2833
    2834        context_switch(rq, prev, next); /* unlocks the rq */
    2835        /*
    2836         * The context switch have flipped the stack from under us
    2837         * and restored the local variables which were saved when
    2838         * this task called schedule() in the past. prev == current
    2839         * is still correct, but it can be moved to another cpu/rq.
    2840         */
    2841        cpu = smp_processor_id();
    2842        rq = cpu_rq(cpu);
    2843    } else
    2844        raw_spin_unlock_irq(&rq->lock);
    2845
    2846    post_schedule(rq);
    2847
    2848    sched_preempt_enable_no_resched();
    2849    if (need_resched())
    2850        goto need_resched;
    2851}

    Let us look at two lines in it. The first is line 2824: next = pick_next_task(rq, prev);  // finds the next process to run

    The second is line 2834: context_switch(rq, prev, next); /* unlocks the rq */  // performs the switch

       I  next = pick_next_task(rq, prev); // the process scheduling algorithms are all encapsulated inside this function

     pick_next_task is in /linux-3.18.6/kernel/sched/core.c:

    2694/*
    2695 * Pick up the highest-prio task:
    2696 */
    2697static inline struct task_struct *
    2698pick_next_task(struct rq *rq, struct task_struct *prev)
    2699{
    2700    const struct sched_class *class = &fair_sched_class;
    2701    struct task_struct *p;
    2702
    2703    /*
    2704     * Optimization: we know that if all tasks are in
    2705     * the fair class we can call that function directly:
    2706     */
    2707    if (likely(prev->sched_class == class &&
    2708           rq->nr_running == rq->cfs.h_nr_running)) {
    2709        p = fair_sched_class.pick_next_task(rq, prev);
    2710        if (unlikely(p == RETRY_TASK))
    2711            goto again;
    2712
    2713        /* assumes fair_sched_class->next == idle_sched_class */
    2714        if (unlikely(!p))
    2715            p = idle_sched_class.pick_next_task(rq, prev);
    2716
    2717        return p;
    2718    }
    2719
    2720again:
    2721    for_each_class(class) {
    2722        p = class->pick_next_task(rq, prev);
    2723        if (p) {
    2724            if (unlikely(p == RETRY_TASK))
    2725                goto again;
    2726            return p;
    2727        }
    2728    }
    2729
    2730    BUG(); /* the idle class will always have a runnable task */
    2731}

       II  context_switch(rq, prev, next); // process context switch: switch to the new address space and the new register state

    context_switch is in /linux-3.18.6/kernel/sched/core.c:

    2331/*
    2332 * context_switch - switch to the new MM and the new
    2333 * thread's register state.
    2334 */
    2335static inline void
    2336context_switch(struct rq *rq, struct task_struct *prev,
    2337           struct task_struct *next)
    2338{
    2339    struct mm_struct *mm, *oldmm;
    2340
    2341    prepare_task_switch(rq, prev, next);
    2342
    2343    mm = next->mm;
    2344    oldmm = prev->active_mm;
    2345    /*
    2346     * For paravirt, this is coupled with an exit in switch_to to
    2347     * combine the page table reload and the switch backend into
    2348     * one hypercall.
    2349     */
    2350    arch_start_context_switch(prev);
    2351
    2352    if (!mm) {
    2353        next->active_mm = oldmm;
    2354        atomic_inc(&oldmm->mm_count);
    2355        enter_lazy_tlb(oldmm, next);
    2356    } else
    2357        switch_mm(oldmm, mm, next);
    2358
    2359    if (!prev->mm) {
    2360        prev->active_mm = NULL;
    2361        rq->prev_mm = oldmm;
    2362    }
    2363    /*
    2364     * Since the runqueue lock will be released by the next
    2365     * task (which is an invalid locking op but in the case
    2366     * of the scheduler it's an obvious special-case), so we
    2367     * do an early lockdep release here:
    2368     */
    2369    spin_release(&rq->lock.dep_map, 1, _THIS_IP_);
    2370
    2371    context_tracking_task_switch(prev, next);
    2372    /* Here we just switch the register state and the stack. */
    2373    switch_to(prev, next, prev);
    2374
    2375    barrier();
    2376    /*
    2377     * this_rq must be evaluated again because prev may have moved
    2378     * CPUs since it called schedule(), thus the 'rq' on its stack
    2379     * frame will be invalid.
    2380     */
    2381    finish_task_switch(this_rq(), prev);
    2382}

    Line 2341, prepare_task_switch(rq, prev, next); // does the preparatory work before the switch

    Line 2373, switch_to(prev, next, prev); // performs the switch

       III  switch_to uses the two parameters prev and next: prev points to the current process, next points to the process being scheduled in.

    switch_to is defined in /linux-3.18.6/arch/x86/include/asm/switch_to.h:

    1#ifndef _ASM_X86_SWITCH_TO_H
    2#define _ASM_X86_SWITCH_TO_H
    3
    4struct task_struct; /* one of the stranger aspects of C forward declarations */
    5__visible struct task_struct *__switch_to(struct task_struct *prev,
    6                       struct task_struct *next);
    7struct tss_struct;
    8void __switch_to_xtra(struct task_struct *prev_p, struct task_struct *next_p,
    9              struct tss_struct *tss);
    10
    11#ifdef CONFIG_X86_32
    12
    13#ifdef CONFIG_CC_STACKPROTECTOR
    14#define __switch_canary                            \
    15    "movl %P[task_canary](%[next]), %%ebx\n\t"            \
    16    "movl %%ebx, "__percpu_arg([stack_canary])"\n\t"
    17#define __switch_canary_oparam                        \
    18    , [stack_canary] "=m" (stack_canary.canary)
    19#define __switch_canary_iparam                        \
    20    , [task_canary] "i" (offsetof(struct task_struct, stack_canary))
    21#else    /* CC_STACKPROTECTOR */
    22#define __switch_canary
    23#define __switch_canary_oparam
    24#define __switch_canary_iparam
    25#endif    /* CC_STACKPROTECTOR */
    26
    27/*
    28 * Saving eflags is important. It switches not only IOPL between tasks,
    29 * it also protects other tasks from NT leaking through sysenter etc.
    30 */
    31#define switch_to(prev, next, last)                    \
    32do {                                    \
    33    /*                                \
    34     * Context-switching clobbers all registers, so we clobber    \
    35     * them explicitly, via unused output variables.        \
    36     * (EAX and EBP is not listed because EBP is saved/restored    \
    37     * explicitly for wchan access and EAX is the return value of    \
    38     * __switch_to())                        \
    39     */                                \
    40    unsigned long ebx, ecx, edx, esi, edi;                \
    41                                    \
    42    asm volatile("pushfl\n\t"        /* save    flags */    \
    43             "pushl %%ebp\n\t"        /* save    EBP   */    \
    44             "movl %%esp,%[prev_sp]\n\t"    /* save    ESP   */ \
    45             "movl %[next_sp],%%esp\n\t"    /* restore ESP   */ \
    46             "movl $1f,%[prev_ip]\n\t"    /* save    EIP   */    \
    47             "pushl %[next_ip]\n\t"    /* restore EIP   */    \
    48             __switch_canary                    \
    49             "jmp __switch_to\n"    /* regparm call  */    \
    50             "1:\t"                        \
    51             "popl %%ebp\n\t"        /* restore EBP   */    \
    52             "popfl\n"            /* restore flags */    \
    53                                    \
    54             /* output parameters */                \
    55             : [prev_sp] "=m" (prev->thread.sp),        \
    56               [prev_ip] "=m" (prev->thread.ip),        \
    57               "=a" (last),                    \
    58                                    \
    59               /* clobbered output registers: */        \
    60               "=b" (ebx), "=c" (ecx), "=d" (edx),        \
    61               "=S" (esi), "=D" (edi)                \
    62                                           \
    63               __switch_canary_oparam                \
    64                                    \
    65               /* input parameters: */                \
    66             : [next_sp]  "m" (next->thread.sp),        \
    67               [next_ip]  "m" (next->thread.ip),        \
    68                                           \
    69               /* regparm parameters for __switch_to(): */    \
    70               [prev]     "a" (prev),                \
    71               [next]     "d" (next)                \
    72                                    \
    73               __switch_canary_iparam                \
    74                                    \
    75             : /* reloaded segment registers */            \
    76            "memory");                    \
    77} while (0)
    78
    79#else /* CONFIG_X86_32 */
    80
    81/* frame pointer must be last for get_wchan */
    82#define SAVE_CONTEXT    "pushf ; pushq %%rbp ; movq %%rsi,%%rbp\n\t"
    83#define RESTORE_CONTEXT "movq %%rbp,%%rsi ; popq %%rbp ; popf\t"
    84
    85#define __EXTRA_CLOBBER  \
    86    , "rcx", "rbx", "rdx", "r8", "r9", "r10", "r11", \
    87      "r12", "r13", "r14", "r15"
    88
    89#ifdef CONFIG_CC_STACKPROTECTOR
    90#define __switch_canary                              \
    91    "movq %P[task_canary](%%rsi),%%r8\n\t"                  \
    92    "movq %%r8,"__percpu_arg([gs_canary])"\n\t"
    93#define __switch_canary_oparam                          \
    94    , [gs_canary] "=m" (irq_stack_union.stack_canary)
    95#define __switch_canary_iparam                          \
    96    , [task_canary] "i" (offsetof(struct task_struct, stack_canary))
    97#else    /* CC_STACKPROTECTOR */
    98#define __switch_canary
    99#define __switch_canary_oparam
    100#define __switch_canary_iparam
    101#endif    /* CC_STACKPROTECTOR */
    102
    103/* Save restore flags to clear handle leaking NT */
    104#define switch_to(prev, next, last) \
    105    asm volatile(SAVE_CONTEXT                      \
    106         "movq %%rsp,%P[threadrsp](%[prev])\n\t" /* save RSP */      \
    107         "movq %P[threadrsp](%[next]),%%rsp\n\t" /* restore RSP */      \
    108         "call __switch_to\n\t"                      \
    109         "movq "__percpu_arg([current_task])",%%rsi\n\t"          \
    110         __switch_canary                          \
    111         "movq %P[thread_info](%%rsi),%%r8\n\t"              \
    112         "movq %%rax,%%rdi\n\t"                       \
    113         "testl  %[_tif_fork],%P[ti_flags](%%r8)\n\t"          \
    114         "jnz   ret_from_fork\n\t"                      \
    115         RESTORE_CONTEXT                          \
    116         : "=a" (last)                            \
    117           __switch_canary_oparam                      \
    118         : [next] "S" (next), [prev] "D" (prev),              \
    119           [threadrsp] "i" (offsetof(struct task_struct, thread.sp)), \
    120           [ti_flags] "i" (offsetof(struct thread_info, flags)),      \
    121           [_tif_fork] "i" (_TIF_FORK),                    \
    122           [thread_info] "i" (offsetof(struct task_struct, stack)),   \
    123           [current_task] "m" (current_task)              \
    124           __switch_canary_iparam                      \
    125         : "memory", "cc" __EXTRA_CLOBBER)
    126
    127#endif /* CONFIG_X86_32 */
    128
    129#endif /* _ASM_X86_SWITCH_TO_H */
    130

    This completes the process switch.

     II. Analyzing the process switch, using part of the switch_to code

    27/*
    28 * Saving eflags is important. It switches not only IOPL between tasks,
    29 * it also protects other tasks from NT leaking through sysenter etc.
    30 */
    31#define switch_to(prev, next, last)                    \
    32do {                                    \
    33    /*                                \
    34     * Context-switching clobbers all registers, so we clobber    \
    35     * them explicitly, via unused output variables.        \
    36     * (EAX and EBP is not listed because EBP is saved/restored    \
    37     * explicitly for wchan access and EAX is the return value of    \
    38     * __switch_to())                        \
    39     */                                \
    40    unsigned long ebx, ecx, edx, esi, edi;                \
    41                                    \
    42    asm volatile("pushfl\n\t"        /* save    flags */    \
    43             "pushl %%ebp\n\t"        /* save    EBP   */    \
    44             "movl %%esp,%[prev_sp]\n\t"    /* save    ESP   */ \
    45             "movl %[next_sp],%%esp\n\t"    /* restore ESP   */ \
    46             "movl $1f,%[prev_ip]\n\t"    /* save    EIP   */    \
    47             "pushl %[next_ip]\n\t"    /* restore EIP   */    \
    48             __switch_canary                    \
    49             "jmp __switch_to\n"    /* regparm call  */    \
    50             "1:\t"                        \
    51             "popl %%ebp\n\t"        /* restore EBP   */    \
    52             "popfl\n"            /* restore flags */    \
    53                                    \
    54             /* output parameters */                \
    55             : [prev_sp] "=m" (prev->thread.sp),        \
    56               [prev_ip] "=m" (prev->thread.ip),        \
    57               "=a" (last),                    \
    58                                    \
    59               /* clobbered output registers: */        \
    60               "=b" (ebx), "=c" (ecx), "=d" (edx),        \
    61               "=S" (esi), "=D" (edi)                \
    62                                           \
    63               __switch_canary_oparam                \
    64                                    \
    65               /* input parameters: */                \
    66             : [next_sp]  "m" (next->thread.sp),        \
    67               [next_ip]  "m" (next->thread.ip),        \
    68                                           \
    69               /* regparm parameters for __switch_to(): */    \
    70               [prev]     "a" (prev),                \
    71               [next]     "d" (next)                \
    72                                    \
    73               __switch_canary_iparam                \
    74                                    \
    75             : /* reloaded segment registers */            \
    76            "memory");                    \
    77} while (0)

    switch_to uses the two parameters prev and next: prev points to the current process, which we denote X; next points to the process being scheduled in, i.e. the next process, which we denote Y. How the next process is chosen is the job of pick_next_task.

    Line 42: push the flags onto the current process X's stack, saving EFLAGS.

    Line 43: push the current EBP onto X's stack, saving EBP.

    Line 44: save the current ESP into X's thread.sp. Here [prev_sp] is an asm operand name, defined on line 55, standing for prev->thread.sp.

    Line 45: load the next process Y's thread.sp into ESP. This is the step that switches the stack pointer from X's stack to Y's stack. [next_sp], likewise, is defined on line 66.

    Line 46: store the address of the label "1:" on line 50 into X's thread.ip, saving EIP; when X is next switched back in, it resumes execution from line 50. [prev_ip], likewise, is defined on line 56.

    Line 47: push the next process Y's thread.ip onto Y's stack (ESP already points to Y's stack at this point). [next_ip], likewise, is defined on line 67.

    Line 49: jump to __switch_to. Because Y's thread.ip was pushed in line 47, when __switch_to returns with ret it pops that value into EIP, so execution continues at Y's saved thread.ip, normally the label "1:" where Y was switched out last time (or ret_from_fork for a newly forked process).

    Line 51: pop from Y's stack into EBP, restoring Y's EBP.

    Line 52: pop from Y's stack, restoring Y's EFLAGS.

    Lines 51 and 52 are exactly the inverse of the operations in lines 42 and 43.

    III. Experiment: tracing the schedule() function with gdb
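
    The trace itself is not written up here. As a rough sketch, assuming the lab environment used in this course (a linux-3.18.6 kernel built with debug info, the MenuOS rootfs.img, and qemu; the exact qemu binary name and image paths depend on your setup), the session would look roughly like this:

    # Terminal 1: boot MenuOS under qemu, frozen at startup (-S), with a gdb stub on port 1234 (-s)
    qemu -kernel linux-3.18.6/arch/x86/boot/bzImage -initrd rootfs.img -s -S

    # Terminal 2: attach gdb and break on the scheduler
    gdb
    (gdb) file linux-3.18.6/vmlinux    # load kernel symbols
    (gdb) target remote:1234           # connect to qemu's gdb stub
    (gdb) break schedule               # a breakpoint on __schedule also works
    (gdb) c                            # continue; single-step with s/n when the breakpoint hits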

     

    IV. The general execution flow of a Linux system

    The most general case: a running user-mode process X switches to user-mode process Y.

      1. User-mode process X is running.

      2. An interrupt occurs: save cs:eip/esp/eflags (current) to the kernel stack, then load cs:eip (the entry of a specific ISR) and ss:esp (pointing to the kernel stack).

      3. SAVE_ALL // save the rest of the register context

      4. schedule() is called during interrupt handling or just before the interrupt returns; its switch_to does the key process context switch.

      5. After label 1:, user-mode process Y starts running (Y was switched out through these same steps at some earlier time, so it can continue from label 1:).

      6. restore_all // restore the saved context

      7. iret - pop cs:eip/ss:esp/eflags from kernel stack

      8. User-mode process Y continues to run.

    Some special cases:

      1. Switching between a user-mode process and a kernel thread, or between two kernel threads, at a scheduling point inside interrupt handling: very similar to the general case, except that when the interrupt occurs while a kernel thread is running there is no user-mode/kernel-mode transition;

      2. A kernel thread calls schedule() actively: only a process context switch happens, with no interrupt context switch, so this is slightly simpler than the general case;

      3. The point at which a process-creating system call starts executing in the child process and returns to user mode, e.g. fork;

      4. Returning to user mode after loading a new executable, e.g. execve.
