Reposted from: https://blog.csdn.net/Vince_/article/details/88982802
Copyright notice: original article by the CSDN blogger "LoneHugo", licensed under CC 4.0 BY-SA; please include the original link and this notice when reposting.
Original link: https://blog.csdn.net/Vince_/article/details/88982802
Long read ahead~~
The scheduler is one of the core subsystems of a modern operating system, especially on a multitasking OS, where the system may run on a single-core or multi-core CPU and processes may be running or sitting in memory in a runnable, waiting state. Letting many tasks share resources while still giving users prompt responses for interactive work and sustaining high concurrent throughput poses a huge design challenge for a modern OS; the Linux scheduler likewise has to reconcile these seemingly conflicting requirements and adapt to different usage scenarios.
Linux is a complex modern operating system whose subsystems must cooperate to get work done efficiently. This article centers on the scheduler subsystem: it introduces the core scheduling concepts and explores how they relate to the other kernel components involved, in particular the interrupt subsystem (hardirq and softirq) and the timers on which scheduling depends, giving a deep and comprehensive view of the scheduling-related concepts and how they fit together.
Since the author has recently been debugging PowerPC-based chips, the examples below extract and walk through the corresponding kernel source for that architecture. The code is from the Linux 4.4 stable release; readers can follow along in the source.
1. Basic Concepts
To understand the scheduler subsystem we first need an overall picture of the scheduling flow; with that bird's-eye view in place, we can then dive into each node of the flow and fill in the details.
Early in boot, hardware interrupts are registered; the timer (tick) interrupt is a particularly important one. During scheduling, the kernel must keep refreshing each process's runtime state and set the reschedule flag to decide whether to preempt the running process; the tick interrupt does this work periodically. This brings up another design idea of modern OS scheduling, preemption (preempt), as opposed to non-preemptive or cooperative scheduling; the detailed difference between the two is given below. The tick is a hardware interrupt, and Linux does not support nested interrupts, so local interrupts are disabled (local_irq_disable) while an interrupt is being handled. To respond to other possible hardware events as quickly as possible, the handler must finish quickly and re-enable interrupts, which leads to the interrupt "bottom half", i.e. the softirq concept. Likewise, many timers (timer/hrtimer) are started during scheduling to carry out related work. When scheduling happens, different policies apply depending on a process's resource-usage type, which is why there are different scheduling classes implementing different algorithms for different scenarios. This article therefore analyzes scheduling from the angles of interrupts and softirqs, timers and high-resolution timers, preemptive and non-preemptive scheduling, real-time and normal process scheduling, and locks and concurrency, and ties these concepts together into a full reading of the Linux scheduler subsystem.
1.1 Preemptive
On a preemptive multitasking system, the scheduler decides when a running process stops and is switched out so that a new process can run; this is called preemption. Before being preempted, a process normally runs for a preset timeslice, whose length depends on the process priority and is decided by the scheduling class in use (scheduling classes are described later). The timer interrupt handler refreshes the process's runtime (vruntime); if the process has exceeded its allotted timeslice, the TIF_NEED_RESCHED flag is set in the current process's thread_info flags. At the next scheduling point, need_resched() checks this flag; if it is set, the kernel enters the scheduler, switches the current process out and picks a new one to run. The scheduling entry points are described in detail in a later section.
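To make this flag-based handoff concrete, here is a minimal conceptual sketch (not the kernel's actual code; timeslice_expired() is a hypothetical helper, while set_tsk_need_resched(), need_resched() and schedule() mirror real kernel interfaces):

/* Conceptual sketch only: the tick path marks the running task, and a later
 * scheduling point notices the mark and switches tasks. */
static void tick_path_sketch(struct task_struct *curr)
{
	if (timeslice_expired(curr))		/* hypothetical accounting check */
		set_tsk_need_resched(curr);	/* sets TIF_NEED_RESCHED on curr */
}

static void scheduling_point_sketch(void)
{
	if (need_resched())			/* tests TIF_NEED_RESCHED of current */
		schedule();			/* switch out the current task */
}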
1.2 Cooperative
The defining property of a non-preemptive, cooperative multitasking system is that other processes are scheduled only when the running process voluntarily gives up the CPU, which is called yielding. The scheduler has no control over the overall running state and time of processes; the biggest drawback is that a hung process can bring the whole system to a halt, with nothing else ever being scheduled. A process gives up the CPU and goes to sleep when it must wait for a particular signal or event, entering the scheduler by calling schedule() itself.
1.3 Nice
An ordinary process has a nice value that determines its priority; user space can set it with the nice system call. Nice ranges from -20 to +19, and timeslices are generally adjusted according to it: the higher the nice value, the smaller the timeslice that tends to be assigned. You can inspect it with ps -el. Think of nice as being "nicer" to other processes.
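For example, a user-space program can lower its own priority through the nice(2) call (a small illustration; error handling follows the glibc convention of clearing errno first, because -1 can also be a valid return value):

#include <errno.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	errno = 0;
	int val = nice(5);	/* be "nicer": add 5 to our current nice value */
	if (val == -1 && errno != 0) {
		perror("nice");
		return 1;
	}
	printf("new nice value: %d\n", val);	/* visible in the NI column of ps -el */
	return 0;
}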
1.4 Real-time priority
A process's real-time priority is a dimension separate from nice. It ranges from 0 to 99, and a larger value means higher priority; real-time processes generally take precedence over normal processes. ps -eo state,uid,pid,ppid,rtprio,time,comm shows the details, where "-" means the process is not real-time and a number is its real-time priority.
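As an illustration, a process can request a real-time policy and priority with sched_setscheduler(2); running this normally requires root or CAP_SYS_NICE:

#include <sched.h>
#include <stdio.h>

int main(void)
{
	struct sched_param sp = { .sched_priority = 50 };	/* rtprio in 0..99 */

	if (sched_setscheduler(0, SCHED_FIFO, &sp) == -1) {	/* 0 = this process */
		perror("sched_setscheduler");
		return 1;
	}
	printf("running as SCHED_FIFO with rtprio %d\n", sp.sched_priority);
	return 0;
}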
2. Scheduler Types
By resource usage, tasks can be divided into I/O-bound and processor-bound processes. "I/O-bound" can be understood broadly, e.g. network devices, keyboard and mouse: they need low latency but do not use the CPU heavily. Processor-bound processes use the CPU intensively, e.g. encryption/decryption or image processing. Scheduling differently by task type gives a better experience: interactive work stays responsive while plenty of CPU remains for compute-heavy processes. The scheduler therefore uses fairly elaborate algorithms to deliver both prompt responses and high throughput.
There are five scheduler classes (a minimal structural sketch of struct sched_class follows this list):
fair_sched_class, on recent kernels the CFS (Completely Fair Scheduler), the main scheduling policy on Linux; its group-scheduling support is controlled by CONFIG_FAIR_GROUP_SCHED.
rt_sched_class, the real-time scheduling class; its group-scheduling support is controlled by CONFIG_RT_GROUP_SCHED.
dl_sched_class, the deadline scheduling class; it ranks above the ordinary real-time class and serves tasks with explicit deadlines.
stop_sched_class, the highest-priority class; it is a form of real-time scheduling used by the per-CPU stopper thread (for example for task migration and stop_machine).
idle_sched_class, the lowest priority; it only runs when the system is idle. There is one idle task per runqueue; on the boot CPU it is the initial process init_task, which after initialization marks itself as the idle process and does nothing more.
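Each of these classes is an instance of struct sched_class, a table of hooks chained together by its ->next pointer. The sketch below is abridged from kernel/sched/sched.h and shows only a few representative members:

struct sched_class {
	const struct sched_class *next;		/* next lower-priority class */

	void (*enqueue_task)(struct rq *rq, struct task_struct *p, int flags);
	void (*dequeue_task)(struct rq *rq, struct task_struct *p, int flags);
	void (*check_preempt_curr)(struct rq *rq, struct task_struct *p, int flags);

	struct task_struct *(*pick_next_task)(struct rq *rq,
					      struct task_struct *prev);
	void (*put_prev_task)(struct rq *rq, struct task_struct *p);
	void (*task_tick)(struct rq *rq, struct task_struct *p, int queued);

	/* ... many other hooks (load balancing, priority changes, ...) omitted ... */
};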
3. Scheduler Initialization
start_kernel calls sched_init to initialize the scheduler. It first allocates alloc_size bytes of memory and initializes root_task_group, the system's default task group; during early boot every process belongs to this task group. Note that the members of root_task_group are per-CPU. After initialization, init_task is marked as the idle process. See the comments in the function below.
void __init sched_init(void)
{
int i, j;
unsigned long alloc_size = 0, ptr;
/* calculate the size to be allocated for root_task_group items.
 * some items in the struct task_group are per-cpu fields, so use
 * nr_cpu_ids here.
 */
#ifdef CONFIG_FAIR_GROUP_SCHED
alloc_size += 2 * nr_cpu_ids * sizeof(void **);
#endif
#ifdef CONFIG_RT_GROUP_SCHED
alloc_size += 2 * nr_cpu_ids * sizeof(void **);
#endif
if (alloc_size) {
/* allocate mem here. */
ptr = (unsigned long)kzalloc(alloc_size, GFP_NOWAIT);
#ifdef CONFIG_FAIR_GROUP_SCHED
root_task_group.se = (struct sched_entity **)ptr;
ptr += nr_cpu_ids * sizeof(void **);
root_task_group.cfs_rq = (struct cfs_rq **)ptr;
ptr += nr_cpu_ids * sizeof(void **);
#endif /* CONFIG_FAIR_GROUP_SCHED */
#ifdef CONFIG_RT_GROUP_SCHED
root_task_group.rt_se = (struct sched_rt_entity **)ptr;
ptr += nr_cpu_ids * sizeof(void **);
root_task_group.rt_rq = (struct rt_rq **)ptr;
ptr += nr_cpu_ids * sizeof(void **);
#endif /* CONFIG_RT_GROUP_SCHED */
}
#ifdef CONFIG_CPUMASK_OFFSTACK
/* Use dynamic allocation for cpumask_var_t, instead of putting them on the stack.
* This is a bit more expensive, but avoids stack overflow.
* Allocate load_balance_mask for every cpu below.
*/
for_each_possible_cpu(i) {
per_cpu(load_balance_mask, i) = (cpumask_var_t)kzalloc_node(
cpumask_size(), GFP_KERNEL, cpu_to_node(i));
}
#endif /* CONFIG_CPUMASK_OFFSTACK */
/* init the real-time task group cpu time percentage.
* the hrtimer of def_rt_bandwidth is initialized here.
*/
init_rt_bandwidth(&def_rt_bandwidth,
global_rt_period(), global_rt_runtime());
/* init the deadline task group cpu time percentage. */
init_dl_bandwidth(&def_dl_bandwidth,
global_rt_period(), global_rt_runtime());
#ifdef CONFIG_SMP
/* Initialize the default scheduling domain. A scheduling domain contains one or
 * more CPUs; load balancing runs within a domain, and domains are isolated from
 * each other. */
init_defrootdomain();
#endif
#ifdef CONFIG_RT_GROUP_SCHED
init_rt_bandwidth(&root_task_group.rt_bandwidth,
global_rt_period(), global_rt_runtime());
#endif /* CONFIG_RT_GROUP_SCHED */
#ifdef CONFIG_CGROUP_SCHED
/* Add the allocated and initialized root_task_group to the global task_groups list */
list_add(&root_task_group.list, &task_groups);
INIT_LIST_HEAD(&root_task_group.children);
INIT_LIST_HEAD(&root_task_group.siblings);
/* Initialize autogroup */
autogroup_init(&init_task);
#endif /* CONFIG_CGROUP_SCHED */
/* Walk every CPU's runqueue and initialize it */
for_each_possible_cpu(i) {
struct rq *rq;
rq = cpu_rq(i);
raw_spin_lock_init(&rq->lock);
/* number of schedulable entities (sched_entity) on this CPU's runqueue */
rq->nr_running = 0;
/* CPU load */
rq->calc_load_active = 0;
/* time of the next load update */
rq->calc_load_update = jiffies + LOAD_FREQ;
/* initialize the runqueue's cfs, rt and dl sub-queues */
init_cfs_rq(&rq->cfs);
init_rt_rq(&rq->rt);
init_dl_rq(&rq->dl);
#ifdef CONFIG_FAIR_GROUP_SCHED
/* total CPU share of the root task group */
root_task_group.shares = ROOT_TASK_GROUP_LOAD;
INIT_LIST_HEAD(&rq->leaf_cfs_rq_list);
/*
* How much cpu bandwidth does root_task_group get?
*
* In case of task-groups formed thr' the cgroup filesystem, it
* gets 100% of the cpu resources in the system. This overall
* system cpu resource is divided among the tasks of
* root_task_group and its child task-groups in a fair manner,
* based on each entity's (task or task-group's) weight
* (se->load.weight).
*
* In other words, if root_task_group has 10 tasks of weight
* 1024) and two child groups A0 and A1 (of weight 1024 each),
* then A0's share of the cpu resource is:
*
* A0's bandwidth = 1024 / (10*1024 + 1024 + 1024) = 8.33%
*
* We achieve this by letting root_task_group's tasks sit
* directly in rq->cfs (i.e root_task_group->se[] = NULL).
*/
/* Initialize cfs_bandwidth (the CPU resources available to normal tasks) and the
 * high-resolution timer used by this scheduling class */
init_cfs_bandwidth(&root_task_group.cfs_bandwidth);
/* set the task_group of this CPU's cfs_rq to tg, i.e. root_task_group */
/* set the cfs_rq's rq to this CPU's runqueue */
/* this is root_task_group's cfs_rq on this CPU */
/* the sched_entity se is NULL for now */
init_tg_cfs_entry(&root_task_group, &rq->cfs, NULL, i, NULL);
#endif /* CONFIG_FAIR_GROUP_SCHED */
rq->rt.rt_runtime = def_rt_bandwidth.rt_runtime;
#ifdef CONFIG_RT_GROUP_SCHED
/* analogous to init_tg_cfs_entry above; ties the structures to each other */
init_tg_rt_entry(&root_task_group, &rq->rt, NULL, i, NULL);
#endif
/* initialize the per-CPU load history kept by this runqueue */
for (j = 0; j < CPU_LOAD_IDX_MAX; j++)
rq->cpu_load[j] = 0;
/* time this runqueue's CPU load was last updated */
rq->last_load_update_tick = jiffies;
#ifdef CONFIG_SMP
/* initialize load-balancing related fields */
rq->sd = NULL;
rq->rd = NULL;
rq->cpu_capacity = rq->cpu_capacity_orig = SCHED_CAPACITY_SCALE;
rq->balance_callback = NULL;
rq->active_balance = 0;
rq->next_balance = jiffies;
rq->push_cpu = 0;
rq->cpu = i;
rq->online = 0;
rq->idle_stamp = 0;
rq->avg_idle = 2*sysctl_sched_migration_cost;
rq->max_idle_balance_cost = sysctl_sched_migration_cost;
INIT_LIST_HEAD(&rq->cfs_tasks);
/* attach this CPU's runqueue to the default root scheduling domain */
rq_attach_root(rq, &def_root_domain);
#ifdef CONFIG_NO_HZ_COMMON
/* dynamic-tick (NOHZ) flags; nothing in use yet at init time */
rq->nohz_flags = 0;
#endif
#ifdef CONFIG_NO_HZ_FULL
/* dynamic-tick field recording when the last scheduler tick happened */
rq->last_sched_tick = 0;
#endif
#endif
/* initialize the runqueue's high-resolution tick timer; not active yet */
init_rq_hrtick(rq);
atomic_set(&rq->nr_iowait, 0);
}
/* set the load weight of the init task */
set_load_weight(&init_task);
#ifdef CONFIG_PREEMPT_NOTIFIERS
/* initialize init_task's preemption notifier list */
INIT_HLIST_HEAD(&init_task.preempt_notifiers);
#endif
/*
* The boot idle thread does lazy MMU switching as well:
*/
atomic_inc(&init_mm.mm_count);
enter_lazy_tlb(&init_mm, current);
/*
* During early bootup we pretend to be a normal task:
*/
/* the init process uses the fair scheduling class during early boot */
current->sched_class = &fair_sched_class;
/*
* Make us the idle thread. Technically, schedule() should not be
* called from this thread, however somewhere below it might be,
* but because we are the idle thread, we just pick up running again
* when this runqueue becomes "idle".
*/
/* Turn the current process into the idle task: reinitialize its fields and switch its scheduling class to the idle scheduler */
init_idle(current, smp_processor_id());
calc_load_update = jiffies + LOAD_FREQ;
#ifdef CONFIG_SMP
zalloc_cpumask_var(&sched_domains_tmpmask, GFP_NOWAIT);
/* May be allocated at isolcpus cmdline parse time */
if (cpu_isolated_map == NULL)
zalloc_cpumask_var(&cpu_isolated_map, GFP_NOWAIT);
idle_thread_set_boot_cpu();
set_cpu_rq_start_time();
#endif
/* Initialize the fair scheduling class; in practice this registers run_rebalance_domains as the SCHED_SOFTIRQ softirq handler, which performs load balancing. */
/* The open question here: when is the SCHED_SOFTIRQ softirq raised? */
init_sched_fair_class();
/* Mark the scheduler as running. At this point the only process is init_task,
 * which is the idle process; the timer has not started yet, so nothing else can
 * be scheduled, and we return to start_kernel to continue initialization.
 */
scheduler_running = 1;
}
After sched_init, execution returns to start_kernel; the scheduler-related steps that follow are:
init_IRQ
This function initializes the IRQ stacks and covers all software and hardware interrupts in the system. The timer interrupt, which drives scheduling, is initialized here as well, both its hardirq part and its softirq bottom half. Interrupts are covered in detail in a later section; for now it is enough to know where this step sits in the overall init flow.
init_timers
Here the timer subsystem is initialized and run_timer_softirq is registered as the handler for the TIMER_SOFTIRQ softirq; softirqs themselves are covered at the end of the article. Given that the softirq is registered here, where is it activated or raised, and what is it for?
As the section on registering the timer interrupt will show, tick_handle_periodic is the clock-event handler: it is installed as the timer interrupt's callback in time_init and runs when a timer interrupt occurs. It eventually calls tick_periodic, which calls update_process_times, which in turn calls run_local_timers to raise TIMER_SOFTIRQ; run_local_timers also calls hrtimer_run_queues to run expired high-resolution timers. This is the typical structure of interrupt handling: the hardirq does only the critical part, raises a softirq, re-enables hardware interrupts and leaves the bulk of the work to the softirq bottom half. The exact role of this softirq is described later; for now, note that it fires all expired timers.
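Stripped down to the calls that matter for that question, the registration and the raising of TIMER_SOFTIRQ look roughly like this (a sketch, not verbatim 4.4 source; open_softirq(), raise_softirq() and hrtimer_run_queues() are the real interfaces involved):

/* Sketch: init_timers() registers the bottom-half handler once at boot. */
void __init init_timers_sketch(void)
{
	/* ... per-cpu timer base setup omitted ... */
	open_softirq(TIMER_SOFTIRQ, run_timer_softirq);
}

/* Sketch: run_local_timers() is called from update_process_times() on every
 * tick; it runs expired hrtimers and defers regular timer expiry to softirq. */
void run_local_timers_sketch(void)
{
	hrtimer_run_queues();
	raise_softirq(TIMER_SOFTIRQ);
}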
time_init
This performs the clock-related initialization. As shown later, the hardware interrupt vector table is registered in the early assembly phase of boot, but the clock-event device and its handler are not set up there; here init_decrementer_clockevent initializes the clock-event device for the timer interrupt and installs tick_handle_periodic as its event handler, and tick_setup_hrtimer_broadcast registers the high-resolution broadcast device and its callback, which will actually run when the interrupt fires. At this point the hardware timer interrupt is live.
sched_clock_postinit and sched_clock_init
These start the timers that periodically update the scheduler clock.
4. The Scheduling Process
4.1 The schedule() interface
Preemption is disabled first to prevent re-entering the scheduler, then __schedule is called. The current task is handled first: if it has a pending signal, it stays marked TASK_RUNNING; if it is meant to sleep, deactivate_task removes it from the runqueue so it can be put on the appropriate wait queue. pick_next_task then selects the next process to run, and context_switch switches to it.
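The flow can be summarized by the following heavily simplified sketch (illustrative only; the real __schedule() also handles runqueue locking, RETRY_TASK, statistics and more):

static void __schedule_sketch(bool preempt)
{
	struct rq *rq = this_rq();			/* this CPU's runqueue */
	struct task_struct *prev = rq->curr, *next;

	if (!preempt && prev->state) {			/* prev wants to sleep */
		if (signal_pending_state(prev->state, prev))
			prev->state = TASK_RUNNING;	/* pending signal: stay runnable */
		else
			deactivate_task(rq, prev, DEQUEUE_SLEEP); /* leave the runqueue */
	}

	next = pick_next_task(rq, prev);		/* ask the scheduler classes */
	if (next != prev)
		context_switch(rq, prev, next);		/* switch mm and CPU context */
}

void schedule_sketch(void)
{
	preempt_disable();				/* no re-entry while switching */
	__schedule_sketch(false);
	sched_preempt_enable_no_resched();
}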
4.2 pick_next_task
It first checks whether the current process's sched_class is fair_sched_class, i.e. CFS. If it is, and the total number of schedulable entities on this CPU's runqueue equals the number under CFS and its task groups, i.e. nothing is queued in RT or any other class (rq->nr_running == rq->cfs.h_nr_running), it simply returns the result of fair_sched_class's pick_next_task. Otherwise it iterates over all scheduling classes with for_each_class(class) and returns the first non-NULL result of class->pick_next_task.
What matters here is the for_each_class traversal: it starts from sched_class_highest, which is stop_sched_class.
#define sched_class_highest (&stop_sched_class)
#define for_each_class(class) \
   for (class = sched_class_highest; class; class = class->next)
extern const struct sched_class stop_sched_class;
extern const struct sched_class dl_sched_class;
extern const struct sched_class rt_sched_class;
extern const struct sched_class fair_sched_class;
extern const struct sched_class idle_sched_class;
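Putting 4.2 together, the selection logic looks roughly like the sketch below (simplified from kernel/sched/core.c; the RETRY_TASK handling and idle balancing are omitted):

static struct task_struct *pick_next_task_sketch(struct rq *rq,
						 struct task_struct *prev)
{
	const struct sched_class *class;
	struct task_struct *p;

	/* Fast path: everything runnable on this rq belongs to CFS. */
	if (likely(prev->sched_class == &fair_sched_class &&
		   rq->nr_running == rq->cfs.h_nr_running))
		return fair_sched_class.pick_next_task(rq, prev);

	/* Slow path: walk the classes from highest to lowest priority. */
	for_each_class(class) {
		p = class->pick_next_task(rq, prev);
		if (p)
			return p;
	}

	BUG();	/* unreachable: the idle class always returns a task */
}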
4.3 How the scheduler classes are linked
Listed in priority order, they form a singly linked list:
stop_sched_class->next->dl_sched_class->next->rt_sched_class->next->fair_sched_class->next->idle_sched_class->next = NULL
4.4 Scheduler class registration
At build time, early_initcall(cpu_stop_init) sets up the stop machinery: cpu_stop_init registers cpu_stop_threads, and when its create method runs it ends up calling cpu_stop_create -> sched_set_stop_task, which registers the stop task with the stop sched_class. The path to create is:
cpu_stop_init->
smpboot_register_percpu_thread->
smpboot_register_percpu_thread_cpumask->
__smpboot_create_thread->
cpu_stop_threads.create (i.e. cpu_stop_create)
Back to pick_next_task: since stop_sched_class, as the highest-priority class, heads the chain linking all scheduler classes in the system, the traversal examines every sched_class from highest priority downward until it finds a runnable task to return. If the whole system is idle, it eventually falls back to the initial process init_task, which was set up as the idle task that runs whenever the system has nothing else to do, as detailed in the sched_init walkthrough above.
5. Scheduling Entry Points
The timer interrupt is responsible for decrementing the running process's timeslice count. When the count reaches zero, need_resched is set and the kernel runs the scheduler as soon as possible.
In other words, the timer interrupt updates the running process's time accounting; when its timeslice is used up, need_resched is set and the running process is switched out on the next pass through the scheduler.
RTC (Real-Time Clock)
The real-time clock is a non-volatile device that stores the system time. At boot, the kernel reads the time from this CMOS-attached device and uses it to set the system time.
System Timer
The system timer is implemented by an electronic clock running at a programmable frequency and drives the periodic timer interrupt. Some architectures instead use a decrementer: a counter is loaded with an initial value and counts down at a fixed frequency until it reaches zero, at which point the timer interrupt is triggered.
The timer interrupt is broken into two pieces: an architecture-dependent and an architecture-independent routine.
The architecture-dependent routine is registered as the interrupt handler for the system timer and, thus, runs when the timer interrupt hits. Its exact job depends on the given architecture, of course, but most handlers perform at least the following work:
1. Obtain the xtime_lock lock, which protects access to jiffies_64 and the wall
time value, xtime.
2. Acknowledge or reset the system timer as required.
3. Periodically save the updated wall time to the real time clock.
4. Call the architecture-independent timer routine, tick_periodic().
The architecture-independent routine, tick_periodic(), performs much more work:
1. Increment the jiffies_64 count by one. (This is safe, even on 32-bit architectures,
because the xtime_lock lock was previously obtained.)
2. Update resource usages, such as consumed system and user time, for the currently
running process.
3. Run any dynamic timers that have expired (discussed in the following section).
4. Execute scheduler_tick(), as discussed in Chapter 4.
5. Update the wall time, which is stored in xtime.
6. Calculate the infamous load average.
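For reference, the architecture-independent routine those six steps describe looks roughly like this in kernel/time/tick-common.c (lightly abridged, comments added):

static void tick_periodic(int cpu)
{
	if (tick_do_timer_cpu == cpu) {
		write_seqlock(&jiffies_lock);

		/* Keep track of the next tick event */
		tick_next_period = ktime_add(tick_next_period, tick_period);

		do_timer(1);			/* jiffies_64++ and load-average bookkeeping */
		write_sequnlock(&jiffies_lock);
		update_wall_time();		/* update the wall-clock (xtime) value */
	}

	update_process_times(user_mode(get_irq_regs()));	/* accounting, timers, scheduler_tick() */
	profile_tick(CPU_PROFILING);
}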
6. The Timer Interrupt
The timer interrupt drives scheduling and preemption: it updates the running process's time accounting and the reschedule flag, which determines whether scheduling will happen. The following uses the PowerPC FSL BookE ppce500 chip as a concrete example; other architectures differ in detail but follow the same design.
6.1 Registering the timer interrupt
The interrupt handlers are registered at the very start of boot, in the assembly phase that runs before start_kernel; once the system is up, the interrupt callback runs whenever a timer interrupt occurs.
The PowerPC kernel entry (head) files live in arch/powerpc/kernel/; for e500 the entry file is head_fsl_booke.S, which defines the interrupt vector table:
interrupt_base:
/* Critical Input Interrupt */
CRITICAL_EXCEPTION(0x0100, CRITICAL, CriticalInput, unknown_exception)
......
/* Decrementer Interrupt */
DECREMENTER_EXCEPTION
......
The timer interrupt entry is DECREMENTER_EXCEPTION, which is expanded in the header arch/powerpc/kernel/head_booke.h:
#define EXC_XFER_TEMPLATE(hdlr, trap, msr, copyee, tfer, ret)	\
	li	r10,trap;					\
	stw	r10,_TRAP(r11);					\
	lis	r10,msr@h;					\
	ori	r10,r10,msr@l;					\
	copyee(r10, r9);					\
	bl	tfer;						\
	.long	hdlr;						\
	.long	ret

#define EXC_XFER_LITE(n, hdlr)		\
	EXC_XFER_TEMPLATE(hdlr, n+1, MSR_KERNEL, NOCOPY, transfer_to_handler, \
			  ret_from_except)

#define DECREMENTER_EXCEPTION						      \
	START_EXCEPTION(Decrementer)					      \
	NORMAL_EXCEPTION_PROLOG(DECREMENTER);				      \
	lis	r0,TSR_DIS@h;		/* Setup the DEC interrupt mask */    \
	mtspr	SPRN_TSR,r0;		/* Clear the DEC interrupt */	      \
	addi	r3,r1,STACK_FRAME_OVERHEAD;				      \
	EXC_XFER_LITE(0x0900, timer_interrupt)
Now look at the timer_interrupt function:
static void __timer_interrupt(void)
{
struct pt_regs *regs = get_irq_regs();
u64 *next_tb = this_cpu_ptr(&decrementers_next_tb);
struct clock_event_device *evt = this_cpu_ptr(&decrementers);
u64 now;
trace_timer_interrupt_entry(regs);
if (test_irq_work_pending()) {
clear_irq_work_pending();
irq_work_run();
}
now = get_tb_or_rtc();
if (now >= *next_tb) {
*next_tb = ~(u64)0;
if (evt->event_handler)
evt->event_handler(evt);
__this_cpu_inc(irq_stat.timer_irqs_event);
} else {
now = *next_tb - now;
if (now <= DECREMENTER_MAX)
set_dec((int)now);
/* We may have raced with new irq work */
if (test_irq_work_pending())
set_dec(1);
__this_cpu_inc(irq_stat.timer_irqs_others);
}
#ifdef CONFIG_PPC64
/* collect purr register values often, for accurate calculations */
if (firmware_has_feature(FW_FEATURE_SPLPAR)) {
struct cpu_usage *cu = this_cpu_ptr(&cpu_usage_array);
cu->current_tb = mfspr(SPRN_PURR);
}
#endif
trace_timer_interrupt_exit(regs);
}
/*
* timer_interrupt - gets called when the decrementer overflows,
* with interrupts disabled.
*/
void timer_interrupt(struct pt_regs * regs)
{
struct pt_regs *old_regs;
u64 *next_tb = this_cpu_ptr(&decrementers_next_tb);
/* Ensure a positive value is written to the decrementer, or else
* some CPUs will continue to take decrementer exceptions.
*/
set_dec(DECREMENTER_MAX);
/* Some implementations of hotplug will get timer interrupts while
* offline, just ignore these and we also need to set
* decrementers_next_tb as MAX to make sure __check_irq_replay
* don't replay timer interrupt when return, otherwise we'll trap
* here infinitely :(
*/
if (!cpu_online(smp_processor_id())) {
*next_tb = ~(u64)0;
return;
}
/* Conditionally hard-enable interrupts now that the DEC has been
* bumped to its maximum value
*/
may_hard_irq_enable();
#if defined(CONFIG_PPC32) && defined(CONFIG_PPC_PMAC)
if (atomic_read(&ppc_n_lost_interrupts) != 0)
do_IRQ(regs);
#endif
old_regs = set_irq_regs(regs);
irq_enter();
__timer_interrupt();
irq_exit();
set_irq_regs(old_regs);
}
__timer_interrupt calls evt->event_handler. What is event_handler here, and where is it registered?
The answer is tick_handle_periodic. This function is the real handler of the clock event: the interrupt handler above only prepares for it, saving registers and setting up the entry path, while the event_handler does the work the interrupt actually exists for. Its definition is:
/*
* Event handler for periodic ticks
*/
void tick_handle_periodic(struct clock_event_device *dev)
{
int cpu = smp_processor_id();
ktime_t next = dev->next_event;
tick_periodic(cpu);
#if defined(CONFIG_HIGH_RES_TIMERS) || defined(CONFIG_NO_HZ_COMMON)
/*
* The cpu might have transitioned to HIGHRES or NOHZ mode via
* update_process_times() -> run_local_timers() ->
* hrtimer_run_queues().
*/
if (dev->event_handler != tick_handle_periodic)
return;
#endif
if (!clockevent_state_oneshot(dev))
return;
for (;;) {
/*
* Setup the next period for devices, which do not have
* periodic mode:
*/
next = ktime_add(next, tick_period);
if (!clockevents_program_event(dev, next, false))
return;
/*
* Have to be careful here. If we're in oneshot mode,
* before we call tick_periodic() in a loop, we need
* to be sure we're using a real hardware clocksource.
* Otherwise we could get trapped in an infinite
* loop, as the tick_periodic() increments jiffies,
* which then will increment time, possibly causing
* the loop to trigger again and again.
*/
if (timekeeping_valid_for_hres())
tick_periodic(cpu);
}
}
tick_handle_periodic is registered and invoked along the following path:
start_kernel->time_init->init_decrementer_clockevent->register_decrementer_clockevent->clockevents_register_device->tick_check_new_device->tick_setup_periodic->tick_set_periodic_handler->tick_handle_periodic->tick_periodic->update_process_times->scheduler_tick
The latter part of the chain is the execution path of tick_handle_periodic. Note that scheduler_tick calls the current scheduling class's task_tick hook: under CFS this is fair_sched_class's task_tick; in rt_sched_class it is implemented by task_tick_rt, shown below:
static void task_tick_rt(struct rq *rq, struct task_struct *p, int queued)
{
struct sched_rt_entity *rt_se = &p->rt;
update_curr_rt(rq);
watchdog(rq, p);
/*
* RR tasks need a special form of timeslice management.
* FIFO tasks have no timeslices.
*/
if (p->policy != SCHED_RR)
return;
if (--p->rt.time_slice)
return;
p->rt.time_slice = sched_rr_timeslice;
/*
* Requeue to the end of queue if we (and all of our ancestors) are not
* the only element on the queue
*/
for_each_sched_rt_entity(rt_se) {
if (rt_se->run_list.prev != rt_se->run_list.next) {
requeue_task_rt(rq, p, 0);
resched_curr(rq);
return;
}
}
}
As the code shows, if the timeslice is not yet exhausted it simply returns; otherwise the task's real-time timeslice is reset to sched_rr_timeslice, the scheduling entity is moved to the end of the runqueue, and resched_curr is called to record that a reschedule is needed before returning. This is the round-robin (RR) idea of real-time scheduling.
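resched_curr() itself is small; conceptually it marks the task currently running on that runqueue and, if the runqueue belongs to another CPU, kicks that CPU with an IPI so the flag is noticed promptly. A simplified sketch (not verbatim source):

void resched_curr_sketch(struct rq *rq)
{
	struct task_struct *curr = rq->curr;
	int cpu = cpu_of(rq);

	if (test_tsk_need_resched(curr))	/* already marked, nothing to do */
		return;

	if (cpu == smp_processor_id()) {
		set_tsk_need_resched(curr);	/* set TIF_NEED_RESCHED locally */
		set_preempt_need_resched();
		return;
	}

	if (set_nr_and_not_polling(curr))	/* remote CPU is not polling the flag... */
		smp_send_reschedule(cpu);	/* ...so send it a reschedule IPI */
}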
This raises a new question: once TIF_NEED_RESCHED has been set on a process, when does the actual scheduling happen?
There are four scheduling entry points:
return from an interrupt;
return from a system call to user space;
a process voluntarily giving up the CPU and scheduling;
return to kernel space after signal handling.
Scheduling on return from the timer interrupt is case 1. Taking ppce500 as the example again, here is how the scheduling happens:
All the exception returns funnel through the entry RET_FROM_EXC_LEVEL, which calls user_exc_return and lands in do_work,
and do_work is the common entry point of the following code:
do_work: /* r10 contains MSR_KERNEL here */
andi. r0,r9,_TIF_NEED_RESCHED
beq do_user_signal
As the code shows, if the reschedule flag is not set, execution goes through the signal path (do_user_signal, below) and ultimately runs restore_user to return to the previous context.
do_user_signal: /* r10 contains MSR_KERNEL here */
ori r10,r10,MSR_EE
SYNC
MTMSRD(r10) /* hard-enable interrupts */
/* save r13-r31 in the exception frame, if not already done */
lwz r3,_TRAP(r1)
andi. r0,r3,1
beq 2f
SAVE_NVGPRS(r1)
rlwinm r3,r3,0,0,30
stw r3,_TRAP(r1)
2: addi r3,r1,STACK_FRAME_OVERHEAD
mr r4,r9
bl do_notify_resume
REST_NVGPRS(r1)
b recheck
do_resched is reached from recheck, also defined in entry_32.S:
recheck:
/* Note: And we don't tell it we are disabling them again
* neither. Those disable/enable cycles used to peek at
* TI_FLAGS aren't advertised.
*/
LOAD_MSR_KERNEL(r10,MSR_KERNEL)
SYNC
MTMSRD(r10) /* disable interrupts */
CURRENT_THREAD_INFO(r9, r1)
lwz r9,TI_FLAGS(r9)
andi. r0,r9,_TIF_NEED_RESCHED
bne- do_resched
andi. r0,r9,_TIF_USER_WORK_MASK
beq restore_user
In entry_32.S we can see that do_resched calls schedule to perform the scheduling:
do_resched: /* r10 contains MSR_KERNEL here */
/* Note: We don't need to inform lockdep that we are enabling
* interrupts here. As far as it knows, they are already enabled
*/
ori r10,r10,MSR_EE
SYNC
MTMSRD(r10) /* hard-enable interrupts */
bl schedule
6.2 Execution path of the timer interrupt
In the vector macro shown earlier there is a step bl tfer; tfer is transfer_to_handler or transfer_to_handler_full, and for the timer interrupt it is transfer_to_handler, which does the preparation needed before the handler runs, then jumps to the handler hdlr, and finally goes to ret. ret is ret_from_except or ret_from_except_full, here ret_from_except, which calls resume_kernel and may then enter preempt_schedule_irq to perform the scheduling:
/*
* this is the entry point to schedule() from kernel preemption
* off of irq context.
* Note, that this is called and return with irqs disabled. This will
* protect us against recursive calling from irq.
*/
asmlinkage __visible void __sched preempt_schedule_irq(void)
{
enum ctx_state prev_state;
/* Catch callers which need to be fixed */
BUG_ON(preempt_count() || !irqs_disabled());
prev_state = exception_enter();
do {
preempt_disable();
local_irq_enable();
__schedule(true);
local_irq_disable();
sched_preempt_enable_no_resched();
} while (need_resched());
exception_exit(prev_state);
}
Next, look at preempt_disable and local_irq_disable:
static __always_inline volatile int *preempt_count_ptr(void)
{
return &current_thread_info()->preempt_count;
}
In fact, disabling preemption only adds 1 to the current task's preempt_count; the call is followed by barrier(), which prevents compiler reordering of the surrounding memory accesses and so provides the needed synchronization point.
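For reference, in a CONFIG_PREEMPT build the pair looks essentially like this (from include/linux/preempt.h, lightly annotated):

#define preempt_disable() \
do { \
	preempt_count_inc();	/* bump this thread's preempt_count by 1 */ \
	barrier();		/* keep the compiler from reordering around it */ \
} while (0)

#define preempt_enable() \
do { \
	barrier(); \
	if (unlikely(preempt_count_dec_and_test())) \
		__preempt_schedule();	/* a preemption was requested while disabled */ \
} while (0)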
/*
* Wrap the arch provided IRQ routines to provide appropriate checks.
*/
#define raw_local_irq_disable() arch_local_irq_disable()
#define raw_local_irq_enable() arch_local_irq_enable()
#define raw_local_irq_save(flags)			\
	do {						\
		typecheck(unsigned long, flags);	\
		flags = arch_local_irq_save();		\
	} while (0)
#define raw_local_irq_restore(flags)			\
	do {						\
		typecheck(unsigned long, flags);	\
		arch_local_irq_restore(flags);		\
	} while (0)
#define raw_local_save_flags(flags)			\
	do {						\
		typecheck(unsigned long, flags);	\
		flags = arch_local_save_flags();	\
	} while (0)
#define raw_irqs_disabled_flags(flags)			\
	({						\
		typecheck(unsigned long, flags);	\
		arch_irqs_disabled_flags(flags);	\
	})
#define raw_irqs_disabled() (arch_irqs_disabled())
#define raw_safe_halt() arch_safe_halt()
#define local_irq_enable() do { raw_local_irq_enable(); } while (0)
#define local_irq_disable() do { raw_local_irq_disable(); } while (0)
#define local_irq_save(flags)				\
	do {						\
		raw_local_irq_save(flags);		\
	} while (0)
#define local_irq_restore(flags) do { raw_local_irq_restore(flags); } while (0)
#define safe_halt() do { raw_safe_halt(); } while (0)
The architecture-specific IRQ operations are defined as follows:
static inline void arch_local_irq_restore(unsigned long flags)
{
#if defined(CONFIG_BOOKE)
asm volatile("wrtee %0" : : "r" (flags) : "memory");
#else
mtmsr(flags);
#endif
}
static inline unsigned long arch_local_irq_save(void)
{
unsigned long flags = arch_local_save_flags();
#ifdef CONFIG_BOOKE
asm volatile("wrteei 0" : : : "memory");
#else
SET_MSR_EE(flags & ~MSR_EE);
#endif
return flags;
}
static inline void arch_local_irq_disable(void)
{
#ifdef CONFIG_BOOKE
asm volatile("wrteei 0" : : : "memory");
#else
arch_local_irq_save();
#endif
}
static inline void arch_local_irq_enable(void)
{
#ifdef CONFIG_BOOKE
asm volatile("wrteei 1" : : : "memory");
#else
unsigned long msr = mfmsr();
SET_MSR_EE(msr | MSR_EE);
#endif
}
static inline bool arch_irqs_disabled_flags(unsigned long flags)
{
return (flags & MSR_EE) == 0;
}
static inline bool arch_irqs_disabled(void)
{
return arch_irqs_disabled_flags(arch_local_save_flags());
}
#define hard_irq_disable() arch_local_irq_disable()
6.3 IRQ overview
Here is a look at the IRQ handling on ppce500:
e500 is a BookE chip and differs from classic PowerPC, mainly in how the vector address of an external interrupt or exception is obtained. On the classic architecture the offset is derived from the exception type, and the physical address of the exception vector is:
when MSR[IP] = 0, Vector = offset;
when MSR[IP] = 1, Vector = offset | 0xFFF00000;
where MSR[IP] is the Interrupt Prefix bit of the Machine State Register, which selects the address prefix of the interrupt vectors.
A BookE chip instead takes the offset from the IVOR (Interrupt Vector Offset Register) of the exception type (only the low 16 bits, with the lowest 4 bits cleared) and combines it with the upper 16 bits of the IVPR (Interrupt Vector Prefix Register) to form the vector address:
Vector = (IVORn & 0xFFF0) | (IVPR & 0xFFFF0000);
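A small worked example of this computation, with made-up register values purely for illustration:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint32_t ivpr = 0xc0000000;	/* hypothetical interrupt vector prefix */
	uint32_t ivor = 0x00000900;	/* hypothetical offset for one exception type */

	uint32_t vector = (ivor & 0xFFF0) | (ivpr & 0xFFFF0000);

	printf("vector = 0x%08x\n", vector);	/* prints vector = 0xc0000900 */
	return 0;
}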
Note that, unlike classic PowerPC, BookE interrupt vectors are effective addresses, i.e. kernel virtual addresses in Linux. The BookE MMU is always on, so the processor never runs in real mode; during early init, address translation is set up by manually creating translation entries in the TLB, and once the page tables are built the TLB is refilled from them. The relevant comment from the kernel source:
/*
* Interrupt vector entry code
*
* The Book E MMUs are always on so we don't need to handle
* interrupts in real mode as with previous PPC processors. In
* this case we handle interrupts in the kernel virtual address
* space.
*
* Interrupt vectors are dynamically placed relative to the
* interrupt prefix as determined by the address of interrupt_base.
* The interrupt vectors offsets are programmed using the labels
* for each interrupt vector entry.
*
* Interrupt vectors must be aligned on a 16 byte boundary.
* We align on a 32 byte cache line boundary for good measure.
*/
Below is the reference manual's description of the Fixed-Interval Timer Interrupt:
Fixed-Interval Timer Interrupt, A fixed-interval timer interrupt occurs when no higher priority exception exists, a fixed-interval timer exception exists (TSR[FIS] = 1), and the interrupt is enabled (TCR[FIE] = 1 and (MSR[EE] = 1 or (MSR[GS] = 1 ))). See Section 9.5, “Fixed-Interval Timer.”
The fixed-interval timer period is determined by TCR[FPEXT] || TCR[FP], which specifies one of 64 bit locations of the time base used to signal a fixed-interval timer exception on a transition from 0 to 1. TCR[FPEXT] || TCR[FP] = 0b0000_00 selects TBU[0]. TCR[FPEXT] || TCR[FP] = 0b1111_11 selects TBL[63].
NOTE: Software Considerations
MSR[EE] also enables other asynchronous interrupts.
TSR[FIS] is set when a fixed-interval timer exception exists.
SRR0, SRR1, and MSR, are updated as shown in this table.
Register  Setting
SRR0      Set to the effective address of the next instruction to be executed.
SRR1      Set to the MSR contents at the time of the interrupt.
MSR       CM is set to EPCR[ICM]; RI, ME, DE, CE are unchanged; all other defined MSR bits are cleared.
TSR       FIS is set when a fixed-interval timer exception exists, not as a result of the interrupt. See Section 4.7.2, "Timer Status Register (TSR)."
Instruction execution resumes at address IVPR[0–47] || IVOR11[48–59] || 0b0000.
NOTE: Software Considerations
To avoid redundant fixed-interval timer interrupts, before reenabling MSR[EE], the interrupt handler must clear TSR[FIS] by writing a word to TSR using mtspr with a 1 in any bit position to be cleared and 0 in all others. Data written to the TSR is not direct data, but a mask. Writing a 1 to this bit causes it to be cleared; writing a 0 has no effect.