1. How can the operating system provide the illusion of a nearly-endless supply of said CPUs?
The OS creates this illusion by virtualizing the CPU. The basic technique, known as time sharing of
the CPU, allows users to run as many concurrent processes as they would like. The potential cost is
performance, as each process will run more slowly if the CPU(s) must be shared. To implement virtualization
of the CPU, and to implement it well, the OS will need both some low-level machinery as
well as some high-level intelligence. We call the low-level machinery mechanisms. Mechanisms are
low-level methods or protocols that implement a needed piece of functionality. For example, we will learn
later how to implement a context switch, which gives the OS the ability to stop running one program
and start running another on a given CPU; this time-sharing mechanism is employed by all modern OSes.
On top of these mechanisms resides some of the intelligence in the OS, in the form of policies. Policies
are algorithms for making some kind of decision within the OS. For example, given a number of possible
programs to run on a CPU, which program should the OS run? A scheduling policy in the OS will make
this decision, likely using historical information (which program has run more over the last minute?),
workload knowledge (what types of programs are run?), and performance metrics (is the system optimizing
for interactive performance, or throughput?).
2. Tip: Separate Policy and Mechanism
In many operating systems, a common design paradigm is to separate high-level policies from low-level
mechanisms. You can think of the mechanism as providing the answer to a how question about a system:
for example, how does an operating system perform a context switch? The policy provides the answer to
a which question: for example, which process should the operating system run right now? Separating the
two allows one to easily change policies without having to rethink the mechanism, and is thus a form of
modularity, a general software design principle.
3. Tip: Time Sharing and Space Sharing
Time sharing is one of the most basic techniques used by an OS to share a resource. By allowing the resource
to be used for a little while by one entity, and then a little while by another, and so forth, the resource in
question (e.g., the CPU, or a network link) can be shared by many. The natural counterpart of time sharing
is space sharing, where a resource is divided (in space) among those who want to use it. For example, disk
space is naturally a space-shared resource, as once a block is assigned to a file, it is not likely to be assigned
to another file until the user deletes it.
4. The Abstraction: The Process
The abstraction provided by the OS of a running program is something we will call a process. To understand
what constitutes a process, we thus have to understand its machine state: what can a program read or update
when it is running? At any given time, what parts of the machine are important to the execution of this program?
One obvious component of machine state that comprises a process is its memory. Instructions lie in memory; the
data that the running program reads and writes sits in memory as well. Thus the memory that the process can
address (called its address space) is part of the process.
Also part of the process's machine state are registers; many instructions explicitly read or update registers, and
thus clearly they are important to the execution of the process. Note that there are some particularly special
registers that form part of this machine state. For example, the program counter (PC) (sometimes called the
instruction pointer or IP) tells us which instruction of the program will execute next; similarly, a stack pointer
and associated frame pointer are used to manage the stack for function parameters, local variables, and return addresses.
Finally, programs often access persistent storage devices too; such I/O information might include a list of the files the process currently has open.