I once read in a book on operating systems that the MMU (memory management unit) is responsible for translating virtual memory addresses into physical memory addresses.
Later, while studying page tables, I came across this description: "When a process requests access to its memory, it is the responsibility of the operating system to map the virtual address provided by the process to the physical address where that memory is stored. The page table is where the operating system stores its mappings of virtual addresses to physical addresses." By that account, it is the operating system that translates virtual addresses into physical addresses.
So which is it, the MMU hardware or the operating system?
In fact, both statements are correct, and both mechanisms exist in modern computers.
Have a look:
A translation lookaside buffer (TLB) is a CPU cache that memory management hardware uses to improve virtual address translation speed.
The TLB is typically implemented as content-addressable memory (CAM). The CAM search key is the virtual address and the search result is a physical address.
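To make the key/result relationship concrete, here is a minimal C++ sketch of a fully associative TLB lookup. The page size, TLB size, and entry/field names are illustrative assumptions, not any particular CPU's format: the virtual page number plays the role of the CAM search key, and the stored physical frame number is the search result.

// A minimal software model of a TLB lookup, assuming 4 KiB pages and a
// tiny fully associative TLB. Entry layout and names are illustrative only.
#include <cstdint>
#include <cstdio>
#include <optional>

struct TlbEntry {
    bool     valid = false;
    uint64_t vpn   = 0;   // virtual page number (the CAM search key)
    uint64_t pfn   = 0;   // physical frame number (the search result)
};

constexpr int      kTlbSize  = 16;
constexpr uint64_t kPageBits = 12;                   // 4 KiB pages
constexpr uint64_t kOffMask  = (1ULL << kPageBits) - 1;

TlbEntry tlb[kTlbSize];

// A real CAM compares every entry against the key in parallel in hardware;
// this loop is the sequential software equivalent.
std::optional<uint64_t> tlb_lookup(uint64_t vaddr) {
    uint64_t vpn = vaddr >> kPageBits;
    for (const TlbEntry& e : tlb) {
        if (e.valid && e.vpn == vpn) {
            return (e.pfn << kPageBits) | (vaddr & kOffMask);  // hit
        }
    }
    return std::nullopt;                                       // miss
}

int main() {
    tlb[0] = {true, 0x12345, 0x00abc};
    if (auto pa = tlb_lookup(0x12345678)) {
        std::printf("TLB hit: physical address 0x%llx\n",
                    static_cast<unsigned long long>(*pa));
    }
}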
Software TLB and hardware TLB
A software TLB, or software-loaded TLB, is a TLB that relies on the operating system to load entries from the page table. The instruction sets of CPUs that implement such a TLB provide instructions for loading entries into any slot in the TLB, and the format of a TLB entry is defined as part of the ISA.[1] SPARC and MIPS are examples of designs that implement a software-loaded TLB.
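Conceptually, a software-loaded TLB refill looks like the sketch below, loosely modeled on the MIPS-style approach: a TLB miss traps into the OS, which looks up its page table and writes the translation into a TLB slot. page_table_lookup() and tlb_write_slot() here are hypothetical stand-ins for the kernel's own data structures and the ISA's privileged TLB-write instruction.

#include <cstdint>

struct Translation { uint64_t vpn = 0, pfn = 0; bool valid = false; };

// Hypothetical stand-ins: in a real kernel these would be the page-table
// walk and a wrapper around the ISA's TLB-write instruction.
static Translation fake_page_table[1] = {{0x12345, 0x00abc, true}};
Translation page_table_lookup(uint64_t vpn) {
    for (const Translation& t : fake_page_table)
        if (t.valid && t.vpn == vpn) return t;
    return {};
}
static Translation tlb_slots[16];
void tlb_write_slot(int slot, const Translation& t) { tlb_slots[slot] = t; }

// The OS-side refill handler, invoked by the CPU on a TLB miss.
void tlb_miss_handler(uint64_t faulting_vaddr) {
    uint64_t vpn = faulting_vaddr >> 12;            // assuming 4 KiB pages
    Translation t = page_table_lookup(vpn);
    if (!t.valid) {
        return;   // no mapping at all: a genuine page fault, handled elsewhere
    }
    static int next_slot = 0;                       // trivial round-robin replacement
    tlb_write_slot(next_slot, t);
    next_slot = (next_slot + 1) % 16;
    // Returning from the trap lets the CPU retry the access, which now hits.
}

int main() { tlb_miss_handler(0x12345678); }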
A hardware-loaded TLB (or hardware-walked TLB), as on x86, is designed to load entries from the page table invisibly to the operating system: filling the TLB with address translations is under the control of dedicated hardware. In fact, if the TLB were removed from the CPU (not that this would be a good idea), programs would behave no differently, except that they would take longer to run. Moreover, since the TLB is invisible to the OS and the ISA, the format of its entries can be defined as needed, and this definition can change from CPU to CPU without breaking compatibility for programs. Processors with a hardware-walked TLB may still require TLB-flushing instructions, as ARM processors do, while other processors, such as x86, may implicitly flush the TLB on certain operations.
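For contrast, the sketch below shows roughly what a hardware page-table walker does on a TLB miss, using a simplified two-level 32-bit layout in the spirit of classic x86 (10-bit directory index, 10-bit table index, 12-bit page offset). In a hardware-walked design this logic lives in silicon; the OS only maintains the page-table contents and the root pointer (CR3 on x86). read_phys() is a hypothetical helper standing in for a physical-memory read.

#include <cstdint>
#include <cstdio>
#include <map>
#include <optional>

constexpr uint32_t kPresent = 0x1;

// Hypothetical stand-in for reading one word of physical memory;
// real hardware would issue an actual memory access here.
std::map<uint32_t, uint32_t> phys_mem;
uint32_t read_phys(uint32_t paddr) { return phys_mem.count(paddr) ? phys_mem[paddr] : 0; }

// The walk itself: directory entry, then table entry, then frame + offset.
std::optional<uint32_t> walk(uint32_t cr3, uint32_t vaddr) {
    uint32_t dir_index   = (vaddr >> 22) & 0x3FF;
    uint32_t table_index = (vaddr >> 12) & 0x3FF;
    uint32_t offset      =  vaddr        & 0xFFF;

    uint32_t pde = read_phys(cr3 + dir_index * 4);               // page-directory entry
    if (!(pde & kPresent)) return std::nullopt;                  // would raise a page fault

    uint32_t pte = read_phys((pde & ~0xFFFu) + table_index * 4); // page-table entry
    if (!(pte & kPresent)) return std::nullopt;

    return (pte & ~0xFFFu) | offset;                             // frame base + page offset
}

int main() {
    // One mapping: virtual 0x00401abc -> physical 0x55555abc.
    uint32_t cr3 = 0x1000;
    phys_mem[cr3 + 0x1 * 4]      = 0x2000 | kPresent;       // directory slot 1 -> table at 0x2000
    phys_mem[0x2000 + 0x1 * 4]   = 0x55555000 | kPresent;   // table slot 1 -> frame 0x55555000
    if (auto pa = walk(cr3, 0x00401abc))
        std::printf("0x00401abc -> 0x%08x\n", *pa);
}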
The Itanium architecture provides an option of using either software or hardware managed TLBs.[2]
While we are at it, let's also look at CAM:
Content-addressable memory (CAM) is a special type of computer memory used in certain very high speed searching applications. It is also known as associative memory, associative storage, or associative array, although the last term is more often used for a programming data structure. (Hannum et al., 2004) Several custom computers, like the Goodyear STARAN, were built to implement CAM, and were referred to as associative computers.
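The difference from ordinary RAM can be captured in a toy model: RAM maps an address to the data stored there, while a CAM takes a data word as the search key and reports which location, if any, holds it. The names below are mine; the loop is the sequential software stand-in for what CAM hardware does across all locations in parallel.

#include <cstdint>
#include <cstdio>
#include <optional>

constexpr int kWords = 8;
uint32_t memory_words[kWords] = {7, 42, 13, 42, 0, 99, 5, 1};

// RAM-style access: address in, data out.
uint32_t ram_read(int address) { return memory_words[address]; }

// CAM-style access: data in, matching address out.
std::optional<int> cam_search(uint32_t key) {
    for (int i = 0; i < kWords; ++i)
        if (memory_words[i] == key) return i;   // report the first match
    return std::nullopt;
}

int main() {
    std::printf("ram_read(2)    = %u\n", ram_read(2));        // 13
    if (auto hit = cam_search(42))
        std::printf("cam_search(42) = slot %d\n", *hit);      // slot 1
}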
OK, now let's look at associative array:
An associative array (also associative container, map, mapping, dictionary, finite map, and in query-processing an index or index file) is an abstract data type composed of a collection of unique keys and a collection of values, where each key is associated with one value (or set of values).
Associative arrays are usually used when lookup is the most frequent operation. For this reason, implementations are usually designed to allow speedy lookup, at the expense of slower insertion and a larger storage footprint than other data structures (such as association lists).
There are two main efficient data structures used to represent associative arrays, the hash table and the self-balancing binary search tree (such as a red-black tree or an AVL tree). Skip lists are also an alternative, though relatively new and not as widely used. B-trees (and variants) can also be used, and are commonly used when the associative array is too large to reside entirely in memory, for instance in a simple database.
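As a small illustration of the two main implementations, the C++ standard containers map onto them directly: std::unordered_map is a hash table, and std::map is a self-balancing binary search tree (a red-black tree in common implementations). Both expose the same associative-array interface, differing in lookup cost and iteration order.

#include <iostream>
#include <map>
#include <string>
#include <unordered_map>

int main() {
    std::unordered_map<std::string, int> by_hash;  // hash table: O(1) average lookup, unordered
    std::map<std::string, int>           by_tree;  // red-black tree: O(log n), keys kept sorted

    by_hash["alice"] = 3;  by_tree["alice"] = 3;
    by_hash["bob"]   = 1;  by_tree["bob"]   = 1;
    by_hash["carol"] = 7;  by_tree["carol"] = 7;

    std::cout << "bob -> " << by_hash.at("bob") << "\n";  // lookup: the most frequent operation
    for (const auto& [key, value] : by_tree)              // iterates in sorted key order
        std::cout << key << " = " << value << "\n";
}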