    8 Great Ideas in Computer Architecture
    Application of the following great ideas has accounted for much of the tremendous growth in computing capabilities over the past 50 years.

    1. Design for Moore's law
    2. Use abstraction to simplify design
    3. Make the common case fast
    4. Performance via parallelism
    5. Performance via pipelining
    6. Performance via prediction
    7. Hierarchy of memories
    8. Dependability via redundancy

    Design for Moore's Law

    Gordon Moore, one of the founders of Intel, predicted in 1965 that integrated circuit resources would double every 18–24 months. This prediction has held approximately true for the past 50 years and is now known as Moore's Law.

    When computer architects design or upgrade a processor, they must anticipate where the competition will be in 3 to 5 years, when the new processor reaches the market. Targeting the design to be just a little better than today's competition is not good enough.
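
    A rough back-of-the-envelope calculation shows why. The sketch below assumes a 24-month doubling period and a 4-year design cycle; both numbers are illustrative assumptions, not figures from the text.

```c
#include <math.h>
#include <stdio.h>

int main(void) {
    /* Illustrative assumptions: resources double every 24 months,
       and a new processor takes 4 years to reach the market. */
    double doubling_period_months = 24.0;
    double design_cycle_months    = 48.0;

    /* Growth factor = 2^(design cycle / doubling period). */
    double growth = pow(2.0, design_cycle_months / doubling_period_months);

    printf("Expected resource growth during the design cycle: %.1fx\n", growth);
    /* ~4x: a design targeted only at today's competition will be far behind. */
    return 0;
}
```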

    Use Abstraction to Simplify Design

    Abstraction uses multiple levels, with each level hiding the details of the levels below it. For example:

    The instruction set of a processor hides the details of the activities involved in executing an instruction.
    High-level languages hide the details of the sequence of instructions needed to accomplish a task (see the sketch after this list).
    Operating systems hide the details involved in handling input and output devices.
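
    As a small illustration of the high-level-language and instruction-set layers, the sketch below pairs one C statement with the kind of instruction sequence a compiler might emit for it. The RISC-V-style instructions in the comment are only indicative, not output from any particular compiler.

```c
#include <stdio.h>

int main(void) {
    int b = 2, c = 3;

    /* One high-level statement hides a sequence of machine instructions.
       A compiler might translate it into something like (RISC-V style):
           lw  t0, b       # load b from memory
           lw  t1, c       # load c from memory
           add t2, t0, t1  # add the two values
           sw  t2, a       # store the result into a
       The programmer never has to think about registers or memory layout. */
    int a = b + c;

    printf("a = %d\n", a);
    return 0;
}
```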

    Make the Common Case Fast


    The most significant improvements in computer performance come from improvements to the common case: the areas where the current design spends the most time.

    This idea is sometimes called Amdahl's Law, though it is preferable to reserve that term for the mathematical law used to analyze such improvements. That law is also closely related to the law of diminishing returns.
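
    The mathematical form can be made concrete with a small calculation. In the sketch below, the fraction of time improved and the speedup factors are made-up numbers chosen only to illustrate the formula and its diminishing returns.

```c
#include <stdio.h>

/* Amdahl's Law: overall speedup when a fraction f of execution time
   is improved by a factor s. */
static double amdahl(double f, double s) {
    return 1.0 / ((1.0 - f) + f / s);
}

int main(void) {
    /* Example: the common case takes 80% of the time and is made 10x faster. */
    printf("Speed up 80%% of the time by 10x:   %.2fx overall\n", amdahl(0.80, 10.0));

    /* Diminishing returns: even a huge speedup of that 80% caps at 1/(1-f) = 5x. */
    printf("Speed up 80%% of the time by 1000x: %.2fx overall\n", amdahl(0.80, 1000.0));
    return 0;
}
```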

    Performance via Parallelism


    Doing different parts of a task in parallel accomplishes the task in less time than doing them sequentially. A processor carries out several activities when executing an instruction; it runs faster if it can perform these activities in parallel.
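
    A minimal sketch of the same idea in software, assuming a POSIX system with pthreads: two threads each sum half of an array, and the partial sums are combined at the end. The array contents and the two-way split are illustrative.

```c
#include <pthread.h>
#include <stdio.h>

#define N 1000000

static int data[N];

struct part { int start, end; long long sum; };

/* Each thread sums its own slice of the array. */
static void *sum_part(void *arg) {
    struct part *p = arg;
    p->sum = 0;
    for (int i = p->start; i < p->end; i++)
        p->sum += data[i];
    return NULL;
}

int main(void) {
    for (int i = 0; i < N; i++)
        data[i] = 1;

    struct part lo = { 0, N / 2, 0 };
    struct part hi = { N / 2, N, 0 };
    pthread_t t1, t2;

    /* The two halves are independent, so they can proceed in parallel. */
    pthread_create(&t1, NULL, sum_part, &lo);
    pthread_create(&t2, NULL, sum_part, &hi);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);

    printf("total = %lld\n", lo.sum + hi.sum);
    return 0;
}
```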

    Performance via Pipelining


    This idea is an extension of parallelism: it handles the activities involved in instruction execution as an assembly line. As soon as the first activity of an instruction is done, the instruction moves on to the second activity and the first activity of a new instruction begins. This results in executing more instructions per unit time than waiting for all activities of one instruction to complete before starting the next.
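
    A small worked calculation shows the assembly-line effect. The 5-stage pipeline, one time unit per stage, and 1000 instructions below are assumed values for illustration only.

```c
#include <stdio.h>

int main(void) {
    /* Illustrative assumptions: 5 pipeline stages, 1 time unit per stage,
       and 1000 instructions to execute. */
    long stages = 5, instructions = 1000;

    /* Without pipelining, each instruction waits for all stages of the
       previous one: time = instructions * stages. */
    long unpipelined = instructions * stages;

    /* With pipelining, a new instruction finishes every time unit once the
       pipeline is full: time = stages + (instructions - 1). */
    long pipelined = stages + (instructions - 1);

    printf("unpipelined: %ld units, pipelined: %ld units, speedup: %.2fx\n",
           unpipelined, pipelined, (double)unpipelined / pipelined);
    return 0;
}
```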


    Performance via Prediction


    A conditional branch is a type of instruction that determines the next instruction to be executed based on a condition test. Conditional branches are essential for implementing high-level language if statements and loops.

    Unfortunately, conditional branches interfere with the smooth operation of a pipeline — the processor does not know where to fetch the next instruction until after the condition has been tested.

    Many modern processors reduce the impact of branches with speculative execution: make an informed guess about the outcome of the condition test and start executing the indicated instruction. Performance is improved if the guesses are reasonably accurate and the penalty of wrong guesses is not too severe.
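
    One common prediction scheme is a two-bit saturating counter per branch. The sketch below uses a single counter and an invented outcome stream; it illustrates how the guess adapts to branch behavior, not the mechanism of any specific processor.

```c
#include <stdio.h>

/* Two-bit saturating counter:
   0 = strongly not taken, 1 = weakly not taken,
   2 = weakly taken,       3 = strongly taken.   */
static int counter = 1;

static int predict(void) { return counter >= 2; }   /* 1 = predict taken */

static void update(int taken) {
    if (taken  && counter < 3) counter++;
    if (!taken && counter > 0) counter--;
}

int main(void) {
    /* Illustrative outcome stream: a loop branch taken 9 times, then not taken. */
    int outcomes[] = { 1,1,1,1,1,1,1,1,1,0 };
    int correct = 0, n = sizeof outcomes / sizeof outcomes[0];

    for (int i = 0; i < n; i++) {
        int guess = predict();
        if (guess == outcomes[i]) correct++;
        update(outcomes[i]);
    }
    printf("correct predictions: %d of %d\n", correct, n);
    return 0;
}
```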


    Hierarchy of Memories


    The principle of locality states that data that has been accessed recently is likely to be accessed again in the near future. That is, accessing recently accessed data is the common case for memory accesses. To make this common case fast, you need a cache: a small, high-speed memory designed to hold recently accessed data.

    Modern processors use as many as three levels of cache. This is motivated by the large difference in speed between the processor and main memory.
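
    A sketch of why the hierarchy pays off, using the standard average-memory-access-time calculation. The hit times and miss rates below are invented for illustration; each miss rate is the fraction of accesses reaching that level that miss there.

```c
#include <stdio.h>

int main(void) {
    /* Illustrative assumptions (cycles, and per-level miss fractions). */
    double l1_hit = 1,  l1_miss_rate = 0.05;
    double l2_hit = 10, l2_miss_rate = 0.20;   /* of accesses that reach L2 */
    double l3_hit = 40, l3_miss_rate = 0.50;   /* of accesses that reach L3 */
    double memory = 200;

    /* Average memory access time with the full three-level hierarchy. */
    double amat = l1_hit + l1_miss_rate *
                  (l2_hit + l2_miss_rate *
                   (l3_hit + l3_miss_rate * memory));

    /* For comparison: every access going straight to main memory. */
    printf("with caches: %.2f cycles, without: %.0f cycles\n", amat, memory);
    return 0;
}
```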


    Dependability via Redundancy


    One of the most important ideas in data storage is the Redundant Array of Inexpensive Disks (RAID). In most versions of RAID, data is stored redundantly on multiple disks. The redundancy ensures that if one disk fails, the data can be recovered from the other disks.
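
    A simplified sketch of the redundancy idea behind parity-based RAID. The tiny block size and contents are made up, and real RAID involves many more details, but the principle is the same: the parity block is the XOR of the data blocks, so any single lost block can be rebuilt from the survivors.

```c
#include <stdio.h>
#include <string.h>

#define BLOCK 8   /* illustrative tiny block size */

int main(void) {
    /* Data "disks" and a parity "disk". */
    unsigned char d0[BLOCK] = "AAAAAAA";
    unsigned char d1[BLOCK] = "BBBBBBB";
    unsigned char d2[BLOCK] = "CCCCCCC";
    unsigned char parity[BLOCK], rebuilt[BLOCK];

    /* Parity = d0 XOR d1 XOR d2, stored redundantly. */
    for (int i = 0; i < BLOCK; i++)
        parity[i] = d0[i] ^ d1[i] ^ d2[i];

    /* Suppose disk 1 fails: rebuild its block from the survivors plus parity. */
    for (int i = 0; i < BLOCK; i++)
        rebuilt[i] = d0[i] ^ d2[i] ^ parity[i];

    printf("recovered: %s (matches original: %s)\n", rebuilt,
           memcmp(rebuilt, d1, BLOCK) == 0 ? "yes" : "no");
    return 0;
}
```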

    Reference: https://www.d.umn.edu/~gshute/arch/great-ideas.html
