

    1.4 Under the Covers

    Now that we have looked below your program to uncover[1] the underlying software, let's open the covers of your computer to learn about the underlying hardware. The underlying hardware in any computer performs the same basic functions: inputting data, outputting data, processing data, and storing data. How these functions are performed is the primary topic of this book, and subsequent[2] chapters deal with different parts of these four tasks.

    [1] uncover /ʌnˈkʌvə/ vt. to discover or find out about something, especially something that has been kept secret ("Auditors said they had uncovered evidence of fraud."; "The police have uncovered a plot."); also, to remove something that is covering something else ("When the seedlings sprout, uncover the tray."; "He uncovered the box."). An older sense, vt./vi., is to remove one's hat as a sign of respect ("Everyone uncovered when the king appeared."). The word parses as un- (open) + cover (lid): to take the lid off, hence to reveal. A further example, from Cambridge IELTS 6, page 44: "One interesting correlation Manton uncovered is that better-educated people are likely to live longer."

    [2] subsequent /ˈsʌbsɪkwənt/ adj. happening or existing after the time or event that has just been referred to; later, following ("...the increase of population in subsequent years.").

      

      When we come to an important point in this book, a point so important that we hope you will remember it forever, we emphasize it by identifying it as a Big Picture item. We have about a dozen Big Pictures in this book, the first being the five components of a computer that perform the tasks of inputting, outputting, processing, and storing data.

      Two key components of computers are input devices, such as the microphone, and output devices, such as the speaker. As the names suggest, input feeds the computer, and output is the result of computation sent to the user. Some devices, such as wireless networks, provide both input and output to the computer.

      Chapters 5 and 6 describe input/output (I/O) devices in more detail, but let's take an introductory tour through the computer hardware, starting with the external I/O devices.

      The five classic components of a computer are input, output, memory, datapath, and control, with the last two sometimes combined and called the processor. Figure 1.5 shows the standard organization of a computer. This organization is independent of hardware technology: you can place every piece of every computer, past and present, into one of these five categories. To help you keep all this in perspective, the five components of a computer are shown on the front page of each of the following chapters, with the portion of interest to that chapter highlighted.

     

    Through the looking glass

    The most fascinating I/O device is probably the graphics display. Most personal mobile devices use liquid crystal displays (LCDs) to get a thin, low-power display. The LCD is not the source of light; instead, it controls the transmission of light. A typical LCD includes rod-shaped molecules in a liquid that form a twisting helix that bends the light entering the display, from either a light source behind the display or, less often, from reflected light. The rods straighten out when a current is applied and no longer bend the light. Since the liquid crystal material is between two screens polarized at 90 degrees, the light cannot pass through unless it is bent. Today, most LCDs use an active matrix that has a tiny transistor switch at each pixel to precisely control current and make sharper images. A red-green-blue mask associated with each dot on the display determines the intensity of the three-color components in the final image; in a color active matrix LCD, there are three transistor switches at each point.

      The image is composed of a matrix of picture elements, or pixels, which can be represented as a matrix of bits, called a bit map. Depending on the size of the screen and the resolution, the display matrix in a typical tablet ranges in size from 1024 × 1024 to 2048 × 1536. A color display might use 8 bits for each of the three colors (red, green, blue), for 24 bits per pixel, permitting millions of different colors to be displayed.
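      To make the arithmetic above concrete, here is a small sketch (the resolutions and bit depths are the ones quoted above; the helper name is ours, for illustration only):

```python
# Size of an uncompressed bit map at the tablet resolutions quoted above.
def bitmap_bytes(width, height, bits_per_pixel=24):
    """Return the size in bytes of a width x height bit map."""
    return width * height * bits_per_pixel // 8

# 8 bits each for red, green, and blue gives 2**24 distinct colors.
print(2 ** 24)                     # 16777216 -- "millions of colors"
print(bitmap_bytes(1024, 1024))    # 3145728 bytes = 3 MiB
print(bitmap_bytes(2048, 1536))    # 9437184 bytes = 9 MiB
```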

      The computer hardware support for graphics consists mainly of a raster refresh buffer, or frame buffer, to store the bit map. The image to be represented onscreen is stored in the frame buffer, and the bit pattern per pixel is read out to the graphics display at the refresh rate. Figure 1.6 shows a frame buffer with a simplified design of just 4 bits per pixel.
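      Reading the frame buffer out at the refresh rate implies a steady memory bandwidth. A rough sketch, assuming a 60 Hz refresh rate (the text does not specify one, so that figure is an assumption for illustration):

```python
# Bytes per second scanned out of the frame buffer to the display.
# The 60 Hz refresh rate is an assumed example value, not from the text.
def refresh_bandwidth(width, height, bits_per_pixel, refresh_hz=60):
    """Bandwidth in bytes/second needed to refresh the display."""
    return width * height * bits_per_pixel // 8 * refresh_hz

bw = refresh_bandwidth(2048, 1536, 24)
print(bw)  # 566231040 bytes/s, i.e. roughly 566 MB/s
```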

      The goal of the bit map is to faithfully represent what is on the screen. The challenges in graphics systems arise because the human eye is very good at detecting even subtle changes on the screen.

    Touchscreen

    While PCs also use LCDs, the tablets and smartphones of the PostPC era have replaced the keyboard and mouse with touch-sensitive displays, which have the wonderful user interface advantage of users pointing directly at what they are interested in rather than indirectly with a mouse.

      While there are a variety of ways to implement a touch screen, many tablets today use capacitive sensing. Since people are electrical conductors, if an insulator like glass is covered with a transparent conductor, touching distorts the electrostatic field of the screen, which results in a change in capacitance. This technology can allow multiple touches simultaneously, which allows gestures that can lead to attractive user interfaces.

    Opening the Box

    Figure 1.7 shows the contents of the Apple iPad 2 tablet computer. Unsurprisingly, of the five classic components of the computer, I/O dominates this reading device. The list of I/O devices includes a capacitive multitouch LCD display, front-facing camera, rear-facing camera, microphone, headphone jack, speakers, accelerometer, gyroscope, Wi-Fi network, and Bluetooth network. The datapath, control, and memory are a tiny portion of the components.

      The small rectangles in Figure 1.8 contain the devices that drive our advancing technology, called integrated circuits and nicknamed chips. The A5 package seen in the middle of Figure 1.8 contains two ARM processors that operate with a clock rate of 1 GHz. The processor is the active part of the computer, following the instructions of a program to the letter. It adds numbers, tests numbers, signals I/O devices to activate, and so on. Occasionally, people call the processor the CPU, for the more bureaucratic-sounding central processor unit.

      Descending even lower into the hardware, Figure 1.9 reveals details of a microprocessor. The processor logically comprises two main components: datapath and control, the respective brawn and brain of the processor. The datapath performs the arithmetic operations, and control tells the datapath, memory, and I/O devices what to do according to the wishes of the instructions of the program. Chapter 4 explains the datapath and control for a higher-performance design.

      The A5 package in Figure 1.8 also includes two memory chips, each with 2 gibibits of capacity, thereby supplying 512 MiB. The memory is where the programs are kept when they are running; it also contains the data needed by the running programs. The memory is built from DRAM chips; DRAM stands for dynamic random access memory, and multiple DRAMs are used together to contain the instructions and data of a program. In contrast to sequential access memories, such as magnetic tapes, the RAM portion of the term DRAM means that memory accesses take basically the same amount of time no matter what portion of memory is read.
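      The 512 MiB figure follows directly from the stated chip count and capacity; a quick check of the unit arithmetic:

```python
# 2 chips x 2 Gibit each = 4 Gibit = 2**32 bits = 512 MiB, as stated above.
GIBI = 2 ** 30
chips = 2
bits_per_chip = 2 * GIBI           # 2 gibibits per chip
total_bytes = chips * bits_per_chip // 8
print(total_bytes // 2 ** 20)      # 512 (MiB)
```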

      Descending into the depths of any component of the hardware reveals insights into the computer. Inside the processor is another type of memory---cache memory.

      Cache memory consists of a small, fast memory that acts as a buffer for the DRAM memory. (The nontechnical definition of cache is a safe place for hiding things.) Cache is built using a different memory technology, static random access memory (SRAM). SRAM is faster but less dense, and hence more expensive, than DRAM (see Chapter 5). SRAM and DRAM are two layers of the memory hierarchy.

      As mentioned above, one of the great ideas to improve design is abstraction. One of the most important abstractions is the interface between the hardware and the lowest-level software. Because of its importance, it is given a special name: the instruction set architecture, or simply architecture, of a computer. The instruction set architecture includes anything programmers need to know to make a binary machine language program work correctly, including instructions, I/O devices, and so on. Typically, the operating system will encapsulate the details of doing I/O, allocating memory, and other low-level system functions so that application programmers do not need to worry about such details. The combination of the basic instruction set and the operating system interface provided for application programmers is called the application binary interface (ABI).

      An instruction set architecture allows computer designers to talk about functions independently from the hardware that performs them. For example, we can talk about the functions of a digital clock (keeping time, displaying the time, setting the alarm) independently from the clock hardware (quartz crystal, LED displays, plastic buttons). Computer designers distinguish architecture from an implementation of an architecture along the same lines: an implementation is hardware that obeys the architecture abstraction. These ideas bring us to another Big Picture.

    Both hardware and software consist of hierarchical layers using abstraction, with each lower layer hiding details from the level above. One key interface between the levels of abstraction is the instruction set architecture---the interface between the hardware and low-level software. This abstract interface enables many implementations of varying cost and performance to run identical software.

    A safe place for data

    Thus far, we have seen how to input data, compute using the data, and display data. If we were to lose power to the computer, however, everything would be lost, because the memory inside the computer is volatile---that is, when it loses power, it forgets. In contrast, a DVD disk doesn't forget the movie when you turn off the power to the DVD player, and is thus a nonvolatile memory technology.

      To distinguish between the volatile memory used to hold data and programs while they are running and this nonvolatile memory used to store data and programs between runs, the term main memory or primary memory is used for the former, and secondary memory for the latter. Secondary memory forms the next lower layer of the memory hierarchy. DRAMs have dominated main memory since 1975, but magnetic disks dominated secondary memory starting even earlier. Because of their size and form factor, personal mobile devices use flash memory, a nonvolatile semiconductor memory, instead of disks. Figure 1.8 shows the chip containing the flash memory of the iPad 2. While slower than DRAM, it is much cheaper than DRAM in addition to being nonvolatile. Although costing more per bit than disks, it is smaller, it comes in much smaller capacities, it is more rugged, and it is more power efficient than disks. Hence, flash memory is the standard secondary memory for PMDs. Alas, unlike disks and DRAM, flash memory bits wear out after 100,000 to 1,000,000 writes. Thus, file systems must keep track of the number of writes and have a strategy to avoid wearing out storage, such as by moving popular data. Chapter 5 describes disks and flash memory in more detail.
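      The write-tracking strategy mentioned above can be sketched in miniature. This toy wear-leveler is purely illustrative (a real flash translation layer also copies and erases data, which is omitted here); it simply redirects each write to the least-worn physical block:

```python
# Toy wear-leveling sketch: remap each logical write to the physical block
# with the fewest accumulated writes, so no single block wears out first.
# Illustration of the idea only, not a real flash translation layer.
class WearLeveler:
    def __init__(self, n_blocks):
        self.writes = [0] * n_blocks   # write count per physical block
        self.map = {}                  # logical block -> physical block

    def write(self, logical_block, data):
        # Pick the physical block with the fewest writes so far.
        phys = min(range(len(self.writes)), key=self.writes.__getitem__)
        self.map[logical_block] = phys
        self.writes[phys] += 1

wl = WearLeveler(4)
for _ in range(8):
    wl.write(0, b"data")               # repeatedly write one logical block
print(wl.writes)                       # wear is spread evenly: [2, 2, 2, 2]
```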

    Communicating with other computers

    We have seen how we can input, compute, display, and save data, but there is still one missing item found in today's computers: computer networks. Just as the processor shown in Figure 1.5 is connected to memory and I/O devices, networks interconnect whole computers, allowing computer users to extend the power of computing by including communication. Networks have become so popular that they are the backbone of current computer systems; a new personal mobile device or server without a network interface would be ridiculed. Networked computers have several major advantages:

    1. Communication: Information is exchanged between computers at high speeds.

    2. Resource sharing: Rather than each computer having its own I/O devices, computers on the network can share I/O devices.

    3. Non-local access: By connecting computers over long distances, users need not be near the computer they are using.

    Networks vary in length and performance, with the cost of communication increasing according to both the speed of communication and the distance that information travels. Perhaps the most popular type of network is Ethernet. It can be up to a kilometer long and transfer at up to 40 gigabits per second. Its length and speed make Ethernet useful to connect computers on the same floor of a building; hence, it is an example of what is generically called a local area network. Local area networks are interconnected with switches that can also provide routing services and security. Wide area networks cross continents and are the backbone of the Internet, which supports the Web. They are typically based on optical fibers and are leased from telecommunication companies.

      Networks have changed the face of computing in the last 30 years, both by becoming much more ubiquitous and by making dramatic increases in performance. In the 1970s, very few individuals had access to electronic mail, the Internet and web did not exist, and physically mailing magnetic tapes was the primary way to transfer large amounts of data between two locations. Local area networks were almost nonexistent, and the few existing wide area networks had limited capacity and restricted access.

      As networking technology improved, it became much cheaper and had a much higher capacity. For example, the first standardized local area network technology, developed about 30 years ago, was a version of Ethernet that had a maximum capacity (also called bandwidth) of 10 million bits per second, typically shared by tens of, if not a hundred, computers. Today, local area network technology offers a capacity of 1 to 40 gigabits per second, usually shared by at most a few computers. Optical communications technology has allowed similar growth in the capacity of wide area networks, from hundreds of kilobits to gigabits, and from hundreds of computers connected to a worldwide network to millions of computers connected. This combination of a dramatic rise in the deployment of networking and increases in capacity has made network technology central to the information revolution of the last 30 years.
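      A back-of-the-envelope comparison makes that capacity growth concrete. The 1 GB payload below is an arbitrary example size, and protocol overhead is ignored:

```python
# Time to move a payload over the Ethernet generations mentioned above.
# The 1 GB payload is an arbitrary example; protocol overhead is ignored.
def transfer_seconds(payload_bytes, bits_per_second):
    """Idealized transfer time: payload size in bits over the link rate."""
    return payload_bytes * 8 / bits_per_second

GB = 10 ** 9
print(transfer_seconds(GB, 10e6))   # 800.0 s at 10 Mbit/s (original Ethernet)
print(transfer_seconds(GB, 40e9))   # 0.2 s at 40 Gbit/s
```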

      For the last decade, another innovation in networking has been reshaping the way computers communicate. Wireless technology is widespread, which enabled the PostPC era. The ability to make a radio in the same low-cost semiconductor technology (CMOS) used for memory and microprocessors enabled a significant improvement in price, leading to an explosion in deployment. Currently available wireless technologies, called by the IEEE standard name 802.11, allow for transmission rates from 1 to nearly 100 million bits per second. Wireless technology is quite a bit different from wire-based networks, since all users in an immediate area share the airwaves.

    Semiconductor DRAM memory, flash memory, and disk storage differ significantly. For each technology, list its volatility, approximate relative access time, and approximate relative cost compared to DRAM.

    The most beautiful thing we can experience is the mysterious. It is the source of all true art and science. --- Albert Einstein, What I Believe, 1930