    A Taxonomy for Performance

    In this section, we introduce some basic performance metrics. These provide a
    vocabulary for performance analysis and allow us to frame the objectives of a
    tuning project in quantitative terms. These objectives are the non-functional requirements that define our performance goals. One common basic set of performance metrics is:


    • Throughput
    • Latency
    • Capacity
    • Utilisation
    • Efficiency
    • Scalability
    • Degradation


    Throughput

    Throughput is a metric that represents the rate of work a system or subsystem
    can perform. It is usually expressed as a number of units of work in some time
    period. For example, we might be interested in how many transactions per second a system can execute.
    For the throughput number to be meaningful in a real performance exercise,
    it should include a description of the reference platform it was obtained on. For
    example, the hardware spec, OS and software stack are all relevant to throughput, as is whether the system under test is a single server or a cluster.
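
    As a concrete illustration, the following minimal Java sketch measures throughput in transactions per second. The processTransaction() method is a hypothetical placeholder for real work, and a serious measurement would also need to account for JVM warm-up before timing begins.

```java
// Minimal throughput measurement sketch (processTransaction is a
// hypothetical placeholder for a real unit of work).
public class ThroughputCheck {
    public static void main(String[] args) {
        final long durationMs = 10_000;
        long count = 0;
        long start = System.currentTimeMillis();
        while (System.currentTimeMillis() - start < durationMs) {
            processTransaction();
            count++;
        }
        long elapsedMs = System.currentTimeMillis() - start;
        // Units of work completed, divided by the time period
        System.out.printf("Throughput: %.1f tx/sec%n",
                count * 1000.0 / elapsedMs);
    }

    private static void processTransaction() {
        Math.sqrt(System.nanoTime()); // placeholder work
    }
}
```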


    Latency

    Performance metrics are sometimes explained via metaphors that evoke
    plumbing. If a water pipe produces 100 litres per second, then the volume produced in one second (100 litres) is the throughput. In this metaphor, the latency is
    effectively the length of the pipe. That is, it’s the time taken to process a single
    transaction.
    It is normally quoted as an end-to-end time. It is dependent on workload, so
    a common approach is to produce a graph showing latency as a function of increasing workload.
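
    The sketch below records an end-to-end latency sample per transaction and summarizes the results with percentiles, which are the usual way to report latency (a mean alone hides outliers). Again, processTransaction() is a hypothetical placeholder; repeating the run at several load levels would produce the latency-versus-workload graph described above.

```java
import java.util.Arrays;

// Sketch: record per-transaction end-to-end latency and report
// percentile summary statistics.
public class LatencyCheck {
    public static void main(String[] args) {
        final int samples = 10_000;
        long[] latenciesNanos = new long[samples];
        for (int i = 0; i < samples; i++) {
            long t0 = System.nanoTime();
            processTransaction();
            latenciesNanos[i] = System.nanoTime() - t0;
        }
        Arrays.sort(latenciesNanos);
        System.out.printf("median: %d us, 99th percentile: %d us%n",
                latenciesNanos[samples / 2] / 1_000,
                latenciesNanos[(int) (samples * 0.99)] / 1_000);
    }

    private static void processTransaction() {
        Math.sqrt(System.nanoTime()); // placeholder work
    }
}
```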


    Capacity

    The capacity is the amount of work parallelism a system possesses. That is, the
    number of units of work (e.g. transactions) that can be simultaneously ongoing in
    the system.
    Capacity is obviously related to throughput, and we should expect that as
    the concurrent load on a system increases, throughput (and latency) will
    be affected. For this reason, capacity is usually quoted as the processing available at a given value of latency or throughput.
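
    One standard result that connects these quantities (not named in the text, but widely used) is Little's Law: the mean number of units of work in flight equals throughput multiplied by mean latency. A worked sketch, with assumed illustrative numbers:

```java
// Little's Law sketch: mean concurrency L = throughput * mean latency.
// Both input figures are assumed values, not measurements.
public class CapacityEstimate {
    public static void main(String[] args) {
        double throughputTxPerSec = 500.0; // assumed measured throughput
        double meanLatencySec = 0.02;      // assumed 20 ms per transaction
        double concurrentTx = throughputTxPerSec * meanLatencySec;
        // => 10 transactions simultaneously ongoing in the system
        System.out.printf("Mean transactions in flight: %.1f%n", concurrentTx);
    }
}
```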


    Utilisation

    One of the most common performance analysis tasks is to achieve efficient use of a system’s resources. Ideally, CPUs should be used for handling units of work, rather than being idle (or spending time handling OS or other housekeeping
    tasks).
    Depending on the workload, there can be a huge difference between the utilisation levels of different resources. For example, a computation-intensive workload (such as graphics processing or encryption) may be running at close
    to 100% CPU but only be using a small percentage of available memory.
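
    OS tools such as vmstat or top are the usual way to observe utilisation, but the JVM can also sample it from inside the process. The sketch below uses the com.sun.management extension of OperatingSystemMXBean, which is available on HotSpot but is not guaranteed by the Java SE specification.

```java
import java.lang.management.ManagementFactory;
import com.sun.management.OperatingSystemMXBean;

// Sketch: sample CPU and heap utilisation from within the JVM.
public class UtilisationSample {
    public static void main(String[] args) throws InterruptedException {
        OperatingSystemMXBean os = (OperatingSystemMXBean)
                ManagementFactory.getOperatingSystemMXBean();
        Runtime rt = Runtime.getRuntime();
        for (int i = 0; i < 5; i++) {
            Thread.sleep(1000);
            // Fraction of CPU used by this process: 0.0-1.0, or -1 if unavailable
            double cpu = os.getProcessCpuLoad();
            long usedHeap = rt.totalMemory() - rt.freeMemory();
            System.out.printf("CPU: %.0f%%  heap used: %d MB%n",
                    cpu * 100, usedHeap / (1024 * 1024));
        }
    }
}
```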


    Efficiency

    Dividing the throughput of a system by the utilised resources gives a measure of the overall efficiency of the system. Intuitively, this makes sense: requiring more resources to produce the same throughput is one useful definition of being less efficient.
    It is also possible, when dealing with larger systems, to use a form of cost accounting to measure efficiency. If Solution A has twice the total dollar cost of ownership (TCO) of Solution B for the same throughput, then it is, clearly, half as efficient.
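
    A minimal sketch of this cost-accounting view, with all figures assumed for illustration:

```java
// Sketch: efficiency as throughput per utilised resource (here, dollars).
// All numbers are illustrative assumptions, not measurements.
public class EfficiencyCompare {
    public static void main(String[] args) {
        double throughputTxPerSec = 500.0;
        double solutionACostPerHour = 4.0; // assumed TCO, dollars/hour
        double solutionBCostPerHour = 2.0; // same throughput, half the cost

        double efficiencyA = throughputTxPerSec / solutionACostPerHour;
        double efficiencyB = throughputTxPerSec / solutionBCostPerHour;
        // B delivers twice the throughput per dollar, so A is half as efficient
        System.out.printf("A: %.0f tx/sec per $/hr, B: %.0f tx/sec per $/hr%n",
                efficiencyA, efficiencyB);
    }
}
```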


    Scalability

    The throughput or capacity of a system depends upon the resources available for processing. The change in throughput as resources are added is one measure of the scalability of a system or application. The holy grail of system scalability is to have throughput change exactly in step with resources.
    Consider a system based on a cluster of servers. If the cluster is expanded,
    for example, by doubling in size, then what throughput can be achieved? If the
    new cluster can handle twice the volume of transactions, then the system is exhibiting “perfect linear scaling”. This is very difficult to achieve in practice, especially over a wide range of possible loads.
    System scalability depends upon a number of factors, and is not normally a simple constant. It is very common for a system to scale close to linearly over some range of resources, but then, at higher loads, to encounter
    some limitation in the system that prevents perfect scaling.
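
    One classical model of such a limitation (our illustration, not from the text) is Amdahl’s Law, which caps the achievable speedup by the serial fraction of the work. The sketch below shows how even a small serial fraction pulls scaling away from linear:

```java
// Sketch: Amdahl's Law as one model of imperfect scaling.
// speedup(n) = 1 / (serialFraction + (1 - serialFraction) / n)
public class ScalingModel {
    public static void main(String[] args) {
        double serialFraction = 0.05; // assume 5% of work cannot be parallelised
        for (int n = 1; n <= 64; n *= 2) {
            double speedup = 1.0 / (serialFraction + (1.0 - serialFraction) / n);
            System.out.printf("%2d nodes: speedup %.1fx (linear would be %dx)%n",
                    n, speedup, n);
        }
    }
}
```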

    Degradation

    If we increase the load on a system, either by increasing the number of requests (or clients) or by increasing the speed at which requests arrive, then we may see a change in the observed latency and/or throughput.
    Note that this change is dependent on utilisation. If the system is underutilised, then there should be some slack before the observables change, but if resources are fully utilised, then we would expect to see throughput stop increasing, or latency increase. These changes are usually called the degradation of the system under additional load.
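
    A simple way to see this numerically (an illustrative model, not from the text) is the textbook M/M/1 queueing formula, where mean response time is serviceTime / (1 - utilisation): latency barely moves while the system has slack, then climbs steeply as utilisation approaches 100%.

```java
// Sketch: degradation under load via the M/M/1 queueing formula.
// meanResponseTime = serviceTime / (1 - utilisation)
public class DegradationModel {
    public static void main(String[] args) {
        double serviceTimeMs = 10.0; // assumed time to process one request
        for (double util : new double[] {0.5, 0.7, 0.9, 0.95, 0.99}) {
            // Latency grows without bound as utilisation approaches 1.0
            System.out.printf("utilisation %.0f%% -> mean latency %.0f ms%n",
                    util * 100, serviceTimeMs / (1 - util));
        }
    }
}
```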


    Connections between the observables

    The behaviour of the various performance observables is usually connected in some manner. The details of this connection will depend upon whether the system is running at peak utilisation. For example, in general, the utilisation will
    change as the load on a system increases. However, if the system is underutilised, then increasing load may not appreciably increase utilisation. Conversely, if the system is already stressed, then the effect of increasing load may be
    felt in another observable.
    As another example, scalability and degradation both represent the change in behaviour of a system as more load is added. For scalability, as the load is increased, so are available resources, and the central question is whether the
    system can make use of them. On the other hand, if load is added but additional resources are not provided, degradation of some performance observable (e.g. latency) is the expected outcome.
    In rare cases, additional load can cause counter-intuitive results. For example, if the change in load causes some part of the system to switch to a more resource-intensive but higher-performance mode, then the overall
    effect can be to reduce latency, even though more requests are being received.


    Reading notes from: Optimizing Java, by Benjamin J Evans and James Gough.
    Copyright © 2016 Benjamin Evans, James Gough. All rights reserved.
    Published by O’Reilly Media, Inc., 1005 Gravenstein Highway North, Sebastopol, CA 95472.

  • Original post: https://www.cnblogs.com/brucemengbm/p/7130287.html