How processor caches work


    Note: this article is a study summary of Igor Ostrovsky's blog post "Gallery of Processor Cache Effects".

        The original explains how CPU caches behave in real programs; its introduction to cache lines in particular was eye-opening, and the many examples it uses to show how the processor reads from the cache are just as memorable.

        I wrote this summary in English, partly to practice my practical English, and partly because I don't know how to translate many of the original's terms into Chinese properly; keeping them in English reads more smoothly.

    This is a summary of reading Igor Ostrovsky's blog post.

     

     

    Example 1: cache line

    int[] arr = new int[64 * 1024 * 1024];
    
    // Loop 1
    for (int i = 0; i < arr.Length; i++) arr[i] *= 3;
    
    // Loop 2
    for (int i = 0; i < arr.Length; i += 16) arr[i] *= 3;

      The two loops take about 80 ms and 78 ms respectively on the same machine, even though Loop 2 performs only one sixteenth of the multiplications.

      The running time of these loops is dominated by the memory accesses to the array, not by the integer multiplications.

    for (int i = 0; i < arr.Length; i += K) arr[i] *= 3;

        The running times of the loop for different step values (K) stay roughly flat for K from 1 to 16, and only begin to drop once K grows beyond 16.

        Today's CPUs do not access memory byte by byte. Instead, they fetch memory in chunks of 64 bytes, called cache lines.

        So proper alignment of data is important: it reduces the number of cache lines a loop has to touch.
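The flat running times for K up to 16 follow directly from counting the 64-byte lines the loop touches. A minimal sketch (in Java rather than the original C#; the class and method names are mine), assuming 4-byte ints and 64-byte cache lines:

```java
import java.util.HashSet;
import java.util.Set;

// Sketch: count how many distinct 64-byte cache lines a strided loop touches.
// Assumes 4-byte ints and 64-byte lines, as in the original post.
public class CacheLines {
    static final int LINE_BYTES = 64, INT_BYTES = 4;

    static long linesTouched(int length, int step) {
        Set<Integer> lines = new HashSet<>();
        for (int i = 0; i < length; i += step)
            lines.add(i * INT_BYTES / LINE_BYTES); // which line this element lives on
        return lines.size();
    }

    public static void main(String[] args) {
        int n = 1024 * 1024;
        System.out.println(linesTouched(n, 1));  // touches every line: 65536
        System.out.println(linesTouched(n, 16)); // still every line: 65536
        System.out.println(linesTouched(n, 32)); // skips every other line: 32768
    }
}
```

Steps of 1 and 16 touch exactly the same number of lines, so they cost roughly the same; only steps larger than 16 ints (64 bytes) start skipping lines.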

     Example 2: cache size

        L1 caches are per-core

        L2 caches are shared between pairs of cores.

        In the test environment, each core has a 32 KB L1 data cache and a 32 KB L1 instruction cache, and there is a 4 MB L2 cache.

    int steps = 64 * 1024 * 1024; // Arbitrary number of steps
    int lengthMod = arr.Length - 1;
    for (int i = 0; i < steps; i++)
    {
        // (x & lengthMod) equals (x % arr.Length) because arr.Length is a power of two
        arr[(i * 16) & lengthMod]++;
    }

    The per-iteration time changes with the array size: it steps up once the array no longer fits in the 32 KB L1 cache, and again once it outgrows the 4 MB L2 cache.
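The masking trick in the loop above is only valid when the array length is a power of two. A quick check (a Java sketch; the class name is mine):

```java
// Sketch: verify that (x & (len - 1)) == (x % len) when len is a power of two.
// This is why the benchmark sizes its arrays in powers of two.
public class MaskTrick {
    public static void main(String[] args) {
        int len = 1024;            // power of two
        int lengthMod = len - 1;   // low bits all set: 0b1111111111
        for (int x = 0; x < 1_000_000; x++) {
            if ((x & lengthMod) != x % len)
                throw new AssertionError("mismatch at " + x);
        }
        System.out.println("ok: & mask matches % for power-of-two length");
    }
}
```

The AND is a single cheap instruction, so the wrap-around costs almost nothing compared with an integer division.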

       

        Example 3: instruction-level parallelism

    int steps = 256 * 1024 * 1024;
    int[] a = new int[2];
    
    // Loop 1
    for (int i=0; i<steps; i++) { a[0]++; a[0]++; }
    
    // Loop 2
    for (int i=0; i<steps; i++) { a[0]++; a[1]++; }

        Loop 2 is about twice as fast as Loop 1.

        This can be explained from the pipeline's perspective: in Loop 1 both increments touch a[0], so the second must wait for the first (a read-after-write hazard); in Loop 2 the increments to a[0] and a[1] are independent and can execute in parallel.
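The same dependency-breaking idea can be shown as a transformation: splitting one serial chain of increments into two independent chains yields the identical result while removing the hazard. A Java sketch (names are mine, not from the original post):

```java
// Sketch: break a serial dependency chain into two independent chains.
// The final result is identical; only the dependency structure differs,
// which is what lets the pipelined CPU overlap the increments.
public class DependencyChains {
    static long serial(int steps) {
        long a = 0;
        for (int i = 0; i < steps; i++) { a++; a++; } // each ++ waits on the last
        return a;
    }

    static long twoChains(int steps) {
        long a = 0, b = 0;
        for (int i = 0; i < steps; i++) { a++; b++; } // independent increments
        return a + b;
    }

    public static void main(String[] args) {
        int steps = 1_000_000;
        System.out.println(serial(steps));    // 2000000
        System.out.println(twoChains(steps)); // 2000000
    }
}
```

This is the standard trick behind using multiple accumulators in hot reduction loops.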

        Example 4: cache associativity

    public static long UpdateEveryKthByte(byte[] arr, int K)
    {
        Stopwatch sw = Stopwatch.StartNew();
        const int rep = 1024 * 1024; // Number of iterations, arbitrary

        int p = 0;
        for (int i = 0; i < rep; i++)
        {
            arr[p]++;
            p += K;
            if (p >= arr.Length) p = 0;
        }

        sw.Stop();
        return sw.ElapsedMilliseconds;
    }

        Generally speaking, L2 caches are 16-way set associative.

    That is to say, each cache line maps to one particular set, and within that set it can be stored in any of 16 slots.

    4 MB = 64K cache lines * 64 bytes

    With 64 bytes per cache line and 16 slots per set, the 4 MB cache has 64K / 16 = 4K sets.
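The set a byte lands in is determined by its address, and that arithmetic explains why certain strides are slow. A sketch of the mapping (Java; the constants assume the 4 MB, 16-way, 64-byte-line cache described above):

```java
// Sketch of set-index arithmetic for a 4 MB, 16-way cache with 64-byte lines.
// set = (address / lineSize) % numSets; 4096 sets * 16 ways * 64 B = 4 MB.
public class CacheSets {
    static final int LINE = 64, WAYS = 16, SETS = 4096;

    static int setIndex(long address) {
        return (int) ((address / LINE) % SETS);
    }

    public static void main(String[] args) {
        // Addresses a multiple of SETS * LINE (= 256 KB) apart map to the same set,
        System.out.println(setIndex(0) == setIndex(262_144));      // true
        System.out.println(setIndex(0) == setIndex(262_144 * 17)); // true
        // so a loop touching more than WAYS (16) such addresses keeps evicting lines.
    }
}
```

This is why UpdateEveryKthByte slows down when K is a large power of two: successive accesses pile into a handful of sets and thrash their 16 slots.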

    Example 5: false cache line sharing

        On a multi-core machine, caches must stay coherent: when one core modifies a value in its cache, the other cores can no longer use their stale copies of that value. Since caches work in units of cache lines, the entire cache line is invalidated in the other cores' caches, even if they only use different values that happen to share the line.
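A minimal sketch of the effect (Java; names and the padding offset of 16 are my assumptions, based on 4-byte ints and 64-byte lines): two threads increment different slots of one array. With adjacent slots the counters share a cache line and the line ping-pongs between cores; padding them 64 bytes apart avoids that. The counts come out exact either way, since each thread owns its slot.

```java
// Sketch of false sharing: thread 1 always updates counters[0]; thread 2 updates
// either the adjacent slot (same 64-byte line) or a slot 16 ints away (its own
// line). Results are identical; only the cache-line traffic differs, so the
// padded version typically runs faster on a multi-core machine.
public class FalseSharing {
    static long run(int otherIndex, int steps) throws InterruptedException {
        int[] counters = new int[32];
        Thread t1 = new Thread(() -> { for (int i = 0; i < steps; i++) counters[0]++; });
        Thread t2 = new Thread(() -> { for (int i = 0; i < steps; i++) counters[otherIndex]++; });
        long start = System.nanoTime();
        t1.start(); t2.start(); t1.join(); t2.join();
        long elapsed = System.nanoTime() - start;
        // No two threads touch the same slot, so the counts are exact.
        if (counters[0] != steps || counters[otherIndex] != steps)
            throw new AssertionError("lost updates");
        return elapsed;
    }

    public static void main(String[] args) throws InterruptedException {
        int steps = 10_000_000;
        System.out.println("adjacent (shared line):  " + run(1, steps) + " ns");
        System.out.println("padded (separate lines): " + run(16, steps) + " ns");
    }
}
```

The exact speedup depends on the machine; the point is that correctness is unaffected while the coherence traffic changes dramatically.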

  • Original post: https://www.cnblogs.com/leohan2013/p/3310733.html