Process/Thread Pinning Overview

    https://www.nas.nasa.gov/hecc/support/kb/ProcessThread-Pinning-Overview_259.html

    Pinning, the binding of a process or thread to a specific core, can improve the performance of your code by increasing the percentage of local memory accesses.

    Once your code runs and produces correct results on a system, the next step is performance improvement. For a code that uses multiple cores, the placement of processes and/or threads can play a significant role in performance.

    Given a set of processor cores in a PBS job, the Linux kernel usually does a reasonably good job of mapping processes/threads to physical cores, although the kernel may also migrate them. Some OpenMP runtime libraries and MPI libraries also perform certain placements by default. In cases where the placement chosen by the kernel or by the MPI or OpenMP libraries is not optimal, you can try several methods to control it and improve the performance of your code. Using the same placement from run to run has the added benefit of reducing runtime variability.

    Pay attention to maximizing data locality while minimizing latency and resource contention, and have a clear understanding of the characteristics of your own code and the machine that the code is running on.

    Characteristics of NAS Systems

    NAS provides two distinctly different types of systems: Pleiades, Aitken, Electra, and Merope are cluster systems, and Endeavour is a global shared-memory system. Each type is described in this section.

    Pleiades, Aitken, Electra, and Merope

    On Pleiades, Aitken, Electra, and Merope, memory on each node is accessible and shared only by the processes and threads running on that node. Pleiades is a cluster system consisting of different processor types: Sandy Bridge, Ivy Bridge, Haswell, and Broadwell. Merope is a cluster system that currently consists of Westmere nodes that have been repurposed from Pleiades. Electra is a cluster system that consists of Broadwell and Skylake nodes, and Aitken is a cluster system that consists of Cascade Lake nodes.

    Each node contains two sockets, with a symmetric memory system inside each socket. These nodes are considered non-uniform memory access (NUMA) systems, because memory on the other socket is accessed through the Intel QuickPath Interconnect (QPI). So, for optimal performance, data locality should not be overlooked on these systems.

    However, compared to a global shared-memory NUMA system such as Endeavour, data locality is less of a concern on the cluster systems. Rather, minimizing latency and resource contention will be the main focus when pinning processes/threads on these systems.

    For more information on Pleiades, Aitken, Electra, and Merope, see the configuration articles for these systems in the HECC Knowledge Base.

    Endeavour

    Endeavour comprises two hosts (endeavour1 and endeavour2). Each host is a NUMA system containing several dozen Sandy Bridge nodes, with memory physically located at various distances from the processors that access it. A process or thread can access the local memory on its own node and the remote memory on other nodes through NUMAlink, with varying latencies. So, data locality is critical for achieving good performance on Endeavour.

    Note: When developing an application, we recommend that you initialize data in parallel so that each processor core initializes the data it is likely to access later for calculation.
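
    For example, the sketch below shows first-touch initialization in C with OpenMP (the array size and names are illustrative, not from the original article). Each thread initializes, with the same static schedule used later for the computation, the chunk of the array it will compute on, so that on Linux the corresponding memory pages are allocated on that thread's local NUMA node.

        #include <stdlib.h>

        #define N 100000000L

        int main(void)
        {
            double *a = malloc(N * sizeof(double));
            if (a == NULL)
                return 1;

            /* First touch: each thread initializes its own chunk, so Linux
               allocates those pages on that thread's local NUMA node. */
            #pragma omp parallel for schedule(static)
            for (long i = 0; i < N; i++)
                a[i] = 0.0;

            /* The computation reuses the same static schedule, so most
               accesses stay local. */
            #pragma omp parallel for schedule(static)
            for (long i = 0; i < N; i++)
                a[i] = 2.0 * a[i] + 1.0;

            free(a);
            return 0;
        }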

    For more information, see Endeavour Configuration Details.

    Methods for Process/Thread Pinning

    Several pinning approaches for OpenMP, MPI, and MPI+OpenMP hybrid applications are listed below. We recommend using the Intel compiler (and its runtime library) and the SGI MPT software on NAS systems, so most of the approaches pertain specifically to them. You can also use the mbind tool, which works with multiple OpenMP runtime libraries and MPI environments.

    • OpenMP codes
    • MPI codes
    • MPI+OpenMP hybrid codes
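
    All of these approaches ultimately rely on the operating system's affinity interface. As an illustration only, and not one of the NAS-recommended methods above, the following sketch pins the calling process to a single core using the Linux sched_setaffinity() call; core 3 is an arbitrary choice for the example.

        #define _GNU_SOURCE
        #include <sched.h>
        #include <stdio.h>

        int main(void)
        {
            cpu_set_t mask;

            CPU_ZERO(&mask);
            CPU_SET(3, &mask);   /* core 3 is an arbitrary choice for this example */

            /* Bind the calling process (pid 0 means "self") to the chosen core. */
            if (sched_setaffinity(0, sizeof(mask), &mask) != 0) {
                perror("sched_setaffinity");
                return 1;
            }

            printf("now running on core %d\n", sched_getcpu());
            return 0;
        }

    Launcher tools and the OpenMP runtime manage affinity masks like this one for every process and thread in a job, so application code normally does not need to set them directly.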

    Checking Process/Thread Placement

    Each of the approaches listed above provides a verbose capability that reports the placement it performs. In addition, you can check the placement yourself using the following approaches.

    Use the ps Command

    ps -C executable_name -L -opsr,comm,time,pid,ppid,lwp 

    In the generated output, use the core ID under the PSR column, the process ID under the PID column, and the thread ID under the LWP column to find where the processes and/or threads are placed on the cores.

    Note: The ps command provides a snapshot of the placement at that specific time. You may need to monitor the placement from time to time to make sure that the processes/threads do not migrate.

    Instrument Your Code to Get Placement Information

    • Call the mpi_get_processor_name function to get the name of the processor an MPI process is running on
    • Call the Linux C function sched_getcpu() to get the processor number that the process or thread is running on

    For more information, see Instrumenting your Fortran Code to Check Process/Thread Placement.
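
    A minimal sketch of this kind of instrumentation in C is shown below (the Fortran approach in the linked article is analogous). It assumes an MPI library and OpenMP support are available, and prints the host and core for each rank and thread.

        #define _GNU_SOURCE
        #include <mpi.h>
        #include <omp.h>
        #include <sched.h>
        #include <stdio.h>

        int main(int argc, char **argv)
        {
            int rank, namelen;
            char host[MPI_MAX_PROCESSOR_NAME];

            MPI_Init(&argc, &argv);
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);
            MPI_Get_processor_name(host, &namelen);   /* node this rank runs on */

            #pragma omp parallel
            {
                /* sched_getcpu() reports the core the calling thread is on
                   at this moment; like ps, it is only a snapshot. */
                printf("rank %d thread %d on host %s core %d\n",
                       rank, omp_get_thread_num(), host, sched_getcpu());
            }

            MPI_Finalize();
            return 0;
        }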
