A Pattern Language for Parallel Programming


    The pattern language is organized into four design spaces.  Generally one starts at the top in the Finding Concurrency design space and works down through the other design spaces in order until a detailed design for a parallel program is obtained.

    [Figure: ds_overview.gif — the four design spaces and the order in which they are visited]

    Finding Concurrency

    This design space is concerned with structuring the problem to expose exploitable concurrency. The designer working at this level focuses on high-level algorithmic issues and reasons about the problem to expose potential concurrency.

    • Introduction to Finding Concurrency
    • Task Decomposition
    • Data Decomposition
    • Group Tasks
    • Order Tasks
    • Data Sharing
    • Design Evaluation

    Algorithm Structure

    This design space is concerned with structuring the algorithm to take advantage of potential concurrency. That is, the designer working at this level reasons about how to use the concurrency exposed in working with the Finding Concurrency patterns. The Algorithm Structure patterns describe overall strategies for exploiting concurrency.

    • Introduction to Algorithm Structure
    • Task Parallelism
    • Divide and Conquer
    • Geometric Decomposition
    • Recursive Data
    • Pipeline
    • Event-Based Coordination

    Supporting Structures

    This design space represents an intermediate stage between the Algorithm Structure and Implementation Mechanisms design spaces. Two important groups of patterns in this space are those that represent program-structuring approaches and those that represent commonly used shared data structures.

    • Introduction to Supporting Structures
    • SPMD
    • Master/Worker
    • Loop Parallelism
    • Fork/Join
    • Shared Data
    • Shared Queue
    • Distributed Array
    • Other Supporting Structures

    Implementation Mechanisms

    The Implementation Mechanisms design space is concerned with how the patterns of the higher-level spaces are mapped into particular programming environments. We use it to provide descriptions of common mechanisms for process/thread management and interaction. The items in this design space are not presented as patterns, since in many cases they map directly onto elements within particular parallel programming environments. We nevertheless include them in our pattern language to provide a complete path from problem description to code.

    • Introduction to Implementation Mechanisms
    • UE Management
        • Thread Creation/Destruction
        • Process Creation/Destruction
    • Synchronization
        • Memory Synchronization and Fences
        • Barriers
        • Mutual Exclusion
    • Communication
        • MPI: Message Passing
        • OpenMP: Message Passing
        • Java: Message Passing
        • Collective Communication
        • Other Communication Constructs

    Before starting to work with the patterns in the Finding Concurrency design space, the algorithm designer must first consider the problem to be solved and make sure the effort to create a parallel program will be justified: Is the problem sufficiently large, and the results sufficiently significant, to justify expending effort to solve it faster? If so, the next step is to make sure the key features and data elements within the problem are well understood. Finally, the designer needs to understand which parts of the problem are most computationally intensive, since those are the parts on which the effort to parallelize should be focused.

    Once this analysis is complete, the patterns in the Finding Concurrency  design space can be used to start designing a parallel algorithm. The patterns in this design space can be organized into three groups as shown in the figure.

    [Figure: FindingConcurrencyFig.gif — the three groups of patterns in the Finding Concurrency design space]

    Decomposition Patterns. There are two decomposition patterns, which are used to decompose the problem into pieces that can execute concurrently.

    Task Decomposition   

    How can a problem be decomposed into tasks that can execute concurrently?
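
    As a concrete illustration (not part of the original pattern text), the following Java sketch decomposes a matrix-vector product into one task per row; the class and variable names are hypothetical. Each task writes a disjoint slot of the result, so the tasks could execute concurrently.

```java
import java.util.ArrayList;
import java.util.List;

// A minimal sketch of task decomposition: each row of a matrix-vector
// product is an independent unit of work, so each row becomes a task.
public class TaskDecompositionSketch {
    public static void main(String[] args) {
        double[][] a = {{1, 2}, {3, 4}, {5, 6}};
        double[] x = {10, 20};
        double[] result = new double[a.length];

        // Decompose: one task per row; tasks share no mutable state
        // except disjoint slots of the result array.
        List<Runnable> tasks = new ArrayList<>();
        for (int i = 0; i < a.length; i++) {
            final int row = i;
            tasks.add(() -> {
                double sum = 0;
                for (int j = 0; j < x.length; j++) sum += a[row][j] * x[j];
                result[row] = sum;
            });
        }
        // The tasks could now be handed to any execution mechanism;
        // here they are simply run to show they are independent.
        tasks.forEach(Runnable::run);
        for (double r : result) System.out.println(r);
    }
}
```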

    Data Decomposition 

    How can a problem's data be decomposed into units that can be operated on relatively independently?
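
    A minimal sketch, assuming a simple array reduction: the data is decomposed into contiguous chunks that are reduced independently and then combined. The chunk count (one per core) is an illustrative choice.

```java
import java.util.stream.IntStream;

// A minimal sketch of data decomposition: the array is split into
// contiguous chunks whose partial sums are computed independently.
public class DataDecompositionSketch {
    public static void main(String[] args) {
        double[] data = new double[1_000_000];
        java.util.Arrays.fill(data, 1.0);

        int chunks = Runtime.getRuntime().availableProcessors();
        int chunkSize = (data.length + chunks - 1) / chunks;

        // Each chunk is an independent unit: partial sums are computed
        // in parallel and then combined.
        double total = IntStream.range(0, chunks).parallel()
            .mapToDouble(c -> {
                int lo = c * chunkSize;
                int hi = Math.min(lo + chunkSize, data.length);
                double sum = 0;
                for (int i = lo; i < hi; i++) sum += data[i];
                return sum;
            })
            .sum();
        System.out.println(total); // expect 1000000.0
    }
}
```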

    Dependency Analysis Patterns. This group contains three patterns that help group the tasks and analyze the dependencies among them.

    Group Tasks 

    How can the tasks that make up a problem be grouped to simplify the job of managing dependencies?

    Order Tasks

    Given a way of decomposing a problem into tasks and a way of collecting these tasks into logically related groups, how must these groups of tasks be ordered to satisfy constraints among tasks?
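
    One way to make such ordering constraints concrete is with Java's CompletableFuture, as in the hypothetical sketch below: the two load tasks form an unordered group, while the transform and store tasks must follow them.

```java
import java.util.concurrent.CompletableFuture;

// A minimal sketch of ordering constraints between task groups:
// "load" must precede "transform", which must precede "store";
// the two loads have no mutual order. Task names are placeholders.
public class OrderTasksSketch {
    public static void main(String[] args) {
        CompletableFuture<String> loadA =
            CompletableFuture.supplyAsync(() -> "dataA");
        CompletableFuture<String> loadB =
            CompletableFuture.supplyAsync(() -> "dataB");

        // transform depends on BOTH loads -> an ordering constraint.
        CompletableFuture<String> transform =
            loadA.thenCombine(loadB, (a, b) -> a + "+" + b);

        // store depends on transform; unrelated groups may still
        // run concurrently with this chain.
        CompletableFuture<Void> store =
            transform.thenAccept(r -> System.out.println("stored: " + r));

        store.join(); // wait for the whole ordered chain
    }
}
```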

    Data Sharing

    Given a data and task decomposition for a problem, how is data shared among the tasks?
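
    The sketch below, an illustration rather than part of the pattern text, shows two common sharing categories in Java: read-only data that can be shared freely, and an accumulate-only value whose updates need synchronization (here via LongAdder).

```java
import java.util.concurrent.atomic.LongAdder;
import java.util.stream.IntStream;

// A minimal sketch of data sharing: many tasks accumulate into one
// shared value. LongAdder handles the required synchronization;
// the read-only input needs no protection at all.
public class DataSharingSketch {
    public static void main(String[] args) {
        int[] input = IntStream.rangeClosed(1, 100).toArray(); // read-only: safe to share
        LongAdder shared = new LongAdder();                    // accumulate-only: needs sync

        IntStream.range(0, input.length).parallel()
                 .forEach(i -> shared.add(input[i]));

        System.out.println(shared.sum()); // expect 5050
    }
}
```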

    Nominally, the patterns are applied in this order. In practice, however, it is often necessary to work back and forth between them, or possibly even revisit the decomposition patterns.

    The Design Evaluation Pattern.

    Is the decomposition and dependency analysis so far good enough to move on to the Algorithm Structure design space, or should the design be revisited?

    After analyzing the concurrency in a problem, perhaps by using the patterns in the Finding Concurrency design space, the next task is to refine the design and move it closer to a program that can execute tasks concurrently by mapping the concurrency onto multiple units of execution (UEs) running on a parallel computer.

    Of the countless ways to define an algorithm structure, most follow one of six basic design patterns. These patterns make up the Algorithm Structure design space. The figure shows the patterns in the design space and their relationship to the other design spaces.

    [Figure: algorithmStructureFig.gif — the six Algorithm Structure patterns and their relationship to the other design spaces]

    The key issue at this stage is to decide which pattern or patterns are most appropriate for the problem.  In making this decision, various forces such as simplicity, portability, scalability, and efficiency may pull the design in different directions.  The features of the target platform must also be taken into account.

    There is usually a major organizing principle implied by the concurrency, and this principle helps in choosing a pattern. It usually falls into one of three categories:

    Organization by tasks

    Task Parallelism

    How can an algorithm be organized around a collection of tasks that can execute concurrently?
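
    A minimal Java sketch of this organization, using a fixed thread pool (the pool size and task bodies are illustrative): the algorithm is expressed as a collection of independent tasks handed to an executor.

```java
import java.util.List;
import java.util.concurrent.*;

// A minimal sketch of the Task Parallelism pattern: a collection of
// independent tasks is submitted to a pool and executed concurrently.
public class TaskParallelismSketch {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        List<Callable<Integer>> tasks = List.of(
            () -> 1 * 1, () -> 2 * 2, () -> 3 * 3, () -> 4 * 4);

        // invokeAll blocks until every task has completed.
        for (Future<Integer> f : pool.invokeAll(tasks)) {
            System.out.println(f.get());
        }
        pool.shutdown();
    }
}
```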

    Divide and Conquer

    Suppose the problem is formulated using the sequential divide and conquer strategy. How can the potential concurrency be exploited?
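
    Java's fork/join framework maps naturally onto this strategy. The sketch below (the threshold and the array-sum problem are illustrative choices) splits the problem recursively, forking one half and computing the other.

```java
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

// A minimal sketch of parallel divide and conquer: the array sum is
// split recursively; each half runs as a forked task until the
// subproblem is small enough to solve sequentially.
public class DivideAndConquerSketch extends RecursiveTask<Long> {
    private static final int THRESHOLD = 10_000; // illustrative tuning knob
    private final long[] data;
    private final int lo, hi;

    DivideAndConquerSketch(long[] data, int lo, int hi) {
        this.data = data; this.lo = lo; this.hi = hi;
    }

    @Override
    protected Long compute() {
        if (hi - lo <= THRESHOLD) {           // base case: solve sequentially
            long sum = 0;
            for (int i = lo; i < hi; i++) sum += data[i];
            return sum;
        }
        int mid = (lo + hi) >>> 1;            // divide
        DivideAndConquerSketch left = new DivideAndConquerSketch(data, lo, mid);
        DivideAndConquerSketch right = new DivideAndConquerSketch(data, mid, hi);
        left.fork();                          // run the left half concurrently
        return right.compute() + left.join(); // conquer: combine the results
    }

    public static void main(String[] args) {
        long[] data = new long[1_000_000];
        java.util.Arrays.fill(data, 1L);
        long sum = ForkJoinPool.commonPool()
            .invoke(new DivideAndConquerSketch(data, 0, data.length));
        System.out.println(sum); // expect 1000000
    }
}
```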

    Organization by data decomposition

    Geometric Decomposition 

    How can an algorithm be organized around a data structure that has been decomposed into concurrently updateable "chunks"?
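
    A minimal shared-memory sketch in Java, assuming a 1-D grid smoothing computation (the grid size, chunk count, and update rule are illustrative): each chunk is updated concurrently from a read-only copy, so reads across chunk boundaries are race-free.

```java
import java.util.stream.IntStream;

// A minimal sketch of geometric decomposition: a 1-D grid is split
// into contiguous chunks, each updated concurrently from a read-only
// "old" copy (double buffering avoids races at chunk boundaries).
public class GeometricDecompositionSketch {
    public static void main(String[] args) {
        double[] oldGrid = new double[1000];
        double[] newGrid = new double[1000];
        oldGrid[500] = 100.0; // an illustrative initial condition

        int chunks = 8;
        int size = (oldGrid.length + chunks - 1) / chunks;

        for (int step = 0; step < 10; step++) {
            final double[] src = oldGrid, dst = newGrid;
            // Each chunk reads its neighbors from src and writes
            // only its own cells of dst, so chunks are independent.
            IntStream.range(0, chunks).parallel().forEach(c -> {
                int lo = Math.max(1, c * size);
                int hi = Math.min((c + 1) * size, src.length - 1);
                for (int i = lo; i < hi; i++)
                    dst[i] = (src[i - 1] + src[i] + src[i + 1]) / 3.0;
            });
            double[] tmp = oldGrid; oldGrid = newGrid; newGrid = tmp; // swap buffers
        }
        System.out.println(oldGrid[500]);
    }
}
```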

    Recursive Data

    Suppose the problem involves an operation on a recursive data structure (such as a list, tree, or graph) that appears to require sequential processing. How can operations on these data structures be performed in parallel?
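
    The classic technique here is pointer jumping. The following Java sketch (a toy list-ranking example, not from the original text) computes every element's distance to the end of a linked list in O(log n) parallel rounds instead of n sequential steps.

```java
import java.util.stream.IntStream;

// A minimal sketch of the Recursive Data pattern: finding each list
// element's distance to the end looks inherently sequential, but
// "pointer jumping" does it in O(log n) parallel rounds. Double
// buffering keeps each round free of read/write races.
public class RecursiveDataSketch {
    public static void main(String[] args) {
        int n = 8;
        int[] next = new int[n];  // successor index; the tail points to itself
        int[] dist = new int[n];  // known distance toward the end of the list
        for (int i = 0; i < n; i++) {
            next[i] = (i < n - 1) ? i + 1 : i;
            dist[i] = (i < n - 1) ? 1 : 0;
        }

        // Each round doubles the distance already accounted for,
        // so ceil(log2 n) rounds suffice.
        for (int span = 1; span < n; span <<= 1) {
            final int[] curNext = next, curDist = dist;
            int[] newNext = new int[n], newDist = new int[n];
            IntStream.range(0, n).parallel().forEach(i -> {
                newDist[i] = curDist[i] + curDist[curNext[i]];
                newNext[i] = curNext[curNext[i]];
            });
            next = newNext;
            dist = newDist;
        }
        for (int i = 0; i < n; i++)
            System.out.println("node " + i + " distance to end: " + dist[i]);
        // expect 7, 6, 5, ..., 0
    }
}
```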

    Organization by flow of data

    Pipeline 

    Suppose that the overall computation involves performing a calculation on many sets of data, where the calculation can be viewed in terms of data flowing through a sequence of stages. How can the potential concurrency be exploited?
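
    A minimal Java sketch of a two-stage pipeline, with the stages connected by a blocking queue and a sentinel value marking the end of the data stream (the stage bodies and the sentinel are illustrative choices).

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// A minimal sketch of the Pipeline pattern: two stages connected by a
// blocking queue run concurrently; a sentinel ("poison pill") tells
// the downstream stage that the stream of data has ended.
public class PipelineSketch {
    private static final int DONE = -1; // sentinel marking end of stream

    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<Integer> stage1to2 = new ArrayBlockingQueue<>(16);

        // Stage 1: produce squared values.
        Thread stage1 = new Thread(() -> {
            try {
                for (int i = 1; i <= 5; i++) stage1to2.put(i * i);
                stage1to2.put(DONE);
            } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });

        // Stage 2: consume and print; runs concurrently with stage 1.
        Thread stage2 = new Thread(() -> {
            try {
                for (int v; (v = stage1to2.take()) != DONE; )
                    System.out.println("stage2 got " + v);
            } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });

        stage1.start(); stage2.start();
        stage1.join(); stage2.join();
    }
}
```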

    Event-Based Coordination

    Suppose the application can be decomposed into groups of semi-independent tasks interacting in an irregular fashion. The interaction is determined by the flow of data between them which implies ordering constraints between the tasks. How can these tasks and their interaction be implemented so they can execute concurrently?
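
    As a toy illustration (not from the pattern text), the Java sketch below coordinates two tasks purely through events posted to per-task inboxes; neither task follows a fixed global schedule, only the arrival of events.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// A minimal sketch of event-based coordination: two semi-independent
// tasks interact only by posting events to each other's inbox, so the
// flow of data, not a fixed schedule, orders their work.
public class EventCoordinationSketch {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> inboxA = new LinkedBlockingQueue<>();
        BlockingQueue<String> inboxB = new LinkedBlockingQueue<>();

        Runnable taskA = () -> {
            try {
                for (int round = 0; round < 3; round++) {
                    inboxB.put("ping " + round);               // raise an event for B
                    System.out.println("A got: " + inboxA.take()); // react to B's event
                }
            } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        };
        Runnable taskB = () -> {
            try {
                for (int round = 0; round < 3; round++) {
                    String ev = inboxB.take();                 // wait for an event
                    inboxA.put("pong for '" + ev + "'");       // respond with an event
                }
            } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        };

        Thread a = new Thread(taskA), b = new Thread(taskB);
        a.start(); b.start();
        a.join(); b.join();
    }
}
```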

    The most effective parallel algorithm design may make use of multiple algorithm structures (combined hierarchically, compositionally, or in sequence). For example, it often happens that the very top level of the design is a sequential composition of one or more Algorithm Structure patterns. Other designs may be organized hierarchically, with one pattern used to organize the interaction of the major task groups and other patterns used to organize tasks within the groups -- for example, an instance of Pipeline in which individual stages are instances of Task Parallelism.

    https://www.cise.ufl.edu/research/ParallelPatterns/overview.htm
