Unit 2

Parallel Programming
Syllabus
• Principles of Parallel Algorithm Design: Preliminaries, Decomposition Techniques, Characteristics of Tasks and Interactions, Mapping Techniques for Load Balancing, Methods for Containing Interaction Overheads, Parallel Algorithm Models
• Processor Architecture, Interconnect, Communication, Memory Organization, and Programming Models in high performance computing; architecture examples: IBM CELL BE, Nvidia Tesla GPU, Intel Larrabee microarchitecture, and Intel Nehalem microarchitecture
• Memory hierarchy and transaction-specific memory design, Thread Organization
Preliminaries: Decomposition, Tasks, and Dependency Graphs

• The first step in developing a parallel algorithm is to decompose the problem into tasks that can be executed concurrently.
• A given problem may be decomposed into tasks in many different ways.
• Tasks may be of the same, different, or even indeterminate sizes.
• A decomposition can be illustrated in the form of a directed graph.
– Such a graph is called a task-dependency graph.
– Nodes correspond to tasks and edges indicate dependencies.
Example: Multiplying a Dense Matrix with a Vector

Computation of each element of the output vector y is independent of the other elements. Based on this, a dense matrix-vector product can be decomposed into n tasks. The figure highlights the portion of the matrix and vector accessed by Task 1.

• Observations:
– Tasks share the vector b but they have no control dependencies.
– There are zero edges in the task-dependency graph.
– All tasks are of the same size in terms of number of operations.
• Is this the maximum number of tasks we could decompose this problem into?
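A minimal sketch (not from the slides) of this fine-grained decomposition, assuming row-major storage and using OpenMP to run the n independent tasks concurrently:

void matvec_fine(int n, const double *A, const double *b, double *y)
{
    /* One task per output element y[i]; OpenMP maps the iterations to threads. */
    #pragma omp parallel for
    for (int i = 0; i < n; i++) {
        double sum = 0.0;
        for (int j = 0; j < n; j++)
            sum += A[i * n + j] * b[j];   /* Task i reads row i of A and all of b */
        y[i] = sum;
    }
}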
Example: Database Query Processing
Consider the execution of the query:
MODEL = “CIVIC” AND YEAR = “2001” AND
(COLOR = “GREEN” OR COLOR = “WHITE”)
on the following database:
ID# Model Year Color Dealer Price
4523 Civic 2002 Blue MN $18,000
3476 Corolla 1999 White IL $15,000
7623 Camry 2001 Green NY $21,000
9834 Prius 2001 Green CA $18,000
6734 Civic 2001 White OR $17,000
5342 Altima 2001 Green FL $19,000
3845 Maxima 2001 Blue NY $22,000
8354 Accord 2000 Green VT $18,000
4395 Civic 2001 Red CA $17,000
7352 Civic 2002 Red WA $18,000
Example: Database Query Processing
• Assume the query is divided into four subtasks
– Each task generates an intermediate table of entries

Edges in this graph denote dependencies.
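The slides do not spell out the exact split; a hedged sketch of one possible four-subtask decomposition (one selection per subtask, followed by combining steps whose inputs are the intermediate tables) is:

#include <stdbool.h>
#include <string.h>

struct car { const char *model; int year; const char *color; };

void run_query(const struct car *db, int n, bool *result)
{
    bool civic[n], y2001[n], green[n], white[n], green_or_white[n];

    /* Subtasks 1-4 are independent: each scans the table once and produces an
     * intermediate table (here, a boolean vector over the rows). */
    for (int i = 0; i < n; i++) civic[i] = strcmp(db[i].model, "Civic") == 0;
    for (int i = 0; i < n; i++) y2001[i] = (db[i].year == 2001);
    for (int i = 0; i < n; i++) green[i] = strcmp(db[i].color, "Green") == 0;
    for (int i = 0; i < n; i++) white[i] = strcmp(db[i].color, "White") == 0;

    /* The combining steps depend on the subtasks above; these dependencies
     * are the edges of the task-dependency graph. */
    for (int i = 0; i < n; i++) green_or_white[i] = green[i] || white[i];
    for (int i = 0; i < n; i++) result[i] = civic[i] && y2001[i] && green_or_white[i];
}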


Granularity of Task Decompositions

• The number of tasks into which a problem is decomposed determines its granularity.
– Fine granularity: decomposition into a large number of tasks
– Coarse granularity: decomposition into a small number of tasks

A coarse-grained counterpart to the dense matrix-vector product example. Each task in this example corresponds to the computation of three elements of the result vector.
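A hedged sketch of this coarse-grained variant (assuming n is a multiple of 3):

void matvec_coarse(int n, const double *A, const double *b, double *y)
{
    /* One task per block of three consecutive output elements. */
    #pragma omp parallel for
    for (int t = 0; t < n / 3; t++)
        for (int i = 3 * t; i < 3 * t + 3; i++) {
            double sum = 0.0;
            for (int j = 0; j < n; j++)
                sum += A[i * n + j] * b[j];
            y[i] = sum;
        }
}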
Degree of Concurrency

• The number of tasks that can be executed in parallel is the degree of concurrency of a decomposition.
• Since the number of tasks that can be executed in parallel may change over program execution, the maximum degree of concurrency is the maximum number of such tasks at any point during execution.
• The average degree of concurrency is the average number of tasks that can be processed in parallel over the execution of the program.
• The degree of concurrency increases as the decomposition becomes finer in granularity, and vice versa.
Critical Path, Critical Path Length

• A directed path in the task-dependency graph represents a sequence of tasks that must be processed one after the other.
• The longest such path between any pair of zero in-degree and zero out-degree nodes is known as the critical path.
• The length of the longest path in a task-dependency graph is called the critical path length.
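A short worked example (hypothetical unit-weight tasks, not taken from the slides): consider a tree-shaped task-dependency graph with four independent leaf tasks, two tasks that each combine a pair of leaf results, and one final task that combines those two.

total work = 4 + 2 + 1 = 7
critical path length = 1 + 1 + 1 = 3   (leaf -> combine -> final)
maximum degree of concurrency = 4      (the four leaves)
average degree of concurrency = total work / critical path length = 7 / 3 ≈ 2.33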
Limits on Parallel Performance

• Parallel time cannot be made arbitrarily small by making the decomposition finer in granularity.
– A parallel algorithm will inherently have a limited number of decomposable tasks.
– For example, in the case of multiplying a dense matrix with a vector, there can be no more than n² concurrent tasks (one per scalar multiplication).
• Task interaction is another limiting factor on parallel performance.
• Task-interaction graph: an undirected graph that captures the pattern of interactions among tasks.
• Note that task-interaction graphs represent data dependencies, whereas task-dependency graphs represent control dependencies.
• The edge-set of a task-interaction graph is a superset of the edge-set of a task-dependency graph. Why?
Task Interaction Graphs: An Example

• Consider the problem of multiplying a sparse matrix A with a vector b. The following observations can be made:
– Decomposition is as before; each y[i] computation is a task.
– Only the non-zero elements of matrix A participate in the computation in this case.
– We also partition b across tasks; b[i] is held by Task i.
Processes and Mapping

• In general, the number of tasks in a decomposition exceeds the number of processing elements available.
– Thus, a parallel algorithm must also provide a mapping of tasks to processes.
• Note: the mapping is from tasks to processes, as opposed to processors.
– Typical programming APIs do not allow easy binding of tasks to physical processors.
– We aggregate tasks into processes and rely on the system to map these processes to physical processors.
• Processes (not in the UNIX sense): logical computing agents that perform tasks.
– Task + task data + task code required to produce the task's output
• Processors: physical hardware units that perform tasks.


Processes and Mapping

• An appropriate mapping must minimize parallel execution time by:
1. Mapping independent tasks to different processes.
2. Assigning tasks on the critical path to processes as soon as they become available.
3. Minimizing interaction between processes by mapping tasks with dense interactions to the same process.
• These criteria often conflict with each other.
– E.g., a decomposition into one task (or no decomposition at all) minimizes interaction but does not result in any speedup at all!
• Can you think of other such conflicting cases?


Processes and Mapping: Example

Mapping tasks in the database query decomposition to processes. These mappings were arrived at by viewing the dependency graph in terms of levels (no two nodes in a level have dependencies). Tasks within a single level are then assigned to different processes.
Decomposition Techniques

• Decomposition:
– The process of dividing the computation into smaller pieces of work, i.e., tasks

• Tasks are programmer defined and are considered to be indivisible

• So how does one decompose a task into various subtasks?

• There is no single recipe that works for all problems!

• Commonly used techniques that apply to broad classes of problems:


– Recursive decomposition
– Data decomposition
– Exploratory decomposition
– Speculative decomposition
– Hybrid decomposition
Recursive Decomposition

• Generally suited to problems that are solved using the divide-and-conquer strategy.
• A given problem is first decomposed into a set of sub-problems.
• These sub-problems are recursively decomposed further until a desired granularity is reached.
Example: Quicksort

Figure 3.8: The quicksort task-dependency graph based on recursive decomposition for sorting a sequence of 12 numbers.
Example: Finding the Minimum

procedure SERIAL_MIN(A, n)
begin
    min := A[0];
    for i := 1 to n − 1 do
        if (A[i] < min) min := A[i];
    endfor;
    return min;
end SERIAL_MIN
Example: Finding the Minimum

procedure RECURSIVE_MIN(A, n)
begin
    if (n = 1) then
        min := A[0];
    else
        lmin := RECURSIVE_MIN(A, n/2);
        rmin := RECURSIVE_MIN(&(A[n/2]), n − n/2);
        if (lmin < rmin) then
            min := lmin;
        else
            min := rmin;
        endelse;
    endelse;
    return min;
end RECURSIVE_MIN
Example: Finding the Minimum (cont’d)

The code in the previous foil can be decomposed naturally using a recursive decomposition strategy. We illustrate this with the following example of finding the minimum number in the set {4, 9, 1, 7, 8, 11, 2, 12}. The task-dependency graph associated with this computation is as follows:
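A runnable counterpart (a sketch, not the slides' code, assuming a double array and OpenMP tasks): the two recursive halves are independent sub-problems and can be spawned as tasks; the final comparison depends on both.

double recursive_min(const double *A, int n)
{
    if (n == 1)
        return A[0];

    double lmin, rmin;
    #pragma omp task shared(lmin)               /* left half is one sub-task  */
    lmin = recursive_min(A, n / 2);
    #pragma omp task shared(rmin)               /* right half is another      */
    rmin = recursive_min(A + n / 2, n - n / 2);
    #pragma omp taskwait                        /* both results needed below  */
    return lmin < rmin ? lmin : rmin;
}

double parallel_min(const double *A, int n)
{
    double result;
    #pragma omp parallel
    #pragma omp single                          /* one thread seeds the task tree */
    result = recursive_min(A, n);
    return result;
}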
Data Decomposition

• Used to derive concurrency for problems that operate on large amounts of data.
• The idea is to derive the tasks by focusing on the multiplicity of data.
• Data decomposition is often performed in two steps:
– Step 1: Partition the data
– Step 2: Induce a computational partitioning from the data partitioning
• Which data should we partition?
– Input / Output / Intermediate?
• All of these, leading to different data-decomposition methods
• How do we induce a computational partitioning?
– Owner-computes rule
Example: Matrix-matrix Multiplication

• Partitioning the output data
– Applied when each element of the output data can be computed independently of the others, as a function of the input
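A hedged sketch of output partitioning for C = A·B with the n × n output split into 2 × 2 blocks (assuming n is even); each of the four tasks owns and computes one block of C:

static void block_mult(int n, int half, const double *A, const double *B,
                       double *C, int bi, int bj)   /* bi, bj: block row/column of C */
{
    for (int i = bi * half; i < (bi + 1) * half; i++)
        for (int j = bj * half; j < (bj + 1) * half; j++) {
            double sum = 0.0;
            for (int k = 0; k < n; k++)
                sum += A[i * n + k] * B[k * n + j];
            C[i * n + j] = sum;
        }
}

void matmul_output_partitioned(int n, const double *A, const double *B, double *C)
{
    int half = n / 2;
    #pragma omp parallel sections
    {
        #pragma omp section
        block_mult(n, half, A, B, C, 0, 0);   /* Task 1: C1,1 */
        #pragma omp section
        block_mult(n, half, A, B, C, 0, 1);   /* Task 2: C1,2 */
        #pragma omp section
        block_mult(n, half, A, B, C, 1, 0);   /* Task 3: C2,1 */
        #pragma omp section
        block_mult(n, half, A, B, C, 1, 1);   /* Task 4: C2,2 */
    }
}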
Example: Database Transaction
• Problem: counting the instances of given itemsets in a database of transactions.
– the output (itemset frequencies) can be partitioned across tasks.
Output Data Decomposition: Example

• From the previous example, the following observations can be made:
– If the database of transactions is replicated across the processes, each task can be accomplished independently with no communication.
– If the database is partitioned across processes as well (for reasons of memory utilization), each task first computes partial counts. These counts are then aggregated at the appropriate task.
Input Data Partitioning

• Output data partitioning is applicable if each output can be naturally computed as a function of the input.
• In many algorithms it is not possible or desirable to partition the output data.
– e.g., the problem of finding the minimum in a list, sorting a given list, etc.
• In such cases, it is sometimes possible to partition the input data, and then use this partitioning to induce concurrency.
• A task is associated with each input data partition. The task performs as much of the computation as it can with its part of the data. Subsequent processing combines these partial results.
Input Data Partitioning: Example

• In the frequency counting example, the input (i.e., the transaction set) can
be partitioned.
– This induces a task decomposition in which each task generates partial counts
for all itemsets. These are combined subsequently for aggregate counts.
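A simplified, hedged sketch of this input-partitioned counting (single items stand in for itemsets; the function and parameter names are illustrative): each task counts over its chunk of the transactions and the partial counts are then aggregated.

#include <string.h>

void count_items(int ntrans, int items_per_trans, const int *trans,
                 int ncand, const int *cand, long *counts, int ntasks)
{
    memset(counts, 0, ncand * sizeof *counts);

    #pragma omp parallel num_threads(ntasks)
    {
        long partial[ncand];                     /* this task's partial counts */
        memset(partial, 0, sizeof partial);

        #pragma omp for                          /* chunk of transactions per task */
        for (int t = 0; t < ntrans; t++)
            for (int c = 0; c < ncand; c++)
                for (int k = 0; k < items_per_trans; k++)
                    if (trans[t * items_per_trans + k] == cand[c]) {
                        partial[c]++;
                        break;
                    }

        #pragma omp critical                     /* aggregate the partial counts */
        for (int c = 0; c < ncand; c++)
            counts[c] += partial[c];
    }
}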
Partitioning Input and Output Data
Intermediate Data Partitioning

• Computation can often be viewed as a sequence of transformations from the input to the output data.
• In these cases, it is often beneficial to use one of the intermediate stages as a basis for decomposition.
Intermediate Data Partitioning: Example

• Revisiting the dense matrix multiplication example.


Intermediate Data Partitioning: Example

A decomposition of the intermediate data structure leads to the following decomposition into 8 + 4 tasks:

Stage I
Task 01: D1,1,1 = A1,1 B1,1    Task 02: D2,1,1 = A1,2 B2,1
Task 03: D1,1,2 = A1,1 B1,2    Task 04: D2,1,2 = A1,2 B2,2
Task 05: D1,2,1 = A2,1 B1,1    Task 06: D2,2,1 = A2,2 B2,1
Task 07: D1,2,2 = A2,1 B1,2    Task 08: D2,2,2 = A2,2 B2,2

Stage II
Task 09: C1,1 = D1,1,1 + D2,1,1    Task 10: C1,2 = D1,1,2 + D2,1,2
Task 11: C2,1 = D1,2,1 + D2,2,1    Task 12: C2,2 = D1,2,2 + D2,2,2
Intermediate Data Partitioning: Example

The task-dependency graph for the decomposition (shown in the previous foil) into 12 tasks is as follows:
Data Decomposition

• Data decomposition is the most widely used decomposition technique.
– After all, parallel processing is often applied to problems that have a lot of data.
– Splitting the work based on this data is the natural way to extract a high degree of concurrency.
• It is used alone or in conjunction with other decomposition methods.
– Hybrid decomposition
The Owner Computes Rule

• The owner-computes rule generally states that the process assigned a particular data item is responsible for all computation associated with it.
• In the case of input data decomposition, the owner-computes rule implies that all computations that use an input data item are performed by the process that owns it.
• In the case of output data decomposition, the owner-computes rule implies that the output is computed by the process to which the output data is assigned.
Exploratory Decomposition

• Used to decompose computations that correspond to a search of a space of solutions.
– Examples: theorem proving, game playing, etc.
• Example: solution of the 15-puzzle problem.
• We show a sequence of three moves that transforms a given initial state (a) into the desired final state (d).
Exploratory Decomposition: Example

The state space can be explored by generating various successor states of the current state and viewing them as independent tasks.
Exploratory Decomposition: Anomalous Computations

• In many instances of exploratory decomposition, the decomposition technique may change the amount of work done by the parallel formulation.
• This change results in super- or sub-linear speedups.
Speculative Decomposition

• Used to extract concurrency in problems in which the next step is one of many possible actions that can only be determined when the current task finishes.
• This decomposition assumes a certain outcome of the currently executing task and executes some of the next steps.
– Just like speculative execution at the microprocessor level.
• Performs the same or more aggregate work (but never less) than the sequential algorithm.
Example: Discrete Event Simulation
Speculative Execution

• If predictions are wrong…


– Work is wasted
– work may need to be undone
• state-restoring overhead
– memory/computations

• However, it may be the only way to extract concurrency!


Characteristics of Tasks Affecting Good Mapping

• Characteristics of tasks influencing the suitability of a mapping scheme:
– Are the tasks available a priori?
• Static vs dynamic task generation
– How about their computational requirements?
• Are they uniform or non-uniform?
– Do we know them a priori?
– How much data is associated with each task?
• Characteristics of inter-task interactions
– Are they static or dynamic?
– Are they regular or irregular?
– Are they read-only or read-write?
Example: Regular Static Task Interaction Pattern (Simple)
• An example: Image dithering application
• dithering - the process of representing intermediate colors by patterns of tiny colored dots
that simulate the desired color
Example: Irregular Static Task Interaction Pattern (Complex)
Mapping Techniques

• Mapping: assigning tasks to processes with the objective that all tasks complete in the shortest amount of elapsed time.
• Thus, a mapping scheme must address the major sources of overhead:
– Load imbalance
– Inter-process communication
• Synchronization / data sharing
• Minimizing these overheads often represents conflicting objectives.
– E.g., assigning all interdependent tasks to the same process eliminates communication but may create load imbalance.
• Note: assigning a balanced aggregate load to each process is a necessary but not sufficient condition for reducing idling.
Mapping Techniques for Minimum Idling

• Merely balancing load does not minimize idling:


Mapping Techniques for Minimum Idling

• Static Mapping: tasks are mapped to processes a priori (before the algorithm executes).
– Applicable for tasks that are
• generated statically
• known and/or have uniform computational requirements
• Dynamic Mapping: tasks are mapped to processes at runtime.
– Applicable for tasks that are
• generated dynamically
• unknown and/or have non-uniform computational requirements


Static Mapping – Array Distribution Schemes

• Suitable for algorithms that
– use data decomposition, and
– whose underlying input/output/intermediate data are in the form of arrays
• Block Distribution
– Used to load-balance a variety of parallel computations that operate on multi-dimensional arrays
• Cyclic Distribution
• Block-Cyclic Distribution
• Randomized Block Distributions


Examples: Block Distributions
1-Dimensional vs. k-Dimensional Distributions

• Which distribution allows the use of more processes?
– One-dimensional?
– Multi-dimensional?
• Which distribution reduces the amount of interaction among processes?
– One-dimensional?
• How many data elements are accessed in part (a) on the previous slide?
– Multi-dimensional?
• How many data elements are accessed in part (b) on the previous slide?
Cyclic and Block-Cyclic Distributions

• If the amount of computation associated with data items varies, a block decomposition may lead to significant load imbalances.
• A simple example of this is LU decomposition (or Gaussian elimination) of dense matrices.
• Block-cyclic distribution schemes can be used to alleviate the load-imbalance and idling problems.
• Block-cyclic distributions:
– Partition an array into many more blocks than the number of available processes.
– Blocks are assigned to processes in a round-robin manner so that each process gets several non-adjacent blocks.
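A hedged sketch of the resulting ownership rules for a 1-D array of n elements over p processes with block size b (assuming n is a multiple of both p and b):

int owner_block(int i, int n, int p)        { return i / (n / p); }  /* contiguous blocks      */
int owner_cyclic(int i, int p)              { return i % p;       }  /* elements round-robin   */
int owner_block_cyclic(int i, int b, int p) { return (i / b) % p; }  /* blocks round-robin     */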
LU Factorization of a Dense Matrix
Example: Block-Cyclic Distributions
Mappings Based on Task Partitioning

• Partitioning a given task-dependency graph across processes.
• Determining an optimal mapping for a general task-dependency graph is an NP-complete problem.
• Excellent heuristics exist for structured graphs.


Task Partitioning: Mapping a Binary Tree Dependency
Graph
• Consider a task-dependency graph that is a perfect binary tree
– Occurs in practical problems with recursive decomposition
Hierarchical Mappings

• Sometimes a single mapping technique is inadequate.

• For example, the task mapping of the binary tree (quicksort) cannot
use a large number of processes.

• For this reason, task mapping can be used at the top level and data
partitioning within each level.
Hierarchical Mappings

• An example of task partitioning at the top level with data partitioning at the lower level.
Schemes for Dynamic Mapping

• Dynamic mapping is sometimes also referred to as dynamic load balancing, since load balancing is the primary motivation for dynamic mapping.
• Dynamic mapping schemes can be centralized or distributed.


Centralized Dynamic Mapping

• Processes are designated as masters or slaves.
• When a process runs out of work, it requests more work from the master.
• When the number of processes increases, the master may become the bottleneck.
• To alleviate this, a process may pick up a number of tasks (a chunk) at one time. This is called chunk scheduling.
• Selecting large chunk sizes may lead to significant load imbalances as well.
• A number of schemes have been used to gradually decrease the chunk size as the computation progresses.
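A hedged OpenMP sketch of chunk scheduling for a loop with non-uniform iteration costs (expensive_step is a hypothetical per-item function): schedule(dynamic, 16) hands out chunks of 16 iterations on request, while schedule(guided) is one standard way to shrink the chunk size as the computation progresses.

double expensive_step(double x);   /* hypothetical: non-uniform work per item */

void process_all(int n, double *work)
{
    #pragma omp parallel for schedule(dynamic, 16)   /* or: schedule(guided) */
    for (int i = 0; i < n; i++)
        work[i] = expensive_step(work[i]);
}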
Distributed Dynamic Mapping

• Each process can send work to or receive work from other processes.
• This alleviates the bottleneck of centralized schemes.
• There are four critical questions:
– How are sending and receiving processes paired together?
– Who initiates the work transfer?
– How much work is transferred?
– When is a transfer triggered?
Methods for Containing Interaction Overheads

• Maximize data locality: Where possible, reuse intermediate data. Restructure the computation so that data can be reused in smaller time windows.
• Minimize volume of data exchange: There is a cost associated with each word that is communicated. For this reason, we must minimize the volume of data communicated.
• Minimize frequency of interactions: There is a startup cost associated with each interaction. Therefore, try to merge multiple interactions into one, where possible.
• Minimize contention and hot-spots: Use decentralized techniques, replicate data where necessary.
Methods for Containing Interaction Overheads

• Overlap computations with interactions: Use non-blocking communications, multithreading, and prefetching to hide latencies (see the sketch after this list).
• Replicate data or computations.
• Use group communications instead of point-to-point primitives.
• Overlap interactions with other interactions.
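A hedged MPI sketch of overlapping computation with interaction (compute_interior and compute_boundary are hypothetical placeholders): post non-blocking transfers of boundary data, do the work that does not need the incoming data, then wait before using it.

#include <mpi.h>

void compute_interior(void);                   /* hypothetical: needs no remote data     */
void compute_boundary(const double *, int);    /* hypothetical: needs the received data  */

void exchange_and_compute(double *send_buf, double *recv_buf, int count,
                          int left, int right, MPI_Comm comm)
{
    MPI_Request reqs[2];
    MPI_Irecv(recv_buf, count, MPI_DOUBLE, left,  0, comm, &reqs[0]);
    MPI_Isend(send_buf, count, MPI_DOUBLE, right, 0, comm, &reqs[1]);

    compute_interior();                          /* overlap: useful work while messages are in flight */

    MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);   /* interaction must complete before boundary work    */
    compute_boundary(recv_buf, count);
}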


Parallel Algorithm Models

• An algorithm model is a way of structuring a parallel algorithm by selecting a decomposition and mapping technique and applying the appropriate strategy to minimize interactions.
• Data-Parallel Model: Tasks are statically (or semi-statically) mapped to processes and each task performs similar operations on different data.
– Usually based on data decomposition followed by static mapping
– Uniform partitioning of data followed by static mapping guarantees load balance
– Example algorithm: dense matrix multiplication
• Task-Graph Model: Starting from a task-dependency graph, the interrelationships among the tasks are utilized to promote locality or to reduce interaction costs.
– Typically used to solve problems where the amount of data associated with a task is large relative to the computation
– Static mapping is usually used to optimize data-movement costs
– Example algorithms: parallel quicksort, sparse matrix factorization
Parallel Algorithm Models

• Master-Slave Model: One or more master processes generate work and allocate it to worker processes. This allocation may be static or dynamic.
• Pipeline / Producer-Consumer Model: A stream of data is passed through a succession of processes, each of which performs some task on it.
• Hybrid Models: A hybrid model may be composed either of multiple models applied hierarchically or of multiple models applied sequentially to different phases of a parallel algorithm.
• Work Pool Model: described on the next slide.


Parallel Algorithm Models

• Work Pool Model: The work pool or the task pool model is characterized
by a dynamic mapping of tasks onto processes for load balancing in which
any task may potentially be performed by any process. There is no desired
premapping of tasks onto processes. The mapping may be centralized or
decentralized. Pointers to the tasks may be stored in a physically shared
list, priority queue, hash table, or tree, or they could be stored in a physically
distributed data structure. The work may be statically available in the
beginning, or could be dynamically generated; i.e., the processes may
generate work and add it to the global (possibly distributed) work pool. If the
work is generated dynamically and a decentralized mapping is used, then a
termination detection algorithm would be required so that all processes can
actually detect the completion of the entire program (i.e., exhaustion of all
potential tasks) and stop looking for more work.
• Example: parallelization of loops by chunk scheduling
