Patternlets in Parallel Programming

Material originally created by Joel Adams, Calvin College

Compiled by Libby Shoop, Macalester College

Summary

This concept module contains simple examples of the basic elements that are combined to form patterns often used in programs employing parallelism. The examples are divided between two major coordination patterns:
  1. message passing, used on clusters of distributed computers or on multiprocessors, and
  2. mutual exclusion between threads executing concurrently on a single shared-memory system.

Both sets of examples are illustrated in the C programming language, using standard, widely available libraries. The message passing examples use a C library called MPI (Message Passing Interface). The mutual exclusion/shared memory examples use the OpenMP library.

Each C code example has a makefile indicating the compiler flags needed.
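
For example, a makefile in this style might contain rules like the following (a hypothetical sketch; the module's actual makefiles name their own targets and source files, and recipe lines must begin with a tab):

    # OpenMP example: the -fopenmp flag tells gcc to honor OpenMP pragmas.
    spmd: spmd.c
    	gcc -fopenmp -o spmd spmd.c

    # MPI example: mpicc is the compiler wrapper that ships with MPI.
    sendReceive: sendReceive.c
    	mpicc -o sendReceive sendReceive.c

The OpenMP program runs directly (./spmd), while the MPI program is launched through the MPI runtime, e.g. mpirun -np 4 ./sendReceive.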

These examples could be used as demonstrations in lecture or lab to introduce students to the basic constructs used in OpenMP and MPI.


Learning Goals

The primary learning goals for this module are:

Given basic C code examples using MPI, students should be able to recognize how to employ the following patterns in parallel program solutions (the first two are illustrated in the sketch after this list):

  • The master-worker implementation strategy.
  • Basic message passing using send and receive.
  • Slicing data decomposition using parallel for loops.
  • Blocking data decomposition using parallel for loops.
  • Broadcast as a form of message passing.
  • Reduction following task and data decomposition.
  • Scatter and gather as forms of message passing.
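
The following sketch illustrates the first two of these patterns, master-worker and basic send/receive. It is a minimal example in the spirit of the module, not one of its actual files; the message text and buffer size are made up:

    #include <stdio.h>
    #include <string.h>
    #include <mpi.h>

    int main(int argc, char** argv) {
        int id, numProcesses;
        char buffer[64];

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &id);    /* this process's rank */
        MPI_Comm_size(MPI_COMM_WORLD, &numProcesses);

        if (id == 0) {
            /* Master: receive one message from each worker, in rank order. */
            for (int i = 1; i < numProcesses; i++) {
                MPI_Recv(buffer, sizeof(buffer), MPI_CHAR, i, 0,
                         MPI_COMM_WORLD, MPI_STATUS_IGNORE);
                printf("Master received: %s\n", buffer);
            }
        } else {
            /* Worker: send a small message to the master (rank 0). */
            sprintf(buffer, "greetings from process %d", id);
            MPI_Send(buffer, strlen(buffer) + 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }

        MPI_Finalize();
        return 0;
    }

Compiled with mpicc and launched with, say, mpirun -np 4, the three worker processes each send one message and the master prints them.
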
Given basic code examples using OpenMP, students should be able to recognize how to employ the following patterns in parallel program solutions (see the sketch after this list):
  • The master-worker implementation strategy.
  • Striping data decomposition using parallel for loops.
  • Blocking data decomposition using parallel for loops.
  • Ensuring mutual exclusion and observing its performance cost.
  • General task decomposition.
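
A companion sketch (again not one of the module's own files) shows a blocking data decomposition with a parallel for loop, plus mutual exclusion around a shared accumulator:

    #include <stdio.h>
    #include <omp.h>

    #define N 16

    int main(void) {
        int a[N];
        long sum = 0;

        /* Blocking decomposition: schedule(static) hands each thread one
           contiguous block of iterations; schedule(static,1) would instead
           stripe the iterations round-robin across the threads. */
        #pragma omp parallel for schedule(static)
        for (int i = 0; i < N; i++) {
            a[i] = i * i;
        }

        /* Mutual exclusion: the critical directive lets only one thread
           at a time update the shared variable sum. */
        #pragma omp parallel for
        for (int i = 0; i < N; i++) {
            #pragma omp critical
            sum += a[i];
        }

        printf("sum = %ld\n", sum);
        return 0;
    }

The critical section makes the second loop correct but serializes it, which is the performance cost the mutual exclusion patternlets explore; OpenMP's reduction(+:sum) clause is the faster idiom for this particular case.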

Context for Use

These code examples can be used as the starting point for introducing students to either MPI or OpenMP and for discussing common patterns used in parallel programs.

Teaching Notes and Tips

The C code examples must be used with suitable hardware and libraries. The OpenMP examples require a machine with multiple cores and a compiler that supports OpenMP directives (the GNU gcc compiler is one such compiler). The MPI examples require an installed MPI implementation and can be run on either a multiprocessor or a cluster.

OpenMP enables multithreading. MPI provides multiprocessing rather than multithreading, and its processes communicate via message passing, since processes (unlike threads) share no memory. The message-passing model is more generally applicable than the shared-memory model: the processes of a message-passing program can run anywhere (a distributed-memory multiprocessor, a shared-memory multiprocessor, or a uniprocessor), whereas the threads of a multithreaded program cannot be distributed across a distributed-memory system.
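
A minimal sketch of this distinction (hypothetical, with made-up values): each MPI process owns its own copy of a variable, so combining the copies requires an explicit message-passing operation such as MPI_Reduce:

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char** argv) {
        int id, numProcesses;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &id);
        MPI_Comm_size(MPI_COMM_WORLD, &numProcesses);

        /* 'partial' is private to each process; no other process can
           see it, because the processes share no memory. */
        int partial = id + 1;
        int total = 0;

        /* MPI_Reduce communicates the copies between processes and
           sums them into 'total' on the master (rank 0). */
        MPI_Reduce(&partial, &total, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

        if (id == 0) {
            printf("total = %d\n", total);
        }

        MPI_Finalize();
        return 0;
    }

In an OpenMP program the threads could simply read and write one shared variable; here the data must be moved explicitly.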

Note that the makefile provided with each example makes it easy for students to compile and run it. Some examples contain comments that instruct students to change the code by uncommenting certain lines, then re-compile and run again to observe what changes.
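
A hypothetical example in that style (the module's own files differ in detail): the comment marks a line students can uncomment to remove a race condition and watch the output change:

    #include <stdio.h>
    #include <omp.h>

    int main(void) {
        int counter = 0;

        /* All threads increment the same shared counter; as written,
           the updates race and the final value is usually too small. */
        #pragma omp parallel for
        for (int i = 0; i < 100000; i++) {
            /* Uncomment the next line, re-compile, and run again: */
            /* #pragma omp atomic */
            counter++;
        }

        printf("counter = %d (expected 100000)\n", counter);
        return 0;
    }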

Assessment

This module can be assessed by assigning programming problems that require students to combine several of the patternlets provided here in a single solution.
