Parallel Computing in the Computer Science Curriculum
CS in Parallel (supported by a grant from NSF-CCLI) provides a resource for CS educators to find, share, and discuss modular teaching materials and computational platform supports.
Concurrent Access to Data Structures in C++ part of Parallel Computing in the Computer Science Curriculum:Modules:Modules Mini-collection
This module enables students to experiment with creating a task-parallel solution to the problem of crawling the web by using C++ with Boost threads and thread-safe data structures available in the Intel Threading Building Blocks (TBB) library. We provide enough code for a sequential solution, which the students can use as the basis for creating a task-parallel threaded solution. We also provide some code to get students started with the threaded solution, because in a typical use case this may be the first threaded program those students have encountered.
Parallel Computing Concepts part of Parallel Computing in the Computer Science Curriculum:Modules:Modules Mini-collection
This concept module, intended for persons with a modest background in CS, introduces a core of parallel computing notions that CS students should know in preparation for the era of manycore computing. These include categories of parallelism (pipelining, data parallelism, task parallelism), parallel speedup and Amdahl's Law, an elementary presentation of concurrency issues (definition and examples of deadlock, identifying race conditions, avoiding races using atomic actions), and selected concurrent programming strategies (message passing, Java-like per-object synchronization). It also provides application notes to motivate and lay the groundwork for subsequent modules, including basic concepts of DISC and applications of DISC in industry. Optional short programming exercises illustrate selected concepts: for example, a program with a predictable and observable race condition may be provided for experimental modification, along with a Java-like synchronized feature that students can then use to prevent the race.
Module Characteristics
Languages Supported: Any
Relevant Parallel Computing Concepts: Data Parallelism, Task Parallelism, Message Passing, Shared Memory, Distributed
Recommended Teaching Level: Intermediate, Advanced
Possible Course Use: Hardware Design, Software Design, Algorithm Design, Parallel Computing Systems, Programming Languages
Multicore Programming with OpenMP part of Parallel Computing in the Computer Science Curriculum:Modules:Modules Mini-collection
Intel Corporation has set up a special remote system, called the Manycore Testing Lab (MTL), that allows faculty and students to work with computers with large numbers of cores. In this lab, we will create a program that intentionally uses multicore parallelism, upload and run it on the MTL, and explore the issues in parallelism and concurrency that arise. This module uses the OpenMP parallel platform package; an alternative version of this module uses Intel's Threading Building Blocks for those more familiar with that library.
Module Characteristics
Languages Supported: C++
Relevant Parallel Computing Concepts: Shared Memory
Recommended Teaching Level: Introductory, Intermediate