
Modules

Want to know more about modules?

Find out more about modules and their contents.

Have a module of your own?

Contribute to the site by submitting your own module. Your submission will be reviewed by CS In Parallel to determine what categories it should be listed under. After that process, it will become available to all viewers of this site.

The Module Collection




The collection currently contains 17 modules.

Pandemic Exemplar
Elizabeth Shoop
Sequential and parallel versions of a Monte Carlo simulation of the spread of infectious disease are presented in detail. Students can run the code and examine the performance of the sequential and parallel versions.

Parallel Computing Concepts
Richard Brown
This concept module will introduce a core of parallel computing notions that CS majors and minors should know in preparation for the era of manycore computing, including parallelism categories, concurrency issues and solutions, and programming strategies.

Concurrent Access to Data Structures
Professor Libby Shoop, Macalester College
This module enables students to experiment with creating a task-parallel solution to the problem of crawling the web by using Java threads and thread-safe data structures available in the java.util.concurrent package.

GPU Programming
Elizabeth Shoop; Yu Zhao
In this module, we will learn how to create programs that intentionally use the GPU for execution. More specifically, we will learn how to solve parallel problems more efficiently by writing programs in the CUDA C programming language and executing them on GPUs based on the CUDA architecture.
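As a taste of what such a module covers, here is a minimal CUDA C sketch of vector addition; the kernel name, sizes, and use of managed memory are illustrative choices, not taken from the module's materials:

    #include <cstdio>
    #include <cuda_runtime.h>

    // Each thread adds one pair of elements.
    __global__ void vecAdd(const float *a, const float *b, float *c, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) c[i] = a[i] + b[i];
    }

    int main() {
        const int n = 1 << 20;
        size_t bytes = n * sizeof(float);
        float *a, *b, *c;
        // Unified (managed) memory keeps the sketch short.
        cudaMallocManaged(&a, bytes);
        cudaMallocManaged(&b, bytes);
        cudaMallocManaged(&c, bytes);
        for (int i = 0; i < n; i++) { a[i] = 1.0f; b[i] = 2.0f; }

        int threads = 256;
        int blocks = (n + threads - 1) / threads;
        vecAdd<<<blocks, threads>>>(a, b, c, n);
        cudaDeviceSynchronize();

        printf("c[0] = %f\n", c[0]);   // expect 3.0
        cudaFree(a); cudaFree(b); cudaFree(c);
        return 0;
    }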

Multi-core programming with Intel's Manycore Testing Lab (using Threading Building Blocks)
Professor Richard Brown, St. Olaf College
Intel Corporation has set up a special remote system, called the Manycore Testing Lab (MTL), that allows faculty and students to work with computers that have many cores. In this lab, we will create a program that intentionally uses multi-core parallelism, upload and run it on the MTL, and explore the issues in parallelism and concurrency that arise.
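For flavor, here is a minimal Threading Building Blocks sketch of the kind of loop-level parallelism such a lab exercises; the vector contents and loop body are invented for illustration:

    #include <vector>
    #include <cstdio>
    #include <tbb/parallel_for.h>
    #include <tbb/blocked_range.h>

    int main() {
        std::vector<double> v(1000000, 1.0);
        // TBB splits the index range into chunks and runs them on worker threads.
        tbb::parallel_for(tbb::blocked_range<size_t>(0, v.size()),
            [&](const tbb::blocked_range<size_t> &r) {
                for (size_t i = r.begin(); i != r.end(); ++i)
                    v[i] = v[i] * 2.0 + 1.0;
            });
        std::printf("v[0] = %f\n", v[0]);  // expect 3.0
        return 0;
    }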

Parallel Sorting
Elizabeth Shoop
This module, targeted for algorithms and data structures courses, examines the theoretical PRAM model and its use when designing a parallel version of the mergesort algorithm.
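The PRAM analysis itself is the module's focus, but the shape of a parallel mergesort can be sketched with standard C++ facilities; here std::async stands in for PRAM processors, and the depth cutoff is an illustrative guess, not part of the module:

    #include <algorithm>
    #include <future>
    #include <vector>
    #include <cstdio>

    // Sort [first, last) by recursively sorting the halves in parallel,
    // then merging, until the recursion is deep enough to go serial.
    void parallelMergeSort(std::vector<int> &v, size_t first, size_t last, int depth) {
        if (last - first < 2) return;
        size_t mid = first + (last - first) / 2;
        if (depth <= 0) {                      // serial cutoff
            std::sort(v.begin() + first, v.begin() + last);
            return;
        }
        auto left = std::async(std::launch::async,
                               parallelMergeSort, std::ref(v), first, mid, depth - 1);
        parallelMergeSort(v, mid, last, depth - 1);
        left.wait();
        std::inplace_merge(v.begin() + first, v.begin() + mid, v.begin() + last);
    }

    int main() {
        std::vector<int> v = {5, 2, 9, 1, 7, 3, 8, 4, 6, 0};
        parallelMergeSort(v, 0, v.size(), 2);   // depth 2 -> up to 4 tasks
        for (int x : v) std::printf("%d ", x);
        std::printf("\n");
        return 0;
    }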

Concurrent Access to Data Structures in C++
Richard Brown
This module enables students to experiment with creating a task-parallel solution to the problem of crawling the web by using C++ with Boost threads and thread-safe data structures available in the Intel Threading Building Blocks library.
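A minimal stand-in for that pattern is sketched below, using std::thread and a mutex-guarded std::queue in place of Boost threads and TBB's concurrent containers; the "URLs" and worker count are made up for illustration:

    #include <iostream>
    #include <mutex>
    #include <queue>
    #include <string>
    #include <thread>
    #include <vector>

    std::queue<std::string> workQueue;   // shared pool of "pages" to process
    std::mutex queueMutex;               // guards all access to workQueue

    void worker(int id) {
        while (true) {
            std::string url;
            {
                std::lock_guard<std::mutex> lock(queueMutex);
                if (workQueue.empty()) return;   // no more work
                url = workQueue.front();
                workQueue.pop();
            }
            // A real crawler would fetch the page and enqueue its links here.
            std::lock_guard<std::mutex> lock(queueMutex);
            std::cout << "worker " << id << " processed " << url << "\n";
        }
    }

    int main() {
        for (int i = 0; i < 8; i++)
            workQueue.push("http://example.com/page" + std::to_string(i));
        std::vector<std::thread> pool;
        for (int i = 0; i < 3; i++) pool.emplace_back(worker, i);
        for (auto &t : pool) t.join();
        return 0;
    }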

Distributed Computing Fundamentals
Elizabeth Shoop
Message Passing Interface (MPI) is a programming model widely used for parallel programming on a cluster. Using MPI, programmers can divide a large dataset into segments, perform the same computation on each segment, and distribute those tasks to multiple processing units within the cluster. In this module, we will learn important and common MPI functions as well as techniques used in 'distributed memory' programming on clusters of networked computers.
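A minimal MPI sketch of that divide-and-distribute idea follows; the array size and the per-rank computation (a local sum) are invented for illustration:

    #include <mpi.h>
    #include <cstdio>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);
        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        const int perRank = 4;             // elements handled by each process
        int data[64];                      // room for up to 16 processes
        if (rank == 0)
            for (int i = 0; i < perRank * size; i++) data[i] = i;

        // Divide the data: each process receives its own segment.
        int segment[perRank];
        MPI_Scatter(data, perRank, MPI_INT, segment, perRank, MPI_INT,
                    0, MPI_COMM_WORLD);

        // Same computation on every segment (here, a local sum)...
        int localSum = 0;
        for (int i = 0; i < perRank; i++) localSum += segment[i];

        // ...then combine the partial results on the root process.
        int totalSum = 0;
        MPI_Reduce(&localSum, &totalSum, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);
        if (rank == 0) printf("total = %d\n", totalSum);

        MPI_Finalize();
        return 0;
    }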

Heterogeneous Computing
Elizabeth Shoop
Message Passing Interface (MPI) is a programming model widely used for parallel programming on a cluster. NVIDIA's CUDA, a parallel computing platform and programming model, uses the GPU for parallel computation. This module will explore ways to combine these two parallel computing platforms to make parallel computation more efficient.

Drug Design Exemplar
Richard Brown
An important problem in the biological sciences is that of drug design: finding small molecules, called ligands, that are good candidates for use as drugs. We introduce the problem and provide several different parallel solutions, in the context of parallel program design patterns.

Multicore Programming with OpenMP
Richard Brown; Elizabeth Shoop
In this lab, we will create a program that intentionally uses multi-core parallelism, upload and run it on the MTL, and explore the issues in parallelism and concurrency that arise. This module uses OpenMP.
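The flavor of such a lab, in one short OpenMP fragment; the loop and problem size are invented for illustration:

    #include <cstdio>
    #include <omp.h>

    int main() {
        const int n = 1000000;
        double sum = 0.0;
        // Fork a team of threads; each computes part of the sum, and the
        // reduction clause combines the partial results safely.
        #pragma omp parallel for reduction(+:sum)
        for (int i = 1; i <= n; i++)
            sum += 1.0 / i;
        printf("sum = %f using up to %d threads\n", sum, omp_get_max_threads());
        return 0;
    }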

Map-reduce Computing for Introductory Students using WebMapReduce
Professor Richard Brown, St. Olaf College; Professor Libby Shoop, Macalester College
This module emphasizes data-parallel problems and solutions, the so-called 'embarrassingly parallel' problems where processing of input data can easily be split among several parallel processes. Students use a web application called WebMapReduce (WMR) to write map and reduce functions that operate on portions of a massive dataset in parallel.
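WMR accepts mapper and reducer functions in several languages; the sketch below only shows the shape of word count, the classic map-reduce example, as a sequential C++ simulation with invented input lines, not WMR's actual interface:

    #include <iostream>
    #include <map>
    #include <sstream>
    #include <string>
    #include <vector>

    // "Map" phase: emit a (word, 1) pair for every word in a line.
    std::vector<std::pair<std::string,int>> mapper(const std::string &line) {
        std::vector<std::pair<std::string,int>> pairs;
        std::istringstream in(line);
        std::string word;
        while (in >> word) pairs.push_back({word, 1});
        return pairs;
    }

    int main() {
        std::vector<std::string> lines = {"the quick brown fox", "the lazy dog"};
        // The framework would shuffle pairs by key; std::map plays that role here.
        std::map<std::string,int> grouped;
        for (const auto &line : lines)
            for (const auto &p : mapper(line))
                grouped[p.first] += p.second;       // "reduce": sum the counts
        for (const auto &kv : grouped)
            std::cout << kv.first << " " << kv.second << "\n";
        return 0;
    }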

Concurrency and Map-Reduce Strategies in Various Programming Languages
Professor Richard Brown, St. Olaf College
This concept module explores how concurrency and parallelism have been established in programming languages and how one can implement map-reduce in several high-level programming languages taught in a CS curriculum, including Scheme, C++, Java, and Python.

Patternlets in Parallel Programming
Material originally created by Joel Adams, Calvin College; compiled by Libby Shoop, Macalester College
Short, simple C programming examples of basic shared-memory programming patterns using OpenMP and basic message-passing patterns using MPI.

Timing Operations in CUDA
Joel Adams, Calvin College, and Jeffrey Lyman, Macalester College
Through completion of vector addition, multiplication, square root, and squaring programs, students will gain an understanding of when the overhead of creating threads and copying memory is worth the speedup of GPU coding.
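The module's central measurement can be sketched with CUDA events; the kernel being timed here is a placeholder squaring kernel, and the sizes are illustrative:

    #include <cstdio>
    #include <cuda_runtime.h>

    __global__ void square(float *a, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) a[i] = a[i] * a[i];
    }

    int main() {
        const int n = 1 << 20;
        float *a;
        cudaMallocManaged(&a, n * sizeof(float));
        for (int i = 0; i < n; i++) a[i] = 2.0f;

        // CUDA event timers bracket the work we want to measure.
        cudaEvent_t start, stop;
        cudaEventCreate(&start);
        cudaEventCreate(&stop);

        cudaEventRecord(start);
        square<<<(n + 255) / 256, 256>>>(a, n);
        cudaEventRecord(stop);
        cudaEventSynchronize(stop);

        float ms = 0.0f;
        cudaEventElapsedTime(&ms, start, stop);
        printf("kernel took %.3f ms\n", ms);

        cudaEventDestroy(start);
        cudaEventDestroy(stop);
        cudaFree(a);
        return 0;
    }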

Concept: Data Decomposition Pattern
Elizabeth Shoop
This module consists of reading material and code examples that depict the data decomposition pattern in parallel programming, using a small-sized example of vector addition (sometimes called the "Hello, World" of parallel programming).
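A bare-bones version of that decomposition, with the chunk arithmetic written out explicitly; the thread count and vector size are illustrative:

    #include <cstdio>
    #include <thread>
    #include <vector>

    // Each thread adds its contiguous chunk of the vectors.
    void addChunk(const std::vector<float> &a, const std::vector<float> &b,
                  std::vector<float> &c, size_t begin, size_t end) {
        for (size_t i = begin; i < end; i++) c[i] = a[i] + b[i];
    }

    int main() {
        const size_t n = 1000, numThreads = 4;
        std::vector<float> a(n, 1.0f), b(n, 2.0f), c(n, 0.0f);

        // Decompose the data: split the index range [0, n) into equal chunks.
        std::vector<std::thread> threads;
        for (size_t t = 0; t < numThreads; t++) {
            size_t begin = t * n / numThreads;
            size_t end = (t + 1) * n / numThreads;
            threads.emplace_back(addChunk, std::cref(a), std::cref(b),
                                 std::ref(c), begin, end);
        }
        for (auto &th : threads) th.join();
        std::printf("c[0] = %f, c[%zu] = %f\n", c[0], n - 1, c[n - 1]);
        return 0;
    }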

Visualize Numerical Integration
Elizabeth Shoop
This is an activity with working code supplied that enables students to see how various forms of the data decomposition pattern map processing units to computations.


