Want to know more about modules?
Find out more about modules and their contents.
Have a module of your own?
Contribute to the site by submitting your own module. Your submission will be reviewed by CS In Parallel to determine what categories it should be listed under. After that process, it will become available to all viewers of this site.
The Module Collection
Language: showing only C
Results 1 - 12 of 12 matches
Multicore Programming with OpenMP
Richard Brown; Elizabeth Shoop
In this lab, we will create a program that intentionally uses multi-core parallelism, upload and run it on Intel's Manycore Testing Lab (MTL), and explore the issues in parallelism and concurrency that arise. This module uses OpenMP.
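For readers who have not seen OpenMP before, here is a minimal sketch of the fork-join style this lab introduces (illustrative only, not the module's actual code):

    #include <stdio.h>
    #include <omp.h>

    int main(void) {
        #pragma omp parallel              /* fork a team of threads */
        {
            int id = omp_get_thread_num();
            int n  = omp_get_num_threads();
            printf("Hello from thread %d of %d\n", id, n);
        }                                 /* implicit join */
        return 0;
    }

Compile with an OpenMP flag such as gcc -fopenmp; the number of threads can be set with the OMP_NUM_THREADS environment variable.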
Multi-core programming with Intel's Manycore Testing Lab (using Threading Building Blocks)
Professor Richard Brown, St. Olaf College
Intel Corporation has set up a special remote system that allows faculty and students to work with computers with lots of cores, called the Manycore Testing Lab (MTL). In this lab, we will create a program that intentionally uses multi-core parallelism, upload and run it on the MTL, and explore the issues in parallelism and concurrency that arise.
GPU Programming
Elizabeth Shoop; Yu Zhao
In this module, we will learn how to create programs that intentionally use the GPU for execution. More specifically, we will learn how to solve parallel problems more efficiently by writing programs in the CUDA C programming language and then executing them on GPUs based on the CUDA architecture.
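As a taste of what CUDA C looks like, here is a hedged sketch of vector addition, the canonical first CUDA program (not the module's own code; it assumes a CUDA-capable GPU with unified-memory support):

    #include <stdio.h>
    #include <cuda_runtime.h>

    __global__ void vecAdd(const float *a, const float *b, float *c, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;  /* one element per thread */
        if (i < n) c[i] = a[i] + b[i];
    }

    int main(void) {
        const int n = 1 << 20;
        size_t bytes = n * sizeof(float);
        float *a, *b, *c;
        /* unified memory keeps the sketch short; cudaMalloc/cudaMemcpy also work */
        cudaMallocManaged(&a, bytes);
        cudaMallocManaged(&b, bytes);
        cudaMallocManaged(&c, bytes);
        for (int i = 0; i < n; i++) { a[i] = 1.0f; b[i] = 2.0f; }

        int threads = 256;
        int blocks = (n + threads - 1) / threads;      /* round up to cover n */
        vecAdd<<<blocks, threads>>>(a, b, c, n);
        cudaDeviceSynchronize();

        printf("c[0] = %f\n", c[0]);                   /* expect 3.0 */
        cudaFree(a); cudaFree(b); cudaFree(c);
        return 0;
    }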
Distributed Computing Fundamentals
Message Passing Interface (MPI) is a programming model widely used for parallel programming on a cluster. Using MPI, programmers can design methods that divide large data sets into segments, perform the same computing task on each segment, and then distribute those tasks to multiple processing units within the cluster. In this module, we will learn important and common MPI functions, as well as techniques used in 'distributed memory' programming on clusters of networked computers.
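That core idea (scatter segments of a data set, compute on each segment, and combine the results) can be sketched in a few MPI calls; this is an illustrative example, not code taken from the module:

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char *argv[]) {
        MPI_Init(&argc, &argv);
        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        const int per_proc = 4;            /* segment length per process */
        int data[64];                      /* assumes size * per_proc <= 64 */
        if (rank == 0)
            for (int i = 0; i < size * per_proc; i++) data[i] = i;

        int segment[4];
        MPI_Scatter(data, per_proc, MPI_INT, segment, per_proc, MPI_INT,
                    0, MPI_COMM_WORLD);    /* divide the data among processes */

        int local = 0, total = 0;
        for (int i = 0; i < per_proc; i++) local += segment[i];
        MPI_Reduce(&local, &total, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

        if (rank == 0) printf("total = %d\n", total);
        MPI_Finalize();
        return 0;
    }

Build with mpicc and launch with, for example, mpirun -np 4.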
Heterogeneous Computing
Message Passing Interface (MPI) is a programming model widely used for parallel programming on a cluster. NVIDIA®'s CUDA, a parallel computing platform and programming model, uses GPUs to solve parallel computation problems. This module will explore ways to combine these two parallel computing platforms to make parallel computation more efficient.
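A common first step in such hybrid programs is simply to map MPI processes onto GPUs. A hedged sketch of that idea (not the module's code):

    #include <mpi.h>
    #include <cuda_runtime.h>

    int main(int argc, char *argv[]) {
        MPI_Init(&argc, &argv);
        int rank, ngpus;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        cudaGetDeviceCount(&ngpus);
        cudaSetDevice(rank % ngpus);   /* map ranks onto available GPUs */
        /* ... each rank now runs CUDA kernels on its own device,
               and MPI moves data between the ranks ... */
        MPI_Finalize();
        return 0;
    }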
Patternlets in Parallel Programming
Material originally created by Joel Adams, Calvin College; compiled by Libby Shoop, Macalester College
Short, simple C programming examples of basic shared memory programming patterns using OpenMP and basic distributed memory patterns using MPI.
Pandemic Exemplar
Sequential and parallel versions of a Monte Carlo simulation of the spread of infectious disease are presented in detail. Students can run the code and compare the performance of the sequential and parallel versions.
Timing Operations in CUDA
Joel Adams, Calvin College, and Jeffrey Lyman, Macalester College
Through completion of vector addition, multiplication, square root, and squaring programs, students will gain an understanding of when the overhead of creating threads and copying memory is worth the speedup of GPU coding.
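The standard instrument for such measurements is a pair of CUDA events. Here is a sketch of the pattern, written as a fragment that would sit inside a host function, reusing the hypothetical vecAdd kernel and launch parameters from the GPU Programming sketch above:

    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    cudaEventRecord(start);
    vecAdd<<<blocks, threads>>>(a, b, c, n);   /* the operation being timed */
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);                /* wait for the kernel to finish */

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);    /* elapsed GPU time in milliseconds */
    printf("kernel time: %.3f ms\n", ms);

    cudaEventDestroy(start);
    cudaEventDestroy(stop);

Comparing such timings against a plain CPU loop over the same data is what reveals when the transfer and launch overhead pays off.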
Concept: Data Decomposition Pattern
This module consists of reading material and code examples that depict the data decomposition pattern in parallel programming, using a small-sized example of vector addition (sometimes called the "Hello, World" of parallel programming).
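To make the pattern concrete, here are the two classic decompositions sketched as plain C functions, where processing unit p of P handles part of a vector addition (illustrative code, not the module's):

    /* 1. Block decomposition: unit p handles one contiguous slice. */
    void add_block(const float *a, const float *b, float *c, int n, int p, int P) {
        int chunk = (n + P - 1) / P;            /* ceiling division */
        int lo = p * chunk;
        int hi = (lo + chunk < n) ? lo + chunk : n;
        for (int i = lo; i < hi; i++) c[i] = a[i] + b[i];
    }

    /* 2. Cyclic decomposition: unit p handles elements p, p+P, p+2P, ... */
    void add_cyclic(const float *a, const float *b, float *c, int n, int p, int P) {
        for (int i = p; i < n; i += P) c[i] = a[i] + b[i];
    }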
Visualize Numerical Integration
This is an activity with working code supplied that enables students to see how various forms of the data decomposition pattern map processing units to computations.
Instructor Example: Optimizing CUDA for GPU Architecture
This module, designed for instructors to use as an example, explains how to take advantage of the CUDA GPU architecture to provide maximum speedup for your CUDA applications using a Mandelbrot set generator as an example.
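Much of that tuning revolves around launch geometry. A hedged fragment showing the kind of 2D grid configuration involved (the kernel name and signature here are hypothetical, not the module's):

    dim3 block(16, 16);                            /* 256 threads per block */
    dim3 grid((width  + block.x - 1) / block.x,    /* enough blocks to      */
              (height + block.y - 1) / block.y);   /* cover every pixel     */
    mandelbrotKernel<<<grid, block>>>(pixels, width, height);

Choosing block dimensions that keep the GPU's multiprocessors fully occupied is the sort of trade-off the module works through with its Mandelbrot generator.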
Monte Carlo Simulations: Parallelism in CS1/CS2
Use Monte Carlo Simulations in CS1/CS2 to expose students to parallel programming with OpenMP.
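The flavor of such an exercise can be sketched as a pi estimation with an OpenMP reduction (illustrative only; the module's own examples may differ). rand_r(), a POSIX function, is used so that each thread has its own random-number state:

    #include <stdio.h>
    #include <stdlib.h>
    #include <omp.h>

    int main(void) {
        const long trials = 10000000;
        long hits = 0;

        #pragma omp parallel reduction(+:hits)
        {
            unsigned int seed = 1234u + omp_get_thread_num();  /* per-thread seed */
            #pragma omp for
            for (long i = 0; i < trials; i++) {
                double x = rand_r(&seed) / (double)RAND_MAX;
                double y = rand_r(&seed) / (double)RAND_MAX;
                if (x * x + y * y <= 1.0) hits++;   /* dart landed in the circle */
            }
        }
        printf("pi ~= %f\n", 4.0 * hits / (double)trials);
        return 0;
    }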