Want to know more about modules?
Learn what modules are and what they contain.
Have a module of your own?
Contribute to the site by submitting your own module. Your submission will be reviewed by CS In Parallel to determine which categories it should be listed under. After that review, it will become available to all viewers of this site.
The Module Collection
Parallel Computing Concepts
This concept module will introduce a core of parallel computing notions that CS majors and minors should know in preparation for the era of manycore computing, including parallelism categories, concurrency issues and solutions, and programming strategies.
Concurrent Access to Data Structures
Professor Libby Shoop, Macalester College
This module enables students to experiment with creating a task-parallel solution to the problem of crawling the web by using Java threads and thread-safe data structures available in the java.util.concurrent package.
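The module's own code is Java, built on the thread-safe collections in java.util.concurrent; purely as an illustration of the same task-parallel pattern, here is a minimal C++ sketch (ours, not the module's) in which worker threads drain a shared, mutex-protected queue of placeholder URLs. Compile with a C++11 compiler and a threading flag such as -pthread.

    #include <iostream>
    #include <mutex>
    #include <queue>
    #include <string>
    #include <thread>
    #include <vector>

    // Shared work queue protected by a mutex, playing the role that a
    // thread-safe java.util.concurrent collection plays in the module.
    std::queue<std::string> urls;   // placeholder URLs still to "crawl"
    std::mutex urlsLock;

    // Each worker repeatedly takes one URL and processes it.
    void worker(int id) {
        while (true) {
            std::string url;
            {
                std::lock_guard<std::mutex> guard(urlsLock);
                if (urls.empty()) return;   // no work left
                url = urls.front();
                urls.pop();
            }
            std::cout << "thread " << id << " visits " << url << "\n";
        }
    }

    int main() {
        for (int i = 0; i < 8; i++)
            urls.push("http://example.com/page" + std::to_string(i));
        std::vector<std::thread> pool;
        for (int id = 0; id < 4; id++)
            pool.emplace_back(worker, id);
        for (auto& t : pool) t.join();
        return 0;
    }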
Concurrency and Map-Reduce Strategies in Various Programming Languages
Professor Richard Brown, St. Olaf College
This concept module explores how concurrency and parallelism have been established in programming languages and how one can implement map-reduce in several high-level programming languages taught in a CS curriculum, including Scheme, C++, Java, and Python.
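As a taste of the pattern the module generalizes across languages, here is a minimal sequential map-reduce sketch in C++ (one of the module's languages, though this code is ours, not the module's): map each word to its length, then reduce the lengths to a total.

    #include <algorithm>
    #include <cstddef>
    #include <iostream>
    #include <numeric>
    #include <string>
    #include <vector>

    int main() {
        // "Map": transform each input word to an intermediate value (its length).
        std::vector<std::string> words = {"map", "reduce", "in", "parallel"};
        std::vector<std::size_t> lengths(words.size());
        std::transform(words.begin(), words.end(), lengths.begin(),
                       [](const std::string& w) { return w.size(); });

        // "Reduce": combine the intermediate values with an associative operation.
        std::size_t total = std::accumulate(lengths.begin(), lengths.end(),
                                            std::size_t{0});
        std::cout << "total characters: " << total << "\n";  // prints 19
        return 0;
    }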
Multicore Programming with OpenMP
Richard Brown; Elizabeth Shoop
In this lab, we will create a program that intentionally uses multi-core parallelism, upload and run it on Intel's Manycore Testing Lab (MTL), and explore the issues in parallelism and concurrency that arise. This module uses OpenMP.
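For a sense of what such a program looks like (a minimal sketch, not the module's code), here is an OpenMP loop whose iterations are split across cores, with a reduction clause combining the per-thread partial sums. Compile with a flag such as -fopenmp.

    #include <omp.h>
    #include <stdio.h>

    int main(void) {
        const int N = 1000000;
        double sum = 0.0;

        // Distribute loop iterations across cores; OpenMP combines the
        // per-thread partial sums because of the reduction clause.
        #pragma omp parallel for reduction(+:sum)
        for (int i = 0; i < N; i++)
            sum += 1.0 / (i + 1.0);   // partial sums of the harmonic series

        printf("harmonic sum with %d max threads: %f\n",
               omp_get_max_threads(), sum);
        return 0;
    }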
Multi-core programming with Intel's Manycore Testing Lab (using Threading Building Blocks)
Professor Richard Brown, St. Olaf College
Intel Corporation has set up a special remote system, the Manycore Testing Lab (MTL), that gives faculty and students access to computers with many cores. In this lab, we will create a program that intentionally uses multi-core parallelism, upload and run it on the MTL, and explore the issues in parallelism and concurrency that arise. This module uses Intel's Threading Building Blocks (TBB).
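By way of illustration (a sketch under our own assumptions, not the module's code), TBB's parallel_for splits an index range into chunks and schedules them across the available cores. Link with -ltbb.

    #include <tbb/parallel_for.h>
    #include <cstddef>
    #include <iostream>
    #include <vector>

    int main() {
        std::vector<double> a(1000000, 1.0);

        // TBB splits the index range into chunks and schedules them
        // across the available cores.
        tbb::parallel_for(std::size_t{0}, a.size(),
                          [&](std::size_t i) { a[i] = a[i] * 2.0 + 1.0; });

        std::cout << "a[0] = " << a[0] << "\n";  // prints 3
        return 0;
    }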
Map-reduce Computing for Introductory Students using WebMapReduce
Professor Richard Brown, St. Olaf College; Professor Libby Shoop, Macalester College
This module emphasizes data-parallel problems and solutions, the so-called 'embarrassingly parallel' problems where processing of input data can easily be split among several parallel processes. Students use a web application called WebMapReduce (WMR) to write map and reduce functions that operate on portions of a massive dataset in parallel.
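WMR supplies its own per-language emit interface; as a self-contained stand-in (our names, not WMR's API), here is word count written as a map function that emits (word, 1) pairs and a reduce function that sums the counts for one word.

    #include <iostream>
    #include <map>
    #include <sstream>
    #include <string>

    // "Map": emit a (word, 1) pair for every word in one line of input.
    void mapper(const std::string& line,
                std::multimap<std::string, int>& emitted) {
        std::istringstream in(line);
        std::string word;
        while (in >> word) emitted.insert({word, 1});
    }

    // "Reduce": sum the counts emitted for a single word.
    int reducer(const std::string& word,
                const std::multimap<std::string, int>& emitted) {
        int count = 0;
        auto range = emitted.equal_range(word);
        for (auto it = range.first; it != range.second; ++it)
            count += it->second;
        return count;
    }

    int main() {
        std::multimap<std::string, int> emitted;
        mapper("the quick brown fox jumps over the lazy dog", emitted);
        std::cout << "the: " << reducer("the", emitted) << "\n";  // prints 2
        return 0;
    }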
This module, targeted at algorithms and data structures courses, examines the theoretical PRAM model and its use in designing a parallel version of the mergesort algorithm.
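The module's analysis is carried out on the theoretical PRAM model; as one concrete realization of the same divide-and-conquer parallelism (our sketch, with an arbitrary task cutoff), here is mergesort with OpenMP tasks standing in for PRAM processors. Compile with -fopenmp.

    #include <algorithm>
    #include <iostream>
    #include <vector>

    // Recursive mergesort; the two halves are independent, which is the
    // parallelism the PRAM analysis exploits. Here OpenMP tasks stand in
    // for PRAM processors.
    void mergesort(std::vector<int>& a, int lo, int hi) {
        if (hi - lo < 2) return;
        int mid = (lo + hi) / 2;
        #pragma omp task shared(a) if (hi - lo > 1000)
        mergesort(a, lo, mid);
        mergesort(a, mid, hi);
        #pragma omp taskwait          // both halves sorted before merging
        std::inplace_merge(a.begin() + lo, a.begin() + mid, a.begin() + hi);
    }

    int main() {
        std::vector<int> a = {5, 3, 8, 1, 9, 2, 7, 4, 6, 0};
        #pragma omp parallel
        #pragma omp single
        mergesort(a, 0, (int)a.size());
        for (int x : a) std::cout << x << ' ';   // 0 1 2 ... 9
        std::cout << '\n';
        return 0;
    }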
Concurrent Access to Data Structures in C++
This module enables students to experiment with creating a task-parallel solution to the problem of crawling the web by using C++ with Boost threads and thread-safe data structures available in Intel's Threading Building Blocks library.
Elizabeth Shoop; Yu Zhao
In this module, we will learn how to create programs that intentionally use the GPU for execution. More specifically, we will learn how to solve parallel problems more efficiently by writing programs in the CUDA C programming language and executing them on GPUs based on the CUDA architecture.
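A minimal example of the kind of CUDA C program involved (our sketch, using unified memory to stay short, not the module's code): each GPU thread adds one pair of vector elements. Compile with nvcc.

    #include <stdio.h>

    // Each GPU thread adds one pair of elements (data parallelism).
    __global__ void vadd(const float* a, const float* b, float* c, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) c[i] = a[i] + b[i];
    }

    int main(void) {
        const int n = 1 << 20;
        size_t bytes = n * sizeof(float);
        float *a, *b, *c;
        // Unified memory keeps the sketch short; explicit cudaMemcpy also works.
        cudaMallocManaged(&a, bytes);
        cudaMallocManaged(&b, bytes);
        cudaMallocManaged(&c, bytes);
        for (int i = 0; i < n; i++) { a[i] = 1.0f; b[i] = 2.0f; }

        vadd<<<(n + 255) / 256, 256>>>(a, b, c, n);   // one thread per element
        cudaDeviceSynchronize();

        printf("c[0] = %f\n", c[0]);   // prints 3.0
        cudaFree(a); cudaFree(b); cudaFree(c);
        return 0;
    }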
Distributed Computing Fundamentals
Message Passing Interface (MPI) is a programming model widely used for parallel programming on a cluster. Using MPI, programmers can divide large data sets into segments, perform the same computing task on each segment, and distribute those tasks to multiple processing units within the cluster. In this module, we will learn important and common MPI functions as well as techniques used in 'distributed memory' programming on clusters of networked computers.
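A small sketch of that pattern (ours, not the module's code): each MPI process sums its own segment of 1..N, and MPI_Reduce combines the partial results. Compile with mpicc and run with, e.g., mpirun -np 4.

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char** argv) {
        MPI_Init(&argc, &argv);
        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        // Each process computes a partial sum over its own segment of 1..N.
        const long N = 1000000;
        long lo = rank * (N / size) + 1;
        long hi = (rank == size - 1) ? N : lo + (N / size) - 1;
        long local = 0, total = 0;
        for (long i = lo; i <= hi; i++) local += i;

        // Combine the partial results on process 0.
        MPI_Reduce(&local, &total, 1, MPI_LONG, MPI_SUM, 0, MPI_COMM_WORLD);
        if (rank == 0) printf("sum 1..%ld = %ld\n", N, total);

        MPI_Finalize();
        return 0;
    }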
Message Passing Interface (MPI) is a programming model widely used for parallel programming on a cluster. NVIDIA's CUDA, a parallel computing platform and programming model, uses GPUs for parallel computation. This module will explore ways to combine these two parallel computing platforms to make parallel computation more efficient.
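A minimal hybrid sketch (ours, with placeholder data, not the module's code): each MPI process owns one slice of the data and hands it to a CUDA kernel. Building requires both nvcc and an MPI installation.

    #include <mpi.h>
    #include <stdio.h>

    // Each GPU thread scales one element of this process's slice.
    __global__ void scale(float* x, int n, float f) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) x[i] *= f;
    }

    int main(int argc, char** argv) {
        MPI_Init(&argc, &argv);
        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        // Each MPI process fills its own slice, then offloads it to the GPU.
        const int chunk = 1 << 10;
        float* slice;
        cudaMallocManaged(&slice, chunk * sizeof(float));
        for (int i = 0; i < chunk; i++) slice[i] = (float)rank;

        scale<<<(chunk + 255) / 256, 256>>>(slice, chunk, 2.0f);
        cudaDeviceSynchronize();

        printf("rank %d of %d: slice[0] = %f\n", rank, size, slice[0]);
        cudaFree(slice);
        MPI_Finalize();
        return 0;
    }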
Patternlets in Parallel Programming
Material originally created by Joel Adams, Calvin College; compiled by Libby Shoop, Macalester College
Short, simple C programming examples of basic shared-memory programming patterns using OpenMP and basic message-passing patterns using MPI.
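In the spirit of the patternlets (this sketch is ours, not one of theirs), here is the shared-memory SPMD pattern in OpenMP, where every thread runs the same block and identifies itself. Compile with -fopenmp.

    #include <omp.h>
    #include <stdio.h>

    // A patternlet-style example: the SPMD pattern, in which every thread
    // runs the same block and identifies itself by its id.
    int main(void) {
        #pragma omp parallel
        {
            int id = omp_get_thread_num();
            int numThreads = omp_get_num_threads();
            printf("Hello from thread %d of %d\n", id, numThreads);
        }
        return 0;
    }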