OpenMP
Taken from OpenMP's mission statement: "The OpenMP Application Program Interface (API) supports multi-platform shared-memory parallel programming in C/C++ and Fortran on all architectures, including Unix platforms and Windows NT platforms. Jointly defined by a group of major computer hardware and software vendors, OpenMP is a portable, scalable model that gives shared-memory parallel programmers a simple and flexible interface for developing parallel applications for platforms ranging from the desktop to the supercomputer."
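To make the directive style concrete, here is a minimal sketch (not taken from the OpenMP site; the loop and problem size are arbitrary) in which a serial loop is parallelized by a single pragma, with the reduction clause combining per-thread partial sums. It assumes an OpenMP-capable compiler, e.g. g++ -fopenmp sum.cpp:

    #include <omp.h>
    #include <cstdio>

    int main() {
        const int N = 1000000;
        double sum = 0.0;

        // One pragma distributes iterations across threads; reduction(+:sum)
        // gives each thread a private partial sum and adds them at the end.
        #pragma omp parallel for reduction(+:sum)
        for (int i = 0; i < N; ++i) {
            sum += 1.0 / (i + 1);
        }

        std::printf("harmonic sum = %f (max threads: %d)\n",
                    sum, omp_get_max_threads());
        return 0;
    }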
MPICH2
MPICH2 is a freely available, high-performance implementation of MPI (the Message Passing Interface), covering both the MPI-1 and MPI-2 standards, and a free alternative to Open MPI. It provides a message-passing API for parallel programming in C and C++, together with an implementation that efficiently supports different computation and communication platforms. It has been tested on several platforms, including Linux (IA32 and x86-64), Mac OS X (PowerPC and Intel), Solaris (32- and 64-bit), and Windows.
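For illustration, a minimal sketch of an MPI program written against the C API that MPICH2 implements (the reduction performed here is arbitrary); it would typically be built with mpicxx and launched with, say, mpiexec -n 4 ./a.out:

    #include <mpi.h>
    #include <cstdio>

    int main(int argc, char** argv) {
        MPI_Init(&argc, &argv);                 // start the MPI runtime

        int rank = 0, size = 0;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);   // this process's id
        MPI_Comm_size(MPI_COMM_WORLD, &size);   // total number of processes

        // Each process contributes one value; MPI_Reduce sums them on rank 0.
        int local = rank, total = 0;
        MPI_Reduce(&local, &total, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

        if (rank == 0)
            std::printf("sum of ranks 0..%d = %d\n", size - 1, total);

        MPI_Finalize();
        return 0;
    }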
java.util.concurrent
The standard Java package java.util.concurrent contains utility classes useful in concurrent programming. The package includes a few small standardized extensible frameworks, as well as classes that provide functionality which would otherwise be tedious or difficult to implement.
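As a sketch of one such framework (the class name and work division below are hypothetical), the example uses ExecutorService, Callable, and Future from java.util.concurrent to compute partial sums on a thread pool:

    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Future;

    public class SquareSum {
        public static void main(String[] args) throws Exception {
            // Pool sized to the machine; the framework manages the threads.
            ExecutorService pool = Executors.newFixedThreadPool(
                    Runtime.getRuntime().availableProcessors());

            // Submit one Callable per chunk of work; each returns a partial sum.
            List<Future<Long>> partials = new ArrayList<>();
            for (int chunk = 0; chunk < 4; chunk++) {
                final int lo = chunk * 250_000 + 1;
                final int hi = (chunk + 1) * 250_000;
                partials.add(pool.submit(() -> {
                    long s = 0;
                    for (int i = lo; i <= hi; i++) s += (long) i * i;
                    return s;
                }));
            }

            long total = 0;
            for (Future<Long> f : partials) total += f.get(); // blocks until done
            System.out.println("sum of squares 1..1,000,000 = " + total);

            pool.shutdown();
        }
    }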
Intel® Threading Building Blocks
Threading Building Blocks (TBB), created by Intel®, offers an approach to expressing parallelism in a C++ program. TBB is a library that helps programmers exploit multi-core processor performance without being experts on threading. It provides higher-level, task-based parallelism that abstracts platform details and threading mechanisms for scalability and performance.
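For illustration, a minimal sketch of that task-based style (the loop body is arbitrary, and the build line, e.g. g++ sum.cpp -ltbb, is an assumption): tbb::parallel_reduce splits the iteration range into tasks that TBB's scheduler maps onto worker threads, with no explicit thread management.

    #include <tbb/parallel_reduce.h>
    #include <tbb/blocked_range.h>
    #include <cstdio>

    int main() {
        const int N = 1000000;

        double sum = tbb::parallel_reduce(
            tbb::blocked_range<int>(0, N),
            0.0,                                      // identity of the reduction
            [](const tbb::blocked_range<int>& r, double acc) {
                for (int i = r.begin(); i != r.end(); ++i)
                    acc += 1.0 / (i + 1);             // accumulate this subrange
                return acc;
            },
            [](double a, double b) { return a + b; }  // combine partial sums
        );

        std::printf("harmonic sum = %f\n", sum);
        return 0;
    }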