Parallel Programming: Introduction to MPI
This course is part of the Scientific Computing series.
This is a simple introduction to using MPI to write parallel programs to run on clusters and multi-CPU systems, for the purposes of "high-performance computing". It will cover the principles of MPI, and teach the use of its basic facilities (i.e. the ones that are used in most HPC applications), so that attendees will be able to write serious programs using it. It will describe other features that may be useful, but will not teach their use. Requests for particular coverage are welcome, but cannot be promised.
Prerequisites
- Significant programming experience with production code in Fortran, C or C++; the course will assume that attendees are reasonably fluent in their chosen language, and have experience with adding diagnostic output statements to their programs for debugging. Participants should have attended "Parallel Programming: Options and Design".
- Basic knowledge of the Unix command line, as might be gleaned from the "Introduction to Unix" course. Those attending should also be able to use a plain-text editor (e.g. emacs, gedit, pico, vi) on a Unix system, as covered in the Emacs and Vi introductions.
Topics covered
- Purposes and basic design of MPI
- MPI environment, communicators etc.
- Simple point-to-point communication
- Collective communication
- Non-blocking and more advanced point-to-point
- Error handling, debugging and tuning
- The MPI progress model
- Other facilities in MPI
Format
A mixture of presentations, demonstrations and practicals.
Software
gfortran and, preferably, OpenMPI on PWF Linux
- MPI is used in essentially the same way under Unix, Microsoft Windows and other systems, so the course is equally relevant to users of other systems who want to learn MPI.
- More information may be found at http://www-uxsup.csx.cam.ac.uk/courses/MPI
Three full-day sessions