Students who successfully complete the course should be able to:
- Demonstrate competence in writing parallel programs using a variety of methods.
- Write hybrid code that uses MPI and shared-memory methods together.
- Select an appropriate parallelization method for a given problem.
- Demonstrate knowledge of techniques for analyzing and optimizing existing parallel code.
Structure and Content
Strong parallel programming skills are crucial for developing code in modern scientific computing. In this course you will learn how to write efficient parallel programs using both distributed- and shared-memory techniques. Building on your existing programming skills, you will use the Message Passing Interface (MPI) to develop basic parallel programs in the first semester. In the second semester you will use several shared-memory techniques, including OpenMP and pthreads, as well as advanced features of MPI such as one-sided communication.