In this course we'll study fundamental concepts and programming techniques for parallel and distributed computing (PDC for short), using readily-available shared-memory multicore processors (the Linux Lab computers) and inexpensive Raspberry Pi computers. We will explore the increasing importance and pervasiveness of PDC in historical and modern computer systems, how specific algorithms and programs can be developed to take advantage of parallelism, and ways to measure and improve their performance.

A parallel computation leverages multiple processors to increase performance in some measurable way (such as throughput, speedup, or scaling), for example by partitioning a large dataset across processors. A distributed computation similarly harnesses the power of multiple cooperating computers, for example over a network. Modern computing systems blur the distinction between these two approaches. We'll examine the underlying building blocks of multiprocessing, multithreading, and message passing, and how these have been adapted to current architectures.



Stephen P. Carl
scarl AT sewanee DOT edu