Introduction to advanced computing systems: HPC vs. HTC.
Hardware architectures: clusters, MPP, hybrid architectures.
System software used in HPC: filesystems, libraries, resource management and job allocation. Trends in supercomputing.
Parallel computing and its importance. Main application domains. Paradigms of parallel computing: shared memory and distributed memory. Measuring the efficiency of parallel algorithms: speedup and Amdahl's law.
OpenMP programming: the fork-join model. Parallel regions. Parallel loops, collective operations and barriers. Private and shared variables. Data race problems.
MPI. Parallelization techniques: data decomposition and domain decomposition. Master-slave model for data distribution and collection. MPI communication types. Collective operations for data movement and computation. Communicators and communication topologies. Creation of derived data types.
Applications to linear algebra problems and to the numerical solution of the Poisson equation.
Reference bibliography
Using MPI: Portable Parallel Programming with the Message-Passing Interface, 2nd Edition, William Gropp, Ewing Lusk and Anthony Skjellum, MIT Press.
Using MPI-2: Advanced Features of the Message-Passing Interface, William Gropp, Ewing Lusk and Rajeev Thakur, MIT Press.
Using OpenMP: Portable Shared Memory Parallel Programming, Barbara Chapman, Gabriele Jost and Ruud van der Pas, MIT Press.
Parallel Programming with MPI, P. Pacheco, Morgan Kaufmann Publishers, 1997.
Numerical Linear Algebra for High-Performance Computers, Jack J. Dongarra, Iain S. Duff, Danny C. Sorensen and Henk A. van der Vorst, SIAM.
The Sourcebook of Parallel Computing, Jack Dongarra, Geoffrey Fox, Ken Kennedy, Linda Torczon, William Gropp, Ian Foster and Andy White (Eds.), Morgan Kaufmann.