{
"cells": [
{
"cell_type": "markdown",
"id": "7606d30a",
"metadata": {},
"source": [
"
\n",
"\n",
"### Programming large-scale parallel systems"
]
},
{
"cell_type": "markdown",
"id": "4ac1e5d9",
"metadata": {},
"source": [
"# Intro to MPI (point-to-point)"
]
},
{
"cell_type": "markdown",
"id": "a341be2e",
"metadata": {},
"source": [
"## Contents\n",
"\n",
"\n",
"In this notebook, we will learn the basics of parallel computing using the Message Passing Interface (MPI) from Julia. In particular, we will learn:\n",
"\n",
"- How to use point-to-point communication directives\n",
"- Which are the pros and cons of several types of send and receive functions\n",
"- Which are the common pitfalls when using point-to-point directives\n",
"\n"
]
},
{
"cell_type": "markdown",
"id": "8862079b",
"metadata": {},
"source": [
"## What is MPI ?\n",
"\n",
"- MPI stands for the \"Message Passing Interface\"\n",
"- It is a standardized library specification for communication between parallel processes in distributed-memory systems.\n",
"- It is the gold-standard for distributed computing in HPC systems since the 90s\n",
"- It is huge: the MPI standard has more than 1k pages (see https://www.mpi-forum.org/docs/mpi-4.0/mpi40-report.pdf)\n",
"- There are several implementations of this standard (OpenMPI, MPICH, IntelMPI)\n",
"- The interface is in C and FORTRAN (C++ was deprecated)\n",
"- There are Julia bindings via the package MPI.jl https://github.com/JuliaParallel/MPI.jl"
]
},
{
"cell_type": "markdown",
"id": "7c31907f",
"metadata": {},
"source": [
"### Before starting this notebook\n",
"\n",
"Read this paper to get an overview of the history and rationale behind MPI:\n",
"\n",
"J.J. Dongarra, S.W. Otto, M. Snir, and D. Walker, David. A message passing standard for MPP and workstations, *Commun. ACM*, 39(7), 84–90, 1996. DOI: [10.1145/233977.234000](https://doi.org/10.1145/233977.234000).\n",
"\n"
]
},
{
"cell_type": "markdown",
"id": "99c6febb",
"metadata": {},
"source": [
"### What is MPI.jl ?\n",
"\n",
"We will access MPI via the Julia bindings provided by the [`MPI.jl`]( https://github.com/JuliaParallel/MPI.jl) package. It is worth noting that:\n",
"\n",
"- MPI is not a Julia implementation of the MPI standard\n",
"- It is just a wrapper to the C interface of MPI.\n",
"- You need a C MPI installation in your system (MPI.jl downloads one for you when needed).\n",
"- On a cluster (e.g. DAS-5), you want you use the MPI installation already available in the system.\n",
"\n",
"\n",
"### Why MPI.jl?\n",
"\n",
"MPI.jl provides a convenient Julia API to access MPI. For instance, this is how you get the id (rank) of the current process.\n",
"\n",
"```julia\n",
"comm = MPI.COMM_WORLD\n",
"rank = MPI.Comm_rank(comm)\n",
"```\n",
"\n",
"Internally, MPI.jl uses `ccall` which is a mechanism that allows you to call C functions from Julia. In this, example we are calling the C function `MPI_Comm_rank` from the underlying MPI installation.\n",
"\n",
"```julia\n",
"comm = MPI.COMM_WORLD \n",
"rank_ref = Ref{Cint}()\n",
"ccall((:MPI_Comm_rank, MPI.API.libmpi), Cint, (MPI.API.MPI_Comm, Ptr{Cint}), comm, rank_ref)\n",
"rank = Int(rank_ref[])\n",
"```\n",
"\n"
]
},
{
"cell_type": "markdown",
"id": "c6c44e2d",
"metadata": {},
"source": [
"If you are curious, run next cell to get more information about how `ccall` works."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "f1f4c046",
"metadata": {},
"outputs": [],
"source": [
"? ccall"
]
},
{
"cell_type": "markdown",
"id": "e99c7676-989e-4e91-b65e-ebca2d5626a4",
"metadata": {},
"source": [
"### Installing MPI in Julia\n",
"\n",
"MPI can be installed as any other Julia package using the package manager."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "0b44409e",
"metadata": {},
"outputs": [],
"source": [
"] add MPI"
]
},
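{
"cell_type": "markdown",
"id": "4d7e2b9a",
"metadata": {},
"source": [
"Equivalently, you can install it from a script or notebook using the `Pkg` API. A minimal sketch:\n",
"\n",
"```julia\n",
"using Pkg\n",
"Pkg.add(\"MPI\")   # same effect as `] add MPI` at the REPL prompt\n",
"```"
]
},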
{
"cell_type": "markdown",
"id": "abc6f017",
"metadata": {},
"source": [
"