XM_40017/notebooks/mpi_tutorial.ipynb
{
"cells": [
{
"cell_type": "markdown",
"id": "7606d30a",
"metadata": {},
"source": [
"<img src=\"https://upload.wikimedia.org/wikipedia/commons/thumb/3/39/VU_logo.png/800px-VU_logo.png?20161029201021\" width=\"350\">\n",
"\n",
"### Programming large-scale parallel systems"
]
},
{
"cell_type": "markdown",
"id": "4ac1e5d9",
"metadata": {},
"source": [
"# Distributed computing with MPI"
]
},
{
"cell_type": "markdown",
"id": "a341be2e",
"metadata": {},
"source": [
"## Contents\n",
"\n",
"\n",
"In this notebook, we will learn the basics of parallel computing using the Message Passing Interface (MPI) from Julia. In particular, we will learn:\n",
"\n",
"- How to run parallel MPI code in Julia\n",
"- How to use basic collective communication directives\n",
"- How to use basic point-to-point communication directives\n",
"\n",
"For further information on how to use MPI from Julia see https://github.com/JuliaParallel/MPI.jl\n"
]
},
{
"cell_type": "markdown",
"id": "8862079b",
"metadata": {},
"source": [
"## What is MPI ?\n",
"\n",
"- MPI stands for the \"Message Passing Interface\"\n",
"- It is a standardized library specification for communication between parallel processes in distributed-memory systems.\n",
"- It is the gold-standard for distributed computing in HPC systems since the 90s\n",
"- It is huge: the MPI standard has more than 1k pages (see https://www.mpi-forum.org/docs/mpi-4.0/mpi40-report.pdf)\n",
"- There are several implementations of this standard (OpenMPI, MPICH, IntelMPI)\n",
"- The interface is in C and FORTRAN (C++ was deprecated)\n",
"- There are Julia bindings via the package MPI.jl https://github.com/JuliaParallel/MPI.jl"
]
},
{
"cell_type": "markdown",
"id": "99c6febb",
"metadata": {},
"source": [
"### What is MPI.jl ?\n",
"\n",
"- It is not a Julia implementation of the MPI standard\n",
"- It is a wrapper to the C interface of MPI\n",
"- You need a C MPI installation in your system\n",
"\n",
"\n",
"MPI.jl provides a convenient Julia API to access MPI. For instance, this is how you get the id (rank) of the current process.\n",
"\n",
"```julia\n",
"comm = MPI.COMM_WORLD\n",
"rank = MPI.Comm_rank(comm)\n",
"```\n",
"\n",
"Internally, MPI.jl uses `ccall` which is a mechanism that allows you to call C functions from Julia. In this, example we are calling the C function `MPI_Comm_rank` from the underlying MPI installation.\n",
"\n",
"```julia\n",
"comm = MPI.COMM_WORLD \n",
"rank_ref = Ref{Cint}()\n",
"ccall((:MPI_Comm_rank, MPI.API.libmpi), Cint, (MPI.API.MPI_Comm, Ptr{Cint}), comm, rank_ref)\n",
"rank = Int(rank_ref[])\n",
"```\n",
"\n"
]
},
{
"cell_type": "markdown",
"id": "82e6e98f",
"metadata": {},
"source": [
"### Installing MPI in Julia\n",
"\n",
"The Jupyter Julia kernel installed by IJulia activates the folder where the notebook is located as the default environment, which causes the main process and the worker processes to not share the same environment. Therefore, we need to set the environment as the global environment."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "1cb40e35",
"metadata": {},
"outputs": [],
"source": [
"] activate"
]
},
{
"cell_type": "markdown",
"id": "e7dafc05",
"metadata": {},
"source": [
"MPI can be installed as any other Julia package using the package manager."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "0b44409e",
"metadata": {},
"outputs": [],
"source": [
"] add MPI"
]
},
{
"cell_type": "markdown",
"id": "abc6f017",
"metadata": {},
"source": [
"<div class=\"alert alert-block alert-info\">\n",
"<b>Note:</b> The package you have installed it is the Julia interface to MPI, called MPI.jl. Note that it is not a MPI library by itself. It is just a thin wrapper between MPI and Julia. To use this interface, you need an actual MPI library installed in your system such as OpenMPI or MPICH. Julia downloads and installs a MPI library for you, but it is also possible to use a MPI library already available in your system. This is useful, e.g., when running on HPC clusters. See the documentation of MPI.jl for further details. See more information in <a href=\"https://github.com/JuliaParallel/MPI.jl\">https://github.com/JuliaParallel/MPI.jl</a>\n",
"</div>"
]
},
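{
"cell_type": "markdown",
"id": "f3a9c1d2",
"metadata": {},
"source": [
"For instance, you can query which MPI library MPI.jl is currently configured to use and, if needed, switch to a system-provided MPI via the `MPIPreferences` package described in the MPI.jl documentation. A minimal sketch (not needed for this notebook):\n",
"\n",
"```julia\n",
"using MPI\n",
"MPI.versioninfo() # prints which MPI binary MPI.jl is configured to use\n",
"\n",
"# To use an MPI library already installed on the system (e.g., on a cluster),\n",
"# the MPI.jl documentation describes MPIPreferences:\n",
"# using MPIPreferences\n",
"# MPIPreferences.use_system_binary()\n",
"```"
]
},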
{
"cell_type": "markdown",
"id": "a534e3a2",
"metadata": {},
"source": [
"### Basic information about MPI processes\n",
"\n",
"The following cells give information about MPI processes, such as the rank id, the total number of processes and the name of the host running the code respectively. Before calling this functions one needs to initialize MPI."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "96f7c14e",
"metadata": {},
"outputs": [],
"source": [
"using MPI\n",
"MPI.Init()"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "bd8232f5",
"metadata": {},
"outputs": [],
"source": [
"comm = MPI.COMM_WORLD\n",
"rank = MPI.Comm_rank(comm)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "0befa408",
"metadata": {},
"outputs": [],
"source": [
"nranks = MPI.Comm_size(comm)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "ff01adcf",
"metadata": {},
"outputs": [],
"source": [
"MPI.Get_processor_name()"
]
},
{
"cell_type": "markdown",
"id": "f1a502a3",
"metadata": {},
"source": [
"Note that this note notebook is not running with different MPI processes (yet). So using MPI will only make sense later when we add more processes."
]
},
{
"cell_type": "markdown",
"id": "133327e2",
"metadata": {},
"source": [
"### Hello-world example"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "a154b55e",
"metadata": {},
"outputs": [],
"source": [
"using MPI\n",
"MPI.Init()\n",
"comm = MPI.COMM_WORLD\n",
"nranks = MPI.Comm_size(comm)\n",
"rank = MPI.Comm_rank(comm)\n",
"println(\"Hello, I am process $rank of $nranks processes!\")"
]
},
{
"cell_type": "markdown",
"id": "baddbba1",
"metadata": {},
"source": [
"### Hello world in C\n",
"\n",
"```C\n",
"#include <mpi.h>\n",
"#include <stdio.h>\n",
"int main(int argc, char** argv) {\n",
" MPI_Init(NULL, NULL);\n",
" int world_size;\n",
" MPI_Comm_size(MPI_COMM_WORLD, &world_size);\n",
" int world_rank;\n",
" MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);\n",
" char processor_name[MPI_MAX_PROCESSOR_NAME];\n",
" int name_len;\n",
" MPI_Get_processor_name(processor_name, &name_len);\n",
" printf(\"Hello from %s, I am rank %d of %d ranks!\\n\",\n",
" processor_name, world_rank, world_size);\n",
" MPI_Finalize();\n",
"}\n",
"```"
]
},
{
"cell_type": "markdown",
"id": "e3901c57",
"metadata": {},
"source": [
"## Running MPI code"
]
},
{
"cell_type": "markdown",
"id": "8376135d",
"metadata": {},
"source": [
"### Creating MPI processes (aka ranks)\n",
"\n",
"- MPI processes are created with the driver program `mpiexec`\n",
"- `mpiexec` takes an application and runs it on different ranks\n",
"- The application calls MPI directives to communicate between these ranks\n",
"- The application can be Julia running your script in particular.\n"
]
},
{
"cell_type": "markdown",
"id": "a458c714",
"metadata": {},
"source": [
" <div>\n",
"<img src=\"attachment:fig23.png\" align=\"left\" width=\"450\"/>\n",
"</div>"
]
},
{
"cell_type": "markdown",
"id": "044aaeec",
"metadata": {},
"source": [
"### Execution model\n",
"\n",
"- MPI programs are typically run with a Single Program Multiple Data (SPMD) model\n",
"- But the standard supports Multiple Program Multiple Data (MPMD)"
]
},
{
"cell_type": "markdown",
"id": "5f76fb65",
"metadata": {},
"source": [
"### Hello world\n",
"\n",
"Julia code typically needs to be in a file to run it in with MPI. Let's us write our hello world in a file:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "e03f35c5",
"metadata": {},
"outputs": [],
"source": [
"code = raw\"\"\"\n",
"using MPI\n",
"MPI.Init()\n",
"comm = MPI.COMM_WORLD\n",
"nranks = MPI.Comm_size(comm)\n",
"rank = MPI.Comm_rank(comm)\n",
"println(\"Hello, I am process $rank of $nranks processes!\")\n",
"\"\"\"\n",
"filename = tempname()*\".jl\"\n",
"write(filename,code);"
]
},
{
"cell_type": "markdown",
"id": "f13946dd",
"metadata": {},
"source": [
"Now, we can run it"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "dbe654dc",
"metadata": {},
"outputs": [],
"source": [
"using MPI\n",
"mpiexec(cmd->run(`$cmd -np 4 julia --project=. $filename`));"
]
},
{
"cell_type": "markdown",
"id": "651f26ae",
"metadata": {},
"source": [
"Note that function mpiexec provided by MPI.jl is a convenient way of accessing the mpiexec program that matches the MPI installation used my Julia."
]
},
{
"cell_type": "markdown",
"id": "8665c3ca",
"metadata": {},
"source": [
"### MPIClusterManagers \n",
"\n",
"- This package allows you to create Julia workers that can call MPI functions\n",
"- This is useful to combine Distributed.jl and MPI.jl\n",
"- E.g., we can run MPI code interactively (from a notebook)\n",
"- Link: https://github.com/JuliaParallel/MPIClusterManagers.jl\n"
]
},
{
"cell_type": "markdown",
"id": "355155d8",
"metadata": {},
"source": [
" <div>\n",
"<img src=\"attachment:g1340.png\" align=\"left\" width=\"450\"/>\n",
"</div>"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "28c68c9c",
"metadata": {},
"outputs": [],
"source": [
"] add MPIClusterManagers"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "84adeab3",
"metadata": {},
"outputs": [],
"source": [
"using MPIClusterManagers\n",
"using Distributed\n",
"if procs() == workers()\n",
" nranks = 3\n",
" manager = MPIWorkerManager(nranks)\n",
" addprocs(manager)\n",
"end"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "612a7587",
"metadata": {},
"outputs": [],
"source": [
"@everywhere workers() begin\n",
" using MPI\n",
" MPI.Init()\n",
" comm = MPI.COMM_WORLD\n",
" nranks = MPI.Comm_size(comm)\n",
" rank = MPI.Comm_rank(comm)\n",
" println(\"Hello, I am process $rank of $nranks processes!\")\n",
"end"
]
},
{
"cell_type": "markdown",
"id": "cb480322",
"metadata": {},
"source": [
"<div class=\"alert alert-block alert-info\">\n",
"<b>Note:</b> Note that the rank ids start with 0.\n",
"</div>"
]
},
{
"cell_type": "markdown",
"id": "6caa8d74",
"metadata": {},
"source": [
"## MPI Communicators\n",
"\n",
"In MPI, a **communicator** represents a group of processes that can communicate with each other. `MPI_COMM_WORLD` (`MPI.COMM_WORLD` from Julia) is a built-in communicator that represents all processes available in the MPI program. Custom communicators can also be created to group processes based on specific requirements or logical divisions. The **rank** of a processor is a unique (integer) identifier assigned to each process within a communicator. It allows processes to distinguish and address each other in communication operations.\n",
"\n",
"### Duplicating a communicator\n",
"\n",
"It is a good practice to not using the built-in communicators directly, and use a copy instead with `MPI.Comm_dup`. Different libraries using the same communicator can lead to unexpected interferences."
]
},
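{
"cell_type": "markdown",
"id": "b2e47a90",
"metadata": {},
"source": [
"A minimal sketch of this practice (assuming `MPI.Init()` has already been called, as in the cells above):\n",
"\n",
"```julia\n",
"comm = MPI.Comm_dup(MPI.COMM_WORLD) # work with a private copy of the communicator\n",
"rank = MPI.Comm_rank(comm)\n",
"nranks = MPI.Comm_size(comm)\n",
"```"
]
},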
{
"cell_type": "markdown",
"id": "c87b3c82",
"metadata": {},
"source": [
"## Collective communication\n",
"\n",
"MPI provides collective communication functions for communication involving multiple processes. Some usual collective directives are:\n",
"\n",
"- `MPI.Gather`: Gathers data from all processes to a single process.\n",
"- `MPI.Scatter`: Distributes data from one process to all processes.\n",
"- `MPI.Bcast`: Broadcasts data from one process to all processes.\n",
"- `MPI.Barrier`: Synchronizes all processes.\n",
"\n",
"See more collective directives available from Julia here: https://juliaparallel.org/MPI.jl/stable/reference/collective/\n"
]
},
{
"cell_type": "markdown",
"id": "97dc2886",
"metadata": {},
"source": [
"### Gather\n",
"\n",
"Each rank sends a message to the root rank (the root rank also sends a message to itself). The root rank receives all these values in a buffer (e.g. a vector)."
]
},
{
"cell_type": "markdown",
"id": "4bdf9c02",
"metadata": {},
"source": [
"<div>\n",
"<img src=\"attachment:g13884.png\" align=\"left\" width=\"350\"/>\n",
"</div>"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "1f8a70c6",
"metadata": {},
"outputs": [],
"source": [
"@everywhere workers() begin\n",
" comm = MPI.Comm_dup(MPI.COMM_WORLD)\n",
" nranks = MPI.Comm_size(comm)\n",
" rank = MPI.Comm_rank(comm)\n",
" root = 0\n",
" snd = 10*(rank+2)\n",
" println(\"I am sending $snd\")\n",
" rcv = MPI.Gather(snd,comm;root)\n",
" if rank == root\n",
" println(\"I have received: $rcv\")\n",
" end\n",
"end"
]
},
{
"cell_type": "markdown",
"id": "8b4254d1",
"metadata": {},
"source": [
"### Scatter\n",
"\n",
"The root rank contains a buffer (e.g., a vector) of values (one value for each rank in a communicator). Scatter sends one value to each rank (the root rank also receives a value). The root rank can be any process in a communicator."
]
},
{
"cell_type": "markdown",
"id": "74d5b606",
"metadata": {},
"source": [
"<div>\n",
"<img src=\"attachment:g13389.png\" align=\"left\" width=\"350\"/>\n",
"</div>"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "5ea413fd",
"metadata": {},
"outputs": [],
"source": [
"@everywhere workers() begin\n",
" comm = MPI.Comm_dup(MPI.COMM_WORLD)\n",
" nranks = MPI.Comm_size(comm)\n",
" rank = MPI.Comm_rank(comm)\n",
" root = 0\n",
" rcv = Ref(0) \n",
" if rank == root\n",
" snd = [10*(i+1) for i in 1:nranks]\n",
" println(\"I am sending: $snd\")\n",
" else\n",
" snd = nothing\n",
" end \n",
" MPI.Scatter!(snd,rcv,comm;root)\n",
" println(\"I have received: $(rcv[])\")\n",
"end"
]
},
{
"cell_type": "markdown",
"id": "ed8da7f9",
"metadata": {},
"source": [
"### Bcast (broadcast)\n",
"\n",
"Similar to scatter, but we send the same message to all processes."
]
},
{
"cell_type": "markdown",
"id": "26b56c81",
"metadata": {},
"source": [
"<div>\n",
"<img src=\"attachment:g1657.png\" align=\"left\" width=\"350\"/>\n",
"</div>"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "4de15781",
"metadata": {},
"outputs": [],
"source": [
"@everywhere workers() begin\n",
" comm = MPI.Comm_dup(MPI.COMM_WORLD)\n",
" nranks = MPI.Comm_size(comm)\n",
" rank = MPI.Comm_rank(comm)\n",
" root = 0\n",
" buffer = Ref(0)\n",
" if rank == root\n",
" buffer[] = 20\n",
" println(\"I am sending: $(buffer[])\")\n",
" end \n",
" MPI.Bcast!(buffer,comm;root)\n",
" println(\"I have received: $(buffer[])\")\n",
"end"
]
},
{
"cell_type": "markdown",
"id": "6de37cc9",
"metadata": {},
"source": [
"### Barrier"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "ff61f64a",
"metadata": {},
"outputs": [],
"source": [
"@everywhere workers() begin\n",
" comm = MPI.Comm_dup(MPI.COMM_WORLD)\n",
" sleep(rand(1:3))\n",
" MPI.Barrier(comm)\n",
" println(\"Done!\")\n",
"end"
]
},
{
"cell_type": "markdown",
"id": "ac9a5b52",
"metadata": {},
"source": [
"## Point-to-Point communication\n",
"\n",
"\n",
"MPI also provides point-to-point communication directives for arbitrary communication between processes. Point-to-point communications are two-sided: there is a sender and a receiver. Here, we will discuss these basic directives:\n",
"\n",
"- `MPI.Isend`, and `MPI.Irecv!` (*non-blocking directives*)\n",
"- `MPI.Send`, and `MPI.Recv` (*blocking directives*)\n",
"\n",
"\n",
"Non-blocking directives return immediately and return an `MPI.Request` object. This request object can be queried with functions like `MPI.Wait`. It is mandatory to wait on the request object before reading the receive buffer, or before writing again on the send buffer.\n",
"\n",
"For blocking directives, it is save to read/write from/to the receive/send buffer once the function has returned. By default, blocking directives might wait (or might not wait) for a matching send/receive. \n",
"For fine control, MPI offers advanced *blocking* directives with different blocking behaviors (called communication modes, see section 3.9 of the MPI standard 4.0). Blocking communication will be discussed later in the course.\n",
"\n"
]
},
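{
"cell_type": "markdown",
"id": "5c8d3e1f",
"metadata": {},
"source": [
"As a compact sketch of the non-blocking pattern described above (written as a standalone MPI program with at least two ranks; the same directives are demonstrated interactively on the workers below):\n",
"\n",
"```julia\n",
"using MPI\n",
"MPI.Init()\n",
"comm = MPI.COMM_WORLD\n",
"rank = MPI.Comm_rank(comm)\n",
"if rank == 0\n",
"    sendbuf = Ref(42)\n",
"    req = MPI.Isend(sendbuf,comm;dest=1,tag=0)\n",
"    MPI.Wait(req) # only after Wait is it safe to overwrite sendbuf\n",
"elseif rank == 1\n",
"    recvbuf = Ref(0)\n",
"    req = MPI.Irecv!(recvbuf,comm;source=0,tag=0)\n",
"    MPI.Wait(req) # only after Wait is it safe to read recvbuf\n",
"    println(\"I have received $(recvbuf[])\")\n",
"end\n",
"```"
]
},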
{
"cell_type": "markdown",
"id": "26879a46",
"metadata": {},
"source": [
"### Blocking communication\n",
"\n",
"If we start a receive before a matching send, we will block in the call to `MPI.Recv!`. Run the next cell and note that the message is not printed since the process is blocked at `MPI.Recv!` waiting for a matching send."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "68aa9ada",
"metadata": {},
"outputs": [],
"source": [
"@spawnat 4 begin\n",
" buffer = Ref(0)\n",
" comm = MPI.COMM_WORLD\n",
" MPI.Recv!(buffer,comm;source=2-2,tag=0)\n",
" println(\"I have received $(buffer[]).\")\n",
"end;"
]
},
{
"cell_type": "markdown",
"id": "ace9c787",
"metadata": {},
"source": [
"If you run the next cell containing the corresponding send, the communication will take place."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "e18347cd",
"metadata": {},
"outputs": [],
"source": [
"@spawnat 2 begin\n",
" buffer = Ref(2)\n",
" comm = MPI.COMM_WORLD\n",
" MPI.Send(buffer,comm;dest=4-2,tag=0)\n",
" println(\"I have send $(buffer[]). It is now safe to overwite the buffer.\")\n",
"end;"
]
},
{
"cell_type": "markdown",
"id": "0a038200",
"metadata": {},
"source": [
"### MPI does not integrate well with Julia Tasks\n",
"\n",
"MPI blocks without yielding (we cannot switch to other Julia tasks). Run next cell:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "d7da7d27",
"metadata": {},
"outputs": [],
"source": [
"@spawnat 4 begin\n",
" buffer = Ref(0)\n",
" comm = MPI.COMM_WORLD\n",
" MPI.Recv!(buffer,comm;source=2-2,tag=0)\n",
" println(\"I have received $(buffer[]).\")\n",
"end;"
]
},
{
"cell_type": "markdown",
"id": "530b673b",
"metadata": {},
"source": [
"Now try to spawn other tasks on process 4 by running next cell. This task will not be served yet."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "ae1165ac",
"metadata": {},
"outputs": [],
"source": [
"@spawnat 4 println(\"Hello!\");"
]
},
{
"cell_type": "markdown",
"id": "c722d71e",
"metadata": {},
"source": [
"We first need to unlock the receive with a matching send. Then the task printing \"Hello!\" will be finally served."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "9da6f76e",
"metadata": {},
"outputs": [],
"source": [
"@spawnat 2 begin\n",
" buffer = Ref(2)\n",
" comm = MPI.COMM_WORLD\n",
" MPI.Send(buffer,comm;dest=4-2,tag=0)\n",
" println(\"I have send $(buffer[]). It is now safe to overwite the buffer.\")\n",
"end;"
]
},
{
"cell_type": "markdown",
"id": "6f4fa1cd",
"metadata": {},
"source": [
"### Non-blocking communication"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "d6e2fbbf",
"metadata": {},
"outputs": [],
"source": [
"@spawnat 4 begin\n",
" buffer = Ref(0)\n",
" comm = MPI.COMM_WORLD\n",
" req = MPI.Irecv!(buffer,comm;source=2-2,tag=0)\n",
" println(\"Not yet safe to read the buffer\")\n",
" MPI.Wait(req)\n",
" println(\"I have received $(buffer[]).\")\n",
"end;"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "4aa426ff",
"metadata": {},
"outputs": [],
"source": [
"@spawnat 2 begin\n",
" buffer = Ref(2)\n",
" comm = MPI.COMM_WORLD\n",
" req = MPI.Isend(buffer,comm;dest=4-2,tag=0)\n",
" println(\"Not yet safe to write the buffer\")\n",
" MPI.Wait(req)\n",
" println(\"I have send $(buffer[]). It is now safe to overwite the buffer.\")\n",
"end;"
]
},
{
"cell_type": "markdown",
"id": "30d9a970",
"metadata": {},
"source": [
"### Combining Julia Tasks and non-blocking communication\n",
"\n",
"We can implement our own blocking mechanism that combines well with the Julia task scheduler. This can be done by testing on the request in a while loop. The `yield` in the loop provides the opportunity of other tasks to run.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "768b2439",
"metadata": {},
"outputs": [],
"source": [
"@spawnat 4 begin\n",
" buffer = Ref(0)\n",
" comm = MPI.COMM_WORLD\n",
" req = MPI.Irecv!(buffer,comm;source=2-2,tag=0)\n",
" while MPI.Test(req) == false\n",
" yield()\n",
" end\n",
" println(\"I have received $(buffer[]).\")\n",
"end;"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "073d138c",
"metadata": {},
"outputs": [],
"source": [
"@spawnat 4 println(\"Hello!\");"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "97bc403f",
"metadata": {},
"outputs": [],
"source": [
"@spawnat 2 begin\n",
" buffer = Ref(2)\n",
" comm = MPI.COMM_WORLD\n",
" req = MPI.Isend(buffer,comm;dest=4-2,tag=0)\n",
" while MPI.Test(req) == false\n",
" yield()\n",
" end\n",
" println(\"I have send $(buffer[]). It is now safe to overwite the buffer.\")\n",
"end;"
]
},
{
"cell_type": "markdown",
"id": "750fdacb",
"metadata": {},
"source": [
"### Example\n",
"\n",
"The first rank generates a message and sends it to the last rank. The last rank receives the message and multiplies it by a coefficient. The last rank sends the result back to the first rank."
]
},
{
"cell_type": "markdown",
"id": "b479bbd4",
"metadata": {},
"source": [
"<div>\n",
"<img src=\"attachment:g5369.png\" align=\"left\" width=\"350\"/>\n",
"</div>\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "dccc3f6a",
"metadata": {},
"outputs": [],
"source": [
"@everywhere workers() begin\n",
" comm = MPI.Comm_dup(MPI.COMM_WORLD)\n",
" rank = MPI.Comm_rank(comm)\n",
" nranks = MPI.Comm_size(comm)\n",
" snder = 0\n",
" rcver = nranks-1\n",
" buffer = Ref(0)\n",
" if rank == snder\n",
" msg = 10*(rank+2)\n",
" println(\"I am sending: $msg\")\n",
" buffer[] = msg\n",
" req = MPI.Isend(buffer,comm;dest=rcver,tag=0)\n",
" MPI.Wait(req)\n",
" req = MPI.Irecv!(buffer,comm,source=rcver,tag=0)\n",
" MPI.Wait(req)\n",
" msg = buffer[]\n",
" println(\"I have received: $msg\")\n",
" end\n",
" if rank == rcver\n",
" req = MPI.Irecv!(buffer,comm,source=snder,tag=0)\n",
" MPI.Wait(req)\n",
" msg = buffer[]\n",
" println(\"I have received: $msg\")\n",
" coef = (rank+2)\n",
" msg = msg*coef\n",
" println(\"I am sending: $msg\")\n",
" buffer[] = msg\n",
" req = MPI.Isend(buffer,comm;dest=snder,tag=0)\n",
" MPI.Wait(req)\n",
" end\n",
"end"
]
},
{
"cell_type": "markdown",
"id": "162682c1",
"metadata": {},
"source": [
"<div class=\"alert alert-block alert-warning\">\n",
"<b>Important:</b> In non-blocking communication, use <code>MPI.Wait()</code> before modifying the send buffer or using the receive buffer.\n",
"</div>"
]
},
{
"cell_type": "markdown",
"id": "9dc6c3fb",
"metadata": {},
"source": [
"### Example (with blocking directives)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "a046d7dd",
"metadata": {},
"outputs": [],
"source": [
"@everywhere workers() begin\n",
" comm = MPI.Comm_dup(MPI.COMM_WORLD)\n",
" rank = MPI.Comm_rank(comm)\n",
" nranks = MPI.Comm_size(comm)\n",
" snder = 0\n",
" rcver = nranks-1\n",
" buffer = Ref(0)\n",
" if rank == snder\n",
" msg = 10*(rank+2)\n",
" println(\"I am sending: $msg\")\n",
" buffer[] = msg\n",
" MPI.Send(buffer,comm;dest=rcver,tag=0)\n",
" MPI.Recv!(buffer,comm,source=rcver,tag=0)\n",
" msg = buffer[]\n",
" println(\"I have received: $msg\")\n",
" end\n",
" if rank == rcver\n",
" MPI.Recv!(buffer,comm,source=snder,tag=0)\n",
" msg = buffer[]\n",
" println(\"I have received: $msg\")\n",
" coef = (rank+2)\n",
" msg = msg*coef\n",
" println(\"I am sending: $msg\")\n",
" buffer[] = msg\n",
" MPI.Send(buffer,comm;dest=snder,tag=0)\n",
" end\n",
"end"
]
},
{
"cell_type": "markdown",
"id": "d9bdfa02",
"metadata": {},
"source": [
"<div class=\"alert alert-block alert-warning\">\n",
"<b>Important:</b> Blocking directives might look simpler to use, but they can lead to dead locks if the sends and receives are not issued in the right order. Non-blocking directives can also lead to dead locks, but when waiting for the request, not when calling the send/receive functions.\n",
"</div>"
]
},
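{
"cell_type": "markdown",
"id": "9e2b7c4a",
"metadata": {},
"source": [
"For instance, the following sketch (assuming exactly two ranks) deadlocks because both ranks block inside `MPI.Recv!` before either of them posts its send:\n",
"\n",
"```julia\n",
"comm = MPI.COMM_WORLD\n",
"rank = MPI.Comm_rank(comm)\n",
"other = 1 - rank # the other rank (assumes 2 ranks)\n",
"buffer = Ref(0)\n",
"MPI.Recv!(buffer,comm;source=other,tag=0) # both ranks block here forever\n",
"MPI.Send(Ref(10*rank),comm;dest=other,tag=0) # never reached\n",
"```"
]
},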
{
"cell_type": "markdown",
"id": "d02f935d",
"metadata": {},
"source": [
"## Exercises"
]
},
{
"cell_type": "markdown",
"id": "a8e1c623",
"metadata": {},
"source": [
"### Exercise 1\n",
"\n",
"Implement this simple algorithm: Rank 0 generates a message (an integer). Rank 0 sends the message to rank 1. Rank 1 receives the message, increments the message by 1, and sends the result to rank 2. Rank 2 receives the message, increments the message by 1, and sends the result to rank 3. Etc. The last rank sends back the message to rank 0 closing the ring. See the next figure. Implement the communications using MPI.\n"
]
},
{
"cell_type": "markdown",
"id": "d474d781",
"metadata": {},
"source": [
"<div>\n",
"<img src=\"attachment:g5148.png\" align=\"left\" width=\"320\"/>\n",
"</div>"
]
},
{
"cell_type": "markdown",
"id": "49adf214",
"metadata": {},
"source": [
"### Exercise 2\n",
"\n",
"Implement the same algorithm as in Exercise 1, but now without using MPI. Implement the communications using the native `Distributed` module provided by Julia. In this case, start using process 1 instead of rank 0."
]
},
{
"cell_type": "markdown",
"id": "5e8f6e6a",
"metadata": {},
"source": [
"# License\n",
"\n",
"This notebook is part of the course [Programming Large Scale Parallel Systems](https://www.francescverdugo.com/XM_40017) at Vrije Universiteit Amsterdam and may be used under a [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/) license."
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Julia 1.9.0",
"language": "julia",
"name": "julia-1.9"
},
"language_info": {
"file_extension": ".jl",
"mimetype": "application/julia",
"name": "julia",
"version": "1.9.0"
}
},
"nbformat": 4,
"nbformat_minor": 5
}