diff --git a/notebooks/jacobi_method.ipynb b/notebooks/jacobi_method.ipynb
index 51a610c..258b5b8 100644
--- a/notebooks/jacobi_method.ipynb
+++ b/notebooks/jacobi_method.ipynb
@@ -65,7 +65,38 @@
"jacobi_3_check(answer) = answer_checker(answer, \"c\")\n",
"lh_check(answer) = answer_checker(answer, \"c\")\n",
"sndrcv_check(answer) = answer_checker(answer,\"b\")\n",
- "function sndrcv_fix_answer()\n",
+ "function partition_1d_answer(bool)\n",
+ " bool || return\n",
+ " msg = \"\"\"\n",
+ "- We update N^2/P items per iteration\n",
+ "- We need data from 2 neighbors (2 messages per iteration)\n",
+ "- We communicate N items per message\n",
+    "- Communication/computation ratio is 2N/(N^2/P) = 2P/N = O(P/N)\n",
+ " \"\"\"\n",
+ " println(msg)\n",
+ "end\n",
+ "function partition_2d_answer(bool)\n",
+ " bool || return\n",
+ " msg = \"\"\"\n",
+ "- We update N^2/P items per iteration\n",
+ "- We need data from 4 neighbors (4 messages per iteration)\n",
+ "- We communicate N/sqrt(P) items per message\n",
+    "- Communication/computation ratio is (4N/sqrt(P))/(N^2/P) = 4sqrt(P)/N = O(sqrt(P)/N)\n",
+ " \"\"\"\n",
+ " println(msg)\n",
+ "end\n",
+ "function partition_cyclic_answer(bool)\n",
+ " bool || return\n",
+ " msg = \"\"\"\n",
+    "- We update N^2/P items per iteration\n",
+ "- We need data from 4 neighbors (4 messages per iteration)\n",
+ "- We communicate N^2/P items per message (the full data owned by the neighbor)\n",
+ "- Communication/computation ratio is O(1)\n",
+ " \"\"\"\n",
+ "println(msg)\n",
+ "end\n",
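+    "# The following helper is not part of the original notebook. It is a small,\n",
+    "# illustrative sketch that evaluates the communication/computation ratios\n",
+    "# derived in the answers above for concrete values of N and P.\n",
+    "function partition_ratios_sketch(N,P)\n",
+    "    ratio_1d = 2*N/(N^2/P)           # 1D partition: 2P/N\n",
+    "    ratio_2d = (4*N/sqrt(P))/(N^2/P) # 2D partition: 4*sqrt(P)/N\n",
+    "    (ratio_1d, ratio_2d)\n",
+    "end\n",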
+ "function sndrcv_fix_answer(bool)\n",
+ " bool || return\n",
" msg = \"\"\"\n",
" One needs to carefully order the sends and the receives to avoid cyclic dependencies\n",
" that might result in deadlocks. The actual implementation is left as an exercise. \n",
@@ -73,7 +104,8 @@
" println(msg)\n",
"end\n",
"jacobitest_check(answer) = answer_checker(answer,\"a\")\n",
- "function jacobitest_why()\n",
+ "function jacobitest_why(bool)\n",
+ " bool || return\n",
" msg = \"\"\"\n",
" The test will pass. The parallel implementation does exactly the same operations\n",
" in exactly the same order than the sequential one. Thus, the result should be\n",
@@ -83,7 +115,8 @@
" println(msg)\n",
"end\n",
"gauss_seidel_2_check(answer) = answer_checker(answer,\"d\")\n",
- "function gauss_seidel_2_why()\n",
+ "function gauss_seidel_2_why(bool)\n",
+ " bool || return\n",
" msg = \"\"\"\n",
" All \"red\" cells can be updated in parallel as they only depend on the values of \"black\" cells.\n",
" In order workds, we can update the \"red\" cells in any order whithout changing the result. They only\n",
@@ -127,7 +160,7 @@
"$u^{t+1}_i = \\dfrac{u^t_{i-1}+u^t_{i+1}}{2}$\n",
"\n",
"\n",
- "This iterative is yet simple but shares fundamental challenges with many other algorithms used in scientific computing. This is why we are studying it here.\n"
+    "This algorithm is simple, yet it shares fundamental challenges with many other algorithms used in scientific computing. This is why we are studying it here.\n"
]
},
{
@@ -137,7 +170,12 @@
"source": [
"### Serial implementation\n",
"\n",
- "The following code implements the iterative scheme above for boundary conditions -1 and 1 on a grid with $n$ interior points."
+ "The following code implements the iterative scheme above for boundary conditions -1 and 1 on a grid with $n$ interior points.\n",
+ "\n",
+    "\n",
@@ -1154,7 +1252,7 @@
"source": [
"### Initialization\n",
"\n",
- "Let us start with function `init`. This is its implementation:"
+ "Let us start with function `init`."
]
},
{
@@ -1191,7 +1289,7 @@
"id": "1b9e75d8",
"metadata": {},
"source": [
- "This function crates and initializes the vector `u` and the auxiliary vector `u_new` and fills in the boundary values. Note that we are not creating the full arrays like in the sequential case. We are only creating the parts to be managed by the current rank. To this end, we start by computing the number of entries to be updated in this rank, i.e., variable `load`. We have assumed that `n` is a multiple of the number of ranks for simplicity. If this is not the case, we stop the computation with `MPI.Abort`. Note that we are allocating two extra elements in `u` (and `u_new`) for the ghost cells or boundary conditions. The following figure displays the arrays created for `n==9` and `nranks==3` (thus `load == 3`). Note that the first and last elements of the arrays are displayed with gray edges denoting that they are the extra elements allocated for ghost cells or boundary conditions."
+    "This function creates and initializes the vector `u` and the auxiliary vector `u_new` and fills in the boundary values. Note that we are not creating the full arrays like in the sequential case. We are only creating the parts to be managed by the current rank. To this end, we start by computing the number of entries to be updated in this rank, i.e., variable `load`. We have assumed that `n` is a multiple of the number of ranks for simplicity. If this is not the case, we stop the computation with `MPI.Abort`. Note that we are allocating two extra elements in `u` (and `u_new`) for the ghost cells and boundary conditions. The following figure displays the arrays created for `n==9` and `nranks==3` (thus `load == 3`). Note that the first and last elements of the arrays are displayed with gray edges denoting that they are the extra elements allocated for ghost cells or boundary conditions.\n",
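+    "\n",
+    "As a quick illustration, the size computation just described boils down to a couple of lines like these (a sketch only, not the notebook's actual `init`; `n` and `nranks` are as above, with `n` a multiple of `nranks`):\n",
+    "\n",
+    "```julia\n",
+    "load = div(n, nranks)   # number of entries updated by this rank\n",
+    "u = zeros(load + 2)     # two extra entries for ghost cells / boundary values\n",
+    "u_new = copy(u)         # auxiliary vector with the same layout\n",
+    "```"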
]
},
{
@@ -1353,7 +1451,8 @@
"metadata": {},
"outputs": [],
"source": [
- "sndrcv_fix_answer()"
+ "uncover = false\n",
+ "sndrcv_fix_answer(uncover)"
]
},
{
@@ -1363,7 +1462,7 @@
"source": [
"### Local computation\n",
"\n",
- "Once the ghost values have the right values, we can perform the Jacobi update locally at each process. This is done in function `local_update!`. Note that here we only update the data *owned* by the current MPI rank, i.e. we do not modify the ghost values. There is no need to modify the ghost values since they will updated by another rank, i.e. the rank that own the value. In the code this is reflected in the loop over `i`. We do not visit the first nor the last entry in `u_new`."
+    "Once the ghost cells have the right values, we can perform the Jacobi update locally at each process. This is done in function `local_update!`. Note that here we only update the data *owned* by the current MPI rank, i.e. we do not modify the ghost values. There is no need to modify the ghost values since they will be updated by another rank. In the code, this is reflected in the loop over `i`: we visit neither the first nor the last entry of `u_new`.\n",
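+    "\n",
+    "A minimal sketch of such an update loop is shown below (an illustration only, assuming, as in the figures above, that the ghost/boundary entries sit at the first and last positions of the local vectors; the actual `local_update!` may differ in details):\n",
+    "\n",
+    "```julia\n",
+    "function local_update_sketch!(u_new, u)\n",
+    "    for i in 2:(length(u)-1)        # skip the first and last (ghost/boundary) entries\n",
+    "        u_new[i] = 0.5*(u[i-1] + u[i+1])\n",
+    "    end\n",
+    "    u_new\n",
+    "end\n",
+    "```"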
]
},
{
@@ -1390,7 +1489,7 @@
"source": [
"### Running the code\n",
"\n",
- "Not let us put all pieces together and run the code. If not done yet, install MPI."
+    "Let us now put all the pieces together and run the code. If you have not done so yet, install MPI."
]
},
{
@@ -1408,7 +1507,7 @@
"id": "c966375a",
"metadata": {},
"source": [
- "The following cells includes all previous code snippets into a final one. Note that we are eventually calling function `jacobi_mpi` and showing the result vector `u`. Run the following code for 1 MPI rank, then for 2 and 3 MPI ranks. Look into the values of `u`. Does it make sense?"
+    "The following cell combines all previous code snippets into a final one. At the end, we call function `jacobi_mpi` and show the result vector `u`. Run the following code for 1 MPI rank, then for 2 and 3 MPI ranks. Look at the values of `u`. Do they make sense?"
]
},
{
@@ -1442,7 +1541,7 @@
"source": [
"### Checking the result\n",
"\n",
- "Checking the result visually is not enough in general. To check the parallel implementation we want to compare the result against the sequential implementation. The way we do the computations (either in parallel or sequential) should not affect the result. However, how can we compare the sequential and the parallel result? The parallel version gives a distributed vector. We cannot compare this one directly with the result of the sequential function. A possible solution is to gather all the pieces of the parallel result in a single rank and compare there against the parallel implementation.\n",
+    "Checking the result visually is not enough in general. To check the parallel implementation we want to compare it against the sequential implementation. However, how can we compare the sequential and the parallel result? The parallel version gives a distributed vector. We cannot compare this one directly with the result of the sequential function. A possible solution is to gather all the pieces of the parallel result in a single rank and compare there against the sequential result.\n",
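+    "\n",
+    "For instance, once the distributed pieces have been gathered into a vector on rank 0 (see the function below), the check could be as simple as this (a sketch; `u_root` is assumed to hold the gathered vector, `u_seq` the result of the sequential implementation, and `rank` the value returned by `MPI.Comm_rank(comm)`):\n",
+    "\n",
+    "```julia\n",
+    "if rank == 0\n",
+    "    @assert isapprox(u_root, u_seq)  # compare within a floating-point tolerance\n",
+    "end\n",
+    "```\n",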
"\n",
"\n",
"The following function gather the distributed vector in rank 0."
@@ -1592,58 +1691,6 @@
"run(`$(mpiexec()) -np 3 julia --project=. -e $code`);"
]
},
- {
- "cell_type": "markdown",
- "id": "73cd4d73",
- "metadata": {},
- "source": [
- "Note that we have used function `isapprox` to compare the results. This function checks if two values are the same within machine precision. Using `==` is generally discouraged when working with floating point numbers as they can be affected by rounding-off errors."
- ]
- },
- {
- "cell_type": "markdown",
- "id": "d73c838c",
- "metadata": {},
- "source": [
-    "\n",
- "Question: What happens if we use `u_root == u_seq` to compare the parallel and the sequential result?\n",
-    "\n",
- "\n",
- " a) The test will still pass.\n",
- " b) The test will fail due to rounding-off errors.\n",
- " c) The test might pass or fail depending on `n`.\n",
- " d) The test might pass or fail depending on the number of MPI ranks."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "id": "cd2427f1",
- "metadata": {},
- "outputs": [],
- "source": [
- "answer = \"x\" # replace x with a, b, c or d\n",
- "jacobitest_check(answer)"
- ]
- },
- {
- "cell_type": "markdown",
- "id": "790e7064",
- "metadata": {},
- "source": [
- "Run cell below for an explanation of the correct answer."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "id": "72ed2aa1",
- "metadata": {},
- "outputs": [],
- "source": [
- "jacobitest_why()"
- ]
- },
{
"cell_type": "markdown",
"id": "c9aa2901",
@@ -1651,7 +1698,7 @@
"source": [
"## Latency hiding\n",
"\n",
- "Can our implementation above be improved? Note that we only need communications to update the values at the boundary of the portion owned by each process. The other values (the one in green in the figure below) can be updated without communications. This provides the opportunity of overlapping the computation of the interior values (green cells in the figure) with the communication of the ghost values. This technique is called latency hiding, since we are hiding communication latency by overlapping it with computation that we need to do anyway. The actual implementation is left as an exercise (see Exercise 1)."
+    "We now have a correct parallel implementation, but can it be improved? Note that we only need communication to update the values at the boundary of the portion owned by each process. The other values (the ones in green in the figure below) can be updated without communication. This provides the opportunity of overlapping the computation of the interior values (green cells in the figure) with the communication of the ghost values. This technique is called latency hiding, since we hide the communication latency by overlapping it with computation that we need to do anyway. The actual implementation is left as an exercise (see Exercise 1).\n",
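+    "\n",
+    "The overall pattern looks like this (a schematic sketch only, not a full solution to Exercise 1; `compute_interior!`, `compute_boundary!`, the send/receive buffers, and the neighbor ranks are hypothetical placeholders, and the keyword-style API of recent MPI.jl versions is assumed):\n",
+    "\n",
+    "```julia\n",
+    "reqs = MPI.Request[]\n",
+    "push!(reqs, MPI.Irecv!(ghost_left,  comm; source=prev_rank))  # start receiving ghost values\n",
+    "push!(reqs, MPI.Irecv!(ghost_right, comm; source=next_rank))\n",
+    "push!(reqs, MPI.Isend(own_left,  comm; dest=prev_rank))       # start sending our boundary values (1-element buffers)\n",
+    "push!(reqs, MPI.Isend(own_right, comm; dest=next_rank))\n",
+    "compute_interior!(u_new, u)  # green cells: no ghost values needed, overlaps with communication\n",
+    "MPI.Waitall(reqs)            # wait until the ghost values have arrived\n",
+    "compute_boundary!(u_new, u)  # finally update the entries next to the ghost cells\n",
+    "```"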
]
},
{
diff --git a/notebooks/julia_mpi.ipynb b/notebooks/julia_mpi.ipynb
index 035ed72..cf5cb65 100644
--- a/notebooks/julia_mpi.ipynb
+++ b/notebooks/julia_mpi.ipynb
@@ -75,6 +75,7 @@
"- MPI is not a Julia implementation of the MPI standard\n",
"- It is just a wrapper to the C interface of MPI.\n",
"- You need a C MPI installation in your system (MPI.jl downloads one for you when needed).\n",
+    "- On a cluster (e.g. DAS-5), you want to use the MPI installation already available on the system (see the snippet below).\n",
"\n",
"\n",
"### Why MPI.jl?\n",
@@ -211,7 +212,7 @@
"MPI.Finalize()\n",
"```\n",
"\n",
- "In some process `rand(1:10)` might be 2 and the program will stop without reaching `MPI.Finalize()` leading to an incorrect program."
+    "This is incorrect. On some process, `rand(1:10)` might be 2, and that process will stop without reaching `MPI.Finalize()`.\n",
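+    "\n",
+    "One way to make it correct is to ensure that every process reaches `MPI.Finalize()` on every code path, for instance (a sketch; the work inside the branch is just a placeholder):\n",
+    "\n",
+    "```julia\n",
+    "using MPI\n",
+    "MPI.Init()\n",
+    "if rand(1:10) != 2\n",
+    "    # ... do the actual work only in this branch ...\n",
+    "end\n",
+    "MPI.Finalize()  # reached by all processes on all code paths\n",
+    "```"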
]
},
{
@@ -367,7 +368,7 @@
"id": "f1a502a3",
"metadata": {},
"source": [
- "Note that this note notebook is running on a single process. So using MPI will only make sense later when we add more processes."
+    "Note that this notebook is running on a single process, so using MPI will only really become useful later when we add more processes."
]
},
{
@@ -626,13 +627,13 @@
"source": [
"## Point-to-point communication\n",
"\n",
- "Now we are up and running, and ready to start learning MPI communication primitives. In this notebook we will cover so-called point-to-point communication directives. In a later notebook we will also learn about collective primitives.\n",
+ "Now we are up and running, and ready to start learning MPI communication primitives. In this notebook we will cover so-called point-to-point communication. In a later notebook we will also learn about collective primitives.\n",
"\n",
"MPI provides point-to-point communication directives for arbitrary communication between processes. Point-to-point communications are two-sided: there is a sender and a receiver. Here, we will discuss different types of directives:\n",
"\n",
- "- `MPI_Send`, and `MPI_Recv` (*blocking directives*)\n",
- "- `MPI_Isend`, and `MPI_Irecv` (*non-blocking directives*)\n",
- "- `MPI_Bsend`, `MPI_Ssend`, and `MPI_Rsend` (*advanced communication modes*)"
+    "- `MPI_Send` and `MPI_Recv`: *complete (blocking) directives*\n",
+    "- `MPI_Isend` and `MPI_Irecv`: *incomplete (non-blocking) directives*\n",
+ "- `MPI_Bsend`, `MPI_Ssend`, and `MPI_Rsend`: *advanced communication modes*"
]
},
{
@@ -640,7 +641,7 @@
"id": "0e515109",
"metadata": {},
"source": [
- "In all cases, these functions are used to send a message from a ranks and receive it in another rank. See next picture."
+ "In all cases, these functions are used to send a message from a rank and receive it in another rank. See next picture."
]
},
{
@@ -979,7 +980,7 @@
"\n",
"\n",
"\n",
- "`MPI_Send` is also often called a blocking send, but this is very misleading. `MPI_Send` might or not wait for a matching `MPI_Recv`. Assuming that `MPI_Send` will block waiting for a matching receive is erroneous. I.e., we cannot assume that `MPI_Send` has synchronization side effects with the receiver process. However, assuming that `MPI_Send` will not block is also erroneous. Look into the following example (which in fact is an incorrect MPI program). In contrast, `MPI_Send` guarantees that the send buffer can be reused when function returns (complete operation)."
+    "`MPI_Send` is *informally* called a blocking send, but this is not accurate. `MPI_Send` might or might not wait for a matching `MPI_Recv`. Assuming that `MPI_Send` will block waiting for a matching receive is erroneous, i.e., we cannot assume that `MPI_Send` has synchronization side effects with the receiver process. However, assuming that `MPI_Send` will not block is also erroneous. Look at the following example (which in fact is an incorrect MPI program). `MPI_Send` only guarantees that the send buffer can be reused when the function returns (complete operation)."
]
},
{
@@ -1042,7 +1043,7 @@
"1. One might want to minimize synchronization time. This is often achieved by copying the outgoing message in an internal buffer and returning from the `MPI_Send` as soon as possible, without waiting for a matching `MPI_Recv`.\n",
"2. One might want to avoid data copies (e.g. for large messages). In this case, one needs to wait for a matching receive and return from the `MPI_Send` when the data has been sent.\n",
"\n",
- "Thus, there is a trade-off between memory copied (buffering) and synchronization (wait) time. One cannot minimize both at the same time."
+    "Thus, there is a trade-off between memory copied (buffering) and synchronization (wait) time. Unfortunately, one cannot minimize both at the same time."
]
},
{
@@ -1497,7 +1498,7 @@
"function matmul_mpi_3!(C,A,B)\n",
"```\n",
"\n",
- "Assume that the input matrices `A` and `B` are given only on rank 0, the other ranks get dummy matrices with zero rows and zero columns to save memory. You need to communicate the required parts to other ranks. For simplicity you can assume that `A` and `B` are square matrices and that the number of rows is a multiple of the number of processes (on rank 0). The result `C` should be overwritten only on rank 0. You can use the following cell to implement and check your result. Copy the code below to a file called `ex1.jl`. Modify the file (e.g. with vscode). Run it from the Julia REPL using the `run` function as explained in the [Getting Started tutorial](https://www.francescverdugo.com/XM_40017/dev/getting_started_with_julia/#Running-MPI-code)."
+    "Assume that the input matrices `A` and `B` are given only on rank 0, while the other ranks get dummy empty matrices to save memory. You need to communicate the required parts to the other ranks. For simplicity you can assume that `A` and `B` are square matrices and that the number of rows is a multiple of the number of processes (on rank 0). The result `C` should be overwritten only on rank 0. You can use the following cell to implement and check your result. Copy the code below to a file called `ex1.jl`. Modify the file (e.g. with vscode). Run it from the Julia REPL using the `run` function as explained in the [Getting Started tutorial](https://www.francescverdugo.com/XM_40017/dev/getting_started_with_julia/#Running-MPI-code). Do not try to implement complex MPI code directly in a Jupyter notebook.\n",
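+    "\n",
+    "For instance, following the `mpiexec` pattern used in these notebooks, the file could be launched with 4 ranks like this (assuming `ex1.jl` is in the current directory):\n",
+    "\n",
+    "```julia\n",
+    "using MPI\n",
+    "run(`$(mpiexec()) -np 4 julia --project=. ex1.jl`)\n",
+    "```"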
]
},
{
diff --git a/notebooks/mpi_collectives.ipynb b/notebooks/mpi_collectives.ipynb
index 7555343..feced87 100644
--- a/notebooks/mpi_collectives.ipynb
+++ b/notebooks/mpi_collectives.ipynb
@@ -97,7 +97,7 @@
"source": [
"## MPI_Barrier\n",
"\n",
- "This function is used to synchronizes a group of processes. All processes block until all have reached the barrier. It is often invoked at the end of for loops to make sure all processes have finished the current loop iteration to move to the next one. We will see an example later in another notebook when studying the traveling sales person problem (TSP).\n",
+    "This function is used to synchronize a group of processes. All processes block until all of them have reached the barrier. It is often invoked at the end of for loops to make sure all processes have finished the current loop iteration before moving to the next one. We will see a practical example later in another notebook when studying the traveling salesperson problem (TSP).\n",
"\n",
"In Julia:\n",
"```julia\n",
@@ -117,7 +117,7 @@
"source": [
"### Example\n",
"\n",
- "In this example the ranks sleep for a random amount of time and then they call barrier. It is guaranteed that the message \"Done!\" will be printed after all processes printed \"I woke up\" since we used a barrier. Try also to comment out the call to `MPI.Barrier`. You will see that the message can be printed in any order in this case."
+    "In this example the ranks sleep for a random amount of time and then call the barrier. It is guaranteed that the message \"Done!\" will be printed after all processes have printed \"I woke up\", since we use a barrier. Try also to comment out the call to `MPI.Barrier`. You will see that the messages can be printed in any order."
]
},
{
@@ -147,7 +147,7 @@
"source": [
"## MPI_Reduce\n",
"\n",
- "This function combines values provided by different processors according to a given reduction operation. The result is received in a single process (called the root process).\n",
+ "This function combines values provided by different processors according to a given reduction operation. The result is received in a single process (called the root process). The root process can be any process and it is rank 0 by default in Julia.\n",
"\n",
"In Julia:\n",
"```julia\n",
@@ -301,7 +301,12 @@
"source": [
"## MPI_Gather\n",
"\n",
- "Each rank sends a message to the root rank (the root rank also sends a message to itself). The root rank receives all these values in a buffer (e.g. a vector). This function assumes that the amount of data sent from each rank is the same. The root process can be any process and it is rank 0 by default in Julia.\n",
+ "Each rank sends a message to the root rank (the root rank also sends a message to itself). The root rank receives all these values in a buffer (e.g. a vector).\n",
+ "\n",
+    "\n",
+ "Note: This function assumes that the amount of data sent from each rank is the same. See `MPI_Gatherv` below for more general cases.\n",
+    "\n",
+ "\n",
"\n",
"In Julia:\n",
"```julia\n",
@@ -487,7 +492,7 @@
"source": [
"### Example\n",
"\n",
- "Each process sends a random amount of integers to rank 0."
+    "Each process sends a random number of integers to rank 0. The root process does not know in advance how much data it will receive from each rank, so we need an auxiliary gather to communicate the message sizes first."
]
},
{
@@ -898,6 +903,24 @@
"After learning this material and the previous MPI notebook, you have a solid basis to start implementing sophisticated parallel algorithms using MPI."
]
},
+ {
+ "cell_type": "markdown",
+ "id": "843b40cd",
+ "metadata": {},
+ "source": [
+ "## Exercises"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "5c2045d9",
+ "metadata": {},
+ "source": [
+ "### Exercise 1\n",
+ "\n",
+    "Implement the parallel matrix-matrix multiplication (Algorithm 3) using MPI collectives instead of point-to-point communication. I.e., this is the same exercise as in the previous notebook, but using different functions for communication."
+ ]
+ },
{
"cell_type": "markdown",
"id": "5e8f6e6a",