More changes in jacobi and MPI notebooks
This commit is contained in:
parent
c35f674bd5
commit
bbb749ddc0
@ -65,7 +65,38 @@
|
|||||||
"jacobi_3_check(answer) = answer_checker(answer, \"c\")\n",
|
"jacobi_3_check(answer) = answer_checker(answer, \"c\")\n",
|
||||||
"lh_check(answer) = answer_checker(answer, \"c\")\n",
|
"lh_check(answer) = answer_checker(answer, \"c\")\n",
|
||||||
"sndrcv_check(answer) = answer_checker(answer,\"b\")\n",
|
"sndrcv_check(answer) = answer_checker(answer,\"b\")\n",
|
||||||
"function sndrcv_fix_answer()\n",
|
"function partition_1d_answer(bool)\n",
|
||||||
|
" bool || return\n",
|
||||||
|
" msg = \"\"\"\n",
|
||||||
|
"- We update N^2/P items per iteration\n",
|
||||||
|
"- We need data from 2 neighbors (2 messages per iteration)\n",
|
||||||
|
"- We communicate N items per message\n",
|
||||||
|
"- Communication/computation ratio is 2N/(N^2/P) = 2P/N =O(P/N)\n",
|
||||||
|
" \"\"\"\n",
|
||||||
|
" println(msg)\n",
|
||||||
|
"end\n",
|
||||||
|
"function partition_2d_answer(bool)\n",
|
||||||
|
" bool || return\n",
|
||||||
|
" msg = \"\"\"\n",
|
||||||
|
"- We update N^2/P items per iteration\n",
|
||||||
|
"- We need data from 4 neighbors (4 messages per iteration)\n",
|
||||||
|
"- We communicate N/sqrt(P) items per message\n",
|
||||||
|
"- Communication/computation ratio is (4N/sqrt(P)/(N^2/P)= 4sqrt(P)/N =O(sqrt(P)/N)\n",
|
||||||
|
" \"\"\"\n",
|
||||||
|
" println(msg)\n",
|
||||||
|
"end\n",
|
||||||
|
"function partition_cyclic_answer(bool)\n",
|
||||||
|
" bool || return\n",
|
||||||
|
" msg = \"\"\"\n",
|
||||||
|
"- We update N^2/P items\n",
|
||||||
|
"- We need data from 4 neighbors (4 messages per iteration)\n",
|
||||||
|
"- We communicate N^2/P items per message (the full data owned by the neighbor)\n",
|
||||||
|
"- Communication/computation ratio is O(1)\n",
|
||||||
|
" \"\"\"\n",
|
||||||
|
"println(msg)\n",
|
||||||
|
"end\n",
|
||||||
|
"function sndrcv_fix_answer(bool)\n",
|
||||||
|
" bool || return\n",
|
||||||
" msg = \"\"\"\n",
|
" msg = \"\"\"\n",
|
||||||
" One needs to carefully order the sends and the receives to avoid cyclic dependencies\n",
|
" One needs to carefully order the sends and the receives to avoid cyclic dependencies\n",
|
||||||
" that might result in deadlocks. The actual implementation is left as an exercise. \n",
|
" that might result in deadlocks. The actual implementation is left as an exercise. \n",
|
||||||
@ -73,7 +104,8 @@
|
|||||||
" println(msg)\n",
|
" println(msg)\n",
|
||||||
"end\n",
|
"end\n",
|
||||||
"jacobitest_check(answer) = answer_checker(answer,\"a\")\n",
|
"jacobitest_check(answer) = answer_checker(answer,\"a\")\n",
|
||||||
"function jacobitest_why()\n",
|
"function jacobitest_why(bool)\n",
|
||||||
|
" bool || return\n",
|
||||||
" msg = \"\"\"\n",
|
" msg = \"\"\"\n",
|
||||||
" The test will pass. The parallel implementation does exactly the same operations\n",
|
" The test will pass. The parallel implementation does exactly the same operations\n",
|
||||||
" in exactly the same order than the sequential one. Thus, the result should be\n",
|
" in exactly the same order than the sequential one. Thus, the result should be\n",
|
||||||
@ -83,7 +115,8 @@
|
|||||||
" println(msg)\n",
|
" println(msg)\n",
|
||||||
"end\n",
|
"end\n",
|
||||||
"gauss_seidel_2_check(answer) = answer_checker(answer,\"d\")\n",
|
"gauss_seidel_2_check(answer) = answer_checker(answer,\"d\")\n",
|
||||||
"function gauss_seidel_2_why()\n",
|
"function gauss_seidel_2_why(bool)\n",
|
||||||
|
" bool || return\n",
|
||||||
" msg = \"\"\"\n",
|
" msg = \"\"\"\n",
|
||||||
" All \"red\" cells can be updated in parallel as they only depend on the values of \"black\" cells.\n",
|
" All \"red\" cells can be updated in parallel as they only depend on the values of \"black\" cells.\n",
|
||||||
" In order workds, we can update the \"red\" cells in any order whithout changing the result. They only\n",
|
" In order workds, we can update the \"red\" cells in any order whithout changing the result. They only\n",
|
||||||
@ -127,7 +160,7 @@
|
|||||||
"$u^{t+1}_i = \\dfrac{u^t_{i-1}+u^t_{i+1}}{2}$\n",
|
"$u^{t+1}_i = \\dfrac{u^t_{i-1}+u^t_{i+1}}{2}$\n",
|
||||||
"\n",
|
"\n",
|
||||||
"\n",
|
"\n",
|
||||||
"This iterative is yet simple but shares fundamental challenges with many other algorithms used in scientific computing. This is why we are studying it here.\n"
|
"This algorithm is yet simple but shares fundamental challenges with many other algorithms used in scientific computing. This is why we are studying it here.\n"
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
@ -137,7 +170,12 @@
|
|||||||
"source": [
|
"source": [
|
||||||
"### Serial implementation\n",
|
"### Serial implementation\n",
|
||||||
"\n",
|
"\n",
|
||||||
"The following code implements the iterative scheme above for boundary conditions -1 and 1 on a grid with $n$ interior points."
|
"The following code implements the iterative scheme above for boundary conditions -1 and 1 on a grid with $n$ interior points.\n",
|
||||||
|
"\n",
|
||||||
|
"<div class=\"alert alert-block alert-info\">\n",
|
||||||
|
"<b>Note:</b> `u, u_new = u_new, u` is equivalent to `tmp = u; u = u_new; u_new = tmp`. I.e. we swap the arrays `u` and `u_new` are referring to. \n",
|
||||||
|
"</div>\n",
|
||||||
|
"\n"
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
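To illustrate the note above, here is a minimal sketch (editor-added, not part of the notebook) showing that the swap only exchanges which array each name is bound to; no data is copied:

```julia
# Minimal illustration of the swap idiom used in the Jacobi loop.
u = [0.0, 1.0, 2.0]
u_new = similar(u)            # scratch array for the updated values
p = pointer(u)                # remember where u's data lives
u, u_new = u_new, u           # swap the bindings, no data is copied
@assert pointer(u_new) == p   # u_new now refers to the array formerly named u
```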
{
|
{
|
||||||
@ -203,7 +241,7 @@
|
|||||||
"id": "22fda724",
|
"id": "22fda724",
|
||||||
"metadata": {},
|
"metadata": {},
|
||||||
"source": [
|
"source": [
|
||||||
"In our version of the Jacobi method, we return after a given number of iterations. Other stopping criteria are possible. For instance, iterate until the maximum difference between u and u_new is below a tolerance:"
|
"In our version of the Jacobi method, we return after a given number of iterations. Other stopping criteria are possible. For instance, iterate until the maximum difference between u and u_new (in absolute value) is below a tolerance."
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
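As an illustration of such a stopping criterion, here is a hedged sketch (the notebook's actual code may differ) of a 1D Jacobi iteration that stops once the maximum absolute change falls below a tolerance `tol`:

```julia
# Hedged sketch: 1D Jacobi that iterates until the largest absolute change
# between two consecutive iterations is below `tol`.
function jacobi_with_tol(n, tol)
    u = zeros(n + 2)
    u[1] = -1.0; u[end] = 1.0            # boundary conditions used in the notebook
    u_new = copy(u)
    while true
        diff = 0.0
        for i in 2:(n + 1)
            u_new[i] = 0.5 * (u[i - 1] + u[i + 1])
            diff = max(diff, abs(u_new[i] - u[i]))
        end
        u, u_new = u_new, u              # reuse the arrays, as in the note above
        diff < tol && break
    end
    u
end

jacobi_with_tol(5, 1e-9)
```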
{
|
{
|
||||||
@ -252,7 +290,7 @@
|
|||||||
"id": "6e085701",
|
"id": "6e085701",
|
||||||
"metadata": {},
|
"metadata": {},
|
||||||
"source": [
|
"source": [
|
||||||
"However, we are not going to parallelize this more complex in this notebook (left as an exercise)."
|
"However, we are not going to parallelize this more complex in this notebook (left as an exercise). The simpler one is already challenging enough to start with."
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
@ -298,7 +336,7 @@
|
|||||||
"\n",
|
"\n",
|
||||||
"Remember that a sufficiently large grain size is needed to achieve performance in a distributed algorithm. For Jacobi, one could update each entry of vector `u_new` in a different process, but this would not be efficient. Instead, we use a parallelization strategy with a larger grain size that is analogous to the algorithm 3 we studied for the matrix-matrix multiplication:\n",
|
"Remember that a sufficiently large grain size is needed to achieve performance in a distributed algorithm. For Jacobi, one could update each entry of vector `u_new` in a different process, but this would not be efficient. Instead, we use a parallelization strategy with a larger grain size that is analogous to the algorithm 3 we studied for the matrix-matrix multiplication:\n",
|
||||||
"\n",
|
"\n",
|
||||||
"- Each worker updates a consecutive section of the array `u_new` \n",
|
"- Data partition: each worker updates a consecutive section of the array `u_new` \n",
|
||||||
"\n",
|
"\n",
|
||||||
"The following figure displays the data distribution over 3 processes."
|
"The following figure displays the data distribution over 3 processes."
|
||||||
]
|
]
|
||||||
@ -335,7 +373,7 @@
|
|||||||
"id": "ba4113af",
|
"id": "ba4113af",
|
||||||
"metadata": {},
|
"metadata": {},
|
||||||
"source": [
|
"source": [
|
||||||
"Note that an entry in the interior of the locally stored vector can be updated using local data only. For this one, communication is not needed."
|
"Note that an entry in the interior of the locally stored vector can be updated using local data only. For updating this one, communication is not needed."
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
@ -405,6 +443,10 @@
|
|||||||
"metadata": {},
|
"metadata": {},
|
||||||
"source": [
|
"source": [
|
||||||
"### Communication overhead\n",
|
"### Communication overhead\n",
|
||||||
|
"\n",
|
||||||
|
"Now that we understand which are the data dependencies, we can do the theoretical performance analysis.\n",
|
||||||
|
"\n",
|
||||||
|
"\n",
|
||||||
"- We update $N/P$ entries in each process at each iteration, where $N$ is the total length of the vector and $P$ the number of processes\n",
|
"- We update $N/P$ entries in each process at each iteration, where $N$ is the total length of the vector and $P$ the number of processes\n",
|
||||||
"- Thus, computation complexity is $O(N/P)$\n",
|
"- Thus, computation complexity is $O(N/P)$\n",
|
||||||
"- We need to get remote entries from 2 neighbors (2 messages per iteration)\n",
|
"- We need to get remote entries from 2 neighbors (2 messages per iteration)\n",
|
||||||
@ -420,7 +462,7 @@
|
|||||||
"source": [
|
"source": [
|
||||||
"### Ghost (aka halo) cells\n",
|
"### Ghost (aka halo) cells\n",
|
||||||
"\n",
|
"\n",
|
||||||
"A usual way of implementing the Jacobi method and related algorithms is using so-called ghost cells. Ghost cells represent the missing data dependencies in the data owned by each process. After importing the appropriate values from the neighbor processes one can perform the usual sequential Jacobi update locally in the processes."
|
"This parallel strategy is efficient according to the theoretical analysis. But how to implement it? A usual way of implementing the Jacobi method and related algorithms is using so-called ghost cells. Ghost cells represent the missing data dependencies in the data owned by each process. After importing the appropriate values from the neighbor processes one can perform the usual sequential Jacobi update locally in the processes. Cells with gray edges are ghost (or boundary) cells in the following figure. Note that we added one ghost cell at the front and end of the local array."
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
@ -464,6 +506,14 @@
|
|||||||
"</div>"
|
"</div>"
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"id": "0a40846c",
|
||||||
|
"metadata": {},
|
||||||
|
"source": [
|
||||||
|
"We are going to implement this algorithm with MPI later in this notebook."
|
||||||
|
]
|
||||||
|
},
|
||||||
{
|
{
|
||||||
"cell_type": "markdown",
|
"cell_type": "markdown",
|
||||||
"id": "75f735a2",
|
"id": "75f735a2",
|
||||||
@ -474,7 +524,7 @@
|
|||||||
"\n",
|
"\n",
|
||||||
"The Jacobi method studied so far was for a one dimensional Laplace equation. In real-world applications however, one solve equations in multiple dimensions. Typically 2D and 3D. The 2D and 3D cases are conceptually equivalent, but we will discuss the 2D case here for simplicity.\n",
|
"The Jacobi method studied so far was for a one dimensional Laplace equation. In real-world applications however, one solve equations in multiple dimensions. Typically 2D and 3D. The 2D and 3D cases are conceptually equivalent, but we will discuss the 2D case here for simplicity.\n",
|
||||||
"\n",
|
"\n",
|
||||||
"Now the goal is to find the interior points of a 2D grid given the values at the boundary.\n",
|
"Now, the goal is to find the interior points of a 2D grid given the values at the boundary.\n",
|
||||||
"\n"
|
"\n"
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
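For reference, a minimal editor-added sketch of one 2D Jacobi sweep, assuming the boundary values are stored on the outer ring of the array (the function name is illustrative, not the notebook's):

```julia
# Minimal sketch of one 2D Jacobi sweep. The (n+2)×(n+2) array `u` stores the
# boundary values on its outer ring; only the interior entries are updated.
function jacobi_2d_sweep!(u_new, u)
    n = size(u, 1) - 2
    for j in 2:(n + 1), i in 2:(n + 1)
        u_new[i, j] = 0.25 * (u[i - 1, j] + u[i + 1, j] + u[i, j - 1] + u[i, j + 1])
    end
    u_new
end
```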
@ -618,10 +668,17 @@
|
|||||||
"\n",
|
"\n",
|
||||||
"In 2d one has more flexibility in order to distribute the data over the processes. We consider these three alternatives:\n",
|
"In 2d one has more flexibility in order to distribute the data over the processes. We consider these three alternatives:\n",
|
||||||
"\n",
|
"\n",
|
||||||
"- 1D block partition (each worker handles a subset of consecutive rows and all columns)\n",
|
"- 1D block row partition (each worker handles a subset of consecutive rows and all columns)\n",
|
||||||
"- 2D block partition (each worker handles a subset of consecutive rows and columns)\n",
|
"- 2D block partition (each worker handles a subset of consecutive rows and columns)\n",
|
||||||
"- 2D cyclic partition (each workers handles a subset of alternating rows ans columns)\n",
|
"- 2D cyclic partition (each workers handles a subset of alternating rows ans columns)\n",
|
||||||
"\n",
|
"\n",
|
||||||
|
"\n",
|
||||||
|
"\n",
|
||||||
|
"<div class=\"alert alert-block alert-info\">\n",
|
||||||
|
"<b>Note:</b> Other options are 1D block column partition and 1D cyclic (row or column) partition. They are not analyzed in this notebook since they are closely related to the other strategies. In Julia, in fact, it is often preferable to work with 1D block column partitions than with 1D block row partitions since matrices are stored in column major order.\n",
|
||||||
|
"</div>\n",
|
||||||
|
"\n",
|
||||||
|
"\n",
|
||||||
"The three partition types are depicted in the following figure for 4 processes."
|
"The three partition types are depicted in the following figure for 4 processes."
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
@ -675,13 +732,23 @@
|
|||||||
},
|
},
|
||||||
{
|
{
|
||||||
"cell_type": "markdown",
|
"cell_type": "markdown",
|
||||||
"id": "4f1e0942",
|
"id": "1bc21623",
|
||||||
"metadata": {},
|
"metadata": {},
|
||||||
"source": [
|
"source": [
|
||||||
"- We update $N^2/P$ items per iteration\n",
|
"<div class=\"alert alert-block alert-success\">\n",
|
||||||
"- We need data from 2 neighbors (2 messages per iteration)\n",
|
"<b>Question:</b> Compute the complexity of the communication over computation ratio for this data partition.\n",
|
||||||
"- We communicate $N$ items per message\n",
|
"</div>"
|
||||||
"- Communication/computation ratio is $2N/(N^2/P) = 2P/N =O(P/N)$"
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"execution_count": null,
|
||||||
|
"id": "d01f8ce8",
|
||||||
|
"metadata": {},
|
||||||
|
"outputs": [],
|
||||||
|
"source": [
|
||||||
|
"uncover = false # Change to true to see the answer\n",
|
||||||
|
"partition_1d_answer(uncover)"
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
@ -709,13 +776,23 @@
|
|||||||
},
|
},
|
||||||
{
|
{
|
||||||
"cell_type": "markdown",
|
"cell_type": "markdown",
|
||||||
"id": "abb6520c",
|
"id": "09bd28ca",
|
||||||
"metadata": {},
|
"metadata": {},
|
||||||
"source": [
|
"source": [
|
||||||
"- We update $N^2/P$ items per iteration\n",
|
"<div class=\"alert alert-block alert-success\">\n",
|
||||||
"- We need data from 4 neighbors (4 messages per iteration)\n",
|
"<b>Question:</b> Compute the complexity of the communication over computation ratio for this data partition.\n",
|
||||||
"- We communicate $N/\\sqrt{P}$ items per message\n",
|
"</div>"
|
||||||
"- Communication/computation ratio is $ (4N/\\sqrt{P})/(N^2/P)= 4\\sqrt{P}/N =O(\\sqrt{P}/N)$"
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"execution_count": null,
|
||||||
|
"id": "e94a1ea6",
|
||||||
|
"metadata": {},
|
||||||
|
"outputs": [],
|
||||||
|
"source": [
|
||||||
|
"uncover = false\n",
|
||||||
|
"partition_2d_answer(uncover)"
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
@ -743,13 +820,23 @@
|
|||||||
},
|
},
|
||||||
{
|
{
|
||||||
"cell_type": "markdown",
|
"cell_type": "markdown",
|
||||||
"id": "9cd32923",
|
"id": "b373e9ce",
|
||||||
"metadata": {},
|
"metadata": {},
|
||||||
"source": [
|
"source": [
|
||||||
"- We update $N^2/P$ items\n",
|
"<div class=\"alert alert-block alert-success\">\n",
|
||||||
"- We need data from 4 neighbors (4 messages per iteration)\n",
|
"<b>Question:</b> Compute the complexity of the communication over computation ratio for this data partition.\n",
|
||||||
"- We communicate $N^2/P$ items per message (the full data owned by the neighbor)\n",
|
"</div>"
|
||||||
"- Communication/computation ratio is $O(1)$"
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"execution_count": null,
|
||||||
|
"id": "10fab825",
|
||||||
|
"metadata": {},
|
||||||
|
"outputs": [],
|
||||||
|
"source": [
|
||||||
|
"uncover = false\n",
|
||||||
|
"partition_cyclic_answer(uncover)"
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
@ -897,7 +984,7 @@
|
|||||||
"source": [
|
"source": [
|
||||||
"### Backwards Gauss-Seidel\n",
|
"### Backwards Gauss-Seidel\n",
|
||||||
"\n",
|
"\n",
|
||||||
"In addition, the the result of the Gauss-Seidel method depends on the order of the steps in the loop over `i`. This is another symptom that tells you that this loop is hard to parallelize. For instance, if you do the iterations over `i` by reversing the loop order, you get another method called *backward* Gauss-Seidel."
|
"In addition, the the result of the Gauss-Seidel method depends on the order of the steps in the loop over `i`. This is another symptom that tells you that this loop is hard (or impossible) to parallelize. For instance, if you do the iterations over `i` by reversing the loop order, you get another method called *backward* Gauss-Seidel."
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
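A minimal editor-added sketch of the reversed loop, assuming the 1D setting used above (`u[1]` and `u[end]` hold the boundary values); the notebook's own implementation may differ:

```julia
# Hedged sketch: backward Gauss-Seidel is the same in-place update with the
# loop over i reversed.
function gauss_seidel_backward!(u, niters)
    n = length(u) - 2
    for _ in 1:niters
        for i in (n + 1):-1:2      # reversed loop order
            u[i] = 0.5 * (u[i - 1] + u[i + 1])
        end
    end
    u
end
```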
{
|
{
|
||||||
@ -925,7 +1012,7 @@
|
|||||||
"id": "63c4ce1f",
|
"id": "63c4ce1f",
|
||||||
"metadata": {},
|
"metadata": {},
|
||||||
"source": [
|
"source": [
|
||||||
"Both Jacobi and *forward* and *backward* Gauss-Seidel converge to the same result, but they lead to slightly different values during the iterations. Check it with the following cells. First, run it with one `niters=1` and then with `niters=100`."
|
"Both Jacobi and *forward* and *backward* Gauss-Seidel converge to the same result, but they lead to slightly different values during the iterations. Check it with the following cells. First, run the methods with `niters=1` and then with `niters=100`."
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
@ -967,7 +1054,7 @@
|
|||||||
"source": [
|
"source": [
|
||||||
"### Red-black Gauss-Seidel\n",
|
"### Red-black Gauss-Seidel\n",
|
||||||
"\n",
|
"\n",
|
||||||
"There is another version called *red-black* Gauss-Seidel. This uses a very clever order for the steps in the loop over `i`. It does this loop in two phases. First, one updates the entries with even index, and then the entries with odd index."
|
"There is yet another version called *red-black* Gauss-Seidel. This uses a very clever order for the steps in the loop over `i`. It does this loop in two phases. First, one updates the entries with even index, and then the entries with odd index."
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
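A hedged, editor-added sketch of the two-phase update in 1D (function name and indexing convention are assumptions, not the notebook's exact code). Within each phase, every updated entry only reads entries belonging to the other phase, so each phase can be done in parallel:

```julia
# Hedged sketch of the two-phase (red-black) Gauss-Seidel update in 1D.
# `u[1]` and `u[end]` hold the boundary values.
function gauss_seidel_red_black!(u, niters)
    n = length(u) - 2
    for _ in 1:niters
        for i in 2:2:(n + 1)       # first phase: every other interior entry
            u[i] = 0.5 * (u[i - 1] + u[i + 1])
        end
        for i in 3:2:(n + 1)       # second phase: the remaining interior entries
            u[i] = 0.5 * (u[i - 1] + u[i + 1])
        end
    end
    u
end
```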
{
|
{
|
||||||
@ -1083,7 +1170,18 @@
|
|||||||
"metadata": {},
|
"metadata": {},
|
||||||
"outputs": [],
|
"outputs": [],
|
||||||
"source": [
|
"source": [
|
||||||
"gauss_seidel_2_why()"
|
"uncover = false\n",
|
||||||
|
"gauss_seidel_2_why(uncover)"
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"id": "41e90d60",
|
||||||
|
"metadata": {},
|
||||||
|
"source": [
|
||||||
|
"### Changing an algorithm to make it parallel\n",
|
||||||
|
"\n",
|
||||||
|
"Note that the original method (the forward Gauss-Seidel) cannot be parallelized, we needed to modify the method slightly with the red-black ordering in order to create a method that can be parallelized. However the method we parallelized is not equivalent to the original one. This happens in practice in many other applications. An algorithm might be impossible to parallelize and one needs to modify it to exploit parallelism. However, one needs to be careful when modifying the algorithm to not destroy the algorithmic properties of the original one. In this case, we succeeded. The red-black Gauss-Seidel converges as fast (if not faster) than the original forward Gauss-Seidel. However, this is not true in general. There is often a trade-off between the algorithmic properties and how parallelizable is the algorithm."
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
@ -1093,7 +1191,7 @@
|
|||||||
"source": [
|
"source": [
|
||||||
"## MPI implementation\n",
|
"## MPI implementation\n",
|
||||||
"\n",
|
"\n",
|
||||||
"We consider the implementation of the Jacobi method using MPI. We will consider the 1D version for simplicity.\n",
|
"In the last part of this notebook, we consider the implementation of the Jacobi method using MPI. We will consider the 1D version for simplicity.\n",
|
||||||
"\n",
|
"\n",
|
||||||
"\n",
|
"\n",
|
||||||
"<div class=\"alert alert-block alert-info\">\n",
|
"<div class=\"alert alert-block alert-info\">\n",
|
||||||
@ -1154,7 +1252,7 @@
|
|||||||
"source": [
|
"source": [
|
||||||
"### Initialization\n",
|
"### Initialization\n",
|
||||||
"\n",
|
"\n",
|
||||||
"Let us start with function `init`. This is its implementation:"
|
"Let us start with function `init`."
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
@ -1191,7 +1289,7 @@
|
|||||||
"id": "1b9e75d8",
|
"id": "1b9e75d8",
|
||||||
"metadata": {},
|
"metadata": {},
|
||||||
"source": [
|
"source": [
|
||||||
"This function crates and initializes the vector `u` and the auxiliary vector `u_new` and fills in the boundary values. Note that we are not creating the full arrays like in the sequential case. We are only creating the parts to be managed by the current rank. To this end, we start by computing the number of entries to be updated in this rank, i.e., variable `load`. We have assumed that `n` is a multiple of the number of ranks for simplicity. If this is not the case, we stop the computation with `MPI.Abort`. Note that we are allocating two extra elements in `u` (and `u_new`) for the ghost cells or boundary conditions. The following figure displays the arrays created for `n==9` and `nranks==3` (thus `load == 3`). Note that the first and last elements of the arrays are displayed with gray edges denoting that they are the extra elements allocated for ghost cells or boundary conditions."
|
"This function crates and initializes the vector `u` and the auxiliary vector `u_new` and fills in the boundary values. Note that we are not creating the full arrays like in the sequential case. We are only creating the parts to be managed by the current rank. To this end, we start by computing the number of entries to be updated in this rank, i.e., variable `load`. We have assumed that `n` is a multiple of the number of ranks for simplicity. If this is not the case, we stop the computation with `MPI.Abort`. Note that we are allocating two extra elements in `u` (and `u_new`) for the ghost cells and boundary conditions. The following figure displays the arrays created for `n==9` and `nranks==3` (thus `load == 3`). Note that the first and last elements of the arrays are displayed with gray edges denoting that they are the extra elements allocated for ghost cells or boundary conditions."
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
@ -1353,7 +1451,8 @@
|
|||||||
"metadata": {},
|
"metadata": {},
|
||||||
"outputs": [],
|
"outputs": [],
|
||||||
"source": [
|
"source": [
|
||||||
"sndrcv_fix_answer()"
|
"uncover = false\n",
|
||||||
|
"sndrcv_fix_answer(uncover)"
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
@ -1363,7 +1462,7 @@
|
|||||||
"source": [
|
"source": [
|
||||||
"### Local computation\n",
|
"### Local computation\n",
|
||||||
"\n",
|
"\n",
|
||||||
"Once the ghost values have the right values, we can perform the Jacobi update locally at each process. This is done in function `local_update!`. Note that here we only update the data *owned* by the current MPI rank, i.e. we do not modify the ghost values. There is no need to modify the ghost values since they will updated by another rank, i.e. the rank that own the value. In the code this is reflected in the loop over `i`. We do not visit the first nor the last entry in `u_new`."
|
"Once the ghost cells have the right values, we can perform the Jacobi update locally at each process. This is done in function `local_update!`. Note that here we only update the data *owned* by the current MPI rank, i.e. we do not modify the ghost values. There is no need to modify the ghost values since they will updated by another rank. In the code, this is reflected in the loop over `i`. We do not visit the first nor the last entry in `u_new`."
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
@ -1390,7 +1489,7 @@
|
|||||||
"source": [
|
"source": [
|
||||||
"### Running the code\n",
|
"### Running the code\n",
|
||||||
"\n",
|
"\n",
|
||||||
"Not let us put all pieces together and run the code. If not done yet, install MPI."
|
"Let us put all pieces together and run the code. If not done yet, install MPI."
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
@ -1408,7 +1507,7 @@
|
|||||||
"id": "c966375a",
|
"id": "c966375a",
|
||||||
"metadata": {},
|
"metadata": {},
|
||||||
"source": [
|
"source": [
|
||||||
"The following cells includes all previous code snippets into a final one. Note that we are eventually calling function `jacobi_mpi` and showing the result vector `u`. Run the following code for 1 MPI rank, then for 2 and 3 MPI ranks. Look into the values of `u`. Does it make sense?"
|
"The following cells includes all previous code snippets into a final one. We are eventually calling function `jacobi_mpi` and showing the result vector `u`. Run the following code for 1 MPI rank, then for 2 and 3 MPI ranks. Look into the values of `u`. Does it make sense?"
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
@ -1442,7 +1541,7 @@
|
|||||||
"source": [
|
"source": [
|
||||||
"### Checking the result\n",
|
"### Checking the result\n",
|
||||||
"\n",
|
"\n",
|
||||||
"Checking the result visually is not enough in general. To check the parallel implementation we want to compare the result against the sequential implementation. The way we do the computations (either in parallel or sequential) should not affect the result. However, how can we compare the sequential and the parallel result? The parallel version gives a distributed vector. We cannot compare this one directly with the result of the sequential function. A possible solution is to gather all the pieces of the parallel result in a single rank and compare there against the parallel implementation.\n",
|
"Checking the result visually is not enough in general. To check the parallel implementation we want to compare it against the sequential implementation. However, how can we compare the sequential and the parallel result? The parallel version gives a distributed vector. We cannot compare this one directly with the result of the sequential function. A possible solution is to gather all the pieces of the parallel result in a single rank and compare there against the parallel implementation.\n",
|
||||||
"\n",
|
"\n",
|
||||||
"\n",
|
"\n",
|
||||||
"The following function gather the distributed vector in rank 0."
|
"The following function gather the distributed vector in rank 0."
|
||||||
@ -1592,58 +1691,6 @@
|
|||||||
"run(`$(mpiexec()) -np 3 julia --project=. -e $code`);"
|
"run(`$(mpiexec()) -np 3 julia --project=. -e $code`);"
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
|
||||||
"cell_type": "markdown",
|
|
||||||
"id": "73cd4d73",
|
|
||||||
"metadata": {},
|
|
||||||
"source": [
|
|
||||||
"Note that we have used function `isapprox` to compare the results. This function checks if two values are the same within machine precision. Using `==` is generally discouraged when working with floating point numbers as they can be affected by rounding-off errors."
|
|
||||||
]
|
|
||||||
},
|
|
||||||
{
|
|
||||||
"cell_type": "markdown",
|
|
||||||
"id": "d73c838c",
|
|
||||||
"metadata": {},
|
|
||||||
"source": [
|
|
||||||
"<div class=\"alert alert-block alert-success\">\n",
|
|
||||||
"<b>Question:</b> What happens if we use `u_root == u_seq` to compare the parallel and the sequential result?\n",
|
|
||||||
"</div>\n",
|
|
||||||
"\n",
|
|
||||||
" a) The test will still pass.\n",
|
|
||||||
" b) The test will fail due to rounding-off errors.\n",
|
|
||||||
" c) The test might pass or fail depending on `n`.\n",
|
|
||||||
" d) The test might pass or fail depending on the number of MPI ranks."
|
|
||||||
]
|
|
||||||
},
|
|
||||||
{
|
|
||||||
"cell_type": "code",
|
|
||||||
"execution_count": null,
|
|
||||||
"id": "cd2427f1",
|
|
||||||
"metadata": {},
|
|
||||||
"outputs": [],
|
|
||||||
"source": [
|
|
||||||
"answer = \"x\" # replace x with a, b, c or d\n",
|
|
||||||
"jacobitest_check(answer)"
|
|
||||||
]
|
|
||||||
},
|
|
||||||
{
|
|
||||||
"cell_type": "markdown",
|
|
||||||
"id": "790e7064",
|
|
||||||
"metadata": {},
|
|
||||||
"source": [
|
|
||||||
"Run cell below for an explanation of the correct answer."
|
|
||||||
]
|
|
||||||
},
|
|
||||||
{
|
|
||||||
"cell_type": "code",
|
|
||||||
"execution_count": null,
|
|
||||||
"id": "72ed2aa1",
|
|
||||||
"metadata": {},
|
|
||||||
"outputs": [],
|
|
||||||
"source": [
|
|
||||||
"jacobitest_why()"
|
|
||||||
]
|
|
||||||
},
|
|
||||||
{
|
{
|
||||||
"cell_type": "markdown",
|
"cell_type": "markdown",
|
||||||
"id": "c9aa2901",
|
"id": "c9aa2901",
|
||||||
@ -1651,7 +1698,7 @@
|
|||||||
"source": [
|
"source": [
|
||||||
"## Latency hiding\n",
|
"## Latency hiding\n",
|
||||||
"\n",
|
"\n",
|
||||||
"Can our implementation above be improved? Note that we only need communications to update the values at the boundary of the portion owned by each process. The other values (the one in green in the figure below) can be updated without communications. This provides the opportunity of overlapping the computation of the interior values (green cells in the figure) with the communication of the ghost values. This technique is called latency hiding, since we are hiding communication latency by overlapping it with computation that we need to do anyway. The actual implementation is left as an exercise (see Exercise 1)."
|
"We have now a correct parallel implementation. But. can our implementation above be improved? Note that we only need communications to update the values at the boundary of the portion owned by each process. The other values (the one in green in the figure below) can be updated without communications. This provides the opportunity of overlapping the computation of the interior values (green cells in the figure) with the communication of the ghost values. This technique is called latency hiding, since we are hiding communication latency by overlapping it with computation that we need to do anyway. The actual implementation is left as an exercise (see Exercise 1)."
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
|
|||||||
@ -75,6 +75,7 @@
|
|||||||
"- MPI is not a Julia implementation of the MPI standard\n",
|
"- MPI is not a Julia implementation of the MPI standard\n",
|
||||||
"- It is just a wrapper to the C interface of MPI.\n",
|
"- It is just a wrapper to the C interface of MPI.\n",
|
||||||
"- You need a C MPI installation in your system (MPI.jl downloads one for you when needed).\n",
|
"- You need a C MPI installation in your system (MPI.jl downloads one for you when needed).\n",
|
||||||
|
"- On a cluster (e.g. DAS-5), you want you use the MPI installation already available in the system.\n",
|
||||||
"\n",
|
"\n",
|
||||||
"\n",
|
"\n",
|
||||||
"### Why MPI.jl?\n",
|
"### Why MPI.jl?\n",
|
||||||
@ -211,7 +212,7 @@
|
|||||||
"MPI.Finalize()\n",
|
"MPI.Finalize()\n",
|
||||||
"```\n",
|
"```\n",
|
||||||
"\n",
|
"\n",
|
||||||
"In some process `rand(1:10)` might be 2 and the program will stop without reaching `MPI.Finalize()` leading to an incorrect program."
|
"This is incorrect. In some process `rand(1:10)` might be 2 and the program will stop without reaching `MPI.Finalize()` leading to an incorrect program."
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
@ -367,7 +368,7 @@
|
|||||||
"id": "f1a502a3",
|
"id": "f1a502a3",
|
||||||
"metadata": {},
|
"metadata": {},
|
||||||
"source": [
|
"source": [
|
||||||
"Note that this note notebook is running on a single process. So using MPI will only make sense later when we add more processes."
|
"Note that this note notebook is running on a single process. So using MPI will only make actual sense later when we add more processes."
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
@ -626,13 +627,13 @@
|
|||||||
"source": [
|
"source": [
|
||||||
"## Point-to-point communication\n",
|
"## Point-to-point communication\n",
|
||||||
"\n",
|
"\n",
|
||||||
"Now we are up and running, and ready to start learning MPI communication primitives. In this notebook we will cover so-called point-to-point communication directives. In a later notebook we will also learn about collective primitives.\n",
|
"Now we are up and running, and ready to start learning MPI communication primitives. In this notebook we will cover so-called point-to-point communication. In a later notebook we will also learn about collective primitives.\n",
|
||||||
"\n",
|
"\n",
|
||||||
"MPI provides point-to-point communication directives for arbitrary communication between processes. Point-to-point communications are two-sided: there is a sender and a receiver. Here, we will discuss different types of directives:\n",
|
"MPI provides point-to-point communication directives for arbitrary communication between processes. Point-to-point communications are two-sided: there is a sender and a receiver. Here, we will discuss different types of directives:\n",
|
||||||
"\n",
|
"\n",
|
||||||
"- `MPI_Send`, and `MPI_Recv` (*blocking directives*)\n",
|
"- `MPI_Send`, and `MPI_Recv`: *complete (blocking) directives*\n",
|
||||||
"- `MPI_Isend`, and `MPI_Irecv` (*non-blocking directives*)\n",
|
"- `MPI_Isend`, and `MPI_Irecv`: *incomplete (non-blocking) directives*\n",
|
||||||
"- `MPI_Bsend`, `MPI_Ssend`, and `MPI_Rsend` (*advanced communication modes*)"
|
"- `MPI_Bsend`, `MPI_Ssend`, and `MPI_Rsend`: *advanced communication modes*"
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
@ -640,7 +641,7 @@
|
|||||||
"id": "0e515109",
|
"id": "0e515109",
|
||||||
"metadata": {},
|
"metadata": {},
|
||||||
"source": [
|
"source": [
|
||||||
"In all cases, these functions are used to send a message from a ranks and receive it in another rank. See next picture."
|
"In all cases, these functions are used to send a message from a rank and receive it in another rank. See next picture."
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
@ -979,7 +980,7 @@
|
|||||||
"\n",
|
"\n",
|
||||||
"\n",
|
"\n",
|
||||||
"\n",
|
"\n",
|
||||||
"`MPI_Send` is also often called a blocking send, but this is very misleading. `MPI_Send` might or not wait for a matching `MPI_Recv`. Assuming that `MPI_Send` will block waiting for a matching receive is erroneous. I.e., we cannot assume that `MPI_Send` has synchronization side effects with the receiver process. However, assuming that `MPI_Send` will not block is also erroneous. Look into the following example (which in fact is an incorrect MPI program). In contrast, `MPI_Send` guarantees that the send buffer can be reused when function returns (complete operation)."
|
"`MPI_Send` is *informally* called a blocking send, but this is not accurate. `MPI_Send` might or not wait for a matching `MPI_Recv`. Assuming that `MPI_Send` will block waiting for a matching receive is erroneous. I.e., we cannot assume that `MPI_Send` has synchronization side effects with the receiver process. However, assuming that `MPI_Send` will not block is also erroneous. Look into the following example (which in fact is an incorrect MPI program). `MPI_Send` only guarantees that the send buffer can be reused when function returns (complete operation)."
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
@ -1042,7 +1043,7 @@
|
|||||||
"1. One might want to minimize synchronization time. This is often achieved by copying the outgoing message in an internal buffer and returning from the `MPI_Send` as soon as possible, without waiting for a matching `MPI_Recv`.\n",
|
"1. One might want to minimize synchronization time. This is often achieved by copying the outgoing message in an internal buffer and returning from the `MPI_Send` as soon as possible, without waiting for a matching `MPI_Recv`.\n",
|
||||||
"2. One might want to avoid data copies (e.g. for large messages). In this case, one needs to wait for a matching receive and return from the `MPI_Send` when the data has been sent.\n",
|
"2. One might want to avoid data copies (e.g. for large messages). In this case, one needs to wait for a matching receive and return from the `MPI_Send` when the data has been sent.\n",
|
||||||
"\n",
|
"\n",
|
||||||
"Thus, there is a trade-off between memory copied (buffering) and synchronization (wait) time. One cannot minimize both at the same time."
|
"Thus, there is a trade-off between memory copied (buffering) and synchronization (wait) time. One cannot minimize both at the same time unfortunately."
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
@ -1497,7 +1498,7 @@
|
|||||||
"function matmul_mpi_3!(C,A,B)\n",
|
"function matmul_mpi_3!(C,A,B)\n",
|
||||||
"```\n",
|
"```\n",
|
||||||
"\n",
|
"\n",
|
||||||
"Assume that the input matrices `A` and `B` are given only on rank 0, the other ranks get dummy matrices with zero rows and zero columns to save memory. You need to communicate the required parts to other ranks. For simplicity you can assume that `A` and `B` are square matrices and that the number of rows is a multiple of the number of processes (on rank 0). The result `C` should be overwritten only on rank 0. You can use the following cell to implement and check your result. Copy the code below to a file called `ex1.jl`. Modify the file (e.g. with vscode). Run it from the Julia REPL using the `run` function as explained in the [Getting Started tutorial](https://www.francescverdugo.com/XM_40017/dev/getting_started_with_julia/#Running-MPI-code)."
|
"Assume that the input matrices `A` and `B` are given only on rank 0, the other ranks get dummy empty matrices to save memory. You need to communicate the required parts to other ranks. For simplicity you can assume that `A` and `B` are square matrices and that the number of rows is a multiple of the number of processes (on rank 0). The result `C` should be overwritten only on rank 0. You can use the following cell to implement and check your result. Copy the code below to a file called `ex1.jl`. Modify the file (e.g. with vscode). Run it from the Julia REPL using the `run` function as explained in the [Getting Started tutorial](https://www.francescverdugo.com/XM_40017/dev/getting_started_with_julia/#Running-MPI-code). Don't try to implement complex MPI code in a Jupyter notebook."
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
|
|||||||
@ -97,7 +97,7 @@
|
|||||||
"source": [
|
"source": [
|
||||||
"## MPI_Barrier\n",
|
"## MPI_Barrier\n",
|
||||||
"\n",
|
"\n",
|
||||||
"This function is used to synchronizes a group of processes. All processes block until all have reached the barrier. It is often invoked at the end of for loops to make sure all processes have finished the current loop iteration to move to the next one. We will see an example later in another notebook when studying the traveling sales person problem (TSP).\n",
|
"This function is used to synchronizes a group of processes. All processes block until all have reached the barrier. It is often invoked at the end of for loops to make sure all processes have finished the current loop iteration to move to the next one. We will see a practical example later in another notebook when studying the traveling sales person problem (TSP).\n",
|
||||||
"\n",
|
"\n",
|
||||||
"In Julia:\n",
|
"In Julia:\n",
|
||||||
"```julia\n",
|
"```julia\n",
|
||||||
@ -117,7 +117,7 @@
|
|||||||
"source": [
|
"source": [
|
||||||
"### Example\n",
|
"### Example\n",
|
||||||
"\n",
|
"\n",
|
||||||
"In this example the ranks sleep for a random amount of time and then they call barrier. It is guaranteed that the message \"Done!\" will be printed after all processes printed \"I woke up\" since we used a barrier. Try also to comment out the call to `MPI.Barrier`. You will see that the message can be printed in any order in this case."
|
"In this example the ranks sleep for a random amount of time and then they call barrier. It is guaranteed that the message \"Done!\" will be printed after all processes printed \"I woke up\" since we used a barrier. Try also to comment out the call to `MPI.Barrier`. You will see that the message can be printed in any order."
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
@ -147,7 +147,7 @@
|
|||||||
"source": [
|
"source": [
|
||||||
"## MPI_Reduce\n",
|
"## MPI_Reduce\n",
|
||||||
"\n",
|
"\n",
|
||||||
"This function combines values provided by different processors according to a given reduction operation. The result is received in a single process (called the root process).\n",
|
"This function combines values provided by different processors according to a given reduction operation. The result is received in a single process (called the root process). The root process can be any process and it is rank 0 by default in Julia.\n",
|
||||||
"\n",
|
"\n",
|
||||||
"In Julia:\n",
|
"In Julia:\n",
|
||||||
"```julia\n",
|
"```julia\n",
|
||||||
@ -301,7 +301,12 @@
|
|||||||
"source": [
|
"source": [
|
||||||
"## MPI_Gather\n",
|
"## MPI_Gather\n",
|
||||||
"\n",
|
"\n",
|
||||||
"Each rank sends a message to the root rank (the root rank also sends a message to itself). The root rank receives all these values in a buffer (e.g. a vector). This function assumes that the amount of data sent from each rank is the same. The root process can be any process and it is rank 0 by default in Julia.\n",
|
"Each rank sends a message to the root rank (the root rank also sends a message to itself). The root rank receives all these values in a buffer (e.g. a vector).\n",
|
||||||
|
"\n",
|
||||||
|
"<div class=\"alert alert-block alert-info\">\n",
|
||||||
|
"<b>Note:</b> This function assumes that the amount of data sent from each rank is the same. See `MPI_Gatherv` below for more general cases.\n",
|
||||||
|
"</div>\n",
|
||||||
|
"\n",
|
||||||
"\n",
|
"\n",
|
||||||
"In Julia:\n",
|
"In Julia:\n",
|
||||||
"```julia\n",
|
"```julia\n",
|
||||||
@ -487,7 +492,7 @@
|
|||||||
"source": [
|
"source": [
|
||||||
"### Example\n",
|
"### Example\n",
|
||||||
"\n",
|
"\n",
|
||||||
"Each process sends a random amount of integers to rank 0."
|
"Each process sends a random amount of integers to rank 0. The root process will not know the amount of data to be gathered from each rank in advance. We need an auxiliary gather to inform about the message size."
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
@ -898,6 +903,24 @@
|
|||||||
"After learning this material and the previous MPI notebook, you have a solid basis to start implementing sophisticated parallel algorithms using MPI."
|
"After learning this material and the previous MPI notebook, you have a solid basis to start implementing sophisticated parallel algorithms using MPI."
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"id": "843b40cd",
|
||||||
|
"metadata": {},
|
||||||
|
"source": [
|
||||||
|
"## Exercises"
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"id": "5c2045d9",
|
||||||
|
"metadata": {},
|
||||||
|
"source": [
|
||||||
|
"### Exercise 1\n",
|
||||||
|
"\n",
|
||||||
|
"Implement the parallel matrix-matrix multiplication (Algorithm 3) using MPI collectives instead of point-to-point communication. I.e., this is the same exercise as in previous notebook, but using different functions for communication."
|
||||||
|
]
|
||||||
|
},
|
||||||
{
|
{
|
||||||
"cell_type": "markdown",
|
"cell_type": "markdown",
|
||||||
"id": "5e8f6e6a",
|
"id": "5e8f6e6a",
|
||||||
|
|||||||