More changes in jacobi and MPI notebooks
@@ -65,7 +65,38 @@
|
||||
"jacobi_3_check(answer) = answer_checker(answer, \"c\")\n",
|
||||
"lh_check(answer) = answer_checker(answer, \"c\")\n",
|
||||
"sndrcv_check(answer) = answer_checker(answer,\"b\")\n",
|
||||
"function sndrcv_fix_answer()\n",
|
||||
"function partition_1d_answer(bool)\n",
|
||||
" bool || return\n",
|
||||
" msg = \"\"\"\n",
|
||||
"- We update N^2/P items per iteration\n",
|
||||
"- We need data from 2 neighbors (2 messages per iteration)\n",
|
||||
"- We communicate N items per message\n",
|
||||
"- Communication/computation ratio is 2N/(N^2/P) = 2P/N =O(P/N)\n",
|
||||
" \"\"\"\n",
|
||||
" println(msg)\n",
|
||||
"end\n",
|
||||
"function partition_2d_answer(bool)\n",
|
||||
" bool || return\n",
|
||||
" msg = \"\"\"\n",
|
||||
"- We update N^2/P items per iteration\n",
|
||||
"- We need data from 4 neighbors (4 messages per iteration)\n",
|
||||
"- We communicate N/sqrt(P) items per message\n",
|
||||
"- Communication/computation ratio is (4N/sqrt(P)/(N^2/P)= 4sqrt(P)/N =O(sqrt(P)/N)\n",
|
||||
" \"\"\"\n",
|
||||
" println(msg)\n",
|
||||
"end\n",
|
||||
"function partition_cyclic_answer(bool)\n",
|
||||
" bool || return\n",
|
||||
" msg = \"\"\"\n",
|
||||
"- We update N^2/P items\n",
|
||||
"- We need data from 4 neighbors (4 messages per iteration)\n",
|
||||
"- We communicate N^2/P items per message (the full data owned by the neighbor)\n",
|
||||
"- Communication/computation ratio is O(1)\n",
|
||||
" \"\"\"\n",
|
||||
"println(msg)\n",
|
||||
"end\n",
|
||||
"function sndrcv_fix_answer(bool)\n",
|
||||
" bool || return\n",
|
||||
" msg = \"\"\"\n",
|
||||
" One needs to carefully order the sends and the receives to avoid cyclic dependencies\n",
|
||||
" that might result in deadlocks. The actual implementation is left as an exercise. \n",
|
||||
@@ -73,7 +104,8 @@
|
||||
" println(msg)\n",
|
||||
"end\n",
|
||||
"jacobitest_check(answer) = answer_checker(answer,\"a\")\n",
|
||||
"function jacobitest_why()\n",
|
||||
"function jacobitest_why(bool)\n",
|
||||
" bool || return\n",
|
||||
" msg = \"\"\"\n",
|
||||
" The test will pass. The parallel implementation does exactly the same operations\n",
|
||||
" in exactly the same order than the sequential one. Thus, the result should be\n",
|
||||
@@ -83,7 +115,8 @@
|
||||
" println(msg)\n",
|
||||
"end\n",
|
||||
"gauss_seidel_2_check(answer) = answer_checker(answer,\"d\")\n",
|
||||
"function gauss_seidel_2_why()\n",
|
||||
"function gauss_seidel_2_why(bool)\n",
|
||||
" bool || return\n",
|
||||
" msg = \"\"\"\n",
|
||||
" All \"red\" cells can be updated in parallel as they only depend on the values of \"black\" cells.\n",
|
||||
" In order workds, we can update the \"red\" cells in any order whithout changing the result. They only\n",
|
||||
@@ -127,7 +160,7 @@
|
||||
"$u^{t+1}_i = \\dfrac{u^t_{i-1}+u^t_{i+1}}{2}$\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"This iterative is yet simple but shares fundamental challenges with many other algorithms used in scientific computing. This is why we are studying it here.\n"
|
||||
"This algorithm is yet simple but shares fundamental challenges with many other algorithms used in scientific computing. This is why we are studying it here.\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -137,7 +170,12 @@
|
||||
"source": [
|
||||
"### Serial implementation\n",
|
||||
"\n",
|
||||
"The following code implements the iterative scheme above for boundary conditions -1 and 1 on a grid with $n$ interior points."
|
||||
"The following code implements the iterative scheme above for boundary conditions -1 and 1 on a grid with $n$ interior points.\n",
|
||||
"\n",
|
||||
"<div class=\"alert alert-block alert-info\">\n",
|
||||
"<b>Note:</b> `u, u_new = u_new, u` is equivalent to `tmp = u; u = u_new; u_new = tmp`. I.e. we swap the arrays `u` and `u_new` are referring to. \n",
|
||||
"</div>\n",
|
||||
"\n"
|
||||
]
|
||||
},
|
||||
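{
"cell_type": "markdown",
"metadata": {},
"source": [
"A minimal sketch of such a serial implementation (illustrative only; the function name and exact details are assumptions, the notebook's own cell below is the reference):\n",
"\n",
"```julia\n",
"function jacobi_serial_sketch(n, niters)\n",
"    u = zeros(n + 2)        # n interior points plus the two boundary values\n",
"    u[1] = -1; u[end] = 1   # boundary conditions -1 and 1\n",
"    u_new = copy(u)\n",
"    for t in 1:niters\n",
"        for i in 2:n+1\n",
"            u_new[i] = 0.5 * (u[i-1] + u[i+1])\n",
"        end\n",
"        u, u_new = u_new, u  # swap the arrays u and u_new refer to\n",
"    end\n",
"    u\n",
"end\n",
"```"
]
},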
{
|
||||
@@ -203,7 +241,7 @@
|
||||
"id": "22fda724",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"In our version of the Jacobi method, we return after a given number of iterations. Other stopping criteria are possible. For instance, iterate until the maximum difference between u and u_new is below a tolerance:"
|
||||
"In our version of the Jacobi method, we return after a given number of iterations. Other stopping criteria are possible. For instance, iterate until the maximum difference between u and u_new (in absolute value) is below a tolerance."
|
||||
]
|
||||
},
|
||||
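{
"cell_type": "markdown",
"metadata": {},
"source": [
"A minimal sketch of this alternative stopping criterion (the notebook's own version may differ; names and defaults here are assumptions):\n",
"\n",
"```julia\n",
"function jacobi_with_tol_sketch(n, tol, maxiters = 10_000)\n",
"    u = zeros(n + 2)\n",
"    u[1] = -1; u[end] = 1\n",
"    u_new = copy(u)\n",
"    for t in 1:maxiters\n",
"        for i in 2:n+1\n",
"            u_new[i] = 0.5 * (u[i-1] + u[i+1])\n",
"        end\n",
"        # stop when the largest change (in absolute value) drops below tol\n",
"        maximum(abs(u_new[i] - u[i]) for i in 2:n+1) < tol && return u_new\n",
"        u, u_new = u_new, u\n",
"    end\n",
"    u\n",
"end\n",
"```"
]
},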
{
|
||||
@@ -252,7 +290,7 @@
|
||||
"id": "6e085701",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"However, we are not going to parallelize this more complex in this notebook (left as an exercise)."
|
||||
"However, we are not going to parallelize this more complex in this notebook (left as an exercise). The simpler one is already challenging enough to start with."
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -298,7 +336,7 @@
|
||||
"\n",
|
||||
"Remember that a sufficiently large grain size is needed to achieve performance in a distributed algorithm. For Jacobi, one could update each entry of vector `u_new` in a different process, but this would not be efficient. Instead, we use a parallelization strategy with a larger grain size that is analogous to the algorithm 3 we studied for the matrix-matrix multiplication:\n",
|
||||
"\n",
|
||||
"- Each worker updates a consecutive section of the array `u_new` \n",
|
||||
"- Data partition: each worker updates a consecutive section of the array `u_new` \n",
|
||||
"\n",
|
||||
"The following figure displays the data distribution over 3 processes."
|
||||
]
|
||||
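{
"cell_type": "markdown",
"metadata": {},
"source": [
"The same distribution can also be written down explicitly. A small illustrative snippet (assuming, as later in this notebook, that the number of entries is a multiple of the number of workers):\n",
"\n",
"```julia\n",
"n = 9                      # total number of entries to update\n",
"nworkers = 3\n",
"load = div(n, nworkers)    # entries per worker\n",
"for w in 1:nworkers\n",
"    rows = (1 + (w - 1) * load):(w * load)\n",
"    println(\"worker $w updates entries \", rows)\n",
"end\n",
"```"
]
},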
@@ -335,7 +373,7 @@
|
||||
"id": "ba4113af",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Note that an entry in the interior of the locally stored vector can be updated using local data only. For this one, communication is not needed."
|
||||
"Note that an entry in the interior of the locally stored vector can be updated using local data only. For updating this one, communication is not needed."
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -405,6 +443,10 @@
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Communication overhead\n",
|
||||
"\n",
|
||||
"Now that we understand which are the data dependencies, we can do the theoretical performance analysis.\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"- We update $N/P$ entries in each process at each iteration, where $N$ is the total length of the vector and $P$ the number of processes\n",
|
||||
"- Thus, computation complexity is $O(N/P)$\n",
|
||||
"- We need to get remote entries from 2 neighbors (2 messages per iteration)\n",
|
||||
@@ -420,7 +462,7 @@
|
||||
"source": [
|
||||
"### Ghost (aka halo) cells\n",
|
||||
"\n",
|
||||
"A usual way of implementing the Jacobi method and related algorithms is using so-called ghost cells. Ghost cells represent the missing data dependencies in the data owned by each process. After importing the appropriate values from the neighbor processes one can perform the usual sequential Jacobi update locally in the processes."
|
||||
"This parallel strategy is efficient according to the theoretical analysis. But how to implement it? A usual way of implementing the Jacobi method and related algorithms is using so-called ghost cells. Ghost cells represent the missing data dependencies in the data owned by each process. After importing the appropriate values from the neighbor processes one can perform the usual sequential Jacobi update locally in the processes. Cells with gray edges are ghost (or boundary) cells in the following figure. Note that we added one ghost cell at the front and end of the local array."
|
||||
]
|
||||
},
|
||||
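{
"cell_type": "markdown",
"metadata": {},
"source": [
"A conceptual sketch of this idea, before introducing any MPI (values and names are illustrative; the actual communication is discussed later in this notebook):\n",
"\n",
"```julia\n",
"load = 3                    # entries owned by this process\n",
"u_loc = zeros(load + 2)     # one extra ghost cell at each end\n",
"u_loc_new = copy(u_loc)\n",
"# pretend these two values were just received from the neighbor processes\n",
"u_loc[1]   = -0.5           # ghost cell: last value owned by the left neighbor\n",
"u_loc[end] =  0.5           # ghost cell: first value owned by the right neighbor\n",
"# with the ghost cells filled, the owned entries follow the usual sequential rule\n",
"for i in 2:load+1\n",
"    u_loc_new[i] = 0.5 * (u_loc[i-1] + u_loc[i+1])\n",
"end\n",
"```"
]
},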
{
|
||||
@@ -464,6 +506,14 @@
|
||||
"</div>"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "0a40846c",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"We are going to implement this algorithm with MPI later in this notebook."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "75f735a2",
|
||||
@@ -474,7 +524,7 @@
|
||||
"\n",
|
||||
"The Jacobi method studied so far was for a one dimensional Laplace equation. In real-world applications however, one solve equations in multiple dimensions. Typically 2D and 3D. The 2D and 3D cases are conceptually equivalent, but we will discuss the 2D case here for simplicity.\n",
|
||||
"\n",
|
||||
"Now the goal is to find the interior points of a 2D grid given the values at the boundary.\n",
|
||||
"Now, the goal is to find the interior points of a 2D grid given the values at the boundary.\n",
|
||||
"\n"
|
||||
]
|
||||
},
|
||||
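{
"cell_type": "markdown",
"metadata": {},
"source": [
"By analogy with the 1D rule, each interior point is presumably updated with the average of its four neighbors. A minimal 2D sketch (function name and boundary values are assumptions for illustration):\n",
"\n",
"```julia\n",
"function jacobi_2d_sketch(n, niters)\n",
"    u = zeros(n + 2, n + 2)   # n×n interior grid plus one boundary layer\n",
"    u[1, :] .= 1; u[end, :] .= 1; u[:, 1] .= 1; u[:, end] .= 1  # example boundary values\n",
"    u_new = copy(u)\n",
"    for t in 1:niters\n",
"        for j in 2:n+1, i in 2:n+1\n",
"            u_new[i, j] = 0.25 * (u[i-1, j] + u[i+1, j] + u[i, j-1] + u[i, j+1])\n",
"        end\n",
"        u, u_new = u_new, u\n",
"    end\n",
"    u\n",
"end\n",
"```"
]
},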
@@ -618,10 +668,17 @@
|
||||
"\n",
|
||||
"In 2d one has more flexibility in order to distribute the data over the processes. We consider these three alternatives:\n",
|
||||
"\n",
|
||||
"- 1D block partition (each worker handles a subset of consecutive rows and all columns)\n",
|
||||
"- 1D block row partition (each worker handles a subset of consecutive rows and all columns)\n",
|
||||
"- 2D block partition (each worker handles a subset of consecutive rows and columns)\n",
|
||||
"- 2D cyclic partition (each workers handles a subset of alternating rows ans columns)\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"<div class=\"alert alert-block alert-info\">\n",
|
||||
"<b>Note:</b> Other options are 1D block column partition and 1D cyclic (row or column) partition. They are not analyzed in this notebook since they are closely related to the other strategies. In Julia, in fact, it is often preferable to work with 1D block column partitions than with 1D block row partitions since matrices are stored in column major order.\n",
|
||||
"</div>\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"The three partition types are depicted in the following figure for 4 processes."
|
||||
]
|
||||
},
|
||||
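{
"cell_type": "markdown",
"metadata": {},
"source": [
"To make the three partitions concrete, a small illustrative snippet that computes which process owns a given grid entry (sizes and conventions are assumptions; it assumes the grid size is divisible by the relevant number of blocks):\n",
"\n",
"```julia\n",
"n = 8; P = 4                 # illustrative: 8×8 grid of entries, 4 processes\n",
"q = Int(sqrt(P)); b = div(n, q)\n",
"# 1D block row partition: consecutive rows go to the same process\n",
"owner_1d(i, j) = div(i - 1, div(n, P)) + 1\n",
"# 2D block partition: sqrt(P) x sqrt(P) blocks of consecutive rows and columns\n",
"owner_2d(i, j) = div(i - 1, b) * q + div(j - 1, b) + 1\n",
"# 2D cyclic partition: rows and columns are dealt out in round-robin fashion\n",
"owner_cyclic(i, j) = mod(i - 1, q) * q + mod(j - 1, q) + 1\n",
"owner_1d(5, 3), owner_2d(5, 3), owner_cyclic(5, 3)\n",
"```"
]
},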
@@ -675,13 +732,23 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "4f1e0942",
|
||||
"id": "1bc21623",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"- We update $N^2/P$ items per iteration\n",
|
||||
"- We need data from 2 neighbors (2 messages per iteration)\n",
|
||||
"- We communicate $N$ items per message\n",
|
||||
"- Communication/computation ratio is $2N/(N^2/P) = 2P/N =O(P/N)$"
|
||||
"<div class=\"alert alert-block alert-success\">\n",
|
||||
"<b>Question:</b> Compute the complexity of the communication over computation ratio for this data partition.\n",
|
||||
"</div>"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "d01f8ce8",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"uncover = false # Change to true to see the answer\n",
|
||||
"partition_1d_answer(uncover)"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -709,13 +776,23 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "abb6520c",
|
||||
"id": "09bd28ca",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"- We update $N^2/P$ items per iteration\n",
|
||||
"- We need data from 4 neighbors (4 messages per iteration)\n",
|
||||
"- We communicate $N/\\sqrt{P}$ items per message\n",
|
||||
"- Communication/computation ratio is $ (4N/\\sqrt{P})/(N^2/P)= 4\\sqrt{P}/N =O(\\sqrt{P}/N)$"
|
||||
"<div class=\"alert alert-block alert-success\">\n",
|
||||
"<b>Question:</b> Compute the complexity of the communication over computation ratio for this data partition.\n",
|
||||
"</div>"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "e94a1ea6",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"uncover = false\n",
|
||||
"partition_2d_answer(uncover)"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -743,13 +820,23 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "9cd32923",
|
||||
"id": "b373e9ce",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"- We update $N^2/P$ items\n",
|
||||
"- We need data from 4 neighbors (4 messages per iteration)\n",
|
||||
"- We communicate $N^2/P$ items per message (the full data owned by the neighbor)\n",
|
||||
"- Communication/computation ratio is $O(1)$"
|
||||
"<div class=\"alert alert-block alert-success\">\n",
|
||||
"<b>Question:</b> Compute the complexity of the communication over computation ratio for this data partition.\n",
|
||||
"</div>"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "10fab825",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"uncover = false\n",
|
||||
"partition_cyclic_answer(uncover)"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -897,7 +984,7 @@
|
||||
"source": [
|
||||
"### Backwards Gauss-Seidel\n",
|
||||
"\n",
|
||||
"In addition, the the result of the Gauss-Seidel method depends on the order of the steps in the loop over `i`. This is another symptom that tells you that this loop is hard to parallelize. For instance, if you do the iterations over `i` by reversing the loop order, you get another method called *backward* Gauss-Seidel."
|
||||
"In addition, the the result of the Gauss-Seidel method depends on the order of the steps in the loop over `i`. This is another symptom that tells you that this loop is hard (or impossible) to parallelize. For instance, if you do the iterations over `i` by reversing the loop order, you get another method called *backward* Gauss-Seidel."
|
||||
]
|
||||
},
|
||||
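{
"cell_type": "markdown",
"metadata": {},
"source": [
"A minimal sketch contrasting the two loop orders (in-place update on a vector whose first and last entries hold the boundary values; the notebook's own implementation may differ):\n",
"\n",
"```julia\n",
"function gauss_seidel_sketch!(u, niters; backward = false)\n",
"    n = length(u) - 2\n",
"    order = backward ? reverse(2:n+1) : (2:n+1)\n",
"    for t in 1:niters\n",
"        for i in order\n",
"            u[i] = 0.5 * (u[i-1] + u[i+1])  # uses neighbors already updated in this sweep\n",
"        end\n",
"    end\n",
"    u\n",
"end\n",
"```"
]
},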
{
|
||||
@@ -925,7 +1012,7 @@
|
||||
"id": "63c4ce1f",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Both Jacobi and *forward* and *backward* Gauss-Seidel converge to the same result, but they lead to slightly different values during the iterations. Check it with the following cells. First, run it with one `niters=1` and then with `niters=100`."
|
||||
"Both Jacobi and *forward* and *backward* Gauss-Seidel converge to the same result, but they lead to slightly different values during the iterations. Check it with the following cells. First, run the methods with `niters=1` and then with `niters=100`."
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -967,7 +1054,7 @@
|
||||
"source": [
|
||||
"### Red-black Gauss-Seidel\n",
|
||||
"\n",
|
||||
"There is another version called *red-black* Gauss-Seidel. This uses a very clever order for the steps in the loop over `i`. It does this loop in two phases. First, one updates the entries with even index, and then the entries with odd index."
|
||||
"There is yet another version called *red-black* Gauss-Seidel. This uses a very clever order for the steps in the loop over `i`. It does this loop in two phases. First, one updates the entries with even index, and then the entries with odd index."
|
||||
]
|
||||
},
|
||||
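{
"cell_type": "markdown",
"metadata": {},
"source": [
"A minimal sketch of the two-phase loop (interior entry `i` lives at position `i+1` of the vector; which color is called \"red\" and which \"black\" is an assumption here):\n",
"\n",
"```julia\n",
"function gauss_seidel_red_black_sketch!(u, niters)\n",
"    n = length(u) - 2   # interior entries; u[1] and u[end] hold the boundary values\n",
"    for t in 1:niters\n",
"        for i in 2:2:n  # phase 1: interior entries with even index\n",
"            u[i+1] = 0.5 * (u[i] + u[i+2])\n",
"        end\n",
"        for i in 1:2:n  # phase 2: interior entries with odd index\n",
"            u[i+1] = 0.5 * (u[i] + u[i+2])\n",
"        end\n",
"    end\n",
"    u\n",
"end\n",
"```"
]
},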
{
|
||||
@@ -1083,7 +1170,18 @@
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"gauss_seidel_2_why()"
|
||||
"uncover = false\n",
|
||||
"gauss_seidel_2_why(uncover)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "41e90d60",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Changing an algorithm to make it parallel\n",
|
||||
"\n",
|
||||
"Note that the original method (the forward Gauss-Seidel) cannot be parallelized, we needed to modify the method slightly with the red-black ordering in order to create a method that can be parallelized. However the method we parallelized is not equivalent to the original one. This happens in practice in many other applications. An algorithm might be impossible to parallelize and one needs to modify it to exploit parallelism. However, one needs to be careful when modifying the algorithm to not destroy the algorithmic properties of the original one. In this case, we succeeded. The red-black Gauss-Seidel converges as fast (if not faster) than the original forward Gauss-Seidel. However, this is not true in general. There is often a trade-off between the algorithmic properties and how parallelizable is the algorithm."
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -1093,7 +1191,7 @@
|
||||
"source": [
|
||||
"## MPI implementation\n",
|
||||
"\n",
|
||||
"We consider the implementation of the Jacobi method using MPI. We will consider the 1D version for simplicity.\n",
|
||||
"In the last part of this notebook, we consider the implementation of the Jacobi method using MPI. We will consider the 1D version for simplicity.\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"<div class=\"alert alert-block alert-info\">\n",
|
||||
@@ -1154,7 +1252,7 @@
|
||||
"source": [
|
||||
"### Initialization\n",
|
||||
"\n",
|
||||
"Let us start with function `init`. This is its implementation:"
|
||||
"Let us start with function `init`."
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -1191,7 +1289,7 @@
|
||||
"id": "1b9e75d8",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"This function crates and initializes the vector `u` and the auxiliary vector `u_new` and fills in the boundary values. Note that we are not creating the full arrays like in the sequential case. We are only creating the parts to be managed by the current rank. To this end, we start by computing the number of entries to be updated in this rank, i.e., variable `load`. We have assumed that `n` is a multiple of the number of ranks for simplicity. If this is not the case, we stop the computation with `MPI.Abort`. Note that we are allocating two extra elements in `u` (and `u_new`) for the ghost cells or boundary conditions. The following figure displays the arrays created for `n==9` and `nranks==3` (thus `load == 3`). Note that the first and last elements of the arrays are displayed with gray edges denoting that they are the extra elements allocated for ghost cells or boundary conditions."
|
||||
"This function crates and initializes the vector `u` and the auxiliary vector `u_new` and fills in the boundary values. Note that we are not creating the full arrays like in the sequential case. We are only creating the parts to be managed by the current rank. To this end, we start by computing the number of entries to be updated in this rank, i.e., variable `load`. We have assumed that `n` is a multiple of the number of ranks for simplicity. If this is not the case, we stop the computation with `MPI.Abort`. Note that we are allocating two extra elements in `u` (and `u_new`) for the ghost cells and boundary conditions. The following figure displays the arrays created for `n==9` and `nranks==3` (thus `load == 3`). Note that the first and last elements of the arrays are displayed with gray edges denoting that they are the extra elements allocated for ghost cells or boundary conditions."
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -1353,7 +1451,8 @@
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"sndrcv_fix_answer()"
|
||||
"uncover = false\n",
|
||||
"sndrcv_fix_answer(uncover)"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -1363,7 +1462,7 @@
|
||||
"source": [
|
||||
"### Local computation\n",
|
||||
"\n",
|
||||
"Once the ghost values have the right values, we can perform the Jacobi update locally at each process. This is done in function `local_update!`. Note that here we only update the data *owned* by the current MPI rank, i.e. we do not modify the ghost values. There is no need to modify the ghost values since they will updated by another rank, i.e. the rank that own the value. In the code this is reflected in the loop over `i`. We do not visit the first nor the last entry in `u_new`."
|
||||
"Once the ghost cells have the right values, we can perform the Jacobi update locally at each process. This is done in function `local_update!`. Note that here we only update the data *owned* by the current MPI rank, i.e. we do not modify the ghost values. There is no need to modify the ghost values since they will updated by another rank. In the code, this is reflected in the loop over `i`. We do not visit the first nor the last entry in `u_new`."
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -1390,7 +1489,7 @@
|
||||
"source": [
|
||||
"### Running the code\n",
|
||||
"\n",
|
||||
"Not let us put all pieces together and run the code. If not done yet, install MPI."
|
||||
"Let us put all pieces together and run the code. If not done yet, install MPI."
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -1408,7 +1507,7 @@
|
||||
"id": "c966375a",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"The following cells includes all previous code snippets into a final one. Note that we are eventually calling function `jacobi_mpi` and showing the result vector `u`. Run the following code for 1 MPI rank, then for 2 and 3 MPI ranks. Look into the values of `u`. Does it make sense?"
|
||||
"The following cells includes all previous code snippets into a final one. We are eventually calling function `jacobi_mpi` and showing the result vector `u`. Run the following code for 1 MPI rank, then for 2 and 3 MPI ranks. Look into the values of `u`. Does it make sense?"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -1442,7 +1541,7 @@
|
||||
"source": [
|
||||
"### Checking the result\n",
|
||||
"\n",
|
||||
"Checking the result visually is not enough in general. To check the parallel implementation we want to compare the result against the sequential implementation. The way we do the computations (either in parallel or sequential) should not affect the result. However, how can we compare the sequential and the parallel result? The parallel version gives a distributed vector. We cannot compare this one directly with the result of the sequential function. A possible solution is to gather all the pieces of the parallel result in a single rank and compare there against the parallel implementation.\n",
|
||||
"Checking the result visually is not enough in general. To check the parallel implementation we want to compare it against the sequential implementation. However, how can we compare the sequential and the parallel result? The parallel version gives a distributed vector. We cannot compare this one directly with the result of the sequential function. A possible solution is to gather all the pieces of the parallel result in a single rank and compare there against the parallel implementation.\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"The following function gather the distributed vector in rank 0."
|
||||
@@ -1592,58 +1691,6 @@
|
||||
"run(`$(mpiexec()) -np 3 julia --project=. -e $code`);"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "73cd4d73",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Note that we have used function `isapprox` to compare the results. This function checks if two values are the same within machine precision. Using `==` is generally discouraged when working with floating point numbers as they can be affected by rounding-off errors."
|
||||
]
|
||||
},
|
||||
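{
"cell_type": "markdown",
"metadata": {},
"source": [
"A tiny illustration of why `==` is fragile for floating-point results:\n",
"\n",
"```julia\n",
"x = 0.1 + 0.2\n",
"x == 0.3            # false: round-off error makes the binary values differ slightly\n",
"isapprox(x, 0.3)    # true: equal up to a small relative tolerance\n",
"```"
]
},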
{
|
||||
"cell_type": "markdown",
|
||||
"id": "d73c838c",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"<div class=\"alert alert-block alert-success\">\n",
|
||||
"<b>Question:</b> What happens if we use `u_root == u_seq` to compare the parallel and the sequential result?\n",
|
||||
"</div>\n",
|
||||
"\n",
|
||||
" a) The test will still pass.\n",
|
||||
" b) The test will fail due to rounding-off errors.\n",
|
||||
" c) The test might pass or fail depending on `n`.\n",
|
||||
" d) The test might pass or fail depending on the number of MPI ranks."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "cd2427f1",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"answer = \"x\" # replace x with a, b, c or d\n",
|
||||
"jacobitest_check(answer)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "790e7064",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Run cell below for an explanation of the correct answer."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "72ed2aa1",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"jacobitest_why()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "c9aa2901",
|
||||
@@ -1651,7 +1698,7 @@
|
||||
"source": [
|
||||
"## Latency hiding\n",
|
||||
"\n",
|
||||
"Can our implementation above be improved? Note that we only need communications to update the values at the boundary of the portion owned by each process. The other values (the one in green in the figure below) can be updated without communications. This provides the opportunity of overlapping the computation of the interior values (green cells in the figure) with the communication of the ghost values. This technique is called latency hiding, since we are hiding communication latency by overlapping it with computation that we need to do anyway. The actual implementation is left as an exercise (see Exercise 1)."
|
||||
"We have now a correct parallel implementation. But. can our implementation above be improved? Note that we only need communications to update the values at the boundary of the portion owned by each process. The other values (the one in green in the figure below) can be updated without communications. This provides the opportunity of overlapping the computation of the interior values (green cells in the figure) with the communication of the ghost values. This technique is called latency hiding, since we are hiding communication latency by overlapping it with computation that we need to do anyway. The actual implementation is left as an exercise (see Exercise 1)."
|
||||
]
|
||||
},
|
||||
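{
"cell_type": "markdown",
"metadata": {},
"source": [
"An outline of the overlap pattern, not a full solution to the exercise (it assumes the keyword-argument API of recent MPI.jl versions and the same local array layout as before):\n",
"\n",
"```julia\n",
"using MPI\n",
"\n",
"function update_with_latency_hiding!(u_new, u, comm)\n",
"    rank = MPI.Comm_rank(comm)\n",
"    nranks = MPI.Comm_size(comm)\n",
"    reqs = MPI.Request[]\n",
"    # 1. start the non-blocking ghost exchange with the neighbors (if they exist)\n",
"    if rank != 0\n",
"        push!(reqs, MPI.Isend(u[2:2], comm; dest = rank - 1, tag = 0))\n",
"        push!(reqs, MPI.Irecv!(view(u, 1:1), comm; source = rank - 1, tag = 0))\n",
"    end\n",
"    if rank != nranks - 1\n",
"        push!(reqs, MPI.Isend(u[end-1:end-1], comm; dest = rank + 1, tag = 0))\n",
"        push!(reqs, MPI.Irecv!(view(u, length(u):length(u)), comm; source = rank + 1, tag = 0))\n",
"    end\n",
"    # 2. update the interior entries, which do not need the ghost values\n",
"    for i in 3:length(u)-2\n",
"        u_new[i] = 0.5 * (u[i-1] + u[i+1])\n",
"    end\n",
"    # 3. wait for the exchange, then update the two entries next to the ghost cells\n",
"    MPI.Waitall(reqs)\n",
"    u_new[2] = 0.5 * (u[1] + u[3])\n",
"    u_new[end-1] = 0.5 * (u[end-2] + u[end])\n",
"    u_new\n",
"end\n",
"```"
]
},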
{
|
||||