From f23cd2ee39a8629226044c71b708c94a61fed680 Mon Sep 17 00:00:00 2001 From: "Documenter.jl" Date: Wed, 11 Sep 2024 08:58:17 +0000 Subject: [PATCH] build based on cd63aed --- dev/.documenter-siteinfo.json | 2 +- dev/LEQ/index.html | 2 +- dev/asp/index.html | 2 +- dev/getting_started_with_julia/index.html | 2 +- dev/index.html | 2 +- dev/jacobi_2D/index.html | 2 +- dev/jacobi_method.ipynb | 231 ++++++++++-------- dev/jacobi_method/index.html | 2 +- dev/jacobi_method_src/index.html | 260 ++++++++++++--------- dev/julia_async/index.html | 2 +- dev/julia_basics/index.html | 2 +- dev/julia_distributed/index.html | 2 +- dev/julia_intro/index.html | 2 +- dev/julia_jacobi/index.html | 2 +- dev/julia_mpi.ipynb | 21 +- dev/julia_mpi/index.html | 2 +- dev/julia_mpi_src/index.html | 21 +- dev/julia_tutorial/index.html | 2 +- dev/matrix_matrix/index.html | 2 +- dev/mpi_collectives.ipynb | 33 ++- dev/mpi_collectives/index.html | 2 +- dev/mpi_collectives_src/index.html | 35 ++- dev/mpi_tutorial/index.html | 2 +- dev/notebook-hello/index.html | 2 +- dev/pdes/index.html | 2 +- dev/solutions/index.html | 2 +- dev/solutions_for_all_notebooks/index.html | 2 +- dev/tsp/index.html | 2 +- 28 files changed, 387 insertions(+), 258 deletions(-) diff --git a/dev/.documenter-siteinfo.json b/dev/.documenter-siteinfo.json index f3fe01c..56f2561 100644 --- a/dev/.documenter-siteinfo.json +++ b/dev/.documenter-siteinfo.json @@ -1 +1 @@ -{"documenter":{"julia_version":"1.10.5","generation_timestamp":"2024-09-09T15:08:46","documenter_version":"1.7.0"}} \ No newline at end of file +{"documenter":{"julia_version":"1.10.5","generation_timestamp":"2024-09-11T08:58:10","documenter_version":"1.7.0"}} \ No newline at end of file diff --git a/dev/LEQ/index.html b/dev/LEQ/index.html index fe1fbb4..31f95c0 100644 --- a/dev/LEQ/index.html +++ b/dev/LEQ/index.html @@ -14,4 +14,4 @@ var myIframe = document.getElementById("notebook"); iFrameResize({log:true}, myIframe); }); - + diff --git a/dev/asp/index.html b/dev/asp/index.html index 27abea5..1289864 100644 --- a/dev/asp/index.html +++ b/dev/asp/index.html @@ -14,4 +14,4 @@ var myIframe = document.getElementById("notebook"); iFrameResize({log:true}, myIframe); }); - + diff --git a/dev/getting_started_with_julia/index.html b/dev/getting_started_with_julia/index.html index 306b81a..09d982f 100644 --- a/dev/getting_started_with_julia/index.html +++ b/dev/getting_started_with_julia/index.html @@ -15,4 +15,4 @@ DataFrames = "a93c6f00-e57d-5684-b7b6-d8193f3e46c0" MPI = "da04e1cc-30fd-572f-bb4f-1f8673147195"

Copy the contents of the previous code block into a file called Project.toml and place it in an empty folder named newproject. It is important that the file is named Project.toml. You can create a new folder from the REPL with

julia> mkdir("newproject")

To install all the packages registered in this file, you need to activate the folder containing your Project.toml file

(@v1.10) pkg> activate newproject

and then instantiate it

(newproject) pkg> instantiate

The instantiate command will download and install all the listed packages and their dependencies in a single step.

Getting help in package mode

You can get help about a particular package command by typing help in front of it

(@v1.10) pkg> help activate

You can get an overview of all package commands by typing help alone

(@v1.10) pkg> help

Package operations in Julia code

In some situations, it is necessary to use package commands from Julia code, e.g., to automate the installation and deployment of Julia applications. This can be done using the Pkg package. For instance

julia> using Pkg
 julia> Pkg.status()

is equivalent to calling status in package mode.

(@v1.10) pkg> status
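
Most package-mode commands have a Pkg function counterpart. For example, the environment prepared earlier in this tutorial could also be set up programmatically. This is a minimal sketch using the standard Pkg API (newproject is the folder created above):

julia> using Pkg
julia> Pkg.activate("newproject")
julia> Pkg.instantiate()
julia> Pkg.status()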

Creating your own package

In many situations, it is useful to create your own package, for instance, when working with a large code base, when you want to reduce compilation latency using Revise.jl, or if you want to eventually register your package and share it with others.

The simplest way of generating a package (called MyPackage) is as follows. Open Julia, go to package mode, and type

(@v1.10) pkg> generate MyPackage

This will create a minimal package consisting of a new folder MyPackage with two files: Project.toml and src/MyPackage.jl.
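
For reference, the generated src/MyPackage.jl defines the package module with a small example function. Its contents are roughly as follows (a sketch of the typical output of generate; the exact text may vary between Julia versions):

module MyPackage

greet() = print("Hello World!")

end # module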

Tip

This approach only generates a very minimal package. To create a more sophisticated package skeleton (including unit testing, code coverage, a README file, a license, etc.) use PkgTemplates.jl or BestieTemplate.jl. The latter is developed in Amsterdam at the Netherlands eScience Center.

You can add dependencies to the package by activating the MyPackage folder in package mode and adding new dependencies as always:

(@v1.10) pkg> activate MyPackage
 (MyPackage) pkg> add MPI

This will add MPI to your package dependencies.
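
After this step, MyPackage/Project.toml gains a [deps] section listing the new dependency. A sketch of the resulting file (the name, uuid, authors, and version fields are the ones written by generate, left here as placeholders; the MPI UUID is the one used earlier in this tutorial):

name = "MyPackage"
uuid = "..."
authors = ["..."]
version = "0.1.0"

[deps]
MPI = "da04e1cc-30fd-572f-bb4f-1f8673147195"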

Using your own package

To use your package you first need to add it to a package environment of your choice. This is done by changing to package mode and typing develop followed by the path to the folder containing the package. For instance:

(@v1.10) pkg> develop MyPackage
Note

You do not need to "develop" your package if you activated the package folder MyPackage.

Now, we can go back to standard Julia mode and use it as any other package:

using MyPackage
-MyPackage.greet()

Here, we just called the example function defined in MyPackage/src/MyPackage.jl.

Conclusion

We have learned the basics of how to work with Julia, including how to run serial and parallel code, and how to manage, create, and use Julia packages. This knowledge will allow you to follow the course effectively! If you want to further dig into the topics we have covered here, you can take a look at the following links:

+MyPackage.greet()

Here, we just called the example function defined in MyPackage/src/MyPackage.jl.

Conclusion

We have learned the basics of how to work with Julia, including how to run serial and parallel code, and how to manage, create, and use Julia packages. This knowledge will allow you to follow the course effectively! If you want to further dig into the topics we have covered here, you can take a look at the following links:

diff --git a/dev/index.html b/dev/index.html index b526f67..6e3cd17 100644 --- a/dev/index.html +++ b/dev/index.html @@ -2,4 +2,4 @@ Home · XM_40017

Programming Large-Scale Parallel Systems (XM_40017)

Welcome to the interactive lecture notes of the Programming Large-Scale Parallel Systems course at VU Amsterdam!

What

This page contains part of the course material of the Programming Large-Scale Parallel Systems course at VU Amsterdam. We provide several lecture notes in Jupyter notebook format, which will help you learn how to design, analyze, and program parallel algorithms on multi-node computing systems. Further information about the course can be found in the study guide (click here) and on our Canvas page (for registered students).

Note

Material will be added incrementally to the website as the course advances.

Warning

This page will eventually contain only a part of the course material. The rest will be available on Canvas. In particular, the material in this public webpage does not fully cover all topics in the final exam.

How to use this page

You have two main ways of studying the notebooks:

  • Download the notebooks and run them locally on your computer (recommended). At each notebook page you will find a green box with links to download the notebook.
  • You also have the static version of the notebooks displayed in this webpage for quick reference.

How to run the notebooks locally

To run a notebook locally, follow these steps:

  • Install Julia (if not done already). More information in Getting started.
  • Download the notebook.
  • Launch Julia. More information in Getting started.
  • Execute these commands in the Julia command line:
julia> using Pkg
 julia> Pkg.add("IJulia")
 julia> using IJulia
-julia> notebook()
  • These commands will open Jupyter in your web browser. Navigate in Jupyter to the notebook file you have downloaded and open it (see the tip below).
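
These commands can be adjusted if needed. For instance, to open Jupyter directly in the folder where you downloaded the notebook, you can pass that folder to notebook (dir is a keyword of IJulia's notebook function; replace the path with your own):

julia> using IJulia
julia> notebook(dir="path/to/downloaded/notebooks")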

Authors

This material was created by Francesc Verdugo with the help of Gelieza Kötterheinrich. Some of the notebooks are based on the course slides by Henri Bal.

License

All material on this page that is original to this course may be used under a CC BY 4.0 license.

Acknowledgment

This page was created with the support of the Faculty of Science of Vrije Universiteit Amsterdam in the framework of the project "Interactive lecture notes and exercises for the Programming Large-Scale Parallel Systems course" funded by the "Innovation budget BETA 2023 Studievoorschotmiddelen (SVM) towards Activated Blended Learning".

+julia> notebook()

Authors

This material was created by Francesc Verdugo with the help of Gelieza Kötterheinrich. Some of the notebooks are based on the course slides by Henri Bal.

License

All material on this page that is original to this course may be used under a CC BY 4.0 license.

Acknowledgment

This page was created with the support of the Faculty of Science of Vrije Universiteit Amsterdam in the framework of the project "Interactive lecture notes and exercises for the Programming Large-Scale Parallel Systems course" funded by the "Innovation budget BETA 2023 Studievoorschotmiddelen (SVM) towards Activated Blended Learning".

diff --git a/dev/jacobi_2D/index.html b/dev/jacobi_2D/index.html index d696536..cce62a6 100644 --- a/dev/jacobi_2D/index.html +++ b/dev/jacobi_2D/index.html @@ -14,4 +14,4 @@ var myIframe = document.getElementById("notebook"); iFrameResize({log:true}, myIframe); }); - + diff --git a/dev/jacobi_method.ipynb b/dev/jacobi_method.ipynb index 283b2a0..eb6a5e8 100644 --- a/dev/jacobi_method.ipynb +++ b/dev/jacobi_method.ipynb @@ -65,7 +65,38 @@ "jacobi_3_check(answer) = answer_checker(answer, \"c\")\n", "lh_check(answer) = answer_checker(answer, \"c\")\n", "sndrcv_check(answer) = answer_checker(answer,\"b\")\n", - "function sndrcv_fix_answer()\n", + "function partition_1d_answer(bool)\n", + " bool || return\n", + " msg = \"\"\"\n", + "- We update N^2/P items per iteration\n", + "- We need data from 2 neighbors (2 messages per iteration)\n", + "- We communicate N items per message\n", + "- Communication/computation ratio is 2N/(N^2/P) = 2P/N =O(P/N)\n", + " \"\"\"\n", + " println(msg)\n", + "end\n", + "function partition_2d_answer(bool)\n", + " bool || return\n", + " msg = \"\"\"\n", + "- We update N^2/P items per iteration\n", + "- We need data from 4 neighbors (4 messages per iteration)\n", + "- We communicate N/sqrt(P) items per message\n", + "- Communication/computation ratio is (4N/sqrt(P)/(N^2/P)= 4sqrt(P)/N =O(sqrt(P)/N)\n", + " \"\"\"\n", + " println(msg)\n", + "end\n", + "function partition_cyclic_answer(bool)\n", + " bool || return\n", + " msg = \"\"\"\n", + "- We update N^2/P items\n", + "- We need data from 4 neighbors (4 messages per iteration)\n", + "- We communicate N^2/P items per message (the full data owned by the neighbor)\n", + "- Communication/computation ratio is O(1)\n", + " \"\"\"\n", + "println(msg)\n", + "end\n", + "function sndrcv_fix_answer(bool)\n", + " bool || return\n", " msg = \"\"\"\n", " One needs to carefully order the sends and the receives to avoid cyclic dependencies\n", " that might result in deadlocks. The actual implementation is left as an exercise. \n", @@ -73,7 +104,8 @@ " println(msg)\n", "end\n", "jacobitest_check(answer) = answer_checker(answer,\"a\")\n", - "function jacobitest_why()\n", + "function jacobitest_why(bool)\n", + " bool || return\n", " msg = \"\"\"\n", " The test will pass. The parallel implementation does exactly the same operations\n", " in exactly the same order than the sequential one. Thus, the result should be\n", @@ -83,7 +115,8 @@ " println(msg)\n", "end\n", "gauss_seidel_2_check(answer) = answer_checker(answer,\"d\")\n", - "function gauss_seidel_2_why()\n", + "function gauss_seidel_2_why(bool)\n", + " bool || return\n", " msg = \"\"\"\n", " All \"red\" cells can be updated in parallel as they only depend on the values of \"black\" cells.\n", " In order workds, we can update the \"red\" cells in any order whithout changing the result. They only\n", @@ -127,7 +160,7 @@ "$u^{t+1}_i = \\dfrac{u^t_{i-1}+u^t_{i+1}}{2}$\n", "\n", "\n", - "This iterative is yet simple but shares fundamental challenges with many other algorithms used in scientific computing. This is why we are studying it here.\n" + "This algorithm is yet simple but shares fundamental challenges with many other algorithms used in scientific computing. This is why we are studying it here.\n" ] }, { @@ -137,7 +170,12 @@ "source": [ "### Serial implementation\n", "\n", - "The following code implements the iterative scheme above for boundary conditions -1 and 1 on a grid with $n$ interior points." 
+ "The following code implements the iterative scheme above for boundary conditions -1 and 1 on a grid with $n$ interior points.\n", + "\n", + "
\n", + "Note: `u, u_new = u_new, u` is equivalent to `tmp = u; u = u_new; u_new = tmp`. I.e. we swap the arrays `u` and `u_new` are referring to. \n", + "
\n", + "\n" ] }, { @@ -203,7 +241,7 @@ "id": "22fda724", "metadata": {}, "source": [ - "In our version of the Jacobi method, we return after a given number of iterations. Other stopping criteria are possible. For instance, iterate until the maximum difference between u and u_new is below a tolerance:" + "In our version of the Jacobi method, we return after a given number of iterations. Other stopping criteria are possible. For instance, iterate until the maximum difference between u and u_new (in absolute value) is below a tolerance." ] }, { @@ -252,7 +290,7 @@ "id": "6e085701", "metadata": {}, "source": [ - "However, we are not going to parallelize this more complex in this notebook (left as an exercise)." + "However, we are not going to parallelize this more complex in this notebook (left as an exercise). The simpler one is already challenging enough to start with." ] }, { @@ -298,7 +336,7 @@ "\n", "Remember that a sufficiently large grain size is needed to achieve performance in a distributed algorithm. For Jacobi, one could update each entry of vector `u_new` in a different process, but this would not be efficient. Instead, we use a parallelization strategy with a larger grain size that is analogous to the algorithm 3 we studied for the matrix-matrix multiplication:\n", "\n", - "- Each worker updates a consecutive section of the array `u_new` \n", + "- Data partition: each worker updates a consecutive section of the array `u_new` \n", "\n", "The following figure displays the data distribution over 3 processes." ] @@ -335,7 +373,7 @@ "id": "ba4113af", "metadata": {}, "source": [ - "Note that an entry in the interior of the locally stored vector can be updated using local data only. For this one, communication is not needed." + "Note that an entry in the interior of the locally stored vector can be updated using local data only. For updating this one, communication is not needed." ] }, { @@ -405,6 +443,10 @@ "metadata": {}, "source": [ "### Communication overhead\n", + "\n", + "Now that we understand which are the data dependencies, we can do the theoretical performance analysis.\n", + "\n", + "\n", "- We update $N/P$ entries in each process at each iteration, where $N$ is the total length of the vector and $P$ the number of processes\n", "- Thus, computation complexity is $O(N/P)$\n", "- We need to get remote entries from 2 neighbors (2 messages per iteration)\n", @@ -420,7 +462,7 @@ "source": [ "### Ghost (aka halo) cells\n", "\n", - "A usual way of implementing the Jacobi method and related algorithms is using so-called ghost cells. Ghost cells represent the missing data dependencies in the data owned by each process. After importing the appropriate values from the neighbor processes one can perform the usual sequential Jacobi update locally in the processes." + "This parallel strategy is efficient according to the theoretical analysis. But how to implement it? A usual way of implementing the Jacobi method and related algorithms is using so-called ghost cells. Ghost cells represent the missing data dependencies in the data owned by each process. After importing the appropriate values from the neighbor processes one can perform the usual sequential Jacobi update locally in the processes. Cells with gray edges are ghost (or boundary) cells in the following figure. Note that we added one ghost cell at the front and end of the local array." 
] }, { @@ -464,6 +506,14 @@ "" ] }, + { + "cell_type": "markdown", + "id": "0a40846c", + "metadata": {}, + "source": [ + "We are going to implement this algorithm with MPI later in this notebook." + ] + }, { "cell_type": "markdown", "id": "75f735a2", @@ -474,7 +524,7 @@ "\n", "The Jacobi method studied so far was for a one dimensional Laplace equation. In real-world applications however, one solve equations in multiple dimensions. Typically 2D and 3D. The 2D and 3D cases are conceptually equivalent, but we will discuss the 2D case here for simplicity.\n", "\n", - "Now the goal is to find the interior points of a 2D grid given the values at the boundary.\n", + "Now, the goal is to find the interior points of a 2D grid given the values at the boundary.\n", "\n" ] }, @@ -618,10 +668,17 @@ "\n", "In 2d one has more flexibility in order to distribute the data over the processes. We consider these three alternatives:\n", "\n", - "- 1D block partition (each worker handles a subset of consecutive rows and all columns)\n", + "- 1D block row partition (each worker handles a subset of consecutive rows and all columns)\n", "- 2D block partition (each worker handles a subset of consecutive rows and columns)\n", "- 2D cyclic partition (each workers handles a subset of alternating rows ans columns)\n", "\n", + "\n", + "\n", + "
\n", + "Note: Other options are 1D block column partition and 1D cyclic (row or column) partition. They are not analyzed in this notebook since they are closely related to the other strategies. In Julia, in fact, it is often preferable to work with 1D block column partitions than with 1D block row partitions since matrices are stored in column major order.\n", + "
\n", + "\n", + "\n", "The three partition types are depicted in the following figure for 4 processes." ] }, @@ -675,13 +732,23 @@ }, { "cell_type": "markdown", - "id": "4f1e0942", + "id": "1bc21623", "metadata": {}, "source": [ - "- We update $N^2/P$ items per iteration\n", - "- We need data from 2 neighbors (2 messages per iteration)\n", - "- We communicate $N$ items per message\n", - "- Communication/computation ratio is $2N/(N^2/P) = 2P/N =O(P/N)$" + "
\n", + "Question: Compute the complexity of the communication over computation ratio for this data partition.\n", + "
" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "d01f8ce8", + "metadata": {}, + "outputs": [], + "source": [ + "uncover = false # Change to true to see the answer\n", + "partition_1d_answer(uncover)" ] }, { @@ -709,13 +776,23 @@ }, { "cell_type": "markdown", - "id": "abb6520c", + "id": "09bd28ca", "metadata": {}, "source": [ - "- We update $N^2/P$ items per iteration\n", - "- We need data from 4 neighbors (4 messages per iteration)\n", - "- We communicate $N/\\sqrt{P}$ items per message\n", - "- Communication/computation ratio is $ (4N/\\sqrt{P})/(N^2/P)= 4\\sqrt{P}/N =O(\\sqrt{P}/N)$" + "
\n", + "Question: Compute the complexity of the communication over computation ratio for this data partition.\n", + "
" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "e94a1ea6", + "metadata": {}, + "outputs": [], + "source": [ + "uncover = false\n", + "partition_2d_answer(uncover)" ] }, { @@ -743,13 +820,23 @@ }, { "cell_type": "markdown", - "id": "9cd32923", + "id": "b373e9ce", "metadata": {}, "source": [ - "- We update $N^2/P$ items\n", - "- We need data from 4 neighbors (4 messages per iteration)\n", - "- We communicate $N^2/P$ items per message (the full data owned by the neighbor)\n", - "- Communication/computation ratio is $O(1)$" + "
\n", + "Question: Compute the complexity of the communication over computation ratio for this data partition.\n", + "
" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "10fab825", + "metadata": {}, + "outputs": [], + "source": [ + "uncover = false\n", + "partition_cyclic_answer(uncover)" ] }, { @@ -897,7 +984,7 @@ "source": [ "### Backwards Gauss-Seidel\n", "\n", - "In addition, the the result of the Gauss-Seidel method depends on the order of the steps in the loop over `i`. This is another symptom that tells you that this loop is hard to parallelize. For instance, if you do the iterations over `i` by reversing the loop order, you get another method called *backward* Gauss-Seidel." + "In addition, the the result of the Gauss-Seidel method depends on the order of the steps in the loop over `i`. This is another symptom that tells you that this loop is hard (or impossible) to parallelize. For instance, if you do the iterations over `i` by reversing the loop order, you get another method called *backward* Gauss-Seidel." ] }, { @@ -925,7 +1012,7 @@ "id": "63c4ce1f", "metadata": {}, "source": [ - "Both Jacobi and *forward* and *backward* Gauss-Seidel converge to the same result, but they lead to slightly different values during the iterations. Check it with the following cells. First, run it with one `niters=1` and then with `niters=100`." + "Both Jacobi and *forward* and *backward* Gauss-Seidel converge to the same result, but they lead to slightly different values during the iterations. Check it with the following cells. First, run the methods with `niters=1` and then with `niters=100`." ] }, { @@ -967,7 +1054,7 @@ "source": [ "### Red-black Gauss-Seidel\n", "\n", - "There is another version called *red-black* Gauss-Seidel. This uses a very clever order for the steps in the loop over `i`. It does this loop in two phases. First, one updates the entries with even index, and then the entries with odd index." + "There is yet another version called *red-black* Gauss-Seidel. This uses a very clever order for the steps in the loop over `i`. It does this loop in two phases. First, one updates the entries with even index, and then the entries with odd index." ] }, { @@ -1083,7 +1170,18 @@ "metadata": {}, "outputs": [], "source": [ - "gauss_seidel_2_why()" + "uncover = false\n", + "gauss_seidel_2_why(uncover)" + ] + }, + { + "cell_type": "markdown", + "id": "41e90d60", + "metadata": {}, + "source": [ + "### Changing an algorithm to make it parallel\n", + "\n", + "Note that the original method (the forward Gauss-Seidel) cannot be parallelized, we needed to modify the method slightly with the red-black ordering in order to create a method that can be parallelized. However the method we parallelized is not equivalent to the original one. This happens in practice in many other applications. An algorithm might be impossible to parallelize and one needs to modify it to exploit parallelism. However, one needs to be careful when modifying the algorithm to not destroy the algorithmic properties of the original one. In this case, we succeeded. The red-black Gauss-Seidel converges as fast (if not faster) than the original forward Gauss-Seidel. However, this is not true in general. There is often a trade-off between the algorithmic properties and how parallelizable is the algorithm." ] }, { @@ -1093,7 +1191,7 @@ "source": [ "## MPI implementation\n", "\n", - "We consider the implementation of the Jacobi method using MPI. We will consider the 1D version for simplicity.\n", + "In the last part of this notebook, we consider the implementation of the Jacobi method using MPI. 
We will consider the 1D version for simplicity.\n", "\n", "\n", "
\n", @@ -1154,7 +1252,7 @@ "source": [ "### Initialization\n", "\n", - "Let us start with function `init`. This is its implementation:" + "Let us start with function `init`." ] }, { @@ -1191,7 +1289,7 @@ "id": "1b9e75d8", "metadata": {}, "source": [ - "This function crates and initializes the vector `u` and the auxiliary vector `u_new` and fills in the boundary values. Note that we are not creating the full arrays like in the sequential case. We are only creating the parts to be managed by the current rank. To this end, we start by computing the number of entries to be updated in this rank, i.e., variable `load`. We have assumed that `n` is a multiple of the number of ranks for simplicity. If this is not the case, we stop the computation with `MPI.Abort`. Note that we are allocating two extra elements in `u` (and `u_new`) for the ghost cells or boundary conditions. The following figure displays the arrays created for `n==9` and `nranks==3` (thus `load == 3`). Note that the first and last elements of the arrays are displayed with gray edges denoting that they are the extra elements allocated for ghost cells or boundary conditions." + "This function crates and initializes the vector `u` and the auxiliary vector `u_new` and fills in the boundary values. Note that we are not creating the full arrays like in the sequential case. We are only creating the parts to be managed by the current rank. To this end, we start by computing the number of entries to be updated in this rank, i.e., variable `load`. We have assumed that `n` is a multiple of the number of ranks for simplicity. If this is not the case, we stop the computation with `MPI.Abort`. Note that we are allocating two extra elements in `u` (and `u_new`) for the ghost cells and boundary conditions. The following figure displays the arrays created for `n==9` and `nranks==3` (thus `load == 3`). Note that the first and last elements of the arrays are displayed with gray edges denoting that they are the extra elements allocated for ghost cells or boundary conditions." ] }, { @@ -1353,7 +1451,8 @@ "metadata": {}, "outputs": [], "source": [ - "sndrcv_fix_answer()" + "uncover = false\n", + "sndrcv_fix_answer(uncover)" ] }, { @@ -1363,7 +1462,7 @@ "source": [ "### Local computation\n", "\n", - "Once the ghost values have the right values, we can perform the Jacobi update locally at each process. This is done in function `local_update!`. Note that here we only update the data *owned* by the current MPI rank, i.e. we do not modify the ghost values. There is no need to modify the ghost values since they will updated by another rank, i.e. the rank that own the value. In the code this is reflected in the loop over `i`. We do not visit the first nor the last entry in `u_new`." + "Once the ghost cells have the right values, we can perform the Jacobi update locally at each process. This is done in function `local_update!`. Note that here we only update the data *owned* by the current MPI rank, i.e. we do not modify the ghost values. There is no need to modify the ghost values since they will updated by another rank. In the code, this is reflected in the loop over `i`. We do not visit the first nor the last entry in `u_new`." ] }, { @@ -1390,7 +1489,7 @@ "source": [ "### Running the code\n", "\n", - "Not let us put all pieces together and run the code. If not done yet, install MPI." + "Let us put all pieces together and run the code. If not done yet, install MPI." 
] }, { @@ -1408,7 +1507,7 @@ "id": "c966375a", "metadata": {}, "source": [ - "The following cells includes all previous code snippets into a final one. Note that we are eventually calling function `jacobi_mpi` and showing the result vector `u`. Run the following code for 1 MPI rank, then for 2 and 3 MPI ranks. Look into the values of `u`. Does it make sense?" + "The following cells includes all previous code snippets into a final one. We are eventually calling function `jacobi_mpi` and showing the result vector `u`. Run the following code for 1 MPI rank, then for 2 and 3 MPI ranks. Look into the values of `u`. Does it make sense?" ] }, { @@ -1442,7 +1541,7 @@ "source": [ "### Checking the result\n", "\n", - "Checking the result visually is not enough in general. To check the parallel implementation we want to compare the result against the sequential implementation. The way we do the computations (either in parallel or sequential) should not affect the result. However, how can we compare the sequential and the parallel result? The parallel version gives a distributed vector. We cannot compare this one directly with the result of the sequential function. A possible solution is to gather all the pieces of the parallel result in a single rank and compare there against the parallel implementation.\n", + "Checking the result visually is not enough in general. To check the parallel implementation we want to compare it against the sequential implementation. However, how can we compare the sequential and the parallel result? The parallel version gives a distributed vector. We cannot compare this one directly with the result of the sequential function. A possible solution is to gather all the pieces of the parallel result in a single rank and compare there against the parallel implementation.\n", "\n", "\n", "The following function gather the distributed vector in rank 0." @@ -1592,58 +1691,6 @@ "run(`$(mpiexec()) -np 3 julia --project=. -e $code`);" ] }, - { - "cell_type": "markdown", - "id": "73cd4d73", - "metadata": {}, - "source": [ - "Note that we have used function `isapprox` to compare the results. This function checks if two values are the same within machine precision. Using `==` is generally discouraged when working with floating point numbers as they can be affected by rounding-off errors." - ] - }, - { - "cell_type": "markdown", - "id": "d73c838c", - "metadata": {}, - "source": [ - "
\n", - "Question: What happens if we use `u_root == u_seq` to compare the parallel and the sequential result?\n", - "
\n", - "\n", - " a) The test will still pass.\n", - " b) The test will fail due to rounding-off errors.\n", - " c) The test might pass or fail depending on `n`.\n", - " d) The test might pass or fail depending on the number of MPI ranks." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "id": "cd2427f1", - "metadata": {}, - "outputs": [], - "source": [ - "answer = \"x\" # replace x with a, b, c or d\n", - "jacobitest_check(answer)" - ] - }, - { - "cell_type": "markdown", - "id": "790e7064", - "metadata": {}, - "source": [ - "Run cell below for an explanation of the correct answer." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "id": "72ed2aa1", - "metadata": {}, - "outputs": [], - "source": [ - "jacobitest_why()" - ] - }, { "cell_type": "markdown", "id": "c9aa2901", @@ -1651,7 +1698,7 @@ "source": [ "## Latency hiding\n", "\n", - "Can our implementation above be improved? Note that we only need communications to update the values at the boundary of the portion owned by each process. The other values (the one in green in the figure below) can be updated without communications. This provides the opportunity of overlapping the computation of the interior values (green cells in the figure) with the communication of the ghost values. This technique is called latency hiding, since we are hiding communication latency by overlapping it with computation that we need to do anyway. The actual implementation is left as an exercise (see Exercise 1)." + "We have now a correct parallel implementation. But. can our implementation above be improved? Note that we only need communications to update the values at the boundary of the portion owned by each process. The other values (the one in green in the figure below) can be updated without communications. This provides the opportunity of overlapping the computation of the interior values (green cells in the figure) with the communication of the ghost values. This technique is called latency hiding, since we are hiding communication latency by overlapping it with computation that we need to do anyway. The actual implementation is left as an exercise (see Exercise 1)." ] }, { diff --git a/dev/jacobi_method/index.html b/dev/jacobi_method/index.html index 129da4e..267a145 100644 --- a/dev/jacobi_method/index.html +++ b/dev/jacobi_method/index.html @@ -14,4 +14,4 @@ var myIframe = document.getElementById("notebook"); iFrameResize({log:true}, myIframe); }); -
+ diff --git a/dev/jacobi_method_src/index.html b/dev/jacobi_method_src/index.html index 8f15a1e..848269a 100644 --- a/dev/jacobi_method_src/index.html +++ b/dev/jacobi_method_src/index.html @@ -7586,7 +7586,38 @@ a.anchor-link { jacobi_3_check(answer) = answer_checker(answer, "c") lh_check(answer) = answer_checker(answer, "c") sndrcv_check(answer) = answer_checker(answer,"b") -function sndrcv_fix_answer() +function partition_1d_answer(bool) + bool || return + msg = """ +- We update N^2/P items per iteration +- We need data from 2 neighbors (2 messages per iteration) +- We communicate N items per message +- Communication/computation ratio is 2N/(N^2/P) = 2P/N =O(P/N) + """ + println(msg) +end +function partition_2d_answer(bool) + bool || return + msg = """ +- We update N^2/P items per iteration +- We need data from 4 neighbors (4 messages per iteration) +- We communicate N/sqrt(P) items per message +- Communication/computation ratio is (4N/sqrt(P)/(N^2/P)= 4sqrt(P)/N =O(sqrt(P)/N) + """ + println(msg) +end +function partition_cyclic_answer(bool) + bool || return + msg = """ +- We update N^2/P items +- We need data from 4 neighbors (4 messages per iteration) +- We communicate N^2/P items per message (the full data owned by the neighbor) +- Communication/computation ratio is O(1) + """ +println(msg) +end +function sndrcv_fix_answer(bool) + bool || return msg = """ One needs to carefully order the sends and the receives to avoid cyclic dependencies that might result in deadlocks. The actual implementation is left as an exercise. @@ -7594,7 +7625,8 @@ a.anchor-link { println(msg) end jacobitest_check(answer) = answer_checker(answer,"a") -function jacobitest_why() +function jacobitest_why(bool) + bool || return msg = """ The test will pass. The parallel implementation does exactly the same operations in exactly the same order than the sequential one. Thus, the result should be @@ -7604,7 +7636,8 @@ a.anchor-link { println(msg) end gauss_seidel_2_check(answer) = answer_checker(answer,"d") -function gauss_seidel_2_why() +function gauss_seidel_2_why(bool) + bool || return msg = """ All "red" cells can be updated in parallel as they only depend on the values of "black" cells. In order workds, we can update the "red" cells in any order whithout changing the result. They only @@ -7652,7 +7685,7 @@ a.anchor-link {

When solving a Laplace equation in 1D, the Jacobi method leads to the following iterative scheme: The entry $i$ of vector $u$ at iteration $t+1$ is computed as:

$u^{t+1}_i = \dfrac{u^t_{i-1}+u^t_{i+1}}{2}$

-

This iterative is yet simple but shares fundamental challenges with many other algorithms used in scientific computing. This is why we are studying it here.

+

This algorithm is simple, yet it shares fundamental challenges with many other algorithms used in scientific computing. This is why we are studying it here.
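
To make the update rule concrete, here is a minimal serial sketch of the scheme (illustrative code; boundary values -1 and 1 are the ones used throughout this notebook):

function jacobi_sketch(n, niters)
    u = zeros(n + 2)         # n interior points plus the two boundary values
    u[1] = -1.0              # left boundary condition
    u[end] = 1.0             # right boundary condition
    u_new = copy(u)
    for t in 1:niters
        for i in 2:(n + 1)
            u_new[i] = 0.5 * (u[i - 1] + u[i + 1])   # the update rule above
        end
        u, u_new = u_new, u  # swap the roles of u and u_new for the next iteration
    end
    u
end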

@@ -7664,6 +7697,9 @@ a.anchor-link {
@@ -7750,7 +7786,7 @@ a.anchor-link {
@@ -7810,7 +7846,7 @@ a.anchor-link {
@@ -7856,7 +7892,7 @@ a.anchor-link {

Parallelization strategy

Remember that a sufficiently large grain size is needed to achieve performance in a distributed algorithm. For Jacobi, one could update each entry of vector u_new in a different process, but this would not be efficient. Instead, we use a parallelization strategy with a larger grain size that is analogous to the algorithm 3 we studied for the matrix-matrix multiplication:

The following figure displays the data distribution over 3 processes.

@@ -7894,7 +7930,7 @@ a.anchor-link {
@@ -7966,7 +8002,8 @@ a.anchor-link {
@@ -8030,6 +8067,17 @@ a.anchor-link { +
+ +
@@ -8256,18 +8307,30 @@ a.anchor-link { -
+
-
+
-
+
@@ -8580,7 +8667,7 @@ d) The inner, but not the outer
@@ -8635,7 +8722,7 @@ d) The inner, but not the outer
@@ -8769,20 +8856,32 @@ d) Loop over i only
In [ ]:
-
gauss_seidel_2_why()
+
uncover = false
+gauss_seidel_2_why(uncover)
 
+
+ +
@@ -9065,7 +9164,8 @@ d) This implementation does not work when distributing over just a single MPI ra
In [ ]:
-
sndrcv_fix_answer()
+
uncover = false
+sndrcv_fix_answer(uncover)
 
@@ -9078,7 +9178,7 @@ d) This implementation does not work when distributing over just a single MPI ra
@@ -9110,7 +9210,7 @@ d) This implementation does not work when distributing over just a single MPI ra
@@ -9135,7 +9235,7 @@ d) This implementation does not work when distributing over just a single MPI ra
@@ -9174,7 +9274,7 @@ d) This implementation does not work when distributing over just a single MPI ra
@@ -9346,81 +9446,13 @@ d) This implementation does not work when distributing over just a single MPI ra -
- -
-
- -
- -
-
- -
- -
diff --git a/dev/julia_async/index.html b/dev/julia_async/index.html index 6aaff10..5d20dc2 100644 --- a/dev/julia_async/index.html +++ b/dev/julia_async/index.html @@ -14,4 +14,4 @@ var myIframe = document.getElementById("notebook"); iFrameResize({log:true}, myIframe); }); -
+ diff --git a/dev/julia_basics/index.html b/dev/julia_basics/index.html index f7c7ed5..7123eee 100644 --- a/dev/julia_basics/index.html +++ b/dev/julia_basics/index.html @@ -14,4 +14,4 @@ var myIframe = document.getElementById("notebook"); iFrameResize({log:true}, myIframe); }); - + diff --git a/dev/julia_distributed/index.html b/dev/julia_distributed/index.html index 8384020..97b0605 100644 --- a/dev/julia_distributed/index.html +++ b/dev/julia_distributed/index.html @@ -14,4 +14,4 @@ var myIframe = document.getElementById("notebook"); iFrameResize({log:true}, myIframe); }); - + diff --git a/dev/julia_intro/index.html b/dev/julia_intro/index.html index 7630c7a..f0d8f2e 100644 --- a/dev/julia_intro/index.html +++ b/dev/julia_intro/index.html @@ -14,4 +14,4 @@ var myIframe = document.getElementById("notebook"); iFrameResize({log:true}, myIframe); }); - + diff --git a/dev/julia_jacobi/index.html b/dev/julia_jacobi/index.html index f9311e9..e44d136 100644 --- a/dev/julia_jacobi/index.html +++ b/dev/julia_jacobi/index.html @@ -14,4 +14,4 @@ var myIframe = document.getElementById("notebook"); iFrameResize({log:true}, myIframe); }); - + diff --git a/dev/julia_mpi.ipynb b/dev/julia_mpi.ipynb index 60e6cfe..4089626 100644 --- a/dev/julia_mpi.ipynb +++ b/dev/julia_mpi.ipynb @@ -75,6 +75,7 @@ "- MPI is not a Julia implementation of the MPI standard\n", "- It is just a wrapper to the C interface of MPI.\n", "- You need a C MPI installation in your system (MPI.jl downloads one for you when needed).\n", + "- On a cluster (e.g. DAS-5), you want you use the MPI installation already available in the system.\n", "\n", "\n", "### Why MPI.jl?\n", @@ -211,7 +212,7 @@ "MPI.Finalize()\n", "```\n", "\n", - "In some process `rand(1:10)` might be 2 and the program will stop without reaching `MPI.Finalize()` leading to an incorrect program." + "This is incorrect. In some process `rand(1:10)` might be 2 and the program will stop without reaching `MPI.Finalize()` leading to an incorrect program." ] }, { @@ -367,7 +368,7 @@ "id": "f1a502a3", "metadata": {}, "source": [ - "Note that this note notebook is running on a single process. So using MPI will only make sense later when we add more processes." + "Note that this note notebook is running on a single process. So using MPI will only make actual sense later when we add more processes." ] }, { @@ -626,13 +627,13 @@ "source": [ "## Point-to-point communication\n", "\n", - "Now we are up and running, and ready to start learning MPI communication primitives. In this notebook we will cover so-called point-to-point communication directives. In a later notebook we will also learn about collective primitives.\n", + "Now we are up and running, and ready to start learning MPI communication primitives. In this notebook we will cover so-called point-to-point communication. In a later notebook we will also learn about collective primitives.\n", "\n", "MPI provides point-to-point communication directives for arbitrary communication between processes. Point-to-point communications are two-sided: there is a sender and a receiver. 
Here, we will discuss different types of directives:\n", "\n", - "- `MPI_Send`, and `MPI_Recv` (*blocking directives*)\n", - "- `MPI_Isend`, and `MPI_Irecv` (*non-blocking directives*)\n", - "- `MPI_Bsend`, `MPI_Ssend`, and `MPI_Rsend` (*advanced communication modes*)" + "- `MPI_Send`, and `MPI_Recv`: *complete (blocking) directives*\n", + "- `MPI_Isend`, and `MPI_Irecv`: *incomplete (non-blocking) directives*\n", + "- `MPI_Bsend`, `MPI_Ssend`, and `MPI_Rsend`: *advanced communication modes*" ] }, { @@ -640,7 +641,7 @@ "id": "0e515109", "metadata": {}, "source": [ - "In all cases, these functions are used to send a message from a ranks and receive it in another rank. See next picture." + "In all cases, these functions are used to send a message from a rank and receive it in another rank. See next picture." ] }, { @@ -979,7 +980,7 @@ "\n", "\n", "\n", - "`MPI_Send` is also often called a blocking send, but this is very misleading. `MPI_Send` might or not wait for a matching `MPI_Recv`. Assuming that `MPI_Send` will block waiting for a matching receive is erroneous. I.e., we cannot assume that `MPI_Send` has synchronization side effects with the receiver process. However, assuming that `MPI_Send` will not block is also erroneous. Look into the following example (which in fact is an incorrect MPI program). In contrast, `MPI_Send` guarantees that the send buffer can be reused when function returns (complete operation)." + "`MPI_Send` is *informally* called a blocking send, but this is not accurate. `MPI_Send` might or not wait for a matching `MPI_Recv`. Assuming that `MPI_Send` will block waiting for a matching receive is erroneous. I.e., we cannot assume that `MPI_Send` has synchronization side effects with the receiver process. However, assuming that `MPI_Send` will not block is also erroneous. Look into the following example (which in fact is an incorrect MPI program). `MPI_Send` only guarantees that the send buffer can be reused when function returns (complete operation)." ] }, { @@ -1042,7 +1043,7 @@ "1. One might want to minimize synchronization time. This is often achieved by copying the outgoing message in an internal buffer and returning from the `MPI_Send` as soon as possible, without waiting for a matching `MPI_Recv`.\n", "2. One might want to avoid data copies (e.g. for large messages). In this case, one needs to wait for a matching receive and return from the `MPI_Send` when the data has been sent.\n", "\n", - "Thus, there is a trade-off between memory copied (buffering) and synchronization (wait) time. One cannot minimize both at the same time." + "Thus, there is a trade-off between memory copied (buffering) and synchronization (wait) time. One cannot minimize both at the same time unfortunately." ] }, { @@ -1497,7 +1498,7 @@ "function matmul_mpi_3!(C,A,B)\n", "```\n", "\n", - "Assume that the input matrices `A` and `B` are given only on rank 0, the other ranks get dummy matrices with zero rows and zero columns to save memory. You need to communicate the required parts to other ranks. For simplicity you can assume that `A` and `B` are square matrices and that the number of rows is a multiple of the number of processes (on rank 0). The result `C` should be overwritten only on rank 0. You can use the following cell to implement and check your result. Copy the code below to a file called `ex1.jl`. Modify the file (e.g. with vscode). 
Run it from the Julia REPL using the `run` function as explained in the [Getting Started tutorial](https://www.francescverdugo.com/XM_40017/dev/getting_started_with_julia/#Running-MPI-code)." + "Assume that the input matrices `A` and `B` are given only on rank 0, the other ranks get dummy empty matrices to save memory. You need to communicate the required parts to other ranks. For simplicity you can assume that `A` and `B` are square matrices and that the number of rows is a multiple of the number of processes (on rank 0). The result `C` should be overwritten only on rank 0. You can use the following cell to implement and check your result. Copy the code below to a file called `ex1.jl`. Modify the file (e.g. with vscode). Run it from the Julia REPL using the `run` function as explained in the [Getting Started tutorial](https://www.francescverdugo.com/XM_40017/dev/getting_started_with_julia/#Running-MPI-code). Don't try to implement complex MPI code in a Jupyter notebook." ] }, { diff --git a/dev/julia_mpi/index.html b/dev/julia_mpi/index.html index 4b0aa51..b658556 100644 --- a/dev/julia_mpi/index.html +++ b/dev/julia_mpi/index.html @@ -14,4 +14,4 @@ var myIframe = document.getElementById("notebook"); iFrameResize({log:true}, myIframe); }); - + diff --git a/dev/julia_mpi_src/index.html b/dev/julia_mpi_src/index.html index 843af1f..73d5ba8 100644 --- a/dev/julia_mpi_src/index.html +++ b/dev/julia_mpi_src/index.html @@ -7593,6 +7593,7 @@ a.anchor-link {
  • MPI.jl is not a Julia implementation of the MPI standard
  • It is just a wrapper to the C interface of MPI.
  • You need a C MPI installation in your system (MPI.jl downloads one for you when needed).
  • +
  • On a cluster (e.g. DAS-5), you want to use the MPI installation already available on the system (see the sketch below).
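
    A common way to make MPI.jl use the system MPI library is the MPIPreferences package. This is a hedged sketch (run it once in the project environment after adding MPIPreferences, and check your cluster documentation for the MPI modules to load first):

    julia> using MPIPreferences
    julia> MPIPreferences.use_system_binary()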
  • Why MPI.jl?

    MPI.jl provides a convenient Julia API to access MPI. For instance, this is how you get the id (rank) of the current process.

    comm = MPI.COMM_WORLD
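     # A more complete, self-contained sketch of the same idea (standard MPI.jl calls);
     # run it with several ranks via mpiexec to get one greeting per process.
     using MPI
     MPI.Init()
     comm = MPI.COMM_WORLD
     rank = MPI.Comm_rank(comm)      # id of the current process, starting at 0
     nranks = MPI.Comm_size(comm)    # total number of processes
     println("Hello from rank $rank of $nranks")
     MPI.Finalize()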
    @@ -7727,7 +7728,7 @@ a.anchor-link {
     @assert rand(1:10) != 2
     MPI.Finalize()
     
    -

    In some process rand(1:10) might be 2 and the program will stop without reaching MPI.Finalize() leading to an incorrect program.

    +

    This is incorrect: in some process rand(1:10) might be 2, and that process will stop without ever reaching MPI.Finalize().
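
    If a rank genuinely needs to stop the whole computation early, the usual pattern is to call MPI.Abort instead of letting a single rank exit on its own. A hedged sketch of a corrected variant:

     using MPI
     MPI.Init()
     comm = MPI.COMM_WORLD
     if rand(1:10) == 2
         MPI.Abort(comm, 1)   # terminates all ranks, not just the current one
     end
     MPI.Finalize()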

    @@ -7925,7 +7926,7 @@ a.anchor-link {
    @@ -8231,12 +8232,12 @@ a.anchor-link {
    @@ -8248,7 +8249,7 @@ a.anchor-link {
    @@ -8610,7 +8611,7 @@ a.anchor-link {
    @@ -8682,7 +8683,7 @@ a.anchor-link {
  • One might want to minimize synchronization time. This is often achieved by copying the outgoing message into an internal buffer and returning from the MPI_Send as soon as possible, without waiting for a matching MPI_Recv.
  • One might want to avoid data copies (e.g. for large messages). In this case, one needs to wait for a matching receive and return from the MPI_Send when the data has been sent.
  • -

    Thus, there is a trade-off between memory copied (buffering) and synchronization (wait) time. One cannot minimize both at the same time.

    +

    Thus, there is a trade-off between memory copied (buffering) and synchronization (wait) time. Unfortunately, one cannot minimize both at the same time.
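
    When you do not want to rely on either behavior, the incomplete (non-blocking) operations mentioned above let you start the communication and wait for its completion explicitly. A minimal sketch (the keyword form follows recent MPI.jl versions and may differ in older ones; buffer sizes and ranks are made up for illustration):

     using MPI
     MPI.Init()
     comm = MPI.COMM_WORLD
     rank = MPI.Comm_rank(comm)
     if rank == 0
         sndbuf = fill(1.0, 5)
         req = MPI.Isend(sndbuf, comm; dest=1, tag=0)
         MPI.Wait(req)        # only after this is it safe to reuse sndbuf
     elseif rank == 1
         rcvbuf = zeros(5)
         req = MPI.Irecv!(rcvbuf, comm; source=0, tag=0)
         MPI.Wait(req)        # only after this is rcvbuf guaranteed to be filled
     end
     MPI.Finalize()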

    @@ -9171,7 +9172,7 @@ a.anchor-link {

    Exercise 1

    Implement the parallel matrix-matrix multiplication (Algorithm 3) from the previous notebook using MPI instead of Distributed. Use this function signature:

    function matmul_mpi_3!(C,A,B)
     
    -

    Assume that the input matrices A and B are given only on rank 0, the other ranks get dummy matrices with zero rows and zero columns to save memory. You need to communicate the required parts to other ranks. For simplicity you can assume that A and B are square matrices and that the number of rows is a multiple of the number of processes (on rank 0). The result C should be overwritten only on rank 0. You can use the following cell to implement and check your result. Copy the code below to a file called ex1.jl. Modify the file (e.g. with vscode). Run it from the Julia REPL using the run function as explained in the Getting Started tutorial.

    +

    Assume that the input matrices A and B are given only on rank 0, the other ranks get dummy empty matrices to save memory. You need to communicate the required parts to other ranks. For simplicity you can assume that A and B are square matrices and that the number of rows is a multiple of the number of processes (on rank 0). The result C should be overwritten only on rank 0. You can use the following cell to implement and check your result. Copy the code below to a file called ex1.jl. Modify the file (e.g. with vscode). Run it from the Julia REPL using the run function as explained in the Getting Started tutorial. Don't try to implement complex MPI code in a Jupyter notebook.

    diff --git a/dev/julia_tutorial/index.html b/dev/julia_tutorial/index.html index cdc8225..b059119 100644 --- a/dev/julia_tutorial/index.html +++ b/dev/julia_tutorial/index.html @@ -14,4 +14,4 @@ var myIframe = document.getElementById("notebook"); iFrameResize({log:true}, myIframe); }); - + diff --git a/dev/matrix_matrix/index.html b/dev/matrix_matrix/index.html index 31d3f32..3c460ca 100644 --- a/dev/matrix_matrix/index.html +++ b/dev/matrix_matrix/index.html @@ -14,4 +14,4 @@ var myIframe = document.getElementById("notebook"); iFrameResize({log:true}, myIframe); }); - + diff --git a/dev/mpi_collectives.ipynb b/dev/mpi_collectives.ipynb index d7e9c98..647e6b0 100644 --- a/dev/mpi_collectives.ipynb +++ b/dev/mpi_collectives.ipynb @@ -97,7 +97,7 @@ "source": [ "## MPI_Barrier\n", "\n", - "This function is used to synchronizes a group of processes. All processes block until all have reached the barrier. It is often invoked at the end of for loops to make sure all processes have finished the current loop iteration to move to the next one. We will see an example later in another notebook when studying the traveling sales person problem (TSP).\n", + "This function is used to synchronizes a group of processes. All processes block until all have reached the barrier. It is often invoked at the end of for loops to make sure all processes have finished the current loop iteration to move to the next one. We will see a practical example later in another notebook when studying the traveling sales person problem (TSP).\n", "\n", "In Julia:\n", "```julia\n", @@ -117,7 +117,7 @@ "source": [ "### Example\n", "\n", - "In this example the ranks sleep for a random amount of time and then they call barrier. It is guaranteed that the message \"Done!\" will be printed after all processes printed \"I woke up\" since we used a barrier. Try also to comment out the call to `MPI.Barrier`. You will see that the message can be printed in any order in this case." + "In this example the ranks sleep for a random amount of time and then they call barrier. It is guaranteed that the message \"Done!\" will be printed after all processes printed \"I woke up\" since we used a barrier. Try also to comment out the call to `MPI.Barrier`. You will see that the message can be printed in any order." ] }, { @@ -147,7 +147,7 @@ "source": [ "## MPI_Reduce\n", "\n", - "This function combines values provided by different processors according to a given reduction operation. The result is received in a single process (called the root process).\n", + "This function combines values provided by different processors according to a given reduction operation. The result is received in a single process (called the root process). The root process can be any process and it is rank 0 by default in Julia.\n", "\n", "In Julia:\n", "```julia\n", @@ -301,7 +301,12 @@ "source": [ "## MPI_Gather\n", "\n", - "Each rank sends a message to the root rank (the root rank also sends a message to itself). The root rank receives all these values in a buffer (e.g. a vector). This function assumes that the amount of data sent from each rank is the same. The root process can be any process and it is rank 0 by default in Julia.\n", + "Each rank sends a message to the root rank (the root rank also sends a message to itself). The root rank receives all these values in a buffer (e.g. a vector).\n", + "\n", + "
    \n", + "Note: This function assumes that the amount of data sent from each rank is the same. See `MPI_Gatherv` below for more general cases.\n", + "
    \n", + "\n", "\n", "In Julia:\n", "```julia\n", @@ -487,7 +492,7 @@ "source": [ "### Example\n", "\n", - "Each process sends a random amount of integers to rank 0." + "Each process sends a random amount of integers to rank 0. The root process will not know the amount of data to be gathered from each rank in advance. We need an auxiliary gather to inform about the message size." ] }, { @@ -898,6 +903,24 @@ "After learning this material and the previous MPI notebook, you have a solid basis to start implementing sophisticated parallel algorithms using MPI." ] }, + { + "cell_type": "markdown", + "id": "843b40cd", + "metadata": {}, + "source": [ + "## Exercises" + ] + }, + { + "cell_type": "markdown", + "id": "5c2045d9", + "metadata": {}, + "source": [ + "### Exercise 1\n", + "\n", + "Implement the parallel matrix-matrix multiplication (Algorithm 3) using MPI collectives instead of point-to-point communication. I.e., this is the same exercise as in previous notebook, but using different functions for communication." + ] + }, { "cell_type": "markdown", "id": "5e8f6e6a", diff --git a/dev/mpi_collectives/index.html b/dev/mpi_collectives/index.html index 01b760f..698eb61 100644 --- a/dev/mpi_collectives/index.html +++ b/dev/mpi_collectives/index.html @@ -14,4 +14,4 @@ var myIframe = document.getElementById("notebook"); iFrameResize({log:true}, myIframe); }); - + diff --git a/dev/mpi_collectives_src/index.html b/dev/mpi_collectives_src/index.html index ababd79..82c12ff 100644 --- a/dev/mpi_collectives_src/index.html +++ b/dev/mpi_collectives_src/index.html @@ -7614,7 +7614,7 @@ a.anchor-link {
    @@ -7666,7 +7666,7 @@ a.anchor-link {
    @@ -8451,6 +8454,28 @@ a.anchor-link {
    +
    + +
    +
    + +
    +
    diff --git a/dev/notebook-hello/index.html b/dev/notebook-hello/index.html index 736b5a0..5fdb8d8 100644 --- a/dev/notebook-hello/index.html +++ b/dev/notebook-hello/index.html @@ -14,4 +14,4 @@ var myIframe = document.getElementById("notebook"); iFrameResize({log:true}, myIframe); }); - + diff --git a/dev/pdes/index.html b/dev/pdes/index.html index cdd4cc8..910fbed 100644 --- a/dev/pdes/index.html +++ b/dev/pdes/index.html @@ -14,4 +14,4 @@ var myIframe = document.getElementById("notebook"); iFrameResize({log:true}, myIframe); }); - + diff --git a/dev/solutions/index.html b/dev/solutions/index.html index 02d4eae..dab6c3b 100644 --- a/dev/solutions/index.html +++ b/dev/solutions/index.html @@ -14,4 +14,4 @@ var myIframe = document.getElementById("notebook"); iFrameResize({log:true}, myIframe); }); - + diff --git a/dev/solutions_for_all_notebooks/index.html b/dev/solutions_for_all_notebooks/index.html index 6355459..7156870 100644 --- a/dev/solutions_for_all_notebooks/index.html +++ b/dev/solutions_for_all_notebooks/index.html @@ -172,4 +172,4 @@ end

    « Jacobi method
    +end diff --git a/dev/tsp/index.html b/dev/tsp/index.html index 592b151..fbc1977 100644 --- a/dev/tsp/index.html +++ b/dev/tsp/index.html @@ -14,4 +14,4 @@ var myIframe = document.getElementById("notebook"); iFrameResize({log:true}, myIframe); }); - +