mirror of
https://github.com/fverdugo/XM_40017.git
synced 2025-11-24 09:24:32 +01:00
More changes in jacobi and MPI notebooks
@@ -97,7 +97,7 @@
  "source": [
   "## MPI_Barrier\n",
   "\n",
-  "This function is used to synchronizes a group of processes. All processes block until all have reached the barrier. It is often invoked at the end of for loops to make sure all processes have finished the current loop iteration to move to the next one. We will see an example later in another notebook when studying the traveling sales person problem (TSP).\n",
+  "This function is used to synchronize a group of processes. All processes block until all of them have reached the barrier. It is often invoked at the end of for loops to make sure all processes have finished the current loop iteration before moving to the next one. We will see a practical example later in another notebook when studying the traveling salesperson problem (TSP).\n",
   "\n",
   "In Julia:\n",
   "```julia\n",
@@ -117,7 +117,7 @@
  "source": [
   "### Example\n",
   "\n",
-  "In this example the ranks sleep for a random amount of time and then they call barrier. It is guaranteed that the message \"Done!\" will be printed after all processes printed \"I woke up\" since we used a barrier. Try also to comment out the call to `MPI.Barrier`. You will see that the message can be printed in any order in this case."
+  "In this example the ranks sleep for a random amount of time and then call the barrier. Since we used a barrier, it is guaranteed that the message \"Done!\" will be printed after all processes have printed \"I woke up\". Try also commenting out the call to `MPI.Barrier`. You will see that the messages can then be printed in any order."
  ]
 },
 {
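The sleep/barrier example described in this hunk can be sketched as follows. This is a minimal sketch assuming the MPI.jl package and its `MPI.Barrier` binding; it must be launched with several processes, e.g. `mpiexec -n 3 julia script.jl`.

```julia
# Sketch of the barrier example: each rank sleeps a random time,
# then all ranks synchronize before rank 0 prints "Done!".
using MPI
MPI.Init()
comm = MPI.COMM_WORLD
rank = MPI.Comm_rank(comm)
sleep(rand())                      # sleep for a random amount of time
println("I woke up (rank $rank)")
MPI.Barrier(comm)                  # block until all ranks reach this point
if rank == 0
    println("Done!")               # guaranteed to appear after all "I woke up" messages
end
```

Commenting out the `MPI.Barrier` line removes the ordering guarantee, as the notebook text explains.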
@@ -147,7 +147,7 @@
  "source": [
   "## MPI_Reduce\n",
   "\n",
-  "This function combines values provided by different processors according to a given reduction operation. The result is received in a single process (called the root process).\n",
+  "This function combines values provided by the different processes according to a given reduction operation. The result is received in a single process (called the root process). The root can be any process and it is rank 0 by default in Julia.\n",
   "\n",
   "In Julia:\n",
   "```julia\n",
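A reduction as described in this hunk can be sketched as follows, assuming the MPI.jl convenience method `MPI.Reduce(obj, op, comm; root=0)` for isbits values; run under `mpiexec` with several ranks.

```julia
# Sketch: sum the rank ids on the root process (rank 0 by default).
using MPI
MPI.Init()
comm = MPI.COMM_WORLD
rank = MPI.Comm_rank(comm)
result = MPI.Reduce(rank, +, comm; root=0)  # result is only defined on the root
if rank == 0
    println("Sum of ranks = $result")
end
```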
@@ -301,7 +301,12 @@
  "source": [
   "## MPI_Gather\n",
   "\n",
-  "Each rank sends a message to the root rank (the root rank also sends a message to itself). The root rank receives all these values in a buffer (e.g. a vector). This function assumes that the amount of data sent from each rank is the same. The root process can be any process and it is rank 0 by default in Julia.\n",
+  "Each rank sends a message to the root rank (the root rank also sends a message to itself). The root rank receives all these values in a buffer (e.g. a vector).\n",
   "\n",
+  "<div class=\"alert alert-block alert-info\">\n",
+  "<b>Note:</b> This function assumes that the amount of data sent from each rank is the same. See `MPI_Gatherv` below for more general cases.\n",
+  "</div>\n",
+  "\n",
+  "\n",
   "In Julia:\n",
   "```julia\n",
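The gather described in this hunk can be sketched as follows. This is a minimal sketch assuming MPI.jl's convenience method `MPI.Gather(obj, comm; root=0)`, which returns a vector on the root and `nothing` elsewhere; run under `mpiexec`.

```julia
# Sketch: every rank contributes one Int; the root collects them in a vector.
using MPI
MPI.Init()
comm = MPI.COMM_WORLD
rank = MPI.Comm_rank(comm)
snd = 10 * rank                       # same amount of data (one Int) per rank
rcv = MPI.Gather(snd, comm; root=0)   # Vector on the root, nothing on other ranks
if rank == 0
    println("Gathered: $rcv")
end
```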
@@ -487,7 +492,7 @@
  "source": [
   "### Example\n",
   "\n",
-  "Each process sends a random amount of integers to rank 0."
+  "Each process sends a random number of integers to rank 0. The root process does not know in advance how much data will be gathered from each rank. We need an auxiliary gather to communicate the message sizes first."
  ]
 },
 {
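The auxiliary-gather pattern described in this hunk can be sketched as follows, assuming MPI.jl's `MPI.Gatherv!` and `MPI.VBuffer` API for variable-size gathers; run under `mpiexec` with several ranks.

```julia
# Sketch: a small gather of the message sizes, followed by a variable-size
# MPI_Gatherv of the actual data.
using MPI
MPI.Init()
comm = MPI.COMM_WORLD
rank = MPI.Comm_rank(comm)
root = 0
msg = rand(1:5, rand(1:4))   # a random number of integers on each rank
# Auxiliary gather: tell the root how many items each rank will send.
counts = MPI.Gather(length(msg), comm; root=root)
if rank == root
    output = similar(msg, sum(counts))        # allocate exactly the right size
    MPI.Gatherv!(msg, MPI.VBuffer(output, counts), comm; root=root)
    println("Received: $output")
else
    MPI.Gatherv!(msg, nothing, comm; root=root)  # non-root ranks pass no receive buffer
end
```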
@@ -898,6 +903,24 @@
   "After learning this material and the previous MPI notebook, you have a solid basis to start implementing sophisticated parallel algorithms using MPI."
  ]
 },
+{
+ "cell_type": "markdown",
+ "id": "843b40cd",
+ "metadata": {},
+ "source": [
+  "## Exercises"
+ ]
+},
+{
+ "cell_type": "markdown",
+ "id": "5c2045d9",
+ "metadata": {},
+ "source": [
+  "### Exercise 1\n",
+  "\n",
+  "Implement the parallel matrix-matrix multiplication (Algorithm 3) using MPI collectives instead of point-to-point communication. That is, this is the same exercise as in the previous notebook, but using different functions for communication."
+ ]
+},
 {
  "cell_type": "markdown",
  "id": "5e8f6e6a",