Compare commits

23 Commits

Author SHA1 Message Date
Francesc Verdugo
8b9029c93f Merge pull request #72 from jvdtoorn/fix/mpi_collectives-deadlock-comm_dup
Some checks failed
CI / Julia 1.9 - ubuntu-latest - x64 - push (push) Failing after 1m34s
CI / Documentation (push) Successful in 6m32s
2025-09-24 07:34:23 +02:00
Jules van der Toorn
eb8cd31240 fix(mpi_collectives): prevent potential deadlock in MPI_Comm_dup example 2025-09-16 17:00:18 +02:00
Francesc Verdugo
62fcf5ae2e Merge pull request #71 from fverdugo/2025-26
Change julia kernel version for MPI collectives notebook
2025-09-15 17:30:47 +02:00
Francesc Verdugo
7cae58c2e7 Change julia kernel version for MPI collectives notebook 2025-09-15 17:14:58 +02:00
Francesc Verdugo
788d7f39d0 Merge pull request #70 from fverdugo/2025-26
Small fixes for lecture 03
2025-09-08 12:04:41 +02:00
Francesc Verdugo
ae6e14bc62 Enhancing solutions for julia_distributed notebook. 2025-09-08 11:30:14 +02:00
Francesc Verdugo
aa1b5ce0d7 Fix small explanation in Julia async 2025-09-08 10:25:20 +02:00
Francesc Verdugo
42a485560e Merge pull request #69 from fverdugo/2025-26
Rephrase exercise 3 in notebook julia_basics
2025-09-05 15:18:16 +02:00
Francesc Verdugo
74b41c059e Adding figure 2025-09-05 15:13:56 +02:00
Francesc Verdugo
388a8d9f5a Rephrase exercise 3 in notebook julia_basics 2025-09-05 15:12:46 +02:00
Francesc Verdugo
e7b6ba8407 Merge pull request #68 from MaartM/main
Typo in julia_basics
2025-09-04 16:23:17 +02:00
LowkeyLoki-0
ac8a44f6ab Fix typo in julia_basics description of higher-order functions 2025-09-03 15:59:17 +02:00
Francesc Verdugo
57c8db52eb Merge pull request #67 from fverdugo/2025-26
Changes for 2025-26
2025-08-27 11:42:34 +02:00
Francesc Verdugo
50cb8fff17 Update notebooks to Julia 1.11.6 2025-08-27 10:52:14 +02:00
Francesc Verdugo
470bb36cc6 Minor improvements in the tutorial 2025-08-27 09:51:03 +02:00
Francesc Verdugo
072476ec46 Merge pull request #66 from abelsiqueira/fix-bestie-url
Update BestieTemplate.jl URL to JuliaBesties
2024-10-07 14:12:11 +02:00
Abel Soares Siqueira
4f4b7fa430 Update BestieTemplate.jl URL to JuliaBesties
This is a semi-automated PR.
BestieTemplate.jl has been moved to the JuliaBesties organization.
This updates the URL in .copier-answers.yml to point to the new location.
2024-10-07 13:29:20 +02:00
Francesc Verdugo
20c92dc92b Merge pull request #65 from VictorianHues/main 2024-10-01 07:33:57 +02:00
VictorianHues
82cfa1d44b Miscellaneous typos fixed 2024-09-30 23:14:53 +02:00
Francesc Verdugo
024429bceb Merge pull request #64 from fverdugo/francesc
Minor in tsp
2024-09-30 17:22:33 +02:00
Francesc Verdugo
5835451687 Merge pull request #63 from fverdugo/francesc
Fixing missing loop over k in LEQ
2024-09-27 13:29:55 +02:00
Francesc Verdugo
e4eea0da0a Merge pull request #62 from fverdugo/francesc
More work for ASP
2024-09-26 11:13:52 +02:00
Francesc Verdugo
08cfd87856 Merge pull request #61 from fverdugo/francesc
Polishing TSP notebook.
2024-09-25 14:30:06 +02:00
14 changed files with 7464 additions and 135 deletions


@@ -210,11 +210,11 @@ To install a package, we need to enter *package* mode. Remember that we entered
```julia
julia> ]
```
-At this point, the prompt should have changed to `(@v1.10) pkg>` indicating that we are in package mode. The text between the parentheses indicates which is the active *project*, i.e., where packages are going to be installed. In this case, we are working with the global project associated with our Julia installation (which is Julia 1.10 in this example, but it can be another version in your case).
+At this point, the prompt should have changed to `(@v1.11) pkg>` indicating that we are in package mode. The text between the parentheses indicates which is the active *project*, i.e., where packages are going to be installed. In this case, we are working with the global project associated with our Julia installation (which is Julia 1.11 in this example, but it can be another version in your case).
To install the MPI package, type
```julia
-(@v1.10) pkg> add MPI
+(@v1.11) pkg> add MPI
```
Congrats, you have installed MPI!
@@ -222,7 +222,8 @@ Congrats, you have installed MPI!
Many Julia package names end with `.jl`. This is just a way of signaling that a package is written in Julia. When using such packages, the `.jl` needs to be omitted. In this case, we have installed the `MPI.jl` package even though we have only typed `MPI` in the REPL.
!!! note
-    The package you have installed is the Julia interface to MPI, called `MPI.jl`. Note that it is not a MPI library by itself. It is just a thin wrapper between MPI and Julia. To use this interface, you need an actual MPI library installed in your system such as OpenMPI or MPICH. Julia downloads and installs a MPI library for you, but it is also possible to use a MPI library already available in your system. This is useful, e.g., when running on HPC clusters. See the [documentation](https://juliaparallel.org/MPI.jl/stable/configuration/) of `MPI.jl` for further details.
+    The package you have installed is the Julia interface to MPI, called `MPI.jl`. Note that it is not an MPI library by itself. It is just a thin wrapper between MPI and Julia. To use this interface, you need an actual MPI library installed in your system such as OpenMPI or MPICH. Julia downloads and installs an MPI library for you, but it is also possible to use an MPI library already available in your system. This is useful, e.g., when running on HPC clusters. See the [documentation](https://juliaparallel.org/MPI.jl/stable/configuration/) of `MPI.jl` for further details.
To check that the package was installed properly, exit package mode by pressing the backspace key several times, and run it again
@@ -241,7 +242,7 @@ $ mpiexec -np 4 julia hello_mpi.jl
But it will probably not work since the version of `mpiexec` needs to match with the MPI version we are using from Julia. Don't worry if you could not make it work! A more elegant way to run MPI code is from the Julia REPL directly, by using these commands:
```julia
julia> using MPI
-julia> run(`$(mpiexec()) -np 4 julia hello_mpi.jl`)
+julia> run(`$(mpiexec()) -np 4 julia hello_mpi.jl`);
```
Now, you should see output from 4 ranks.
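For reference, a minimal sketch of what `hello_mpi.jl` could contain. The file itself is not part of this diff, so the body below is an assumption; the `MPI.jl` calls used are standard:

```julia
# hello_mpi.jl: hypothetical contents, for illustration only
using MPI
MPI.Init()
comm = MPI.COMM_WORLD          # the communicator holding all ranks
rank = MPI.Comm_rank(comm)     # id of this process, starting at 0
nranks = MPI.Comm_size(comm)   # total number of processes
println("Hello from rank $rank of $nranks")
```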
@@ -254,7 +255,7 @@ We have installed the `MPI` package globally and it will be available in all Jul
A project is simply a folder in your file system. To use a particular folder as your project, you need to *activate* it. This is done by entering package mode and using the `activate` command followed by the path to the folder you want to activate.
```julia
-(@v1.10) pkg> activate .
+(@v1.11) pkg> activate .
```
The previous command will activate the current working directory. Note that the dot `.` is indeed the path to the current folder.
@@ -264,7 +265,7 @@ The prompt has changed to `(lessons) pkg>` indicating that we are in the project
You can activate a project directly when opening Julia from the terminal using the `--project` flag. The command `$ julia --project=.` will open Julia and activate a project in the current directory. You can also achieve the same effect by setting the environment variable `JULIA_PROJECT` with the path of the folder you want to activate.
!!! note
-    The active project folder and the current working directory are two independent concepts! For instance, `(@v1.10) pkg> activate folderB` and then `julia> cd("folderA")`, will activate the project in `folderB` and change the current working directory to `folderA`.
+    The active project folder and the current working directory are two independent concepts! For instance, `(@v1.11) pkg> activate folderB` and then `julia> cd("folderA")`, will activate the project in `folderB` and change the current working directory to `folderA`.
At this point all package-related operations will be local to the new project. For instance, install the `DataFrames` package.
@@ -282,7 +283,7 @@ Now, we can return to the global project to check that `DataFrames` has not been
```julia
(lessons) pkg> activate
```
-The prompt is again `(@v1.10) pkg>`
+The prompt is again `(@v1.11) pkg>`
Now, try to use `DataFrames`.
@@ -306,13 +307,13 @@ In other words, `Project.toml` contains the packages relevant for the user, wher
You can see the path to the current `Project.toml` file by using the `status` operator (or `st` in its short form) while in package mode
```julia
-(@v1.10) pkg> status
+(@v1.11) pkg> status
```
The information about the `Manifest.toml` can be inspected by passing the `-m` flag.
```julia
-(@v1.10) pkg> status -m
+(@v1.11) pkg> status -m
```
### Installing packages from a project file
@@ -336,7 +337,7 @@ julia> mkdir("newproject")
To install all the packages registered in this file you need to activate the folder containing your `Project.toml` file
```julia
-(@v1.10) pkg> activate newproject
+(@v1.11) pkg> activate newproject
```
and then *instantiating* it
```julia
@@ -350,12 +351,12 @@ The instantiate command will download and install all listed packages and their
You can get help about a particular package operator by writing `help` in front of it
```julia
-(@v1.10) pkg> help activate
+(@v1.11) pkg> help activate
```
You can get an overview of all package commands by typing `help` alone
```julia
-(@v1.10) pkg> help
+(@v1.11) pkg> help
```
### Package operations in Julia code
@@ -368,7 +369,7 @@ julia> Pkg.status()
```
is equivalent to calling `status` in package mode.
```julia
-(@v1.10) pkg> status
+(@v1.11) pkg> status
```
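As a sketch of the same operations through the `Pkg` API (the session below is illustrative, reusing names from earlier in this tutorial):

```julia
# Each Pkg function mirrors the corresponding package-mode command.
using Pkg
Pkg.activate("newproject")   # same as: (@v1.11) pkg> activate newproject
Pkg.add("DataFrames")        # same as: (newproject) pkg> add DataFrames
Pkg.status()                 # same as: (newproject) pkg> status
```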
### Creating you own package
@@ -379,7 +380,7 @@ or if you want to eventually [register your package](https://github.com/JuliaReg
The simplest way of generating a package (called `MyPackage`) is as follows. Open Julia, go to package mode, and type
```julia
-(@v1.10) pkg> generate MyPackage
+(@v1.11) pkg> generate MyPackage
```
This will crate a minimal package consisting of a new folder `MyPackage` with two files:
@@ -389,13 +390,13 @@ This will crate a minimal package consisting of a new folder `MyPackage` with tw
!!! tip
    This approach only generates a very minimal package. To create a more sophisticated package skeleton (including unit testing, code coverage, readme file, licence, etc.) use
-    [`PkgTemplates.jl`](https://github.com/JuliaCI/PkgTemplates.jl) or [`BestieTemplate.jl`](https://github.com/abelsiqueira/BestieTemplate.jl). The later one is developed in Amsterdam at the
+    [`PkgTemplates.jl`](https://github.com/JuliaCI/PkgTemplates.jl) or [`BestieTemplate.jl`](https://github.com/JuliaBesties/BestieTemplate.jl). The later one is developed in Amsterdam at the
    [Netherlands eScience Center](https://www.esciencecenter.nl/).
You can add dependencies to the package by activating the `MyPackage` folder in package mode and adding new dependencies as always:
```julia
-(@v1.10) pkg> activate MyPackage
+(@v1.11) pkg> activate MyPackage
(MyPackage) pkg> add MPI
```
@@ -406,7 +407,7 @@ This will add MPI to your package dependencies.
To use your package you first need to add it to a package environment of your choice. This is done by changing to package mode and typing `develop ` followed by the path to the folder containing the package. For instance:
```julia
-(@v1.10) pkg> develop MyPackage
+(@v1.11) pkg> develop MyPackage
```
!!! note


@@ -10,7 +10,7 @@ Welcome to the interactive lecture notes of the [Programming Large-Scale Paralle
This page contains part of the course material of the Programming Large-Scale Parallel Systems course at VU Amsterdam.
We provide several lecture notes in jupyter notebook format, which will help you to learn how to design, analyze, and program parallel algorithms on multi-node computing systems.
Further information about the course is found in the study guide
-([click here](https://studiegids.vu.nl/EN/courses/2023-2024/XM_40017#/)) and our Canvas page (for registered students).
+([click here](https://studiegids.vu.nl/en/vakken/2025-2026/XM_40017#/)) and our Canvas page (for registered students).
!!! note
    Material will be added incrementally to the website as the course advances.


@@ -27,12 +27,17 @@ ex2(f,g) = x -> f(x) + g(x)
### Exercise 3
```julia
-using GLMakie
-max_iters = 100
n = 1000
x = LinRange(-1.7,0.7,n)
y = LinRange(-1.2,1.2,n)
-heatmap(x,y,(i,j)->mandel(i,j,max_iters))
+values = zeros(n,n)
+for j in 1:n
+    for i in 1:n
+        values[i,j] = surprise(x[i],y[j])
+    end
+end
+using GLMakie
+heatmap(x,y,values)
```
## Asynchronous programming in Julia
@@ -43,11 +48,12 @@ heatmap(x,y,(i,j)->mandel(i,j,max_iters))
```julia
f = () -> Channel{Int}(1)
-chnls = [ RemoteChannel(f,w) for w in workers() ]
-@sync for (iw,w) in enumerate(workers())
+worker_ids = workers()
+chnls = [ RemoteChannel(f,w) for w in worker_ids ]
+@sync for (iw,w) in enumerate(worker_ids)
    @spawnat w begin
        chnl_snd = chnls[iw]
-        if w == 2
+        if iw == 1
            chnl_rcv = chnls[end]
            msg = 2
            println("msg = $msg")
@@ -65,23 +71,26 @@ chnls = [ RemoteChannel(f,w) for w in workers() ]
end
```
-This is another possible solution.
+This is another possible solution that does not use remote channels.
```julia
-@everywhere function work(msg)
+@everywhere function work(msg,iw,worker_ids)
    println("msg = $msg")
-    if myid() != nprocs()
-        next = myid() + 1
-        @fetchfrom next work(msg+1)
+    if iw < length(worker_ids)
+        inext = iw+1
+        next = worker_ids[iw+1]
+        @fetchfrom next work(msg+1,inext,worker_ids)
    else
-        @fetchfrom 2 println("msg = $msg")
+        @fetchfrom worker_ids[1] println("msg = $msg")
    end
+    return nothing
end
msg = 2
-@fetchfrom 2 work(msg)
+iw = 1
+worker_ids = workers()
+@fetchfrom worker_ids[iw] work(msg,iw,worker_ids)
```
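Both solutions assume a `Distributed` session with worker processes already added; a minimal setup sketch (the worker count is illustrative):

```julia
using Distributed
addprocs(4)    # spawn 4 worker processes
workers()      # their ids, e.g. [2, 3, 4, 5]
```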
## Matrix-matrix multiplication
### Exercise 1


@@ -103,7 +103,7 @@
"### Problem statement\n",
"\n",
"Let us consider a system of linear equations written in matrix form $Ax=b$, where $A$ is a nonsingular square matrix, and $x$ and $b$ are vectors. $A$ and $b$ are given, and $x$ is unknown. The goal of Gaussian elimination is to transform the system $Ax=b$, into a new system $Ux=c$ such that\n",
-"- both system have the same solution vector $x$,\n",
+"- both systems have the same solution vector $x$,\n",
"- the matrix $U$ of the new system is *upper triangular* with unit diagonal, namely $U_{ii} = 1$ and $U_{ij} = 0$ for $i>j$.\n",
"\n",
"\n",
@@ -398,7 +398,7 @@
"source": [
"### Data partition\n",
"\n",
-"Let start considering a row-wise block partition, as we did in previous algorithms.\n",
+"Let's start considering a row-wise block partition, as we did in previous algorithms.\n",
"\n",
"In the figure below, we use different colors to illustrate which entries are assigned to a CPU. All entries with the same color are assigned to the same CPU."
]
@@ -454,7 +454,7 @@
"<b>Definition:</b> *Load imbalance*: is the problem when work is not equally distributed over all processes and consequently some processes do more work than others.\n",
"</div>\n",
"\n",
-"Having processors waiting for others is a waist of computational resources and affects negatively parallel speedups. The optimal speedup (speedup equal to the number of processors) assumes that the work is perfectly parallel and that it is evenly distributed. If there is load imbalance, the last assumption is not true anymore and the speedup will be suboptimal.\n"
+"Having processors waiting for others is a waste of computational resources and affects negatively parallel speedups. The optimal speedup (speedup equal to the number of processors) assumes that the work is perfectly parallel and that it is evenly distributed. If there is load imbalance, the last assumption is not true anymore and the speedup will be suboptimal.\n"
]
},
{
@@ -620,15 +620,15 @@
],
"metadata": {
"kernelspec": {
-"display_name": "Julia 1.9.0",
+"display_name": "Julia 1.11.6",
"language": "julia",
-"name": "julia-1.9"
+"name": "julia-1.11"
},
"language_info": {
"file_extension": ".jl",
"mimetype": "application/julia",
"name": "julia",
-"version": "1.9.0"
+"version": "1.11.6"
}
},
"nbformat": 4,


@@ -57,7 +57,7 @@
"function q1_answer(bool)\n",
" bool || return\n",
" msg = \"\"\"\n",
-" The we can change the loop order over i and j without changing the result. Rememeber:\n",
+" Then we can change the loop order over i and j without changing the result. Remember:\n",
" \n",
" C[i,j] = min(C[i,j],C[i,k]+C[k,j])\n",
" \n",
@@ -788,7 +788,7 @@
" if rank == 0\n",
" N = size(C,1)\n",
" if mod(N,P) !=0\n",
-" println(\"N not multplie of P\")\n",
+" println(\"N not multiple of P\")\n",
" MPI.Abort(comm,-1)\n",
" end\n",
" Nref = Ref(N)\n",
@@ -1131,15 +1131,15 @@
],
"metadata": {
"kernelspec": {
-"display_name": "Julia 1.9.0",
+"display_name": "Julia 1.11.6",
"language": "julia",
-"name": "julia-1.9"
+"name": "julia-1.11"
},
"language_info": {
"file_extension": ".jl",
"mimetype": "application/julia",
"name": "julia",
-"version": "1.9.0"
+"version": "1.11.6"
}
},
"nbformat": 4,

notebooks/figures/mandel.svg: new file (7324 lines added, 440 KiB). File diff suppressed because it is too large.


@@ -27,7 +27,7 @@
"\n",
"In this notebook, we will learn\n",
"\n",
-"- How to paralleize the Jacobi method\n",
+"- How to parallelize the Jacobi method\n",
"- How the data partition can impact the performance of a distributed algorithm\n",
"- How to use latency hiding to improve parallel performance\n",
"\n"
@@ -452,7 +452,7 @@
"- We need to get remote entries from 2 neighbors (2 messages per iteration)\n",
"- We need to communicate 1 entry per message\n",
"- Thus, communication complexity is $O(1)$\n",
-"- Communication/computation ration is $O(P/N)$, making the algorithm potentially scalable if $P<<N$.\n"
+"- Communication/computation ratio is $O(P/N)$, making the algorithm potentially scalable if $P<<N$.\n"
]
},
{
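As a quick sanity check of that ratio (a sketch, assuming each worker updates $N/P$ grid entries per iteration, as stated in the bullets above):

```math
\frac{\text{communication per worker}}{\text{computation per worker}}
= \frac{O(1)}{N/P} = O\!\left(\frac{P}{N}\right)
```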
@@ -655,7 +655,7 @@
"end\n",
"```\n",
"\n",
-"- The outer loop cannot be parallelized (like in the 1d case). \n",
+"- The outer loop cannot be parallelized (like in the 1D case). \n",
"- The two inner loops are trivially parallel\n"
]
},
@@ -666,7 +666,7 @@
"source": [
"### Parallelization strategies\n",
"\n",
-"In 2d one has more flexibility in order to distribute the data over the processes. We consider these three alternatives:\n",
+"In 2D, one has more flexibility in order to distribute the data over the processes. We consider these three alternatives:\n",
"\n",
"- 1D block row partition (each worker handles a subset of consecutive rows and all columns)\n",
"- 2D block partition (each worker handles a subset of consecutive rows and columns)\n",
@@ -848,9 +848,9 @@
"\n",
"|Partition | Messages <br> per iteration | Communication <br>per worker | Computation <br>per worker | Ratio communication/<br>computation |\n",
"|---|---|---|---|---|\n",
-"| 1d block | 2 | O(N) | N²/P | O(P/N) |\n",
-"| 2d block | 4 | O(N/√P) | N²/P | O(√P/N) |\n",
-"| 2d cyclic | 4 |O(N²/P) | N²/P | O(1) |"
+"| 1D block | 2 | O(N) | N²/P | O(P/N) |\n",
+"| 2D block | 4 | O(N/√P) | N²/P | O(√P/N) |\n",
+"| 2D cyclic | 4 |O(N²/P) | N²/P | O(1) |"
]
},
{
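The ratio column follows from the other two. For instance, for the 2D block partition each worker owns an $(N/\sqrt{P})\times(N/\sqrt{P})$ block and exchanges a halo of $O(N/\sqrt{P})$ entries with each of its 4 neighbors (a sketch consistent with the table above):

```math
\frac{O(N/\sqrt{P})}{N^2/P} = O\!\left(\frac{\sqrt{P}}{N}\right)
```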
@@ -862,9 +862,9 @@
"\n",
"\n",
"\n",
-"- Both 1d and 2d block partitions are potentially scalable if $P<<N$\n",
-"- The 2d block partition has the lowest communication complexity\n",
-"- The 1d block partition requires to send less messages (It can be useful if the fixed cost of sending a message is high)\n",
+"- Both 1D and 2D block partitions are potentially scalable if $P<<N$\n",
+"- The 2D block partition has the lowest communication complexity\n",
+"- The 1D block partition requires to send less messages (It can be useful if the fixed cost of sending a message is high)\n",
"- The best strategy for a given problem size will thus depend on the machine.\n",
"- Cyclic partitions are impractical for this application (but they are useful in others) \n",
"\n"
@@ -1932,15 +1932,15 @@
],
"metadata": {
"kernelspec": {
-"display_name": "Julia 1.10.0",
+"display_name": "Julia 1.11.6",
"language": "julia",
-"name": "julia-1.10"
+"name": "julia-1.11"
},
"language_info": {
"file_extension": ".jl",
"mimetype": "application/julia",
"name": "julia",
-"version": "1.10.0"
+"version": "1.11.6"
}
},
"nbformat": 4,


@@ -87,7 +87,7 @@
"\n",
"### Creating a task\n",
"\n",
-"Technically, a task in Julia is a *symmetric* [*co-routine*](https://en.wikipedia.org/wiki/Coroutine). More informally, a task is a piece of computational work that can be started (scheduled) at some point in the future, and that can be interrupted and resumed. To create a task, we first need to create a function that represents the work to be done in the task. In next cell, we generate a task that generates and sums two matrices."
+"Technically, a task in Julia is a *symmetric* [*co-routine*](https://en.wikipedia.org/wiki/Coroutine). More informally, a task is a piece of computational work that can be started (scheduled) at some point in the future, and that can be interrupted and resumed. To create a task, we first need to create a function that represents the work to be done in the task. In the next cell, we generate a task that generates and sums two matrices."
]
},
{
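The cell itself is not part of this diff; the following is a minimal sketch of the pattern it describes (the function body and matrix sizes are assumptions):

```julia
work() = sum(rand(1000,1000) + rand(1000,1000))  # generate and sum two matrices
t = Task(work)   # create the task; nothing runs yet
schedule(t)      # mark the task as ready to run
fetch(t)         # wait for completion and retrieve the result
```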
@@ -322,7 +322,7 @@
"source": [
"### `yield`\n",
"\n",
-"If tasks do not run in parallel, what is the purpose of tasks? Tasks are handy since they can be interrupted and to switch control to other tasks. This is achieved via function `yield`. When we call yield, we provide the opportunity to switch to another task. The function below is a variation of function `compute_π` in which we yield every 1000 iterations. At the call to yield we allow other tasks to take over. Without this call to yield, once we start function `compute_π` we cannot start any other tasks until this function finishes."
+"If tasks do not run in parallel, what is the purpose of tasks? Tasks are handy since they can be interrupted and to switch control to other tasks. This is achieved via function `yield`. When we call `yield`, we provide the opportunity to switch to another task. The function below is a variation of function `compute_π` in which we `yield` every 1000 iterations. At the call to `yield` we allow other tasks to take over. Without this call to `yield`, once we start function `compute_π` we cannot start any other tasks until this function finishes."
]
},
{
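The notebook's `compute_π_yield` cell is not shown in this diff; the sketch below is a plausible reconstruction of the pattern, not the actual cell (the series used is an assumption):

```julia
# Leibniz series for π, yielding every 1000 iterations so other tasks can run.
function compute_π_yield(n)
    s = 1.0
    for i in 1:n
        s += (isodd(i) ? -1 : 1) / (2i + 1)
        i % 1000 == 0 && yield()   # give other scheduled tasks a chance to run
    end
    return 4 * s
end
```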
@@ -349,7 +349,7 @@
"id": "69fd4131",
"metadata": {},
"source": [
-"You can check this behavior experimentally with the two following cells. The next one creates and schedules a task that computes pi with the function `compute_π_yield`. Note that you can run the 2nd cell bellow while this task is running since we call to yield often inside `compute_π_yield`."
+"You can check this behavior experimentally with the two following cells. The next one creates and schedules a task that computes pi with the function `compute_π_yield`. Note that you can run the 2nd cell bellow while this task is running since we call to `yield` often inside `compute_π_yield`."
]
},
{
@@ -381,7 +381,7 @@
"source": [
"### Example: Implementing function sleep\n",
"\n",
-"Using yield, we can implement our own version of the sleep function as follows:"
+"Using `yield`, we can implement our own version of the sleep function as follows:"
]
},
{
@@ -738,7 +738,8 @@
"\n",
"- `put!` will wait for a `take!` if there is not space left in the channel's buffer.\n",
"- `take!` will wait for a `put!` if there is no data to be consumed in the channel.\n",
-"- Both `put!` and `take!` will raise an error if the channel is closed."
+"- `put!` will raise an error if the channel is closed.\n",
+"- `take!` will raise an error if the channel is closed *and* empty."
]
},
{
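A short sketch of the corrected rule above (standard `Base.Channel` behavior):

```julia
chnl = Channel{Int}(2)   # buffered channel with capacity 2
put!(chnl, 1)
put!(chnl, 2)
close(chnl)
take!(chnl)              # returns 1: closed but not yet empty, so take! still works
take!(chnl)              # returns 2
# take!(chnl)            # would throw: the channel is now closed *and* empty
# put!(chnl, 3)          # would throw: put! always errors on a closed channel
```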
@@ -1015,15 +1016,15 @@
],
"metadata": {
"kernelspec": {
-"display_name": "Julia 1.10.0",
+"display_name": "Julia 1.11.6",
"language": "julia",
-"name": "julia-1.10"
+"name": "julia-1.11"
},
"language_info": {
"file_extension": ".jl",
"mimetype": "application/julia",
"name": "julia",
-"version": "1.10.0"
+"version": "1.11.6"
}
},
"nbformat": 4,

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long


@@ -167,7 +167,7 @@
"```julia\n",
"using MPI\n",
"MPI.Init()\n",
-"# Your MPI programm here\n",
+"# Your MPI program here\n",
"MPI.Finalize() # Optional\n",
"```\n",
"\n",
@@ -176,7 +176,7 @@
"```julia\n",
"using MPI\n",
"MPI.Init(finalize_atexit=false)\n",
-"# Your MPI programm here\n",
+"# Your MPI program here\n",
"MPI.Finalize() # Mandatory\n",
"```\n",
"\n",
@@ -186,7 +186,7 @@
"#include <mpi.h>\n",
"int main(int argc, char** argv) {\n",
" MPI_Init(NULL, NULL);\n",
-" /* Your MPI Programm here */\n",
+" /* Your MPI Program here */\n",
" MPI_Finalize();\n",
"}\n",
"```\n",
@@ -612,7 +612,7 @@
"id": "4b455f98",
"metadata": {},
"source": [
-"So, the full MPI program needs to be in the source file passed to Julia or the quote block. In practice, long MPI programms are written as Julia packages using several files, which are then loaded by each MPI process. For our simple example, we just need to include the definition of `foo` inside the quote block."
+"So, the full MPI program needs to be in the source file passed to Julia or the quote block. In practice, long MPI programs are written as Julia packages using several files, which are then loaded by each MPI process. For our simple example, we just need to include the definition of `foo` inside the quote block."
]
},
{
@@ -920,7 +920,7 @@
" source = MPI.ANY_SOURCE\n",
" tag = MPI.ANY_TAG\n",
" status = MPI.Probe(comm,MPI.Status; source, tag)\n",
-" count = MPI.Get_count(status,Int) # Get incomming message length\n",
+" count = MPI.Get_count(status,Int) # Get incoming message length\n",
" println(\"I am about to receive $count integers.\")\n",
" rcvbuf = zeros(Int,count) # Allocate \n",
" MPI.Recv!(rcvbuf, comm, MPI.Status; source, tag)\n",
@@ -973,7 +973,7 @@
" if rank == 3\n",
" rcvbuf = zeros(Int,5)\n",
" MPI.Recv!(rcvbuf, comm, MPI.Status; source=2, tag=0)\n",
-" # recvbuf will have the incomming message fore sure. Recv! has returned.\n",
+" # recvbuf will have the incoming message fore sure. Recv! has returned.\n",
" @show rcvbuf\n",
" end\n",
"end\n",
@@ -1590,15 +1590,15 @@
],
"metadata": {
"kernelspec": {
-"display_name": "Julia 1.10.0",
+"display_name": "Julia 1.11.6",
"language": "julia",
-"name": "julia-1.10"
+"name": "julia-1.11"
},
"language_info": {
"file_extension": ".jl",
"mimetype": "application/julia",
"name": "julia",
-"version": "1.10.0"
+"version": "1.11.6"
}
},
"nbformat": 4,


@@ -293,7 +293,7 @@
"## Where can we exploit parallelism?\n",
"\n",
"\n",
-"The matrix-matrix multiplication is an example of [embarrassingly parallel algorithm](https://en.wikipedia.org/wiki/Embarrassingly_parallel). An embarrassingly parallel (also known as trivially parallel) algorithm is an algorithm that can be split in parallel tasks with no (or very few) dependences between them. Such algorithms are typically easy to parallelize.\n",
+"The matrix-matrix multiplication is an example of [embarrassingly parallel algorithm](https://en.wikipedia.org/wiki/Embarrassingly_parallel). An embarrassingly parallel (also known as trivially parallel) algorithm is an algorithm that can be split in parallel tasks with no (or very few) dependencies between them. Such algorithms are typically easy to parallelize.\n",
"\n",
"Which parts of an algorithm are completely independent and thus trivially parallel? To answer this question, it is useful to inspect the for loops, which are potential sources of parallelism. If the iterations are independent of each other, then they are trivial to parallelize. An easy check to find out if the iterations are dependent or not is to change their order (for instance changing `for j in 1:n` by `for j in n:-1:1`, i.e. doing the loop in reverse). If the result changes, then the iterations are not independent.\n",
"\n",
@@ -314,7 +314,7 @@
"Note that:\n",
"\n",
"- Loops over `i` and `j` are trivially parallel.\n",
-"- The loop over `k` is not trivially parallel. The accumulation into the reduction variable `Cij` introduces extra dependences. In addition, remember that the addition of floating point numbers is not strictly associative due to rounding errors. Thus, the result of this loop may change with the loop order when using floating point numbers. In any case, this loop can also be parallelized, but it requires a parallel *fold* or a parallel *reduction*.\n",
+"- The loop over `k` is not trivially parallel. The accumulation into the reduction variable `Cij` introduces extra dependencies. In addition, remember that the addition of floating point numbers is not strictly associative due to rounding errors. Thus, the result of this loop may change with the loop order when using floating point numbers. In any case, this loop can also be parallelized, but it requires a parallel *fold* or a parallel *reduction*.\n",
"\n"
]
},
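To make the loop-reversal check concrete, here is a small sketch (the kernel name and matrix sizes are illustrative, not taken from the notebook):

```julia
# Reversing the j loop leaves the result unchanged, so its iterations are independent.
function matmul_ij(A, B; reverse_j=false)
    m, n, l = size(A,1), size(B,2), size(A,2)
    C = zeros(m, n)
    for j in (reverse_j ? (n:-1:1) : (1:n)), i in 1:m
        Cij = 0.0
        for k in 1:l
            Cij += A[i,k] * B[k,j]   # same per-entry computation in either order
        end
        C[i,j] = Cij
    end
    C
end
A, B = rand(4,4), rand(4,4)
matmul_ij(A, B) == matmul_ij(A, B; reverse_j=true)  # true: the j loop is trivially parallel
```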
@@ -1138,15 +1138,15 @@
],
"metadata": {
"kernelspec": {
-"display_name": "Julia 1.10.0",
+"display_name": "Julia 1.11.6",
"language": "julia",
-"name": "julia-1.10"
+"name": "julia-1.11"
},
"language_info": {
"file_extension": ".jl",
"mimetype": "application/julia",
"name": "julia",
-"version": "1.10.0"
+"version": "1.11.6"
}
},
"nbformat": 4,


@@ -771,9 +771,11 @@
" rank = MPI.Comm_rank(comm)\n",
" if rank == 2\n",
" sndbuf = [2]\n",
-" MPI.Send(sndbuf, comm2; dest=3, tag=0)\n",
+" req1 = MPI.Isend(sndbuf, comm2; dest=3, tag=0)\n",
" sndbuf = [1]\n",
-" MPI.Send(sndbuf, comm; dest=3, tag=0)\n",
+" req2 = MPI.Isend(sndbuf, comm; dest=3, tag=0)\n",
+" MPI.Wait(req2)\n",
+" MPI.Wait(req1)\n",
" end\n",
" if rank == 3\n",
" rcvbuf = zeros(Int,1)\n",
@@ -944,15 +946,15 @@
],
"metadata": {
"kernelspec": {
-"display_name": "Julia 1.9.0",
+"display_name": "Julia 1.11.6",
"language": "julia",
-"name": "julia-1.9"
+"name": "julia-1.11"
},
"language_info": {
"file_extension": ".jl",
"mimetype": "application/julia",
"name": "julia",
-"version": "1.9.0"
+"version": "1.11.6"
}
},
"nbformat": 4,


@@ -1217,15 +1217,15 @@
],
"metadata": {
"kernelspec": {
-"display_name": "Julia 1.10.0",
+"display_name": "Julia 1.11.6",
"language": "julia",
-"name": "julia-1.10"
+"name": "julia-1.11"
},
"language_info": {
"file_extension": ".jl",
"mimetype": "application/julia",
"name": "julia",
-"version": "1.10.0"
+"version": "1.11.6"
}
},
"nbformat": 4,