mirror of
https://github.com/fverdugo/XM_40017.git
synced 2025-12-29 10:18:31 +01:00
Compare commits: 23 commits by francesc, head `8b9029c93f`

Commit SHAs: `8b9029c93f`, `eb8cd31240`, `62fcf5ae2e`, `7cae58c2e7`, `788d7f39d0`, `ae6e14bc62`, `aa1b5ce0d7`, `42a485560e`, `74b41c059e`, `388a8d9f5a`, `e7b6ba8407`, `ac8a44f6ab`, `57c8db52eb`, `50cb8fff17`, `470bb36cc6`, `072476ec46`, `4f4b7fa430`, `20c92dc92b`, `82cfa1d44b`, `024429bceb`, `5835451687`, `e4eea0da0a`, `08cfd87856`
@@ -210,11 +210,11 @@ To install a package, we need to enter *package* mode. Remember that we entered
 ```julia
 julia> ]
 ```
-At this point, the prompt should have changed to `(@v1.10) pkg>` indicating that we are in package mode. The text between the parentheses indicates which is the active *project*, i.e., where packages are going to be installed. In this case, we are working with the global project associated with our Julia installation (which is Julia 1.10 in this example, but it can be another version in your case).
+At this point, the prompt should have changed to `(@v1.11) pkg>` indicating that we are in package mode. The text between the parentheses indicates which is the active *project*, i.e., where packages are going to be installed. In this case, we are working with the global project associated with our Julia installation (which is Julia 1.11 in this example, but it can be another version in your case).
 
 To install the MPI package, type
 ```julia
-(@v1.10) pkg> add MPI
+(@v1.11) pkg> add MPI
 ```
 Congrats, you have installed MPI!
 
@@ -222,7 +222,8 @@ Congrats, you have installed MPI!
 Many Julia package names end with `.jl`. This is just a way of signaling that a package is written in Julia. When using such packages, the `.jl` needs to be omitted. In this case, we have installed the `MPI.jl` package even though we have only typed `MPI` in the REPL.
 
 !!! note
-    The package you have installed is the Julia interface to MPI, called `MPI.jl`. Note that it is not a MPI library by itself. It is just a thin wrapper between MPI and Julia. To use this interface, you need an actual MPI library installed in your system such as OpenMPI or MPICH. Julia downloads and installs a MPI library for you, but it is also possible to use a MPI library already available in your system. This is useful, e.g., when running on HPC clusters. See the [documentation](https://juliaparallel.org/MPI.jl/stable/configuration/) of `MPI.jl` for further details.
+    The package you have installed is the Julia interface to MPI, called `MPI.jl`. Note that it is not an MPI library by itself. It is just a thin wrapper between MPI and Julia. To use this interface, you need an actual MPI library installed in your system such as OpenMPI or MPICH. Julia downloads and installs an MPI library for you, but it is also possible to use an MPI library already available in your system. This is useful, e.g., when running on HPC clusters. See the [documentation](https://juliaparallel.org/MPI.jl/stable/configuration/) of `MPI.jl` for further details.
 
 
 To check that the package was installed properly, exit package mode by pressing the backspace key several times, and run it again
 
@@ -241,7 +242,7 @@ $ mpiexec -np 4 julia hello_mpi.jl
 But it will probably not work since the version of `mpiexec` needs to match the MPI version we are using from Julia. Don't worry if you could not make it work! A more elegant way to run MPI code is from the Julia REPL directly, by using these commands:
 ```julia
 julia> using MPI
-julia> run(`$(mpiexec()) -np 4 julia hello_mpi.jl`)
+julia> run(`$(mpiexec()) -np 4 julia hello_mpi.jl`);
 ```
 
 Now, you should see output from 4 ranks.
 
@@ -254,7 +255,7 @@ We have installed the `MPI` package globally and it will be available in all Jul
 
 A project is simply a folder in your file system. To use a particular folder as your project, you need to *activate* it. This is done by entering package mode and using the `activate` command followed by the path to the folder you want to activate.
 ```julia
-(@v1.10) pkg> activate .
+(@v1.11) pkg> activate .
 ```
 The previous command will activate the current working directory. Note that the dot `.` is indeed the path to the current folder.
 
@@ -264,7 +265,7 @@ The prompt has changed to `(lessons) pkg>` indicating that we are in the project
 You can activate a project directly when opening Julia from the terminal using the `--project` flag. The command `$ julia --project=.` will open Julia and activate a project in the current directory. You can also achieve the same effect by setting the environment variable `JULIA_PROJECT` with the path of the folder you want to activate.
 
 !!! note
-    The active project folder and the current working directory are two independent concepts! For instance, `(@v1.10) pkg> activate folderB` and then `julia> cd("folderA")`, will activate the project in `folderB` and change the current working directory to `folderA`.
+    The active project folder and the current working directory are two independent concepts! For instance, `(@v1.11) pkg> activate folderB` and then `julia> cd("folderA")`, will activate the project in `folderB` and change the current working directory to `folderA`.
 
 At this point all package-related operations will be local to the new project. For instance, install the `DataFrames` package.
 
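The independence of the two concepts can be checked from plain Julia code. A minimal sketch, assuming two temporary folders stand in for the hypothetical `folderA` and `folderB`:

```julia
using Pkg

dirB = mktempdir()  # stands in for folderB
dirA = mktempdir()  # stands in for folderA

Pkg.activate(dirB)  # sets the active project...
cd(dirA)            # ...while this only changes the working directory

@assert realpath(pwd()) == realpath(dirA)
@assert realpath(dirname(Pkg.project().path)) == realpath(dirB)
```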
@@ -282,7 +283,7 @@ Now, we can return to the global project to check that `DataFrames` has not been
 ```julia
 (lessons) pkg> activate
 ```
-The prompt is again `(@v1.10) pkg>`
+The prompt is again `(@v1.11) pkg>`
 
 Now, try to use `DataFrames`.
 
@@ -306,13 +307,13 @@ In other words, `Project.toml` contains the packages relevant for the user, wher
 You can see the path to the current `Project.toml` file by using the `status` operator (or `st` in its short form) while in package mode
 
 ```julia
-(@v1.10) pkg> status
+(@v1.11) pkg> status
 ```
 
 The information about the `Manifest.toml` can be inspected by passing the `-m` flag.
 
 ```julia
-(@v1.10) pkg> status -m
+(@v1.11) pkg> status -m
 ```
 
 ### Installing packages from a project file
 
@@ -336,7 +337,7 @@ julia> mkdir("newproject")
 
 To install all the packages registered in this file you need to activate the folder containing your `Project.toml` file
 ```julia
-(@v1.10) pkg> activate newproject
+(@v1.11) pkg> activate newproject
 ```
 and then *instantiate* it
 ```julia
@@ -350,12 +351,12 @@ The instantiate command will download and install all listed packages and their
 You can get help about a particular package operator by writing `help` in front of it
 
 ```julia
-(@v1.10) pkg> help activate
+(@v1.11) pkg> help activate
 ```
 
 You can get an overview of all package commands by typing `help` alone
 ```julia
-(@v1.10) pkg> help
+(@v1.11) pkg> help
 ```
 
 ### Package operations in Julia code
 
@@ -368,7 +369,7 @@ julia> Pkg.status()
 ```
 is equivalent to calling `status` in package mode.
 ```julia
-(@v1.10) pkg> status
+(@v1.11) pkg> status
 ```
 
 ### Creating your own package
 
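The same equivalence can be tried end to end from plain Julia code. A hedged sketch, driving `activate` and `status` through the `Pkg` API on a throwaway temporary folder (the folder itself is made up for illustration):

```julia
using Pkg

dir = mktempdir()  # a throwaway project folder
Pkg.activate(dir)  # same effect as `pkg> activate <dir>`
Pkg.status()       # same effect as `pkg> status`; prints an empty project

@assert endswith(Pkg.project().path, "Project.toml")
```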
@@ -379,7 +380,7 @@ or if you want to eventually [register your package](https://github.com/JuliaReg
 The simplest way of generating a package (called `MyPackage`) is as follows. Open Julia, go to package mode, and type
 
 ```julia
-(@v1.10) pkg> generate MyPackage
+(@v1.11) pkg> generate MyPackage
 ```
 
 This will create a minimal package consisting of a new folder `MyPackage` with two files:
@@ -389,13 +390,13 @@ This will create a minimal package consisting of a new folder `MyPackage` with tw
 
 !!! tip
     This approach only generates a very minimal package. To create a more sophisticated package skeleton (including unit testing, code coverage, readme file, licence, etc.) use
-    [`PkgTemplates.jl`](https://github.com/JuliaCI/PkgTemplates.jl) or [`BestieTemplate.jl`](https://github.com/abelsiqueira/BestieTemplate.jl). The latter one is developed in Amsterdam at the
+    [`PkgTemplates.jl`](https://github.com/JuliaCI/PkgTemplates.jl) or [`BestieTemplate.jl`](https://github.com/JuliaBesties/BestieTemplate.jl). The latter one is developed in Amsterdam at the
     [Netherlands eScience Center](https://www.esciencecenter.nl/).
 
 You can add dependencies to the package by activating the `MyPackage` folder in package mode and adding new dependencies as always:
 
 ```julia
-(@v1.10) pkg> activate MyPackage
+(@v1.11) pkg> activate MyPackage
 (MyPackage) pkg> add MPI
 ```
 
@@ -406,7 +407,7 @@ This will add MPI to your package dependencies.
 To use your package you first need to add it to a package environment of your choice. This is done by changing to package mode and typing `develop` followed by the path to the folder containing the package. For instance:
 
 ```julia
-(@v1.10) pkg> develop MyPackage
+(@v1.11) pkg> develop MyPackage
 ```
 
 !!! note
 
@@ -10,7 +10,7 @@ Welcome to the interactive lecture notes of the [Programming Large-Scale Paralle
 This page contains part of the course material of the Programming Large-Scale Parallel Systems course at VU Amsterdam.
 We provide several lecture notes in Jupyter notebook format, which will help you learn how to design, analyze, and program parallel algorithms on multi-node computing systems.
 Further information about the course can be found in the study guide
-([click here](https://studiegids.vu.nl/EN/courses/2023-2024/XM_40017#/)) and our Canvas page (for registered students).
+([click here](https://studiegids.vu.nl/en/vakken/2025-2026/XM_40017#/)) and our Canvas page (for registered students).
 
 !!! note
     Material will be added incrementally to the website as the course advances.
 
@@ -27,12 +27,17 @@ ex2(f,g) = x -> f(x) + g(x)
 ### Exercise 3
 
 ```julia
-using GLMakie
 max_iters = 100
 n = 1000
 x = LinRange(-1.7,0.7,n)
 y = LinRange(-1.2,1.2,n)
-heatmap(x,y,(i,j)->mandel(i,j,max_iters))
+values = zeros(n,n)
+for j in 1:n
+    for i in 1:n
+        values[i,j] = surprise(x[i],y[j])
+    end
+end
+using GLMakie
+heatmap(x,y,values)
 ```
 
 ## Asynchronous programming in Julia
 
@@ -43,11 +48,12 @@ heatmap(x,y,(i,j)->mandel(i,j,max_iters))
 
 ```julia
 f = () -> Channel{Int}(1)
-chnls = [ RemoteChannel(f,w) for w in workers() ]
-@sync for (iw,w) in enumerate(workers())
+worker_ids = workers()
+chnls = [ RemoteChannel(f,w) for w in worker_ids ]
+@sync for (iw,w) in enumerate(worker_ids)
     @spawnat w begin
         chnl_snd = chnls[iw]
-        if w == 2
+        if iw == 1
             chnl_rcv = chnls[end]
             msg = 2
             println("msg = $msg")
@@ -65,23 +71,26 @@ chnls = [ RemoteChannel(f,w) for w in workers() ]
 end
 ```
 
-This is another possible solution.
+This is another possible solution that does not use remote channels.
 
 ```julia
-@everywhere function work(msg)
+@everywhere function work(msg,iw,worker_ids)
     println("msg = $msg")
-    if myid() != nprocs()
-        next = myid() + 1
-        @fetchfrom next work(msg+1)
+    if iw < length(worker_ids)
+        inext = iw+1
+        next = worker_ids[iw+1]
+        @fetchfrom next work(msg+1,inext,worker_ids)
     else
-        @fetchfrom 2 println("msg = $msg")
+        @fetchfrom worker_ids[1] println("msg = $msg")
    end
    return nothing
 end
 msg = 2
-@fetchfrom 2 work(msg)
+iw = 1
+worker_ids = workers()
+@fetchfrom worker_ids[iw] work(msg,iw,worker_ids)
 ```
 
 ## Matrix-matrix multiplication
 
 ### Exercise 1
 
@@ -103,7 +103,7 @@
 "### Problem statement\n",
 "\n",
 "Let us consider a system of linear equations written in matrix form $Ax=b$, where $A$ is a nonsingular square matrix, and $x$ and $b$ are vectors. $A$ and $b$ are given, and $x$ is unknown. The goal of Gaussian elimination is to transform the system $Ax=b$, into a new system $Ux=c$ such that\n",
-"- both system have the same solution vector $x$,\n",
+"- both systems have the same solution vector $x$,\n",
 "- the matrix $U$ of the new system is *upper triangular* with unit diagonal, namely $U_{ii} = 1$ and $U_{ij} = 0$ for $i>j$.\n",
 "\n",
 "\n",
@@ -398,7 +398,7 @@
 "source": [
 "### Data partition\n",
 "\n",
-"Let start considering a row-wise block partition, as we did in previous algorithms.\n",
+"Let's start considering a row-wise block partition, as we did in previous algorithms.\n",
 "\n",
 "In the figure below, we use different colors to illustrate which entries are assigned to a CPU. All entries with the same color are assigned to the same CPU."
 ]
@@ -454,7 +454,7 @@
 "<b>Definition:</b> *Load imbalance*: is the problem when work is not equally distributed over all processes and consequently some processes do more work than others.\n",
 "</div>\n",
 "\n",
-"Having processors waiting for others is a waist of computational resources and affects negatively parallel speedups. The optimal speedup (speedup equal to the number of processors) assumes that the work is perfectly parallel and that it is evenly distributed. If there is load imbalance, the last assumption is not true anymore and the speedup will be suboptimal.\n"
+"Having processors waiting for others is a waste of computational resources and negatively affects parallel speedups. The optimal speedup (speedup equal to the number of processors) assumes that the work is perfectly parallel and that it is evenly distributed. If there is load imbalance, the last assumption is not true anymore and the speedup will be suboptimal.\n"
 ]
 },
 {
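The effect described in this cell can be illustrated with a toy model; the work distribution below is made up for illustration:

```julia
# Speedup is limited by the most loaded process.
work = [10, 10, 10, 30]  # hypothetical work units per process (imbalanced)
T1 = sum(work)           # serial time: all the work on one processor
TP = maximum(work)       # parallel time: everyone waits for the slowest
speedup = T1 / TP
@assert speedup == 2.0   # far below the optimal speedup of 4
```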
@@ -620,15 +620,15 @@
 ],
 "metadata": {
 "kernelspec": {
-"display_name": "Julia 1.9.0",
+"display_name": "Julia 1.11.6",
 "language": "julia",
-"name": "julia-1.9"
+"name": "julia-1.11"
 },
 "language_info": {
 "file_extension": ".jl",
 "mimetype": "application/julia",
 "name": "julia",
-"version": "1.9.0"
+"version": "1.11.6"
 }
 },
 "nbformat": 4,
@@ -57,7 +57,7 @@
 "function q1_answer(bool)\n",
 " bool || return\n",
 " msg = \"\"\"\n",
-" The we can change the loop order over i and j without changing the result. Rememeber:\n",
+" Then we can change the loop order over i and j without changing the result. Remember:\n",
 " \n",
 " C[i,j] = min(C[i,j],C[i,k]+C[k,j])\n",
 " \n",
@@ -788,7 +788,7 @@
 " if rank == 0\n",
 " N = size(C,1)\n",
 " if mod(N,P) !=0\n",
-" println(\"N not multplie of P\")\n",
+" println(\"N not multiple of P\")\n",
 " MPI.Abort(comm,-1)\n",
 " end\n",
 " Nref = Ref(N)\n",
@@ -1131,15 +1131,15 @@
 ],
 "metadata": {
 "kernelspec": {
-"display_name": "Julia 1.9.0",
+"display_name": "Julia 1.11.6",
 "language": "julia",
-"name": "julia-1.9"
+"name": "julia-1.11"
 },
 "language_info": {
 "file_extension": ".jl",
 "mimetype": "application/julia",
 "name": "julia",
-"version": "1.9.0"
+"version": "1.11.6"
 }
 },
 "nbformat": 4,
notebooks/figures/mandel.svg (new file, 7324 lines)
File diff suppressed because it is too large. After Width: | Height: | Size: 440 KiB
@@ -27,7 +27,7 @@
 "\n",
 "In this notebook, we will learn\n",
 "\n",
-"- How to paralleize the Jacobi method\n",
+"- How to parallelize the Jacobi method\n",
 "- How the data partition can impact the performance of a distributed algorithm\n",
 "- How to use latency hiding to improve parallel performance\n",
 "\n"
@@ -452,7 +452,7 @@
 "- We need to get remote entries from 2 neighbors (2 messages per iteration)\n",
 "- We need to communicate 1 entry per message\n",
 "- Thus, communication complexity is $O(1)$\n",
-"- Communication/computation ration is $O(P/N)$, making the algorithm potentially scalable if $P<<N$.\n"
+"- Communication/computation ratio is $O(P/N)$, making the algorithm potentially scalable if $P<<N$.\n"
 ]
 },
 {
@@ -655,7 +655,7 @@
 "end\n",
 "```\n",
 "\n",
-"- The outer loop cannot be parallelized (like in the 1d case). \n",
+"- The outer loop cannot be parallelized (like in the 1D case). \n",
 "- The two inner loops are trivially parallel\n"
 ]
 },
@@ -666,7 +666,7 @@
 "source": [
 "### Parallelization strategies\n",
 "\n",
-"In 2d one has more flexibility in order to distribute the data over the processes. We consider these three alternatives:\n",
+"In 2D, one has more flexibility in order to distribute the data over the processes. We consider these three alternatives:\n",
 "\n",
 "- 1D block row partition (each worker handles a subset of consecutive rows and all columns)\n",
 "- 2D block partition (each worker handles a subset of consecutive rows and columns)\n",
@@ -848,9 +848,9 @@
 "\n",
 "|Partition | Messages <br> per iteration | Communication <br>per worker | Computation <br>per worker | Ratio communication/<br>computation |\n",
 "|---|---|---|---|---|\n",
-"| 1d block | 2 | O(N) | N²/P | O(P/N) |\n",
-"| 2d block | 4 | O(N/√P) | N²/P | O(√P/N) |\n",
-"| 2d cyclic | 4 |O(N²/P) | N²/P | O(1) |"
+"| 1D block | 2 | O(N) | N²/P | O(P/N) |\n",
+"| 2D block | 4 | O(N/√P) | N²/P | O(√P/N) |\n",
+"| 2D cyclic | 4 | O(N²/P) | N²/P | O(1) |"
 ]
 },
 {
@@ -862,9 +862,9 @@
 "\n",
 "\n",
 "\n",
-"- Both 1d and 2d block partitions are potentially scalable if $P<<N$\n",
-"- The 2d block partition has the lowest communication complexity\n",
-"- The 1d block partition requires to send less messages (It can be useful if the fixed cost of sending a message is high)\n",
+"- Both 1D and 2D block partitions are potentially scalable if $P<<N$\n",
+"- The 2D block partition has the lowest communication complexity\n",
+"- The 1D block partition requires sending fewer messages (it can be useful if the fixed cost of sending a message is high)\n",
 "- The best strategy for a given problem size will thus depend on the machine.\n",
 "- Cyclic partitions are impractical for this application (but they are useful in others) \n",
 "\n"
@@ -1932,15 +1932,15 @@
 ],
 "metadata": {
 "kernelspec": {
-"display_name": "Julia 1.10.0",
+"display_name": "Julia 1.11.6",
 "language": "julia",
-"name": "julia-1.10"
+"name": "julia-1.11"
 },
 "language_info": {
 "file_extension": ".jl",
 "mimetype": "application/julia",
 "name": "julia",
-"version": "1.10.0"
+"version": "1.11.6"
 }
 },
 "nbformat": 4,
@@ -87,7 +87,7 @@
 "\n",
 "### Creating a task\n",
 "\n",
-"Technically, a task in Julia is a *symmetric* [*co-routine*](https://en.wikipedia.org/wiki/Coroutine). More informally, a task is a piece of computational work that can be started (scheduled) at some point in the future, and that can be interrupted and resumed. To create a task, we first need to create a function that represents the work to be done in the task. In next cell, we generate a task that generates and sums two matrices."
+"Technically, a task in Julia is a *symmetric* [*co-routine*](https://en.wikipedia.org/wiki/Coroutine). More informally, a task is a piece of computational work that can be started (scheduled) at some point in the future, and that can be interrupted and resumed. To create a task, we first need to create a function that represents the work to be done in the task. In the next cell, we generate a task that generates and sums two matrices."
 ]
 },
 {
@@ -322,7 +322,7 @@
 "source": [
 "### `yield`\n",
 "\n",
-"If tasks do not run in parallel, what is the purpose of tasks? Tasks are handy since they can be interrupted and to switch control to other tasks. This is achieved via function `yield`. When we call yield, we provide the opportunity to switch to another task. The function below is a variation of function `compute_π` in which we yield every 1000 iterations. At the call to yield we allow other tasks to take over. Without this call to yield, once we start function `compute_π` we cannot start any other tasks until this function finishes."
+"If tasks do not run in parallel, what is the purpose of tasks? Tasks are handy since they can be interrupted to switch control to other tasks. This is achieved via function `yield`. When we call `yield`, we provide the opportunity to switch to another task. The function below is a variation of function `compute_π` in which we `yield` every 1000 iterations. At the call to `yield` we allow other tasks to take over. Without this call to `yield`, once we start function `compute_π` we cannot start any other tasks until this function finishes."
 ]
 },
 {
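A minimal sketch of the behavior this cell describes; the counter and the loop bound are made up for illustration:

```julia
counter = Ref(0)
t = @task begin
    for i in 1:3
        counter[] += 1
        yield()          # give other tasks the chance to take over
    end
end
schedule(t)              # t is runnable, but has not started yet
@assert counter[] == 0
yield()                  # switch to t; it runs until its first yield
@assert counter[] >= 1
wait(t)                  # let t run to completion
@assert counter[] == 3
```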
@@ -349,7 +349,7 @@
 "id": "69fd4131",
 "metadata": {},
 "source": [
-"You can check this behavior experimentally with the two following cells. The next one creates and schedules a task that computes pi with the function `compute_π_yield`. Note that you can run the 2nd cell bellow while this task is running since we call to yield often inside `compute_π_yield`."
+"You can check this behavior experimentally with the two following cells. The next one creates and schedules a task that computes pi with the function `compute_π_yield`. Note that you can run the 2nd cell below while this task is running since we call `yield` often inside `compute_π_yield`."
 ]
 },
 {
@@ -381,7 +381,7 @@
 "source": [
 "### Example: Implementing function sleep\n",
 "\n",
-"Using yield, we can implement our own version of the sleep function as follows:"
+"Using `yield`, we can implement our own version of the sleep function as follows:"
 ]
 },
 {
@@ -738,7 +738,8 @@
 "\n",
 "- `put!` will wait for a `take!` if there is no space left in the channel's buffer.\n",
 "- `take!` will wait for a `put!` if there is no data to be consumed in the channel.\n",
-"- Both `put!` and `take!` will raise an error if the channel is closed."
+"- `put!` will raise an error if the channel is closed.\n",
+"- `take!` will raise an error if the channel is closed *and* empty."
 ]
 },
 {
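A small sketch of these rules with a buffered channel; the values are made up for illustration:

```julia
chnl = Channel{Int}(1)     # buffer with room for one item
put!(chnl, 10)             # fits in the buffer, returns immediately
close(chnl)
@assert take!(chnl) == 10  # buffered data can still be taken after close
# The channel is now closed *and* empty, so a further take! raises an error:
@assert try take!(chnl); false catch e; e isa InvalidStateException end
```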
@@ -1015,15 +1016,15 @@
 ],
 "metadata": {
 "kernelspec": {
-"display_name": "Julia 1.10.0",
+"display_name": "Julia 1.11.6",
 "language": "julia",
-"name": "julia-1.10"
+"name": "julia-1.11"
 },
 "language_info": {
 "file_extension": ".jl",
 "mimetype": "application/julia",
 "name": "julia",
-"version": "1.10.0"
+"version": "1.11.6"
 }
 },
 "nbformat": 4,
File diff suppressed because one or more lines are too long
@@ -167,7 +167,7 @@
 "```julia\n",
 "using MPI\n",
 "MPI.Init()\n",
-"# Your MPI programm here\n",
+"# Your MPI program here\n",
 "MPI.Finalize() # Optional\n",
 "```\n",
 "\n",
@@ -176,7 +176,7 @@
 "```julia\n",
 "using MPI\n",
 "MPI.Init(finalize_atexit=false)\n",
-"# Your MPI programm here\n",
+"# Your MPI program here\n",
 "MPI.Finalize() # Mandatory\n",
 "```\n",
 "\n",
@@ -186,7 +186,7 @@
 "#include <mpi.h>\n",
 "int main(int argc, char** argv) {\n",
 " MPI_Init(NULL, NULL);\n",
-" /* Your MPI Programm here */\n",
+" /* Your MPI Program here */\n",
 " MPI_Finalize();\n",
 "}\n",
 "```\n",
@@ -612,7 +612,7 @@
 "id": "4b455f98",
 "metadata": {},
 "source": [
-"So, the full MPI program needs to be in the source file passed to Julia or the quote block. In practice, long MPI programms are written as Julia packages using several files, which are then loaded by each MPI process. For our simple example, we just need to include the definition of `foo` inside the quote block."
+"So, the full MPI program needs to be in the source file passed to Julia or the quote block. In practice, long MPI programs are written as Julia packages using several files, which are then loaded by each MPI process. For our simple example, we just need to include the definition of `foo` inside the quote block."
 ]
 },
 {
@@ -920,7 +920,7 @@
 " source = MPI.ANY_SOURCE\n",
 " tag = MPI.ANY_TAG\n",
 " status = MPI.Probe(comm,MPI.Status; source, tag)\n",
-" count = MPI.Get_count(status,Int) # Get incomming message length\n",
+" count = MPI.Get_count(status,Int) # Get incoming message length\n",
 " println(\"I am about to receive $count integers.\")\n",
 " rcvbuf = zeros(Int,count) # Allocate \n",
 " MPI.Recv!(rcvbuf, comm, MPI.Status; source, tag)\n",
@@ -973,7 +973,7 @@
 " if rank == 3\n",
 " rcvbuf = zeros(Int,5)\n",
 " MPI.Recv!(rcvbuf, comm, MPI.Status; source=2, tag=0)\n",
-" # recvbuf will have the incomming message fore sure. Recv! has returned.\n",
+" # rcvbuf will have the incoming message for sure. Recv! has returned.\n",
 " @show rcvbuf\n",
 " end\n",
 "end\n",
@@ -1590,15 +1590,15 @@
 ],
 "metadata": {
 "kernelspec": {
-"display_name": "Julia 1.10.0",
+"display_name": "Julia 1.11.6",
 "language": "julia",
-"name": "julia-1.10"
+"name": "julia-1.11"
 },
 "language_info": {
 "file_extension": ".jl",
 "mimetype": "application/julia",
 "name": "julia",
-"version": "1.10.0"
+"version": "1.11.6"
 }
 },
 "nbformat": 4,
@@ -293,7 +293,7 @@
 "## Where can we exploit parallelism?\n",
 "\n",
 "\n",
-"The matrix-matrix multiplication is an example of [embarrassingly parallel algorithm](https://en.wikipedia.org/wiki/Embarrassingly_parallel). An embarrassingly parallel (also known as trivially parallel) algorithm is an algorithm that can be split in parallel tasks with no (or very few) dependences between them. Such algorithms are typically easy to parallelize.\n",
+"The matrix-matrix multiplication is an example of an [embarrassingly parallel algorithm](https://en.wikipedia.org/wiki/Embarrassingly_parallel). An embarrassingly parallel (also known as trivially parallel) algorithm is an algorithm that can be split into parallel tasks with no (or very few) dependencies between them. Such algorithms are typically easy to parallelize.\n",
 "\n",
 "Which parts of an algorithm are completely independent and thus trivially parallel? To answer this question, it is useful to inspect the for loops, which are potential sources of parallelism. If the iterations are independent of each other, then they are trivial to parallelize. An easy check to find out if the iterations are dependent or not is to change their order (for instance changing `for j in 1:n` by `for j in n:-1:1`, i.e. doing the loop in reverse). If the result changes, then the iterations are not independent.\n",
 "\n",
@@ -314,7 +314,7 @@
 "Note that:\n",
 "\n",
 "- Loops over `i` and `j` are trivially parallel.\n",
-"- The loop over `k` is not trivially parallel. The accumulation into the reduction variable `Cij` introduces extra dependences. In addition, remember that the addition of floating point numbers is not strictly associative due to rounding errors. Thus, the result of this loop may change with the loop order when using floating point numbers. In any case, this loop can also be parallelized, but it requires a parallel *fold* or a parallel *reduction*.\n",
+"- The loop over `k` is not trivially parallel. The accumulation into the reduction variable `Cij` introduces extra dependencies. In addition, remember that the addition of floating point numbers is not strictly associative due to rounding errors. Thus, the result of this loop may change with the loop order when using floating point numbers. In any case, this loop can also be parallelized, but it requires a parallel *fold* or a parallel *reduction*.\n",
 "\n"
 ]
 },
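As a sketch of the point above, the two trivially parallel loops can be threaded while the reduction over `k` stays sequential. The function name is made up for illustration; start Julia with several threads to see a parallel effect:

```julia
function matmul_threads!(C, A, B)
    m, n = size(C)
    p = size(A, 2)
    Threads.@threads for j in 1:n  # iterations over j are independent
        for i in 1:m               # iterations over i are independent too
            Cij = zero(eltype(C))
            for k in 1:p           # sequential reduction over k
                Cij += A[i, k] * B[k, j]
            end
            C[i, j] = Cij
        end
    end
    return C
end

A = rand(4, 4); B = rand(4, 4)
C = matmul_threads!(zeros(4, 4), A, B)
@assert C ≈ A * B
```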
@@ -1138,15 +1138,15 @@
 ],
 "metadata": {
 "kernelspec": {
-"display_name": "Julia 1.10.0",
+"display_name": "Julia 1.11.6",
 "language": "julia",
-"name": "julia-1.10"
+"name": "julia-1.11"
 },
 "language_info": {
 "file_extension": ".jl",
 "mimetype": "application/julia",
 "name": "julia",
-"version": "1.10.0"
+"version": "1.11.6"
 }
 },
 "nbformat": 4,
@@ -771,9 +771,11 @@
 " rank = MPI.Comm_rank(comm)\n",
 " if rank == 2\n",
 " sndbuf = [2]\n",
-" MPI.Send(sndbuf, comm2; dest=3, tag=0)\n",
+" req1 = MPI.Isend(sndbuf, comm2; dest=3, tag=0)\n",
 " sndbuf = [1]\n",
-" MPI.Send(sndbuf, comm; dest=3, tag=0)\n",
+" req2 = MPI.Isend(sndbuf, comm; dest=3, tag=0)\n",
+" MPI.Wait(req2)\n",
+" MPI.Wait(req1)\n",
 " end\n",
 " if rank == 3\n",
 " rcvbuf = zeros(Int,1)\n",
@@ -944,15 +946,15 @@
 ],
 "metadata": {
 "kernelspec": {
-"display_name": "Julia 1.9.0",
+"display_name": "Julia 1.11.6",
 "language": "julia",
-"name": "julia-1.9"
+"name": "julia-1.11"
 },
 "language_info": {
 "file_extension": ".jl",
 "mimetype": "application/julia",
 "name": "julia",
-"version": "1.9.0"
+"version": "1.11.6"
 }
 },
 "nbformat": 4,
@@ -1217,15 +1217,15 @@
 ],
 "metadata": {
 "kernelspec": {
-"display_name": "Julia 1.10.0",
+"display_name": "Julia 1.11.6",
 "language": "julia",
-"name": "julia-1.10"
+"name": "julia-1.11"
 },
 "language_info": {
 "file_extension": ".jl",
 "mimetype": "application/julia",
 "name": "julia",
-"version": "1.10.0"
+"version": "1.11.6"
 }
 },
 "nbformat": 4,