Some improvements in the first notebooks
@@ -2,19 +2,6 @@
## Julia Basics
### NB1-Q1
In the first line, we assign a value to a variable. In the second line, we assign the same value to another variable. Thus, we have two variables associated with the same value. In line 3, we associate `y` with a new value (re-assignment), so we have two variables associated with two different values. Variable `x` is still associated with its original value. Thus, the value at the final line is `x=1`.
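
A minimal sketch of the kind of snippet being discussed (the notebook's actual lines are not visible in this diff, so the exact code is an assumption):

```julia
x = 1   # line 1: assign the value 1 to x
y = x   # line 2: y is now associated with the same value
y = 2   # line 3: re-assign y to a new value; x is unaffected
x       # line 4: still 1
```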
### NB1-Q2
It will be `1`, for very similar reasons as in the previous question: the assignment inside the function creates a local variable; it does not modify the global variable defined outside the function.
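
A hypothetical reconstruction of the situation described (the notebook cell itself is not shown here):

```julia
x = 1          # global variable
function f()
    x = 2      # assignment creates a *local* x; the global x is untouched
    x
end
f()            # returns 2
x              # still 1
```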
### NB1-Q3
It will be `6`. The returned function `f2` captures `x`, which is equal to `2`. Thus, calling `f2(3)` computes `2*3`.
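
A sketch of the closure pattern described, with an assumed name for the outer function:

```julia
function make_f2(x)
    f2(y) = x * y   # f2 captures x from the enclosing scope
    return f2
end
f2 = make_f2(2)
f2(3)               # computes 2*3 == 6
```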
### Exercise 1
```julia
@@ -50,77 +37,10 @@ heatmap(x,y,(i,j)->mandel(i,j,max_iters))
```
## Asynchronous programming in Julia
### NB2-Q1
Evaluating `compute_π(100_000_000)` takes about 0.25 seconds. Thus, the loop would take about 2.5 seconds since we are calling the function 10 times.
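
For reference, a version of the timing experiment; the body of `compute_π` here is only a guess at a typical series-based implementation, not necessarily the notebook's:

```julia
# Assumed Leibniz-series implementation of compute_π; the notebook's may differ.
function compute_π(n)
    s = 1.0
    for i in 1:n
        s += (isodd(i) ? -1 : 1) / (2i + 1)
    end
    4 * s
end
@time for i in 1:10
    compute_π(100_000_000)   # ~10 × 0.25 s ≈ 2.5 s in total
end
```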
### NB2-Q2
The time spent in the loop will be almost zero, since the loop merely schedules 10 tasks, which is very fast.
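
A sketch of the loop in question, reusing the `compute_π` sketched above:

```julia
@time for i in 1:10
    @async compute_π(100_000_000)   # only schedules a task and returns immediately
end
```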
### NB2-Q3
It will take about 2.5 seconds, as in question 1. The `@sync` macro forces us to wait for all the tasks generated with the `@async` macro. Since we created 10 tasks and each takes about 0.25 seconds, the total time will be about 2.5 seconds.
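
A sketch of the synchronized version:

```julia
@time @sync for i in 1:10
    @async compute_π(100_000_000)   # tasks share one thread: ~10 × 0.25 s in total
end
```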
### NB2-Q4
It will take about 3 seconds. The channel has buffer size 4, so the call to `put!` will not block. The call to `take!` will not block either, since there is a value stored in the channel. The taken value is 3, and therefore we will wait for 3 seconds.
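
A hedged reconstruction of the cell being discussed (the original is not visible in this diff):

```julia
chnl = Channel{Int}(4)   # buffer size 4: put! does not block
put!(chnl, 3)
sleep(take!(chnl))       # takes the value 3 and sleeps ~3 seconds
```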
### NB2-Q5
The channel is not buffered and therefore the call to `put!` will block. The cell will run forever, since there is no other task that calls `take!` on this channel.
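
The blocking behavior can be reproduced with a two-line sketch:

```julia
chnl = Channel{Int}()   # unbuffered channel (buffer size 0)
put!(chnl, 3)           # blocks forever: no other task ever calls take!(chnl)
```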
## Distributed computing in Julia
### NB3-Q1
We send the matrix (16 entries) and then receive back the result (1 extra integer). Thus, the total number of transferred integers is 17.
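
A sketch of the kind of call being analyzed (a 4×4 integer matrix and the reduction `sum` are assumptions, since the notebook code is not shown here):

```julia
using Distributed
addprocs(1)
a = ones(Int, 4, 4)       # 16 integers
x = @fetchfrom 2 sum(a)   # a (16 ints) goes out; the result (1 int) comes back
```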
### NB3-Q2
Even though we only use a single entry of the matrix on the remote worker, the entire matrix is captured and sent to the worker. Thus, we transfer 17 integers, as in Question 1.
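
For instance, under the same assumptions:

```julia
x = @fetchfrom 2 a[1,1]   # the closure captures the whole matrix a: 16 ints out, 1 back
```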
### NB3-Q3
The value of `x` will still be zero, since the worker receives a copy of the matrix and modifies this copy, not the original one.
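
A minimal sketch of this copy semantics (assumed code, following the question's `a[2]=2`):

```julia
a = zeros(Int, 4, 4)
fetch(@spawnat 2 (a[2] = 2))   # mutates the worker's local copy of a
x = a[2]                       # still 0 on the main process
```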
### NB3-Q4
In this case, the code `a[2]=2` is executed in the main process. Since the matrix already lives in the main process, there is no need to create and send a copy of it. Thus, the code modifies the original matrix, and the value of `x` will be 2.
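
Under the same assumption, the variant that targets the main process:

```julia
a = zeros(Int, 4, 4)
fetch(@spawnat 1 (a[2] = 2))   # runs on the main process itself: no copy is serialized
x = a[2]                       # now 2
```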
## Distributed computing with MPI
### Exercise 1
```julia
using MPI
MPI.Init()
comm = MPI.Comm_dup(MPI.COMM_WORLD)
rank = MPI.Comm_rank(comm)
nranks = MPI.Comm_size(comm)
buffer = Ref(0)
if rank == 0
    # Rank 0 starts the ring and finally receives the message
    # back from the last rank.
    msg = 2
    buffer[] = msg
    println("msg = $(buffer[])")
    MPI.Send(buffer,comm;dest=rank+1,tag=0)
    MPI.Recv!(buffer,comm;source=nranks-1,tag=0)
    println("msg = $(buffer[])")
else
    # Every other rank forwards to the next one; the last rank
    # closes the ring by sending back to rank 0.
    dest = if (rank != nranks-1)
        rank+1
    else
        0
    end
    MPI.Recv!(buffer,comm;source=rank-1,tag=0)
    buffer[] += 1   # increment the message before forwarding it
    println("msg = $(buffer[])")
    MPI.Send(buffer,comm;dest,tag=0)
end
```
### Exercise 2
```julia
# Create one buffered remote channel per worker process.
f = () -> Channel{Int}(1)
chnls = [ RemoteChannel(f,w) for w in workers() ]
@@ -160,6 +80,38 @@ end
msg = 2
@fetchfrom 2 work(msg)
```
## MPI (Point-to-point)
### Exercise 1
```julia
using MPI
MPI.Init()
comm = MPI.Comm_dup(MPI.COMM_WORLD)
rank = MPI.Comm_rank(comm)
nranks = MPI.Comm_size(comm)
buffer = Ref(0)
if rank == 0
    # Rank 0 starts the ring and finally receives the message
    # back from the last rank.
    msg = 2
    buffer[] = msg
    println("msg = $(buffer[])")
    MPI.Send(buffer,comm;dest=rank+1,tag=0)
    MPI.Recv!(buffer,comm;source=nranks-1,tag=0)
    println("msg = $(buffer[])")
else
    # Every other rank forwards to the next one; the last rank
    # closes the ring by sending back to rank 0.
    dest = if (rank != nranks-1)
        rank+1
    else
        0
    end
    MPI.Recv!(buffer,comm;source=rank-1,tag=0)
    buffer[] += 1   # increment the message before forwarding it
    println("msg = $(buffer[])")
    MPI.Send(buffer,comm;dest,tag=0)
end
```
## Matrix-matrix multiplication
### Exercise 1
```julia
@@ -209,10 +161,6 @@ end
end
```
### Exercise 2
At each call to `@spawnat`, we communicate O(N) data and perform O(N) computation on a worker process, just like in algorithm 1. However, each worker does this work about N^2/P times on average. Thus, both the total communication and the total computation per worker are O(N^3/P), the communication-to-computation ratio is still O(1), and communication will dominate in practice, making the algorithm inefficient.
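
A sketch of the kind of per-entry algorithm analyzed above (the structure is assumed; the notebook's actual code is not shown in this diff):

```julia
using Distributed, LinearAlgebra

# Hypothetical sketch: one remote task per entry of C.
function matmul_per_entry!(C, A, B)
    N = size(A, 1)
    @sync for i in 1:N, j in 1:N
        @async begin
            Ai = A[i, :]   # O(N) data per task...
            Bj = B[:, j]   # ...shipped to a worker
            C[i, j] = fetch(@spawnat :any dot(Ai, Bj))   # O(N) flops per task
        end
    end
    C
end
```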
## Jacobi method
### Exercise 1