Programming large-scale parallel systems¶
Jacobi method¶
Contents¶
In this notebook, we will learn
- How to parallelize the Jacobi method
- How data partitioning can impact the performance of a distributed algorithm
- How to use latency hiding
The Jacobi method¶
The Jacobi method is a numerical tool to solve systems of linear algebraic equations. One of its main applications is the solution of boundary value problems (BVPs): given the values at the boundary (of a grid), the Jacobi method finds the interior values that fulfill a certain equation.
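For the one-dimensional problem used in this notebook, each Jacobi iteration replaces every interior value by the average of its two neighbors; this is the update rule implemented in the code below:

$$u^{t+1}_i = \tfrac{1}{2}\left(u^{t}_{i-1} + u^{t}_{i+1}\right)$$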
Serial implementation¶
function jacobi(n,niters)
    u = zeros(n+2)      # n interior values plus the two boundary values
    u[1] = -1           # left boundary value
    u[end] = 1          # right boundary value
    u_new = copy(u)
    for t in 1:niters
        for i in 2:(n+1)
            u_new[i] = 0.5*(u[i-1]+u[i+1])  # average of the two neighbors
        end
        u, u_new = u_new, u                 # swap buffers for the next step
    end
    u
end
jacobi(5,1000)
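For this boundary value problem the converged solution is simply the straight line between the two boundary values, which gives a quick sanity check (a small check added here for illustration, not part of the original cell):

# After many iterations the result approaches the linear profile from -1 to 1.
jacobi(5,1000) ≈ collect(range(-1,1,length=7))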
Where can we exploit parallelism?¶
Look at the two nested loops in the sequential implementation:
for t in 1:niters
    for i in 2:(n+1)
        u_new[i] = 0.5*(u[i-1]+u[i+1])
    end
    u, u_new = u_new, u
end
- The outer loop cannot be parallelized. The value of u at step t+1 depends on the value at the previous step t.
- The inner loop can be parallelized, as the sketch below illustrates.
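To make the second point concrete, here is a minimal sketch (our own illustration, not the distributed version developed below; the name jacobi_threads is hypothetical) that parallelizes the inner loop with Julia threads. Each iteration writes a different entry of u_new and only reads from u, so the iterations are independent:

function jacobi_threads(n,niters)
    u = zeros(n+2)
    u[1] = -1
    u[end] = 1
    u_new = copy(u)
    for t in 1:niters                     # still sequential: step t+1 needs step t
        Threads.@threads for i in 2:(n+1) # independent iterations run in parallel
            u_new[i] = 0.5*(u[i-1]+u[i+1])
        end
        u, u_new = u_new, u
    end
    u
end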
The Gauss-Seidel method¶
The usage of u_new seems a bit unnecessary at first sight, right? If we remove it, we get another method, called Gauss-Seidel.
function gauss_seidel(n,nsteps)
    u = zeros(n+2)
    u[1] = -1
    u[end] = 1
    for t in 1:nsteps
        for i in 2:(n+1)
            u[i] = 0.5*(u[i-1]+u[i+1])
        end
    end
    u
end
Note that the final solution is the same (up to machine precision).
gauss_seidel(5,1000)
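As a quick check (added here for illustration), the two methods indeed agree for this problem:

# The maximum difference should be at the level of machine precision.
maximum(abs.(jacobi(5,1000) .- gauss_seidel(5,1000)))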
Question: Which of the two loops in the Gauss-Seidel method can be parallelized?

for t in 1:nsteps
    for i in 2:(n+1)
        u[i] = 0.5*(u[i-1]+u[i+1])
    end
end

a) Both of them
b) The outer, but not the inner
c) None of them
d) The inner, but not the outer

Answer: (c). As in the Jacobi method, the outer loop carries a dependency between steps. In addition, the update of u[i] reads u[i-1], which has already been overwritten in the same sweep, so the inner loop also carries a dependency.
Parallelization of the Jacobi method¶
Parallelization strategy¶
- Each worker updates a consecutive section of the array u_new, as sketched below.
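As a small sketch of this strategy (own_range is a hypothetical helper introduced only for illustration; the implementation below computes the same partition implicitly), a block partition assigns div(n,nw) consecutive interior cells to each of the nw workers:

# Block partition of the n interior cells among nw workers (assumes mod(n,nw) == 0).
own_range(iw,n,nw) = (1+(iw-1)*div(n,nw)):(iw*div(n,nw))

own_range(2,12,3)  # worker 2 of 3 owns global cells 5:8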
Data dependencies¶
Recall:
u_new[i] = 0.5*(u[i-1]+u[i+1])
Thus, each process will need values from the neighboring processes to perform the update of its boundary values.
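For example, assuming nw = 3 workers and n = 6 (so each worker owns 2 cells), worker 2 owns the interior cells 3 and 4: updating cell 3 requires cell 2, owned by worker 1, and updating cell 4 requires cell 5, owned by worker 3.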
Ghost cells¶
A usual way of handling this type of data dependency is to use so-called ghost cells. Ghost cells represent the missing data dependencies in the data owned by each process. After importing the appropriate values from the neighboring processes, one can perform the usual sequential Jacobi update locally in each process.
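A minimal sketch of the resulting local array layout (the variable names below are ours, chosen for illustration; the implementation uses the same index convention):

n_own = 4                   # number of cells owned by this worker (example value)
u = zeros(n_own+2)          # owned cells plus 2 ghost cells
left_ghost  = 1             # copy of the left neighbor's last owned cell
own_cells   = 2:(n_own+1)   # cells updated by this worker
right_ghost = n_own+2       # copy of the right neighbor's first owned cell

On the first and last workers, the corresponding ghost entry simply holds the boundary value instead.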
Implementation¶
] add MPI MPIClusterManagers
using MPIClusterManagers
using Distributed
if procs() == workers()
    nw = 3
    manager = MPIWorkerManager(nw)
    addprocs(manager)
end
@everywhere workers() begin
    using MPI
    MPI.Initialized() || MPI.Init()  # initialize MPI if this worker has not done so yet
    comm = MPI.Comm_dup(MPI.COMM_WORLD)
    nw = MPI.Comm_size(comm)
    iw = MPI.Comm_rank(comm)+1
    function jacobi_mpi(n,niters)
        if mod(n,nw) != 0
            println("n must be a multiple of nw")
            MPI.Abort(comm,1)
        end
        n_own = div(n,nw)
        u = zeros(n_own+2)   # owned cells plus 2 ghost cells
        u[1] = -1            # left boundary value (overwritten by communication except on worker 1)
        u[end] = 1           # right boundary value (overwritten by communication except on worker nw)
        u_new = copy(u)
        for t in 1:niters
            reqs = MPI.Request[]
            if iw != 1
                # exchange with the left neighbor:
                # send the first owned cell, receive into the left ghost cell
                neig_rank = (iw-1)-1
                req = MPI.Isend(view(u,2:2),comm,dest=neig_rank,tag=0)
                push!(reqs,req)
                req = MPI.Irecv!(view(u,1:1),comm,source=neig_rank,tag=0)
                push!(reqs,req)
            end
            if iw != nw
                # exchange with the right neighbor:
                # send the last owned cell, receive into the right ghost cell
                neig_rank = (iw+1)-1
                s = n_own+1
                r = n_own+2
                req = MPI.Isend(view(u,s:s),comm,dest=neig_rank,tag=0)
                push!(reqs,req)
                req = MPI.Irecv!(view(u,r:r),comm,source=neig_rank,tag=0)
                push!(reqs,req)
            end
            MPI.Waitall(reqs)
            # local Jacobi update of the owned cells
            for i in 2:(n_own+1)
                u_new[i] = 0.5*(u[i-1]+u[i+1])
            end
            u, u_new = u_new, u
        end
        @show u
        u
    end
    niters = 100
    load = 4
    n = load*nw
    jacobi_mpi(n,niters)
end
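To inspect the global solution, one could gather the locally owned entries on a single rank. The following is a sketch under the assumptions that the globals defined in the block above (comm, iw, n, niters) are still available on the workers and that jacobi_mpi returns the local vector u:

@everywhere workers() begin
    u = jacobi_mpi(n,niters)
    u_own = u[2:end-1]                    # drop the two ghost cells
    u_all = MPI.Gather(u_own,comm;root=0)
    if iw == 1
        @show u_all                       # global interior solution, on the first worker
    end
end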