Merge branch 'main' of github.com:fverdugo/XM_40017 into main

This commit is contained in:
Francesc Verdugo
2023-08-14 18:49:28 +02:00
50 changed files with 221195 additions and 114 deletions

View File

@@ -2110,7 +2110,7 @@
],
"metadata": {
"kernelspec": {
"display_name": "Julia 1.9.0",
"display_name": "Julia 1.9.1",
"language": "julia",
"name": "julia-1.9"
},
@@ -2118,7 +2118,7 @@
"file_extension": ".jl",
"mimetype": "application/julia",
"name": "julia",
"version": "1.9.0"
"version": "1.9.1"
}
},
"nbformat": 4,

View File

@@ -51,7 +51,9 @@
"end\n",
"gauss_seidel_1_check(answer) = answer_checker(answer,\"c\")\n",
"jacobi_1_check(answer) = answer_checker(answer, \"d\")\n",
"jacobi_2_check(answer) = answer_checker(answer, \"b\")"
"jacobi_2_check(answer) = answer_checker(answer, \"b\")\n",
"jacobi_3_check(answer) = answer_checker(answer, \"c\")\n",
"jacobi_4_check(anwswer) = answer_checker(answer, \"d\")"
]
},
{
@@ -158,7 +160,7 @@
"```\n",
"\n",
"- The outer loop cannot be parallelized. The value of `u` at step `t+1` depends on the value at the previous step `t`.\n",
"- The inner loop can be parallelized\n",
"- The inner loop can be parallelized.\n",
"\n"
]
},
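To make the bullets above concrete, here is a minimal sketch of the 1D Jacobi update with the inner loop parallelized using Julia threads. It follows the structure of the code shown in this cell; the use of `Threads.@threads`, the function name, and the problem size are illustrative assumptions, not part of the notebook.

```julia
# Minimal sketch (assumptions noted above): the outer time loop stays sequential,
# while the inner update loop runs in parallel, since each u_new[i] only reads from u.
function jacobi_threads(n, niters)
    u = zeros(n+2)
    u[1] = -1; u[end] = 1                    # boundary values, as in the notebook
    u_new = copy(u)
    for t in 1:niters                        # cannot be parallelized: step t+1 depends on step t
        Threads.@threads for i in 2:n+1      # independent updates: safe to run in parallel
            u_new[i] = 0.5*(u[i-1] + u[i+1])
        end
        u, u_new = u_new, u
    end
    u
end

jacobi_threads(100, 200)
```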
@@ -429,7 +431,7 @@
"\n",
"We consider the implementation using MPI. The programming model of MPI is generally better suited for data-parallel algorithms like this one than the task-based model provided by Distributed.jl. In any case, one can also implement it using Distributed, but it requires some extra effort to setup remote channels right for the communication between neighbor processes.\n",
"\n",
"Take a look at the implementation below and try to understand it. Note that we have used MPIClustermanagers and Distributed just to run the MPI code on the notebook. When running it on a cluster MPIClustermanagers and Distributed are not needed.\n"
"Take a look at the implementation below and try to understand it. Note that we have used MPIClustermanagers and Distributed just to run the MPI code on the notebook. When running it on a cluster, MPIClustermanagers and Distributed are not needed.\n"
]
},
{
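For context: the `@mpi_do manager begin ... end` cells below rely on a `manager` object created in a cell that does not appear in this diff. A minimal sketch of the usual MPIClusterManagers setup is shown here; the number of MPI workers is an illustrative assumption.

```julia
# Sketch of running MPI code from a notebook via MPIClusterManagers (worker count is illustrative).
using MPIClusterManagers
using Distributed

manager = MPIManager(np=3)   # launch 3 MPI ranks as Distributed workers
addprocs(manager)

# Everything inside @mpi_do runs on every MPI rank managed by `manager`.
@mpi_do manager begin
    using MPI
    comm = MPI.COMM_WORLD
    println("Hello from rank ", MPI.Comm_rank(comm), " of ", MPI.Comm_size(comm))
end
```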
@@ -449,7 +451,7 @@
"metadata": {},
"outputs": [],
"source": [
"using MPIClusterManagers\n",
"using MPIClusterManagers \n",
"using Distributed"
]
},
@@ -470,13 +472,26 @@
{
"cell_type": "code",
"execution_count": null,
"id": "68851107",
"id": "a0923606",
"metadata": {},
"outputs": [],
"source": [
"@everywhere workers() begin\n",
"# Test cell, remove me\n",
"u = [-1, 0, 0, 0, 0, 1]\n",
"view(u, 6:6)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "68851107",
"metadata": {
"code_folding": []
},
"outputs": [],
"source": [
"@mpi_do manager begin\n",
" using MPI\n",
" MPI.Initialized() || MPI.Init()\n",
" comm = MPI.Comm_dup(MPI.COMM_WORLD)\n",
" nw = MPI.Comm_size(comm)\n",
" iw = MPI.Comm_rank(comm)+1\n",
@@ -492,6 +507,7 @@
" u_new = copy(u)\n",
" for t in 1:niters\n",
" reqs = MPI.Request[]\n",
" # Exchange cell values with neighbors\n",
" if iw != 1\n",
" neig_rank = (iw-1)-1\n",
" req = MPI.Isend(view(u,2:2),comm,dest=neig_rank,tag=0)\n",
@@ -501,8 +517,8 @@
" end\n",
" if iw != nw\n",
" neig_rank = (iw+1)-1\n",
" s = n_own-1\n",
" r = n_own\n",
" s = n_own+1\n",
" r = n_own+2\n",
" req = MPI.Isend(view(u,s:s),comm,dest=neig_rank,tag=0)\n",
" push!(reqs,req)\n",
" req = MPI.Irecv!(view(u,r:r),comm,source=neig_rank,tag=0)\n",
@@ -516,6 +532,14 @@
" end\n",
" u\n",
" @show u\n",
" # Gather results in root process\n",
" results = zeros(n+2)\n",
" results[1] = -1\n",
" results[n+2] = 1\n",
" MPI.Gather!(view(u,2:n_own+1), view(results, 2:n+1), root=0, comm)\n",
" if iw == 1\n",
" @show results\n",
" end \n",
" end\n",
" niters = 100\n",
" load = 4\n",
@@ -548,8 +572,60 @@
"outputs": [],
"source": [
"answer = \"x\" # replace x with a, b, c or d\n",
"jacobi_2_check(answer)\n",
"# TODO: think of more questions"
"jacobi_2_check(answer)"
]
},
{
"cell_type": "markdown",
"id": "075dd6d8",
"metadata": {},
"source": [
"<div class=\"alert alert-block alert-success\">\n",
"<b>Question:</b> After the end of the for-loop (line 43), ...\n",
"</div>\n",
"\n",
" a) each worker holds the complete solution.\n",
" b) the root process holds the solution. \n",
" c) the ghost cells contain redundant values. \n",
" d) all ghost cells contain the initial values -1 and 1. "
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "c3b58002",
"metadata": {},
"outputs": [],
"source": [
"answer = \"x\" # replace x with a, b, c or d\n",
"jacobi_3_check(answer)"
]
},
{
"cell_type": "markdown",
"id": "4537661d",
"metadata": {},
"source": [
"<div class=\"alert alert-block alert-success\">\n",
"<b>Question:</b> In line 35 of the code, we wait for all receive and send requests. Is it possible to instead wait for just the receive requests?\n",
"</div>\n",
"\n",
" \n",
" a) No, because the send buffer might be overwritten if we don't wait for send requests.\n",
" b) No, because MPI does not allow an asynchronous send without a Wait().\n",
" c) Yes, because each send has a matching receive, so all requests are done when the receive requests return. \n",
" d) Yes, because there are no writes to the send buffer in this iteration."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "e16ea5eb",
"metadata": {},
"outputs": [],
"source": [
"answer = \"x\" # replace x with a, b, c or d.\n",
"jacobi_4_check(answer)"
]
},
{
@@ -872,9 +948,8 @@
"metadata": {},
"outputs": [],
"source": [
"@everywhere workers() begin\n",
"@mpi_do manager begin\n",
" using MPI\n",
" MPI.Initialized() || MPI.Init()\n",
" comm = MPI.Comm_dup(MPI.COMM_WORLD)\n",
" nw = MPI.Comm_size(comm)\n",
" iw = MPI.Comm_rank(comm)+1\n",
@@ -890,6 +965,7 @@
" u_new = copy(u)\n",
" for t in 1:niters\n",
" reqs = MPI.Request[]\n",
" # Exchange cell values with neighbors\n",
" if iw != 1\n",
" neig_rank = (iw-1)-1\n",
" req = MPI.Isend(view(u,2:2),comm,dest=neig_rank,tag=0)\n",
@@ -899,8 +975,8 @@
" end\n",
" if iw != nw\n",
" neig_rank = (iw+1)-1\n",
" s = n_own-1\n",
" r = n_own\n",
" s = n_own+1\n",
" r = n_own+2\n",
" req = MPI.Isend(view(u,s:s),comm,dest=neig_rank,tag=0)\n",
" push!(reqs,req)\n",
" req = MPI.Irecv!(view(u,r:r),comm,source=neig_rank,tag=0)\n",
@@ -914,6 +990,14 @@
" end\n",
" u\n",
" @show u\n",
" # Gather results in root process\n",
" results = zeros(n+2)\n",
" results[1] = -1\n",
" results[n+2] = 1\n",
" MPI.Gather!(view(u,2:n_own+1), view(results, 2:n+1), root=0, comm)\n",
" if iw == 1\n",
" @show results\n",
" end \n",
" end\n",
" niters = 100\n",
" load = 4\n",
@@ -923,75 +1007,44 @@
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "f302cce2",
"cell_type": "markdown",
"id": "ebb650d0",
"metadata": {},
"outputs": [],
"source": [
"## TODO move the following solution to its appropiate place:"
"### Exercise 2\n",
"\n",
"Compute the complexity of the communication and computation of the three data partition strategies (1d block partition, 2d block partition, and 2d cyclic partition) when computing a single iteration of the Jacobi method in 2D. Assume that the grid is of size $N \\times N$ and the number of processes $P$ is a perfect square number, i.e. $\\sqrt{P}$ is an integer. Hint: For the complexity analysis, you can ignore the effect of the boundary conditions.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "4fa7fad3",
"id": "7b3d7cb3",
"metadata": {},
"outputs": [],
"source": [
"@everywhere workers() begin\n",
" using MPI\n",
" MPI.Initialized() || MPI.Init()\n",
" comm = MPI.Comm_dup(MPI.COMM_WORLD)\n",
" nw = MPI.Comm_size(comm)\n",
" iw = MPI.Comm_rank(comm)+1\n",
" function jacobi_mpi(n,niters)\n",
" if mod(n,nw) != 0\n",
" println(\"n must be a multiple of nw\")\n",
" MPI.Abort(comm,1)\n",
" end\n",
" n_own = div(n,nw)\n",
" u = zeros(n_own+2)\n",
" u[1] = -1\n",
" u[end] = 1\n",
" u_new = copy(u)\n",
" for t in 1:niters\n",
" reqs_snd = MPI.Request[]\n",
" reqs_rcv = MPI.Request[]\n",
" if iw != 1\n",
" neig_rank = (iw-1)-1\n",
" req = MPI.Isend(view(u,2:2),comm,dest=neig_rank,tag=0)\n",
" push!(reqs_snd,req)\n",
" req = MPI.Irecv!(view(u,1:1),comm,source=neig_rank,tag=0)\n",
" push!(reqs_rcv,req)\n",
" end\n",
" if iw != nw\n",
" neig_rank = (iw+1)-1\n",
" s = n_own-1\n",
" r = n_own\n",
" req = MPI.Isend(view(u,s:s),comm,dest=neig_rank,tag=0)\n",
" push!(reqs_snd,req)\n",
" req = MPI.Irecv!(view(u,r:r),comm,source=neig_rank,tag=0)\n",
" push!(reqs_rcv,req)\n",
" end\n",
" for i in 3:n_own\n",
" u_new[i] = 0.5*(u[i-1]+u[i+1])\n",
" end\n",
" MPI.Waitall(reqs_rcv)\n",
" for i in (2,n_own+1)\n",
" u_new[i] = 0.5*(u[i-1]+u[i+1])\n",
" end\n",
" MPI.Waitall(reqs_snd)\n",
" u, u_new = u_new, u\n",
" end\n",
" u\n",
" end\n",
" niters = 100\n",
" load = 4\n",
" n = load*nw\n",
" jacobi_mpi(n,niters)\n",
"end"
"# TODO"
]
},
{
"cell_type": "markdown",
"id": "6d3430ad",
"metadata": {},
"source": [
"# License\n",
"\n",
"TODO: replace link to website\n",
"\n",
"This notebook is part of the course [Programming Large Scale Parallel Systems](http://localhost:8000/) at Vrije Universiteit Amsterdam and may be used under a [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/) license."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "3d72ff47",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {

View File

@@ -754,11 +754,31 @@
" sleep(i)\n",
"end"
]
},
{
"cell_type": "markdown",
"id": "a5d3730b",
"metadata": {},
"source": [
"# License\n",
"\n",
"TODO: replace link to website\n",
"\n",
"This notebook is part of the course [Programming Large Scale Parallel Systems](http://localhost:8000/) at Vrije Universiteit Amsterdam and may be used under a [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/) license."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "f9863011",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Julia 1.9.0",
"display_name": "Julia 1.9.1",
"language": "julia",
"name": "julia-1.9"
},
@@ -766,7 +786,7 @@
"file_extension": ".jl",
"mimetype": "application/julia",
"name": "julia",
"version": "1.9.0"
"version": "1.9.1"
}
},
"nbformat": 4,

View File

@@ -1585,11 +1585,31 @@
"source": [
"# Implement here"
]
},
{
"cell_type": "markdown",
"id": "357e0490",
"metadata": {},
"source": [
"# License\n",
"\n",
"TODO: replace link to website\n",
"\n",
"This notebook is part of the course [Programming Large Scale Parallel Systems](http://localhost:8000/) at Vrije Universiteit Amsterdam and may be used under a [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/) license."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "f8d92f25",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Julia 1.9.0",
"display_name": "Julia 1.9.1",
"language": "julia",
"name": "julia-1.9"
},
@@ -1597,7 +1617,7 @@
"file_extension": ".jl",
"mimetype": "application/julia",
"name": "julia",
"version": "1.9.0"
"version": "1.9.1"
}
},
"nbformat": 4,

View File

@@ -1296,10 +1296,22 @@
"We have seen the basics of distributed computing in Julia. The programming model is essentially an extension of tasks and channels to parallel computations on multiple machines. The low-level functions are `remotecall` and `RemoteChannel`, but there are other functions and macros like `pmap` and `@distributed` that simplify the implementation of parallel algorithms."
]
},
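To make the summary above concrete, here is a small, self-contained sketch of the functions and macros it mentions; the worker count and the toy computations are illustrative assumptions, not taken from the notebook.

```julia
# Sketch of the Distributed.jl primitives mentioned above (illustrative values).
using Distributed
addprocs(2)                                 # add two local worker processes

# remotecall: run a function on a worker and get a Future back
fut = remotecall(sum, workers()[1], 1:100)
@show fetch(fut)

# RemoteChannel: a channel that any process can put! to and take! from
chnl = RemoteChannel(() -> Channel{Int}(1))
remotecall_wait(c -> put!(c, 42), workers()[1], chnl)
@show take!(chnl)

# pmap: map a function over a collection using the workers
@show pmap(i -> i^2, 1:5)

# @distributed: parallel for loop with an optional reduction
s = @distributed (+) for i in 1:100
    i
end
@show s
```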
{
"cell_type": "markdown",
"id": "9a49ad48",
"metadata": {},
"source": [
"# License\n",
"\n",
"TODO: replace link to website\n",
"\n",
"This notebook is part of the course [Programming Large Scale Parallel Systems](http://localhost:8000/) at Vrije Universiteit Amsterdam and may be used under a [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/) license."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "49d094e4",
"id": "8e36ae43",
"metadata": {},
"outputs": [],
"source": []
@@ -1307,7 +1319,7 @@
],
"metadata": {
"kernelspec": {
"display_name": "Julia 1.9.0",
"display_name": "Julia 1.9.1",
"language": "julia",
"name": "julia-1.9"
},
@@ -1315,7 +1327,7 @@
"file_extension": ".jl",
"mimetype": "application/julia",
"name": "julia",
"version": "1.9.0"
"version": "1.9.1"
}
},
"nbformat": 4,

View File

@@ -485,6 +485,24 @@
"\n",
"If you want to interact with the Julia community on discourse, sign in at https://discourse.julialang.org/"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# License\n",
"\n",
"TODO: replace link to website\n",
"\n",
"This notebook is part of the course [Programming Large Scale Parallel Systems](http://localhost:8000/) at Vrije Universiteit Amsterdam and may be used under a [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/) license."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {

View File

@@ -1109,10 +1109,22 @@
"println(\"Efficiency = \", 100*(T1/TP)/P, \"%\")"
]
},
{
"cell_type": "markdown",
"id": "8e171362",
"metadata": {},
"source": [
"# License\n",
"\n",
"TODO: replace link to website\n",
"\n",
"This notebook is part of the course [Programming Large Scale Parallel Systems](http://localhost:8000/) at Vrije Universiteit Amsterdam and may be used under a [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/) license."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "cd31d955",
"id": "86b7b044",
"metadata": {},
"outputs": [],
"source": []

View File

@@ -374,10 +374,22 @@
"In this example, the root processor generates random data and then scatters it to all processes using MPI.Scatter. Each process calculates the average of its local data, and then the local averages are gathered using MPI.Gather. The root processor computes the global average of all sub-averages and prints it."
]
},
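A hedged sketch of the pattern described above, written as a standalone MPI script (in the notebook it would run inside an `@mpi_do manager` block); the chunk size and the variable names are illustrative assumptions, not taken from the notebook.

```julia
# Sketch: root scatters random data, each rank averages its chunk, averages are gathered at root.
using MPI
MPI.Init()
comm = MPI.COMM_WORLD
rank = MPI.Comm_rank(comm)
nranks = MPI.Comm_size(comm)
root = 0

count = 5                                             # items per rank (illustrative)
data = rank == root ? rand(count*nranks) : nothing    # only the root generates data

# Scatter equal chunks of `data`; every rank receives `count` values in `chunk`.
chunk = zeros(count)
MPI.Scatter!(rank == root ? MPI.UBuffer(data, count) : nothing, chunk, comm; root=root)

# Each rank computes its local average, then the averages are gathered at the root.
local_avg = sum(chunk)/count
avgs = zeros(nranks)
MPI.Gather!([local_avg], avgs, comm; root=root)

if rank == root
    println("global average = ", sum(avgs)/nranks)
end
```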
{
"cell_type": "markdown",
"id": "5e8f6e6a",
"metadata": {},
"source": [
"# License\n",
"\n",
"TODO: replace link to website\n",
"\n",
"This notebook is part of the course [Programming Large Scale Parallel Systems](http://localhost:8000/) at Vrije Universiteit Amsterdam and may be used under a [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/) license."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "fcf34823",
"id": "c9364808",
"metadata": {},
"outputs": [],
"source": []

View File

@@ -1,13 +1,90 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "f48b9a60",
"metadata": {},
"source": [
"# Solutions to Notebook Exercises\n",
"\n",
"## Julia Basics: Exercise 1"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "a06fd02a",
"metadata": {},
"outputs": [],
"source": [
"function ex1(a)\n",
" j = 1\n",
" m = a[j]\n",
" for (i,ai) in enumerate(a)\n",
" if m < ai\n",
" m = ai\n",
" j = i\n",
" end\n",
" end\n",
" (m,j)\n",
"end"
]
},
{
"cell_type": "markdown",
"id": "175b6c35",
"metadata": {},
"source": [
"## Julia Basics: Exercise 2"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "bb289acd",
"metadata": {},
"outputs": [],
"source": [
"ex2(f,g) = x -> f(x) + g(x) "
]
},
{
"cell_type": "markdown",
"id": "86250e27",
"metadata": {},
"source": [
"## Julia Basics: Exercise 3"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "41b537ab",
"metadata": {},
"outputs": [],
"source": [
"function compute_values(n,max_iters)\n",
" x = LinRange(-1.7,0.7,n)\n",
" y = LinRange(-1.2,1.2,n)\n",
" values = zeros(Int,n,n)\n",
" for j in 1:n\n",
" for i in 1:n\n",
" values[i,j] = mandel(x[i],y[j],max_iters)\n",
" end\n",
" end\n",
" values\n",
"end\n",
"values = compute_values(1000,10)\n",
"using GLMakie\n",
"heatmap(x,y,values)"
]
},
{
"cell_type": "markdown",
"id": "d6d12733",
"metadata": {},
"source": [
"# Solutions to Notebook Exercises\n",
"\n",
"## Matrix Multiplication : Implementation of Algorithm 3"
"## Matrix Multiplication : Exercise 1"
]
},
{
@@ -56,9 +133,8 @@
"metadata": {},
"outputs": [],
"source": [
"@everywhere workers() begin\n",
"@mpi_do manager begin\n",
" using MPI\n",
" MPI.Initialized() || MPI.Init()\n",
" comm = MPI.Comm_dup(MPI.COMM_WORLD)\n",
" nw = MPI.Comm_size(comm)\n",
" iw = MPI.Comm_rank(comm)+1\n",
@@ -84,8 +160,8 @@
" end\n",
" if iw != nw\n",
" neig_rank = (iw+1)-1\n",
" s = n_own-1\n",
" r = n_own\n",
" s = n_own+1\n",
" r = n_own+2\n",
" req = MPI.Isend(view(u,s:s),comm,dest=neig_rank,tag=0)\n",
" push!(reqs_snd,req)\n",
" req = MPI.Irecv!(view(u,r:r),comm,source=neig_rank,tag=0)\n",
@@ -102,6 +178,7 @@
" u, u_new = u_new, u\n",
" end\n",
" u\n",
" @show u\n",
" end\n",
" niters = 100\n",
" load = 4\n",
@@ -109,6 +186,26 @@
" jacobi_mpi(n,niters)\n",
"end"
]
},
{
"cell_type": "markdown",
"id": "47d88e7a",
"metadata": {},
"source": [
"# License\n",
"\n",
"TODO: replace link to website\n",
"\n",
"This notebook is part of the course [Programming Large Scale Parallel Systems](http://localhost:8000/) at Vrije Universiteit Amsterdam and may be used under a [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/) license."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "968304a6",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {