The programming of this course will be done using the Julia programming language. Thus, we start by explaining how to get up and running with Julia. After working through this page, you will be able to:
Courses on high-performance computing (HPC) often use languages such as C, C++, or Fortran. We use Julia instead to make the course accessible to a wider set of students, including those who have no experience with C/C++ or Fortran but are willing to learn parallel programming. Julia is a relatively new programming language specifically designed for scientific computing. It combines a high-level syntax close to interpreted languages like Python with the performance of compiled languages like C, C++, or Fortran. Thus, Julia allows us to write efficient parallel algorithms with a syntax that is convenient in a teaching setting. In addition, Julia provides easy access to different programming models for writing distributed algorithms, which will be useful for learning and experimenting with them.
Tip
You can run the code in this link to learn how Julia compares to other languages (C and Python) in terms of performance.
There are several ways of opening Julia depending on your operating system and your IDE, but it is usually as simple as launching the Julia app. With VSCode, open a folder (File > Open Folder), press Ctrl+Shift+P to open the command palette, and execute Julia: Start REPL. If this does not work, make sure you have the Julia extension for VSCode installed. Regardless of the method you use, opening Julia results in a window with some text ending with:
julia>
You have just opened the Julia read-evaluate-print loop, or simply the Julia REPL. Congrats! You will spend most of your time in the REPL when working with Julia. The REPL is a console waiting for user input. Just as in other consoles, the string of text right before the input area (julia> in this case) is called the command prompt or simply the prompt.
Curious about what the function println does? Enter help mode to look into the documentation. This is done by typing a question mark (?) into the input field:
julia> ?
After typing ?, the command prompt changes to help?>. It means we are in help mode. Now, we can type a function name to see its documentation.
The REPL comes with two more modes, namely package and shell modes. To enter package mode type
julia> ]
Package mode is used to install and manage packages. We will discuss package mode in greater detail later. To return to normal mode, press the backspace key.
To enter shell mode, type a semicolon (;):
julia> ;
The prompt should have changed to shell> indicating that we are in shell mode. Now you can type commands that you would normally do on your system command line. For instance,
shell> ls
will display the contents of the current folder in Mac or Linux. Using shell mode in Windows is not straightforward, and thus not recommended for beginners.
Real-world Julia programs are not typed in the REPL in practice. They are written in one or more files and included in the REPL. To try this, create a new file called hello.jl, write the code of the "Hello world" example above, and save it. If you are using VSCode, you can create the file using File > New File > Julia File. Once the file is saved with the name hello.jl, execute it as follows
julia> include("hello.jl")
\warn{ Make sure that the file "hello.jl" is located in the current working directory of your Julia session. You can query the current directory with function pwd(). You can change to another directory with function cd() if needed. Also, make sure that the file extension is .jl.}
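If you have not written the file yet, a minimal sketch of hello.jl could be as follows (the exact message is up to you; the earlier "Hello world" example is assumed to be a single println call):

```julia
# hello.jl -- a minimal "Hello world" program
println("Hello, world!")
```

After saving it, include("hello.jl") runs the file and prints the message in the REPL.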
The recommended way of running Julia code is using the REPL, as we did. But it is also possible to run code directly from the system command line. To this end, open a terminal and call Julia followed by the path to the file containing the code you want to execute.
$ julia hello.jl
The previous line assumes that you have Julia properly installed on your system and that it is usable from the terminal. On UNIX systems (Linux and Mac), the Julia binary needs to be in one of the directories listed in the PATH environment variable. To check that Julia is properly installed, you can use
$ julia --version
If this runs without error and you see a version number, you are good to go!
Note
In this tutorial, when a code snippet starts with $, it should be run in the terminal. Otherwise, the code is to be run in the Julia REPL.
Tip
Avoid calling Julia code from the terminal; use the Julia REPL instead! Each time you call Julia from the terminal, you start a fresh Julia session and Julia will need to compile your code from scratch. This can be time-consuming for large projects. In contrast, if you execute code in the REPL, Julia will compile code incrementally, which is much faster. Running code on a cluster (as on DAS-5 for the Julia assignment) is among the few situations in which you need to run Julia code from the terminal.
Since we are in a parallel computing course, let's run a parallel "hello world" example in Julia. Open a Julia REPL and write
julia> using Distributed
julia> @everywhere println("Hello, world! I am proc $(myid()) from $(nprocs())")
Here, we are using the Distributed package, a part of the Julia standard library that provides support for distributed-memory parallelism. The code prints the process id and the number of processes in the current Julia session.
You will probably only see output from 1 process. We need to add more processes to run the example in parallel. This is done with the addprocs function.
julia> addprocs(3)
We have added 3 new processes; together with the original one, we now have 4 processes. Run the code again.
julia> @everywhere println("Hello, world! I am proc $(myid()) from $(nprocs())")
Now, you should see output from 4 processes.
It is possible to specify the number of processes when starting Julia from the terminal with the -p argument (useful, e.g., when running in a cluster). If you launch Julia from the terminal as
$ julia -p 3
and then run

julia> @everywhere println("Hello, world! I am proc $(myid()) from $(nprocs())")

you should see output from 4 processes right away, without calling addprocs.
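You can also verify the number of processes programmatically. A minimal sketch, using only functions from the Distributed standard library:

```julia
using Distributed

addprocs(3)              # add three worker processes (not needed if Julia was started with -p 3)
@assert nprocs() == 4    # the master process plus 3 workers
@assert nworkers() == 3  # worker processes only
println(workers())       # ids of the worker processes, e.g. [2, 3, 4]
```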
One of the most useful features of Julia is its package manager. It allows one to install Julia packages in a straightforward and platform independent way. To illustrate this, let us consider the following parallel "Hello world" example. This example uses the message passing interface (MPI). We will learn more about MPI later in the course.
Copy the following block of code into a new file named "hello_mpi.jl"
If you try to include this file now, you should get an error, since the required packages are not yet installed.
[deps]
BenchmarkTools = "6e4b80f9-dd63-53aa-95a3-0cdb28fa8baf"
DataFrames = "a93c6f00-e57d-5684-b7b6-d8193f3e46c0"
MPI = "da04e1cc-30fd-572f-bb4f-1f8673147195"
Copy the contents of the previous code block into a file called Project.toml and place it in an empty folder named newproject. It is important that the file is named Project.toml. You can create a new folder from the REPL with
julia> mkdir("newproject")
To install all the packages listed in this file, you need to activate the folder containing your Project.toml file
(@v1.8) pkg> activate newproject
and then instantiate it
(newproject) pkg> instantiate
The instantiate command will download and install all the listed packages and their dependencies in a single step.
In some situations it is required to use package commands from Julia code, e.g., to automate the installation and deployment of Julia applications. This can be done using the Pkg package. For instance
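The activate and instantiate commands from above have direct counterparts in the Pkg API. A minimal sketch, assuming the newproject folder from the previous section exists:

```julia
using Pkg

Pkg.activate("newproject")  # equivalent to `activate newproject` in package mode
Pkg.instantiate()           # equivalent to `instantiate` in package mode
Pkg.status()                # equivalent to `status` in package mode
```

This is handy, e.g., at the top of a script that should set up its own environment before running.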
We have learned the basics of how to work with Julia. If you want to dig further into the topics we have covered here, you can take a look at the following links
This page contains part of the course material of the Programming Large-Scale Parallel Systems course at VU Amsterdam. On this page, we provide several lecture notes in Jupyter notebook format, which will help you learn how to design, analyze, and program parallel algorithms on multi-node computing systems. Further information about the course can be found in the study guide (click here) and our Canvas page (for registered students).
Note
This page contains only part of the course material. The rest is available on Canvas. In particular, the lecture notes on this public webpage do not fully cover all topics in the final exam.
Download the notebooks and run them locally on your computer (recommended)
Run the notebooks on the cloud via mybinder.org (high startup time).
You also have the static version of the notebooks displayed on this webpage for quick reference. On each notebook page you will find a green box with links to download the notebook or to open it on mybinder.
This material was created by Francesc Verdugo with the help of Gelieza Kötterheinrich. Part of the notebooks are based on the course slides by Henri Bal.
This page was created with the support of the Faculty of Science of Vrije Universiteit Amsterdam in the framework of the project "Interactive lecture notes and exercises for the Programming Large-Scale Parallel Systems course" funded by the "Innovation budget BETA 2023 Studievoorschotmiddelen (SVM) towards Activated Blended Learning".
This document was generated with Documenter.jl version 0.27.25 on Thursday 10 August 2023. Using Julia version 1.9.2.
julia> using IJulia
julia> notebook()

These commands will open Jupyter in your web browser. Navigate in Jupyter to the notebook file you have downloaded and open it.
- Download this notebook and run it locally on your machine (recommended). Click here.
- You can also run this notebook in the cloud using Binder. Click here.
Note: The values computed by the Jacobi method are linearly increasing from -1 to 1. It is possible to show mathematically that the method we implemented in the function above approximates a 1D Laplace equation via a finite difference method and the solution of this equation is a linear function.
Note: In our version of the Jacobi method, we return after a given number of iterations. Other stopping criteria are possible. For instance, iterate until the difference between u and u_new is below a tolerance.
A usual way of handling this type of data dependency is using so-called ghost cells. Ghost cells represent the missing data dependencies in the data owned by each process. After importing the appropriate values from the neighboring processes, one can perform the usual sequential Jacobi update locally in each process.
#TODO give multiple options
We consider the implementation using MPI. The programming model of MPI is generally better suited for data-parallel algorithms like this one than the task-based model provided by Distributed.jl. One can also implement it using Distributed, but it requires some extra effort to set up the remote channels correctly for the communication between neighboring processes.
Take a look at the implementation below and try to understand it. Note that we have used MPIClusterManagers and Distributed just to run the MPI code in the notebook. When running it on a cluster, MPIClusterManagers and Distributed are not needed.
@everywhere workers() begin
    using MPI
    MPI.Initialized() || MPI.Init()
    comm = MPI.Comm_dup(MPI.COMM_WORLD)
    nw = MPI.Comm_size(comm)
    iw = MPI.Comm_rank(comm) + 1
Question: How many messages per iteration are sent from a process away from the boundary?

a) 1
b) 2
c) 3
d) 4
# TODO (b) 2 messages. Add another question if you find it useful
Note that we only need communication to update the values at the boundary of the portion owned by each process. The other values (the ones in green in the figure below) can be updated without communication. This provides the opportunity of overlapping the computation of the interior values (green cells in the figure) with the communication of the ghost values. This technique is called latency hiding, since we are hiding communication latency by overlapping it with computations that we need to do anyway.
The modification of the implementation above to include latency hiding is left as an exercise (see below).
Transform the following parallel implementation of the 1D Jacobi method (it is copied from above) to use latency hiding (overlap between the computation of interior values and the communication of ghost values).
Message Passing Interface (MPI) is a standardized and portable library specification for communication between parallel processes in distributed memory systems. Julia offers a convenient way to work with MPI for creating efficient parallel and distributed applications. In this tutorial, you will learn how to use MPI from Julia to perform parallel computing tasks.
When you run an MPI-enabled Julia script, MPI takes care of spawning multiple instances of the Julia executable, each acting as a separate process. These workers can communicate with each other using MPI communication functions. This enables parallel processing and distributed computation. Here's a summary of how it works:
-- TODO: insert picture here --

MPI Spawns Processes: The mpiexec command launches multiple instances of the Julia executable, creating separate worker processes. In this example, 4 Julia workers are spawned.

Worker Communication: These workers can communicate with each other using MPI communication functions, allowing them to exchange data and coordinate actions.

Parallel Tasks: The workers execute parallel tasks simultaneously, working on different parts of the computation to potentially speed up the process.
Installing the MPI.jl and MPIClusterManagers Packages
To use MPI in Julia, you'll need the MPI.jl package, and if you intend to run MPI programs in a Jupyter notebook, you'll also need the MPIClusterManagers package. These packages provide the necessary bindings to the MPI library and cluster management capabilities. To install them, open a Julia REPL, enter package mode, and run:

pkg> add MPI MPIClusterManagers
Tip:
The package MPI.jl is the Julia interface to MPI. Note that it is not an MPI library by itself; it is just a thin wrapper between MPI and Julia. To use this interface, you need an actual MPI library installed on your system, such as OpenMPI or MPICH. By default, Julia downloads and installs an MPI library for you, but it is also possible to use an MPI library already available on your system. This is useful, e.g., when running on HPC clusters. See the documentation of MPI.jl for further details.
Let's start by creating a simple MPI program that prints a message along with the rank of each worker. Create a new Julia script, for example, mpi_hello_world.jl:
using MPI

# Initialize MPI
MPI.Init()

# Get the default communicator (MPI_COMM_WORLD) for all processes
comm = MPI.COMM_WORLD

# Get the number of processes in this communicator
nranks = MPI.Comm_size(comm)

# Get the rank of the current process within the communicator
rank = MPI.Comm_rank(comm)

# Print a message with the rank of the current process
println("Hello, I am process $rank of $nranks processes!")

# Finalize MPI
MPI.Finalize()
In MPI, a communicator is a context in which a group of processes can communicate with each other. MPI_COMM_WORLD is one of the standard MPI communicators; it represents all processes in the MPI program. Custom communicators can also be created to group processes based on specific requirements or logical divisions.
The rank of a process is a unique identifier assigned to each process within a communicator. It allows processes to distinguish and address each other in communication operations.
To run MPI applications in parallel, you need a launcher like mpiexec. MPI codes written in Julia are no exception to this rule. From the system terminal, you can run

$ mpiexec -np 4 julia mpi_hello_world.jl

In this command, -np 4 specifies the desired number of processes.
But it will probably not work, since the version of mpiexec needs to match the MPI library we are using from Julia. You can find the path to the mpiexec binary you need to use with these commands

julia> using MPI
julia> MPI.mpiexec_path

and then try again

$ /path/to/my/mpiexec -np 4 julia mpi_hello_world.jl

with your particular path.
However, this is not very convenient. Don't worry if you could not make it work! A more elegant way to run MPI code is from the Julia REPL directly, by using these commands:
julia> using MPI
julia> mpiexec(cmd -> run(`$cmd -np 4 julia mpi_hello_world.jl`))

Now, you should see output from 4 ranks.
Running MPI Programs in Jupyter Notebooks with MPIClusterManagers
If you want to run your MPI code from a Jupyter notebook, you can do so using the MPIClusterManagers package. Load the packages and start an MPI cluster with the desired number of workers:
using MPIClusterManagers
# The Distributed package is needed for addprocs()
using Distributed

manager = MPIWorkerManager(4)
addprocs(manager)
Run your MPI code inside a @mpi_do block to execute it on the cluster workers:
@mpi_do manager begin
    using MPI
    comm = MPI.COMM_WORLD
    rank = MPI.Comm_rank(comm)
    println("Hello from process $rank")
end
MPI is automatically initialized and finalized within the @mpi_do block.
MPI provides point-to-point communication via the blocking send and receive functions MPI.send and MPI.recv, or their non-blocking versions MPI.Isend and MPI.Irecv!. These functions allow individual processes to send and receive data between each other.
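For reference, the blocking variants can be used as follows. This is a sketch, assuming a recent version of MPI.jl (where MPI.send and MPI.recv take keyword arguments and serialize arbitrary Julia objects) and a launch with at least two ranks:

```julia
using MPI

MPI.Init()
comm = MPI.COMM_WORLD
rank = MPI.Comm_rank(comm)

if MPI.Comm_size(comm) >= 2
    if rank == 0
        # Blocks until the message has been handed over to MPI
        MPI.send("Hello from process 0!", comm; dest=1)
    elseif rank == 1
        # Blocks until the message has arrived
        msg = MPI.recv(comm; source=0)
        println("Process 1 received: $msg")
    end
end

MPI.Finalize()
```

Because these calls block, a send with no matching receive (or vice versa) can deadlock, which is one motivation for the non-blocking versions shown next.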
To demonstrate asynchronous communication, let's modify the example using MPI.Isend and MPI.Irecv!:
using MPI

MPI.Init()

comm = MPI.COMM_WORLD
rank = MPI.Comm_rank(comm)

# Asynchronous communication using MPI.Isend and MPI.Irecv!
if rank == 0
    data = "Hello from process $rank !"
    request = MPI.Isend(data, comm, dest=1)
    # Other computation can happen here
    MPI.Wait(request)
elseif rank == 1
    received_data = Array{UInt8}(undef, 50)  # Preallocate buffer
    request = MPI.Irecv!(received_data, comm, source=0)
    # Other computation can happen here
    MPI.Wait(request)
    println("Process $rank received: $(String(received_data))")
end

MPI.Finalize()
In this example, process 0 uses MPI.Isend to send the message asynchronously. This function returns immediately, allowing the sender process to continue its execution; the actual sending of the data happens asynchronously in the background. Similarly, MPI.Irecv! returns immediately, allowing the receiver process to continue executing.
Important: In asynchronous communication, always use MPI.Wait() to ensure the communication has finished before accessing the send or receive buffer.
MPI provides collective communication functions for communication involving multiple processes. Let's explore some of these functions:

MPI.Gather: Gathers data from all processes to a single process.
MPI.Scatter: Distributes data from one process to all processes.
MPI.Bcast: Broadcasts data from one process to all processes.
MPI.Barrier: Synchronizes all processes.

Let's illustrate the use of MPI.Gather and MPI.Scatter with an example:
# TODO: check if this runs correctly
using MPI
using Random

MPI.Init()

comm = MPI.COMM_WORLD
rank = MPI.Comm_rank(comm)
size = MPI.Comm_size(comm)

# Root process generates random data
data = rand(rank == 0 ? size * 2 : 0)

# Scatter data to all processes (2 values per process)
local_data = Vector{Float64}(undef, 2)
MPI.Scatter!(data, local_data, comm, root=0)

# Compute local average
local_average = sum(local_data) / length(local_data)

# Gather local averages at the root process
gathered_averages = Vector{Float64}(undef, size)
MPI.Gather!(Ref(local_average), gathered_averages, comm, root=0)

if rank == 0
    # Compute global average of sub-averages
    global_average = sum(gathered_averages) / size
    println("Global average: $global_average")
end

MPI.Finalize()
In this example, the root process generates random data and then scatters it to all processes using MPI.Scatter!. Each process calculates the average of its local data, and then the local averages are gathered using MPI.Gather!. The root process computes the global average of all sub-averages and prints it.
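MPI.Bcast from the list above can be sketched in the same spirit. This is a sketch, assuming a recent version of MPI.jl, where the lowercase MPI.bcast serializes an arbitrary Julia object from the root rank to all ranks:

```julia
using MPI

MPI.Init()
comm = MPI.COMM_WORLD
rank = MPI.Comm_rank(comm)

# The root rank decides on some parameters; all other ranks receive a copy.
params = rank == 0 ? Dict("tolerance" => 1e-5, "maxiter" => 100) : nothing
params = MPI.bcast(params, comm; root=0)
println("Process $rank got parameters: $params")

MPI.Finalize()
```

Broadcasting configuration like this at startup is a common pattern, since it guarantees all ranks agree on the same values.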
In this section, we want to examine another parallel algorithm, called Successive Over-Relaxation (SOR).
The SOR algorithm is an iterative method used to solve Laplace equations. The underlying data structure of the SOR algorithm is a two-dimensional grid, whose elements are updated iteratively through some weighted function that considers the old value as well as the values of neighbouring cells.
This algorithm is applied, for instance, in physics to simulate the climate or the temperature of some object.
Consider the diffusion of a chemical substance on a two-dimensional grid. The concentration of the chemical is given as $c(x,y)$, a function of the coordinates $x$ and $y$. We will consider a square grid with $0 \leq x,y \leq 1$ and the boundary conditions $c(x,y=1) = 1$ and $c(x,y=0) = 0$. That is, the concentration at the top of the grid is always 1 and the concentration at the very bottom is always 0. Furthermore, in the x-direction we will assume periodic boundary conditions, i.e. $c(x=0, y) = c(x=1,y)$.
We will take the initial condition $c(x,y) = 0$ for $0 \leq x \leq 1, 0 \leq y < 1$.
The stable state of the diffusion, that is, when the concentration does not change anymore, can be described by the Laplace equation
$$
\nabla^2 c = 0.
$$
Numerically, we can approximate the solution with the Jacobi iteration
$$
c^{k+1}_{i,j} = \frac{1}{4}\left(c^{k}_{i-1,j} + c^{k}_{i+1,j} + c^{k}_{i,j-1} + c^{k}_{i,j+1}\right).
$$
The superscript $k$ denotes the $k$-th iteration. The algorithm stepwise updates the cells of the grid until a steady state is reached. To determine the end of the algorithm, we use the stopping condition
$$
\max_{i,j} \left| c^{k+1}_{i,j} - c^{k}_{i,j} \right| < \epsilon.
$$
That is, we stop when all changes to cell values are smaller than some small number, say $\epsilon = 10^{-5}$.
Furthermore, for this set of initial and boundary conditions, there exists an analytical solution for the stable state, namely
$$
c(x,y) = y.
$$
That is, the concentration profile is the identity function of the y-coordinate.
# analytical solution
function analytical_solution(N)
    # Returns the analytical solution as a square grid of size N
    grid = zeros(N, N)
    for i in 2:N
        grid[:, i] .= (i - 1) / (N - 1)
    end
    return grid
end

# Test if solution is identical with analytical solution
sol = analytical_solution(N)
@test maximum(abs.(sol - new_grid[2:M-1, :])) < 0.01 * N
# to import MPIManager
using MPIClusterManagers

# need to also import Distributed to use addprocs()
using Distributed

# specify number of MPI workers, launch cmd, etc.
manager = MPIWorkerManager(9)

# start MPI workers and add them as Julia workers too
addprocs(manager)

@mpi_do manager begin
function calculate_partition(p, N, nrows, ncols)
    # Calculates the row and column index ranges of processor p
    # Get row and column number for processor p
    if mod(p, ncols) == 0
        i = div(p, ncols)
    else
        i = div(p, ncols) + 1
    end
    j = p - (i - 1) * ncols
    # Rows
    if mod(N, nrows) == 0
        prows = div(N, nrows)
        row_range = ((i - 1) * prows + 1):(i * prows)
    else
        # nlower processors get the smaller partition
        nlower = nrows - mod(N, nrows)
        n_floor = div(N, nrows)
        if i <= nlower
            row_range = ((i - 1) * n_floor + 1):(i * n_floor)
        else
            row_range = ((i - 1) * n_floor + (i - nlower)):(i * n_floor + (i - nlower))
        end
    end
    # Columns
    if mod(N, ncols) == 0
        pcols = div(N, ncols)
        col_range = ((j - 1) * pcols + 1):(j * pcols)
    else
        nlower = ncols - mod(N, ncols)
        n_floor = div(N, ncols)
        if j <= nlower
            col_range = ((j - 1) * n_floor + 1):(j * n_floor)
        else
            col_range = ((j - 1) * n_floor + (j - nlower)):(j * n_floor + (j - nlower))
        end
    end
    # Add 1 to each column index because of ghost cells
    col_range = col_range .+ 1
    return row_range, col_range
end

function update_grid(grid)
    # Returns the updated grid as an (M-2) x (N-2) matrix, where M and N are the sizes of grid
    M = size(grid, 1)
    N = size(grid, 2)
    # Remove ghost cells
    g_left  = grid[2:M-1, 1:N-2]
    g_right = grid[2:M-1, 3:N]
    g_up    = grid[1:M-2, 2:N-1]
    g_down  = grid[3:M, 2:N-1]
    # Jacobi iteration
    return 0.25 * (g_up + g_down + g_left + g_right)
end
using MPI
comm = MPI.COMM_WORLD
id = MPI.Comm_rank(comm) + 1

M = 50
N = M + 2
ϵ = 10^-5           # Stopping threshold
nrows = 3           # Number of grid rows
ncols = 3           # Number of grid columns
n_procs = nrows * ncols
@assert n_procs == MPI.Comm_size(comm)
max_diffs = ones(n_procs)                 # Differences between iterations
max_diff_buf = MPI.UBuffer(max_diffs, 1)  # Buffer to store maximum differences

# initialize grid
if id == 1
    grid_a = zeros(M, N)
    grid_a[1, :] .= 1
    grid_b = zeros(M, N)
    grid_b[1, :] .= 1
else
    grid_a = nothing
    grid_b = nothing
end

# Broadcast matrix to other processors
grid_a = MPI.bcast(grid_a, 0, comm)
grid_b = MPI.bcast(grid_b, 0, comm)

# Determine if processor is in top or bottom row of grid
top_pos = id <= ncols
bottom_pos = id > ((nrows - 1) * ncols)
local grid_a_old = false   # Grid a is the source grid for the first update

# Get local partition
ind_rows, ind_cols = calculate_partition(id, M, nrows, ncols)
println("Proc $(id) gets rows $(ind_rows) and columns $(ind_cols)")

# Determine neighbors
n_left = id - 1
n_right = id + 1
n_down = id + ncols
n_up = id - ncols
if mod(id, ncols) == 1
    # Left neighbor is last in row
    n_left = id + ncols - 1
end
if mod(id, ncols) == 0
    # Right neighbor is first in row
    n_right = id - ncols + 1
end
local finished = false

# Perform SOR
while !finished
    # Flip old and new grid
    grid_a_old = !grid_a_old

    # Determine which grid is updated
    if grid_a_old
        old_grid = grid_a
        new_grid = grid_b
    else
        old_grid = grid_b
        new_grid = grid_a
    end

    # Send left and right columns
    left_ind = first(ind_cols)
    right_ind = last(ind_cols)
    left_col = old_grid[ind_rows, left_ind]
    right_col = old_grid[ind_rows, right_ind]
    slreq = MPI.Isend(left_col, comm; dest=n_left-1)
    srreq = MPI.Isend(right_col, comm; dest=n_right-1)

    # Send bottom row if not at the bottom
    bottom_ind = last(ind_rows)
    if !bottom_pos
        bottom_row = old_grid[bottom_ind, ind_cols]
        sbreq = MPI.Isend(bottom_row, comm; dest=n_down-1)
    end

    # Send top row if not at the top
    top_ind = first(ind_rows)
    if !top_pos
        top_row = old_grid[top_ind, ind_cols]
        streq = MPI.Isend(top_row, comm; dest=n_up-1)
    end

    # Receive left and right columns
    left_buf = Array{Float64,1}(undef, length(ind_rows))
    right_buf = Array{Float64,1}(undef, length(ind_rows))
    rlreq = MPI.Irecv!(left_buf, comm; source=n_left-1)
    rrreq = MPI.Irecv!(right_buf, comm; source=n_right-1)

    # Receive top row if not at the top
    if !top_pos
        top_buf = Array{Float64,1}(undef, length(ind_cols))
        rtreq = MPI.Irecv!(top_buf, comm; source=n_up-1)
    end

    # Receive bottom row if not at the bottom
    if !bottom_pos
        bottom_buf = Array{Float64,1}(undef, length(ind_cols))
        rbreq = MPI.Irecv!(bottom_buf, comm; source=n_down-1)
    end

    # Wait for results and write the received ghost cells
    statlr = MPI.Waitall([rlreq, rrreq], MPI.Status)
    old_grid[ind_rows, left_ind-1] = left_buf
    old_grid[ind_rows, right_ind+1] = right_buf

    if !top_pos
        statt = MPI.Wait(rtreq)
        old_grid[top_ind-1, ind_cols] = top_buf
    end

    if !bottom_pos
        statb = MPI.Wait(rbreq)
        old_grid[bottom_ind+1, ind_cols] = bottom_buf
    end

    # Get local subgrid including ghost cells
    if !top_pos & !bottom_pos
        local_with_ghosts = old_grid[top_ind-1:bottom_ind+1, left_ind-1:right_ind+1]
        lb_row = top_ind
        ub_row = bottom_ind
    elseif top_pos
        local_with_ghosts = old_grid[top_ind:bottom_ind+1, left_ind-1:right_ind+1]
        lb_row = top_ind + 1
        ub_row = bottom_ind
    elseif bottom_pos
        local_with_ghosts = old_grid[top_ind-1:bottom_ind, left_ind-1:right_ind+1]
        lb_row = top_ind
        ub_row = bottom_ind - 1
    end

    # Perform one step of the Jacobi iteration
    new_grid[lb_row:ub_row, left_ind:right_ind] = update_grid(local_with_ghosts)

    # Calculate maximum difference between iterations
    diffs = abs.(new_grid[lb_row:ub_row, left_ind:right_ind] - old_grid[lb_row:ub_row, left_ind:right_ind])
    maxdiff = maximum(diffs)

    # Gather maxdiffs in processor 1
    MPI.Gather!(maxdiff, max_diff_buf, comm; root=0)

    # First processor determines if all changes are below the threshold
    if id == 1
        if all(max_diffs .< ϵ)
            finished = true
            println("THRESHOLD SUBCEEDED - TERMINATE SOR")
        end
    end

    finished = MPI.bcast(finished, 0, comm)

    if finished
        # Set ghost cells to zero again so MPI.Reduce gives correct output
        new_grid[ind_rows, left_ind-1] .= 0.0
        new_grid[ind_rows, right_ind+1] .= 0.0
        if !bottom_pos
            new_grid[bottom_ind+1, ind_cols] .= 0.0
        end
        if !top_pos
            new_grid[top_ind-1, ind_cols] .= 0.0
        end
    end
end
using DelimitedFiles

# Reduce matrix & store result
if !grid_a_old
    sor_result = grid_a
else
    sor_result = grid_b
end

MPI.Reduce!(sor_result, +, comm; root=0)
sor_result[1, :] .= 1.0
if id == 1
    writedlm("SOR_result.txt", sor_result)
end

MPI.Finalize()
end
      From worker 5:  Proc 4 gets rows 17:33 and columns 2:17
      From worker 8:  Proc 7 gets rows 34:50 and columns 2:17
      From worker 3:  Proc 2 gets rows 1:16 and columns 18:34
      From worker 6:  Proc 5 gets rows 17:33 and columns 18:34
      From worker 9:  Proc 8 gets rows 34:50 and columns 18:34
      From worker 4:  Proc 3 gets rows 1:16 and columns 35:51
      From worker 10: Proc 9 gets rows 34:50 and columns 35:51
      From worker 7:  Proc 6 gets rows 17:33 and columns 35:51
      From worker 2:  Proc 1 gets rows 1:16 and columns 2:17
      From worker 2:  THRESHOLD SUBCEEDED - TERMINATE SOR
# Test if solution is identical with analytical solution
sol = analytical_solution(M)
# Bring solution into correct form
sol = reverse(transpose(sol), dims=1)
@test maximum(abs.(sol - final_grid[:, 2:N-1])) < 0.01 * M
Julia has its own way of running code and using packages. Many educational sources about Julia assume that you already have this basic knowledge, which can be confusing to new users. In this lesson, we will learn these basic skills so that you can start learning more about Julia.
julia> ?
After typing ?, the command prompt changes to help?>. It means we are in help mode. Now, we can type a function name to see its documentation.
The REPL comes with two more modes, namely package and shell modes. To enter package mode type
julia> ]
Package mode is used to install and manage packages. We are going to discuss package mode in greater detail later. To return to normal mode, press the backspace key several times.
To enter shell mode, type a semicolon (;):

julia> ;
The prompt should have changed to shell> indicating that we are in shell mode. Now you can type commands that you would normally do on your system command line. For instance,
shell> ls
will display the contents of the current folder in Mac or Linux. Using shell mode in Windows is not straightforward, and thus not recommended for beginners.
Real-world Julia programs are not typed in the REPL in practice. They are written in one or more files and included in the REPL. To try this, create a new file called hello.jl, write the code of the "Hello world" example above, and save it. If you are using VSCode, you can create the file using File > New File > Julia File. Once the file is saved with the name hello.jl, execute it as follows
julia> include("hello.jl")
Warning: Make sure that the file "hello.jl" is located in the current working directory of your Julia session. You can query the current directory with the function pwd(), and change to another directory with the function cd() if needed. Also, make sure that the file extension is .jl.
The recommended way of running Julia code is using the REPL, as we did. But it is also possible to run code directly from the system command line. To this end, open a terminal and call Julia followed by the path to the file containing the code you want to execute.

$ julia hello.jl
The previous line assumes that you have Julia properly installed in the system and that it is usable from the terminal. On UNIX systems (Linux and Mac), the Julia binary needs to be in one of the directories listed in the PATH environment variable. To check that Julia is properly installed, you can use

$ julia --version
If this runs without error and you see a version number, you are good to go!
Tip: In this tutorial, when a code snippet starts with $, it should be run in the terminal. Otherwise, the code is to be run in the Julia REPL.
Tip: Avoid calling Julia code from the terminal; use the Julia REPL instead! Each time you call Julia from the terminal, you start a fresh Julia session and Julia will need to compile your code from scratch. This can be time consuming for large projects. In contrast, if you execute code in the REPL, Julia will compile code incrementally, which is much faster. Running code on a cluster (like on DAS-5 for the Julia assignment) is among the few situations in which you need to run Julia code from the terminal.
Since we are in a parallel computing course, let's run a parallel "hello world" example in Julia. Open a Julia REPL and write
julia> using Distributed
julia> @everywhere println("Hello, world! I am proc $(myid()) from $(nprocs())")
Here, we are using the Distributed package, which is part of the Julia standard library that provides distributed memory parallel support. The code prints the process id and the number of processes in the current Julia session.
You will probably only see output from 1 process. We need to add more processes to run the example in parallel. This is done with the addprocs function.

julia> addprocs(3)
We have added 3 new processes; together with the original one, we now have 4 processes. Run the code again.

julia> @everywhere println("Hello, world! I am proc $(myid()) from $(nprocs())")
Now, you should see output from 4 processes.
It is possible to specify the number of processes when starting Julia from the terminal with the -p argument (useful, e.g., when running in a cluster). If you launch Julia from the terminal as

$ julia -p 3

and then run

julia> @everywhere println("Hello, world! I am proc $(myid()) from $(nprocs())")

you will get the same result as before.
One of the most useful features of Julia is its package manager. It allows one to install Julia packages in a straightforward and platform independent way. To illustrate this, let us consider the following parallel "Hello world" example.
Copy the following block of code into a new file named "hello_mpi.jl":

# file hello_mpi.jl
using MPI
MPI.Init()
comm = MPI.COMM_WORLD
rank = MPI.Comm_rank(comm)
nranks = MPI.Comm_size(comm)
println("Hello world, I am rank $rank of $nranks")
As you can see from this example, one can access MPI from Julia in a clean way, without type annotations and other complexities of C/C++ code.
Now, run the file from the REPL:

julia> include("hello_mpi.jl")

It probably didn't work, right? Read the error message and note that the MPI package needs to be installed to run this code.
To install a package, we need to enter package mode. Remember that we entered into help mode by typing ?. Package mode is activated by typing ]
julia> ]

At this point, the prompt should have changed to (@v1.8) pkg>, indicating that we are in package mode. The text in parentheses indicates the active project, i.e., where packages are going to be installed. In this case, we are working with the global project associated with our Julia installation (which is Julia 1.8 in this example, but it can be another version in your case).
To install the MPI package, type
(@v1.8) pkg> add MPI
Congrats, you have installed MPI!
Tip: Many Julia package names end with .jl. This is just a way of signaling that a package is written in Julia. When using such packages, the .jl needs to be omitted. In this case, we have installed the MPI.jl package even though we have only typed MPI in the REPL.
Tip: The package you have installed is the Julia interface to MPI, called MPI.jl. Note that it is not an MPI library by itself. It is just a thin wrapper between MPI and Julia. To use this interface, you need an actual MPI library installed in your system, such as OpenMPI or MPICH. Julia downloads and installs an MPI library for you, but it is also possible to use an MPI library already available in your system. This is useful, e.g., when running on HPC clusters. See the documentation of MPI.jl for further details.
To check that the package was installed properly, exit package mode by pressing the backspace key several times, and run it again
julia> include("hello_mpi.jl")

Now, it should work, but you probably get output from a single MPI rank only.
To run MPI applications in parallel, you need a launcher like mpiexec. MPI codes written in Julia are not an exception to this rule. From the system terminal, you can run
$ mpiexec -np 4 julia hello_mpi.jl
But it will probably not work, since the version of mpiexec needs to match the MPI library we are using from Julia. You can find the path to the mpiexec binary you need to use with these commands:

julia> using MPI
julia> MPI.mpiexec_path
and then try again

$ /path/to/my/mpiexec -np 4 julia hello_mpi.jl

with your particular path.
However, this is not very convenient. Don't worry if you could not make it work! A more elegant way to run MPI code is from the Julia REPL directly, by using these commands:
julia> using MPI
julia> mpiexec(cmd -> run(`$cmd -np 4 julia hello_mpi.jl`))
We have installed the MPI package globally and it will be available in all Julia sessions. However, in some situations, we want to work with different versions of the same package or to install packages in an isolated way to avoid potential conflicts with other packages. This can be done by using local projects.
A project is simply a folder in the hard disk. To use a particular folder as your project, you need to activate it. This is done by entering package mode and using the activate command followed by the path to the folder you want to activate.
(@v1.8) pkg> activate .
The previous command will activate the current working directory. Note that the dot . is indeed the path to the current folder.
The prompt has changed to (lessons) pkg> indicating that we are in the project within the lessons folder. The particular folder name can be different in your case.
Tip: You can activate a project directly when opening Julia from the terminal using the --project flag. The command $ julia --project=. will open Julia and activate a project in the current directory. You can also achieve the same effect by setting the environment variable JULIA_PROJECT to the path of the folder you want to activate.

Tip: The active project folder and the current working directory are two independent concepts! For instance, (@v1.8) pkg> activate folderB followed by julia> cd("folderA") will activate the project in folderB and change the current working directory to folderA.
At this point all package-related operations will be local to the new project. For instance, install the DataFrames package.
Now, we can return to the global project to check that DataFrames has not been installed there. To return to the global environment, use activate without a folder name.
The information about a project is stored in two files, Project.toml and Manifest.toml.

Project.toml contains the packages explicitly installed (the direct dependencies).

Manifest.toml contains direct and indirect dependencies along with the concrete version of each package.

In other words, Project.toml contains the packages relevant for the user, whereas Manifest.toml is a detailed snapshot of all dependencies. The Manifest.toml can be used to reproduce the same environment on another machine.
You can see the path to the current Project.toml file by using the status operator (or st in its short form) while in package mode
(@v1.8) pkg> status
The information about the Manifest.toml can be inspected by passing the -m flag.
Copy the contents of previous code block into a file called Project.toml and place it in an empty folder named newproject. It is important that the file is named Project.toml. You can create a new folder from the REPL with
julia> mkdir("newproject")
To install all the packages registered in this file you need to activate the folder containing your Project.toml file
(@v1.8) pkg> activate newproject

and then instantiate it:

(newproject) pkg> instantiate
The instantiate command will download and install all listed packages and their dependencies in one go.
In some situations, it is required to use package commands from Julia code, e.g., to automate the installation and deployment of Julia applications. This can be done using the Pkg package.
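For instance, the REPL package-mode commands used earlier on this page have direct equivalents in the Pkg API. A sketch (the folder name newproject is the one created above; Pkg.add requires network access):

```julia
using Pkg

Pkg.activate("newproject")   # same as pkg> activate newproject
Pkg.add("MPI")               # same as pkg> add MPI
Pkg.instantiate()            # same as pkg> instantiate
Pkg.status()                 # same as pkg> status
```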
We have learned the basics of how to work with Julia. Now, you should be ready to start learning more about the language. If you want to further dig into the topics we have covered here, you can take a look at the following links.
In this notebook, we will learn the basics of distributed computing in Julia. In particular, we will focus on the Distributed module available in the Julia standard library. The main topics we are going to cover are:
How to create Julia processes

How to execute code remotely

How to send and receive data
With this knowledge you will be able to implement simple and complex parallel algorithms in Julia.
First of all, we need several processes in order to run algorithms in parallel. In this section, we discuss different ways to create new processes in Julia.
The simplest way of creating processes for parallel computing is to add them locally in the current Julia session. This is done by using the following commands.
using Distributed

addprocs(3)
The last cell created 3 new Julia processes. By default, they run locally on the same computer as the current Julia session, using multiple cores if possible. However, it is also possible to start the new processes on other machines, as long as they are interconnected (more details on this later).

Tip: We can also start new processes when launching Julia from the command line by using the `-p` command-line argument. E.g., `$ julia -p 3` would launch Julia with 3 extra processes.
When adding the new processes, you can imagine that 3 new Julia REPLs have started under the hood (see figure below). The main point of the Distributed module is to provide a way of coordinating all these Julia processes to run code in parallel. It is important to note that each process runs in a separate Julia instance. This means that each process has its own memory space and therefore they do not share memory. This results in distributed-memory parallelism, and allows one to run processes on different machines.
The following functions provide basic information about the underlying processes. If more than one process is available, the first process is called the main or master process and the others are the workers. If only a single process is available, it is the master and the first worker simultaneously.
procs()

workers()

nprocs()

nworkers()

myid()

@everywhere println(myid())
In the previous cell, we used the macro @everywhere, which evaluates the given code on all processes. As a result, each process will print its own process id.
For large parallel computations, one typically needs to use several computers in parallel. Function addprocs also provides a low-level method to start workers on other machines. The next code example would create 3 workers in server1 and 4 new workers in server2 (see figure below). Under the hood, Julia connects via ssh to the other machines and starts the new processes there. For this to work, the local computer and the remote servers need to be properly configured (see the Julia manual for details).
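A sketch of such a call (server1 and server2 are placeholder host names; this only works if passwordless ssh to those machines is configured):

```julia
using Distributed

# Each tuple is (machine, number of workers); Julia starts the workers via ssh.
addprocs([("server1", 3), ("server2", 4)])
```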
This way of starting workers on other machines is very low-level. Happily, there is a Julia package called ClusterManagers.jl that helps to create workers remotely in a number of common scenarios. For instance, when running the following code from the login node of a computer cluster, it will submit a job to the cluster queue allocating 128 threads. A worker will be created for each one of these threads. If each compute node has 64 cores, 2 compute nodes will be used to host the 128 workers (see below).
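A sketch of such a submission, assuming a Slurm-based cluster (SlurmManager is provided by ClusterManagers.jl; the number of tasks is the example value from the text):

```julia
using Distributed, ClusterManagers

# Submit a job allocating 128 tasks; one Julia worker is created per task.
addprocs(SlurmManager(128))
```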
The most basic thing we can do with a remote processor is to execute a given function on it. This is done by using function remotecall. To make clear how local and remote executions compare, let's call a function locally and then remotely. Next cell uses function ones to create a matrix locally.
a = ones(2, 3)
The next cell does the same operation, but remotely on process 2. Note that remotecall takes the function we want to execute remotely, the process id where we want to execute it and, finally, the function arguments.
proc = 2
ftr = remotecall(ones, proc, 2, 3)
Note that remotecall does not return the result of the underlying function, but a Future. This object represents a reference to a task running on the remote process. To move a copy of the result to the current process we can use fetch.
It is important to note that remotecall does not wait for the remote process to finish. It returns immediately. This can be checked by calling remotely the following function, which sleeps for 10 seconds and then generates a matrix.
fun = (m, n) -> (sleep(10); ones(m, n))
When running the next cell, it will return immediately, even though the remote process will sleep for 10 seconds. We can even run code in parallel. To try this, execute the cell after the next one while the remote call is still running on the worker.
proc = 2
ftr = remotecall(fun, proc, 2, 3)

1 + 1
However, when fetching the result, the current process blocks, waiting until the result is available in the remote process and arrives at its destination.
You have probably realized that in order to use remotecall we have written auxiliary anonymous functions. They are needed to wrap the code we want to execute remotely. Writing these functions can be tedious. Happily, the macro @spawnat generates an auxiliary function from the given block of code and calls remotecall for us. For instance, the two following cells are equivalent.
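A self-contained sketch of this equivalence (we add a worker first so that a process id other than 1 exists; the variable names are our own):

```julia
using Distributed
nprocs() == 1 && addprocs(1)   # make sure a worker exists
w = first(workers())

# remotecall with a hand-written anonymous function...
ftr1 = remotecall(() -> 1 .+ ones(2, 3), w)
# ...and the equivalent @spawnat form, which builds that function for us
ftr2 = @spawnat w 1 .+ ones(2, 3)

fetch(ftr1) == fetch(ftr2)
```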
The relation between @async and @spawnat is obvious. From the user perspective, they work almost in the same way. However, @async generates a task that runs asynchronously in the current process, whereas @spawnat executes a task in a remote process in parallel. In both cases, the result is obtained using fetch.
Data movement is a crucial part of distributed-memory computations, and it is usually one of their main computational bottlenecks. Being aware of the data we are moving when using functions such as remotecall is important to write efficient distributed algorithms in Julia. Julia also provides a special type of channel, called a remote channel, to send and receive data between processes.
When using remotecall, we send to the remote process a function and its arguments. In this example, we send the function name + and matrices a and b to proc 4. When fetching the result, we receive a copy of the matrix from proc 4.
Be aware that data movements can be implicit. This usually happens when we execute remotely functions that capture variables. In the following example, we are also sending matrices a and b to proc 4, even though they do not appear as arguments in the remote call. These variables are captured by the anonymous function and will be sent to proc 4.
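A self-contained sketch of both forms (we use whatever worker is available rather than proc 4 specifically; the matrix values are illustrative):

```julia
using Distributed
nprocs() == 1 && addprocs(1)
w = first(workers())

a = ones(3, 3)
b = 2 .* ones(3, 3)

# Explicit movement: a and b are shipped as arguments of the remote call
ftr1 = remotecall(+, w, a, b)
# Implicit movement: a and b are captured by the anonymous function and shipped too
ftr2 = remotecall(() -> a + b, w)

fetch(ftr1) == fetch(ftr2)
```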
Another way of moving data between processes is to use remote channels. Their usage is very similar to conventional channels for moving data between tasks, but there are some important differences. In the next cell, we create a remote channel. Process 4 puts several values into it and closes the channel. As for conventional channels, calls to put! are blocking, but the next cell does not block the master process, since the call to put! runs asynchronously on process 4.
We can take values from the remote channel from any process using take!. Run the next cell several times. The sixth time, it should raise an error since the channel was closed.
Just like conventional channels, remote channels can be buffered. The buffer is stored in the process that owns the remote channel. By default, this corresponds to the process that creates the remote channel, but it can be a different one. For instance, process 3 will be the owner in the following example.
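The creation of such a channel can be sketched as follows (the buffer size 5 matches the five values put below; the element type Int and the choice of buffer size are our assumptions):

```julia
using Distributed
nprocs() < 4 && addprocs(4 - nprocs())   # make sure processes 2–4 exist

# Buffered remote channel: the buffer of size 5 lives on process 3
chnl = RemoteChannel(() -> Channel{Int}(5), 3)
```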
@spawnat 4 begin
    println("start")
    for i in 1:5
        put!(chnl, i)
        println("I have put $i")
    end
    close(chnl)
    println("stop")
end;
Note that since the channel is buffered, worker 4 can start putting values into it before any call to take!. Run the next cell several times until the channel is closed.
Now, try to iterate over the channel in a for loop. It will result in an error, since remote channels are not iterable.
for j in chnl
    @show j
end
If we want to take values from a remote channel and stop automatically when the channel is closed, we can combine a while loop with a try-catch statement. This works since take! raises an error if the channel is closed, which will execute the catch block and break the loop.
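A self-contained sketch of this pattern (here the channel is owned by the master process for simplicity, and we fill and close it up front):

```julia
using Distributed

chnl = RemoteChannel(() -> Channel{Int}(5))
foreach(i -> put!(chnl, i), 1:5)
close(chnl)

received = Int[]
while true
    try
        push!(received, take!(chnl))   # take! throws once the channel is closed and drained
    catch
        break
    end
end
received   # the five buffered values survive the close
```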
Remember: each process runs in a separate Julia instance
In particular, this means that each process can load different functions or packages. In consequence, it is important to make sure that the code we run is defined in the corresponding process.
This is a very common pitfall when running parallel code. If we define a function in a process, it is not automatically available in the other processes. This is illustrated in the next example. The remote call in the last line of the next cell will fail, since the function sleep_ones is only defined in the local process.
If a function has a name, Julia only sends the function name to the corresponding process. Then, Julia looks for the corresponding function code in the remote process and executes it. This is why the function needs to be defined also in the remote process. However, if a function is anonymous, Julia needs to send the complete function definition to the remote process. This is why anonymous functions do not need to be defined with the macro @everywhere to work in a remote call.
A package loaded in one process is not automatically available in the other ones. For instance, if we load the LinearAlgebra package in the current process and use one of its exported functions in another process, we will get an error.
using LinearAlgebra

@fetchfrom 3 norm([1, 2, 3])
To fix this, we can load the package on all processes with the @everywhere macro.
@everywhere using LinearAlgebra

@fetchfrom 3 norm([1, 2, 3])
Each process has its own active package environment
This is another very common source of errors. You can check that if you activate the current directory, this has no effect on the other processes.
]activate .
We have activated the current folder. Now let's see which project is active in another process, say process 2. You will see that process 2 is probably still using the global package environment.
@everywhere using Pkg

@spawnat 2 Pkg.status();
To fix this, you need to activate the current directory on all processes.
This macro is used when we want to perform a very large for loop made of independent small iterations. To illustrate this, let's consider again the function that computes $\pi$ with the Leibniz formula.
Parallelizing this function might require some work with low-level functions like remotecall, but it is trivial using the macro @distributed. This macro runs the for loop using the available processes and optionally reduces the result using a given reduction function (+ in this case).
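A sketch of both versions, using the Leibniz series $\pi = 4\sum_{i \geq 0} (-1)^i/(2i+1)$ (the function names are our own choices):

```julia
using Distributed
nprocs() == 1 && addprocs(4)

# Serial Leibniz sum
function compute_pi(n)
    s = 0.0
    for i in 0:n-1
        s += (iseven(i) ? 1.0 : -1.0) / (2i + 1)
    end
    4 * s
end

# Parallel version: @distributed splits the iteration range over the
# available workers and reduces the partial sums with (+)
function compute_pi_dist(n)
    s = @distributed (+) for i in 0:n-1
        (iseven(i) ? 1.0 : -1.0) / (2i + 1)
    end
    4 * s
end

compute_pi_dist(100_000_000)
```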
This function is used when we want to call a very expensive function a small number of times and we want to distribute these evaluations over the available processes. To illustrate the usage of pmap, consider the following example. The next cell generates sixty 300x300 matrices. The goal is to compute the singular value decomposition of each of them. This operation is known to be expensive for large matrices. Thus, this is a perfect scenario for pmap.
a = [rand(300,300) for i in 1:60];
First, let's measure the serial performance.
using LinearAlgebra
@time svd.(a);
If we use pmap instead of broadcast, the different calls to svd will be distributed over the available processes.
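A minimal sketch of the parallel version (it assumes the array a from above, some worker processes, and LinearAlgebra loaded everywhere):

```julia
using Distributed
@everywhere using LinearAlgebra

# pmap(svd, a) computes the same result as svd.(a), but the
# individual svd calls are distributed over the worker processes.
@time pmap(svd, a);
```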
We have seen the basics of distributed computing in Julia. The programming model is essentially an extension of tasks and channels to parallel computations on multiple machines. The low-level building blocks are remotecall and RemoteChannel, but there are other functions and macros, like pmap and @distributed, that simplify the implementation of parallel algorithms.
diff --git a/dev/notebooks/figures/fig_matmu_intro.svg b/dev/notebooks/figures/fig_matmu_intro.svg
deleted file mode 100644
index 33e31dd..0000000
--- a/dev/notebooks/figures/fig_matmu_intro.svg
+++ /dev/null
@@ -1,6487 +0,0 @@
-
-
diff --git a/dev/notebooks/figures/fig_matmul_0.png b/dev/notebooks/figures/fig_matmul_0.png
deleted file mode 100644
index a7e161f..0000000
Binary files a/dev/notebooks/figures/fig_matmul_0.png and /dev/null differ
diff --git a/dev/notebooks/figures/fig_matmul_1.png b/dev/notebooks/figures/fig_matmul_1.png
deleted file mode 100644
index 38f3fdc..0000000
Binary files a/dev/notebooks/figures/fig_matmul_1.png and /dev/null differ
diff --git a/dev/notebooks/figures/fig_matmul_1.svg b/dev/notebooks/figures/fig_matmul_1.svg
deleted file mode 100644
index ebfd026..0000000
--- a/dev/notebooks/figures/fig_matmul_1.svg
+++ /dev/null
@@ -1,510 +0,0 @@
-
-
diff --git a/dev/notebooks/figures/fig_matmul_2.png b/dev/notebooks/figures/fig_matmul_2.png
deleted file mode 100644
index 872dec5..0000000
Binary files a/dev/notebooks/figures/fig_matmul_2.png and /dev/null differ
diff --git a/dev/notebooks/figures/fig_matmul_dist.svg b/dev/notebooks/figures/fig_matmul_dist.svg
deleted file mode 100644
index 3dcae26..0000000
--- a/dev/notebooks/figures/fig_matmul_dist.svg
+++ /dev/null
@@ -1,611 +0,0 @@
-
-
diff --git a/dev/notebooks/figures/fig_matmul_intro_0.png b/dev/notebooks/figures/fig_matmul_intro_0.png
deleted file mode 100644
index 2b59373..0000000
Binary files a/dev/notebooks/figures/fig_matmul_intro_0.png and /dev/null differ
diff --git a/dev/notebooks/figures/fig_matmul_intro_2.png b/dev/notebooks/figures/fig_matmul_intro_2.png
deleted file mode 100644
index 0065858..0000000
Binary files a/dev/notebooks/figures/fig_matmul_intro_2.png and /dev/null differ
diff --git a/dev/notebooks/figures/fig_matmul_intro_3.png b/dev/notebooks/figures/fig_matmul_intro_3.png
deleted file mode 100644
index edd478b..0000000
Binary files a/dev/notebooks/figures/fig_matmul_intro_3.png and /dev/null differ
diff --git a/dev/notebooks/figures/fig_matmul_intro_4.png b/dev/notebooks/figures/fig_matmul_intro_4.png
deleted file mode 100644
index 77acc18..0000000
Binary files a/dev/notebooks/figures/fig_matmul_intro_4.png and /dev/null differ
diff --git a/dev/notebooks/figures/fig_matmul_intro_algs.png b/dev/notebooks/figures/fig_matmul_intro_algs.png
deleted file mode 100644
index 738b99f..0000000
Binary files a/dev/notebooks/figures/fig_matmul_intro_algs.png and /dev/null differ
diff --git a/dev/notebooks/figures/fig_matmul_intro_algs_1.png b/dev/notebooks/figures/fig_matmul_intro_algs_1.png
deleted file mode 100644
index 0065858..0000000
Binary files a/dev/notebooks/figures/fig_matmul_intro_algs_1.png and /dev/null differ
diff --git a/dev/notebooks/figures/fig_matmul_intro_q_1.png b/dev/notebooks/figures/fig_matmul_intro_q_1.png
deleted file mode 100644
index 412f50d..0000000
Binary files a/dev/notebooks/figures/fig_matmul_intro_q_1.png and /dev/null differ
diff --git a/dev/notebooks/figures/fig_matmul_intro_q_2.png b/dev/notebooks/figures/fig_matmul_intro_q_2.png
deleted file mode 100644
index c4f337c..0000000
Binary files a/dev/notebooks/figures/fig_matmul_intro_q_2.png and /dev/null differ
diff --git a/dev/notebooks/figures/fig_matmul_intro_q_3.png b/dev/notebooks/figures/fig_matmul_intro_q_3.png
deleted file mode 100644
index c21b3b2..0000000
Binary files a/dev/notebooks/figures/fig_matmul_intro_q_3.png and /dev/null differ
diff --git a/dev/notebooks/figures/fig_matmul_machines.png b/dev/notebooks/figures/fig_matmul_machines.png
deleted file mode 100644
index c43343f..0000000
Binary files a/dev/notebooks/figures/fig_matmul_machines.png and /dev/null differ
diff --git a/dev/notebooks/figures/fig_matmul_machines.svg b/dev/notebooks/figures/fig_matmul_machines.svg
deleted file mode 100644
index 1303720..0000000
--- a/dev/notebooks/figures/fig_matmul_machines.svg
+++ /dev/null
@@ -1,4868 +0,0 @@
-
-
diff --git a/dev/notebooks/figures/fig_matmul_machines_1.png b/dev/notebooks/figures/fig_matmul_machines_1.png
deleted file mode 100644
index e2c3675..0000000
Binary files a/dev/notebooks/figures/fig_matmul_machines_1.png and /dev/null differ
diff --git a/dev/notebooks/figures/fig_matmul_machines_2.png b/dev/notebooks/figures/fig_matmul_machines_2.png
deleted file mode 100644
index 28836fb..0000000
Binary files a/dev/notebooks/figures/fig_matmul_machines_2.png and /dev/null differ
diff --git a/dev/notebooks/figures/fig_matmul_machines_3.png b/dev/notebooks/figures/fig_matmul_machines_3.png
deleted file mode 100644
index 5acbfdd..0000000
Binary files a/dev/notebooks/figures/fig_matmul_machines_3.png and /dev/null differ
diff --git a/dev/notebooks/jacobi_2D/index.html b/dev/notebooks/jacobi_2D/index.html
new file mode 100644
index 0000000..fbc6b9a
--- /dev/null
+++ b/dev/notebooks/jacobi_2D/index.html
@@ -0,0 +1,21 @@
+
+- · XM_40017
\n",
- "Note: The values computed by the Jacobi method are linearly increasing from -1 to 1. It is possible to show mathematically that the method we implemented in the function above approximates a 1D Laplace equation via a finite difference method and the solution of this equation in this setup is a linear function.\n",
+ "Note: The values computed by the Jacobi method are linearly increasing from -1 to 1. It is possible to show mathematically that the method we implemented in the function above approximates a 1D Laplace equation via a finite difference method and the solution of this equation is a linear function.\n",
"
\n",
"\n",
"
\n",
@@ -151,11 +151,11 @@
"metadata": {},
"outputs": [],
"source": [
- "function gauss_seidel(n,nsteps)\n",
+ "function gauss_seidel(n,niters)\n",
" u = zeros(n+2)\n",
" u[1] = -1\n",
" u[end] = 1\n",
- " for t in 1:nsteps\n",
+ " for t in 1:niters\n",
" for i in 2:(n+1)\n",
" u[i] = 0.5*(u[i-1]+u[i+1])\n",
" end\n",
@@ -192,7 +192,7 @@
"
\n",
"\n",
"```julia\n",
- "for t in 1:nsteps\n",
+ "for t in 1:niters\n",
" for i in 2:(n+1)\n",
" u[i] = 0.5*(u[i-1]+u[i+1])\n",
" end\n",
@@ -265,7 +265,7 @@
"id": "1b3c8c05",
"metadata": {},
"source": [
- "### Ghost cells\n",
+ "### Ghost (aka halo) cells\n",
"\n",
"A usual way of handling this type of data dependency is using so-called ghost cells. Ghost cells represent the missing data dependencies in the data owned by each process. After importing the appropriate values from the neighbor processes, one can perform the usual sequential Jacobi update locally in each process."
]
@@ -297,7 +297,7 @@
"metadata": {},
"outputs": [],
"source": [
- "#TODO"
+ "#TODO give multiple options"
]
},
{
@@ -305,7 +305,11 @@
"id": "8ed4129c",
"metadata": {},
"source": [
- "## Implementation"
+ "## Implementation\n",
+ "\n",
+    "We consider the implementation using MPI. The programming model of MPI is generally better suited for data-parallel algorithms like this one than the task-based model provided by Distributed.jl. In any case, one can also implement it with Distributed, but it requires some extra effort to set up the remote channels correctly for the communication between neighboring processes.\n",
+ "\n",
+    "Take a look at the implementation below and try to understand it. Note that we have used MPIClusterManagers and Distributed just to run the MPI code in the notebook. When running it on a cluster, MPIClusterManagers and Distributed are not needed.\n"
]
},
{
@@ -352,7 +356,189 @@
"source": [
"@everywhere workers() begin\n",
" using MPI\n",
- " MPI.Initialized() && MPI.Init()\n",
+ " MPI.Initialized() || MPI.Init()\n",
+ " comm = MPI.Comm_dup(MPI.COMM_WORLD)\n",
+ " nw = MPI.Comm_size(comm)\n",
+ " iw = MPI.Comm_rank(comm)+1\n",
+ " function jacobi_mpi(n,niters)\n",
+ " if mod(n,nw) != 0\n",
+ " println(\"n must be a multiple of nw\")\n",
+ " MPI.Abort(comm,1)\n",
+ " end\n",
+ " n_own = div(n,nw)\n",
+ " u = zeros(n_own+2)\n",
+ " u[1] = -1\n",
+ " u[end] = 1\n",
+ " u_new = copy(u)\n",
+ " for t in 1:niters\n",
+ " reqs = MPI.Request[]\n",
+ " if iw != 1\n",
+ " neig_rank = (iw-1)-1\n",
+ " req = MPI.Isend(view(u,2:2),comm,dest=neig_rank,tag=0)\n",
+ " push!(reqs,req)\n",
+ " req = MPI.Irecv!(view(u,1:1),comm,source=neig_rank,tag=0)\n",
+ " push!(reqs,req)\n",
+ " end\n",
+ " if iw != nw\n",
+ " neig_rank = (iw+1)-1\n",
+    "            s = n_own+1\n",
+    "            r = n_own+2\n",
+ " req = MPI.Isend(view(u,s:s),comm,dest=neig_rank,tag=0)\n",
+ " push!(reqs,req)\n",
+ " req = MPI.Irecv!(view(u,r:r),comm,source=neig_rank,tag=0)\n",
+ " push!(reqs,req)\n",
+ " end\n",
+ " MPI.Waitall(reqs)\n",
+ " for i in 2:(n_own+1)\n",
+ " u_new[i] = 0.5*(u[i-1]+u[i+1])\n",
+ " end\n",
+ " u, u_new = u_new, u\n",
+ " end\n",
+ " u\n",
+ " @show u\n",
+ " end\n",
+ " niters = 100\n",
+ " load = 4\n",
+ " n = load*nw\n",
+ " jacobi_mpi(n,niters)\n",
+ "end"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "eff25246",
+ "metadata": {},
+ "source": [
+ "
\n",
+ "Question: How many messages per iteration are sent from a process away from the boundary?\n",
+ "
\n",
+ "\n",
+ " a) 1\n",
+ " b) 2\n",
+ " c) 3\n",
+ " d) 4\n",
+ "\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "98bd9b5e",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+    "# TODO (b) 2 messages. Add another question if you find it useful"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "c9aa2901",
+ "metadata": {},
+ "source": [
+ "### Latency hiding\n",
+ "\n",
+    "Note that we only need communications to update the values at the boundary of the portion owned by each process. The other values (the ones in green in the figure below) can be updated without communication. This provides the opportunity of overlapping the computation of the interior values (green cells in the figure) with the communication of the ghost values. This technique is called latency hiding, since we are hiding communication latency by overlapping it with computations that we need to do anyway.\n",
+ "\n",
+    "The modification of the implementation above to include latency hiding is left as an exercise (see below).\n"
+ ]
+ },
+ {
+ "attachments": {
+ "fig.png": {
+ "image/png": "iVBORw0KGgoAAAANSUhEUgAAA6gAAADyCAYAAABAvOgkAAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAAewgAAHsIBbtB1PgAAABl0RVh0U29mdHdhcmUAd3d3Lmlua3NjYXBlLm9yZ5vuPBoAACAASURBVHic7d17fFxlgf/xz0nTO71ZKKWFUrByKbctlB+2ogJVi0q5rCCLrhVd7bqC6y6yK+yq4OruqoDu/lxXWryh+/uhFERIqSDIRS2lVKwCcoe2SIG29ELb9Jpk9o9n0pxMZ5I0JPM8ST7v12tePU/mzMy3STrNN+ec58nmzp1bQJIkSZKkyGpiB5AkSZIkCSyokiRJkqRE1JaMZ2ZZtiJGEEmSJElS31IoFCYCdzaPWxXULMtWzJkz5+lqh5IkSZIk9T3z5s2jUGiZFslTfCVJkiRJSbCgSpIkSZKSYEGVJEmSJCXBgipJkiRJSoIFVZIkSZKUBAuqJEmSJCkJFlRJkiRJUhIsqJIkSZKkJFhQJUmSJElJsKBKkiRJkpJgQZUkSZIkJcGCKkmSJElKggVVkiRJkpQEC6okSZIkKQkWVEmSJElSEiyokiRJkqQkWFAlSZIkSUmwoEqSJEmSkmBBlSRJkiQlwYIqSZIkSUqCBVWSJEmSlAQLqiRJkiQpCRZUSZIkSVISLKiSJEmSpCRYUCVJkiRJSbCgSpIkSZKSYEGVJEmSJCXBgipJkiRJSkJt7ACSJCk5Q4D3FrcXAvURs1TboOJtF33r7y1JSbCgSpKkUqOBG4vbbwSe7+Tz1ABvAqYCU4BhxY9fQrrl75+AzwP3ADMiZ5GkPseCKkmSusN/Ax8Ehpe573LSLagxvBc4E3gGuDpyFkmKyoIqSZJKbQQuK26v7+RzTKalnK4qPudRrzNXNfwJeBD4YxVf83hgDvArqldQDwJOyN0OAzLgJuCzVcogSXuwoEqSpFKbga++zue4DrgKeBh4Bfgw8IPX+ZzVcF3x1ptNBZZWuG+/agaRpFIWVEmS1B3+X+wAPch3CJNRbdnLxx0GHEe4RvjhTrzuBuC3xcd+CBjfieeQpC7lMjOSJKnUQUCheDs0cpZquxC4F7imiq/5MqEkPrWXj5tFmMxqzl4+7mnC5FdvAN5FuCZ43V4+hyR1C4+gSpIktTgUOAVoquJrTiZcn7sWuK8Kr7epeJOk5HgEVZIkKa73EY6EfjF2EEmKzYIqSZIkSUqCBVWSJEmSlASvQZUkSR01AphU4b5n8LpGSdLrZEGVJEkd9Tbgtgr3nQ7cWcUsfcWhVJ6l98TinycBX6mwzy3Akq4OJUndxYIqSZI6aiOV19v06Gn3mAB8tp19jiveylmOBVVSD2JBlSRJHfVrYGrsEH3MC8BXK9x3InAa8Afgjgr7/L47QklSd7GgSpIkpet54LIK932GUFCXtLGPJPUozuIrSZIkSUqCR1AlSVJ3OI4wcVKz43Pbnwa25sbfArZUI5QkKW0WVEmS1B3amln2CyXjH2JBlSRhQZUkSXvaAswrbnd2dt4ncs/Rnq3t71I1W4C1wIbYQbrZ94EDc+NDi3/OBO7Kffxe4N+qFUqSLKiSJKnUBuCvX+dz/Lp462m+Vrz1dtOAw8t8fFzx1mxtdeJIUmBBlSRJiutm4En2vgzWEZaheb4Tr/l3wLAO7PdCJ55bkjrNgipJkhTX48Xb3nq6eOuMSuumSlJULjMjSZIkSUqCBVWSJEmSlAQLqiRJkiQpCRZUSZIkSVISLKiSJEmSpCRYUCVJkiRJSXCZGSWvUChkdXV1D8XO0ZZrrrnm4OXLlw8B2Lx586sbNmxIbWHzfYH9itvbgBXxopQ1GJiYGz8FNMWJUt6ECRMOp/hLvQsvvPDFqVOnbo4cqaIsy26cNWvWVdV4rbq6uh8UCoWjqvFanTF//vwx999//2iAHTt21K9evTq1NR2HAhNy4ydiBWnDkbntF4D6WEEqmED4PAKsA9ZEzLKH
kSNHjh4+fPgYgPHjx2+//PLLl8fO1JaBAweeMnPmzG7/Gt9+++1va2xsvKa7X6eztm/fXnPppZce3jxevXr1ih07dmyLmamMiYT/PyGsoftqvChl7Uf4+QNgK7AyYpY9DBo0aMiYMWMObh5/85vffCLLspiR2vOpM88888HYIarBgqrkffGLX8xOOOGEqbFztGX9+vW88MLun3uHAge3sXtsQ2n5DyNVx8cOUCr39aWxsfHwxP8TW1KtFyoUCkdlWZbsv89t27aV/tscEzFORyT7uSw6sv1doiot/NFt3LiRjRs3AjBgwIChWZaNjhypTTU1NVX52bCxsXFUyu8dWZa1et8Hkv1FXNFQWv+iNzVDaflFeRK2b99e+jWemvL/7VmWjYidoVo8xVeSJEmSlASPoKonui7Lsudih8jbtWvXFRRPsxk/fvzKVatWfTtypFJ/BbypuL0T+ELELOVMBc7Njb8MbImUpaza2tqvNDQ0ALB+/fp7syy7M3KkUrMLhcLkmAEKhcIdNTU198XMUGrz5s2fpHhEbfTo0dvWrVv3xciRSr0NeE9ufFmsIG34Ssvm+Adg/2XxopTz+49B08CwPeIVeOPNcfO0NmZM/aw1a56aANDYWGh4/vmRc2NnyhsyZOeYsWO3nhc5xq4syz4fOUMrO3bs2Af4XPN47NixN73yyiu/jRipnH8BBhS3nwW+EzFLOZ+g5ajuduDKaEnKGDdu3MkvvfTSGc3jhoaGfx44cGBjzEylCoXCvwPpHtbtJhZU9TiFQuHHs2bNuid2jrxJkyZdRrGgTpgwYdWqVau+GjlSqXfQUlB3Aanl+xitC+p/AasjZSmrX79+uwvq8uXLl5xxxhlJfQ4XLFgwDYhaULMs+1Vqn5fp06efTbGgjho1avu6deuSyke41jpfUL8GFCJlqSRXUCf/Dn7xo3hRyun/oVxBfRUe/mHcPK3tv//7TmgpqI0Nn/70yUnl++hHH598zjnPRy+oqb13XHDBBfuTK6gnnnjinXV1dakVwM/RUlBXkt7/7bNoXVCTynfCCSfszBfUH//4x1fPnz9/Z8xMperq6v6NPlhQPcVXkiRJkpQEC6okSZIkKQkWVEmSJElSEiyokiRJkqQkWFAlSZIkSUmwoEqSJEmSkmBBlSRJkiQlwYIqSZIkSUqCBVWSJEmSlAQLqiRJkiQpCRZUSZIkSVISLKiSJEmSpCRYUCVJkiRJSbCgSpIkSZKSYEGVJEmSJCXBgipJkiRJSoIFVZIkSZKUhNrYAdRtBgOTc+NlQFM7j6kBpuTGjwPbujiXJEmSJJVlQe29DgV+mxsPof2yOaDkMccAj3VxLkmSJEkqy1N8JUmSJElJsKBKkiRJkpJgQZUkSZIkJcGCKkmSJElKggVVkiRJkpQEC6okSZIkKQkWVEmSJElSEiyokiRJkqQkWFAlSZIkSUmwoEqSJEmSkmBB7b0KJeOsA4/ZpzuCSJIkSVJHWFB7ry0l446Uz4O7I4gkSZIkdYQFtfdaT+ujqJM68Jh3dFMWSZIkSWqXBbX32gIsz41Pb2f/YcBF3RdHkiRJktpmQe3d7s5tXwyMr7DfQOB64KBuTyRJkiRJFVhQe7d5QFNxexTwa+BcWq5H3Rc4H3gIOAf4fbUDSpIkSVKz2tgB1K0eBr4JfLo4PgSYX9xuovUvKF4ilNdnq5au8y6qq6s7K3aIvH/8x38c3Lz97LPPHgL8Z8Q45RyW2x5AevmOKhl/GdgaI0glDQ0Nu7cnTZr0jrq6uiER4+yhUCgcEzsDcEZdXd3Y2CHyvv3tb09s3l67du1g0vveP75k/B9RUnTY798Kk/aLnaK1pkEt2+sPgEmXxMuypxdfbJjYvN2vX03/6667N6l8/fs3vCF2BmBAXV1dUv82N23aNOSGG27YPV60aNFfACm8z+YNzG0fRnrvb4fktpN7/128ePGU/PgDH/jANbNnz26qtH8kHVmFo9exoPZ+nyn++SlaF9L8
9m+AvwDWVSvU65Fl2Z/HzlCqtrbln9LatWsPAP42Xpp29SftfAAfix2gVGNj4+7tESNGTAWmxkuTrOnFWzKGDGn5PcJrr702iPS/9xPPt/a4cEvVltGw5fzYKfI2bGjZ7tevpt/YsfVJ5UtELYl97w8YMKDVeP369TOAGXHSdMhBJPY5LDGQxPK9+uqrrcY1NTUXR4qiEhbU3q8R+Dvgu4QSejywH2GW3+cIR1TvIxxR7Qe8P/fYP1UzqCRJkqS+zYLadzxavLWlkZZTgJNxxRVXFBYsWHBb7BxtGTdu3BGNjY3DATZs2PDSyy+//GLsTCXGAQcWt7cAj0fMUs4+wOTc+Le0XD+dhCOOOOKEmpqafgD9+/d/BtjQzkOiybLskSq+1v2ESwSSNHz48AMnT548DmDbtm2bli9f/mTsTCWGA0fkxg/FCtKG/5PbfhLYFCtIBYcDI4rbL5PYL1f333//A0aPHn1Qcbse+GPkSG3avn37rmq8TpZlLwMp/99eM3ny5N1nyqxcufLx+vr60jXmY5tMy7wiL5Lee/GBhJ8/ILxvJPX+u88++wybMGHCkc3jQqGwlNZLNCalsbFxTewM1ZLNnTt39xciy7LD58yZ83TMQJIkSZKkvmHevHmHFQqFp5rHzuIrSZIkSUqCBVWSJCl9A9rfRZJ6PguqJElS+j5PmHV/39hBJKk7WVAlSZLS9xvgBOAR4LTIWSSp21hQJUmS0rcYeA04ALidsHxc/6iJJKkbWFAlSZLStwloKG4PAj4ALAMOjZZInTUodgApZRZUSZKknuGe3PYgwjqYi4G/jBNHnTCIcLr2XJz4SirLgipJktQz3AbU58YZMAb4r+J9Q2KE0l65lnAt8dnAfpGzSEmyoEqSJPUM9wNbynx8BDATeAI4tqqJtDc+CXwYaCQc9V4VN46UJguqJElSz7AW2FnhvgHABMJpwJdVLZE6ahrwjeL2Z4G7ImaRkmZBlSRJ6jkebuf+NwBfAm6pQhZ1zP7AjYRfItwCfD1uHClttbEDSJIkqcNuBd4NDCz5+FbgWeAm4HfA+irnUnm1wE+AA4GngAuBQsxAUuosqJIkST3HfcBGwlE5CKf8DiDMDnspnjqamq8Bbwc2A+cQlguS1AZP8ZUkSeo5VhAm2YEwo+/Pge8Sfqb7EXBAnFgq43zg7wlHTD9CmMRKUjssqJIkST3L44TS8wTwPuBTwKOEo6rX4893KZhK+MUBwL8DN0fMIvUovoFJkiT1LLcRjqKeW/xzG+FoXT3wTuAL8aIJmAjUAUOBhfj1kPaKBVWSJKlnWQj8X2Bl7mNPAJ8obn8BOLvaoQTAKMLXZyxhsqrzaTklW1IHWFAlSZJ6lueAz5T5+P8Q1trMCNejHl3NUGIAMB84ElgFnAVsiZpI6oEsqJIkSb3HPwB3APsAPwVGxo3TZ2TAd4AZhJl63wO8GDWR1ENZUCVJknqPRuAvgeXAm4D/jz/vVcOXgQ8Rlv05B3gkbhyp5/INS5IkqXdZR7gGtR54N/CtuHF6vb8C/okws/Ic4J64caSezYIqSZLU+zwCXAg0ESZPuiRqmt7rvcC1xe0rCMv8SHodLKiSJEm9002Ea1IBrgLOi5ilNzqN8DmuBb4HfCluHKl3sKBKkiT1Xl8H/pPwM98PgZPjxuk1pgG3AoOAnwF/HTeO1HtYUCVJknq3Swgz+jaXKZefeX1OBH5OmCl5IWGt04aoiaRexIIqSZLUuzURZvZ9ABgN3AUcHjVRz3US8AtgBGEypHMJM/dK6iIWVEmSpN5vG2FCn98BY4FfAm+MmqjnOZlQTkcCvwLOInxeJXUhC6okSVLfsBGYASwDxgP3AhNjBupB3kY4nXc4cD+h7G+JmkjqpSyokiRJfcdGwtqoTwIHAXdjSW3PmcAdwLDin+/Gcip1GwuqJElS37IaeAfwLOE0318DR0ZNlK6PEyaYGkyYtfdsPK1X6lYWVEmSpL5nFfBW4A/AgcAi4M1R
E6Xns8A8oB/wA8KESDtiBpL6AguqJElS3/QKcBqwBBhFmADo1KiJ0jAQ+D7wleL4X4CP4FIyUlVYUCVJkvqu9YTTfe+m5RrLj0ZNFNf+hOVjLiQU0k8AV8QMJPU1FlRJkqS+bQtwBvATYADwXeAawqmtfckU4CFgOqG4vxuYGzWR1AdZUCVJkrQDuAC4DGgCLgFuJ6z52RfMBn4DTACeAd5COKosqcosqJIkSQIoAF8F3g/UAzOBpcDUmKG62UhgPnA9MAS4DTiRsAyPpAgsqJIkScq7mTDD7wpgEmGG388AWcRM3eEU4HeE2Xl3An9PWEbmtYiZpD7PgipJktTirwjrgn4jdpDIlhGuybyRcF3q1cBC4ICYobrIPsB/ESZDOgR4jnBK738QjiJLisiCKkmS1OJg4GTg2NhBErAROB/4OLAVOB14AvgkPfdnyNOBR4CLiuNrCUX8t9ESSWqlp765SJIkqTq+Q7gO9UFgBPAtwoRCx8QMtZfeRLi+9OeEo6bLCcvr/A2wOWIuSSUsqJIkSS2+AuwHnBM7SGKeIJwGexHhGs1phOs3rwMOjJirPfsRTk9+DJhFuNb0asIR8nsi5pJUgQVVkiSpxVbgVWBT7CAJagL+G5hMmPm2FvgYYVmWq4F940XbwxjgKsKR0s8QrqO9nXDU9x8Ia79KSpAFVZIkqcU0whqg58YOkrCXCEvRTAPuBQYRSuBKYB5xT/2dUsywHLgUGAosISyZcwbwdLxokjrCgipJktTi3cA1hGsT1bYHgdOAdxFK4BDChEqPEIrrbMI6o91tv+LrPkg47fjjxSwPAu8B3gz8ogo5JHWB2tgBJEmS1KPdVbxNB/4W+HPCGqOnEK75vAv4KXA/YUmX16sGOAo4lXCt8FuBfsX7dhDWcb2WsFyQpB7GgipJkqSu8EDxNo6wnuy5hMmI3lu8AbwCLCIc6XyueFsJrCNc45o3iDBr8KHAYYSZeKcQinD+yGwBWEpYs/V6YG3X/rUkVZMFVZIkSV3pJeBLxdvhhKJ6OmGpmrHA+4q3UrtombxoGG3/nLoZWExYNuanwAtdEVxSfBZUSZIkdZengH8t3gYSSup04EjgjYSjo+OBDOgPjCp5fBOhfD5LmC34ccIR2EeAxu6PL6naLKiSJEmqhh2Ecrmo5OP9gOHAYMJpvRlhuZ96XO5H6nMsqJIkSYqpEdhQvEnq41xmRpIkSZKUBAuqJEmSJCkJFlRJkiRJUhIsqJIkSZKkJFhQJUmSJElJsKBKkiRJkpLgMjOSJEkt5gK3A6/FDiJJfZEFVZIkqcWq4k2SFIGn+EqSJEmSkmBBlSRJkiQlwYIqSZIkSUqCBVWSJEmSlAQLqiRJkiQpCRZUSZIkSVISXGZGPcItt9wyMXaGttTV1e27YcOGQQAvvfTSpiVLlmyKnanEMGBEcXsnsCZilnIGAGNy41VAIVKWss4666xxNTU1NQDHH3/8uqOPPnpb7EyVDB48eNPpp5++vhqvdccddxywbdu2gdV4rc5YtGjR8Oeee244wJYtW7bfddddr8bOVGIgsF9u/GKsIG04MLe9FtgRK0gF+wKDitubSWz90mnTpg0bO3bsCIBhw4btPOecc1J7/23l7LPPXpllWbe//954442D+/fvv393v05n7dq1K7vhhhvGN48feuihNatWrdoZM1MZYwj/f0L4vt8cMUs5w4s3CO8bayNm2cMhhxwy8M/+7M92v//Onj07xfff3UaOHPnKqaeeuj12jmqwoCp5V155ZU1tbe3y2DnasnTpUh599NHYMdSNbr311t3bkydPprY23bfPXbt2fQu4uBqvtXPnzttqa2unVuO1OmPFihXccsstsWOoD1u8ePHu7UmTJnHeeedFTNO+u+++eyRVKPmDBg16V5ZlP+vu1+mshoYG3zt6ueXLl7N8ecuPlxdeeCHF30Mnqb6+/nTgztg5qiHdr4IkSZIkqU+xoEqSJEmSkpDuOWpSZX+ZZdni
9nernu3bty+jeJ3FlClTHl62bNn7I0cqdT1wcnF7K3BMxCzlvB/499z4JCCpawUHDhz43I4d4dK7J5988tpp06ZdFTlSqWsLhcI7I2e4KsuyayNnaGXNmjU3AVMAJk6c+NqKFSuOjxyp1MeBy3aPvsEn40Wp4O/5793bb+cWzuauiGn29A9cTQNDwmDc0/BPl7X9gOo67LBbP//003dNAdi1a9e2LMuOjp0pr6mp6ZgETrVN7vPypz/9aV9gSfP4pJNOunzJkiU3RoxUzqPQ/L3PImB2xCzl/ARovgRkE8X34lRMnz79ow888MA/N483bdp0xKhRo3bFzFSqUCg8Qx88oGhBVY9TKBRenjVr1vOxc+RNmjSpqXl70KBBO4Ck8gH5i+oLpJevtIyuBFbHCNIR9fX1G88444ykPocLFizYGjsDsCG1z8v06dN3T+hTW1vbRHrf+60nszqNNdSkNUFYKyPZwjsS+7eZ0dQyqN0JF62KF2ZPAwfes/t7sFAoNKX2b+S2224bHTsDUEjt83LBBRfU58djxox5lfTeP3Lf+2wnvXz5CdWSe/8dPXr0uvz4tttuWz5//vykJsKqq6uLHSGKPtfIJUmSJElpsqBKkiRJkpJgQZUkSZIkJcGCKkmSJElKggVVkiRJkpQEC6okSZIkKQkWVEmSJElSEiyokiRJkqQkWFAlSZIkSUmojR2gl8uANwLjgUHAS8DjQOPrfN5JwFhgGLAReB5Y/TqfU5IkSZKiSv0I6kTgudxtWAceU1PymKO7K1zRJ3Ov9T+5DJ8CngSeAe4D7gAeIRTJLwFD9vJ1DgK+BbxYfM5fAwuBB4CXgd8BH6Htr+nluaz/2c7r/TOtP48faWf/m3L7fridfSVJkiRpD6kfQe0PHJobd7RQ5x8zsOvilDUq93orCCX6JuBdFfYfDXwOeAcwE9jUgde4CLiacBS2nAyYAnyPUCTPBtaX2e+xXNbzgL8DChWe8xxafx7PBL5fYd99gFnAgOL49xX2kyRJkqSKUi+oPU0N4Sjquwin8d4PPAxsBg4GzgL2Le77ZkLpnNPOc14JXJEb1wM/B5YBrwFjCEX3pOL9bwXuAt4CbC95rvuBBsLX/QBgMvDHMq/5BkLhzTsF6Ef505PfSks5XUs4UixJkiRJe8WC2rVOJnxO/wicz57l71LgZuC04vijwJeBFyo83yzgC7nxTwhHU9eV7HcF4YjoDwinDh8P/CvwmZL9NgFLgWnF8YwyGQFOpeVo9Q7CUeiRwAnAQ2X2n5HbvofKR2UlSZIkqaLUr0HtaWqBVYSCV674bQQ+AGwpjvsRimU5/QnXnGbF8Y3ABexZTpvNBz6WG3+ScHS11C9z2zPK3F/68Wv3cv97KuwjSZIkSW2yoHa9zxFOc61kNbAgN55aYb/zCBMjQSi0F9P+kckbCEdIIVyvekGZffIF8u2UP4reXDg3A18Fmko+nrcvcGxu/Msy+0iSJElSuyyoXWsX4Uhnex7ObU+ssM+Zue1babv05v00t/3WMvc/AGwrbo8ATiy5/yDgsOL2rwgzBP+hOH4LMLhk/9No+T5aQZjFV5IkSZL2mgW1az0FbO3Afmty2yMq7JMvl4v3IsOTue0jy9y/A/hNblx6VDQ/vrv4Z/NR0UHA9Db29+ipJEmSpE5zkqSuVW5pl3J25rYHlLl/IDAuN76E9tchhbDETX6t2DdU2O+XwDuL2zMIEzWRG+f3a/7z0tz9+SJ6Wpn9JUmSJGmvWVC7VkMXPc+okvGhZfdq39AKH89fhzqNMPNv85Hf5sL5CmHdVIBfE0r1AFoX2IOBScXtAnBvJ3NKkiRJkgU1Uf1KxvdQefbetmyr8PHfARsIRXggYXmcXxDWRW0+cptfLqYeeBB4G2GpmVHFx+fL6h8JpVaSJEmSOqU3FtRyp8z2NKVl9JvAz7rw+RuB+4BziuMZhILa1um6vyQU1H7AKcAteP2pJEmSpC6U+iRJ20vGgzrwmHJr
f/Y022l9NPLYSju+DuXWQy03QVKl/TO8/lSSJElSF0q9oG4qGR/Qgcec1B1BIshfz3lmxb06L18opxCK/SnF8TPACyX7P0RYFxVCQT0KGFscNwD3d0NGSZIkSX1I6gX1NWB1bvyWDjzm492Updpuzm2fALy7i5//SWBVcbuGMEvvyOK43NHQXYR1UQGOAGbn7vste/4yQZIkSZL2SuoFFVqvAfoJ2r5u9iO0LJ/S091CmHio2feAQ/bi8fvRUjgryc/me3Fuu/T03mb54npxhY9LkiRJUqf0hIL6o9z20cD1wPCSfYYAXwCuA7ZUKVd3ayIU7ubrcMcCS4GPUnkiqH7AqcA8YCXwxnZeI18sB+det9JyMeX2L/24JEmSJHVKT5jF92fAIlpO7/0AcEbxY68RjhS+mbDmZxPh1NOfVj9mt1hK+Pv8kDBB1Gjgu8DXCUeWXyQU2JGEo6vHAfvsxfOXK5bLgPUV9n8UWEPriai20footyRJkiR1Sk8oqE3A+cBdwJHFjw1nz2sytwJ/A9xavWhVMR9YAfyAsE4pwAjg9HYe9zKwsZ19XgSeAg7PfazS6b0Q1kW9B/iL3McWsedsy5IkSZK013pCQYUwmc+JwCXAB2ldqDYQStx/AE8Qlj+ZX3J/d3oi93qPdfAxL+Qes7qtHYuWAscAfw6cSziNt3Q5nS2EiY9+BdxJODra2IHn/ibw9tz45ko7Fv2IcCpxR/eXJEmSpA7pKQUVoB74UvE2jHBq70b2PB21ALy/irl+yt6fUvxA8bY3moCbijcIpzSPJnwNy30eOupbxVtHLSzeJEmSJKlL9aSCmreZljU5+6r64k2SJEmSeoWeMIuvJEmSJKkPsKBKkiRJkpJgQZUkSZIkJaGnXoPaWR8CBnfB8zwN3NcFz6NOyLLsxrq6up2xc+Rdfvnlw5u3ly5dOhV4KWKcct6Q2x5CevlK/13+gTAxWDJ27mz5lpsyZcrFF1100YcjxtlDoVAYFTsDcHldXd2nYofI+/73vz+6eXv58uUjSO97f2ir0UlcFylHxyzkXO7kjNgxWmnIfw5XHQEDbo8XZk+PP56NaN4eMGDAEI2W2AAAAwZJREFUkLq6utS+B/vHDgAMTu3zsnXr1pobbrhh93jhwoVXAf8SL1FZ+fePk0nv/W10bju599+FCxe2ev/94Ac/uGL27Nmx4lTSJw8m9rWCehWwfxc8z/VYUGMa3f4u1ZVl2e7thoaGAcAB8dK0KyPtfNA1/067VKFQ2L1dU1OzD7BPvDTJGla8JaOmpuX/9sbGxhpS/97fTgq/aKhsF4PZ1SW/6O0mjbXQuG/sFHmNrRd86wnvvzEk93nJv3cANDY2jgRGxknTIQNJ7HNYIrmvcWPpP84sSypfX9YnW7kkSZIkKT197Qjq4XRNKU/q9NLe7sorr2xasGDBh2LnaMuxxx571Lhx40YCrFq16k+PPfbYC7EzlTgQOLi4vRl4JGKWcoYBx+bGDwKNFfaNYsaMGW+ura3tBzB06NAnsizr7NrD3a6pqempar1Wv379Pl8oFJI6YpU3bty4g2fOnHkgwKZNmzYuXrz4j7EzlRgBHJ0bL4oVpA1vyW0/BrwWK0gFR9FyZOtFYGXELHs48sgjx0+YMGEiwMiRIzdnWZba+28ro0aN2lqN16mtrX24qakp2f/bsyzrN3PmzDc3j5ctW/bImjVrUlvi8Big+RKjlYTv/5RMAA4qbr9GeP9IxtixY0ccd9xxu99/syx7IMuyQluPiam2tjbp946ulM2dO3f3FyLLssPnzJnzdMxAkiRJkqS+Yd68eYcVCoXdv1z3FF9JkiRJUhIsqJIkSZKkJFhQJUmSJElJsKBKkiRJkpJgQZUkSZIkJcGCKkmSJElKggVVkiRJkpQEC6okSZIkKQkWVEmSJElSEiyokiRJkqQkWFAlSZIkSUmwoEqSJEmSkmBBlSRJkiQlwYIqSZIkSUqCBVWSJEmSlAQLqiRJkiQp
CRZUSZIkSVISLKiSJEmSpCRYUCVJkiRJSbCgSpIkSZKSYEGVJEmSJCWhNj8oFAoT582bFyuLJEmSJKkPKRQKE/Pj2pL77ywUCtVLI0mSJElSkaf4SpIkSZKSYEGVJEmSJCXhfwG8dZWmdb83nAAAAABJRU5ErkJggg=="
+ }
+ },
+ "cell_type": "markdown",
+ "id": "5ae8701f",
+ "metadata": {},
+ "source": [
+ "
+ Download this notebook and run it locally on your machine [recommended]. Click here.
+
+
+ You can also run this notebook in the cloud using Binder. Click here
+ .
+
+
+
+
+
Settings
This document was generated with Documenter.jl version 0.27.25 on Friday 11 August 2023. Using Julia version 1.9.2.
diff --git a/dev/notebooks/mpi_tutorial.ipynb b/dev/notebooks/mpi_tutorial.ipynb
new file mode 100644
index 0000000..bcebddd
--- /dev/null
+++ b/dev/notebooks/mpi_tutorial.ipynb
@@ -0,0 +1,401 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "id": "8d800917",
+ "metadata": {},
+ "source": [
+ "# Tutorial: Using MPI in Julia\n",
+ "Message Passing Interface (MPI) is a standardized and portable library specification for communication between parallel processes in distributed memory systems. Julia offers a convenient way to work with MPI for creating efficient parallel and distributed applications. In this tutorial, you will learn how to use MPI from Julia to perform parallel computing tasks.\n",
+ "\n",
+ "## MPI launches separate Julia instances\n",
+ "When you run an MPI-enabled Julia script, MPI takes care of spawning multiple instances of the Julia executable, each acting as a separate process. These workers can communicate with each other using MPI communication functions. This enables parallel processing and distributed computation. Here's a summary of how it works:\n",
+ "\n",
+ "-- TODO: insert picture here --\n",
+ "\n",
+ "- **MPI Spawns Processes**: The `mpiexec` command launches multiple instances of the Julia executable, creating separate worker processes. In this example, 4 Julia workers are spawned.\n",
+ "\n",
+ "- **Worker Communication**: These workers can communicate with each other using MPI communication functions, allowing them to exchange data and coordinate actions.\n",
+ "\n",
+ "- **Parallel Tasks**: The workers execute parallel tasks simultaneously, working on different parts of the computation to potentially speed up the process.\n",
+ "\n",
+ "\n",
+ " \n",
+ "\n",
+ "\n",
+ "## Installing MPI.jl and MPIClusterManagers Packages\n",
+ "To use MPI in Julia, you'll need the MPI.jl package, and if you intend to run MPI programs in a Jupyter Notebook, you'll also need the MPIClusterManagers package. These packages provide the necessary bindings to the MPI library and cluster management capabilities. To install the packages, open a terminal and run the following commands:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "3cb5f151",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "using Pkg\n",
+ "Pkg.add(\"MPI\")\n",
+ "Pkg.add(\"MPIClusterManagers\")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "ed45a4b2",
+ "metadata": {},
+ "source": [
+ "
\n",
+ " Tip:\n",
+    "The package MPI.jl is the Julia interface to MPI. Note that it is not an MPI library by itself. It is just a thin wrapper between MPI and Julia. To use this interface, you need an actual MPI library installed on your system, such as OpenMPI or MPICH. By default, Julia downloads and installs an MPI library for you, but it is also possible to use an MPI library already available on your system. This is useful, e.g., when running on HPC clusters. See the documentation of MPI.jl for further details.\n",
+ "
"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "7a36916e",
+ "metadata": {},
+ "source": [
+ "## Writing a HelloWorld MPI Program in Julia\n",
+ "Let's start by creating a simple MPI program that prints a message along with the rank of each worker. \n",
+ "\n",
+ "Create a new Julia script, for example, `mpi_hello_world.jl`:\n",
+ "\n",
+ "```julia\n",
+ "using MPI\n",
+ "\n",
+ "# Initialize MPI\n",
+ "MPI.Init()\n",
+ "\n",
+ "# Get the default communicator (MPI_COMM_WORLD) for all processes\n",
+ "comm = MPI.COMM_WORLD\n",
+ "\n",
+ "# Get the number of processes in this communicator\n",
+ "nranks = MPI.Comm_size(comm)\n",
+ "\n",
+ "# Get the rank of the current process within the communicator\n",
+ "rank = MPI.Comm_rank(comm)\n",
+ "\n",
+ "# Print a message with the rank of the current process\n",
+ "println(\"Hello, I am process $rank of $nranks processes!\")\n",
+ "\n",
+ "# Finalize MPI\n",
+ "MPI.Finalize()\n",
+ "```"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "6caa8d74",
+ "metadata": {},
+ "source": [
+ "### MPI Communicators\n",
+ "In MPI, a **communicator** is a context in which a group of processes can communicate with each other. `MPI_COMM_WORLD` is one of the MPI standard communicators, it represents all processes in the MPI program. Custom communicators can also be created to group processes based on specific requirements or logical divisions. \n",
+ "\n",
+    "The **rank** of a process is a unique identifier assigned to each process within a communicator. It allows processes to distinguish and address each other in communication operations. "
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "19f41e38",
+ "metadata": {},
+ "source": [
+ "## Running the HelloWorld MPI Program\n",
+ "\n",
+ "To run MPI applications in parallel, you need a launcher like `mpiexec`. MPI codes written in Julia are not an exception to this rule. From the system terminal, you can run\n",
+ "```\n",
+    "$ mpiexec -np 4 julia mpi_hello_world.jl\n",
+ "```\n",
+ "In this command, `-np 4` specifies the desired number of processes. \n",
+ "But it will probably not work since the version of `mpiexec` needs to match with the MPI version we are using from Julia. You can find the path to the `mpiexec` binary you need to use with these commands\n",
+ "\n",
+ "```julia\n",
+ "julia> using MPI\n",
+ "julia> MPI.mpiexec_path\n",
+ "```\n",
+ "\n",
+ "and then try again\n",
+ "```\n",
+ "$ /path/to/my/mpiexec -np 4 julia mpi_hello_world.jl\n",
+ "```\n",
+ "with your particular path.\n",
+ "\n",
+ "However, this is not very convenient. Don't worry if you could not make it work! A more elegant way to run MPI code is from the Julia REPL directly, by using these commands:\n",
+ "```julia\n",
+ "julia> using MPI\n",
+ "julia> mpiexec(cmd->run(`$cmd -np 4 julia mpi_hello_world.jl`))\n",
+ "```\n",
+ "\n",
+ "Now, you should see output from 4 ranks.\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "0592e58c",
+ "metadata": {},
+ "source": [
+ "## Running MPI Programs in Jupyter Notebook with MPIClusterManagers\n",
+ "If you want to run your MPI code from a Jupyter Notebook, you can do so using the `MPIClusterManagers` package.\n",
+ "\n",
+ "1. Load the packages and start an MPI cluster with the desired number of workers:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "cf66dd39",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "using MPIClusterManagers\n",
+ "# Distributed package is needed for addprocs()\n",
+ "using Distributed\n",
+ "\n",
+ "manager = MPIWorkerManager(4)\n",
+ "addprocs(manager)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "d40fe3ee",
+ "metadata": {},
+ "source": [
+ "2. Run your MPI code inside a `@mpi_do` block to execute it on the cluster workers:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "0a51d1f2",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "@mpi_do manager begin\n",
+ " using MPI\n",
+ " comm = MPI.COMM_WORLD\n",
+ " rank = MPI.Comm_rank(comm)\n",
+ " println(\"Hello from process $rank\")\n",
+ "end\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "38ed88c1",
+ "metadata": {},
+ "source": [
+ "MPI is automatically initialized and finalized within the `@mpi_do` block.\n",
+ "\n",
+ "3. Remove processes when done:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "e0b53cc1",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "rmprocs(manager)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "5466a650",
+ "metadata": {},
+ "source": [
+ "## Point-to-Point Communication with MPI\n",
+ "MPI provides point-to-point communication using blocking send and receiving functions `MPI.send`, `MPI.recv`; or their non-blocking versions `MPI.Isend`, and `MPI.Irecv!`. These functions allow individual processes to send and receive data between each other.\n",
+ "\n",
+ "### Blocking communication\n",
+ "\n",
+ "Let's demonstrate how to send and receive with an example:\n",
+ "\n",
+ "```julia\n",
+ "using MPI\n",
+ "\n",
+ "MPI.Init()\n",
+ "\n",
+ "comm = MPI.COMM_WORLD\n",
+ "rank = MPI.Comm_rank(comm)\n",
+ "\n",
+ "# Send and receive messages using blocking MPI.send and MPI.recv\n",
+ "if rank == 0\n",
+ " data = \"Hello from process $rank !\"\n",
+ " MPI.send(data, comm, dest=1)\n",
+ "elseif rank == 1\n",
+ " received_data = MPI.recv(comm, source=0)\n",
+ " println(\"Process $rank received: $received_data\")\n",
+ "end\n",
+ "\n",
+ "MPI.Finalize()\n",
+ "```"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "d4dfe654",
+ "metadata": {},
+ "source": [
+ "In this example, process 0 sends a message using `MPI.send`, and process 1 receives it using `MPI.recv`.\n",
+ "\n",
+ "### Non-blocking communication\n",
+ "\n",
+ "To demonstrate asynchronous communication, let's modify the example to use `MPI.Isend` and `MPI.Irecv!`. These functions operate on fixed-size buffers, so we send an array of numbers instead of a `String`:\n",
+ "\n",
+ "```julia\n",
+ "using MPI\n",
+ "\n",
+ "MPI.Init()\n",
+ "\n",
+ "comm = MPI.COMM_WORLD\n",
+ "rank = MPI.Comm_rank(comm)\n",
+ "\n",
+ "# Asynchronous communication using MPI.Isend and MPI.Irecv!\n",
+ "if rank == 0\n",
+ "    data = Float64[1.0, 2.0, 3.0, 4.0]\n",
+ "    request = MPI.Isend(data, comm, dest=1)\n",
+ "    # Other computation can happen here\n",
+ "    MPI.Wait(request)\n",
+ "elseif rank == 1\n",
+ "    received_data = Vector{Float64}(undef, 4)  # Preallocate a matching buffer\n",
+ "    request = MPI.Irecv!(received_data, comm, source=0)\n",
+ "    # Other computation can happen here\n",
+ "    MPI.Wait(request)\n",
+ "    println(\"Process $rank received: $received_data\")\n",
+ "end\n",
+ "\n",
+ "MPI.Finalize()\n",
+ "```"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "024db538",
+ "metadata": {},
+ "source": [
+ "In this example, process 0 uses `MPI.Isend` to start sending the data asynchronously. The call returns immediately, allowing the sender to continue executing while the actual transfer happens in the background. Similarly, `MPI.Irecv!` returns immediately, allowing the receiver to continue executing while the data arrives.\n",
+ "\n",
+ "**Important:** In asynchronous communication, always use `MPI.Wait` to ensure the communication has finished before accessing the send or receive buffer.\n",
+ "\n",
+ "\n",
+ "## Collective Communication with MPI\n",
+ "MPI provides collective communication functions that involve all processes in a communicator. Let's explore some of these functions:\n",
+ "\n",
+ "- `MPI.Gather`: Gathers data from all processes to a single process.\n",
+ "- `MPI.Scatter`: Distributes data from one process to all processes.\n",
+ "- `MPI.Bcast`: Broadcasts data from one process to all processes.\n",
+ "- `MPI.Barrier`: Synchronizes all processes.\n",
+ "\n",
+ "\n",
+ "Let's illustrate the use of `MPI.Gather` and `MPI.Scatter` with an example:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "e65cb53f",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "using MPI\n",
+ "\n",
+ "MPI.Init()\n",
+ "\n",
+ "comm = MPI.COMM_WORLD\n",
+ "rank = MPI.Comm_rank(comm)\n",
+ "nranks = MPI.Comm_size(comm)\n",
+ "\n",
+ "# The root process generates random data (two values per process);\n",
+ "# the send buffer is only needed on the root, so other ranks pass nothing\n",
+ "data = rank == 0 ? rand(2 * nranks) : nothing\n",
+ "\n",
+ "# Scatter two values to each process\n",
+ "local_data = Vector{Float64}(undef, 2)\n",
+ "MPI.Scatter!(data, local_data, comm, root=0)\n",
+ "\n",
+ "# Compute the local average\n",
+ "local_average = sum(local_data) / length(local_data)\n",
+ "\n",
+ "# Gather the local averages at the root process\n",
+ "gathered_averages = rank == 0 ? Vector{Float64}(undef, nranks) : nothing\n",
+ "MPI.Gather!([local_average], gathered_averages, comm, root=0)\n",
+ "\n",
+ "if rank == 0\n",
+ "    # Compute the global average of the sub-averages\n",
+ "    global_average = sum(gathered_averages) / nranks\n",
+ "    println(\"Global average: $global_average\")\n",
+ "end\n",
+ "\n",
+ "MPI.Finalize()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "dfd5da9e",
+ "metadata": {},
+ "source": [
+ "In this example, the root process generates random data and scatters two values to each process using `MPI.Scatter!`. Each process computes the average of its local data, the local averages are then gathered at the root using `MPI.Gather!`, and the root computes and prints the global average of the sub-averages."
+ ]
+ },
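+ {
+ "cell_type": "markdown",
+ "id": "f3a9c0d2",
+ "metadata": {},
+ "source": [
+ "The two remaining collectives from the list above, `MPI.Bcast!` and `MPI.Barrier`, can be sketched as follows (a minimal illustrative example, assuming the same setup as the previous ones):\n",
+ "\n",
+ "```julia\n",
+ "using MPI\n",
+ "\n",
+ "MPI.Init()\n",
+ "\n",
+ "comm = MPI.COMM_WORLD\n",
+ "rank = MPI.Comm_rank(comm)\n",
+ "\n",
+ "# The root process fills the buffer; MPI.Bcast! copies it to all other ranks\n",
+ "buffer = rank == 0 ? [1.0, 2.0, 3.0] : zeros(3)\n",
+ "MPI.Bcast!(buffer, comm, root=0)\n",
+ "\n",
+ "# MPI.Barrier blocks until every process in the communicator reaches it\n",
+ "MPI.Barrier(comm)\n",
+ "println(\"Process $rank has buffer $buffer\")\n",
+ "\n",
+ "MPI.Finalize()\n",
+ "```"
+ ]
+ },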
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "fcf34823",
+ "metadata": {},
+ "outputs": [],
+ "source": []
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "Julia 1.9.1",
+ "language": "julia",
+ "name": "julia-1.9"
+ },
+ "language_info": {
+ "file_extension": ".jl",
+ "mimetype": "application/julia",
+ "name": "julia",
+ "version": "1.9.1"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 5
+}
diff --git a/dev/notebooks/mpi_tutorial/index.html b/dev/notebooks/mpi_tutorial/index.html
new file mode 100644
index 0000000..86758ed
--- /dev/null
+++ b/dev/notebooks/mpi_tutorial/index.html
@@ -0,0 +1,21 @@
+
+- · XM_40017
This document was generated with Documenter.jl version 0.27.25 on Thursday 10 August 2023. Using Julia version 1.9.2.
+Search · XM_40017
This document was generated with Documenter.jl version 0.27.25 on Friday 11 August 2023. Using Julia version 1.9.2.
diff --git a/dev/search_index.js b/dev/search_index.js
index 85f2cda..d8c371f 100644
--- a/dev/search_index.js
+++ b/dev/search_index.js
@@ -1,3 +1,3 @@
var documenterSearchIndex = {"docs":
-[{"location":"getting_started_with_julia/#Getting-started","page":"Getting started","title":"Getting started","text":"","category":"section"},{"location":"getting_started_with_julia/#Introduction","page":"Getting started","title":"Introduction","text":"","category":"section"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"The programming of this course will be done using the Julia programming language. Thus, we start by explaining how to get up and running with Julia. After learning this page, you will be able to:","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"Use the Julia REPL;\nRun serial and parallel code;\nInstall and manage Julia packages.","category":"page"},{"location":"getting_started_with_julia/#Why-Julia?","page":"Getting started","title":"Why Julia?","text":"","category":"section"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"Courses related with high-performance computing (HPC) often use languages such as C, C++, or Fortran. We use Julia instead to make the course accessible to a wider set of students, including the ones that have no experience with C/C++ or Fortran, but are willing to learn parallel programming. Julia is a relatively new programming language specifically designed for scientific computing. It combines a high-level syntax close to interpreted languages like Python with the performance of compiled languages like C, C++, or Fortran. Thus, Julia will allow us to write efficient parallel algorithms with a syntax that is convenient in a teaching setting. 
In addition, Julia provides easy access to different programming models to write distributed algorithms, which will be useful to learn and experiment with them.","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"tip: Tip\nYou can run the code in this link to learn how Julia compares to other languages (C and Python) in terms of performance.","category":"page"},{"location":"getting_started_with_julia/#Installing-Julia","page":"Getting started","title":"Installing Julia","text":"","category":"section"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"This is a tutorial-like page. Follow these steps before you continue reading the document.","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"Download and install Julia from julialang.org;\nFollow the specific instructions for your operating system: Windows, MacOS, or Linux\nDownload and install VSCode and its Julia extension;","category":"page"},{"location":"getting_started_with_julia/#The-Julia-REPL","page":"Getting started","title":"The Julia REPL","text":"","category":"section"},{"location":"getting_started_with_julia/#Starting-Julia","page":"Getting started","title":"Starting Julia","text":"","category":"section"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"There are several ways of opening Julia depending on your operating system and your IDE, but it is usually as simple as launching the Julia app. With VSCode, open a folder (File > Open Folder). Then, press Ctrl+Shift+P to open the command bar, and execute Julia: Start REPL. If this does not work, make sure you have the Julia extension for VSCode installed. 
Independently of the method you use, opening Julia results in a window with some text ending with:","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"julia>","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"You have just opened the Julia read-evaluate-print loop, or simply the Julia REPL. Congrats! You will spend most of time using the REPL, when working in Julia. The REPL is a console waiting for user input. Just as in other consoles, the string of text right before the input area (julia> in the case) is called the command prompt or simply the prompt.","category":"page"},{"location":"getting_started_with_julia/#Basic-usage","page":"Getting started","title":"Basic usage","text":"","category":"section"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"The usage of the REPL is as follows:","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"You write some input\npress enter\nyou get the output","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"For instance, try this","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"julia> 1 + 1","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"A \"Hello world\" example looks like this in Julia","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"julia> println(\"Hello, world!\")","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"Try to run it in the 
REPL.","category":"page"},{"location":"getting_started_with_julia/#Help-mode","page":"Getting started","title":"Help mode","text":"","category":"section"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"Curious about what function println does? Enter into help mode to look into the documentation. This is done by typing a question mark (?) into the inut field:","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"julia> ?","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"After typing ?, the command prompt changes to help?>. It means we are in help mode. Now, we can type a function name to see its documentation.","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"help?> println","category":"page"},{"location":"getting_started_with_julia/#Package-and-shell-modes","page":"Getting started","title":"Package and shell modes","text":"","category":"section"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"The REPL comes with two more modes, namely package and shell modes. To enter package mode type","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"julia> ]","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"Package mode is used to install and manage packages. We are going to discuss the package mode in greater detail later. 
To return back to normal mode press the backspace key several times.","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"To enter shell mode type semicolon (;)","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"julia> ;","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"The prompt should have changed to shell> indicating that we are in shell mode. Now you can type commands that you would normally do on your system command line. For instance,","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"shell> ls","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"will display the contents of the current folder in Mac or Linux. Using shell mode in Windows is not straightforward, and thus not recommended for beginners.","category":"page"},{"location":"getting_started_with_julia/#Running-Julia-code","page":"Getting started","title":"Running Julia code","text":"","category":"section"},{"location":"getting_started_with_julia/#Running-more-complex-code","page":"Getting started","title":"Running more complex code","text":"","category":"section"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"Real-world Julia programs are not typed in the REPL in practice. They are written in one or more files and included in the REPL. To try this, create a new file called hello.jl, write the code of the \"Hello world\" example above, and save it. If you are using VSCode, you can create the file using File > New File > Julia File. 
Once the file is saved with the name hello.jl, execute it as follows","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"julia> include(\"hello.jl\")","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"\\warn{ Make sure that the file \"hello.jl\" is located in the current working directory of your Julia session. You can query the current directory with function pwd(). You can change to another directory with function cd() if needed. Also, make sure that the file extension is .jl.}","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"The recommended way of running Julia code is using the REPL as we did. But it is also possible to run code directly from the system command line. To this end, open a terminal and call Julia followed buy the path to the file containing the code you want to execute.","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"$ julia hello.jl","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"Previous line assumes that you have Julia properly installed in the system and that is usable from the terminal. In UNIX systems (Linux and Mac), the Julia binary needs to be in one of the directories listed in the PATH environment variable. 
To check that Julia is properly installed, you can use","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"$ julia --version","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"If this runs without error and you see a version number, you are good to go!","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"note: Note\nIn this tutorial, when a code snipped starts with $, it should be run in the terminal. Otherwise, the code is to be run in the Julia REPL.","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"tip: Tip\nAvoid calling Julia code from the terminal, use the Julia REPL instead! Each time you call Julia from the terminal, you start a fresh Julia session and Julia will need to compile your code from scratch. This can be time consuming for large projects. In contrast, if you execute code in the REPL, Julia will compile code incrementally, which is much faster. Running code in a cluster (like in DAS-5 for the Julia assignment) is among the few situations you need to run Julia code from the terminal.","category":"page"},{"location":"getting_started_with_julia/#Running-parallel-code","page":"Getting started","title":"Running parallel code","text":"","category":"section"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"Since we are in a parallel computing course, let's run a parallel \"hello world\" example in Julia. Open a Julia REPL and write","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"julia> using Distributed\njulia> @everywhere println(\"Hello, world! 
I am proc $(myid()) from $(nprocs())\")","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"Here, we are using the Distributed package, which is part of the Julia standard library that provides distributed memory parallel support. The code prints the process id and the number of processes in the current Julia session.","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"You will provably only see output from 1 proces. We need to add more processes to run the example in parallel. This is done with the addprocs function.","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"julia> addprocs(3)","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"We have added 3 new processes, plus the old one, we have 4 processes. Run the code again.","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"julia> @everywhere println(\"Hello, world! I am proc $(myid()) from $(nprocs())\")","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"Now, you should see output from 4 processes.","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"It is possible to specify the number of processes when starting Julia from the terminal with the -p argument (useful, e.g., when running in a cluster). 
If you launch Julia from the terminal as","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"$ julia -p 3","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"and then run","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"julia> @everywhere println(\"Hello, world! I am proc $(myid()) from $(nprocs())\")","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"You should get output from 4 processes as before.","category":"page"},{"location":"getting_started_with_julia/#Installing-packages","page":"Getting started","title":"Installing packages","text":"","category":"section"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"One of the most useful features of Julia is its package manager. It allows one to install Julia packages in a straightforward and platform independent way. To illustrate this, let us consider the following parallel \"Hello world\" example. This example uses the message passing interface (MPI). 
We will learn more about MPI later in the course.","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"Copy the following block of code into a new file named \"hello_mpi.jl\"","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"# file hello_mpi.jl\nusing MPI\nMPI.Init()\ncomm = MPI.COMM_WORLD\nrank = MPI.Comm_rank(comm)\nnranks = MPI.Comm_size(comm)\nprintln(\"Hello world, I am rank $rank of $nranks\")","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"As you can see from this example, one can access MPI from Julia in a clean way, without type annotations and other complexities of C/C++ code.","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"Now, run the file from the REPL","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"julia> incude(\"hello_mpi.jl\")","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"It provably didn't work, right? Read the error message and note that the MPI package needs to be installed to run this code.","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"To install a package, we need to enter package mode. Remember that we entered into help mode by typing ?. Package mode is activated by typing ]","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"julia> ]","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"At this point, the promp should have changed to (@v1.8) pkg> indicating that we are in package mode. 
The text between parenthesis indicates which is the active project, i.e., where packages are going to be installed. In this case, we are working with the global project associated with our Julia installation (which is Julia 1.8 in this example, but it can be another version in your case).","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"To install the MPI package, type","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"(@v1.8) pkg> add MPI","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"Congrats, you have installed MPI!","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"note: Note\nMany Julia package names end with .jl. This is just a way of signaling that a package is written in Julia. When using such packages, the .jl needs to be ommited. In this case, we have isntalled the MPI.jl package even though we have only typed MPI in the REPL.","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"note: Note\nThe package you have installed it is the Julia interface to MPI, called MPI.jl. Note that it is not a MPI library by itself. It is just a thin wrapper between MPI and Julia. To use this interface, you need an actual MPI library installed in your system such as OpenMPI or MPICH. Julia downloads and installs a MPI library for you, but it is also possible to use a MPI library already available in your system. This is useful, e.g., when running on HPC clusters. 
See the documentation of MPI.jl for further details.","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"To check that the package was installed properly, exit package mode by pressing the backspace key several times, and run it again","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"julia> incude(\"hello_mpi.jl\")","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"Now, it should work, but you provably get output from a single MPI rank only.","category":"page"},{"location":"getting_started_with_julia/#Running-MPI-code","page":"Getting started","title":"Running MPI code","text":"","category":"section"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"To run MPI applications in parallel, you need a launcher like mpiexec. MPI codes written in Julia are not an exception to this rule. From the system terminal, you can run","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"$ mpiexec -np 4 julia hello_mpi.jl","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"But it will provably don't work since the version of mpiexec needs to match with the MPI version we are using from Julia. 
You can find the path to the mpiexec binary you need to use with these commands","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"julia> using MPI\njulia> MPI.mpiexec_path","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"and then try again","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"$ /path/to/my/mpiexec -np 4 julia hello_mpi.jl","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"with your particular path.","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"However, this is not very convenient. Don't worry if you could not make it work! A more elegant way to run MPI code is from the Julia REPL directly, by using these commands:","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"julia> using MPI\njulia> mpiexec(cmd->run(`$cmd -np 4 julia hello_mpi.jl`))","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"Now, you should see output from 4 ranks.","category":"page"},{"location":"getting_started_with_julia/#Package-manager","page":"Getting started","title":"Package manager","text":"","category":"section"},{"location":"getting_started_with_julia/#Installing-packages-locally","page":"Getting started","title":"Installing packages locally","text":"","category":"section"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"We have installed the MPI package globally and it will be available in all Julia sessions. 
However, in some situations, we want to work with different versions of the same package or to install packages in an isolated way to avoid potential conflicts with other packages. This can be done by using local projects.","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"A project is simply a folder in the hard disk. To use a particular folder as your project, you need to activate it. This is done by entering package mode and using the activate command followed by the path to the folder you want to activate.","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"(@v1.8) pkg> activate .","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"Previous command will activate the current working directory. Note that the dot . is indeed the path to the current folder.","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"The prompt has changed to (lessons) pkg> indicating that we are in the project within the lessons folder. The particular folder name can be different in your case.","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"tip: Tip\nYou can activate a project directly when opening Julia from the terminal using the --project flag. The command $ julia --project=. will open Julia and activate a project in the current directory. You can also achieve the same effect by setting the environment variable JULIA_PROJECT with the path of the folder you want to activate.","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"note: Note\nThe active project folder and the current working directory are two independent concepts! 
For instance, (@v1.8) pkg> activate folderB and then julia> cd(\"folderA\"), will activate the project in folderB and change the current working directory to folderA.","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"At this point all package-related operations will be local to the new project. For instance, install the DataFrames package.","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"(lessons) pkg> add DataFrames","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"Use the package to check that it is installed","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"julia> using DataFrames\njulia> DataFrame(a=[1,2],b=[3,4])","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"Now, we can return to the global project to check that DataFrames has not been installed there. 
To return to the global environment, use activate without a folder name.","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"(lessons) pkg> activate","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"The prompt is again (@v1.8) pkg>","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"Now, try to use DataFrames.","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"julia> using DataFrames\njulia> DataFrame(a=[1,2],b=[3,4])","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"You should get an error or a warning unless you already had DataFrames installed globally.","category":"page"},{"location":"getting_started_with_julia/#Project-and-Manifest-files","page":"Getting started","title":"Project and Manifest files","text":"","category":"section"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"The information about a project is stored in two files Project.toml and Manifest.toml.","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"Project.toml contains the packages explicitly installed (the direct dependencies)\nManifest.toml contains direct and indirect dependencies along with the concrete version of each package.","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"In other words, Project.toml contains the packages relevant for the user, whereas Manifest.toml is the detailed snapshot of all dependencies. 
The Manifest.toml can be used to reproduce the same environment on another machine.","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"You can see the path to the current Project.toml file by using the status operator (or st in its short form) while in package mode","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"(@v1.8) pkg> status","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"The information about the Manifest.toml can be inspected by passing the -m flag.","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"(@v1.8) pkg> status -m","category":"page"},{"location":"getting_started_with_julia/#Installing-packages-from-a-project-file","page":"Getting started","title":"Installing packages from a project file","text":"","category":"section"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"Project files can be used to install lists of packages defined by others. 
E.g., to install all the dependencies of a Julia application.","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"Assume that a colleague has sent you a Project.toml file with this content:","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"[deps]\nBenchmarkTools = \"6e4b80f9-dd63-53aa-95a3-0cdb28fa8baf\"\nDataFrames = \"a93c6f00-e57d-5684-b7b6-d8193f3e46c0\"\nMPI = \"da04e1cc-30fd-572f-bb4f-1f8673147195\"","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"Copy the contents of the previous code block into a file called Project.toml and place it in an empty folder named newproject. It is important that the file is named Project.toml. You can create a new folder from the REPL with","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"julia> mkdir(\"newproject\")","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"To install all the packages registered in this file, you need to activate the folder containing your Project.toml file","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"(@v1.8) pkg> activate newproject","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"and then instantiate it","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"(newproject) pkg> instantiate","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"The instantiate command will download and install all listed packages and their dependencies in just one 
step.","category":"page"},{"location":"getting_started_with_julia/#Getting-help-in-package-mode","page":"Getting started","title":"Getting help in package mode","text":"","category":"section"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"You can get help about a particular package operator by writing help in front of it","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"(@v1.8) pkg> help activate","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"You can get an overview of all package commands by typing help alone","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"(@v1.8) pkg> help","category":"page"},{"location":"getting_started_with_julia/#Package-operations-in-Julia-code","page":"Getting started","title":"Package operations in Julia code","text":"","category":"section"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"In some situations, it is necessary to use package commands in Julia code, e.g., to automate installation and deployment of Julia applications. This can be done using the Pkg package. 
For instance","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"julia> using Pkg\njulia> Pkg.status()","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"is equivalent to calling status in package mode.","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"(@v1.8) pkg> status","category":"page"},{"location":"getting_started_with_julia/#Conclusion","page":"Getting started","title":"Conclusion","text":"","category":"section"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"We have learned the basics of how to work with Julia. If you want to dig further into the topics we have covered here, you can take a look at the following links","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"Julia Manual\nPackage manager","category":"page"},{"location":"tsp/","page":"-","title":"-","text":"EditURL = \"https://github.com/fverdugo/XM_40017/blob/main/docs/src/notebooks/tsp.ipynb\"","category":"page"},{"location":"tsp/","page":"-","title":"-","text":"
\n Tip\n
\n
\n
\n Download this notebook and run it locally on your machine [recommended]. Click here.\n
\n
\n You can also run this notebook in the cloud using Binder. Click here\n .\n
\n
\n
\n
","category":"page"},{"location":"jacobi_2D/","page":"-","title":"-","text":"\n","category":"page"},{"location":"julia_async/","page":"Tasks and channels","title":"Tasks and channels","text":"EditURL = \"https://github.com/fverdugo/XM_40017/blob/main/docs/src/notebooks/julia_async.ipynb\"","category":"page"},{"location":"julia_async/","page":"Tasks and channels","title":"Tasks and channels","text":"
\n Tip\n
\n
\n
\n Download this notebook and run it locally on your machine [recommended]. Click here.\n
\n
\n You can also run this notebook in the cloud using Binder. Click here\n .\n
\n
\n
\n
","category":"page"},{"location":"julia_async/","page":"Tasks and channels","title":"Tasks and channels","text":"\n","category":"page"},{"location":"julia_distributed_test/","page":"-","title":"-","text":"EditURL = \"https://github.com/fverdugo/XM_40017/blob/main/docs/src/notebooks/julia_distributed_test.ipynb\"","category":"page"},{"location":"julia_distributed_test/","page":"-","title":"-","text":"
\n Tip\n
\n
\n
\n Download this notebook and run it locally on your machine [recommended]. Click here.\n
\n
\n You can also run this notebook in the cloud using Binder. Click here\n .\n
\n
\n
\n
","category":"page"},{"location":"julia_distributed_test/","page":"-","title":"-","text":"\n","category":"page"},{"location":"julia_distributed/","page":"Remote calls and remote channels","title":"Remote calls and remote channels","text":"EditURL = \"https://github.com/fverdugo/XM_40017/blob/main/docs/src/notebooks/julia_distributed.ipynb\"","category":"page"},{"location":"julia_distributed/","page":"Remote calls and remote channels","title":"Remote calls and remote channels","text":"
\n Tip\n
\n
\n
\n Download this notebook and run it locally on your machine [recommended]. Click here.\n
\n
\n You can also run this notebook in the cloud using Binder. Click here\n .\n
\n
\n
\n
","category":"page"},{"location":"julia_distributed/","page":"Remote calls and remote channels","title":"Remote calls and remote channels","text":"\n","category":"page"},{"location":"julia_jacobi/","page":"-","title":"-","text":"EditURL = \"https://github.com/fverdugo/XM_40017/blob/main/docs/src/notebooks/julia_jacobi.ipynb\"","category":"page"},{"location":"julia_jacobi/","page":"-","title":"-","text":"
\n Tip\n
\n
\n
\n Download this notebook and run it locally on your machine [recommended]. Click here.\n
\n
\n You can also run this notebook in the cloud using Binder. Click here\n .\n
\n
\n
\n
","category":"page"},{"location":"matrix_matrix/","page":"Matrix Multiplication","title":"Matrix Multiplication","text":"\n","category":"page"},{"location":"","page":"Home","title":"Home","text":"CurrentModule = XM_40017","category":"page"},{"location":"#Programming-Large-Scale-Parallel-Systems-(XM_40017)","page":"Home","title":"Programming Large-Scale Parallel Systems (XM_40017)","text":"","category":"section"},{"location":"","page":"Home","title":"Home","text":"Welcome to the interactive lecture notes of the Programming Large-Scale Parallel Systems course at VU Amsterdam!","category":"page"},{"location":"#What","page":"Home","title":"What","text":"","category":"section"},{"location":"","page":"Home","title":"Home","text":"This page contains part of the course material of the Programming Large-Scale Parallel Systems course at VU Amsterdam. In this page, we provide several lecture notes in jupyter notebook format, which will help you to learn how to design, analyze, and program parallel algorithms on multi-node computing systems. Further information about the course is found in the study guide (click here) and our Canvas page (for registered students). ","category":"page"},{"location":"","page":"Home","title":"Home","text":"note: Note\nThis page contains only part of the course material. The rest is available on Canvas. 
In particular, the lecture notes in this public webpage do not fully cover all topics in the final exam.","category":"page"},{"location":"#How-to-use-this-page","page":"Home","title":"How to use this page","text":"","category":"section"},{"location":"","page":"Home","title":"Home","text":"You have two main ways of running the notebooks:","category":"page"},{"location":"","page":"Home","title":"Home","text":"Download the notebooks and run them locally on your computer (recommended)\nRun the notebooks in the cloud via mybinder.org (high startup time).","category":"page"},{"location":"","page":"Home","title":"Home","text":"You also have the static version of the notebooks displayed in this webpage for quick reference. At each notebook page you will find a green box with links to download the notebook or to open it on mybinder.","category":"page"},{"location":"#How-to-run-the-notebooks-locally","page":"Home","title":"How to run the notebooks locally","text":"","category":"section"},{"location":"","page":"Home","title":"Home","text":"To run a notebook locally, follow these steps:","category":"page"},{"location":"","page":"Home","title":"Home","text":"Install Julia (if not done already). More information in Getting started.\nDownload the notebook.\nLaunch Julia. More information in Getting started.\nExecute these commands in the Julia command line:","category":"page"},{"location":"","page":"Home","title":"Home","text":"julia> using Pkg\njulia> Pkg.add(\"IJulia\")\njulia> using IJulia\njulia> notebook()","category":"page"},{"location":"","page":"Home","title":"Home","text":"These commands will open Jupyter in your web browser. Navigate in Jupyter to the notebook file you have downloaded and open it.","category":"page"},{"location":"#Authorship","page":"Home","title":"Authorship","text":"","category":"section"},{"location":"","page":"Home","title":"Home","text":"This material was created by Francesc Verdugo with the help of Gelieza Kötterheinrich. 
Part of the notebooks are based on the course slides by Henri Bal.","category":"page"},{"location":"#License","page":"Home","title":"License","text":"","category":"section"},{"location":"","page":"Home","title":"Home","text":"All material in this page that is original to this course may be used under a CC BY 4.0 license.","category":"page"},{"location":"#Acknowledgment","page":"Home","title":"Acknowledgment","text":"","category":"section"},{"location":"","page":"Home","title":"Home","text":"This page was created with the support of the Faculty of Science of Vrije Universiteit Amsterdam in the framework of the project \"Interactive lecture notes and exercises for the Programming Large-Scale Parallel Systems course\" funded by the \"Innovation budget BETA 2023 Studievoorschotmiddelen (SVM) towards Activated Blended Learning\".","category":"page"},{"location":"sol_matrix_matrix/","page":"Solutions","title":"Solutions","text":"EditURL = \"https://github.com/fverdugo/XM_40017/blob/main/docs/src/notebooks/sol_matrix_matrix.ipynb\"","category":"page"},{"location":"sol_matrix_matrix/","page":"Solutions","title":"Solutions","text":"
\n Tip\n
\n
\n
\n Download this notebook and run it locally on your machine [recommended]. Click here.\n
\n
\n You can also run this notebook in the cloud using Binder. Click here\n .\n
\n
\n
\n
","category":"page"},{"location":"sol_matrix_matrix/","page":"Solutions","title":"Solutions","text":"\n","category":"page"}]
[{"location":"getting_started_with_julia/#Getting-started","page":"Getting started","title":"Getting started","text":"","category":"section"},{"location":"getting_started_with_julia/#Introduction","page":"Getting started","title":"Introduction","text":"","category":"section"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"The programming of this course will be done using the Julia programming language. Thus, we start by explaining how to get up and running with Julia. After learning this page, you will be able to:","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"Use the Julia REPL;\nRun serial and parallel code;\nInstall and manage Julia packages.","category":"page"},{"location":"getting_started_with_julia/#Why-Julia?","page":"Getting started","title":"Why Julia?","text":"","category":"section"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"Courses related to high-performance computing (HPC) often use languages such as C, C++, or Fortran. We use Julia instead to make the course accessible to a wider set of students, including the ones that have no experience with C/C++ or Fortran, but are willing to learn parallel programming. Julia is a relatively new programming language specifically designed for scientific computing. It combines a high-level syntax close to interpreted languages like Python with the performance of compiled languages like C, C++, or Fortran. Thus, Julia will allow us to write efficient parallel algorithms with a syntax that is convenient in a teaching setting. 
In addition, Julia provides easy access to different programming models to write distributed algorithms, which will make it easy to learn and experiment with them.","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"tip: Tip\nYou can run the code in this link to learn how Julia compares to other languages (C and Python) in terms of performance.","category":"page"},{"location":"getting_started_with_julia/#Installing-Julia","page":"Getting started","title":"Installing Julia","text":"","category":"section"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"This is a tutorial-like page. Follow these steps before you continue reading the document.","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"Download and install Julia from julialang.org;\nFollow the specific instructions for your operating system: Windows, MacOS, or Linux;\nDownload and install VSCode and its Julia extension;","category":"page"},{"location":"getting_started_with_julia/#The-Julia-REPL","page":"Getting started","title":"The Julia REPL","text":"","category":"section"},{"location":"getting_started_with_julia/#Starting-Julia","page":"Getting started","title":"Starting Julia","text":"","category":"section"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"There are several ways of opening Julia depending on your operating system and your IDE, but it is usually as simple as launching the Julia app. With VSCode, open a folder (File > Open Folder). Then, press Ctrl+Shift+P to open the command bar, and execute Julia: Start REPL. If this does not work, make sure you have the Julia extension for VSCode installed. 
Independently of the method you use, opening Julia results in a window with some text ending with:","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"julia>","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"You have just opened the Julia read-evaluate-print loop, or simply the Julia REPL. Congrats! You will spend most of your time using the REPL when working in Julia. The REPL is a console waiting for user input. Just as in other consoles, the string of text right before the input area (julia> in this case) is called the command prompt or simply the prompt.","category":"page"},{"location":"getting_started_with_julia/#Basic-usage","page":"Getting started","title":"Basic usage","text":"","category":"section"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"The usage of the REPL is as follows:","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"You write some input\npress enter\nyou get the output","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"For instance, try this","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"julia> 1 + 1","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"A \"Hello world\" example looks like this in Julia","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"julia> println(\"Hello, world!\")","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"Try to run it in the 
REPL.","category":"page"},{"location":"getting_started_with_julia/#Help-mode","page":"Getting started","title":"Help mode","text":"","category":"section"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"Curious about what function println does? Enter help mode to look into the documentation. This is done by typing a question mark (?) into the input field:","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"julia> ?","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"After typing ?, the command prompt changes to help?>. This means we are in help mode. Now, we can type a function name to see its documentation.","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"help?> println","category":"page"},{"location":"getting_started_with_julia/#Package-and-shell-modes","page":"Getting started","title":"Package and shell modes","text":"","category":"section"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"The REPL comes with two more modes, namely package and shell modes. To enter package mode, type","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"julia> ]","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"Package mode is used to install and manage packages. We are going to discuss package mode in greater detail later. 
To return to normal mode, press the backspace key.","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"To enter shell mode, type a semicolon (;)","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"julia> ;","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"The prompt should have changed to shell> indicating that we are in shell mode. Now you can type commands that you would normally run on your system command line. For instance,","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"shell> ls","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"will display the contents of the current folder on Mac or Linux. Using shell mode in Windows is not straightforward, and thus not recommended for beginners.","category":"page"},{"location":"getting_started_with_julia/#Running-Julia-code","page":"Getting started","title":"Running Julia code","text":"","category":"section"},{"location":"getting_started_with_julia/#Running-more-complex-code","page":"Getting started","title":"Running more complex code","text":"","category":"section"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"Real-world Julia programs are not typed in the REPL in practice. They are written in one or more files and included in the REPL. To try this, create a new file called hello.jl, write the code of the \"Hello world\" example above, and save it. If you are using VSCode, you can create the file using File > New File > Julia File. 
Once the file is saved with the name hello.jl, execute it as follows","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"julia> include(\"hello.jl\")","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"warning: Warning\nMake sure that the file \"hello.jl\" is located in the current working directory of your Julia session. You can query the current directory with the function pwd(). You can change to another directory with the function cd() if needed. Also, make sure that the file extension is .jl.","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"The recommended way of running Julia code is using the REPL as we did. But it is also possible to run code directly from the system command line. To this end, open a terminal and call Julia followed by the path to the file containing the code you want to execute.","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"$ julia hello.jl","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"The previous line assumes that you have Julia properly installed in the system and that it is usable from the terminal. In UNIX systems (Linux and Mac), the Julia binary needs to be in one of the directories listed in the PATH environment variable. 
To check that Julia is properly installed, you can use","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"$ julia --version","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"If this runs without error and you see a version number, you are good to go!","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"note: Note\nIn this tutorial, when a code snippet starts with $, it should be run in the terminal. Otherwise, the code is to be run in the Julia REPL.","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"tip: Tip\nAvoid calling Julia code from the terminal; use the Julia REPL instead! Each time you call Julia from the terminal, you start a fresh Julia session and Julia will need to compile your code from scratch. This can be time-consuming for large projects. In contrast, if you execute code in the REPL, Julia will compile code incrementally, which is much faster. Running code on a cluster (like on DAS-5 for the Julia assignment) is among the few situations in which you need to run Julia code from the terminal.","category":"page"},{"location":"getting_started_with_julia/#Running-parallel-code","page":"Getting started","title":"Running parallel code","text":"","category":"section"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"Since we are in a parallel computing course, let's run a parallel \"hello world\" example in Julia. Open a Julia REPL and write","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"julia> using Distributed\njulia> @everywhere println(\"Hello, world! 
I am proc $(myid()) from $(nprocs())\")","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"Here, we are using the Distributed package, the part of the Julia standard library that provides distributed-memory parallel support. The code prints the process id and the number of processes in the current Julia session.","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"You will probably only see output from one process. We need to add more processes to run the example in parallel. This is done with the addprocs function.","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"julia> addprocs(3)","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"We have added 3 new processes; together with the original one, we now have 4 processes. Run the code again.","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"julia> @everywhere println(\"Hello, world! I am proc $(myid()) from $(nprocs())\")","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"Now, you should see output from 4 processes.","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"It is possible to specify the number of processes when starting Julia from the terminal with the -p argument (useful, e.g., when running in a cluster). 
If you launch Julia from the terminal as","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"$ julia -p 3","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"and then run","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"julia> @everywhere println(\"Hello, world! I am proc $(myid()) from $(nprocs())\")","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"You should get output from 4 processes as before.","category":"page"},{"location":"getting_started_with_julia/#Installing-packages","page":"Getting started","title":"Installing packages","text":"","category":"section"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"One of the most useful features of Julia is its package manager. It allows one to install Julia packages in a straightforward and platform independent way. To illustrate this, let us consider the following parallel \"Hello world\" example. This example uses the message passing interface (MPI). 
We will learn more about MPI later in the course.","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"Copy the following block of code into a new file named \"hello_mpi.jl\"","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"# file hello_mpi.jl\nusing MPI\nMPI.Init()\ncomm = MPI.COMM_WORLD\nrank = MPI.Comm_rank(comm)\nnranks = MPI.Comm_size(comm)\nprintln(\"Hello world, I am rank $rank of $nranks\")","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"As you can see from this example, one can access MPI from Julia in a clean way, without type annotations and other complexities of C/C++ code.","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"Now, run the file from the REPL","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"julia> include(\"hello_mpi.jl\")","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"It probably didn't work, right? Read the error message and note that the MPI package needs to be installed to run this code.","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"To install a package, we need to enter package mode. Remember that we entered help mode by typing ?. Package mode is activated by typing ]","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"julia> ]","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"At this point, the prompt should have changed to (@v1.8) pkg> indicating that we are in package mode. 
The text between parentheses indicates the active project, i.e., where packages are going to be installed. In this case, we are working with the global project associated with our Julia installation (which is Julia 1.8 in this example, but it can be another version in your case).","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"To install the MPI package, type","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"(@v1.8) pkg> add MPI","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"Congrats, you have installed MPI!","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"note: Note\nMany Julia package names end with .jl. This is just a way of signaling that a package is written in Julia. When using such packages, the .jl needs to be omitted. In this case, we have installed the MPI.jl package even though we have only typed MPI in the REPL.","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"note: Note\nThe package you have installed is the Julia interface to MPI, called MPI.jl. Note that it is not an MPI library by itself. It is just a thin wrapper between MPI and Julia. To use this interface, you need an actual MPI library installed on your system such as OpenMPI or MPICH. Julia downloads and installs an MPI library for you, but it is also possible to use an MPI library already available on your system. This is useful, e.g., when running on HPC clusters. 
See the documentation of MPI.jl for further details.","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"To check that the package was installed properly, exit package mode by pressing the backspace key several times, and run the file again","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"julia> include(\"hello_mpi.jl\")","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"Now, it should work, but you probably get output from a single MPI rank only.","category":"page"},{"location":"getting_started_with_julia/#Running-MPI-code","page":"Getting started","title":"Running MPI code","text":"","category":"section"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"To run MPI applications in parallel, you need a launcher like mpiexec. MPI codes written in Julia are no exception to this rule. From the system terminal, you can run","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"$ mpiexec -np 4 julia hello_mpi.jl","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"But it will probably not work, since the version of mpiexec needs to match the MPI version we are using from Julia. 
You can find the path to the mpiexec binary you need to use with these commands","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"julia> using MPI\njulia> MPI.mpiexec_path","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"and then try again","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"$ /path/to/my/mpiexec -np 4 julia hello_mpi.jl","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"with your particular path.","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"However, this is not very convenient. Don't worry if you could not make it work! A more elegant way to run MPI code is from the Julia REPL directly, by using these commands:","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"julia> using MPI\njulia> mpiexec(cmd->run(`$cmd -np 4 julia hello_mpi.jl`))","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"Now, you should see output from 4 ranks.","category":"page"},{"location":"getting_started_with_julia/#Package-manager","page":"Getting started","title":"Package manager","text":"","category":"section"},{"location":"getting_started_with_julia/#Installing-packages-locally","page":"Getting started","title":"Installing packages locally","text":"","category":"section"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"We have installed the MPI package globally and it will be available in all Julia sessions. 
However, in some situations, we want to work with different versions of the same package, or to install packages in an isolated way to avoid potential conflicts with other packages. This can be done by using local projects.","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"A project is simply a folder on the hard disk. To use a particular folder as your project, you need to activate it. This is done by entering package mode and using the activate command followed by the path to the folder you want to activate.","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"(@v1.8) pkg> activate .","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"The previous command activates the current working directory. Note that the dot . is indeed the path to the current folder.","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"The prompt has changed to (lessons) pkg> indicating that we are in the project within the lessons folder. The particular folder name can be different in your case.","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"tip: Tip\nYou can activate a project directly when opening Julia from the terminal using the --project flag. The command $ julia --project=. will open Julia and activate a project in the current directory. You can also achieve the same effect by setting the environment variable JULIA_PROJECT to the path of the folder you want to activate.","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"note: Note\nThe active project folder and the current working directory are two independent concepts! 
For instance, (@v1.8) pkg> activate folderB followed by julia> cd(\"folderA\") will activate the project in folderB and change the current working directory to folderA.","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"At this point, all package-related operations will be local to the new project. For instance, install the DataFrames package.","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"(lessons) pkg> add DataFrames","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"Use the package to check that it is installed","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"julia> using DataFrames\njulia> DataFrame(a=[1,2],b=[3,4])","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"Now, we can return to the global project to check that DataFrames has not been installed there. 
To return to the global environment, use activate without a folder name.","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"(lessons) pkg> activate","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"The prompt is again (@v1.8) pkg>","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"Now, try to use DataFrames.","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"julia> using DataFrames\njulia> DataFrame(a=[1,2],b=[3,4])","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"You should get an error or a warning unless you already had DataFrames installed globally.","category":"page"},{"location":"getting_started_with_julia/#Project-and-Manifest-files","page":"Getting started","title":"Project and Manifest files","text":"","category":"section"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"The information about a project is stored in two files Project.toml and Manifest.toml.","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"Project.toml contains the packages explicitly installed (the direct dependencies)\nManifest.toml contains direct and indirect dependencies along with the concrete version of each package.","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"In other words, Project.toml contains the packages relevant for the user, whereas Manifest.toml is the detailed snapshot of all dependencies. 
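Project.toml files are plain TOML, so their contents can also be inspected programmatically. Below is a minimal sketch using Julia's built-in TOML standard library; the [deps] string is a hypothetical example written inline so the snippet is self-contained, not a file from the course:

```julia
# Parse a Project.toml-style [deps] table and list the direct dependencies.
# TOML is part of the Julia standard library (Julia >= 1.6).
using TOML

project_str = """
[deps]
DataFrames = "a93c6f00-e57d-5684-b7b6-d8193f3e46c0"
MPI = "da04e1cc-30fd-572f-bb4f-1f8673147195"
"""

project = TOML.parse(project_str)            # returns a Dict{String,Any}
deps = sort(collect(keys(project["deps"])))  # package names are the keys
println("Direct dependencies: ", join(deps, ", "))
```

To read an actual project file from disk, you could use TOML.parsefile with the path to your Project.toml instead of parsing a string.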
The Manifest.toml can be used to reproduce the same environment on another machine.","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"You can see the path to the current Project.toml file by using the status command (or st in its short form) while in package mode","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"(@v1.8) pkg> status","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"The information about the Manifest.toml can be inspected by passing the -m flag.","category":"page"},{"location":"getting_started_with_julia/#Installing-packages-from-a-project-file","page":"Getting started","title":"Installing packages from a project file","text":"","category":"section"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"Project files can be used to install lists of packages defined by others. 
E.g., to install all the dependencies of a Julia application.","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"Assume that a colleague has sent you a Project.toml file with this content:","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"[deps]\nBenchmarkTools = \"6e4b80f9-dd63-53aa-95a3-0cdb28fa8baf\"\nDataFrames = \"a93c6f00-e57d-5684-b7b6-d8193f3e46c0\"\nMPI = \"da04e1cc-30fd-572f-bb4f-1f8673147195\"","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"Copy the contents of the previous code block into a file called Project.toml and place it in an empty folder named newproject. It is important that the file is named Project.toml. You can create a new folder from the REPL with","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"julia> mkdir(\"newproject\")","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"To install all the packages registered in this file, you need to activate the folder containing your Project.toml file","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"(@v1.8) pkg> activate newproject","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"and then instantiate it","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"(newproject) pkg> instantiate","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"The instantiate command will download and install all listed packages and their dependencies in one go.","category":"page"},{"location":"getting_started_with_julia/#Getting-help-in-package-mode","page":"Getting started","title":"Getting help in package mode","text":"","category":"section"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"You can get help on a particular package command by typing help in front of it","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"(@v1.8) pkg> help activate","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"You can get an overview of all package commands by typing help alone","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"(@v1.8) pkg> help","category":"page"},{"location":"getting_started_with_julia/#Package-operations-in-Julia-code","page":"Getting started","title":"Package operations in Julia code","text":"","category":"section"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"In some situations, it is required to use package commands in Julia code, e.g., to automate the installation and deployment of Julia applications. This can be done using the Pkg package. 
For instance","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"julia> using Pkg\njulia> Pkg.status()","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"is equivalent to calling status in package mode.","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"(@v1.8) pkg> status","category":"page"},{"location":"getting_started_with_julia/#Conclusion","page":"Getting started","title":"Conclusion","text":"","category":"section"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"We have learned the basics of how to work with Julia. If you want to dig further into the topics we have covered here, you can take a look at the following links","category":"page"},{"location":"getting_started_with_julia/","page":"Getting started","title":"Getting started","text":"Julia Manual\nPackage manager","category":"page"},{"location":"notebooks/tsp/","page":"-","title":"-","text":"EditURL = \"https://github.com/fverdugo/XM_40017/blob/main/docs/src/notebooks/tsp.ipynb\"","category":"page"},{"location":"notebooks/tsp/","page":"-","title":"-","text":"
\n Tip\n
\n
\n
\n Download this notebook and run it locally on your machine [recommended]. Click here.\n
\n
\n You can also run this notebook in the cloud using Binder. Click here\n .\n
\n
\n
\n
","category":"page"},{"location":"notebooks/julia_tutorial/","page":"-","title":"-","text":"\n","category":"page"},{"location":"notebooks/julia_async/","page":"Tasks and channels","title":"Tasks and channels","text":"EditURL = \"https://github.com/fverdugo/XM_40017/blob/main/docs/src/notebooks/julia_async.ipynb\"","category":"page"},{"location":"notebooks/julia_async/","page":"Tasks and channels","title":"Tasks and channels","text":"
\n Tip\n
\n
\n
\n Download this notebook and run it locally on your machine [recommended]. Click here.\n
\n
\n You can also run this notebook in the cloud using Binder. Click here\n .\n
\n
\n
\n
","category":"page"},{"location":"notebooks/julia_async/","page":"Tasks and channels","title":"Tasks and channels","text":"\n","category":"page"},{"location":"notebooks/sol_matrix_matrix/","page":"Solutions","title":"Solutions","text":"EditURL = \"https://github.com/fverdugo/XM_40017/blob/main/docs/src/notebooks/sol_matrix_matrix.ipynb\"","category":"page"},{"location":"notebooks/sol_matrix_matrix/","page":"Solutions","title":"Solutions","text":"
\n Tip\n
\n
\n
\n Download this notebook and run it locally on your machine [recommended]. Click here.\n
\n
\n You can also run this notebook in the cloud using Binder. Click here\n .\n
\n
\n
\n
","category":"page"},{"location":"notebooks/mpi_tutorial/","page":"-","title":"-","text":"\n","category":"page"},{"location":"notebooks/julia_distributed/","page":"Remote calls and remote channels","title":"Remote calls and remote channels","text":"EditURL = \"https://github.com/fverdugo/XM_40017/blob/main/docs/src/notebooks/julia_distributed.ipynb\"","category":"page"},{"location":"notebooks/julia_distributed/","page":"Remote calls and remote channels","title":"Remote calls and remote channels","text":"
\n Tip\n
\n
\n
\n Download this notebook and run it locally on your machine [recommended]. Click here.\n
\n
\n You can also run this notebook in the cloud using Binder. Click here\n .\n
\n
\n
\n
","category":"page"},{"location":"notebooks/julia_distributed/","page":"Remote calls and remote channels","title":"Remote calls and remote channels","text":"\n","category":"page"},{"location":"","page":"Home","title":"Home","text":"CurrentModule = XM_40017","category":"page"},{"location":"#Programming-Large-Scale-Parallel-Systems-(XM_40017)","page":"Home","title":"Programming Large-Scale Parallel Systems (XM_40017)","text":"","category":"section"},{"location":"","page":"Home","title":"Home","text":"Welcome to the interactive lecture notes of the Programming Large-Scale Parallel Systems course at VU Amsterdam!","category":"page"},{"location":"#What","page":"Home","title":"What","text":"","category":"section"},{"location":"","page":"Home","title":"Home","text":"This page contains part of the course material of the Programming Large-Scale Parallel Systems course at VU Amsterdam. We provide several lecture notes in jupyter notebook format, which will help you to learn how to design, analyze, and program parallel algorithms on multi-node computing systems. Further information about the course is found in the study guide (click here) and our Canvas page (for registered students). ","category":"page"},{"location":"","page":"Home","title":"Home","text":"note: Note\nThis page contains only part of the course material. The rest is available on Canvas. 
In particular, the lecture notes in this public webpage do not fully cover all topics in the final exam.","category":"page"},{"location":"#How-to-use-this-page","page":"Home","title":"How to use this page","text":"","category":"section"},{"location":"","page":"Home","title":"Home","text":"You have two main ways of running the notebooks:","category":"page"},{"location":"","page":"Home","title":"Home","text":"Download the notebooks and run them locally on your computer (recommended)\nRun the notebooks in the cloud via mybinder.org (high startup time).","category":"page"},{"location":"","page":"Home","title":"Home","text":"You also have the static version of the notebooks displayed in this webpage for quick reference. At each notebook page you will find a green box with links to download the notebook or to open it on mybinder.","category":"page"},{"location":"#How-to-run-the-notebooks-locally","page":"Home","title":"How to run the notebooks locally","text":"","category":"section"},{"location":"","page":"Home","title":"Home","text":"To run a notebook locally, follow these steps:","category":"page"},{"location":"","page":"Home","title":"Home","text":"Install Julia (if not done already). More information in Getting started.\nDownload the notebook.\nLaunch Julia. More information in Getting started.\nExecute these commands in the Julia command line:","category":"page"},{"location":"","page":"Home","title":"Home","text":"julia> using Pkg\njulia> Pkg.add(\"IJulia\")\njulia> using IJulia\njulia> notebook()","category":"page"},{"location":"","page":"Home","title":"Home","text":"These commands will open jupyter in your web browser. Navigate in jupyter to the notebook file you have downloaded and open it.","category":"page"},{"location":"#Authors","page":"Home","title":"Authors","text":"","category":"section"},{"location":"","page":"Home","title":"Home","text":"This material was created by Francesc Verdugo with the help of Gelieza Kötterheinrich. 
Part of the notebooks are based on the course slides by Henri Bal.","category":"page"},{"location":"#License","page":"Home","title":"License","text":"","category":"section"},{"location":"","page":"Home","title":"Home","text":"All material in this page that is original to this course may be used under a CC BY 4.0 license.","category":"page"},{"location":"#Acknowledgment","page":"Home","title":"Acknowledgment","text":"","category":"section"},{"location":"","page":"Home","title":"Home","text":"This page was created with the support of the Faculty of Science of Vrije Universiteit Amsterdam in the framework of the project \"Interactive lecture notes and exercises for the Programming Large-Scale Parallel Systems course\" funded by the \"Innovation budget BETA 2023 Studievoorschotmiddelen (SVM) towards Activated Blended Learning\".","category":"page"}]
}
diff --git a/dev/sol_matrix_matrix/index.html b/dev/sol_matrix_matrix/index.html
deleted file mode 100644
index f608d13..0000000
--- a/dev/sol_matrix_matrix/index.html
+++ /dev/null
@@ -1,21 +0,0 @@
-
-Solutions · XM_40017