diff --git a/dev/.documenter-siteinfo.json b/dev/.documenter-siteinfo.json index 1aa1a33..4288600 100644 --- a/dev/.documenter-siteinfo.json +++ b/dev/.documenter-siteinfo.json @@ -1 +1 @@ -{"documenter":{"julia_version":"1.10.4","generation_timestamp":"2024-08-27T08:33:26","documenter_version":"1.6.0"}} \ No newline at end of file +{"documenter":{"julia_version":"1.10.5","generation_timestamp":"2024-09-02T11:57:03","documenter_version":"1.6.0"}} \ No newline at end of file diff --git a/dev/LEQ/index.html b/dev/LEQ/index.html index 6ea8514..3718643 100644 --- a/dev/LEQ/index.html +++ b/dev/LEQ/index.html @@ -1,5 +1,5 @@ -- · XM_40017
+- · XM_40017
Tip
    @@ -14,4 +14,4 @@ var myIframe = document.getElementById("notebook"); iFrameResize({log:true}, myIframe); }); -
+
diff --git a/dev/asp/index.html b/dev/asp/index.html index c30324a..73807f7 100644 --- a/dev/asp/index.html +++ b/dev/asp/index.html @@ -1,5 +1,5 @@ -- · XM_40017
+- · XM_40017
Tip
    @@ -14,4 +14,4 @@ var myIframe = document.getElementById("notebook"); iFrameResize({log:true}, myIframe); }); -
+
diff --git a/dev/getting_started_with_julia/index.html b/dev/getting_started_with_julia/index.html index a2d3018..9a8c3b8 100644 --- a/dev/getting_started_with_julia/index.html +++ b/dev/getting_started_with_julia/index.html @@ -1,5 +1,5 @@ -Getting started · XM_40017

Getting started

Introduction

The programming of this course will be done using the Julia programming language. Thus, we start by explaining how to get up and running with Julia. After studying this page, you will be able to:

  • Use the Julia REPL,
  • Run serial and parallel code,
  • Install and manage Julia packages.

Why Julia?

Courses related to high-performance computing (HPC) often use languages such as C, C++, or Fortran. We use Julia instead to make the course accessible to a wider set of students, including those who have no experience with C/C++ or Fortran but are willing to learn parallel programming. Julia is a relatively new programming language specifically designed for scientific computing. It combines a high-level syntax close to interpreted languages like Python with the performance of compiled languages like C, C++, or Fortran. Thus, Julia allows us to write efficient parallel algorithms with a syntax that is convenient in a teaching setting. In addition, Julia provides easy access to different programming models for writing distributed algorithms, which makes it useful for learning and experimenting with them.

Tip

You can run the code in this link to learn how Julia compares to other languages (C and Python) in terms of performance.

Installing Julia

This is a tutorial-like page. Follow these steps before you continue reading the document.

The Julia REPL

Starting Julia

There are several ways of opening Julia depending on your operating system and your IDE, but it is usually as simple as launching the Julia app. With VSCode, open a folder (File > Open Folder). Then, press Ctrl+Shift+P to open the command bar, and execute Julia: Start REPL. If this does not work, make sure you have the Julia extension for VSCode installed. Independently of the method you use, opening Julia results in a window with some text ending with:

julia>

You have just opened the Julia read-evaluate-print loop, or simply the Julia REPL. Congrats! You will spend most of your time in the REPL when working with Julia. The REPL is a console waiting for user input. Just as in other consoles, the string of text right before the input area (julia> in this case) is called the command prompt, or simply the prompt.

Basic usage

The usage of the REPL is as follows:

  • You write some input,
  • press Enter,
  • and you get the output.

For instance, try this

julia> 1 + 1

A "Hello world" example looks like this in Julia

julia> println("Hello, world!")

Try to run it in the REPL.

Help mode

Curious about what the function println does? Enter into help mode to look into the documentation. This is done by typing a question mark (?) into the input field:

julia> ?

After typing ?, the command prompt changes to help?>, meaning that we are in help mode. Now we can type a function name to see its documentation.

help?> println

Package and shell modes

The REPL comes with two more modes, namely package and shell modes. To enter package mode type

julia> ]

Package mode is used to install and manage packages. We are going to discuss package mode in greater detail later. To return to normal mode, press the backspace key (several times if needed).

To enter shell mode type semicolon (;)

julia> ;

The prompt should have changed to shell>, indicating that we are in shell mode. Now you can type commands that you would normally run on your system command line. For instance,

shell> ls

will display the contents of the current folder in Mac or Linux. Using shell mode in Windows is not straightforward, and thus not recommended for beginners.

Running Julia code

Running more complex code

In practice, real-world Julia programs are not typed directly into the REPL. They are written in one or more files and included from the REPL. To try this, create a new file called hello.jl, write the code of the "Hello world" example above, and save it. If you are using VSCode, you can create the file using File > New File > Julia File. Once the file is saved with the name hello.jl, execute it as follows

julia> include("hello.jl")
Warning

Make sure that the file "hello.jl" is located in the current working directory of your Julia session. You can query the current directory with function pwd(). You can change to another directory with function cd() if needed. Also, make sure that the file extension is .jl.

The recommended way of running Julia code is using the REPL as we did. But it is also possible to run code directly from the system command line. To this end, open a terminal and call Julia followed by the path to the file containing the code you want to execute.

$ julia hello.jl

The previous line assumes that you have Julia properly installed in the system and that it's usable from the terminal. In UNIX systems (Linux and Mac), the Julia binary needs to be in one of the directories listed in the PATH environment variable. To check that Julia is properly installed, you can use

$ julia --version

If this runs without error and you see a version number, you are good to go!

You can also run julia code from the terminal using the -e flag:

$ julia -e 'println("Hello, world!")'
Note

In this tutorial, when a code snippet starts with $, it should be run in the terminal. Otherwise, the code is to be run in the Julia REPL.

Tip

Avoid calling Julia code from the terminal; use the Julia REPL instead! Each time you call Julia from the terminal, you start a fresh Julia session and Julia will need to compile your code from scratch. This can be time-consuming for large projects. In contrast, if you execute code in the REPL, Julia will compile code incrementally, which is much faster. Running code on a cluster (like on DAS-5 for the Julia assignment) is among the few situations in which you need to run Julia code from the terminal. Visit this link (Julia workflow tips) from the official Julia documentation for further information about how to develop Julia code effectively.

Running parallel code

Since we are in a parallel computing course, let's run a parallel "Hello world" example in Julia. Open a Julia REPL and write

julia> using Distributed
+Getting started · XM_40017

Getting started

Introduction

The programming of this course will be done using the Julia programming language. Thus, we start by explaining how to get up and running with Julia. After studying this page, you will be able to:

  • Use the Julia REPL,
  • Run serial and parallel code,
  • Install and manage Julia packages.

Why Julia?

Courses related to high-performance computing (HPC) often use languages such as C, C++, or Fortran. We use Julia instead to make the course accessible to a wider set of students, including those who have no experience with C/C++ or Fortran but are willing to learn parallel programming. Julia is a relatively new programming language specifically designed for scientific computing. It combines a high-level syntax close to interpreted languages like Python with the performance of compiled languages like C, C++, or Fortran. Thus, Julia allows us to write efficient parallel algorithms with a syntax that is convenient in a teaching setting. In addition, Julia provides easy access to different programming models for writing distributed algorithms, which makes it useful for learning and experimenting with them.

Tip

You can run the code in this link to learn how Julia compares to other languages (C and Python) in terms of performance.

Installing Julia

This is a tutorial-like page. Follow these steps before you continue reading the document.

The Julia REPL

Starting Julia

There are several ways of opening Julia depending on your operating system and your IDE, but it is usually as simple as launching the Julia app. With VSCode, open a folder (File > Open Folder). Then, press Ctrl+Shift+P to open the command bar, and execute Julia: Start REPL. If this does not work, make sure you have the Julia extension for VSCode installed. Independently of the method you use, opening Julia results in a window with some text ending with:

julia>

You have just opened the Julia read-evaluate-print loop, or simply the Julia REPL. Congrats! You will spend most of your time in the REPL when working with Julia. The REPL is a console waiting for user input. Just as in other consoles, the string of text right before the input area (julia> in this case) is called the command prompt, or simply the prompt.

Basic usage

The usage of the REPL is as follows:

  • You write some input,
  • press Enter,
  • and you get the output.

For instance, try this

julia> 1 + 1

A "Hello world" example looks like this in Julia

julia> println("Hello, world!")

Try to run it in the REPL.

Help mode

Curious about what the function println does? Enter into help mode to look into the documentation. This is done by typing a question mark (?) into the input field:

julia> ?

After typing ?, the command prompt changes to help?>, meaning that we are in help mode. Now we can type a function name to see its documentation.

help?> println

Package and shell modes

The REPL comes with two more modes, namely package and shell modes. To enter package mode type

julia> ]

Package mode is used to install and manage packages. We are going to discuss package mode in greater detail later. To return to normal mode, press the backspace key (several times if needed).

To enter shell mode type semicolon (;)

julia> ;

The prompt should have changed to shell>, indicating that we are in shell mode. Now you can type commands that you would normally run on your system command line. For instance,

shell> ls

will display the contents of the current folder in Mac or Linux. Using shell mode in Windows is not straightforward, and thus not recommended for beginners.

Running Julia code

Running more complex code

In practice, real-world Julia programs are not typed directly into the REPL. They are written in one or more files and included from the REPL. To try this, create a new file called hello.jl, write the code of the "Hello world" example above, and save it. If you are using VSCode, you can create the file using File > New File > Julia File. Once the file is saved with the name hello.jl, execute it as follows

julia> include("hello.jl")
Warning

Make sure that the file "hello.jl" is located in the current working directory of your Julia session. You can query the current directory with function pwd(). You can change to another directory with function cd() if needed. Also, make sure that the file extension is .jl.
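The checks in the warning above can be done directly from the REPL; here is a quick sketch (the commented-out path is hypothetical, adapt it to your setup):

```julia
# Quick sanity checks before including a file (all standard Julia functions).
println(pwd())                # where Julia will look for "hello.jl"
# cd("some/other/folder")     # hypothetical path: uncomment and adapt if needed
println(isfile("hello.jl"))   # true if the file is in the current directory
```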

The recommended way of running Julia code is using the REPL as we did. But it is also possible to run code directly from the system command line. To this end, open a terminal and call Julia followed by the path to the file containing the code you want to execute.

$ julia hello.jl

The previous line assumes that you have Julia properly installed in the system and that it's usable from the terminal. In UNIX systems (Linux and Mac), the Julia binary needs to be in one of the directories listed in the PATH environment variable. To check that Julia is properly installed, you can use

$ julia --version

If this runs without error and you see a version number, you are good to go!

You can also run julia code from the terminal using the -e flag:

$ julia -e 'println("Hello, world!")'
Note

In this tutorial, when a code snippet starts with $, it should be run in the terminal. Otherwise, the code is to be run in the Julia REPL.

Tip

Avoid calling Julia code from the terminal; use the Julia REPL instead! Each time you call Julia from the terminal, you start a fresh Julia session and Julia will need to compile your code from scratch. This can be time-consuming for large projects. In contrast, if you execute code in the REPL, Julia will compile code incrementally, which is much faster. Running code on a cluster (like on DAS-5 for the Julia assignment) is among the few situations in which you need to run Julia code from the terminal. Visit this link (Julia workflow tips) from the official Julia documentation for further information about how to develop Julia code effectively.

Running parallel code

Since we are in a parallel computing course, let's run a parallel "Hello world" example in Julia. Open a Julia REPL and write

julia> using Distributed
 julia> @everywhere println("Hello, world! I am proc $(myid()) from $(nprocs())")

Here, we are using the Distributed package, the part of the Julia standard library that provides distributed-memory parallelism. The code prints the process id and the number of processes in the current Julia session.

You will probably only see output from 1 process. We need to add more processes to run the example in parallel. This is done with the addprocs function.

julia> addprocs(3)

We have added 3 new processes. Together with the original one, this gives 4 processes in total. Run the code again.

julia> @everywhere println("Hello, world! I am proc $(myid()) from $(nprocs())")

Now, you should see output from 4 processes.
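The worker processes can do more than print messages. As an illustration (this example is not part of the tutorial; the function name slow_square is our own), pmap from the Distributed standard library distributes a computation over the available workers:

```julia
using Distributed
addprocs(3)   # skip this line if you already added workers above

# The function must be defined on every process before workers can call it.
@everywhere slow_square(x) = (sleep(0.1); x^2)

# pmap sends each element to a free worker and gathers the results in input order.
result = pmap(slow_square, 1:8)
println(result)   # [1, 4, 9, 16, 25, 36, 49, 64]
```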

It is possible to specify the number of processes when starting Julia from the terminal with the -p argument (useful, e.g., when running in a cluster). If you launch Julia from the terminal as

$ julia -p 3

and then run

julia> @everywhere println("Hello, world! I am proc $(myid()) from $(nprocs())")

You should get output from 4 processes as before.

Installing packages

One of the most useful features of Julia is its package manager. It allows one to install Julia packages in a straightforward and platform-independent way. To illustrate this, let us consider the following parallel "Hello world" example. This example uses the Message Passing Interface (MPI). We will learn more about MPI later in the course.

Copy the following block of code into a new file named "hello_mpi.jl"

# file hello_mpi.jl
 using MPI
 MPI.Init()
@@ -15,4 +15,4 @@ DataFrames = "a93c6f00-e57d-5684-b7b6-d8193f3e46c0"
 MPI = "da04e1cc-30fd-572f-bb4f-1f8673147195"

Copy the contents of the previous code block into a file called Project.toml and place it in an empty folder named newproject. It is important that the file is named exactly Project.toml. You can create a new folder from the REPL with

julia> mkdir("newproject")

To install all the packages registered in this file you need to activate the folder containing your Project.toml file

(@v1.10) pkg> activate newproject

and then instantiate it

(newproject) pkg> instantiate

The instantiate command will download and install all the listed packages and their dependencies in a single step.
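The same two steps can also be performed from Julia code with the Pkg API; a sketch, assuming the newproject folder with its Project.toml exists as described above:

```julia
# Programmatic equivalent of `activate newproject` + `instantiate` in package mode.
using Pkg
Pkg.activate("newproject")   # select the environment defined by newproject/Project.toml
Pkg.instantiate()            # download and install all listed packages
Pkg.status()                 # show what ended up in the environment
```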

Getting help in package mode

You can get help about a particular package operator by writing help in front of it

(@v1.10) pkg> help activate

You can get an overview of all package commands by typing help alone

(@v1.10) pkg> help

Package operations in Julia code

In some situations, you need to run package commands from Julia code, e.g., to automate the installation and deployment of Julia applications. This can be done using the Pkg package. For instance

julia> using Pkg
 julia> Pkg.status()

is equivalent to calling status in package mode.

(@v1.10) pkg> status

Creating your own package

In many situations, it is useful to create your own package, for instance, when working with a large code base, when you want to reduce compilation latency using Revise.jl, or if you want to eventually register your package and share it with others.

The simplest way of generating a package (called MyPackage) is as follows. Open Julia, go to package mode, and type

(@v1.10) pkg> generate MyPackage

This will create a minimal package consisting of a new folder MyPackage with two files:

  • MyPackage/Project.toml: Project file defining the direct dependencies of your package.
  • MyPackage/src/MyPackage.jl: Main source file of your package. You can split your code in several files if needed, and include them in the package main file using function include.
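For reference, the main source file created by generate contains a minimal module with an example greet function, roughly like this (the exact content may vary slightly between Julia versions):

```julia
# Approximate content of MyPackage/src/MyPackage.jl as produced by `generate`.
module MyPackage

greet() = print("Hello World!")

end # module MyPackage
```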
Tip

This approach only generates a very minimal package. To create a more sophisticated package skeleton (including unit testing, code coverage, readme file, license, etc.) use PkgTemplates.jl or BestieTemplate.jl. The latter is developed in Amsterdam at the Netherlands eScience Center.

You can add dependencies to the package by activating the MyPackage folder in package mode and adding new dependencies as always:

(@v1.10) pkg> activate MyPackage
 (MyPackage) pkg> add MPI

This will add MPI to your package dependencies.

Using your own package

To use your package you first need to add it to a package environment of your choice. This is done by changing to package mode and typing develop followed by the path to the folder containing the package. For instance:

(@v1.10) pkg> develop MyPackage
Note

You do not need to "develop" your package if you activated the package folder MyPackage.

Now, we can go back to standard Julia mode and use it as any other package:

using MyPackage
-MyPackage.greet()

Here, we just called the example function defined in MyPackage/src/MyPackage.jl.

Conclusion

We have learned the basics of how to work with Julia, including how to run serial and parallel code, and how to manage, create, and use Julia packages. This knowledge will allow you to follow the course effectively! If you want to further dig into the topics we have covered here, you can take a look at the following links:

+MyPackage.greet()

Here, we just called the example function defined in MyPackage/src/MyPackage.jl.

Conclusion

We have learned the basics of how to work with Julia, including how to run serial and parallel code, and how to manage, create, and use Julia packages. This knowledge will allow you to follow the course effectively! If you want to further dig into the topics we have covered here, you can take a look at the following links:

diff --git a/dev/index.html b/dev/index.html index 40819f4..6cad136 100644 --- a/dev/index.html +++ b/dev/index.html @@ -1,5 +1,5 @@ -Home · XM_40017

Programming Large-Scale Parallel Systems (XM_40017)

Welcome to the interactive lecture notes of the Programming Large-Scale Parallel Systems course at VU Amsterdam!

What

This page contains part of the course material of the Programming Large-Scale Parallel Systems course at VU Amsterdam. We provide several lecture notes in Jupyter notebook format, which will help you to learn how to design, analyze, and program parallel algorithms on multi-node computing systems. Further information about the course is found in the study guide (click here) and our Canvas page (for registered students).

Note

Material will be added incrementally to the website as the course advances.

Warning

This page will eventually contain only a part of the course material. The rest will be available on Canvas. In particular, the material in this public webpage does not fully cover all topics in the final exam.

How to use this page

You have two main ways of studying the notebooks:

  • Download the notebooks and run them locally on your computer (recommended). At each notebook page you will find a green box with links to download the notebook.
  • You also have the static version of the notebooks displayed in this webpage for quick reference.

How to run the notebooks locally

To run a notebook locally follow these steps:

  • Install Julia (if not done already). More information in Getting started.
  • Download the notebook.
  • Launch Julia. More information in Getting started.
  • Execute these commands in the Julia command line:
julia> using Pkg
+Home · XM_40017

Programming Large-Scale Parallel Systems (XM_40017)

Welcome to the interactive lecture notes of the Programming Large-Scale Parallel Systems course at VU Amsterdam!

What

This page contains part of the course material of the Programming Large-Scale Parallel Systems course at VU Amsterdam. We provide several lecture notes in Jupyter notebook format, which will help you to learn how to design, analyze, and program parallel algorithms on multi-node computing systems. Further information about the course is found in the study guide (click here) and our Canvas page (for registered students).

Note

Material will be added incrementally to the website as the course advances.

Warning

This page will eventually contain only a part of the course material. The rest will be available on Canvas. In particular, the material in this public webpage does not fully cover all topics in the final exam.

How to use this page

You have two main ways of studying the notebooks:

  • Download the notebooks and run them locally on your computer (recommended). At each notebook page you will find a green box with links to download the notebook.
  • You also have the static version of the notebooks displayed in this webpage for quick reference.

How to run the notebooks locally

To run a notebook locally follow these steps:

  • Install Julia (if not done already). More information in Getting started.
  • Download the notebook.
  • Launch Julia. More information in Getting started.
  • Execute these commands in the Julia command line:
julia> using Pkg
 julia> Pkg.add("IJulia")
 julia> using IJulia
-julia> notebook()
  • These commands will open Jupyter in your web browser. In Jupyter, navigate to the notebook file you have downloaded and open it.

Authors

This material is created by Francesc Verdugo with the help of Gelieza Kötterheinrich. Part of the notebooks are based on the course slides by Henri Bal.

License

All material on this page that is original to this course may be used under a CC BY 4.0 license.

Acknowledgment

This page was created with the support of the Faculty of Science of Vrije Universiteit Amsterdam in the framework of the project "Interactive lecture notes and exercises for the Programming Large-Scale Parallel Systems course" funded by the "Innovation budget BETA 2023 Studievoorschotmiddelen (SVM) towards Activated Blended Learning".

+julia> notebook()
  • These commands will open Jupyter in your web browser. In Jupyter, navigate to the notebook file you have downloaded and open it.

Authors

This material is created by Francesc Verdugo with the help of Gelieza Kötterheinrich. Part of the notebooks are based on the course slides by Henri Bal.

License

All material on this page that is original to this course may be used under a CC BY 4.0 license.

Acknowledgment

This page was created with the support of the Faculty of Science of Vrije Universiteit Amsterdam in the framework of the project "Interactive lecture notes and exercises for the Programming Large-Scale Parallel Systems course" funded by the "Innovation budget BETA 2023 Studievoorschotmiddelen (SVM) towards Activated Blended Learning".

diff --git a/dev/jacobi_2D/index.html b/dev/jacobi_2D/index.html index dbe58ed..38e561d 100644 --- a/dev/jacobi_2D/index.html +++ b/dev/jacobi_2D/index.html @@ -1,5 +1,5 @@ -- · XM_40017
+- · XM_40017
Tip
    @@ -14,4 +14,4 @@ var myIframe = document.getElementById("notebook"); iFrameResize({log:true}, myIframe); }); -
+
diff --git a/dev/jacobi_method/index.html b/dev/jacobi_method/index.html index c45d6fa..f6abdec 100644 --- a/dev/jacobi_method/index.html +++ b/dev/jacobi_method/index.html @@ -1,5 +1,5 @@ -Jacobi method · XM_40017
+Jacobi method · XM_40017
Tip
    @@ -14,4 +14,4 @@ var myIframe = document.getElementById("notebook"); iFrameResize({log:true}, myIframe); }); -
+
diff --git a/dev/julia_async/index.html b/dev/julia_async/index.html index c4b8591..fc3a355 100644 --- a/dev/julia_async/index.html +++ b/dev/julia_async/index.html @@ -1,5 +1,5 @@ -Asynchronous programming in Julia · XM_40017
+Asynchronous programming in Julia · XM_40017
Tip
    @@ -14,4 +14,4 @@ var myIframe = document.getElementById("notebook"); iFrameResize({log:true}, myIframe); }); -
+
diff --git a/dev/julia_basics/index.html b/dev/julia_basics/index.html index 5efb9de..1564ca9 100644 --- a/dev/julia_basics/index.html +++ b/dev/julia_basics/index.html @@ -1,5 +1,5 @@ -Julia Basics · XM_40017
+Julia Basics · XM_40017
Tip
    @@ -14,4 +14,4 @@ var myIframe = document.getElementById("notebook"); iFrameResize({log:true}, myIframe); }); -
+
diff --git a/dev/julia_distributed/index.html b/dev/julia_distributed/index.html index 5c28921..577c810 100644 --- a/dev/julia_distributed/index.html +++ b/dev/julia_distributed/index.html @@ -1,5 +1,5 @@ -Distributed computing in Julia · XM_40017
+Distributed computing in Julia · XM_40017
Tip
    @@ -14,4 +14,4 @@ var myIframe = document.getElementById("notebook"); iFrameResize({log:true}, myIframe); }); -
+
diff --git a/dev/julia_intro/index.html b/dev/julia_intro/index.html index 3e6645a..012ccd3 100644 --- a/dev/julia_intro/index.html +++ b/dev/julia_intro/index.html @@ -1,5 +1,5 @@ -- · XM_40017
+- · XM_40017
Tip
    @@ -14,4 +14,4 @@ var myIframe = document.getElementById("notebook"); iFrameResize({log:true}, myIframe); }); -
+
diff --git a/dev/julia_jacobi/index.html b/dev/julia_jacobi/index.html index 35780db..7fcfa50 100644 --- a/dev/julia_jacobi/index.html +++ b/dev/julia_jacobi/index.html @@ -1,5 +1,5 @@ -- · XM_40017
+- · XM_40017
Tip
    @@ -14,4 +14,4 @@ var myIframe = document.getElementById("notebook"); iFrameResize({log:true}, myIframe); }); -
+
diff --git a/dev/julia_mpi/index.html b/dev/julia_mpi/index.html index e4bfc15..3f4c139 100644 --- a/dev/julia_mpi/index.html +++ b/dev/julia_mpi/index.html @@ -1,5 +1,5 @@ -MPI (point-to-point) · XM_40017
+MPI (point-to-point) · XM_40017
Tip
    @@ -14,4 +14,4 @@ var myIframe = document.getElementById("notebook"); iFrameResize({log:true}, myIframe); }); -
+
diff --git a/dev/julia_tutorial/index.html b/dev/julia_tutorial/index.html index 8285fd9..e9c0714 100644 --- a/dev/julia_tutorial/index.html +++ b/dev/julia_tutorial/index.html @@ -1,5 +1,5 @@ -- · XM_40017
+- · XM_40017
Tip
    @@ -14,4 +14,4 @@ var myIframe = document.getElementById("notebook"); iFrameResize({log:true}, myIframe); }); -
+
diff --git a/dev/matrix_matrix/index.html b/dev/matrix_matrix/index.html index 9646a62..cd3abad 100644 --- a/dev/matrix_matrix/index.html +++ b/dev/matrix_matrix/index.html @@ -1,5 +1,5 @@ -Matrix-matrix multiplication · XM_40017
+Matrix-matrix multiplication · XM_40017
Tip
    @@ -14,4 +14,4 @@ var myIframe = document.getElementById("notebook"); iFrameResize({log:true}, myIframe); }); -
+
diff --git a/dev/mpi_collectives.ipynb b/dev/mpi_collectives.ipynb index 8faf97a..1cb55f7 100644 --- a/dev/mpi_collectives.ipynb +++ b/dev/mpi_collectives.ipynb @@ -39,7 +39,7 @@ "source": [ "## Collective communication\n", "\n", - "MPI provides collective communication functions for communication involving multiple processes. Some usual collective functions are:\n", + "MPI provides a set of routines for communication involving multiple processes. These are called *collective communication* operations. Some usual collective operations are:\n", "\n", "\n", "- `MPI_Barrier`: Synchronize all processes\n", @@ -61,9 +61,9 @@ "id": "4ffa5e56", "metadata": {}, "source": [ - "## Why collective primitives?\n", + "## Why collective operations?\n", "\n", - "Point-to-point communication primitives provide all the building blocks needed in parallel programs and could be used to implement the collective functions described above. Then, why does MPI provide collective communication directives? There are several reasons:\n", + "Point-to-point communication functions provide all the building blocks needed in parallel programs and could be used to implement the collective functions described above. Then, why does MPI provide collective communication functions? There are several reasons:\n", "\n", "- Ease of use: It is handy for users to have these functions readily available instead of having to implement them.\n", "- Performance: Library implementations typically use efficient algorithms (such as reduction trees).\n", @@ -77,6 +77,8 @@ "source": [ "## Semantics of collective operations\n", "\n", + "These are key properties of collective operations:\n", + "\n", "\n", "- Completeness: All the collective communication directives above are *complete* operations. Thus, it is safe to use and reset the buffers once the function returns.\n", "- Standard mode: Collective directives are in standard mode only, like `MPI_Send`. 
Assuming that they block is erroneous, assuming that they do not block is also erroneous.\n", @@ -84,7 +86,7 @@ "\n", "\n", "
\n", - "Note: Recent versions of the MPI standard also include non-blocking (incomplete) versions of collective operations (not covered in this notebook). A particularly funny one is the non-blocking barrier `MPI_Ibarrier`.\n", + "Note: Recent versions of the MPI standard also include non-blocking (i.e., incomplete) versions of collective operations (not covered in this notebook). A particularly funny one is the non-blocking barrier `MPI_Ibarrier`.\n", "
" ] }, @@ -145,7 +147,7 @@ "source": [ "## MPI_Reduce\n", "\n", - "Combines values provided by different processors according to a given reduction operation. The result is received in a single process (called the root process).\n", + "This function combines values provided by different processors according to a given reduction operation. The result is received in a single process (called the root process).\n", "\n", "In Julia:\n", "```julia\n", @@ -702,7 +704,7 @@ "When you write an MPI program it is very likely that you are going to use libraries that also use MPI to send messages. Ideally, these libraries should not interfere with application messages. Using tags to isolate the messages send by your application does not solve the problem. MPI communicators fix this problem as they provided an isolated communication context. For instance, `MPI_SEND` and `MPI_RECV` specify a communicator. `MPI_RECV` can only receive messages sent to same communicator. The same is also true for collective communication directives. If two libraries use different communicators, their message will never interfere. In particular it is recommended to never use the default communicator, `MPI_COMM_WORLD`, directly when working with other libraries. A new isolated communicator can be created with `MPI_Comm_dup`.\n", "\n", "\n", - "### Process groups\n", + "### Groups of processes\n", "\n", "On the other hand, imagine that we want to use an MPI communication directive like `MPI_Gather`, but we only want a subset of the processes to participate in the operation. So far, we have used always the default communication `MPI_COMM_WORLD`, which represents all processes. Thus, by using this communicator, we are including all processes in the operation. We can create other communicators that contain only a subset of processes. 
To this end, we can use function `MPI_Comm_split`.\n" ] @@ -793,7 +795,7 @@ "\n", "There are two key parameters:\n", "\n", - "- `color`: all processes with the same color will be grouped in the same communicator.\n", + "- `color`: all processes with the same color will be grouped in the same new communicator.\n", "- `key`: The processes will be ranked in the new communicator according to key, breaking ties with the rank in the old communicator. \n", "\n" ] @@ -872,6 +874,101 @@ "run(`$(mpiexec()) -np 4 julia --project=. -e $code`);" ] }, + { + "cell_type": "markdown", + "id": "d465ebce", + "metadata": {}, + "source": [ + "Try to run the code without splitting the communicator. I.e., replace `newcomm = MPI.Comm_split(comm, color, key)` with `newcomm = comm`. Try to figure out what will happen before executing the code." + ] + }, + { + "cell_type": "markdown", + "id": "d334aea1", + "metadata": {}, + "source": [ + "## Conclusion\n", + "\n", + "- MPI also defines operations involving several processes called, collective operations.\n", + "- These are provided both for convenience and performance.\n", + "- The semantics are equivalent to \"standard mode\" `MPI_Send`, but there are also non-blocking versions (not discussed in this notebook).\n", + "- Discovering message sizes is often done by communicating the message size, instead of using `MPI_Probe`.\n", + "- Finally, we discussed MPI communicators. They provide two key features: isolated communication context and creating groups of processes. They are useful, for instance, to combine different libraries using MPI in the same application, and to use collective operations in a subset of the processes.\n", + "\n", + "After learning this material and the previous MPI notebook, you have a solid basis to start implementing sophisticated parallel algorithms using MPI." 
+ ] + }, + { + "cell_type": "markdown", + "id": "c6b23485", + "metadata": {}, + "source": [ + "## Exercises" + ] + }, + { + "cell_type": "markdown", + "id": "90dc58bb", + "metadata": {}, + "source": [ + "### Exercise 1\n", + "\n", + "In the parallel implementation of the Jacobi method in previous notebook, we assumed that the method runs for a given number of iterations. However, other stopping criteria are used in practice. The following sequential code implements a version of Jacobi in which the method iterates until the norm of the difference between u and u_new is below a tolerance.\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "0fcb0cd6", + "metadata": {}, + "outputs": [], + "source": [ + "function jacobi_with_tol(n,tol)\n", + " u = zeros(n+2)\n", + " u[1] = -1\n", + " u[end] = 1\n", + " u_new = copy(u)\n", + " increment = similar(u)\n", + " while true\n", + " for i in 2:(n+1)\n", + " u_new[i] = 0.5*(u[i-1]+u[i+1])\n", + " end\n", + " increment .= u_new .- u\n", + " norm_increment = 0.0\n", + " for i in 1:n\n", + " increment_i = increment[i]\n", + " norm_increment += increment_i*increment_i\n", + " end\n", + " norm_increment = sqrt(norm_increment)\n", + " if norm_increment < tol*n\n", + " return u_new\n", + " end\n", + " u, u_new = u_new, u\n", + " end\n", + " u\n", + "end" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "dbf0c3b8", + "metadata": {}, + "outputs": [], + "source": [ + "n = 10\n", + "tol = 1e-12\n", + "jacobi_with_tol(n,tol)" + ] + }, + { + "cell_type": "markdown", + "id": "aab1455e", + "metadata": {}, + "source": [ + "Implement a parallel version of this algorithm. Recommended: start with the parallel implementation given in the previous notebook (see function `jacobi_mpi`) and introduce the new stopping criteria. Think carefully about which MPI operations you need to use in this case." 
+ ] + }, { "cell_type": "markdown", "id": "5e8f6e6a", diff --git a/dev/mpi_collectives/index.html b/dev/mpi_collectives/index.html index 0cca6c6..ded119c 100644 --- a/dev/mpi_collectives/index.html +++ b/dev/mpi_collectives/index.html @@ -1,5 +1,5 @@ -- · XM_40017
+MPI (collectives) · XM_40017
Tip
    @@ -14,4 +14,4 @@ var myIframe = document.getElementById("notebook"); iFrameResize({log:true}, myIframe); }); -
+
diff --git a/dev/mpi_collectives_src/index.html b/dev/mpi_collectives_src/index.html index 6bcdb86..7fa938b 100644 --- a/dev/mpi_collectives_src/index.html +++ b/dev/mpi_collectives_src/index.html @@ -7556,7 +7556,7 @@ a.anchor-link {
-

Collective communication

MPI provides collective communication functions for communication involving multiple processes. Some usual collective functions are:

+

Collective communication

MPI provides a set of routines for communication involving multiple processes. These are called collective communication operations. Some usual collective operations are:

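As an illustration of the collective routines introduced above, the following is a minimal MPI.jl sketch (an assumption of this page's Julia setup, launched with e.g. `mpiexec -np 4 julia script.jl`): the root broadcasts a buffer to all ranks, and a reduction then combines one value per rank back at the root.

```julia
# Minimal illustration of two common collectives with MPI.jl:
# a broadcast from the root followed by a sum reduction to the root.
using MPI
MPI.Init()
comm = MPI.COMM_WORLD
rank = MPI.Comm_rank(comm)
buf = rank == 0 ? [1.0, 2.0, 3.0] : zeros(3)
MPI.Bcast!(buf, comm; root=0)                  # after this, every rank holds [1.0, 2.0, 3.0]
total = MPI.Reduce(sum(buf), +, comm; root=0)  # root receives nprocs * 6.0; others get nothing
if rank == 0
    println("total = $total")
end
```

Note that all ranks in the communicator must call the collective; calling it from only a subset of the processes would deadlock.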
@@ -7665,7 +7666,7 @@ a.anchor-link {
@@ -8330,7 +8331,7 @@ a.anchor-link {

There are two key parameters:

@@ -8421,6 +8422,121 @@ a.anchor-link { +
+
+ + +
+
+
+
+ + +
+
+
+
+ + +
+
+
+
+ + +
+
+
+ + +
+
+
+ + +
+
+
+
+ + +
+