\n",
"Tip: Did you know that Jupyter stands for Julia, Python and R?\n",
diff --git a/notebooks/julia_distributed.ipynb b/notebooks/julia_distributed.ipynb
index fa7b1a6..8569e20 100644
--- a/notebooks/julia_distributed.ipynb
+++ b/notebooks/julia_distributed.ipynb
@@ -137,7 +137,7 @@
"\n",
"\n",
"\n",
- "Tip: We can also start new processes when launching Julia from the command line by suing the `-p` command-line argument. E.g., `$ julia -p 3 ` would launch Julia with 3 extra processes.\n",
+ "Tip: We can also start new processes when launching Julia from the command line by using the `-p` command-line argument. E.g., `$ julia -p 3 ` would launch Julia with 3 extra processes.\n",
"
\n"
]
},
@@ -251,7 +251,7 @@
"source": [
"### Creating workers in other machines\n",
"\n",
- "For large parallel computations, one typically needs to use different computers in parallel. Function `addprocs` also provides a low-level method to start workers in other machines. Next code example would create 3 workers in `server1` and 4 new workers in server `server2` (see figure below). Under the hood, Julia connects via ssh to the other machines and starts the new processes there. In order this to work, the local computer and the remote servers need to be properly configured (see the Julia manual for details). \n",
+ "For large parallel computations, one typically needs to use different computers in parallel. Function `addprocs` also provides a low-level method to start workers in other machines. Next code example would create 3 workers in `server1` and 4 new workers in `server2` (see figure below). Under the hood, Julia connects via ssh to the other machines and starts the new processes there. In order this to work, the local computer and the remote servers need to be properly configured (see the Julia manual for details). \n",
"\n",
"\n",
"\n",
@@ -514,7 +514,7 @@
"id": "10899cd4",
"metadata": {},
"source": [
- "### Another usefull macro: `@fetchfrom`\n",
+ "### Another useful macro: `@fetchfrom`\n",
"\n",
"Macro `@fetchfrom` is the blocking version of `@spawnat`. It blocks and returns the corresponding result instead of a `Future` object. "
]
@@ -552,7 +552,7 @@
"source": [
"### Explicit data movement in `remotecall` / `fetch`\n",
"\n",
- "When usig `remotecall` we send to the remote process a function and its arguments. In this example, we send function name `+` and matrices `a` and `b` to proc 4. When fetching the result we receive a copy of the matrix from proc 4."
+ "When using `remotecall` we send to the remote process a function and its arguments. In this example, we send function name `+` and matrices `a` and `b` to proc 4. When fetching the result we receive a copy of the matrix from proc 4."
]
},
{
diff --git a/notebooks/julia_mpi.ipynb b/notebooks/julia_mpi.ipynb
index 3011772..05caecd 100644
--- a/notebooks/julia_mpi.ipynb
+++ b/notebooks/julia_mpi.ipynb
@@ -167,7 +167,7 @@
"```julia\n",
"using MPI\n",
"MPI.Init()\n",
- "# Your MPI programm here\n",
+ "# Your MPI program here\n",
"MPI.Finalize() # Optional\n",
"```\n",
"\n",
@@ -176,7 +176,7 @@
"```julia\n",
"using MPI\n",
"MPI.Init(finalize_atexit=false)\n",
- "# Your MPI programm here\n",
+ "# Your MPI program here\n",
"MPI.Finalize() # Mandatory\n",
"```\n",
"\n",
@@ -186,7 +186,7 @@
"#include \n",
"int main(int argc, char** argv) {\n",
" MPI_Init(NULL, NULL);\n",
- " /* Your MPI Programm here */\n",
+ " /* Your MPI Program here */\n",
" MPI_Finalize();\n",
"}\n",
"```\n",
@@ -612,7 +612,7 @@
"id": "4b455f98",
"metadata": {},
"source": [
- "So, the full MPI program needs to be in the source file passed to Julia or the quote block. In practice, long MPI programms are written as Julia packages using several files, which are then loaded by each MPI process. For our simple example, we just need to include the definition of `foo` inside the quote block."
+ "So, the full MPI program needs to be in the source file passed to Julia or the quote block. In practice, long MPI programs are written as Julia packages using several files, which are then loaded by each MPI process. For our simple example, we just need to include the definition of `foo` inside the quote block."
]
},
{
@@ -920,7 +920,7 @@
" source = MPI.ANY_SOURCE\n",
" tag = MPI.ANY_TAG\n",
" status = MPI.Probe(comm,MPI.Status; source, tag)\n",
- " count = MPI.Get_count(status,Int) # Get incomming message length\n",
+ " count = MPI.Get_count(status,Int) # Get incoming message length\n",
" println(\"I am about to receive $count integers.\")\n",
" rcvbuf = zeros(Int,count) # Allocate \n",
" MPI.Recv!(rcvbuf, comm, MPI.Status; source, tag)\n",
@@ -973,7 +973,7 @@
" if rank == 3\n",
" rcvbuf = zeros(Int,5)\n",
" MPI.Recv!(rcvbuf, comm, MPI.Status; source=2, tag=0)\n",
- " # recvbuf will have the incomming message fore sure. Recv! has returned.\n",
+ " # recvbuf will have the incoming message fore sure. Recv! has returned.\n",
" @show rcvbuf\n",
" end\n",
"end\n",
diff --git a/notebooks/matrix_matrix.ipynb b/notebooks/matrix_matrix.ipynb
index 955abd9..1448e87 100644
--- a/notebooks/matrix_matrix.ipynb
+++ b/notebooks/matrix_matrix.ipynb
@@ -293,7 +293,7 @@
"## Where can we exploit parallelism?\n",
"\n",
"\n",
- "The matrix-matrix multiplication is an example of [embarrassingly parallel algorithm](https://en.wikipedia.org/wiki/Embarrassingly_parallel). An embarrassingly parallel (also known as trivially parallel) algorithm is an algorithm that can be split in parallel tasks with no (or very few) dependences between them. Such algorithms are typically easy to parallelize.\n",
+ "The matrix-matrix multiplication is an example of [embarrassingly parallel algorithm](https://en.wikipedia.org/wiki/Embarrassingly_parallel). An embarrassingly parallel (also known as trivially parallel) algorithm is an algorithm that can be split in parallel tasks with no (or very few) dependencies between them. Such algorithms are typically easy to parallelize.\n",
"\n",
"Which parts of an algorithm are completely independent and thus trivially parallel? To answer this question, it is useful to inspect the for loops, which are potential sources of parallelism. If the iterations are independent of each other, then they are trivial to parallelize. An easy check to find out if the iterations are dependent or not is to change their order (for instance changing `for j in 1:n` by `for j in n:-1:1`, i.e. doing the loop in reverse). If the result changes, then the iterations are not independent.\n",
"\n",
@@ -314,7 +314,7 @@
"Note that:\n",
"\n",
"- Loops over `i` and `j` are trivially parallel.\n",
- "- The loop over `k` is not trivially parallel. The accumulation into the reduction variable `Cij` introduces extra dependences. In addition, remember that the addition of floating point numbers is not strictly associative due to rounding errors. Thus, the result of this loop may change with the loop order when using floating point numbers. In any case, this loop can also be parallelized, but it requires a parallel *fold* or a parallel *reduction*.\n",
+ "- The loop over `k` is not trivially parallel. The accumulation into the reduction variable `Cij` introduces extra dependencies. In addition, remember that the addition of floating point numbers is not strictly associative due to rounding errors. Thus, the result of this loop may change with the loop order when using floating point numbers. In any case, this loop can also be parallelized, but it requires a parallel *fold* or a parallel *reduction*.\n",
"\n"
]
},