@@ -32,34 +32,34 b' Starting the engines with MPI enabled'

To use code that calls MPI, there are typically two things that MPI requires.

1. The process that wants to call MPI must be started using
   :command:`mpiexec` or a batch system (like PBS) that has MPI support.
2. Once the process starts, it must call :func:`MPI_Init`.
There are a couple of ways that you can start the IPython engines and get these things to happen.

Automatic starting using :command:`mpiexec` and :command:`ipcluster`
--------------------------------------------------------------------

The easiest approach is to use the `mpiexec` mode of :command:`ipcluster`, which will first start a controller and then a set of engines using :command:`mpiexec`::

    $ ipcluster mpiexec -n 4

This approach is best, as interrupting :command:`ipcluster` will automatically
stop and clean up the controller and engines.
Manual starting using :command:`mpiexec`
----------------------------------------

If you want to start the IPython engines using :command:`mpiexec`, just do::

    $ mpiexec -n 4 ipengine --mpi=mpi4py

This requires that you already have a controller running and that the FURL
files for the engines are in place. We also have built-in support for
PyTrilinos [PyTrilinos]_, which can be used (assuming it is installed) by
starting the engines with::

    mpiexec -n 4 ipengine --mpi=pytrilinos
Automatic starting using PBS and :command:`ipcluster`
-----------------------------------------------------
@@ -84,7 +84,7 b' First, lets define a simply function that uses MPI to calculate the sum of a dis'

Now, start an IPython cluster in the same directory as :file:`psum.py`::

    $ ipcluster mpiexec -n 4

Finally, connect to the cluster and use this function interactively. In this case, we create a random array on each engine and sum up all the random arrays using our :func:`psum` function:
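The definition of :func:`psum` is elided in this hunk. As a hedged sketch of what such a function typically looks like with mpi4py, it is a local sum followed by an allreduce; the pure-Python fallback below is only so the snippet runs on a machine without MPI:

```python
# Sketch of a psum-style function: each engine sums its local chunk,
# then an MPI allreduce combines the partial sums so every engine
# ends up with the same global total.
try:
    from mpi4py import MPI
    _comm = MPI.COMM_WORLD
except ImportError:
    _comm = None  # illustration-only fallback: run without MPI

def psum(a):
    local = sum(a)  # sum of this engine's chunk of the array
    if _comm is not None:
        # Every rank receives the sum of all ranks' local values.
        return _comm.allreduce(local, op=MPI.SUM)
    return local

print(psum([1, 2, 3, 4]))  # single process (or no MPI): 10
```

Run under ``mpiexec -n 4``, each process would contribute its own local sum and all four would print the same combined total.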