.. _parallelmpi:

=======================
Using MPI with IPython
=======================

Often, a parallel algorithm will require moving data between the engines. One
way of accomplishing this is by doing a pull and then a push using the
multiengine client. However, this will be slow, as all the data has to go
through the controller to the client and then back through the controller to
its final destination.

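For illustration, the pull/push pattern described above might look roughly like
the following sketch, which routes an array from engine 0 to engine 1 through
the client (the exact ``pull``/``push`` calls shown here are an assumption and
may differ slightly between IPython versions):

.. sourcecode:: ipython

    In [1]: from IPython.parallel import Client

    In [2]: c = Client()

    # pull 'a' from engine 0 to the client, then push it to engine 1;
    # every byte travels engine 0 -> controller -> client -> controller -> engine 1
    In [3]: data = c[0].pull('a', block=True)

    In [4]: c[1].push({'a': data}, block=True)
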
A much better way of moving data between engines is to use a message passing
library, such as the Message Passing Interface (MPI) [MPI]_. IPython's
parallel computing architecture has been designed from the ground up to
integrate with MPI. This document describes how to use MPI with IPython.

Additional installation requirements
====================================

If you want to use MPI with IPython, you will need to install:

* A standard MPI implementation such as OpenMPI [OpenMPI]_ or MPICH.
* The mpi4py [mpi4py]_ package.

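For example, once an MPI implementation is available on the system, mpi4py can
usually be installed with pip (the package names for the MPI library itself
depend on your platform and are not shown here)::

    $ pip install mpi4py
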
.. note::

    The mpi4py package is not a strict requirement. However, you need to
    have *some* way of calling MPI from Python. You also need some way of
    making sure that :func:`MPI_Init` is called when the IPython engines start
    up. There are a number of ways of doing this and a good number of
    associated subtleties. We highly recommend just using mpi4py as it
    takes care of most of these problems. If you want to do something
    different, let us know and we can help you get started.

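With mpi4py, for instance, importing the :mod:`MPI` module is normally enough:
the import itself initializes MPI, so each engine only needs to perform the
import at startup (a minimal sketch; whether initialization happens at import
time can be changed through mpi4py's runtime configuration):

.. sourcecode:: python

    # Importing MPI from mpi4py initializes the MPI environment by default,
    # so no explicit call to MPI_Init is needed in user code.
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    print(comm.Get_rank(), comm.Get_size())
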
Starting the engines with MPI enabled
=====================================

To use code that calls MPI, there are typically two requirements:

1. The process that wants to call MPI must be started using
   :command:`mpiexec` or a batch system (like PBS) that has MPI support.
2. Once the process starts, it must call :func:`MPI_Init`.

There are a couple of ways that you can start the IPython engines and make
sure both of these things happen.

Automatic starting using :command:`mpiexec` and :command:`ipcluster`
--------------------------------------------------------------------

The easiest approach is to use the `MPI` Launchers in :command:`ipcluster`,
which will first start a controller and then a set of engines using
:command:`mpiexec`::

    $ ipcluster start -n 4 --engines=MPIEngineSetLauncher

This approach is best, as interrupting :command:`ipcluster` will automatically
stop and clean up the controller and engines.

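If you always want a given profile to launch its engines with MPI, the same
choice can be recorded in that profile's :file:`ipcluster_config.py` instead of
being passed on the command line (a sketch; the exact option name is an
assumption and may vary between IPython versions):

.. sourcecode:: python

    # In ipcluster_config.py of the profile you start the cluster with:
    # launch the engines via mpiexec using the MPI engine set launcher.
    c.IPClusterEngines.engine_launcher_class = 'MPIEngineSetLauncher'
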
Manual starting using :command:`mpiexec`
----------------------------------------

If you want to start the IPython engines using :command:`mpiexec`, just
do::

    $ mpiexec -n 4 ipengine --mpi=mpi4py

This requires that you already have a controller running and that the FURL
files for the engines are in place. We also have built-in support for
PyTrilinos [PyTrilinos]_, which can be used (assuming it is installed) by
starting the engines with::

    $ mpiexec -n 4 ipengine --mpi=pytrilinos

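Putting the pieces together, a complete manual start-up might look roughly like
the following sketch (it assumes the controller and engines share a profile
whose connection files are accessible to the engines; adapt it to your own
setup)::

    $ ipcontroller --profile=mpi &
    $ mpiexec -n 4 ipengine --mpi=mpi4py --profile=mpi
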
Automatic starting using PBS and :command:`ipcluster`
------------------------------------------------------

The :command:`ipcluster` command also has built-in integration with PBS. For
more information on this approach, see our documentation on :ref:`ipcluster
<parallel_process>`.

Actually using MPI
==================

Once the engines are running with MPI enabled, you are ready to go. You can
now call any code that uses MPI in the IPython engines, and all of this can
be done interactively. Here we show a simple example that uses mpi4py
[mpi4py]_ version 1.1.0 or later.

First, let's define a simple function that uses MPI to calculate the sum of a
distributed array. Save the following text in a file called :file:`psum.py`:

.. sourcecode:: python

    from mpi4py import MPI
    import numpy as np

    def psum(a):
        # sum the local block of the array on this process
        locsum = np.sum(a)
        rcvBuf = np.array(0.0, 'd')
        # combine the local sums from all processes into rcvBuf
        MPI.COMM_WORLD.Allreduce([locsum, MPI.DOUBLE],
                                 [rcvBuf, MPI.DOUBLE],
                                 op=MPI.SUM)
        return rcvBuf

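If you do not want to deal with MPI datatypes and receive buffers explicitly,
mpi4py's lower-case, pickle-based API can be used instead; a roughly equivalent
(if somewhat slower) sketch would be:

.. sourcecode:: python

    from mpi4py import MPI
    import numpy as np

    def psum(a):
        # allreduce (lower case) communicates generic Python objects,
        # so no explicit buffers or MPI datatypes are required
        return MPI.COMM_WORLD.allreduce(np.sum(a), op=MPI.SUM)
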
Now, start an IPython cluster::

    $ ipcluster start --profile=mpi -n 4

.. note::

    It is assumed here that the mpi profile has been set up, as described
    :ref:`here <parallel_process>`.

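If you have not created that profile yet, the linked document describes how to
do so; typically a single command along these lines is enough (the exact
command is an assumption and may differ between IPython versions)::

    $ ipython profile create --parallel --profile=mpi
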
Finally, connect to the cluster and use this function interactively. In this
case, we distribute an array across the engines and sum it up using our
:func:`psum` function:

.. sourcecode:: ipython

    In [1]: from IPython.parallel import Client

    In [2]: import numpy as np

    In [3]: c = Client(profile='mpi')

    In [4]: view = c[:]

    In [5]: view.activate() # enable magics

    # run the contents of the file on each engine:
    In [6]: view.run('psum.py')

    In [7]: view.scatter('a', np.arange(16, dtype='float'))

    In [8]: view['a']
    Out[8]: [array([  0.,   1.,   2.,   3.]),
             array([  4.,   5.,   6.,   7.]),
             array([  8.,   9.,  10.,  11.]),
             array([ 12.,  13.,  14.,  15.])]

    In [9]: %px totalsum = psum(a)
    Parallel execution on engines: [0,1,2,3]

    In [10]: view['totalsum']
    Out[10]: [120.0, 120.0, 120.0, 120.0]

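A quick way to convince yourself that the engines really are talking over MPI
is to ask each one for its rank (a sketch; note that the mapping between
IPython engine IDs and MPI ranks is not guaranteed to follow any particular
order):

.. sourcecode:: ipython

    In [11]: %px from mpi4py import MPI
    Parallel execution on engines: [0,1,2,3]

    In [12]: %px rank = MPI.COMM_WORLD.Get_rank()
    Parallel execution on engines: [0,1,2,3]

    In [13]: view['rank']
    Out[13]: [0, 1, 2, 3]
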
Any Python code that makes calls to MPI can be used in this manner, including
compiled C, C++ and Fortran libraries that have been exposed to Python.

.. [MPI] Message Passing Interface. http://www-unix.mcs.anl.gov/mpi/
.. [mpi4py] MPI for Python. http://mpi4py.scipy.org/
.. [OpenMPI] Open MPI. http://www.open-mpi.org/
.. [PyTrilinos] PyTrilinos. http://trilinos.sandia.gov/packages/pytrilinos/