.. _parallelmpi:

=======================
 Using MPI with IPython
=======================

Often, a parallel algorithm will require moving data between the engines. One
way of accomplishing this is by doing a pull and then a push using the
multiengine client. However, this will be slow, as all the data has to go
through the controller to the client and then back through the controller to
its final destination.

A much better way of moving data between engines is to use a message passing
library, such as the Message Passing Interface (MPI) [MPI]_. IPython's
parallel computing architecture has been designed from the ground up to
integrate with MPI. This document describes how to use MPI with IPython.

Additional installation requirements
====================================

If you want to use MPI with IPython, you will need to install:

* A standard MPI implementation such as OpenMPI [OpenMPI]_ or MPICH.
* The mpi4py [mpi4py]_ package.

.. note::

    The mpi4py package is not a strict requirement. However, you need to
    have *some* way of calling MPI from Python. You also need some way of
    making sure that :func:`MPI_Init` is called when the IPython engines start
    up. There are a number of ways of doing this and a good number of
    associated subtleties. We highly recommend just using mpi4py as it
    takes care of most of these problems. If you want to do something
    different, let us know and we can help you get started.

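As a point of reference, importing :mod:`mpi4py.MPI` is what calls
:func:`MPI_Init` for you. The short sketch below (the file name
:file:`check_mpi.py` is only an illustration, not part of IPython) can be used
to confirm that an mpi4py installation behaves this way:

.. sourcecode:: python

    # check_mpi.py -- illustrative only; run with: mpiexec -n 2 python check_mpi.py
    from mpi4py import MPI   # importing mpi4py.MPI calls MPI_Init

    print MPI.Is_initialized()        # should print True on every process
    print MPI.COMM_WORLD.Get_size()   # number of processes started by mpiexec

Each process launched by :command:`mpiexec` should report ``True`` and the
process count.
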
Starting the engines with MPI enabled
=====================================

To use code that calls MPI, there are typically two things that MPI requires
(both are illustrated in the short sketch after this list).

1. The process that wants to call MPI must be started using
   :command:`mpiexec` or a batch system (like PBS) that has MPI support.
2. Once the process starts, it must call :func:`MPI_Init`.

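For reference, both requirements are visible in a bare mpi4py program that has
nothing to do with IPython (the file name :file:`hello_mpi.py` is only an
illustration):

.. sourcecode:: python

    # hello_mpi.py -- illustrative only; start with: mpiexec -n 4 python hello_mpi.py
    from mpi4py import MPI   # requirement 2: MPI_Init is called on import

    comm = MPI.COMM_WORLD
    # requirement 1: mpiexec started the processes, so each one has a rank
    print "Hello from rank %d of %d" % (comm.Get_rank(), comm.Get_size())

If the script prints one line per process, each with a different rank, both
requirements are met.
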
There are a couple of ways that you can start the IPython engines and get
these things to happen.

Automatic starting using :command:`mpiexec` and :command:`ipcluster`
--------------------------------------------------------------------

The easiest approach is to use the `mpiexec` mode of :command:`ipcluster`,
which will first start a controller and then a set of engines using
:command:`mpiexec`::

    $ ipcluster mpiexec -n 4

This approach is best, as interrupting :command:`ipcluster` will automatically
stop and clean up the controller and engines.

Manual starting using :command:`mpiexec`
----------------------------------------

If you want to start the IPython engines using :command:`mpiexec`, just do::

    $ mpiexec -n 4 ipengine --mpi=mpi4py

This requires that you already have a controller running and that the FURL
files for the engines are in place. We also have built-in support for
PyTrilinos [PyTrilinos]_, which can be used (assuming it is installed) by
starting the engines with::

    $ mpiexec -n 4 ipengine --mpi=pytrilinos

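As noted above, the engines need a running controller and its FURL files
before they start. One typical sequence, shown here only as a sketch (see the
:ref:`ipcluster <parallel_process>` documentation for the details), is to
launch :command:`ipcontroller` first and then the MPI-started engines::

    $ ipcontroller                          # writes the FURL files the engines use
    $ mpiexec -n 4 ipengine --mpi=mpi4py    # run once the controller is up
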
Automatic starting using PBS and :command:`ipcluster`
-----------------------------------------------------

The :command:`ipcluster` command also has built-in integration with PBS. For
more information on this approach, see our documentation on :ref:`ipcluster
<parallel_process>`.

Actually using MPI
==================

Once the engines are running with MPI enabled, you are ready to go. You can
now call any code that uses MPI in the IPython engines, and all of this can
be done interactively. Here we show a simple example that uses mpi4py
[mpi4py]_ version 1.1.0 or later.

First, let's define a simple function that uses MPI to calculate the sum of a
distributed array. Save the following text in a file called :file:`psum.py`:

.. sourcecode:: python

    from mpi4py import MPI
    import numpy as np

    def psum(a):
        # local partial sum on this engine
        s = np.sum(a)
        # reduce the partial sums into rcvBuf on every engine
        rcvBuf = np.array(0.0, 'd')
        MPI.COMM_WORLD.Allreduce([s, MPI.DOUBLE],
                                 [rcvBuf, MPI.DOUBLE],
                                 op=MPI.SUM)
        return rcvBuf

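If you want to sanity-check :file:`psum.py` before involving IPython at all,
it can be driven by a plain mpi4py script. The file below is only an
illustration (the name :file:`test_psum.py` is not part of the documentation's
example) and assumes :file:`psum.py` is importable from the current directory:

.. sourcecode:: python

    # test_psum.py -- illustrative check; run with: mpiexec -n 4 python test_psum.py
    from mpi4py import MPI
    import numpy as np

    from psum import psum

    # each MPI process holds its own piece of the "distributed" array
    local = np.random.rand(100)
    total = psum(local)

    # Allreduce gives every rank the same total; print from rank 0 only
    if MPI.COMM_WORLD.Get_rank() == 0:
        print "distributed sum over %d processes: %s" % (
            MPI.COMM_WORLD.Get_size(), total)
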
Now, start an IPython cluster in the same directory as :file:`psum.py`::

    $ ipcluster mpiexec -n 4

Finally, connect to the cluster and use this function interactively. In this
case, we create a random array on each engine and sum up all the random arrays
using our :func:`psum` function:

.. sourcecode:: ipython

    In [1]: from IPython.kernel import client

    In [2]: mec = client.MultiEngineClient()

    In [3]: mec.activate()

    In [4]: px import numpy as np
    Parallel execution on engines: all
    Out[4]:
    <Results List>
    [0] In [13]: import numpy as np
    [1] In [13]: import numpy as np
    [2] In [13]: import numpy as np
    [3] In [13]: import numpy as np

    In [6]: px a = np.random.rand(100)
    Parallel execution on engines: all
    Out[6]:
    <Results List>
    [0] In [15]: a = np.random.rand(100)
    [1] In [15]: a = np.random.rand(100)
    [2] In [15]: a = np.random.rand(100)
    [3] In [15]: a = np.random.rand(100)

    In [7]: px from psum import psum
    Parallel execution on engines: all
    Out[7]:
    <Results List>
    [0] In [16]: from psum import psum
    [1] In [16]: from psum import psum
    [2] In [16]: from psum import psum
    [3] In [16]: from psum import psum

    In [8]: px s = psum(a)
    Parallel execution on engines: all
    Out[8]:
    <Results List>
    [0] In [17]: s = psum(a)
    [1] In [17]: s = psum(a)
    [2] In [17]: s = psum(a)
    [3] In [17]: s = psum(a)

    In [9]: px print s
    Parallel execution on engines: all
    Out[9]:
    <Results List>
    [0] In [18]: print s
    [0] Out[18]: 187.451545803

    [1] In [18]: print s
    [1] Out[18]: 187.451545803

    [2] In [18]: print s
    [2] Out[18]: 187.451545803

    [3] In [18]: print s
    [3] Out[18]: 187.451545803

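The engines do not have to generate their data locally. A common variation is
to scatter one large array from the client and reduce it with :func:`psum`.
The session below is only a sketch: it assumes the ``scatter`` and ``pull``
methods of :class:`MultiEngineClient` (not otherwise covered in this document)
and omits the engines' echoed output:

.. sourcecode:: ipython

    In [10]: import numpy as np   # numpy is needed locally as well

    In [11]: big = np.random.rand(400)

    In [12]: mec.scatter('a', big)          # split `big` across the four engines

    In [13]: px s = psum(a)
    Parallel execution on engines: all

    In [14]: mec.pull('s')[0] - big.sum()   # should be numerically zero
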
Any Python code that makes calls to MPI can be used in this manner, including
compiled C, C++ and Fortran libraries that have been exposed to Python.

.. [MPI] Message Passing Interface. http://www-unix.mcs.anl.gov/mpi/
.. [mpi4py] MPI for Python. http://mpi4py.scipy.org/
.. [OpenMPI] Open MPI. http://www.open-mpi.org/
.. [PyTrilinos] PyTrilinos. http://trilinos.sandia.gov/packages/pytrilinos/