.. _parallelmpi:

=======================
Using MPI with IPython
=======================

Often, a parallel algorithm will require moving data between the engines. One
way of accomplishing this is by doing a pull and then a push using the
multiengine client. However, this will be slow, as all the data has to go
through the controller to the client and then back through the controller to
its final destination.
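
For example, moving an array from engine 0 to engine 1 through the client
might look like the following. This is a minimal sketch, assuming a running
cluster in which engine 0 already has a variable ``a``:

.. sourcecode:: python

    from IPython.parallel import Client

    c = Client()
    # assumes engine 0 already defines a variable 'a'
    a = c[0].pull('a', block=True)    # engine 0 -> controller -> client
    c[1].push({'a': a}, block=True)   # client -> controller -> engine 1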

A much better way of moving data between engines is to use a message passing
library, such as the Message Passing Interface (MPI) [MPI]_. IPython's
parallel computing architecture has been designed from the ground up to
integrate with MPI. This document describes how to use MPI with IPython.

Additional installation requirements
====================================

If you want to use MPI with IPython, you will need to install:

* A standard MPI implementation such as OpenMPI [OpenMPI]_ or MPICH.
* The mpi4py [mpi4py]_ package.

.. note::

    The mpi4py package is not a strict requirement. However, you need to
    have *some* way of calling MPI from Python. You also need some way of
    making sure that :func:`MPI_Init` is called when the IPython engines start
    up. There are a number of ways of doing this and a good number of
    associated subtleties. We highly recommend just using mpi4py as it
    takes care of most of these problems. If you want to do something
    different, let us know and we can help you get started.
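
With mpi4py, the import itself performs the initialization. The following
minimal sketch (saved in a hypothetical :file:`hello_mpi.py` and run via
:command:`mpiexec`) prints each process's rank:

.. sourcecode:: python

    # hypothetical file hello_mpi.py; run with: mpiexec -n 4 python hello_mpi.py
    # importing mpi4py's MPI module calls MPI_Init automatically
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    print("rank %d of %d" % (comm.Get_rank(), comm.Get_size()))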

Starting the engines with MPI enabled
=====================================

To use code that calls MPI, there are typically two things that MPI requires.

1. The process that wants to call MPI must be started using
   :command:`mpiexec` or a batch system (like PBS) that has MPI support.
2. Once the process starts, it must call :func:`MPI_Init`.

There are a couple of ways that you can start the IPython engines and get
these things to happen.

Automatic starting using :command:`mpiexec` and :command:`ipcluster`
--------------------------------------------------------------------

The easiest approach is to use the `MPI` Launchers in :command:`ipcluster`,
which will first start a controller and then a set of engines using
:command:`mpiexec`::

    $ ipcluster start -n 4 --engines=MPIEngineSetLauncher

This approach is best, as interrupting :command:`ipcluster` will automatically
stop and clean up the controller and engines.
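
If you always start this cluster with MPI, the launcher can instead be set in
the cluster's :file:`ipcluster_config.py`. This is a sketch; the option name
below is the one used by recent IPython versions and may differ in yours:

.. sourcecode:: python

    # ipcluster_config.py: use the MPI launcher for engines by default
    # (option name assumed from recent IPython versions)
    c.IPClusterEngines.engine_launcher_class = 'MPIEngineSetLauncher'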
59 59
60 60 Manual starting using :command:`mpiexec`
61 61 ----------------------------------------
62 62
63 63 If you want to start the IPython engines using the :command:`mpiexec`, just
64 64 do::
65 65
66 66 $ mpiexec -n 4 ipengine --mpi=mpi4py
67 67
68 68 This requires that you already have a controller running and that the FURL
69 69 files for the engines are in place. We also have built in support for
70 70 PyTrilinos [PyTrilinos]_, which can be used (assuming is installed) by
71 71 starting the engines with::
72 72
73 73 $ mpiexec -n 4 ipengine --mpi=pytrilinos
74 74
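
Whichever way the engines were started, a quick way to confirm that all of
them registered with the controller is to connect a client and inspect the
engine ids (a minimal sketch):

.. sourcecode:: python

    from IPython.parallel import Client

    c = Client()
    print(c.ids)   # expect [0, 1, 2, 3] for the four engines started above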

Automatic starting using PBS and :command:`ipcluster`
------------------------------------------------------

The :command:`ipcluster` command also has built-in integration with PBS. For
more information on this approach, see our documentation on :ref:`ipcluster
<parallel_process>`.
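
In outline, once a PBS-aware profile has been set up as described there (the
profile name ``pbs`` below is our assumption), engines are submitted to the
batch system with the familiar command::

    $ ipcluster start --profile=pbs -n 128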
81 81
82 82 Actually using MPI
83 83 ==================
84 84
85 85 Once the engines are running with MPI enabled, you are ready to go. You can
86 86 now call any code that uses MPI in the IPython engines. And, all of this can
87 87 be done interactively. Here we show a simple example that uses mpi4py
88 88 [mpi4py]_ version 1.1.0 or later.
89 89
90 90 First, lets define a simply function that uses MPI to calculate the sum of a
91 91 distributed array. Save the following text in a file called :file:`psum.py`:
92 92
93 93 .. sourcecode:: python
94 94
95 95 from mpi4py import MPI
96 96 import numpy as np
97 97
98 98 def psum(a):
99 s = np.sum(a)
99 locsum = np.sum(a)
100 100 rcvBuf = np.array(0.0,'d')
101 MPI.COMM_WORLD.Allreduce([s, MPI.DOUBLE],
101 MPI.COMM_WORLD.Allreduce([locsum, MPI.DOUBLE],
102 102 [rcvBuf, MPI.DOUBLE],
103 103 op=MPI.SUM)
104 104 return rcvBuf
105 105
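
The function can be sanity-checked without IPython by running it directly
under :command:`mpiexec`. The driver below is a hypothetical example (the
file name :file:`test_psum.py` and the data layout are our own choices):
with four processes, each rank sums its own four numbers and every rank
should print 120.0:

.. sourcecode:: python

    # hypothetical file test_psum.py; run with: mpiexec -n 4 python test_psum.py
    import numpy as np
    from mpi4py import MPI
    from psum import psum   # assumes psum.py is in the same directory

    rank = MPI.COMM_WORLD.Get_rank()
    # rank r holds [4r, 4r+1, 4r+2, 4r+3], i.e. 0..15 across 4 ranks
    local = np.arange(4, dtype='float') + 4 * rank
    print("rank %d: total = %s" % (rank, psum(local)))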

Now, start an IPython cluster::

    $ ipcluster start --profile=mpi -n 4

.. note::

    It is assumed here that the mpi profile has been set up, as described
    :ref:`here <parallel_process>`.
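
In short, the profile can be created roughly as follows; this is a sketch of
the steps covered in detail in the linked documentation::

    $ ipython profile create --parallel --profile=mpi

Then select the MPI engine launcher in the new profile's
:file:`ipcluster_config.py`, as shown in the configuration snippet earlier.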

Finally, connect to the cluster and use this function interactively. In this
case, we scatter a sixteen-element array across the four engines and sum it
up using our :func:`psum` function:

.. sourcecode:: ipython

    In [1]: from IPython.parallel import Client

    In [2]: import numpy as np

    In [3]: c = Client(profile='mpi')

    In [4]: view = c[:]

    In [5]: view.activate() # enable magics

    # run the contents of the file on each engine:
    In [6]: view.run('psum.py')

    In [7]: view.scatter('a', np.arange(16, dtype='float'))

    In [8]: view['a']
    Out[8]: [array([  0.,   1.,   2.,   3.]),
             array([  4.,   5.,   6.,   7.]),
             array([  8.,   9.,  10.,  11.]),
             array([ 12.,  13.,  14.,  15.])]

    In [9]: %px totalsum = psum(a)
    Parallel execution on engines: [0,1,2,3]

    In [10]: view['totalsum']
    Out[10]: [120.0, 120.0, 120.0, 120.0]
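
Note that every engine ends up with the same value, since
:func:`MPI_Allreduce` leaves the global result on all ranks; for this input
that is the sum of the integers 0 through 15, which is 120.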

Any Python code that makes calls to MPI can be used in this manner, including
compiled C, C++ and Fortran libraries that have been exposed to Python.

.. [MPI] Message Passing Interface. http://www-unix.mcs.anl.gov/mpi/
.. [mpi4py] MPI for Python. http://mpi4py.scipy.org/
.. [OpenMPI] Open MPI. http://www.open-mpi.org/
.. [PyTrilinos] PyTrilinos. http://trilinos.sandia.gov/packages/pytrilinos/