update CompositeError docs with recent changes
MinRK

@@ -1,658 +1,652 @@

.. _parallel_details:

==========================================
Details of Parallel Computing with IPython
==========================================

.. note::

    There are still many sections to fill out in this doc


Caveats
=======

First, some caveats about the detailed workings of parallel computing with 0MQ and IPython.

Non-copying sends and numpy arrays
----------------------------------

When numpy arrays are passed as arguments to apply or via data-movement methods, they are not
copied. This means that you must be careful if you are sending an array that you intend to work
on. PyZMQ does allow you to track when a message has been sent, so you can know when it is safe
to edit the buffer, but IPython only exposes this tracking via the `track` flag described below.

It is also important to note that the non-copying receive of a message is *read-only*. That
means that if you intend to work in-place on an array that you have sent or received, you must
copy it. This is true for both numpy arrays sent to engines and numpy arrays retrieved as
results.

The following will fail:

.. sourcecode:: ipython

    In [3]: A = numpy.zeros(2)

    In [4]: def setter(a):
       ...:   a[0]=1
       ...:   return a

    In [5]: rc[0].apply_sync(setter, A)
    ---------------------------------------------------------------------------
    RuntimeError                              Traceback (most recent call last)
    <string> in <module>()
    <ipython-input-12-c3e7afeb3075> in setter(a)
    RuntimeError: array is not writeable

If you do need to edit the array in-place, just remember to copy the array if it's read-only.
The :attr:`ndarray.flags.writeable` flag will tell you if you can write to an array.

.. sourcecode:: ipython

    In [3]: A = numpy.zeros(2)

    In [4]: def setter(a):
       ...:   """only copy read-only arrays"""
       ...:   if not a.flags.writeable:
       ...:       a = a.copy()
       ...:   a[0] = 1
       ...:   return a

    In [5]: rc[0].apply_sync(setter, A)
    Out[5]: array([ 1.,  0.])

    # note that results will also be read-only:
    In [6]: _.flags.writeable
    Out[6]: False

If you want to safely edit an array in-place after *sending* it, you must use the `track=True`
flag. IPython always performs non-copying sends of arrays, which return immediately. You must
instruct IPython to track those messages *at send time* in order to know for sure that the send
has completed. AsyncResults have a :attr:`sent` property and a :meth:`wait_on_send` method for
checking and waiting for 0MQ to finish with a buffer.

.. sourcecode:: ipython

    In [5]: A = numpy.random.random((1024,1024))

    In [6]: view.track = True

    In [7]: ar = view.apply_async(lambda x: 2*x, A)

    In [8]: ar.sent
    Out[8]: False

    In [9]: ar.wait_on_send() # blocks until sent is True

What is sendable?
-----------------

If IPython doesn't know what to do with an object, it will pickle it. There is a short list of
objects that are not pickled: ``buffers``, ``str/bytes`` objects, and ``numpy``
arrays. These are handled specially by IPython in order to prevent the copying of data. Sending
bytes or numpy arrays will result in exactly zero in-memory copies of your data (unless the data
is very small).

If you have an object that provides a Python buffer interface, then you can always send that
buffer without copying - and reconstruct the object on the other side in your own code. It is
possible that the object reconstruction will become extensible, so you can add your own
non-copying types, but this does not yet exist.
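
For example, here is a minimal sketch of sending a raw buffer and rebuilding the object yourself
on the engine (``rc`` is assumed to be a connected :class:`Client`; the ``rebuild`` helper and the
values shown are purely illustrative):

.. sourcecode:: ipython

    In [12]: import numpy

    In [13]: def rebuild(buf, dtype, shape):
       ....:     """rebuild an array from a raw buffer (the result is a read-only view)"""
       ....:     import numpy
       ....:     return numpy.frombuffer(buf, dtype=dtype).reshape(shape)

    In [14]: A = numpy.arange(6, dtype='float64')

    # the bytes are sent without pickling; the remote side reconstructs the object
    In [15]: rc[0].apply_sync(rebuild, A.tostring(), A.dtype.str, (2, 3))
    Out[15]:
    array([[ 0.,  1.,  2.],
           [ 3.,  4.,  5.]])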

Closures
********

Just about anything in Python is pickleable. The one notable exception is objects (generally
functions) with *closures*. Closures can be a complicated topic, but the basic principle is that
functions that refer to variables in their parent scope have closures.

An example of a function that uses a closure:

.. sourcecode:: python

    def f(a):
        def inner():
            # inner will have a closure
            return a
        return inner

    f1 = f(1)
    f2 = f(2)
    f1() # returns 1
    f2() # returns 2

``f1`` and ``f2`` will have closures referring to the scope in which `inner` was defined,
because they use the variable 'a'. As a result, you would not be able to send ``f1`` or ``f2``
with IPython. Note that you *would* be able to send `f`. This is only true for interactively
defined functions (as are often used in decorators), and only when there are variables used
inside the inner function that are defined in the outer function. If the names are *not* in the
outer function, then there will not be a closure, and the generated function will look in
``globals()`` for the name:

.. sourcecode:: python

    def g(b):
        # note that `b` is not referenced in inner's scope
        def inner():
            # this inner will *not* have a closure
            return a
        return inner

    g1 = g(1)
    g2 = g(2)
    g1() # raises NameError on 'a'
    a = 5
    g2() # returns 5

`g1` and `g2` *will* be sendable with IPython, and will treat the engine's namespace as
globals(). The :meth:`pull` method is implemented based on this principle. If we did not
provide pull, you could implement it yourself with `apply`, by simply returning objects out
of the global namespace:

.. sourcecode:: ipython

    In [10]: view.apply(lambda : a)

    # is equivalent to
    In [11]: view.pull('a')
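
If you do run into a closure that cannot be sent, a common workaround is to pass the enclosed
value as an explicit argument instead (a minimal sketch; ``rc`` is assumed to be a connected
:class:`Client`):

.. sourcecode:: ipython

    # rather than sending f1 = f(1), send a plain function and pass `a` explicitly:
    In [12]: def echo(a):
       ....:     return a

    In [13]: rc[0].apply_sync(echo, 1)   # plays the role of f1()
    Out[13]: 1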

Running Code
============

There are two principal units of execution in Python: strings of Python code (e.g. 'a=5'),
and Python functions. IPython is designed around the use of functions via the core
Client method, called `apply`.

Apply
-----

The principal method of remote execution is :meth:`apply`, of
:class:`~IPython.parallel.client.view.View` objects. The Client provides the full execution and
communication API for engines via its low-level :meth:`send_apply_message` method, which is used
by all higher level methods of its Views.

f : function
    The function to be called remotely
args : tuple/list
    The positional arguments passed to `f`
kwargs : dict
    The keyword arguments passed to `f`

flags for all views:

block : bool (default: view.block)
    Whether to wait for the result, or return immediately.

    False:
        returns AsyncResult
    True:
        returns actual result(s) of f(*args, **kwargs)

        if multiple targets:
            list of results, matching `targets`
track : bool [default view.track]
    whether to track non-copying sends.

targets : int, list of ints, 'all', None [default view.targets]
    Specify the destination of the job.

    if 'all' or None:
        Run on all active engines
    if list:
        Run on each specified engine
    if int:
        Run on single engine

Note that LoadBalancedView uses targets to restrict possible destinations. LoadBalanced calls
will always execute in just one location.

flags only in LoadBalancedViews:

after : Dependency or collection of msg_ids
    Only for load-balanced execution (targets=None).
    Specify a list of msg_ids as a time-based dependency.
    This job will only be run *after* the dependencies
    have been met.

follow : Dependency or collection of msg_ids
    Only for load-balanced execution (targets=None).
    Specify a list of msg_ids as a location-based dependency.
    This job will only be run on an engine where this dependency
    is met.

timeout : float/int or None
    Only for load-balanced execution (targets=None).
    Specify an amount of time (in seconds) for the scheduler to
    wait for dependencies to be met before failing with a
    DependencyTimeout.
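
Putting these pieces together, a brief sketch of :meth:`apply` in both modes (assuming a
connected :class:`Client` ``rc`` with four engines; the function and the numbers are
illustrative):

.. sourcecode:: ipython

    In [12]: def add(a, b=1):
       ....:     return a + b

    # blocking call on two engines: one result per target
    In [13]: rc[0,1].apply_sync(add, 5, b=2)
    Out[13]: [7, 7]

    # non-blocking call on all engines: returns an AsyncResult immediately
    In [14]: ar = rc[:].apply_async(add, 5)

    In [15]: ar.get()
    Out[15]: [6, 6, 6, 6]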

execute and run
---------------

For executing strings of Python code, :class:`DirectView` objects also provide an :meth:`execute`
and a :meth:`run` method, which rather than take functions and arguments, take simple strings.
`execute` simply takes a string of Python code to execute, and sends it to the Engine(s). `run`
is the same as `execute`, but for a *file*, rather than a string. It is simply a wrapper that
does something very similar to ``execute(open(f).read())``.

.. note::

    TODO: Examples for execute and run
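
In the meantime, a minimal sketch (assuming a :class:`DirectView` ``dview`` on four engines;
``myscript.py`` is a hypothetical local file):

.. sourcecode:: ipython

    In [16]: dview.execute('a = 5')       # run a statement on every engine

    In [17]: dview['a']
    Out[17]: [5, 5, 5, 5]

    In [18]: dview.run('myscript.py')     # roughly execute(open('myscript.py').read())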

Views
=====

The principal extension of the :class:`~parallel.Client` is the :class:`~parallel.View`
class. The client is typically a singleton for connecting to a cluster, and presents a
low-level interface to the Hub and Engines. Most real usage will involve creating one or more
:class:`~parallel.View` objects for working with engines in various ways.


DirectView
----------

The :class:`.DirectView` is the class for the IPython :ref:`Multiplexing Interface
<parallel_multiengine>`.

Creating a DirectView
*********************

DirectViews can be created in two ways, by index access to a client, or by a client's
:meth:`view` method. Index access to a Client works in a few ways. First, you can create
DirectViews to single engines simply by accessing the client by engine id:

.. sourcecode:: ipython

    In [2]: rc[0]
    Out[2]: <DirectView 0>

You can also create a DirectView with a list of engines:

.. sourcecode:: ipython

    In [2]: rc[0,1,2]
    Out[2]: <DirectView [0,1,2]>

Other methods for accessing elements, such as slicing and negative indexing, work by passing
the index directly to the client's :attr:`ids` list, so:

.. sourcecode:: ipython

    # negative index
    In [2]: rc[-1]
    Out[2]: <DirectView 3>

    # or slicing:
    In [3]: rc[::2]
    Out[3]: <DirectView [0,2]>

are always the same as:

.. sourcecode:: ipython

    In [2]: rc[rc.ids[-1]]
    Out[2]: <DirectView 3>

    In [3]: rc[rc.ids[::2]]
    Out[3]: <DirectView [0,2]>

Also note that the slice is evaluated at the time of construction of the DirectView, so the
targets will not change over time if engines are added/removed from the cluster.

Execution via DirectView
************************

The DirectView is the simplest way to work with one or more engines directly (hence the name).

For instance, to get the process ID of all your engines:

.. sourcecode:: ipython

    In [5]: import os

    In [6]: dview.apply_sync(os.getpid)
    Out[6]: [1354, 1356, 1358, 1360]

Or to see the hostname of the machine they are on:

.. sourcecode:: ipython

    In [5]: import socket

    In [6]: dview.apply_sync(socket.gethostname)
    Out[6]: ['tesla', 'tesla', 'edison', 'edison']

.. note::

    TODO: expand on direct execution

Data movement via DirectView
****************************

Since a Python namespace is just a :class:`dict`, :class:`DirectView` objects provide
dictionary-style access by key and methods such as :meth:`get` and
:meth:`update` for convenience. This makes the remote namespaces of the engines
appear as a local dictionary. Underneath, these methods call :meth:`apply`:

.. sourcecode:: ipython

    In [51]: dview['a'] = ['foo', 'bar']

    In [52]: dview['a']
    Out[52]: [ ['foo', 'bar'], ['foo', 'bar'], ['foo', 'bar'], ['foo', 'bar'] ]

Scatter and gather
------------------

Sometimes it is useful to partition a sequence and push the partitions to
different engines. In MPI language, this is known as scatter/gather and we
follow that terminology. However, it is important to remember that in
IPython's :class:`Client` class, :meth:`scatter` is from the
interactive IPython session to the engines and :meth:`gather` is from the
engines back to the interactive IPython session. For scatter/gather operations
between engines, MPI should be used:

.. sourcecode:: ipython

    In [58]: dview.scatter('a', range(16))
    Out[58]: [None, None, None, None]

    In [59]: dview['a']
    Out[59]: [ [0, 1, 2, 3], [4, 5, 6, 7], [8, 9, 10, 11], [12, 13, 14, 15] ]

    In [60]: dview.gather('a')
    Out[60]: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15]

Push and pull
-------------

:meth:`~IPython.parallel.client.view.DirectView.push`

:meth:`~IPython.parallel.client.view.DirectView.pull`

.. note::

    TODO: write this section
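
Until this section is written, a minimal sketch of the two methods (assuming a
:class:`DirectView` ``dview`` on four engines; the values are illustrative):

.. sourcecode:: ipython

    # push a dict of names into the engines' namespaces
    In [19]: dview.push(dict(a=1.0, b=2.0), block=True)
    Out[19]: [None, None, None, None]

    # pull one or more names back; one value (or list of values) per engine
    In [20]: dview.pull('a', block=True)
    Out[20]: [1.0, 1.0, 1.0, 1.0]

    In [21]: dview.pull(('a', 'b'), block=True)
    Out[21]: [[1.0, 2.0], [1.0, 2.0], [1.0, 2.0], [1.0, 2.0]]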

LoadBalancedView
----------------

The :class:`~.LoadBalancedView` is the class for load-balanced execution via the task scheduler.
These views always run tasks on exactly one engine, but let the scheduler determine where that
should be, allowing load-balancing of tasks. The LoadBalancedView does allow you to specify
restrictions on where and when tasks can execute, for more complicated load-balanced workflows.
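
A small sketch of creating and using one (``rc`` is assumed to be a connected :class:`Client`;
which engine runs each task is up to the scheduler):

.. sourcecode:: ipython

    In [22]: lview = rc.load_balanced_view()

    # each call becomes a single task, assigned to whichever engine is free
    In [23]: ar = lview.apply_async(lambda x: x**2, 9)

    In [24]: ar.get()
    Out[24]: 81

    # map over a sequence: one task per element, balanced across the engines
    In [25]: amr = lview.map_async(lambda x: x**2, range(8))

    In [26]: amr.get()
    Out[26]: [0, 1, 4, 9, 16, 25, 36, 49]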

Data Movement
=============

Since the :class:`~.LoadBalancedView` does not know where execution will take place, explicit
data movement methods like push/pull and scatter/gather do not make sense, and are not provided.

Results
=======

AsyncResults
------------

Our primary representation of the results of remote execution is the :class:`~.AsyncResult`
object, based on the object of the same name in the built-in :mod:`multiprocessing.pool`
module. Our version provides a superset of that interface.

The basic principle of the AsyncResult is the encapsulation of one or more results not yet
completed. Execution methods (including data movement, such as push/pull) will all return
AsyncResults when `block=False`.

The mp.pool.AsyncResult interface
---------------------------------

The basic interface of the AsyncResult is exactly that of the AsyncResult in
:mod:`multiprocessing.pool`, and consists of four methods:

.. AsyncResult spec directly from docs.python.org

.. class:: AsyncResult

    The stdlib AsyncResult spec

    .. method:: wait([timeout])

        Wait until the result is available or until *timeout* seconds pass. This
        method always returns ``None``.

    .. method:: ready()

        Return whether the call has completed.

    .. method:: successful()

        Return whether the call completed without raising an exception. Will
        raise :exc:`AssertionError` if the result is not ready.

    .. method:: get([timeout])

        Return the result when it arrives. If *timeout* is not ``None`` and the
        result does not arrive within *timeout* seconds then
        :exc:`TimeoutError` is raised. If the remote call raised
        an exception then that exception will be reraised as a :exc:`RemoteError`
        by :meth:`get`.


While an AsyncResult is not done, you can check on it with its :meth:`ready` method, which will
return whether the AR is done. You can also wait on an AsyncResult with its :meth:`wait` method.
This method blocks until the result arrives. If you don't want to wait forever, you can pass a
timeout (in seconds) as an argument to :meth:`wait`. :meth:`wait` will *always return None*, and
should never raise an error.

:meth:`ready` and :meth:`wait` are insensitive to the success or failure of the call. After a
result is done, :meth:`successful` will tell you whether the call completed without raising an
exception.

If you actually want the result of the call, you can use :meth:`get`. Initially, :meth:`get`
behaves just like :meth:`wait`, in that it will block until the result is ready, or until a
timeout is met. However, unlike :meth:`wait`, :meth:`get` will raise a :exc:`TimeoutError` if
the timeout is reached and the result is still not ready. If the result arrives before the
timeout is reached, then :meth:`get` will return the result itself if no exception was raised,
and will raise an exception if there was.

Here is where we start to expand on the multiprocessing interface. Rather than raising the
original exception, a RemoteError will be raised, encapsulating the remote exception with some
metadata. If the AsyncResult represents multiple calls (e.g. any time `targets` is plural), then
a CompositeError, a subclass of RemoteError, will be raised.

.. seealso::

    For more information on remote exceptions, see :ref:`the section in the Direct Interface
    <parallel_exceptions>`.

Extended interface
******************

Other extensions of the AsyncResult interface include convenience wrappers for :meth:`get`.
AsyncResults have a :attr:`result` property, with the short alias :attr:`r`, both of which
simply call :meth:`get`. Since our object is designed for representing *parallel* results, it
is expected that many calls (any of those submitted via DirectView) will map results to engine
IDs. We provide a :meth:`get_dict` wrapper around :meth:`get` that returns a dictionary of the
individual results, keyed by engine ID.
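
For example (a sketch assuming a four-engine :class:`DirectView` ``dview``; the pids and engine
ids are illustrative):

.. sourcecode:: ipython

    In [27]: import os

    In [28]: ar = dview.apply_async(os.getpid)

    # .r and .result are shorthand for .get()
    In [29]: ar.r
    Out[29]: [1354, 1356, 1358, 1360]

    # get_dict() keys the same results by engine ID
    In [30]: ar.get_dict()
    Out[30]: {0: 1354, 1: 1356, 2: 1358, 3: 1360}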

You can also prevent a submitted job from actually executing, via the AsyncResult's
:meth:`abort` method. This will instruct engines to not execute the job when it arrives.

The larger extension of the AsyncResult API is the :attr:`metadata` attribute. The metadata
is a dictionary (with attribute access) that contains, logically enough, metadata about the
execution.

Metadata keys:

timestamps

    submitted
        When the task left the Client
    started
        When the task started execution on the engine
    completed
        When execution finished on the engine
    received
        When the result arrived on the Client

    Note that it is not known when the result arrived in 0MQ on the client, only when it
    arrived in Python via :meth:`Client.spin`, so in interactive use, this may not be
    strictly informative.

Information about the engine

    engine_id
        The integer id
    engine_uuid
        The UUID of the engine

output of the call

    pyerr
        Python exception, if there was one
    pyout
        Python output
    stderr
        stderr stream
    stdout
        stdout (e.g. print) stream

And some extended information

    status
        either 'ok' or 'error'
    msg_id
        The UUID of the message
    after
        For tasks: the time-based msg_id dependencies
    follow
        For tasks: the location-based msg_id dependencies
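
A small sketch of reading this metadata (assuming a connected :class:`Client` ``rc``; the exact
timings will vary):

.. sourcecode:: ipython

    In [31]: import time

    In [32]: ar = rc[0].apply_async(time.sleep, 1)

    In [33]: ar.get()

    # how long the call took on the engine (roughly one second here)
    In [34]: ar.metadata['completed'] - ar.metadata['started']

    # the metadata dict also supports attribute access
    In [35]: ar.metadata.engine_id
    Out[35]: 0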

While in most cases, the Clients that submitted a request will be the ones using the results,
other Clients can also request results directly from the Hub. This is done via the Client's
:meth:`get_result` method. This method will *always* return an AsyncResult object. If the call
was not submitted by the client, then it will be a subclass, called :class:`AsyncHubResult`.
These behave in the same way as an AsyncResult, but if the result is not ready, waiting on an
AsyncHubResult polls the Hub, which is much more expensive than the passive polling used
in regular AsyncResults.


The Client keeps track of all results
    history, results, metadata

Querying the Hub
================

The Hub sees all traffic that may pass through the schedulers between engines and clients.
It does this so that it can track state, allowing multiple clients to retrieve results of
computations submitted by their peers, as well as persisting the state to a database.

queue_status

    You can check the status of the queues of the engines with this command.

result_status

    check on results

purge_results

    forget results (conserve resources)
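
A brief sketch of these queries (assuming a connected :class:`Client` ``rc``; ``msg_ids`` is a
hypothetical list of message ids):

.. sourcecode:: ipython

    In [36]: rc.queue_status()           # per-engine queue/completed/tasks counts

    In [37]: rc.result_status(msg_ids)   # check on (and fetch) results for specific msg_ids

    In [38]: rc.purge_results('all')     # ask the Hub to forget results it has stored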

Controlling the Engines
=======================

There are a few actions you can do with Engines that do not involve execution. These
messages are sent via the Control socket, and bypass any long queues of waiting execution
jobs.

abort

    Sometimes you may want to prevent a job you have submitted from actually running. The method
    for this is :meth:`abort`. It takes a container of msg_ids, and instructs the Engines to not
    run the jobs if they arrive. The jobs will then fail with an AbortedTask error.

clear

    You may want to purge the Engine(s) namespace of any data you have left in it. After
    running `clear`, there will be no names in the Engine's namespace.

shutdown

    You can also instruct engines (and the Controller) to terminate from a Client. This
    can be useful when a job is finished, since you can shutdown all the processes with a
    single command.
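
A minimal sketch of these control actions (assuming a :class:`Client` ``rc`` and an outstanding
AsyncResult ``ar``; whether an abort arrives before execution starts is, of course, a race):

.. sourcecode:: ipython

    In [39]: ar.abort()             # ask the engines not to run this job when it arrives

    In [40]: rc[:].clear()          # wipe the engines' namespaces

    In [41]: rc.shutdown(hub=True)  # stop all engines and the controller itself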

Synchronization
===============

Since the Client is a synchronous object, events do not automatically trigger in your
interactive session - you must poll the 0MQ sockets for incoming messages. Note that
this polling *does not* actually make any network requests. It simply performs a `select`
operation, to check if messages are already in local memory, waiting to be handled.

The method that handles incoming messages is :meth:`spin`. This method flushes any waiting
messages on the various incoming sockets, and updates the state of the Client.

If you need to wait for particular results to finish, you can use the :meth:`wait` method,
which will call :meth:`spin` until the messages are no longer outstanding. Anything that
represents a collection of messages, such as a list of msg_ids or one or more AsyncResult
objects, can be passed as an argument to wait. A timeout can be specified, which will prevent
the call from blocking for more than a specified time, but the default behavior is to wait
forever.

The client also has an ``outstanding`` attribute - a ``set`` of msg_ids that are awaiting
replies. This is the default if wait is called with no arguments - i.e. wait on *all*
outstanding messages.


.. note::

    TODO wait example
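
In the meantime, a minimal sketch (assuming a :class:`Client` ``rc`` and a :class:`DirectView`
``dview``; the timeout value is arbitrary):

.. sourcecode:: ipython

    In [42]: import time

    In [43]: ar = dview.apply_async(time.sleep, 5)

    In [44]: rc.wait([ar], timeout=1)   # give up after one second
    Out[44]: False

    In [45]: rc.wait([ar])              # block until these results are done
    Out[45]: True

    In [46]: rc.wait()                  # wait on *all* outstanding messages
    Out[46]: True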

Map
===

Many parallel computing problems can be expressed as a ``map``, or running a single program with
a variety of different inputs. Python has a built-in :py:func:`map`, which does exactly this,
and many parallel execution tools in Python, such as the built-in
:py:class:`multiprocessing.Pool` object, provide implementations of `map`. All View objects
provide a :meth:`map` method as well, but the load-balanced and direct implementations differ.

Views' map methods can be called on any number of sequences, and they can also take the `block`
and `bound` keyword arguments, just like :meth:`~client.apply`, but *only as keywords*.

.. sourcecode:: python

    dview.map(*sequences, block=None)


* iter, map_async, reduce (a short sketch follows)
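
A small sketch of the asynchronous form (assuming a :class:`DirectView` ``dview``; the results
are illustrative):

.. sourcecode:: ipython

    In [47]: amr = dview.map_async(lambda x: x**2, range(8))

    # AsyncMapResults can be iterated, yielding results as they arrive
    In [48]: list(amr)
    Out[48]: [0, 1, 4, 9, 16, 25, 36, 49]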

Decorators and RemoteFunctions
==============================

.. note::

    TODO: write this section

:func:`~IPython.parallel.client.remotefunction.@parallel`

:func:`~IPython.parallel.client.remotefunction.@remote`

:class:`~IPython.parallel.client.remotefunction.RemoteFunction`

:class:`~IPython.parallel.client.remotefunction.ParallelFunction`
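
Until this section is written, a minimal sketch of the two decorators (assuming a
:class:`DirectView` ``dview``; see also the Direct interface documentation):

.. sourcecode:: ipython

    In [49]: @dview.remote(block=True)
       ....: def gethost():
       ....:     import socket
       ....:     return socket.gethostname()

    In [50]: @dview.parallel(block=True)
       ....: def psquare(x):
       ....:     return x**2

    In [51]: gethost()            # one call per engine

    In [52]: psquare(range(16))   # the sequence is split across the engines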

Dependencies
============

.. note::

    TODO: write this section

:func:`~IPython.parallel.controller.dependency.@depend`

:func:`~IPython.parallel.controller.dependency.@require`

:class:`~IPython.parallel.controller.dependency.Dependency`
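
Until this section is written, a minimal sketch of functional dependencies with a
:class:`LoadBalancedView` ``lview`` (the function names and the platform check are illustrative):

.. sourcecode:: ipython

    In [53]: from IPython.parallel import depend, require

    # @require: only run on engines where the named modules are importable
    In [54]: @require('numpy')
       ....: def norm(x):
       ....:     import numpy
       ....:     return numpy.linalg.norm(x, 2)

    # @depend: only run on engines where dependency_function(*args) returns True
    In [55]: def on_platform(plat):
       ....:     import sys
       ....:     return sys.platform.startswith(plat)

    In [56]: @depend(on_platform, 'linux')
       ....: def linux_only_task():
       ....:     return 'this ran on a Linux engine'

    In [57]: ar = lview.apply_async(linux_only_task)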

@@ -1,1002 +1,907 @@

.. _parallel_multiengine:

==========================
IPython's Direct interface
==========================

The direct, or multiengine, interface represents one possible way of working with a set of
IPython engines. The basic idea behind the multiengine interface is that the
capabilities of each engine are directly and explicitly exposed to the user.
Thus, in the multiengine interface, each engine is given an id that is used to
identify the engine and give it work to do. This interface is very intuitive
and is designed with interactive usage in mind, and is the best place for
new users of IPython to begin.

Starting the IPython controller and engines
===========================================

To follow along with this tutorial, you will need to start the IPython
controller and four IPython engines. The simplest way of doing this is to use
the :command:`ipcluster` command::

    $ ipcluster start -n 4

For more detailed information about starting the controller and engines, see
our :ref:`introduction <parallel_overview>` to using IPython for parallel computing.

Creating a ``DirectView`` instance
==================================

The first step is to import the IPython :mod:`IPython.parallel`
module and then create a :class:`.Client` instance:

.. sourcecode:: ipython

    In [1]: from IPython.parallel import Client

    In [2]: rc = Client()

This form assumes that the default connection information (stored in
:file:`ipcontroller-client.json` found in :file:`IPYTHONDIR/profile_default/security`) is
accurate. If the controller was started on a remote machine, you must copy that connection
file to the client machine, or enter its contents as arguments to the Client constructor:

.. sourcecode:: ipython

    # If you have copied the json connector file from the controller:
    In [2]: rc = Client('/path/to/ipcontroller-client.json')
    # or to connect with a specific profile you have set up:
    In [3]: rc = Client(profile='mpi')


To make sure there are engines connected to the controller, users can get a list
of engine ids:

.. sourcecode:: ipython

    In [3]: rc.ids
    Out[3]: [0, 1, 2, 3]

Here we see that there are four engines ready to do work for us.

For direct execution, we will make use of a :class:`DirectView` object, which can be
constructed via list-access to the client:

.. sourcecode:: ipython

    In [4]: dview = rc[:] # use all engines

.. seealso::

    For more information, see the in-depth explanation of :ref:`Views <parallel_details>`.

Quick and easy parallelism
==========================

In many cases, you simply want to apply a Python function to a sequence of
objects, but *in parallel*. The client interface provides a simple way
of accomplishing this: using the DirectView's :meth:`~DirectView.map` method.

Parallel map
------------

Python's builtin :func:`map` function allows a function to be applied to a
sequence element-by-element. This type of code is typically trivial to
parallelize. In fact, since IPython's interface is all about functions anyway,
you can just use the builtin :func:`map` with a :class:`RemoteFunction`, or a
DirectView's :meth:`map` method:

.. sourcecode:: ipython

    In [62]: serial_result = map(lambda x:x**10, range(32))

    In [63]: parallel_result = dview.map_sync(lambda x: x**10, range(32))

    In [67]: serial_result==parallel_result
    Out[67]: True


.. note::

    The :class:`DirectView`'s version of :meth:`map` does
    not do dynamic load balancing. For a load balanced version, use a
    :class:`LoadBalancedView`.

.. seealso::

    :meth:`map` is implemented via :class:`ParallelFunction`.
110 Remote function decorators
110 Remote function decorators
111 --------------------------
111 --------------------------
112
112
113 Remote functions are just like normal functions, but when they are called,
113 Remote functions are just like normal functions, but when they are called,
114 they execute on one or more engines, rather than locally. IPython provides
114 they execute on one or more engines, rather than locally. IPython provides
115 two decorators:
115 two decorators:
116
116
117 .. sourcecode:: ipython
117 .. sourcecode:: ipython
118
118
119 In [10]: @dview.remote(block=True)
119 In [10]: @dview.remote(block=True)
120 ....: def getpid():
120 ....: def getpid():
121 ....: import os
121 ....: import os
122 ....: return os.getpid()
122 ....: return os.getpid()
123 ....:
123 ....:
124
124
125 In [11]: getpid()
125 In [11]: getpid()
126 Out[11]: [12345, 12346, 12347, 12348]
126 Out[11]: [12345, 12346, 12347, 12348]
127
127
128 The ``@parallel`` decorator creates parallel functions, that break up an element-wise
128 The ``@parallel`` decorator creates parallel functions, that break up an element-wise
129 operations and distribute them, reconstructing the result.
129 operations and distribute them, reconstructing the result.
130
130
131 .. sourcecode:: ipython
131 .. sourcecode:: ipython
132
132
133 In [12]: import numpy as np
133 In [12]: import numpy as np
134
134
135 In [13]: A = np.random.random((64,48))
135 In [13]: A = np.random.random((64,48))
136
136
137 In [14]: @dview.parallel(block=True)
137 In [14]: @dview.parallel(block=True)
138 ....: def pmul(A,B):
138 ....: def pmul(A,B):
139 ....: return A*B
139 ....: return A*B
140
140
141 In [15]: C_local = A*A
141 In [15]: C_local = A*A
142
142
143 In [16]: C_remote = pmul(A,A)
143 In [16]: C_remote = pmul(A,A)
144
144
145 In [17]: (C_local == C_remote).all()
145 In [17]: (C_local == C_remote).all()
146 Out[17]: True
146 Out[17]: True
147
147
Calling a ``@parallel`` function *does not* correspond to map. It is used for splitting
element-wise operations that operate on a sequence or array. For ``map`` behavior,
parallel functions also have a :meth:`map` method.

==================== ============================ =============================
call                 pfunc(seq)                   pfunc.map(seq)
==================== ============================ =============================
# of tasks           # of engines (1 per engine)  # of engines (1 per engine)
# of remote calls    # of engines (1 per engine)  ``len(seq)``
argument to remote   ``seq[i:j]`` (sub-sequence)  ``seq[i]`` (single element)
==================== ============================ =============================
159
159
160 A quick example to illustrate the difference in arguments for the two modes:
160 A quick example to illustrate the difference in arguments for the two modes:
161
161
162 .. sourcecode:: ipython
162 .. sourcecode:: ipython
163
163
164 In [16]: @dview.parallel(block=True)
164 In [16]: @dview.parallel(block=True)
165 ....: def echo(x):
165 ....: def echo(x):
166 ....: return str(x)
166 ....: return str(x)
167 ....:
167 ....:
168
168
169 In [17]: echo(range(5))
169 In [17]: echo(range(5))
170 Out[17]: ['[0, 1]', '[2]', '[3]', '[4]']
170 Out[17]: ['[0, 1]', '[2]', '[3]', '[4]']
171
171
172 In [18]: echo.map(range(5))
172 In [18]: echo.map(range(5))
173 Out[18]: ['0', '1', '2', '3', '4']
173 Out[18]: ['0', '1', '2', '3', '4']
174
174
175
175
176 .. seealso::
176 .. seealso::
177
177
178 See the :func:`~.remotefunction.parallel` and :func:`~.remotefunction.remote`
178 See the :func:`~.remotefunction.parallel` and :func:`~.remotefunction.remote`
179 decorators for options.
179 decorators for options.
180
180
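Because the decorators are ordinary callables, they can also be applied to
existing functions without the ``@`` syntax. A minimal sketch, assuming a
running cluster; ``scale`` and ``remote_getpid`` are illustrative names only:

.. sourcecode:: python

    import numpy as np
    from IPython.parallel import Client

    rc = Client()            # connect to the running cluster
    dview = rc[:]

    def getpid():
        import os
        return os.getpid()

    # same as decorating getpid with @dview.remote(block=True)
    remote_getpid = dview.remote(block=True)(getpid)
    print(remote_getpid())   # one pid per engine

    def scale(a):
        return 2 * a

    # same as decorating scale with @dview.parallel(block=True)
    pscale = dview.parallel(block=True)(scale)
    A = np.arange(16.)
    print(pscale(A))         # each engine scales one slice of A
    print(pscale.map(A))     # one remote call per element instead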
181 Calling Python functions
181 Calling Python functions
182 ========================
182 ========================
183
183
184 The most basic type of operation that can be performed on the engines is to
184 The most basic type of operation that can be performed on the engines is to
185 execute Python code or call Python functions. Executing Python code can be
185 execute Python code or call Python functions. Executing Python code can be
186 done in blocking or non-blocking mode (non-blocking is default) using the
186 done in blocking or non-blocking mode (non-blocking is default) using the
187 :meth:`.View.execute` method, and calling functions can be done via the
187 :meth:`.View.execute` method, and calling functions can be done via the
188 :meth:`.View.apply` method.
188 :meth:`.View.apply` method.
189
189
190 apply
190 apply
191 -----
191 -----
192
192
The main method for doing remote execution (in fact, all methods that
communicate with the engines are built on top of it) is :meth:`View.apply`.
195
195
196 We strive to provide the cleanest interface we can, so `apply` has the following
196 We strive to provide the cleanest interface we can, so `apply` has the following
197 signature:
197 signature:
198
198
199 .. sourcecode:: python
199 .. sourcecode:: python
200
200
201 view.apply(f, *args, **kwargs)
201 view.apply(f, *args, **kwargs)
202
202
203 There are various ways to call functions with IPython, and these flags are set as
203 There are various ways to call functions with IPython, and these flags are set as
204 attributes of the View. The ``DirectView`` has just two of these flags:
204 attributes of the View. The ``DirectView`` has just two of these flags:
205
205
206 dv.block : bool
206 dv.block : bool
207 whether to wait for the result, or return an :class:`AsyncResult` object
207 whether to wait for the result, or return an :class:`AsyncResult` object
208 immediately
208 immediately
209 dv.track : bool
209 dv.track : bool
210 whether to instruct pyzmq to track when zeromq is done sending the message.
210 whether to instruct pyzmq to track when zeromq is done sending the message.
211 This is primarily useful for non-copying sends of numpy arrays that you plan to
211 This is primarily useful for non-copying sends of numpy arrays that you plan to
212 edit in-place. You need to know when it becomes safe to edit the buffer
212 edit in-place. You need to know when it becomes safe to edit the buffer
213 without corrupting the message.
213 without corrupting the message.
214 dv.targets : int, list of ints
214 dv.targets : int, list of ints
215 which targets this view is associated with.
215 which targets this view is associated with.
216
216
217
217
218 Creating a view is simple: index-access on a client creates a :class:`.DirectView`.
218 Creating a view is simple: index-access on a client creates a :class:`.DirectView`.
219
219
220 .. sourcecode:: ipython
220 .. sourcecode:: ipython
221
221
222 In [4]: view = rc[1:3]
222 In [4]: view = rc[1:3]
223 Out[4]: <DirectView [1, 2]>
223 Out[4]: <DirectView [1, 2]>
224
224
225 In [5]: view.apply<tab>
225 In [5]: view.apply<tab>
226 view.apply view.apply_async view.apply_sync
226 view.apply view.apply_async view.apply_sync
227
227
228 For convenience, you can set block temporarily for a single call with the extra sync/async methods.
228 For convenience, you can set block temporarily for a single call with the extra sync/async methods.
229
229
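The relationship between the three call styles can be summarised in a short
sketch (again assuming a running cluster; ``add`` is only an example function):

.. sourcecode:: python

    from IPython.parallel import Client

    rc = Client()              # connect to the running cluster
    view = rc[:]

    def add(a, b):
        return a + b

    view.block = True
    print(view.apply(add, 1, 2))          # honours view.block: blocks, one result per engine

    ar = view.apply_async(add, 'x', 'y')  # always returns an AsyncResult immediately
    print(ar.get())                       # ['xy', 'xy', ...]

    print(view.apply_sync(add, 2, 3))     # always blocks, regardless of view.block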
230 Blocking execution
230 Blocking execution
231 ------------------
231 ------------------
232
232
233 In blocking mode, the :class:`.DirectView` object (called ``dview`` in
233 In blocking mode, the :class:`.DirectView` object (called ``dview`` in
234 these examples) submits the command to the controller, which places the
234 these examples) submits the command to the controller, which places the
235 command in the engines' queues for execution. The :meth:`apply` call then
235 command in the engines' queues for execution. The :meth:`apply` call then
236 blocks until the engines are done executing the command:
236 blocks until the engines are done executing the command:
237
237
238 .. sourcecode:: ipython
238 .. sourcecode:: ipython
239
239
240 In [2]: dview = rc[:] # A DirectView of all engines
240 In [2]: dview = rc[:] # A DirectView of all engines
241 In [3]: dview.block=True
241 In [3]: dview.block=True
242 In [4]: dview['a'] = 5
242 In [4]: dview['a'] = 5
243
243
244 In [5]: dview['b'] = 10
244 In [5]: dview['b'] = 10
245
245
246 In [6]: dview.apply(lambda x: a+b+x, 27)
246 In [6]: dview.apply(lambda x: a+b+x, 27)
247 Out[6]: [42, 42, 42, 42]
247 Out[6]: [42, 42, 42, 42]
248
248
249 You can also select blocking execution on a call-by-call basis with the :meth:`apply_sync`
249 You can also select blocking execution on a call-by-call basis with the :meth:`apply_sync`
250 method:
250 method:
251
251
252 .. sourcecode:: ipython
252 .. sourcecode:: ipython
253
253
254 In [7]: dview.block=False
254 In [7]: dview.block=False
255
255
256 In [8]: dview.apply_sync(lambda x: a+b+x, 27)
256 In [8]: dview.apply_sync(lambda x: a+b+x, 27)
257 Out[8]: [42, 42, 42, 42]
257 Out[8]: [42, 42, 42, 42]
258
258
259 Python commands can be executed as strings on specific engines by using a View's ``execute``
259 Python commands can be executed as strings on specific engines by using a View's ``execute``
260 method:
260 method:
261
261
262 .. sourcecode:: ipython
262 .. sourcecode:: ipython
263
263
264 In [6]: rc[::2].execute('c=a+b')
264 In [6]: rc[::2].execute('c=a+b')
265
265
266 In [7]: rc[1::2].execute('c=a-b')
266 In [7]: rc[1::2].execute('c=a-b')
267
267
268 In [8]: dview['c'] # shorthand for dview.pull('c', block=True)
268 In [8]: dview['c'] # shorthand for dview.pull('c', block=True)
269 Out[8]: [15, -5, 15, -5]
269 Out[8]: [15, -5, 15, -5]
270
270
271
271
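Note that names like ``a`` and ``b`` used inside functions or code strings sent
to the engines are resolved in each engine's namespace, not in your local
session. A small self-contained sketch of this pattern (assuming a running
cluster):

.. sourcecode:: python

    from IPython.parallel import Client

    rc = Client()          # connect to the running cluster
    dview = rc[:]
    dview.block = True

    # dictionary-style assignment defines names on every engine
    dview['a'] = 5
    dview['b'] = 10

    # `a` and `b` are looked up remotely when the function runs,
    # so they do not need to exist in the local namespace at all
    print(dview.apply(lambda x: a + b + x, 27))   # [42, 42, ...]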
272 Non-blocking execution
272 Non-blocking execution
273 ----------------------
273 ----------------------
274
274
In non-blocking mode, :meth:`apply` submits the command to be executed and
then returns an :class:`AsyncResult` object immediately. The
:class:`AsyncResult` object gives you a way of getting a result at a later
time through its :meth:`get` method.
279
279
280 .. seealso::
280 .. seealso::
281
281
282 Docs on the :ref:`AsyncResult <parallel_asyncresult>` object.
282 Docs on the :ref:`AsyncResult <parallel_asyncresult>` object.
283
283
284 This allows you to quickly submit long running commands without blocking your
284 This allows you to quickly submit long running commands without blocking your
285 local Python/IPython session:
285 local Python/IPython session:
286
286
287 .. sourcecode:: ipython
287 .. sourcecode:: ipython
288
288
289 # define our function
289 # define our function
290 In [6]: def wait(t):
290 In [6]: def wait(t):
291 ....: import time
291 ....: import time
292 ....: tic = time.time()
292 ....: tic = time.time()
293 ....: time.sleep(t)
293 ....: time.sleep(t)
294 ....: return time.time()-tic
294 ....: return time.time()-tic
295
295
296 # In non-blocking mode
296 # In non-blocking mode
297 In [7]: ar = dview.apply_async(wait, 2)
297 In [7]: ar = dview.apply_async(wait, 2)
298
298
299 # Now block for the result
299 # Now block for the result
300 In [8]: ar.get()
300 In [8]: ar.get()
301 Out[8]: [2.0006198883056641, 1.9997570514678955, 1.9996809959411621, 2.0003249645233154]
301 Out[8]: [2.0006198883056641, 1.9997570514678955, 1.9996809959411621, 2.0003249645233154]
302
302
303 # Again in non-blocking mode
303 # Again in non-blocking mode
304 In [9]: ar = dview.apply_async(wait, 10)
304 In [9]: ar = dview.apply_async(wait, 10)
305
305
306 # Poll to see if the result is ready
306 # Poll to see if the result is ready
307 In [10]: ar.ready()
307 In [10]: ar.ready()
308 Out[10]: False
308 Out[10]: False
309
309
310 # ask for the result, but wait a maximum of 1 second:
310 # ask for the result, but wait a maximum of 1 second:
311 In [45]: ar.get(1)
311 In [45]: ar.get(1)
312 ---------------------------------------------------------------------------
312 ---------------------------------------------------------------------------
313 TimeoutError Traceback (most recent call last)
313 TimeoutError Traceback (most recent call last)
314 /home/you/<ipython-input-45-7cd858bbb8e0> in <module>()
314 /home/you/<ipython-input-45-7cd858bbb8e0> in <module>()
315 ----> 1 ar.get(1)
315 ----> 1 ar.get(1)
316
316
317 /path/to/site-packages/IPython/parallel/asyncresult.pyc in get(self, timeout)
317 /path/to/site-packages/IPython/parallel/asyncresult.pyc in get(self, timeout)
318 62 raise self._exception
318 62 raise self._exception
319 63 else:
319 63 else:
320 ---> 64 raise error.TimeoutError("Result not ready.")
320 ---> 64 raise error.TimeoutError("Result not ready.")
321 65
321 65
322 66 def ready(self):
322 66 def ready(self):
323
323
324 TimeoutError: Result not ready.
324 TimeoutError: Result not ready.
325
325
326 .. Note::
326 .. Note::
327
327
328 Note the import inside the function. This is a common model, to ensure
328 Note the import inside the function. This is a common model, to ensure
329 that the appropriate modules are imported where the task is run. You can
329 that the appropriate modules are imported where the task is run. You can
330 also manually import modules into the engine(s) namespace(s) via
330 also manually import modules into the engine(s) namespace(s) via
331 :meth:`view.execute('import numpy')`.
331 :meth:`view.execute('import numpy')`.
332
332
Often, it is desirable to wait until a set of :class:`AsyncResult` objects
are done. For this, there is the method :meth:`wait`. This method takes a
tuple of :class:`AsyncResult` objects (or `msg_ids` or indices to the client's History),
and blocks until all of the associated results are ready:
337
337
338 .. sourcecode:: ipython
338 .. sourcecode:: ipython
339
339
340 In [72]: dview.block=False
340 In [72]: dview.block=False
341
341
342 # A trivial list of AsyncResults objects
342 # A trivial list of AsyncResults objects
343 In [73]: pr_list = [dview.apply_async(wait, 3) for i in range(10)]
343 In [73]: pr_list = [dview.apply_async(wait, 3) for i in range(10)]
344
344
345 # Wait until all of them are done
345 # Wait until all of them are done
346 In [74]: dview.wait(pr_list)
346 In [74]: dview.wait(pr_list)
347
347
348 # Then, their results are ready using get() or the `.r` attribute
348 # Then, their results are ready using get() or the `.r` attribute
349 In [75]: pr_list[0].get()
349 In [75]: pr_list[0].get()
350 Out[75]: [2.9982571601867676, 2.9982588291168213, 2.9987530708312988, 2.9990990161895752]
350 Out[75]: [2.9982571601867676, 2.9982588291168213, 2.9987530708312988, 2.9990990161895752]
351
351
352
352
353
353
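A compact sketch of the non-blocking pattern as a script, including the timeout
handling shown above (assuming a running cluster; ``wait_for`` is just an
example task):

.. sourcecode:: python

    from IPython.parallel import Client, error

    rc = Client()               # connect to the running cluster
    dview = rc[:]

    def wait_for(t):
        import time
        time.sleep(t)
        return t

    ar = dview.apply_async(wait_for, 5)
    print(ar.ready())           # probably False: the engines are still sleeping

    try:
        result = ar.get(timeout=1)   # give up after one second
    except error.TimeoutError:
        result = ar.get()            # or wait as long as it takes
    print(result)                    # [5, 5, ...], one value per engine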
354 The ``block`` and ``targets`` keyword arguments and attributes
354 The ``block`` and ``targets`` keyword arguments and attributes
355 --------------------------------------------------------------
355 --------------------------------------------------------------
356
356
357 Most DirectView methods (excluding :meth:`apply`) accept ``block`` and
357 Most DirectView methods (excluding :meth:`apply`) accept ``block`` and
358 ``targets`` as keyword arguments. As we have seen above, these keyword arguments control the
358 ``targets`` as keyword arguments. As we have seen above, these keyword arguments control the
359 blocking mode and which engines the command is applied to. The :class:`View` class also has
359 blocking mode and which engines the command is applied to. The :class:`View` class also has
360 :attr:`block` and :attr:`targets` attributes that control the default behavior when the keyword
360 :attr:`block` and :attr:`targets` attributes that control the default behavior when the keyword
361 arguments are not provided. Thus the following logic is used for :attr:`block` and :attr:`targets`:
361 arguments are not provided. Thus the following logic is used for :attr:`block` and :attr:`targets`:
362
362
* If no keyword argument is provided, the instance attributes are used.
* The keyword arguments, if provided, override the instance attributes for
  the duration of a single call.
366
366
367 The following examples demonstrate how to use the instance attributes:
367 The following examples demonstrate how to use the instance attributes:
368
368
.. sourcecode:: ipython

In [16]: dview.targets = [0,2]

In [17]: dview.block = False

In [18]: ar = dview.apply(lambda : 10)

In [19]: ar.get()
Out[19]: [10, 10]

In [20]: dview.targets = dview.client.ids # all engines (4)

In [21]: dview.block = True

In [22]: dview.apply(lambda : 42)
Out[22]: [42, 42, 42, 42]
386
386
387 The :attr:`block` and :attr:`targets` instance attributes of the
387 The :attr:`block` and :attr:`targets` instance attributes of the
388 :class:`.DirectView` also determine the behavior of the parallel magic commands.
388 :class:`.DirectView` also determine the behavior of the parallel magic commands.
389
389
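For example (a sketch, assuming a running cluster), the attributes set the
defaults while the keywords win for a single call:

.. sourcecode:: python

    from IPython.parallel import Client

    rc = Client()                  # connect to the running cluster
    dview = rc[:]
    dview.block = False            # attribute: default to non-blocking

    dview.execute('x = 1')                            # uses the defaults
    dview.execute('x = 2', targets=[0], block=True)   # one call: engine 0, blocking

    print(dview.pull('x', block=True))                # e.g. [2, 1, 1, 1]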
390 Parallel magic commands
390 Parallel magic commands
391 -----------------------
391 -----------------------
392
392
We provide a few IPython magic commands (``%px``, ``%autopx`` and ``%result``)
that make it a bit more pleasant to execute Python commands on the engines interactively.
These are simply shortcuts to the :meth:`.DirectView.execute`
and :meth:`.AsyncResult.display_outputs` methods, respectively.
397 The ``%px`` magic executes a single Python command on the engines
397 The ``%px`` magic executes a single Python command on the engines
398 specified by the :attr:`targets` attribute of the :class:`DirectView` instance:
398 specified by the :attr:`targets` attribute of the :class:`DirectView` instance:
399
399
400 .. sourcecode:: ipython
400 .. sourcecode:: ipython
401
401
402 # Create a DirectView for all targets
402 # Create a DirectView for all targets
403 In [22]: dv = rc[:]
403 In [22]: dv = rc[:]
404
404
405 # Make this DirectView active for parallel magic commands
405 # Make this DirectView active for parallel magic commands
406 In [23]: dv.activate()
406 In [23]: dv.activate()
407
407
408 In [24]: dv.block=True
408 In [24]: dv.block=True
409
409
410 # import numpy here and everywhere
410 # import numpy here and everywhere
411 In [25]: with dv.sync_imports():
411 In [25]: with dv.sync_imports():
412 ....: import numpy
412 ....: import numpy
413 importing numpy on engine(s)
413 importing numpy on engine(s)
414
414
415 In [27]: %px a = numpy.random.rand(2,2)
415 In [27]: %px a = numpy.random.rand(2,2)
416 Parallel execution on engines: [0, 1, 2, 3]
416 Parallel execution on engines: [0, 1, 2, 3]
417
417
418 In [28]: %px numpy.linalg.eigvals(a)
418 In [28]: %px numpy.linalg.eigvals(a)
419 Parallel execution on engines: [0, 1, 2, 3]
419 Parallel execution on engines: [0, 1, 2, 3]
420 [0] Out[68]: array([ 0.77120707, -0.19448286])
420 [0] Out[68]: array([ 0.77120707, -0.19448286])
421 [1] Out[68]: array([ 1.10815921, 0.05110369])
421 [1] Out[68]: array([ 1.10815921, 0.05110369])
422 [2] Out[68]: array([ 0.74625527, -0.37475081])
422 [2] Out[68]: array([ 0.74625527, -0.37475081])
423 [3] Out[68]: array([ 0.72931905, 0.07159743])
423 [3] Out[68]: array([ 0.72931905, 0.07159743])
424
424
425 In [29]: %px print 'hi'
425 In [29]: %px print 'hi'
426 Parallel execution on engine(s): [0, 1, 2, 3]
426 Parallel execution on engine(s): [0, 1, 2, 3]
427 [stdout:0] hi
427 [stdout:0] hi
428 [stdout:1] hi
428 [stdout:1] hi
429 [stdout:2] hi
429 [stdout:2] hi
430 [stdout:3] hi
430 [stdout:3] hi
431
431
432
432
433 Since engines are IPython as well, you can even run magics remotely:
433 Since engines are IPython as well, you can even run magics remotely:
434
434
435 .. sourcecode:: ipython
435 .. sourcecode:: ipython
436
436
437 In [28]: %px %pylab inline
437 In [28]: %px %pylab inline
438 Parallel execution on engine(s): [0, 1, 2, 3]
438 Parallel execution on engine(s): [0, 1, 2, 3]
439 [stdout:0]
439 [stdout:0]
440 Welcome to pylab, a matplotlib-based Python environment...
440 Welcome to pylab, a matplotlib-based Python environment...
441 For more information, type 'help(pylab)'.
441 For more information, type 'help(pylab)'.
442 [stdout:1]
442 [stdout:1]
443 Welcome to pylab, a matplotlib-based Python environment...
443 Welcome to pylab, a matplotlib-based Python environment...
444 For more information, type 'help(pylab)'.
444 For more information, type 'help(pylab)'.
445 [stdout:2]
445 [stdout:2]
446 Welcome to pylab, a matplotlib-based Python environment...
446 Welcome to pylab, a matplotlib-based Python environment...
447 For more information, type 'help(pylab)'.
447 For more information, type 'help(pylab)'.
448 [stdout:3]
448 [stdout:3]
449 Welcome to pylab, a matplotlib-based Python environment...
449 Welcome to pylab, a matplotlib-based Python environment...
450 For more information, type 'help(pylab)'.
450 For more information, type 'help(pylab)'.
451
451
And once in pylab mode with the inline backend,
you can make plots and they will be displayed in your frontend
if it supports the inline figures (e.g. notebook or qtconsole):
455
455
456 .. sourcecode:: ipython
456 .. sourcecode:: ipython
457
457
458 In [40]: %px plot(rand(100))
458 In [40]: %px plot(rand(100))
459 Parallel execution on engine(s): [0, 1, 2, 3]
459 Parallel execution on engine(s): [0, 1, 2, 3]
460 <plot0>
460 <plot0>
461 <plot1>
461 <plot1>
462 <plot2>
462 <plot2>
463 <plot3>
463 <plot3>
464 [0] Out[79]: [<matplotlib.lines.Line2D at 0x10a6286d0>]
464 [0] Out[79]: [<matplotlib.lines.Line2D at 0x10a6286d0>]
465 [1] Out[79]: [<matplotlib.lines.Line2D at 0x10b9476d0>]
465 [1] Out[79]: [<matplotlib.lines.Line2D at 0x10b9476d0>]
466 [2] Out[79]: [<matplotlib.lines.Line2D at 0x110652750>]
466 [2] Out[79]: [<matplotlib.lines.Line2D at 0x110652750>]
467 [3] Out[79]: [<matplotlib.lines.Line2D at 0x10c6566d0>]
467 [3] Out[79]: [<matplotlib.lines.Line2D at 0x10c6566d0>]
468
468
469
469
470 ``%%px`` Cell Magic
470 ``%%px`` Cell Magic
471 *******************
471 *******************
472
472
``%%px`` can also be used as a Cell Magic, which accepts ``--[no]block`` flags,
and a ``--group-outputs`` argument, which adjusts how the outputs of multiple
engines are presented.
476
476
477 .. seealso::
477 .. seealso::
478
478
479 :meth:`.AsyncResult.display_outputs` for the grouping options.
479 :meth:`.AsyncResult.display_outputs` for the grouping options.
480
480
481 .. sourcecode:: ipython
481 .. sourcecode:: ipython
482
482
483 In [50]: %%px --block --group-outputs=engine
483 In [50]: %%px --block --group-outputs=engine
484 ....: import numpy as np
484 ....: import numpy as np
485 ....: A = np.random.random((2,2))
485 ....: A = np.random.random((2,2))
486 ....: ev = numpy.linalg.eigvals(A)
486 ....: ev = numpy.linalg.eigvals(A)
487 ....: print ev
487 ....: print ev
488 ....: ev.max()
488 ....: ev.max()
489 ....:
489 ....:
490 Parallel execution on engine(s): [0, 1, 2, 3]
490 Parallel execution on engine(s): [0, 1, 2, 3]
491 [stdout:0] [ 0.60640442 0.95919621]
491 [stdout:0] [ 0.60640442 0.95919621]
492 [0] Out[73]: 0.9591962130899806
492 [0] Out[73]: 0.9591962130899806
493 [stdout:1] [ 0.38501813 1.29430871]
493 [stdout:1] [ 0.38501813 1.29430871]
494 [1] Out[73]: 1.2943087091452372
494 [1] Out[73]: 1.2943087091452372
495 [stdout:2] [-0.85925141 0.9387692 ]
495 [stdout:2] [-0.85925141 0.9387692 ]
496 [2] Out[73]: 0.93876920456230284
496 [2] Out[73]: 0.93876920456230284
497 [stdout:3] [ 0.37998269 1.24218246]
497 [stdout:3] [ 0.37998269 1.24218246]
498 [3] Out[73]: 1.2421824618493817
498 [3] Out[73]: 1.2421824618493817
499
499
500 ``%result`` Magic
500 ``%result`` Magic
501 *****************
501 *****************
502
502
503 If you are using ``%px`` in non-blocking mode, you won't get output.
503 If you are using ``%px`` in non-blocking mode, you won't get output.
504 You can use ``%result`` to display the outputs of the latest command,
504 You can use ``%result`` to display the outputs of the latest command,
505 just as is done when ``%px`` is blocking:
505 just as is done when ``%px`` is blocking:
506
506
507 .. sourcecode:: ipython
507 .. sourcecode:: ipython
508
508
509 In [39]: dv.block = False
509 In [39]: dv.block = False
510
510
511 In [40]: %px print 'hi'
511 In [40]: %px print 'hi'
512 Async parallel execution on engine(s): [0, 1, 2, 3]
512 Async parallel execution on engine(s): [0, 1, 2, 3]
513
513
514 In [41]: %result
514 In [41]: %result
515 [stdout:0] hi
515 [stdout:0] hi
516 [stdout:1] hi
516 [stdout:1] hi
517 [stdout:2] hi
517 [stdout:2] hi
518 [stdout:3] hi
518 [stdout:3] hi
519
519
520 ``%result`` simply calls :meth:`.AsyncResult.display_outputs` on the most recent request.
520 ``%result`` simply calls :meth:`.AsyncResult.display_outputs` on the most recent request.
521 You can pass integers as indices if you want a result other than the latest,
521 You can pass integers as indices if you want a result other than the latest,
522 e.g. ``%result -2``, or ``%result 0`` for the first.
522 e.g. ``%result -2``, or ``%result 0`` for the first.
523
523
524
524
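Since ``%result`` is only a thin wrapper, the same thing can be done without
the magic, for instance from a plain script. A sketch (assuming a running
cluster):

.. sourcecode:: python

    from IPython.parallel import Client

    rc = Client()                      # connect to the running cluster
    dview = rc[:]
    dview.block = False

    ar = dview.execute("print('hi')")  # non-blocking: returns an AsyncResult
    ar.wait()                          # let the engines finish
    ar.display_outputs()               # prints the [stdout:N] blocks shown above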
525 ``%autopx``
525 ``%autopx``
526 ***********
526 ***********
527
527
528 The ``%autopx`` magic switches to a mode where everything you type is executed
528 The ``%autopx`` magic switches to a mode where everything you type is executed
529 on the engines until you do ``%autopx`` again.
529 on the engines until you do ``%autopx`` again.
530
530
531 .. sourcecode:: ipython
531 .. sourcecode:: ipython
532
532
533 In [30]: dv.block=True
533 In [30]: dv.block=True
534
534
535 In [31]: %autopx
535 In [31]: %autopx
536 %autopx enabled
536 %autopx enabled
537
537
538 In [32]: max_evals = []
538 In [32]: max_evals = []
539
539
540 In [33]: for i in range(100):
540 In [33]: for i in range(100):
541 ....: a = numpy.random.rand(10,10)
541 ....: a = numpy.random.rand(10,10)
542 ....: a = a+a.transpose()
542 ....: a = a+a.transpose()
543 ....: evals = numpy.linalg.eigvals(a)
543 ....: evals = numpy.linalg.eigvals(a)
544 ....: max_evals.append(evals[0].real)
544 ....: max_evals.append(evals[0].real)
545 ....:
545 ....:
546
546
547 In [34]: print "Average max eigenvalue is: %f" % (sum(max_evals)/len(max_evals))
547 In [34]: print "Average max eigenvalue is: %f" % (sum(max_evals)/len(max_evals))
548 [stdout:0] Average max eigenvalue is: 10.193101
548 [stdout:0] Average max eigenvalue is: 10.193101
549 [stdout:1] Average max eigenvalue is: 10.064508
549 [stdout:1] Average max eigenvalue is: 10.064508
550 [stdout:2] Average max eigenvalue is: 10.055724
550 [stdout:2] Average max eigenvalue is: 10.055724
551 [stdout:3] Average max eigenvalue is: 10.086876
551 [stdout:3] Average max eigenvalue is: 10.086876
552
552
553 In [35]: %autopx
553 In [35]: %autopx
554 Auto Parallel Disabled
554 Auto Parallel Disabled
555
555
556
556
557 Engines as Kernels
557 Engines as Kernels
558 ******************
558 ******************
559
559
560 Engines are really the same object as the Kernels used elsewhere in IPython,
560 Engines are really the same object as the Kernels used elsewhere in IPython,
561 with the minor exception that engines connect to a controller, while regular kernels
561 with the minor exception that engines connect to a controller, while regular kernels
562 bind their sockets, listening for connections from a QtConsole or other frontends.
562 bind their sockets, listening for connections from a QtConsole or other frontends.
563
563
564 Sometimes for debugging or inspection purposes, you would like a QtConsole connected
564 Sometimes for debugging or inspection purposes, you would like a QtConsole connected
565 to an engine for more direct interaction. You can do this by first instructing
565 to an engine for more direct interaction. You can do this by first instructing
566 the Engine to *also* bind its kernel, to listen for connections:
566 the Engine to *also* bind its kernel, to listen for connections:
567
567
568 .. sourcecode:: ipython
568 .. sourcecode:: ipython
569
569
570 In [50]: %px from IPython.parallel import bind_kernel; bind_kernel()
570 In [50]: %px from IPython.parallel import bind_kernel; bind_kernel()
571
571
572 Then, if your engines are local, you can start a qtconsole right on the engine(s):
572 Then, if your engines are local, you can start a qtconsole right on the engine(s):
573
573
574 .. sourcecode:: ipython
574 .. sourcecode:: ipython
575
575
576 In [51]: %px %qtconsole
576 In [51]: %px %qtconsole
577
577
578 Careful with this one, because if your view is of 16 engines it will start 16 QtConsoles!
578 Careful with this one, because if your view is of 16 engines it will start 16 QtConsoles!
579
579
580 Or you can view just the connection info, and work out the right way to connect to the engines,
580 Or you can view just the connection info, and work out the right way to connect to the engines,
581 depending on where they live and where you are:
581 depending on where they live and where you are:
582
582
583 .. sourcecode:: ipython
583 .. sourcecode:: ipython
584
584
585 In [51]: %px %connect_info
585 In [51]: %px %connect_info
586 Parallel execution on engine(s): [0, 1, 2, 3]
586 Parallel execution on engine(s): [0, 1, 2, 3]
587 [stdout:0]
587 [stdout:0]
588 {
588 {
589 "stdin_port": 60387,
589 "stdin_port": 60387,
590 "ip": "127.0.0.1",
590 "ip": "127.0.0.1",
591 "hb_port": 50835,
591 "hb_port": 50835,
592 "key": "eee2dd69-7dd3-4340-bf3e-7e2e22a62542",
592 "key": "eee2dd69-7dd3-4340-bf3e-7e2e22a62542",
593 "shell_port": 55328,
593 "shell_port": 55328,
594 "iopub_port": 58264
594 "iopub_port": 58264
595 }
595 }
596
596
597 Paste the above JSON into a file, and connect with:
597 Paste the above JSON into a file, and connect with:
598 $> ipython <app> --existing <file>
598 $> ipython <app> --existing <file>
599 or, if you are local, you can connect with just:
599 or, if you are local, you can connect with just:
600 $> ipython <app> --existing kernel-60125.json
600 $> ipython <app> --existing kernel-60125.json
601 or even just:
601 or even just:
602 $> ipython <app> --existing
602 $> ipython <app> --existing
603 if this is the most recent IPython session you have started.
603 if this is the most recent IPython session you have started.
604 [stdout:1]
604 [stdout:1]
605 {
605 {
606 "stdin_port": 61869,
606 "stdin_port": 61869,
607 ...
607 ...
608
608
609 .. note::
609 .. note::
610
610
611 ``%qtconsole`` will call :func:`bind_kernel` on an engine if it hasn't been done already,
611 ``%qtconsole`` will call :func:`bind_kernel` on an engine if it hasn't been done already,
612 so you can often skip that first step.
612 so you can often skip that first step.
613
613
614
614
615 Moving Python objects around
615 Moving Python objects around
616 ============================
616 ============================
617
617
618 In addition to calling functions and executing code on engines, you can
618 In addition to calling functions and executing code on engines, you can
619 transfer Python objects to and from your IPython session and the engines. In
619 transfer Python objects to and from your IPython session and the engines. In
620 IPython, these operations are called :meth:`push` (sending an object to the
620 IPython, these operations are called :meth:`push` (sending an object to the
621 engines) and :meth:`pull` (getting an object from the engines).
621 engines) and :meth:`pull` (getting an object from the engines).
622
622
623 Basic push and pull
623 Basic push and pull
624 -------------------
624 -------------------
625
625
626 Here are some examples of how you use :meth:`push` and :meth:`pull`:
626 Here are some examples of how you use :meth:`push` and :meth:`pull`:
627
627
628 .. sourcecode:: ipython
628 .. sourcecode:: ipython
629
629
630 In [38]: dview.push(dict(a=1.03234,b=3453))
630 In [38]: dview.push(dict(a=1.03234,b=3453))
631 Out[38]: [None,None,None,None]
631 Out[38]: [None,None,None,None]
632
632
633 In [39]: dview.pull('a')
633 In [39]: dview.pull('a')
634 Out[39]: [ 1.03234, 1.03234, 1.03234, 1.03234]
634 Out[39]: [ 1.03234, 1.03234, 1.03234, 1.03234]
635
635
636 In [40]: dview.pull('b', targets=0)
636 In [40]: dview.pull('b', targets=0)
637 Out[40]: 3453
637 Out[40]: 3453
638
638
639 In [41]: dview.pull(('a','b'))
639 In [41]: dview.pull(('a','b'))
640 Out[41]: [ [1.03234, 3453], [1.03234, 3453], [1.03234, 3453], [1.03234, 3453] ]
640 Out[41]: [ [1.03234, 3453], [1.03234, 3453], [1.03234, 3453], [1.03234, 3453] ]
641
641
642 In [42]: dview.push(dict(c='speed'))
642 In [42]: dview.push(dict(c='speed'))
643 Out[42]: [None,None,None,None]
643 Out[42]: [None,None,None,None]
644
644
645 In non-blocking mode :meth:`push` and :meth:`pull` also return
645 In non-blocking mode :meth:`push` and :meth:`pull` also return
646 :class:`AsyncResult` objects:
646 :class:`AsyncResult` objects:
647
647
648 .. sourcecode:: ipython
648 .. sourcecode:: ipython
649
649
650 In [48]: ar = dview.pull('a', block=False)
650 In [48]: ar = dview.pull('a', block=False)
651
651
652 In [49]: ar.get()
652 In [49]: ar.get()
653 Out[49]: [1.03234, 1.03234, 1.03234, 1.03234]
653 Out[49]: [1.03234, 1.03234, 1.03234, 1.03234]
654
654
655
655
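The same calls work outside an interactive session as well. A short sketch
(assuming a running cluster):

.. sourcecode:: python

    from IPython.parallel import Client

    rc = Client()                    # connect to the running cluster
    dview = rc[:]

    dview.push(dict(a=1.03234, b=3453), block=True)
    print(dview.pull('a', block=True))             # one copy per engine
    print(dview.pull('b', targets=0, block=True))  # just engine 0

    ar = dview.pull(('a', 'b'), block=False)       # non-blocking pull
    print(ar.get())                                # [[1.03234, 3453], ...]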
656 Dictionary interface
656 Dictionary interface
657 --------------------
657 --------------------
658
658
Since a Python namespace is just a :class:`dict`, :class:`DirectView` objects provide
dictionary-style access by key and methods such as :meth:`get` and
:meth:`update` for convenience. This makes the remote namespaces of the engines
appear as a local dictionary. Underneath, these methods call :meth:`apply`:
663
663
664 .. sourcecode:: ipython
664 .. sourcecode:: ipython
665
665
666 In [51]: dview['a']=['foo','bar']
666 In [51]: dview['a']=['foo','bar']
667
667
668 In [52]: dview['a']
668 In [52]: dview['a']
669 Out[52]: [ ['foo', 'bar'], ['foo', 'bar'], ['foo', 'bar'], ['foo', 'bar'] ]
669 Out[52]: [ ['foo', 'bar'], ['foo', 'bar'], ['foo', 'bar'], ['foo', 'bar'] ]
670
670
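The :meth:`get` and :meth:`update` methods mentioned above work the same way as
on a regular :class:`dict`. A quick sketch (assuming a running cluster):

.. sourcecode:: python

    from IPython.parallel import Client

    rc = Client()                         # connect to the running cluster
    dview = rc[:]

    dview['a'] = ['foo', 'bar']           # __setitem__ is a blocking push
    print(dview['a'])                     # __getitem__ is a blocking pull

    dview.update(dict(b=10, c='speed'))   # dict-style update of every engine
    print(dview.get('c'))                 # dict-style get, like dview['c']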
671 Scatter and gather
671 Scatter and gather
672 ------------------
672 ------------------
673
673
Sometimes it is useful to partition a sequence and push the partitions to
different engines. In MPI language, this is known as scatter/gather and we
follow that terminology. However, it is important to remember that in
IPython's :class:`Client` class, :meth:`scatter` is from the
interactive IPython session to the engines and :meth:`gather` is from the
engines back to the interactive IPython session. For scatter/gather operations
between engines, MPI, pyzmq, or some other direct interconnect should be used.
681
681
682 .. sourcecode:: ipython
682 .. sourcecode:: ipython
683
683
684 In [58]: dview.scatter('a',range(16))
684 In [58]: dview.scatter('a',range(16))
685 Out[58]: [None,None,None,None]
685 Out[58]: [None,None,None,None]
686
686
687 In [59]: dview['a']
687 In [59]: dview['a']
688 Out[59]: [ [0, 1, 2, 3], [4, 5, 6, 7], [8, 9, 10, 11], [12, 13, 14, 15] ]
688 Out[59]: [ [0, 1, 2, 3], [4, 5, 6, 7], [8, 9, 10, 11], [12, 13, 14, 15] ]
689
689
690 In [60]: dview.gather('a')
690 In [60]: dview.gather('a')
691 Out[60]: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15]
691 Out[60]: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15]
692
692
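A common use of this pattern is a simple parallel reduction: scatter the data,
reduce each partition remotely, and combine the partial results locally. A
sketch (assuming a running cluster):

.. sourcecode:: python

    from IPython.parallel import Client

    rc = Client()                          # connect to the running cluster
    dview = rc[:]
    dview.block = True

    dview.scatter('chunk', range(1000))    # each engine receives one slice
    dview.execute('partial = sum(chunk)')  # reduce each slice remotely
    partials = dview.pull('partial')       # one partial sum per engine

    assert sum(partials) == sum(range(1000))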
693 Other things to look at
693 Other things to look at
694 =======================
694 =======================
695
695
696 How to do parallel list comprehensions
696 How to do parallel list comprehensions
697 --------------------------------------
697 --------------------------------------
698
698
699 In many cases list comprehensions are nicer than using the map function. While
699 In many cases list comprehensions are nicer than using the map function. While
700 we don't have fully parallel list comprehensions, it is simple to get the
700 we don't have fully parallel list comprehensions, it is simple to get the
701 basic effect using :meth:`scatter` and :meth:`gather`:
701 basic effect using :meth:`scatter` and :meth:`gather`:
702
702
703 .. sourcecode:: ipython
703 .. sourcecode:: ipython
704
704
705 In [66]: dview.scatter('x',range(64))
705 In [66]: dview.scatter('x',range(64))
706
706
707 In [67]: %px y = [i**10 for i in x]
707 In [67]: %px y = [i**10 for i in x]
708 Parallel execution on engines: [0, 1, 2, 3]
708 Parallel execution on engines: [0, 1, 2, 3]
709
709
710 In [68]: y = dview.gather('y')
710 In [68]: y = dview.gather('y')
711
711
712 In [69]: print y
712 In [69]: print y
713 [0, 1, 1024, 59049, 1048576, 9765625, 60466176, 282475249, 1073741824,...]
713 [0, 1, 1024, 59049, 1048576, 9765625, 60466176, 282475249, 1073741824,...]
714
714
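The same idiom works without the magics, which is convenient in scripts. A
sketch (assuming a running cluster):

.. sourcecode:: python

    from IPython.parallel import Client

    rc = Client()              # connect to the running cluster
    dview = rc[:]
    dview.block = True

    dview.scatter('x', range(64))
    dview.execute('y = [i**10 for i in x]')   # the same comprehension, no %px
    y = dview.gather('y')

    assert y == [i**10 for i in range(64)]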
715 Remote imports
715 Remote imports
716 --------------
716 --------------
717
717
718 Sometimes you will want to import packages both in your interactive session
718 Sometimes you will want to import packages both in your interactive session
719 and on your remote engines. This can be done with the :class:`ContextManager`
719 and on your remote engines. This can be done with the :class:`ContextManager`
720 created by a DirectView's :meth:`sync_imports` method:
720 created by a DirectView's :meth:`sync_imports` method:
721
721
722 .. sourcecode:: ipython
722 .. sourcecode:: ipython
723
723
724 In [69]: with dview.sync_imports():
724 In [69]: with dview.sync_imports():
725 ....: import numpy
725 ....: import numpy
726 importing numpy on engine(s)
726 importing numpy on engine(s)
727
727
728 Any imports made inside the block will also be performed on the view's engines.
728 Any imports made inside the block will also be performed on the view's engines.
729 sync_imports also takes a `local` boolean flag that defaults to True, which specifies
729 sync_imports also takes a `local` boolean flag that defaults to True, which specifies
730 whether the local imports should also be performed. However, support for `local=False`
730 whether the local imports should also be performed. However, support for `local=False`
731 has not been implemented, so only packages that can be imported locally will work
731 has not been implemented, so only packages that can be imported locally will work
732 this way.
732 this way.
733
733
734 You can also specify imports via the ``@require`` decorator. This is a decorator
734 You can also specify imports via the ``@require`` decorator. This is a decorator
735 designed for use in Dependencies, but can be used to handle remote imports as well.
735 designed for use in Dependencies, but can be used to handle remote imports as well.
736 Modules or module names passed to ``@require`` will be imported before the decorated
736 Modules or module names passed to ``@require`` will be imported before the decorated
737 function is called. If they cannot be imported, the decorated function will never
737 function is called. If they cannot be imported, the decorated function will never
738 execute and will fail with an UnmetDependencyError. Failures of single Engines will
738 execute and will fail with an UnmetDependencyError. Failures of single Engines will
739 be collected and raise a CompositeError, as demonstrated in the next section.
739 be collected and raise a CompositeError, as demonstrated in the next section.
740
740
.. sourcecode:: ipython

In [69]: from IPython.parallel import require

In [70]: @require('re')
   ....: def findall(pat, x):
   ....:     # re is guaranteed to be available
   ....:     return re.findall(pat, x)

# you can also pass modules themselves, that you already have locally:
In [71]: import time

In [72]: @require(time)
   ....: def wait(t):
   ....:     time.sleep(t)
   ....:     return t
755
755
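Functions decorated with ``@require`` are still called like any other remote
function. A self-contained sketch (assuming a running cluster); the pattern and
text passed to ``findall`` are arbitrary examples:

.. sourcecode:: python

    from IPython.parallel import Client, require

    rc = Client()                 # connect to the running cluster
    dview = rc[:]

    @require('re')
    def findall(pat, x):
        # `re` is imported on the engine before the body runs
        return re.findall(pat, x)

    print(dview.apply_sync(findall, r'\w+', 'IPython parallel engines'))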
.. _parallel_exceptions:

Parallel exceptions
-------------------

In the multiengine interface, parallel commands can raise Python exceptions,
just like serial commands. But it is a little subtle, because a single
parallel command can actually raise multiple exceptions (one for each engine
the command was run on). To express this idea, we have a
:exc:`CompositeError` exception class that will be raised in most cases. The
:exc:`CompositeError` class is a special type of exception that wraps one or
more other types of exceptions. Here is how it works:

.. sourcecode:: ipython

In [78]: dview.block = True

In [79]: dview.execute("1/0")
[0:execute]:
---------------------------------------------------------------------------
ZeroDivisionError Traceback (most recent call last)<ipython-input-1-05c9758a9c21> in <module>()
----> 1 1/0
ZeroDivisionError: integer division or modulo by zero

[1:execute]:
---------------------------------------------------------------------------
ZeroDivisionError Traceback (most recent call last)<ipython-input-1-05c9758a9c21> in <module>()
----> 1 1/0
ZeroDivisionError: integer division or modulo by zero

[2:execute]:
---------------------------------------------------------------------------
ZeroDivisionError Traceback (most recent call last)<ipython-input-1-05c9758a9c21> in <module>()
----> 1 1/0
ZeroDivisionError: integer division or modulo by zero

[3:execute]:
---------------------------------------------------------------------------
ZeroDivisionError Traceback (most recent call last)<ipython-input-1-05c9758a9c21> in <module>()
----> 1 1/0
ZeroDivisionError: integer division or modulo by zero

Notice how the error message printed when :exc:`CompositeError` is raised has
information about the individual exceptions that were raised on each engine.
If you want, you can even raise one of these original exceptions:

.. sourcecode:: ipython

In [80]: try:
   ....:     dview.execute('1/0', block=True)
   ....: except parallel.error.CompositeError, e:
   ....:     e.raise_exception()
   ....:
   ....:
---------------------------------------------------------------------------
ZeroDivisionError Traceback (most recent call last)<ipython-input-1-05c9758a9c21> in <module>()
----> 1 1/0
ZeroDivisionError: integer division or modulo by zero

If you are working in IPython, you can simply type ``%debug`` after one of
these :exc:`CompositeError` exceptions is raised, and inspect the exception
instance:

.. sourcecode:: ipython

In [81]: dview.execute('1/0')
[0:execute]:
---------------------------------------------------------------------------
ZeroDivisionError Traceback (most recent call last)<ipython-input-1-05c9758a9c21> in <module>()
----> 1 1/0
ZeroDivisionError: integer division or modulo by zero

[1:execute]:
---------------------------------------------------------------------------
ZeroDivisionError Traceback (most recent call last)<ipython-input-1-05c9758a9c21> in <module>()
----> 1 1/0
ZeroDivisionError: integer division or modulo by zero

[2:execute]:
---------------------------------------------------------------------------
ZeroDivisionError Traceback (most recent call last)<ipython-input-1-05c9758a9c21> in <module>()
----> 1 1/0
ZeroDivisionError: integer division or modulo by zero

[3:execute]:
---------------------------------------------------------------------------
ZeroDivisionError Traceback (most recent call last)<ipython-input-1-05c9758a9c21> in <module>()
----> 1 1/0
ZeroDivisionError: integer division or modulo by zero

In [82]: %debug
> /.../site-packages/IPython/parallel/client/asyncresult.py(125)get()
    124                 else:
--> 125                     raise self._exception
    126         else:

# Here, self._exception is the CompositeError instance:

ipdb> e = self._exception
ipdb> e
CompositeError(4)

# we can tab-complete on e to see available methods:
ipdb> e.<TAB>
e.args               e.message            e.traceback
e.elist              e.msg
e.ename              e.print_traceback
e.engine_info        e.raise_exception
e.evalue             e.render_traceback

# We can then display the individual tracebacks, if we want:
ipdb> e.print_traceback(1)
[1:execute]:
---------------------------------------------------------------------------
ZeroDivisionError Traceback (most recent call last)<ipython-input-1-05c9758a9c21> in <module>()
----> 1 1/0
ZeroDivisionError: integer division or modulo by zero


All of this same error handling magic even works in non-blocking mode:

.. sourcecode:: ipython

In [83]: dview.block=False

In [84]: ar = dview.execute('1/0')

In [85]: ar.get()
[0:execute]:
---------------------------------------------------------------------------
ZeroDivisionError Traceback (most recent call last)<ipython-input-1-05c9758a9c21> in <module>()
----> 1 1/0
ZeroDivisionError: integer division or modulo by zero

[1:execute]:
---------------------------------------------------------------------------
ZeroDivisionError Traceback (most recent call last)<ipython-input-1-05c9758a9c21> in <module>()
----> 1 1/0
ZeroDivisionError: integer division or modulo by zero

[2:execute]:
---------------------------------------------------------------------------
ZeroDivisionError Traceback (most recent call last)<ipython-input-1-05c9758a9c21> in <module>()
----> 1 1/0
ZeroDivisionError: integer division or modulo by zero

[3:execute]:
---------------------------------------------------------------------------
ZeroDivisionError Traceback (most recent call last)<ipython-input-1-05c9758a9c21> in <module>()
----> 1 1/0
ZeroDivisionError: integer division or modulo by zero

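Outside of the interactive tools, :exc:`CompositeError` can of course be
handled programmatically as well. A sketch (assuming a running cluster) using
the attributes shown in the ``%debug`` session above:

.. sourcecode:: python

    from IPython.parallel import Client, error

    rc = Client()                    # connect to the running cluster
    dview = rc[:]

    try:
        dview.execute('1/0', block=True)
    except error.CompositeError as e:
        print("%i engines raised an exception" % len(e.elist))
        e.print_traceback(0)         # show the traceback from the first engine
        # e.raise_exception()        # or re-raise one of the originals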