1 1 .. _parallel_details:
2 2
3 3 ==========================================
4 4 Details of Parallel Computing with IPython
5 5 ==========================================
6 6
7 7 .. note::
8 8
9 9 There are still many sections to fill out in this doc
10 10
11 11
12 12 Caveats
13 13 =======
14 14
15 15 First, some caveats about the detailed workings of parallel computing with 0MQ and IPython.
16 16
17 17 Non-copying sends and numpy arrays
18 18 ----------------------------------
19 19
20 20 When numpy arrays are passed as arguments to apply or via data-movement methods, they are not
21 21 copied. This means that you must be careful if you are sending an array that you intend to work
22 22 on. PyZMQ does allow you to track when a message has been sent so you can know when it is safe
23 23 to edit the buffer, but IPython only exposes this tracking when you request it with the ``track`` flag (see below).
24 24
25 25 It is also important to note that the non-copying receive of a message is *read-only*. That
26 26 means that if you intend to work in-place on an array that you have sent or received, you must
27 27 copy it. This is true for both numpy arrays sent to engines and numpy arrays retrieved as
28 28 results.
29 29
30 30 The following will fail:
31 31
32 32 .. sourcecode:: ipython
33 33
34 34 In [3]: A = numpy.zeros(2)
35 35
36 36 In [4]: def setter(a):
37 37 ...: a[0]=1
38 38 ...: return a
39 39
40 40 In [5]: rc[0].apply_sync(setter, A)
41 41 ---------------------------------------------------------------------------
42 42 RuntimeError Traceback (most recent call last)<string> in <module>()
43 43 <ipython-input-12-c3e7afeb3075> in setter(a)
44 44 RuntimeError: array is not writeable
45 45
46 46 If you do need to edit the array in-place, just remember to copy the array if it's read-only.
47 47 The :attr:`ndarray.flags.writeable` flag will tell you if you can write to an array.
48 48
49 49 .. sourcecode:: ipython
50 50
51 51 In [3]: A = numpy.zeros(2)
52 52
53 53 In [4]: def setter(a):
54 54 ...: """only copy read-only arrays"""
55 55 ...: if not a.flags.writeable:
56 56 ...: a=a.copy()
57 57 ...: a[0]=1
58 58 ...: return a
59 59
60 60 In [5]: rc[0].apply_sync(setter, A)
61 61 Out[5]: array([ 1., 0.])
62 62
63 63 # note that results will also be read-only:
64 64 In [6]: _.flags.writeable
65 65 Out[6]: False
66 66
67 67 If you want to safely edit an array in-place after *sending* it, you must use the `track=True`
68 68 flag. IPython always performs non-copying sends of arrays, which return immediately. You must
69 69 instruct IPython to track those messages *at send time* in order to know for sure that the send has
70 70 completed. AsyncResults have a :attr:`sent` property, and :meth:`wait_on_send` method for
71 71 checking and waiting for 0MQ to finish with a buffer.
72 72
73 73 .. sourcecode:: ipython
74 74
75 75 In [5]: A = numpy.random.random((1024,1024))
76 76
77 77 In [6]: view.track=True
78 78
79 79 In [7]: ar = view.apply_async(lambda x: 2*x, A)
80 80
81 81 In [8]: ar.sent
82 82 Out[8]: False
83 83
84 84 In [9]: ar.wait_on_send() # blocks until sent is True
85 85
86 86
87 87 What is sendable?
88 88 -----------------
89 89
90 90 If IPython doesn't know what to do with an object, it will pickle it. There is a short list of
91 91 objects that are not pickled: ``buffers``, ``str/bytes`` objects, and ``numpy``
92 92 arrays. These are handled specially by IPython in order to prevent the copying of data. Sending
93 93 bytes or numpy arrays will result in exactly zero in-memory copies of your data (unless the data
94 94 is very small).
95 95
96 96 If you have an object that provides a Python buffer interface, then you can always send that
97 97 buffer without copying - and reconstruct the object on the other side in your own code. It is
98 98 possible that the object reconstruction will become extensible, so you can add your own
99 99 non-copying types, but this does not yet exist.
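
One pattern for this (a sketch only, assuming a connected DirectView ``view``; ``rebuild_and_sum``
is an illustrative helper, not part of IPython) is to ship the raw bytes along with whatever
metadata is needed to rebuild the object, and do the reconstruction yourself in the remote function:

.. sourcecode:: python

    import numpy

    def rebuild_and_sum(buf, dtype, shape):
        # rebuild the array from the raw bytes on the engine, then use it
        a = numpy.frombuffer(buf, dtype=dtype).reshape(shape)
        return a.sum()

    A = numpy.arange(6, dtype='float64').reshape(2, 3)

    # send only the buffer plus enough metadata to reconstruct it remotely
    ar = view.apply_async(rebuild_and_sum, A.tostring(), A.dtype.str, A.shape)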
100 100
101 101 Closures
102 102 ********
103 103
104 104 Just about anything in Python is pickleable. The one notable exception is objects (generally
105 105 functions) with *closures*. Closures can be a complicated topic, but the basic principle is that
106 106 functions that refer to variables in their parent scope have closures.
107 107
108 108 An example of a function that uses a closure:
109 109
110 110 .. sourcecode:: python
111 111
112 112 def f(a):
113 113 def inner():
114 114 # inner will have a closure
115 115 return a
116 116 return inner
117 117
118 118 f1 = f(1)
119 119 f2 = f(2)
120 120 f1() # returns 1
121 121 f2() # returns 2
122 122
123 123 ``f1`` and ``f2`` will have closures referring to the scope in which `inner` was defined,
124 124 because they use the variable 'a'. As a result, you would not be able to send ``f1`` or ``f2``
125 125 with IPython. Note that you *would* be able to send `f`. This is only true for interactively
126 126 defined functions (as are often used in decorators), and only when there are variables used
127 127 inside the inner function that are defined in the outer function. If the names are *not* in the
128 128 outer function, then there will not be a closure, and the generated function will look in
129 129 ``globals()`` for the name:
130 130
131 131 .. sourcecode:: python
132 132
133 133 def g(b):
134 134 # note that `b` is not referenced in inner's scope
135 135 def inner():
136 136 # this inner will *not* have a closure
137 137 return a
138 138 return inner
139 139 g1 = g(1)
140 140 g2 = g(2)
141 141 g1() # raises NameError on 'a'
142 142 a=5
143 143 g2() # returns 5
144 144
145 145 `g1` and `g2` *will* be sendable with IPython, and will treat the engine's namespace as
146 146 globals(). The :meth:`pull` method is implemented based on this principle. If we did not
147 147 provide pull, you could implement it yourself with `apply`, by simply returning objects out
148 148 of the global namespace:
149 149
150 150 .. sourcecode:: ipython
151 151
152 152 In [10]: view.apply(lambda : a)
153 153
154 154 # is equivalent to
155 155 In [11]: view.pull('a')
156 156
157 157 Running Code
158 158 ============
159 159
160 160 There are two principal units of execution in Python: strings of Python code (e.g. 'a=5'),
161 161 and Python functions. IPython is designed around the use of functions via the core
162 162 Client method, called `apply`.
163 163
164 164 Apply
165 165 -----
166 166
167 167 The principal method of remote execution is :meth:`apply`, of
168 168 :class:`~IPython.parallel.client.view.View` objects. The Client provides the full execution and
169 169 communication API for engines via its low-level :meth:`send_apply_message` method, which is used
170 170 by all higher level methods of its Views.
171 171
172 172 f : function
173 173 The function to be called remotely
174 174 args : tuple/list
175 175 The positional arguments passed to `f`
176 176 kwargs : dict
177 177 The keyword arguments passed to `f`
178 178
179 179 flags for all views:
180 180
181 181 block : bool (default: view.block)
182 182 Whether to wait for the result, or return immediately.
183 183 False:
184 184 returns AsyncResult
185 185 True:
186 186 returns actual result(s) of f(*args, **kwargs)
187 187 if multiple targets:
188 188 list of results, matching `targets`
189 189 track : bool [default view.track]
190 190 whether to track non-copying sends.
191 191
192 192 targets : int,list of ints, 'all', None [default view.targets]
193 193 Specify the destination of the job.
194 194 if 'all' or None:
195 195 Run on all active engines
196 196 if list:
197 197 Run on each specified engine
198 198 if int:
199 199 Run on single engine
200 200
201 201 Note that LoadBalancedView uses targets to restrict possible destinations. LoadBalanced calls
202 202 will always execute in just one location.
203 203
204 204 flags only in LoadBalancedViews:
205 205
206 206 after : Dependency or collection of msg_ids
207 207 Only for load-balanced execution (targets=None)
208 208 Specify a list of msg_ids as a time-based dependency.
209 209 This job will only be run *after* the dependencies
210 210 have been met.
211 211
212 212 follow : Dependency or collection of msg_ids
213 213 Only for load-balanced execution (targets=None)
214 214 Specify a list of msg_ids as a location-based dependency.
215 215 This job will only be run on an engine where this dependency
216 216 is met.
217 217
218 218 timeout : float/int or None
219 219 Only for load-balanced execution (targets=None)
220 220 Specify an amount of time (in seconds) for the scheduler to
221 221 wait for dependencies to be met before failing with a
222 222 DependencyTimeout.
223 223
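As a brief illustration of these flags (a sketch, assuming a connected Client ``rc`` with four
engines):

.. sourcecode:: ipython

    In [1]: dview = rc[:]                  # a DirectView of all engines

    In [2]: dview.block = True             # wait for results by default

    In [3]: dview.apply(lambda x, y: x + y, 1, y=2)
    Out[3]: [3, 3, 3, 3]

    In [4]: ar = rc[0].apply_async(lambda x: x ** 2, 4)   # single engine, non-blocking

    In [5]: ar.get()
    Out[5]: 16
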
224 224 execute and run
225 225 ---------------
226 226
227 227 For executing strings of Python code, :class:`DirectView` 's also provide an :meth:`execute` and
228 228 a :meth:`run` method, which rather than take functions and arguments, take simple strings.
229 229 `execute` simply takes a string of Python code to execute, and sends it to the Engine(s). `run`
230 230 is the same as `execute`, but for a *file*, rather than a string. It is simply a wrapper that
231 231 does something very similar to ``execute(open(f).read())``.
232 232
233 233 .. note::
234 234
235 235 TODO: Examples for execute and run
236 236
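A minimal sketch of both methods, assuming a connected DirectView ``dview`` (``myscript.py`` is a
placeholder filename):

.. sourcecode:: ipython

    In [1]: dview.execute('b = a + 1')     # run a string of code on the engines

    In [2]: dview.run('myscript.py')       # run the contents of a file on the engines
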
237 237 Views
238 238 =====
239 239
240 240 The principal extension of the :class:`~parallel.Client` is the :class:`~parallel.View`
241 241 class. The client is typically a singleton for connecting to a cluster, and presents a
242 242 low-level interface to the Hub and Engines. Most real usage will involve creating one or more
243 243 :class:`~parallel.View` objects for working with engines in various ways.
244 244
245 245
246 246 DirectView
247 247 ----------
248 248
249 249 The :class:`.DirectView` is the class for the IPython :ref:`Multiplexing Interface
250 250 <parallel_multiengine>`.
251 251
252 252 Creating a DirectView
253 253 *********************
254 254
255 255 DirectViews can be created in two ways, by index access to a client, or by a client's
256 256 :meth:`view` method. Index access to a Client works in a few ways. First, you can create
257 257 DirectViews to single engines simply by accessing the client by engine id:
258 258
259 259 .. sourcecode:: ipython
260 260
261 261 In [2]: rc[0]
262 262 Out[2]: <DirectView 0>
263 263
264 264 You can also create a DirectView with a list of engines:
265 265
266 266 .. sourcecode:: ipython
267 267
268 268 In [2]: rc[0,1,2]
269 269 Out[2]: <DirectView [0,1,2]>
270 270
271 271 Other methods for accessing elements, such as slicing and negative indexing, work by passing
272 272 the index directly to the client's :attr:`ids` list, so:
273 273
274 274 .. sourcecode:: ipython
275 275
276 276 # negative index
277 277 In [2]: rc[-1]
278 278 Out[2]: <DirectView 3>
279 279
280 280 # or slicing:
281 281 In [3]: rc[::2]
282 282 Out[3]: <DirectView [0,2]>
283 283
284 284 are always the same as:
285 285
286 286 .. sourcecode:: ipython
287 287
288 288 In [2]: rc[rc.ids[-1]]
289 289 Out[2]: <DirectView 3>
290 290
291 291 In [3]: rc[rc.ids[::2]]
292 292 Out[3]: <DirectView [0,2]>
293 293
294 294 Also note that the slice is evaluated at the time of construction of the DirectView, so the
295 295 targets will not change over time if engines are added/removed from the cluster.
296 296
297 297 Execution via DirectView
298 298 ************************
299 299
300 300 The DirectView is the simplest way to work with one or more engines directly (hence the name).
301 301
302 302 For instance, to get the process ID of all your engines:
303 303
304 304 .. sourcecode:: ipython
305 305
306 306 In [5]: import os
307 307
308 308 In [6]: dview.apply_sync(os.getpid)
309 309 Out[6]: [1354, 1356, 1358, 1360]
310 310
311 311 Or to see the hostname of the machine they are on:
312 312
313 313 .. sourcecode:: ipython
314 314
315 315 In [5]: import socket
316 316
317 317 In [6]: dview.apply_sync(socket.gethostname)
318 318 Out[6]: ['tesla', 'tesla', 'edison', 'edison', 'edison']
319 319
320 320 .. note::
321 321
322 322 TODO: expand on direct execution
323 323
324 324 Data movement via DirectView
325 325 ****************************
326 326
327 327 Since a Python namespace is just a :class:`dict`, :class:`DirectView` objects provide
328 328 dictionary-style access by key and methods such as :meth:`get` and
329 329 :meth:`update` for convenience. This makes the remote namespaces of the engines
330 330 appear as a local dictionary. Underneath, these methods call :meth:`apply`:
331 331
332 332 .. sourcecode:: ipython
333 333
334 334 In [51]: dview['a']=['foo','bar']
335 335
336 336 In [52]: dview['a']
337 337 Out[52]: [ ['foo', 'bar'], ['foo', 'bar'], ['foo', 'bar'], ['foo', 'bar'] ]
338 338
339 339 Scatter and gather
340 340 ------------------
341 341
342 342 Sometimes it is useful to partition a sequence and push the partitions to
343 343 different engines. In MPI language, this is known as scatter/gather and we
344 344 follow that terminology. However, it is important to remember that in
345 345 IPython's :class:`Client` class, :meth:`scatter` is from the
346 346 interactive IPython session to the engines and :meth:`gather` is from the
347 347 engines back to the interactive IPython session. For scatter/gather operations
348 348 between engines, MPI should be used:
349 349
350 350 .. sourcecode:: ipython
351 351
352 352 In [58]: dview.scatter('a',range(16))
353 353 Out[58]: [None,None,None,None]
354 354
355 355 In [59]: dview['a']
356 356 Out[59]: [ [0, 1, 2, 3], [4, 5, 6, 7], [8, 9, 10, 11], [12, 13, 14, 15] ]
357 357
358 358 In [60]: dview.gather('a')
359 359 Out[60]: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15]
360 360
361 361 Push and pull
362 362 -------------
363 363
364 364 :meth:`~IPython.parallel.client.view.DirectView.push`
365 365
366 366 :meth:`~IPython.parallel.client.view.DirectView.pull`
367 367
368 368 .. note::
369 369
370 370 TODO: write this section
371 371
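In brief, :meth:`push` updates the engines' namespaces from a dict, and :meth:`pull` retrieves
objects by name. A minimal sketch, assuming a blocking DirectView ``dview`` on four engines:

.. sourcecode:: ipython

    In [1]: dview.push(dict(a=10, b='hello'))
    Out[1]: [None, None, None, None]

    In [2]: dview.pull('a')
    Out[2]: [10, 10, 10, 10]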
372 372
373 373 LoadBalancedView
374 374 ----------------
375 375
376 376 The :class:`~.LoadBalancedView` is the class for load-balanced execution via the task scheduler.
377 377 These views always run tasks on exactly one engine, but let the scheduler determine where that
378 378 should be, allowing load-balancing of tasks. The LoadBalancedView does allow you to specify
379 379 restrictions on where and when tasks can execute, for more complicated load-balanced workflows.
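
Creating one and running a task on it might look like this (a sketch, assuming a connected
Client ``rc``):

.. sourcecode:: ipython

    In [1]: lview = rc.load_balanced_view()     # let the scheduler pick the engine

    In [2]: ar = lview.apply_async(lambda x: x ** 2, 5)

    In [3]: ar.get()
    Out[3]: 25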
380 380
381 381 Data Movement
382 382 =============
383 383
384 384 Since the :class:`~.LoadBalancedView` does not know where execution will take place, explicit
385 385 data movement methods like push/pull and scatter/gather do not make sense, and are not provided.
386 386
387 387 Results
388 388 =======
389 389
390 390 AsyncResults
391 391 ------------
392 392
393 393 Our primary representation of the results of remote execution is the :class:`~.AsyncResult`
394 394 object, based on the object of the same name in the built-in :mod:`multiprocessing.pool`
395 395 module. Our version provides a superset of that interface.
396 396
397 397 The basic principle of the AsyncResult is the encapsulation of one or more results not yet completed. Execution methods (including data movement, such as push/pull) will all return
398 398 AsyncResults when `block=False`.
399 399
400 400 The mp.pool.AsyncResult interface
401 401 ---------------------------------
402 402
403 403 The basic interface of the AsyncResult is exactly that of the AsyncResult in :mod:`multiprocessing.pool`, and consists of four methods:
404 404
405 405 .. AsyncResult spec directly from docs.python.org
406 406
407 407 .. class:: AsyncResult
408 408
409 409 The stdlib AsyncResult spec
410 410
411 411 .. method:: wait([timeout])
412 412
413 413 Wait until the result is available or until *timeout* seconds pass. This
414 414 method always returns ``None``.
415 415
416 416 .. method:: ready()
417 417
418 418 Return whether the call has completed.
419 419
420 420 .. method:: successful()
421 421
422 422 Return whether the call completed without raising an exception. Will
423 423 raise :exc:`AssertionError` if the result is not ready.
424 424
425 425 .. method:: get([timeout])
426 426
427 427 Return the result when it arrives. If *timeout* is not ``None`` and the
428 428 result does not arrive within *timeout* seconds then
429 429 :exc:`TimeoutError` is raised. If the remote call raised
430 430 an exception then that exception will be reraised as a :exc:`RemoteError`
431 431 by :meth:`get`.
432 432
433 433
434 434 While an AsyncResult is not done, you can check on it with its :meth:`ready` method, which will
435 435 return whether the AR is done. You can also wait on an AsyncResult with its :meth:`wait` method.
436 436 This method blocks until the result arrives. If you don't want to wait forever, you can pass a
437 437 timeout (in seconds) as an argument to :meth:`wait`. :meth:`wait` will *always return None*, and
438 438 should never raise an error.
439 439
440 440 :meth:`ready` and :meth:`wait` are insensitive to the success or failure of the call. After a
441 441 result is done, :meth:`successful` will tell you whether the call completed without raising an
442 442 exception.
443 443
444 444 If you actually want the result of the call, you can use :meth:`get`. Initially, :meth:`get`
445 445 behaves just like :meth:`wait`, in that it will block until the result is ready, or until a
446 446 timeout is met. However, unlike :meth:`wait`, :meth:`get` will raise a :exc:`TimeoutError` if
447 447 the timeout is reached and the result is still not ready. If the result arrives before the
448 448 timeout is reached, then :meth:`get` will return the result itself if no exception was raised,
449 449 and will raise an exception if there was.
450 450
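For instance (a sketch, assuming ``view`` is a single-engine view such as ``rc[0]``):

.. sourcecode:: ipython

    In [1]: import time

    In [2]: ar = view.apply_async(time.sleep, 1)

    In [3]: ar.ready()
    Out[3]: False

    In [4]: ar.wait()          # blocks until done; always returns None

    In [5]: ar.successful()
    Out[5]: True

    In [6]: ar.get()           # time.sleep returned None, and no exception was raised
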
451 451 Here is where we start to expand on the multiprocessing interface. Rather than raising the
452 452 original exception, a RemoteError will be raised, encapsulating the remote exception with some
453 453 metadata. If the AsyncResult represents multiple calls (e.g. any time `targets` is plural), then
454 454 a CompositeError, a subclass of RemoteError, will be raised.
455 455
456 456 .. seealso::
457 457
458 458 For more information on remote exceptions, see :ref:`the section in the Direct Interface
459 459 <parallel_exceptions>`.
460 460
461 461 Extended interface
462 462 ******************
463 463
464 464
465 465 Other extensions of the AsyncResult interface include convenience wrappers for :meth:`get`.
466 466 AsyncResults have a property, :attr:`result`, with the short alias :attr:`r`, both of which simply call
467 467 :meth:`get`. Since our object is designed for representing *parallel* results, it is expected
468 468 that many calls (any of those submitted via DirectView) will map results to engine IDs. We
469 469 provide a :meth:`get_dict`, which is also a wrapper on :meth:`get`, which returns a dictionary
470 470 of the individual results, keyed by engine ID.
471 471
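For example (a sketch, reusing the four engines of the earlier examples):

.. sourcecode:: ipython

    In [1]: import os

    In [2]: ar = dview.apply_async(os.getpid)

    In [3]: ar.get_dict()
    Out[3]: {0: 1354, 1: 1356, 2: 1358, 3: 1360}
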
472 472 You can also prevent a submitted job from actually executing, via the AsyncResult's
473 473 :meth:`abort` method. This will instruct engines to not execute the job when it arrives.
474 474
475 475 The larger extension of the AsyncResult API is the :attr:`metadata` attribute. The metadata
476 476 is a dictionary (with attribute access) that contains, logically enough, metadata about the
477 477 execution.
478 478
479 479 Metadata keys:
480 480
481 481 timestamps
482 482
483 483 submitted
484 484 When the task left the Client
485 485 started
486 486 When the task started execution on the engine
487 487 completed
488 488 When execution finished on the engine
489 489 received
490 490 When the result arrived on the Client
491 491
492 492 note that it is not known when the result arrived in 0MQ on the client, only when it
493 493 arrived in Python via :meth:`Client.spin`, so in interactive use, this may not be
494 494 strictly informative.
495 495
496 496 Information about the engine
497 497
498 498 engine_id
499 499 The integer id
500 500 engine_uuid
501 501 The UUID of the engine
502 502
503 503 output of the call
504 504
505 505 pyerr
506 506 Python exception, if there was one
507 507 pyout
508 508 Python output
509 509 stderr
510 510 stderr stream
511 511 stdout
512 512 stdout (e.g. print) stream
513 513
514 514 And some extended information
515 515
516 516 status
517 517 either 'ok' or 'error'
518 518 msg_id
519 519 The UUID of the message
520 520 after
521 521 For tasks: the time-based msg_id dependencies
522 522 follow
523 523 For tasks: the location-based msg_id dependencies
524 524
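A quick look at the metadata of a finished task (a sketch; the exact values will differ, and it
assumes a single-engine view so that :attr:`metadata` is a single dict):

.. sourcecode:: ipython

    In [1]: ar = rc[0].apply_async(lambda : 'hi')

    In [2]: ar.get()
    Out[2]: 'hi'

    In [3]: rt = ar.metadata['completed'] - ar.metadata['submitted']   # a datetime.timedelta

    In [4]: ar.metadata.status
    Out[4]: 'ok'
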
525 525 While in most cases, the Clients that submitted a request will be the ones using the results,
526 526 other Clients can also request results directly from the Hub. This is done via the Client's
527 527 :meth:`get_result` method. This method will *always* return an AsyncResult object. If the call
528 528 was not submitted by the client, then it will be a subclass, called :class:`AsyncHubResult`.
529 529 These behave in the same way as an AsyncResult, but if the result is not ready, waiting on an
530 530 AsyncHubResult polls the Hub, which is much more expensive than the passive polling used
531 531 in regular AsyncResults.
532 532
533 533
534 534 The Client keeps track of all the requests and results it has handled, in its ``history``,
535 535 ``results``, and ``metadata`` attributes.
536 536
537 537 Querying the Hub
538 538 ================
539 539
540 540 The Hub sees all traffic that may pass through the schedulers between engines and clients.
541 541 It does this so that it can track state, allowing multiple clients to retrieve results of
542 542 computations submitted by their peers, as well as persisting the state to a database.
543 543
544 544 queue_status
545 545
546 546 You can check the status of the queues of the engines with this command.
547 547
548 548 result_status
549 549
550 550 Check on the status of one or more results, by msg_id.
551 551
552 552 purge_results
553 553
554 554 Instruct the Hub to forget stored results, in order to conserve resources.
555 555
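For instance (a sketch; the exact layout of the status dict, and the ``'all'`` argument to
:meth:`purge_results`, are assumptions that may vary between versions):

.. sourcecode:: ipython

    In [1]: rc.queue_status()
    Out[1]:
    {0: {'completed': 36, 'queue': 0, 'tasks': 0},
     1: {'completed': 35, 'queue': 0, 'tasks': 0},
     'unassigned': 0}

    In [2]: rc.purge_results('all')        # ask the Hub to forget all stored results
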
556 556 Controlling the Engines
557 557 =======================
558 558
559 559 There are a few actions you can perform on Engines that do not involve execution. These
560 560 messages are sent via the Control socket, and bypass any long queues of waiting execution
561 561 jobs.
562 562
563 563 abort
564 564
565 565 Sometimes you may want to prevent a job you have submitted from actually running. The method
566 566 for this is :meth:`abort`. It takes a container of msg_ids, and instructs the Engines to not
567 567 run the jobs if they arrive. The jobs will then fail with an AbortedTask error.
568 568
569 569 clear
570 570
571 571 You may want to purge the Engine(s) namespace of any data you have left in it. After
572 572 running `clear`, there will be no names in the Engine's namespace.
573 573
574 574 shutdown
575 575
576 576 You can also instruct engines (and the Controller) to terminate from a Client. This
577 577 can be useful when a job is finished, since you can shut down all the processes with a
578 578 single command.
579 579
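For example (a sketch, assuming a DirectView ``dview`` and its Client ``rc``):

.. sourcecode:: ipython

    In [1]: import time

    In [2]: ar = dview.apply_async(time.sleep, 60)

    In [3]: ar.abort()              # ask the engines not to run it, if it is still queued

    In [4]: dview.clear()           # wipe the engines' namespaces

    In [5]: rc.shutdown(hub=True)   # stop the engines and the controller
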
580 580 Synchronization
581 581 ===============
582 582
583 583 Since the Client is a synchronous object, events do not automatically trigger in your
584 584 interactive session - you must poll the 0MQ sockets for incoming messages. Note that
585 585 this polling *does not* actually make any network requests. It simply performs a `select`
586 586 operation, to check if messages are already in local memory, waiting to be handled.
587 587
588 588 The method that handles incoming messages is :meth:`spin`. This method flushes any waiting
589 589 messages on the various incoming sockets, and updates the state of the Client.
590 590
591 591 If you need to wait for particular results to finish, you can use the :meth:`wait` method,
592 592 which will call :meth:`spin` until the messages are no longer outstanding. Anything that
593 593 represents a collection of messages, such as a list of msg_ids or one or more AsyncResult
594 594 objects, can be passed as argument to wait. A timeout can be specified, which will prevent
595 595 the call from blocking for more than a specified time, but the default behavior is to wait
596 596 forever.
597 597
598 598 The client also has an ``outstanding`` attribute - a ``set`` of msg_ids that are awaiting
599 599 replies. This is the default if wait is called with no arguments - i.e. wait on *all*
600 600 outstanding messages.
601 601
602 602
603 603 .. note::
604 604
605 605 TODO wait example
606 606
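A minimal sketch (assuming a connected Client ``rc`` and a view ``view``):

.. sourcecode:: ipython

    In [1]: import time

    In [2]: ars = [view.apply_async(time.sleep, t) for t in (1, 2, 3)]

    In [3]: rc.wait(ars, timeout=10)    # spin until all three finish, or 10 seconds pass
    Out[3]: True

    In [4]: len(rc.outstanding)         # nothing left awaiting a reply
    Out[4]: 0
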
607 607 Map
608 608 ===
609 609
610 610 Many parallel computing problems can be expressed as a ``map``, or running a single program with
611 611 a variety of different inputs. Python has a built-in :py:func:`map`, which does exactly this,
612 612 and many parallel execution tools in Python, such as the built-in
613 613 :py:class:`multiprocessing.Pool` object, provide implementations of `map`. All View objects
614 614 provide a :meth:`map` method as well, but the load-balanced and direct implementations differ.
615 615
616 616 Views' map methods can be called on any number of sequences, but they can also take the `block`
617 617 and `bound` keyword arguments, just like :meth:`~client.apply`, but *only as keywords*.
618 618
619 619 .. sourcecode:: python
620 620
621 621 dview.map(*sequences, block=None)
622 622
623 623
624 624 * iter, map_async, reduce
625 625
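For instance, a blocking element-wise map over two sequences (a sketch, assuming a connected
DirectView ``dview``):

.. sourcecode:: ipython

    In [1]: dview.map_sync(lambda x, y: x + y, range(4), range(4))
    Out[1]: [0, 2, 4, 6]
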
626 626 Decorators and RemoteFunctions
627 627 ==============================
628 628
629 629 .. note::
630 630
631 631 TODO: write this section
632 632
633 633 :func:`~IPython.parallel.client.remotefunction.@parallel`
634 634
635 635 :func:`~IPython.parallel.client.remotefunction.@remote`
636 636
637 637 :class:`~IPython.parallel.client.remotefunction.RemoteFunction`
638 638
639 639 :class:`~IPython.parallel.client.remotefunction.ParallelFunction`
640 640
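A brief sketch of the two decorators (assuming a connected DirectView ``view`` with four engines):

.. sourcecode:: ipython

    In [1]: @view.remote(block=True)
       ...: def getpid():
       ...:     import os
       ...:     return os.getpid()

    In [2]: getpid()                    # runs on every engine in the view
    Out[2]: [1354, 1356, 1358, 1360]

    In [3]: @view.parallel(block=True)
       ...: def double(x):
       ...:     return 2 * x

    In [4]: double.map(range(4))        # element-wise, distributed across the engines
    Out[4]: [0, 2, 4, 6]
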
641 641 Dependencies
642 642 ============
643 643
644 644 .. note::
645 645
646 646 TODO: write this section
647 647
648 648 :func:`~IPython.parallel.controller.dependency.@depend`
649 649
650 650 :func:`~IPython.parallel.controller.dependency.@require`
651 651
652 652 :class:`~IPython.parallel.controller.dependency.Dependency`
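
A brief sketch of functional dependencies (assuming a LoadBalancedView ``lview``; the helper
``platform_is`` is purely illustrative):

.. sourcecode:: ipython

    In [1]: from IPython.parallel import depend, require

    In [2]: def platform_is(plat):
       ...:     import sys
       ...:     return sys.platform.startswith(plat)

    In [3]: @depend(platform_is, 'linux')
       ...: def linux_only():
       ...:     """only run on engines where the dependency check returns True"""
       ...:     import os
       ...:     return os.uname()

    In [4]: ar = lview.apply_async(linux_only)

    In [5]: @require('numpy')
       ...: def norm2(x):
       ...:     import numpy
       ...:     return numpy.linalg.norm(x, 2)

    In [6]: ar2 = lview.apply_async(norm2, [3, 4])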