[A number of new image files were added in this changeset (new file mode 100644); their binary diffs are not shown.]
@@ -75,7 +75,7 @@ The resulting plot of the single digit counts shows that each digit occurs
 approximately 1,000 times, but that with only 10,000 digits the
 statistical fluctuations are still rather large:

-.. image::
+.. image:: single_digits.*

 It is clear that to reduce the relative fluctuations in the counts, we need
 to look at many more digits of pi. That brings us to the parallel calculation.
@@ -188,7 +188,7 @@ most likely and that "06" and "07" are least likely. Further analysis would
 show that the relative size of the statistical fluctuations have decreased
 compared to the 10,000 digit calculation.

-.. image::
+.. image:: two_digit_counts.*


 Parallel options pricing
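
For context, a minimal sketch of how the two-digit counts discussed in this hunk could be tallied; the ``digits`` string and the ``two_digit_counts`` helper are illustrative, not part of this changeset:

.. sourcecode:: python

    from collections import Counter

    def two_digit_counts(digits):
        """Count overlapping two-digit pairs '00'..'99' in a string of digits."""
        pairs = (digits[i:i + 2] for i in range(len(digits) - 1))
        counts = Counter(pairs)
        # Include every pair in the result, even those that never occur.
        return {"%02d" % n: counts.get("%02d" % n, 0) for n in range(100)}

    print(two_digit_counts("1415926535897932384626433832795028841971"))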
@@ -257,9 +257,9 @@ entire calculation (10 strike prices, 10 volatilities, 100,000 paths for each)
 took 30 seconds in parallel, giving a speedup of 7.7x, which is comparable
 to the speedup observed in our previous example.

-.. image::
+.. image:: asian_call.*

-.. image::
+.. image:: asian_put.*

 Conclusion
 ==========
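
As a rough sketch of the kind of Monte Carlo pricing referred to in this hunk; the parameters and the plain-NumPy, single-process form are assumptions for illustration only (the changeset's own example distributes the strike/volatility grid across engines):

.. sourcecode:: python

    import numpy as np

    def asian_call_mc(S0=100.0, K=100.0, r=0.05, sigma=0.2, T=1.0,
                      n_steps=100, n_paths=100000, seed=0):
        """Price an arithmetic-average Asian call by plain Monte Carlo."""
        rng = np.random.default_rng(seed)
        dt = T / n_steps
        # Simulate log-price increments of geometric Brownian motion.
        z = rng.standard_normal((n_paths, n_steps))
        increments = (r - 0.5 * sigma ** 2) * dt + sigma * np.sqrt(dt) * z
        paths = S0 * np.exp(np.cumsum(increments, axis=1))
        # Payoff depends on the arithmetic average price along each path.
        payoff = np.maximum(paths.mean(axis=1) - K, 0.0)
        return np.exp(-r * T) * payoff.mean()

    print(asian_call_mc())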
@@ -363,11 +363,151 @@ Reference
 Results
 =======

-AsyncResults are the primary class
-
-get_result
-
-results, metadata
+AsyncResults
+------------
+
+Our primary representation is the AsyncResult object, based on the object of the same name in
+the built-in :mod:`multiprocessing.pool` module. Our version provides a superset of that
+interface.
+
+The basic principle of the AsyncResult is the encapsulation of one or more results not yet completed. Execution methods (including data movement, such as push/pull) will all return
+AsyncResults when `block=False`.
+
+The mp.pool.AsyncResult interface
+---------------------------------
+
+The basic interface of the AsyncResult is exactly that of the AsyncResult in :mod:`multiprocessing.pool`, and consists of four methods:
+
+.. AsyncResult spec directly from docs.python.org
+
+.. class:: AsyncResult
+
+    The stdlib AsyncResult spec
+
+    .. method:: wait([timeout])
+
+        Wait until the result is available or until *timeout* seconds pass. This
+        method always returns ``None``.
+
+    .. method:: ready()
+
+        Return whether the call has completed.
+
+    .. method:: successful()
+
+        Return whether the call completed without raising an exception. Will
+        raise :exc:`AssertionError` if the result is not ready.
+
+    .. method:: get([timeout])
+
+        Return the result when it arrives. If *timeout* is not ``None`` and the
+        result does not arrive within *timeout* seconds then
+        :exc:`TimeoutError` is raised. If the remote call raised
+        an exception then that exception will be reraised as a :exc:`RemoteError`
+        by :meth:`get`.
+
+
+While an AsyncResult is not done, you can check on it with its :meth:`ready` method, which will
+return whether the AR is done. You can also wait on an AsyncResult with its :meth:`wait` method.
+This method blocks until the result arrives. If you don't want to wait forever, you can pass a
+timeout (in seconds) as an argument to :meth:`wait`. :meth:`wait` will *always return None*, and
+should never raise an error.
+
+:meth:`ready` and :meth:`wait` are insensitive to the success or failure of the call. After a
+result is done, :meth:`successful` will tell you whether the call completed without raising an
+exception.
+
+If you actually want the result of the call, you can use :meth:`get`. Initially, :meth:`get`
+behaves just like :meth:`wait`, in that it will block until the result is ready, or until a
+timeout is met. However, unlike :meth:`wait`, :meth:`get` will raise a :exc:`TimeoutError` if
+the timeout is reached and the result is still not ready. If the result arrives before the
+timeout is reached, then :meth:`get` will return the result itself if no exception was raised,
+and will raise an exception if there was.
+
+Here is where we start to expand on the multiprocessing interface. Rather than raising the
+original exception, a RemoteError will be raised, encapsulating the remote exception with some
+metadata. If the AsyncResult represents multiple calls (e.g. any time `targets` is plural), then
+a CompositeError, a subclass of RemoteError, will be raised.
+
+.. seealso::
+
+    For more information on remote exceptions, see :ref:`the section in the Direct Interface
+    <Parallel_exceptions>`.
+
+Extended interface
+******************
+
+
+Other extensions of the AsyncResult interface include convenience wrappers for :meth:`get`.
+AsyncResults have a property, :attr:`result`, with the short alias :attr:`r`, which simply call
+:meth:`get`. Since our object is designed for representing *parallel* results, it is expected
+that many calls (any of those submitted via DirectView) will map results to engine IDs. We
+provide a :meth:`get_dict`, which is also a wrapper on :meth:`get`, which returns a dictionary
+of the individual results, keyed by engine ID.
+
+You can also prevent a submitted job from actually executing, via the AsyncResult's :meth:`abort` method. This will instruct engines to not execute the job when it arrives.
+
+The larger extension of the AsyncResult API is the :attr:`metadata` attribute. The metadata
+is a dictionary (with attribute access) that contains, logically enough, metadata about the
+execution.
+
+Metadata keys:
+
+timestamps
+
+    submitted
+        When the task left the Client
+    started
+        When the task started execution on the engine
+    completed
+        When execution finished on the engine
+    received
+        When the result arrived on the Client
+
+    note that it is not known when the result arrived in 0MQ on the client, only when it
+    arrived in Python via :meth:`Client.spin`, so in interactive use, this may not be
+    strictly informative.
+
+Information about the engine
+
+    engine_id
+        The integer id
+    engine_uuid
+        The UUID of the engine
+
+output of the call
+
+    pyerr
+        Python exception, if there was one
+    pyout
+        Python output
+    stderr
+        stderr stream
+    stdout
+        stdout (e.g. print) stream
+
+And some extended information
+
+    status
+        either 'ok' or 'error'
+    msg_id
+        The UUID of the message
+    after
+        For tasks: the time-based msg_id dependencies
+    follow
+        For tasks: the location-based msg_id dependencies
+
+While in most cases, the Clients that submitted a request will be the ones using the results,
+other Clients can also request results directly from the Hub. This is done via the Client's
+:meth:`get_result` method. This method will *always* return an AsyncResult object. If the call
+was not submitted by the client, then it will be a subclass, called :class:`AsyncHubResult`.
+These behave in the same way as an AsyncResult, but if the result is not ready, waiting on an
+AsyncHubResult polls the Hub, which is much more expensive than the passive polling used
+in regular AsyncResults.
+
+
+The Client keeps track of all results
+history, results, metadata

 Querying the Hub
 ================
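
To make the interface added in this hunk concrete, a hedged usage sketch; the import path and the Client/DirectView setup are assumptions (not quoted from this changeset), and the outputs assume ``a = 5`` is already defined on two engines:

.. sourcecode:: ipython

    In [1]: from IPython.parallel import Client   # import path assumed

    In [2]: rc = Client()

    In [3]: dview = rc[:]

    In [4]: ar = dview.pull('a', targets=[0, 1], block=False)   # returns an AsyncResult

    In [5]: ar.ready()          # has the call completed?
    Out[5]: True

    In [6]: ar.wait(1)          # block for at most 1 s; always returns None

    In [7]: ar.successful()     # raises AssertionError if not ready yet
    Out[7]: True

    In [8]: ar.get()            # or ar.result / ar.r
    Out[8]: [5, 5]

    In [9]: ar.get_dict()       # the same results, keyed by engine ID
    Out[9]: {0: 5, 1: 5}

    In [10]: ar.metadata        # per-call execution metadata (submitted, started, ...)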
@@ -382,8 +522,12 @@ queue_status

 result_status

+check on results
+
 purge_results

+forget results (conserve resources)
+
 Controlling the Engines
 =======================

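
A brief, hedged illustration of the Hub queries noted above; the exact argument forms (``msg_ids``, ``jobs``) are assumptions about the Client API rather than text from this changeset:

.. sourcecode:: ipython

    In [1]: rc.queue_status()                   # per-engine queue and completed counts

    In [2]: rc.result_status(ar.msg_ids)        # check on results by message ID

    In [3]: rc.purge_results(jobs=ar.msg_ids)   # forget results to conserve resources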
@@ -80,8 +80,8 @@ arrays and buffers, there is also a `track` flag, which instructs PyZMQ to produ

 The result of a non-blocking call to `apply` is now an AsyncResult_ object, described below.

-MultiEngine
-===========
+MultiEngine to DirectView
+=========================

 The multiplexing interface previously provided by the MultiEngineClient is now provided by the
 DirectView. Once you have a Client connected, you can create a DirectView with index-access
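
A short sketch of the index-access construction mentioned in this hunk; the import path is assumed:

.. sourcecode:: ipython

    In [1]: from IPython.parallel import Client   # import path assumed

    In [2]: rc = Client()

    In [3]: dview = rc[:]      # DirectView on all engines

    In [4]: dv0 = rc[0]        # DirectView on a single engine

    In [5]: dv_even = rc[::2]  # DirectView on a slice of engines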
@@ -131,8 +131,8 @@ the natural return value is the actual Python objects. It is no longer the recom
 to use stdout as your results, due to stream decoupling and the asynchronous nature of how the
 stdout streams are handled in the new system.

-Task
-====
+Task to LoadBalancedView
+========================

 Load-Balancing has changed more than Multiplexing. This is because there is no longer a notion
 of a StringTask or a MapTask, there are simply Python functions to call. Tasks are now
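
As a sketch of submitting a plain Python function to the load-balanced scheduler; ``load_balanced_view`` and ``apply_async`` are assumed from the Client/View API rather than quoted from this hunk:

.. sourcecode:: ipython

    In [1]: lview = rc.load_balanced_view()

    In [2]: def slow_square(x):
       ...:     import time
       ...:     time.sleep(1)
       ...:     return x * x
       ...:

    In [3]: ar = lview.apply_async(slow_square, 7)   # runs on whichever engine is free

    In [4]: ar.get()
    Out[4]: 49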
@@ -203,18 +203,8 @@ the engine beyond the duration of the task.
 LoadBalancedView.


-.. _AsyncResult:

-PendingResults
-==============
-
-Since we no longer use Twisted, we also lose the use of Deferred objects. The results of
-non-blocking calls were represented as PendingDeferred or PendingResult objects. The object used
-for this in the new code is an AsyncResult object. The AsyncResult object is based on the object
-of the same name in the built-in :py-mod:`multiprocessing.pool` module. Our version provides a
-superset of that interface.
-
-Some things that behave the same:
+There are still some things that behave the same as IPython.kernel:

 .. sourcecode:: ipython

@@ -224,7 +214,7 @@ Some things that behave the same:
 Out[6]: [5, 5]

 # new
-In [5]: ar =
+In [5]: ar = dview.pull('a', targets=[0,1], block=False)
 In [6]: ar.r
 Out[6]: [5, 5]

@@ -80,19 +80,16 @@ These packages provide a powerful and cost-effective approach to numerical and
 scientific computing on Windows. The following dependencies are needed to run
 IPython on Windows:

-* Python 2.
+* Python 2.6 or 2.7 (http://www.python.org)
 * pywin32 (http://sourceforge.net/projects/pywin32/)
 * PyReadline (https://launchpad.net/pyreadline)
-* zope.interface and Twisted (http://twistedmatrix.com)
-* Foolcap (http://foolscap.lothar.com/trac)
-* pyOpenSSL (https://launchpad.net/pyopenssl)
+* pyzmq (http://github.com/zeromq/pyzmq/downloads)
 * IPython (http://ipython.scipy.org)

 In addition, the following dependencies are needed to run the demos described
 in this document.

 * NumPy and SciPy (http://www.scipy.org)
-* wxPython (http://www.wxpython.org)
 * Matplotlib (http://matplotlib.sourceforge.net/)

 The easiest way of obtaining these dependencies is through the Enthought
@@ -109,7 +106,7 @@ need to follow:
 1. Install all of the packages listed above, either individually or using EPD
    on the head node, compute nodes and user workstations.

-2. Make sure that :file:`C:\\Python2
+2. Make sure that :file:`C:\\Python27` and :file:`C:\\Python27\\Scripts` are
    in the system :envvar:`%PATH%` variable on each node.

 3. Install the latest development version of IPython. This can be done by
@@ -123,7 +120,7 @@ opening a Windows Command Prompt and typing ``ipython``. This will
 start IPython's interactive shell and you should see something like the
 following screenshot:

-.. image::
+.. image:: ipython_shell.*

 Starting an IPython cluster
 ===========================
@@ -171,7 +168,7 @@ You should see a number of messages printed to the screen, ending with
 "IPython cluster: started". The result should look something like the following
 screenshot:

-.. image::
+.. image:: ipcluster_start.*

 At this point, the controller and two engines are running on your local host.
 This configuration is useful for testing and for situations where you want to
@@ -213,7 +210,7 @@ The output of this command is shown in the screenshot below. Notice how
 :command:`ipclusterz` prints out the location of the newly created cluster
 directory.

-.. image::
+.. image:: ipcluster_create.*

 Configuring a cluster profile
 -----------------------------
@@ -282,7 +279,7 @@ must be run again to regenerate the XML job description files. The
 following screenshot shows what the HPC Job Manager interface looks like
 with a running IPython cluster.

-.. image::
+.. image:: hpc_job_manager.*

 Performing a simple interactive parallel computation
 ====================================================
@@ -333,5 +330,5 @@ The :meth:`map` method has the same signature as Python's builtin :func:`map`
 function, but runs the calculation in parallel. More involved examples of using
 :class:`MultiEngineClient` are provided in the examples that follow.

-.. image::
+.. image:: mec_simple.*

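
A hedged sketch of the parallel map mentioned just above, written against the newer DirectView API rather than :class:`MultiEngineClient`; ``map_sync`` is an assumption, not part of this document:

.. sourcecode:: ipython

    In [1]: dview = rc[:]

    In [2]: serial = list(map(lambda x: x ** 10, range(32)))

    In [3]: parallel = dview.map_sync(lambda x: x ** 10, range(32))

    In [4]: serial == parallel
    Out[4]: True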