.. _parallel_asyncresult:

======================
The AsyncResult object
======================

In non-blocking mode, :meth:`apply` submits the command to be executed and
then returns an :class:`~.AsyncResult` object immediately. The
AsyncResult object gives you a way of getting a result at a later
time through its :meth:`get` method, but it also collects metadata
on execution.


Beyond multiprocessing's AsyncResult
====================================

.. note::

    The :class:`~.AsyncResult` object provides a superset of the interface in
    :py:class:`multiprocessing.pool.AsyncResult`. See the
    `official Python documentation <http://docs.python.org/library/multiprocessing#multiprocessing.pool.AsyncResult>`_
    for more on the basics of this interface.

Our AsyncResult objects add a number of convenient features for working with
parallel results, beyond what is provided by the original AsyncResult.


get_dict
--------

First is :meth:`.AsyncResult.get_dict`, which pulls results as a dictionary
keyed by engine_id, rather than as a flat list. This is useful for quickly
coordinating or distributing information about all of the engines.

As an example, here is a quick call that gives every engine a dict showing
the PID of every other engine:

.. sourcecode:: ipython

    In [10]: ar = rc[:].apply_async(os.getpid)

    In [11]: pids = ar.get_dict()

    In [12]: rc[:]['pid_map'] = pids

This trick is particularly useful when setting up inter-engine communication,
as in IPython's :file:`examples/parallel/interengine` examples.
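The shape of the value :meth:`get_dict` returns can be sketched in plain
Python, with no cluster required; the engine ids and PIDs below are
hypothetical stand-ins:

```python
# Plain-Python sketch of what get_dict returns: one entry per engine,
# keyed by engine id. The ids and PIDs here are hypothetical.
engine_ids = [0, 1, 2, 3]
results = [1001, 1002, 1003, 1004]  # e.g. one os.getpid() result per engine

pid_map = dict(zip(engine_ids, results))
print(pid_map)  # {0: 1001, 1: 1002, 2: 1003, 3: 1004}
```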


Metadata
========

IPython.parallel tracks some metadata about the tasks, which is stored
in the :attr:`.Client.metadata` dict. The AsyncResult object gives you an
interface to this information as well, including timestamps, stdout/err,
and engine IDs.


Timing
------

IPython tracks various timestamps as :py:class:`.datetime` objects,
and the AsyncResult object has a few properties that turn these into useful
times (in seconds, as floats).

For use while the tasks are still pending:

* :attr:`ar.elapsed` is just the elapsed seconds since submission, for use
  before the AsyncResult is complete.
* :attr:`ar.progress` is the number of tasks that have completed. Fractional
  progress would be::

      1.0 * ar.progress / len(ar)

* :meth:`AsyncResult.wait_interactive` will wait for the result to finish, but
  print out status updates on progress and elapsed time while it waits.
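The arithmetic behind ``elapsed`` and fractional progress can be illustrated
with a hypothetical stand-in object; this is just a sketch of the two
quantities, not IPython's implementation:

```python
import datetime

# Hypothetical stand-in for an AsyncResult, only to show the arithmetic
# behind `elapsed` and fractional progress; not IPython's implementation.
class FakeAsyncResult:
    def __init__(self, n_tasks, submitted):
        self._n_tasks = n_tasks
        self._submitted = submitted
        self.progress = 0  # number of tasks completed so far

    def __len__(self):
        return self._n_tasks

    @property
    def elapsed(self):
        # seconds since submission, as a float
        return (datetime.datetime.now() - self._submitted).total_seconds()

ar = FakeAsyncResult(4, datetime.datetime.now())
ar.progress = 3
print(1.0 * ar.progress / len(ar))  # 0.75
```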

For use after the tasks are done:

* :attr:`ar.serial_time` is the sum of the computation time of all of the tasks
  done in parallel.
* :attr:`ar.wall_time` is the time between the first task submitted and the last
  result received. This is the actual cost of computation, including IPython
  overhead.

.. note::

    `wall_time` is only precise if the Client is waiting for results when
    the task finished, because the `received` timestamp is taken when the result
    is unpacked by the Client, triggered by the :meth:`~Client.spin` call. If you
    are doing work in the Client, and not waiting/spinning, then `received` might
    be artificially high.

An often interesting metric is the time it actually took to do the work in
parallel relative to the serial computation, which can be computed simply with

.. sourcecode:: python

    speedup = ar.serial_time / ar.wall_time
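How ``serial_time``, ``wall_time``, and the speedup relate can be illustrated
with hypothetical per-task timestamps; the real values come from the task
metadata, so the numbers below are made up for the sketch:

```python
from datetime import datetime, timedelta

# Hypothetical (started, completed) timestamps for four tasks, all
# submitted at t0; the real timestamps live in the task metadata.
t0 = datetime(2012, 1, 1, 12, 0, 0)
tasks = [
    (t0, t0 + timedelta(seconds=2)),
    (t0, t0 + timedelta(seconds=3)),
    (t0, t0 + timedelta(seconds=2)),
    (t0, t0 + timedelta(seconds=3)),
]
received = t0 + timedelta(seconds=4)  # when the last result was unpacked

# serial_time sums per-task computation; wall_time spans submit -> receive
serial_time = sum((done - start).total_seconds() for start, done in tasks)
wall_time = (received - t0).total_seconds()
print(serial_time, wall_time, serial_time / wall_time)  # 10.0 4.0 2.5
```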


Map results are iterable!
=========================

When an AsyncResult object has multiple results (e.g. the :class:`~AsyncMapResult`
object), you can actually iterate through them, and act on the results as they
arrive:

.. literalinclude:: ../../examples/parallel/itermapresult.py
    :language: python
    :lines: 20-67

.. seealso::

    When AsyncResult or the AsyncMapResult doesn't provide what you need (for
    instance, handling individual results as they arrive, but with metadata),
    you can always just split the original result's ``msg_ids`` attribute, and
    handle them as you like.

    For an example of this, see :file:`docs/examples/parallel/customresult.py`.
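For comparison, the standard library offers a similar handle-results-as-they-finish
pattern via :py:func:`concurrent.futures.as_completed`; this sketch uses local
threads rather than IPython engines, and unlike an AsyncMapResult it does not
preserve submission order:

```python
import concurrent.futures
import time

def work(x):
    time.sleep(0.01 * x)  # stand-in for a real task
    return x * x

# as_completed yields each future as soon as it finishes, loosely
# analogous to iterating over an AsyncMapResult.
with concurrent.futures.ThreadPoolExecutor(max_workers=4) as pool:
    futures = [pool.submit(work, x) for x in range(4)]
    results = [f.result() for f in concurrent.futures.as_completed(futures)]

print(sorted(results))  # [0, 1, 4, 9]
```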