move parallel doc figures into 'figs' subdir...
MinRK
@@ -10,7 +10,7 @@ for working with Graphs is NetworkX_. Here, we will walk through a demo mapping
10 10 a nx DAG to task dependencies.
11 11
12 12 The full script that runs this demo can be found in
13 :file:`docs/examples/newparallel/dagdeps.py`.
13 :file:`docs/examples/parallel/dagdeps.py`.
14 14
15 15 Why are DAGs good for task dependencies?
16 16 ----------------------------------------
@@ -30,7 +30,7 @@ A Sample DAG
30 30
31 31 Here, we have a very simple 5-node DAG:
32 32
33 .. figure:: simpledag.*
33 .. figure:: figs/simpledag.*
34 34
35 35 With NetworkX, an arrow is just a fattened bit on the edge. Here, we can see that task 0
36 36 depends on nothing, and can run immediately. 1 and 2 depend on 0; 3 depends on
@@ -80,7 +80,7 @@ The code to generate the simple DAG:
80 80 For demonstration purposes, we have a function that generates a random DAG with a given
81 81 number of nodes and edges.
82 82
83 .. literalinclude:: ../../examples/newparallel/dagdeps.py
83 .. literalinclude:: ../../examples/parallel/dagdeps.py
84 84 :language: python
85 85 :lines: 20-36
86 86
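For orientation, here is a hedged sketch of what such a generator can look like (the function name and the low-to-high edge orientation trick are assumptions, not necessarily how ``dagdeps.py`` does it): edges are only ever added from a lower-numbered node to a higher-numbered one, which guarantees the result is acyclic.

.. sourcecode:: python

    import random
    import networkx as nx

    def random_dag(nodes, edges):
        """Sketch: build a random DAG with `nodes` nodes and `edges` edges."""
        G = nx.DiGraph()
        G.add_nodes_from(range(nodes))
        while edges > 0:
            a, b = random.sample(range(nodes), 2)
            if a > b:
                a, b = b, a          # orient low -> high so no cycle can form
            if not G.has_edge(a, b):
                G.add_edge(a, b)
                edges -= 1
        return G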
@@ -137,7 +137,7 @@ These objects store a variety of metadata about each task, including various tim
137 137 We can validate that the dependencies were respected by checking that each task was
138 138 started after all of its predecessors were completed:
139 139
140 .. literalinclude:: ../../examples/newparallel/dagdeps.py
140 .. literalinclude:: ../../examples/parallel/dagdeps.py
141 141 :language: python
142 142 :lines: 64-70
143 143
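A minimal sketch of that check, assuming ``G`` is the NetworkX DiGraph and ``results`` maps each node to the AsyncResult returned when it was submitted (these names are assumptions, not the exact code in ``dagdeps.py``):

.. sourcecode:: python

    # each task must have started only after all of its parents completed
    for node in G:
        started = results[node].metadata['started']
        for parent in G.predecessors(node):
            completed = results[parent].metadata['completed']
            assert started > completed, "dependency violated at node %r" % node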
@@ -164,7 +164,7 @@ will be at the top, and quick, small tasks will be at the bottom.
164 164 In [13]: nx.draw(G, pos, node_list=colors.keys(), node_color=colors.values(),
165 165 ...: cmap=gist_rainbow)
166 166
167 .. figure:: dagdeps.*
167 .. figure:: figs/dagdeps.*
168 168
169 169 Time started on x, runtime on y, and color-coded by engine-id (in this case there
170 170 were four engines). Edges denote dependencies.
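A sketch of how ``pos`` and ``colors`` might be built from the task metadata before the ``nx.draw`` call above (the exact metadata fields and the reference point used here are assumptions):

.. sourcecode:: python

    # x = start time relative to the earliest task, y = runtime, color = engine
    t0 = min(results[n].metadata['started'] for n in G)

    pos, colors = {}, {}
    for node in G:
        md = results[node].metadata
        x = (md['started'] - t0).total_seconds()
        y = (md['completed'] - md['started']).total_seconds()
        pos[node] = (x, y)
        colors[node] = md['engine_id']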
1 NO CONTENT: file renamed from docs/source/parallel/asian_call.pdf to docs/source/parallel/figs/asian_call.pdf
1 NO CONTENT: file renamed from docs/source/parallel/asian_call.png to docs/source/parallel/figs/asian_call.png
1 NO CONTENT: file renamed from docs/source/parallel/asian_put.pdf to docs/source/parallel/figs/asian_put.pdf
1 NO CONTENT: file renamed from docs/source/parallel/asian_put.png to docs/source/parallel/figs/asian_put.png
1 NO CONTENT: file renamed from docs/source/parallel/dagdeps.pdf to docs/source/parallel/figs/dagdeps.pdf
1 NO CONTENT: file renamed from docs/source/parallel/dagdeps.png to docs/source/parallel/figs/dagdeps.png
1 NO CONTENT: file renamed from docs/source/parallel/hpc_job_manager.pdf to docs/source/parallel/figs/hpc_job_manager.pdf
1 NO CONTENT: file renamed from docs/source/parallel/hpc_job_manager.png to docs/source/parallel/figs/hpc_job_manager.png
1 NO CONTENT: file renamed from docs/source/parallel/ipcluster_create.pdf to docs/source/parallel/figs/ipcluster_create.pdf
1 NO CONTENT: file renamed from docs/source/parallel/ipcluster_create.png to docs/source/parallel/figs/ipcluster_create.png
1 NO CONTENT: file renamed from docs/source/parallel/ipcluster_start.pdf to docs/source/parallel/figs/ipcluster_start.pdf
1 NO CONTENT: file renamed from docs/source/parallel/ipcluster_start.png to docs/source/parallel/figs/ipcluster_start.png
1 NO CONTENT: file renamed from docs/source/parallel/ipython_shell.pdf to docs/source/parallel/figs/ipython_shell.pdf
1 NO CONTENT: file renamed from docs/source/parallel/ipython_shell.png to docs/source/parallel/figs/ipython_shell.png
1 NO CONTENT: file renamed from docs/source/parallel/mec_simple.pdf to docs/source/parallel/figs/mec_simple.pdf
1 NO CONTENT: file renamed from docs/source/parallel/mec_simple.png to docs/source/parallel/figs/mec_simple.png
1 NO CONTENT: file renamed from docs/source/parallel/parallel_pi.pdf to docs/source/parallel/figs/parallel_pi.pdf
1 NO CONTENT: file renamed from docs/source/parallel/parallel_pi.png to docs/source/parallel/figs/parallel_pi.png
1 NO CONTENT: file renamed from docs/source/parallel/simpledag.pdf to docs/source/parallel/figs/simpledag.pdf
1 NO CONTENT: file renamed from docs/source/parallel/simpledag.png to docs/source/parallel/figs/simpledag.png
1 NO CONTENT: file renamed from docs/source/parallel/single_digits.pdf to docs/source/parallel/figs/single_digits.pdf
1 NO CONTENT: file renamed from docs/source/parallel/single_digits.png to docs/source/parallel/figs/single_digits.png
1 NO CONTENT: file renamed from docs/source/parallel/two_digit_counts.pdf to docs/source/parallel/figs/two_digit_counts.pdf
1 NO CONTENT: file renamed from docs/source/parallel/two_digit_counts.png to docs/source/parallel/figs/two_digit_counts.png
@@ -4,7 +4,7 @@ Parallel examples
4 4
5 5 .. note::
6 6
7 Performance numbers from ``IPython.kernel``, not newparallel.
7 Performance numbers from ``IPython.kernel``, not the new ``IPython.parallel``.
8 8
9 9 In this section we describe two more involved examples of using an IPython
10 10 cluster to perform a parallel computation. In these examples, we will be using
@@ -27,7 +27,7 @@ million digits.
27 27
28 28 In both the serial and parallel calculation we will be using functions defined
29 29 in the :file:`pidigits.py` file, which is available in the
30 :file:`docs/examples/newparallel` directory of the IPython source distribution.
30 :file:`docs/examples/parallel` directory of the IPython source distribution.
31 31 These functions provide basic facilities for working with the digits of pi and
32 32 can be loaded into IPython by putting :file:`pidigits.py` in your current
33 33 working directory and then doing:
@@ -75,7 +75,7 @@ The resulting plot of the single digit counts shows that each digit occurs
75 75 approximately 1,000 times, but that with only 10,000 digits the
76 76 statistical fluctuations are still rather large:
77 77
78 .. image:: single_digits.*
78 .. image:: figs/single_digits.*
79 79
80 80 It is clear that to reduce the relative fluctuations in the counts, we need
81 81 to look at many more digits of pi. That brings us to the parallel calculation.
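As a rough sketch of the counting behind the figure above (not the actual ``pidigits.py`` implementation; the function name is an assumption):

.. sourcecode:: python

    import numpy as np

    def one_digit_freqs(digits):
        """Count how often each digit 0-9 occurs in a string of pi digits."""
        freqs = np.zeros(10, dtype=int)
        for d in digits:
            freqs[int(d)] += 1
        return freqs

With a 10,000-character string of digits, ``one_digit_freqs(digits)`` returns the ten counts that are plotted as the bar chart.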
@@ -101,13 +101,13 @@ compute the two digit counts for the digits in a single file. Then in a final
101 101 step the counts from each engine will be added up. To perform this
102 102 calculation, we will need two top-level functions from :file:`pidigits.py`:
103 103
104 .. literalinclude:: ../../examples/newparallel/pidigits.py
104 .. literalinclude:: ../../examples/parallel/pi/pidigits.py
105 105 :language: python
106 106 :lines: 47-62
107 107
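A hedged sketch of what that pair of functions amounts to, a per-file counting step and a reduction step (the names and details here are assumptions, not copies of ``pidigits.py``):

.. sourcecode:: python

    import numpy as np

    def two_digit_freqs_from_file(filename):
        """Count occurrences of each two-digit pair '00'..'99' in one file."""
        digits = open(filename).read().strip()
        freqs = np.zeros(100, dtype=int)
        for i in range(len(digits) - 1):
            freqs[int(digits[i:i + 2])] += 1
        return freqs

    def reduce_freqs(freq_list):
        """Add up the per-engine counts into a single array."""
        return sum(freq_list)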
108 108 We will also use the :func:`plot_two_digit_freqs` function to plot the
109 109 results. The code to run this calculation in parallel is contained in
110 :file:`docs/examples/newparallel/parallelpi.py`. This code can be run in parallel
110 :file:`docs/examples/parallel/parallelpi.py`. This code can be run in parallel
111 111 using IPython by following these steps:
112 112
113 113 1. Use :command:`ipcluster` to start 15 engines. We used an 8 core (2 quad
@@ -188,7 +188,7 @@ most likely and that "06" and "07" are least likely. Further analysis would
188 188 show that the relative size of the statistical fluctuations have decreased
189 189 compared to the 10,000 digit calculation.
190 190
191 .. image:: two_digit_counts.*
191 .. image:: figs/two_digit_counts.*
192 192
193 193
194 194 Parallel options pricing
@@ -209,12 +209,12 @@ simulation of the underlying asset price. In this example we use this approach
209 209 to price both European and Asian (path dependent) options for various strike
210 210 prices and volatilities.
211 211
212 The code for this example can be found in the :file:`docs/examples/newparallel`
212 The code for this example can be found in the :file:`docs/examples/parallel`
213 213 directory of the IPython source. The function :func:`price_options` in
214 214 :file:`mcpricer.py` implements the basic Monte Carlo pricing algorithm using
215 215 the NumPy package and is shown here:
216 216
217 .. literalinclude:: ../../examples/newparallel/mcpricer.py
217 .. literalinclude:: ../../examples/parallel/options/mcpricer.py
218 218 :language: python
219 219
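For reference, a compact Monte Carlo pricer in the same spirit (the signature, variable names, and one-year horizon are assumptions; the real ``mcpricer.py`` differs in its details):

.. sourcecode:: python

    import numpy as np

    def price_option(S0, K, sigma, r, days, paths):
        """Price European and Asian calls/puts by simulating GBM paths."""
        h = 1.0 / days                        # time step, assuming a 1-year horizon
        drift = (r - 0.5 * sigma ** 2) * h
        noise = sigma * np.sqrt(h) * np.random.standard_normal((days, paths))
        prices = S0 * np.exp(np.cumsum(drift + noise, axis=0))
        disc = np.exp(-r)                     # discount over the full horizon
        return {
            'ecall': disc * np.maximum(prices[-1] - K, 0).mean(),
            'eput':  disc * np.maximum(K - prices[-1], 0).mean(),
            'acall': disc * np.maximum(prices.mean(axis=0) - K, 0).mean(),
            'aput':  disc * np.maximum(K - prices.mean(axis=0), 0).mean(),
        }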
220 220 To run this code in parallel, we will use IPython's :class:`LoadBalancedView` class,
@@ -222,21 +222,21 @@ which distributes work to the engines using dynamic load balancing. This
222 222 view is a wrapper of the :class:`Client` class shown in
223 223 the previous example. The parallel calculation using :class:`LoadBalancedView` can
224 224 be found in the file :file:`mcpricer.py`. The code in this file creates a
225 :class:`TaskClient` instance and then submits a set of tasks using
226 :meth:`TaskClient.run` that calculate the option prices for different
225 :class:`LoadBalancedView` instance and then submits a set of tasks using
226 :meth:`LoadBalancedView.apply` that calculate the option prices for different
227 227 volatilities and strike prices. The results are then plotted as a 2D contour
228 228 plot using Matplotlib.
229 229
230 .. literalinclude:: ../../examples/newparallel/mcdriver.py
230 .. literalinclude:: ../../examples/parallel/options/mckernel.py
231 231 :language: python
232 232
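A hedged sketch of the driver pattern just described, one load-balanced task per (strike, volatility) pair (the names and parameter values are assumptions, not the contents of the included file):

.. sourcecode:: python

    import numpy as np
    from IPython.parallel import Client

    rc = Client()
    view = rc.load_balanced_view()

    strikes = np.linspace(90.0, 110.0, 10)
    sigmas = np.linspace(0.1, 0.4, 10)

    # submit one task per (K, sigma) grid point; price_option is the Monte
    # Carlo pricer sketched earlier, which must be importable (or pushed)
    # on the engines before the tasks are submitted
    async_results = [view.apply_async(price_option, 100.0, K, sigma, 0.05, 252, 100000)
                     for K in strikes for sigma in sigmas]

    rc.wait(async_results)                    # block until every task finishes
    results = [ar.get() for ar in async_results]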
233 233 To use this code, start an IPython cluster using :command:`ipcluster`, open
234 IPython in the pylab mode with the file :file:`mcdriver.py` in your current
234 IPython in the pylab mode with the file :file:`mckernel.py` in your current
235 235 working directory and then type:
236 236
237 237 .. sourcecode:: ipython
238 238
239 In [7]: run mcdriver.py
239 In [7]: run mckernel.py
240 240 Submitted tasks: [0, 1, 2, ...]
241 241
242 242 Once all the tasks have finished, the results can be plotted using the
@@ -257,9 +257,9 @@ entire calculation (10 strike prices, 10 volatilities, 100,000 paths for each)
257 257 took 30 seconds in parallel, giving a speedup of 7.7x, which is comparable
258 258 to the speedup observed in our previous example.
259 259
260 .. image:: asian_call.*
260 .. image:: figs/asian_call.*
261 261
262 .. image:: asian_put.*
262 .. image:: figs/asian_put.*
263 263
264 264 Conclusion
265 265 ==========
@@ -280,5 +280,5 @@ parallel architecture that have been demonstrated:
280 280
281 281 .. note::
282 282
283 The newparallel code has never been run on Windows HPC Server, so the last
283 The new parallel code has never been run on Windows HPC Server, so the last
284 284 conclusion is untested.
@@ -87,12 +87,12 @@ To load-balance :meth:`map`,simply use a LoadBalancedView:
87 87
88 88 In [62]: lview.block = True
89 89
90 In [63]: serial_result = map(lambda x:x**10, range(32))
90 In [63]: serial_result = map(lambda x:x**10, range(32))
91 91
92 In [64]: parallel_result = lview.map(lambda x:x**10, range(32))
92 In [64]: parallel_result = lview.map(lambda x:x**10, range(32))
93 93
94 In [65]: serial_result==parallel_result
95 Out[65]: True
94 In [65]: serial_result==parallel_result
95 Out[65]: True
96 96
97 97 Parallel function decorator
98 98 ---------------------------
@@ -111,6 +111,15 @@ that turns any Python function into a parallel function:
111 111 In [11]: f.map(range(32)) # this is done in parallel
112 112 Out[11]: [0.0,10.0,160.0,...]
113 113
114 .. _parallel_taskmap:
115
116 The AsyncMapResult
117 ==================
118
119 When you call ``lview.map_async(f, sequence)``, or just :meth:`map` with `block=False`, then
120 what you get in return will be an :class:`~AsyncMapResult` object. These are similar to
121 AsyncResult objects, but with one key difference: an AsyncMapResult is iterable, and iterating it yields the individual results, in the order the inputs were submitted, as each one becomes ready.
122
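A small, assumed session illustrating the point (not taken from the example files):

.. sourcecode:: python

    from IPython.parallel import Client

    rc = Client()
    lview = rc.load_balanced_view()

    amr = lview.map_async(lambda x: x ** 10, range(32))

    # results arrive in input order as each one is ready, without
    # waiting for the whole map to finish
    for r in amr:
        print(r)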
114 123 .. _parallel_dependencies:
115 124
116 125 Dependencies
@@ -291,8 +300,6 @@ you can skip using Dependency objects, and just pass msg_ids or AsyncResult obje
291 300 onto task dependencies.
292 301
293 302
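For instance, a hedged sketch of that shortcut, where ``job_a``, ``job_b`` and ``job_c`` are hypothetical functions and the :class:`AsyncResult` objects themselves are passed as the ``after`` dependency:

.. sourcecode:: python

    ar1 = lview.apply_async(job_a)
    ar2 = lview.apply_async(job_b)

    # job_c will not start until job_a and job_b have both finished
    with lview.temp_flags(after=[ar1, ar2]):
        ar3 = lview.apply_async(job_c)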
294
295
296 303 Impossible Dependencies
297 304 ***********************
298 305
@@ -120,7 +120,7 @@ opening a Windows Command Prompt and typing ``ipython``. This will
120 120 start IPython's interactive shell and you should see something like the
121 121 following screenshot:
122 122
123 .. image:: ipython_shell.*
123 .. image:: figs/ipython_shell.*
124 124
125 125 Starting an IPython cluster
126 126 ===========================
@@ -168,7 +168,7 @@ You should see a number of messages printed to the screen, ending with
168 168 "IPython cluster: started". The result should look something like the following
169 169 screenshot:
170 170
171 .. image:: ipcluster_start.*
171 .. image:: figs/ipcluster_start.*
172 172
173 173 At this point, the controller and two engines are running on your local host.
174 174 This configuration is useful for testing and for situations where you want to
@@ -210,7 +210,7 @@ The output of this command is shown in the screenshot below. Notice how
210 210 :command:`ipcluster` prints out the location of the newly created cluster
211 211 directory.
212 212
213 .. image:: ipcluster_create.*
213 .. image:: figs/ipcluster_create.*
214 214
215 215 Configuring a cluster profile
216 216 -----------------------------
@@ -279,7 +279,7 @@ must be run again to regenerate the XML job description files. The
279 279 following screenshot shows what the HPC Job Manager interface looks like
280 280 with a running IPython cluster.
281 281
282 .. image:: hpc_job_manager.*
282 .. image:: figs/hpc_job_manager.*
283 283
284 284 Performing a simple interactive parallel computation
285 285 ====================================================
@@ -330,5 +330,5 @@ The :meth:`map` method has the same signature as Python's builtin :func:`map`
330 330 function, but runs the calculation in parallel. More involved examples of using
331 331 :class:`MultiEngineClient` are provided in the examples that follow.
332 332
333 .. image:: mec_simple.*
333 .. image:: figs/mec_simple.*
334 334