@@ -1,173 +1,173 @@
.. _dag_dependencies:

================
DAG Dependencies
================

A parallel workflow is often described in terms of a `Directed Acyclic Graph
<http://en.wikipedia.org/wiki/Directed_acyclic_graph>`_ or DAG. A popular library
for working with Graphs is NetworkX_. Here, we will walk through a demo mapping
a NetworkX DAG to task dependencies.

The full script that runs this demo can be found in
:file:`docs/examples/parallel/dagdeps.py`.

Why are DAGs good for task dependencies?
----------------------------------------

The 'G' in DAG is 'Graph'. A Graph is a collection of **nodes** and **edges** that connect
the nodes. For our purposes, each node would be a task, and each edge would be a
dependency. The 'D' in DAG stands for 'Directed'. This means that each edge has a
direction associated with it. So we can interpret the edge (a,b) as meaning that b depends
on a, whereas the edge (b,a) would mean a depends on b. The 'A' is 'Acyclic', meaning that
there must not be any closed loops in the graph. This is important for dependencies,
because if a loop were closed, then a task could ultimately depend on itself, and never be
able to run. If your workflow can be described as a DAG, then it is impossible for your
dependencies to cause a deadlock.
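
NetworkX can verify the acyclic property for you. A quick sanity check along these lines
(illustrative, not part of the demo script) can catch an accidental cycle early:

.. sourcecode:: python

    import networkx as nx

    G = nx.DiGraph()
    G.add_edge('a', 'b')  # b depends on a
    G.add_edge('b', 'c')  # c depends on b
    print nx.is_directed_acyclic_graph(G)  # True: usable as a dependency graph
    G.add_edge('c', 'a')  # close the loop: a would now depend on c
    print nx.is_directed_acyclic_graph(G)  # False: this workflow could never run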

A Sample DAG
------------

Here, we have a very simple 5-node DAG:

.. figure:: figs/simpledag.*

With NetworkX, an arrow is drawn as just a fattened bit at the head of an edge. Here, we
can see that task 0 depends on nothing, and can run immediately. Tasks 1 and 2 depend on
0; 3 depends on 1 and 2; and 4 depends only on 1.

A possible sequence of events for this workflow:

0. Task 0 can run right away
1. 0 finishes, so 1,2 can start
2. 1 finishes, 3 is still waiting on 2, but 4 can start right away
3. 2 finishes, and 3 can finally start


Further, taking failures into account, and assuming all dependencies are run with the
default `success=True,failure=False`, the following cases would occur for each node's
failure:

0. fails: all other tasks fail as Impossible
1. fails: 2 can still succeed, but 3 and 4 are unreachable
2. fails: 3 becomes unreachable, but 4 is unaffected
3. and 4. are terminal, and their failure can have no effect on other nodes
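
These flags can be set explicitly via the :class:`Dependency` class described in the task
interface documentation. A hedged sketch (`deps`, `view`, and `cleanup_task` are
placeholder names) of submitting a task that should run once its parents finish, whether
they succeeded or failed:

.. sourcecode:: python

    from IPython.parallel import Dependency

    # satisfied by parents that finished, regardless of success or failure:
    always_after = Dependency(deps, success=True, failure=True)
    ar = view.apply_with_flags(cleanup_task, after=always_after, block=False)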

The code to generate the simple DAG:

.. sourcecode:: python

    import networkx as nx

    G = nx.DiGraph()

    # add 5 nodes, labeled 0-4:
    G.add_nodes_from(range(5))
    # 1,2 depend on 0:
    G.add_edge(0,1)
    G.add_edge(0,2)
    # 3 depends on 1,2
    G.add_edge(1,3)
    G.add_edge(2,3)
    # 4 depends on 1
    G.add_edge(1,4)

    # now draw the graph:
    pos = { 0 : (0,0), 1 : (1,1), 2 : (-1,1),
            3 : (0,2), 4 : (2,2)}
    nx.draw(G, pos, edge_color='r')


For demonstration purposes, we have a function that generates a random DAG with a given
number of nodes and edges.

.. literalinclude:: ../../examples/parallel/dagdeps.py
   :language: python
   :lines: 20-36
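
The included code is not reproduced here; a minimal sketch of what such a generator can
look like (the retry-until-acyclic strategy is one simple approach, not necessarily the
shipped implementation):

.. sourcecode:: python

    import networkx as nx
    from random import randint

    def random_dag(nodes, edges):
        """Generate a random Directed Acyclic Graph with the given counts."""
        G = nx.DiGraph()
        G.add_nodes_from(range(nodes))
        while edges > 0:
            a, b = randint(0, nodes - 1), randint(0, nodes - 1)
            if a == b:
                continue  # no self-loops
            G.add_edge(a, b)
            if nx.is_directed_acyclic_graph(G):
                edges -= 1
            else:
                # the new edge closed a cycle; remove it and try again
                G.remove_edge(a, b)
        return G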

So first, we start with a graph of 32 nodes, with 128 edges:

.. sourcecode:: ipython

    In [2]: G = random_dag(32,128)

Now, we need to build our dict of jobs corresponding to the nodes on the graph:

.. sourcecode:: ipython

    In [3]: jobs = {}

    # in reality, each job would presumably be different
    # randomwait is just a function that sleeps for a random interval
    In [4]: for node in G:
       ...:     jobs[node] = randomwait

Once we have a dict of jobs matching the nodes on the graph, we can start submitting jobs,
and linking up the dependencies. Since we don't know a job's msg_id until it is submitted,
which is necessary for building dependencies, it is critical that we don't submit any job
before the jobs on which it depends. Fortunately, NetworkX provides a
:func:`topological_sort` function which ensures exactly this. It produces an iterable
that guarantees that when you arrive at a node, you have already visited all the nodes
on which it depends:

.. sourcecode:: ipython

    In [5]: rc = Client()
    In [6]: view = rc.load_balanced_view()

    In [7]: results = {}

    In [8]: for node in nx.topological_sort(G):
       ...:     # get list of AsyncResult objects from nodes
       ...:     # leading into this one as dependencies
       ...:     deps = [ results[n] for n in G.predecessors(node) ]
       ...:     # submit and store AsyncResult object
       ...:     results[node] = view.apply_with_flags(jobs[node], after=deps, block=False)

Now that we have submitted all the jobs, we can wait for the results:

.. sourcecode:: ipython

    In [9]: view.wait(results.values())

Now, at least we know that all the jobs ran and did not fail (``r.get()`` would have
raised an error if a task failed). But we don't know that the ordering was properly
respected. For this, we can use the :attr:`metadata` attribute of each AsyncResult.

These objects store a variety of metadata about each task, including various timestamps.
We can validate that the dependencies were respected by checking that each task was
started after all of its predecessors were completed:

.. literalinclude:: ../../examples/parallel/dagdeps.py
   :language: python
   :lines: 64-70
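
A sketch of what such a check can look like (illustrative; the shipped example may differ
in detail):

.. sourcecode:: python

    def validate_tree(G, results):
        """Assert that every task started after its dependencies completed."""
        for node in G:
            started = results[node].metadata.started
            for parent in G.predecessors(node):
                finished = results[parent].metadata.completed
                assert started > finished, "%s should be after %s" % (started, finished)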

We can also validate the graph visually. By drawing the graph with each node's x-position
set to its start time, all arrows must point to the right if dependencies were respected.
To spread the nodes out, the y-position will be the runtime of the task, so long tasks
will be at the top, and quick, small tasks will be at the bottom.

.. sourcecode:: ipython

    In [10]: from matplotlib.dates import date2num

    In [11]: from matplotlib.cm import gist_rainbow

    In [12]: pos = {}; colors = {}

    In [13]: for node in G:
       ....:     md = results[node].metadata
       ....:     start = date2num(md.started)
       ....:     runtime = date2num(md.completed) - start
       ....:     pos[node] = (start, runtime)
       ....:     colors[node] = md.engine_id

    In [14]: nx.draw(G, pos, nodelist=colors.keys(), node_color=colors.values(),
       ....:         cmap=gist_rainbow)


.. figure:: figs/dagdeps.*

Time started on the x-axis, runtime on the y-axis, and color-coded by engine-id (in this
case there were four engines). Edges denote dependencies.


.. _NetworkX: http://networkx.lanl.gov/
NO CONTENT: file renamed from docs/source/parallel/asian_call.pdf to docs/source/parallel/figs/asian_call.pdf
NO CONTENT: file renamed from docs/source/parallel/asian_call.png to docs/source/parallel/figs/asian_call.png
NO CONTENT: file renamed from docs/source/parallel/asian_put.pdf to docs/source/parallel/figs/asian_put.pdf
NO CONTENT: file renamed from docs/source/parallel/asian_put.png to docs/source/parallel/figs/asian_put.png
NO CONTENT: file renamed from docs/source/parallel/dagdeps.pdf to docs/source/parallel/figs/dagdeps.pdf
NO CONTENT: file renamed from docs/source/parallel/dagdeps.png to docs/source/parallel/figs/dagdeps.png
NO CONTENT: file renamed from docs/source/parallel/hpc_job_manager.pdf to docs/source/parallel/figs/hpc_job_manager.pdf
NO CONTENT: file renamed from docs/source/parallel/hpc_job_manager.png to docs/source/parallel/figs/hpc_job_manager.png
NO CONTENT: file renamed from docs/source/parallel/ipcluster_create.pdf to docs/source/parallel/figs/ipcluster_create.pdf
NO CONTENT: file renamed from docs/source/parallel/ipcluster_create.png to docs/source/parallel/figs/ipcluster_create.png
NO CONTENT: file renamed from docs/source/parallel/ipcluster_start.pdf to docs/source/parallel/figs/ipcluster_start.pdf
NO CONTENT: file renamed from docs/source/parallel/ipcluster_start.png to docs/source/parallel/figs/ipcluster_start.png
NO CONTENT: file renamed from docs/source/parallel/ipython_shell.pdf to docs/source/parallel/figs/ipython_shell.pdf
NO CONTENT: file renamed from docs/source/parallel/ipython_shell.png to docs/source/parallel/figs/ipython_shell.png
NO CONTENT: file renamed from docs/source/parallel/mec_simple.pdf to docs/source/parallel/figs/mec_simple.pdf
NO CONTENT: file renamed from docs/source/parallel/mec_simple.png to docs/source/parallel/figs/mec_simple.png
NO CONTENT: file renamed from docs/source/parallel/parallel_pi.pdf to docs/source/parallel/figs/parallel_pi.pdf
NO CONTENT: file renamed from docs/source/parallel/parallel_pi.png to docs/source/parallel/figs/parallel_pi.png
NO CONTENT: file renamed from docs/source/parallel/simpledag.pdf to docs/source/parallel/figs/simpledag.pdf
NO CONTENT: file renamed from docs/source/parallel/simpledag.png to docs/source/parallel/figs/simpledag.png
NO CONTENT: file renamed from docs/source/parallel/single_digits.pdf to docs/source/parallel/figs/single_digits.pdf
NO CONTENT: file renamed from docs/source/parallel/single_digits.png to docs/source/parallel/figs/single_digits.png
NO CONTENT: file renamed from docs/source/parallel/two_digit_counts.pdf to docs/source/parallel/figs/two_digit_counts.pdf
NO CONTENT: file renamed from docs/source/parallel/two_digit_counts.png to docs/source/parallel/figs/two_digit_counts.png
@@ -1,284 +1,284 @@
=================
Parallel examples
=================

.. note::

    Performance numbers are from ``IPython.kernel``, not the new ``IPython.parallel``.

In this section we describe two more involved examples of using an IPython
cluster to perform a parallel computation. In these examples, we will be using
IPython's "pylab" mode, which enables interactive plotting using the
Matplotlib package. IPython can be started in this mode by typing::

    ipython --pylab

at the system command line.

150 million digits of pi
========================

In this example we would like to study the distribution of digits in the
number pi (in base 10). While it is not known if pi is a normal number (a
number is normal in base 10 if 0-9 occur with equal likelihood), numerical
investigations suggest that it is. We will begin with a serial calculation on
10,000 digits of pi and then perform a parallel calculation involving 150
million digits.

In both the serial and parallel calculation we will be using functions defined
in the :file:`pidigits.py` file, which is available in the
:file:`docs/examples/parallel` directory of the IPython source distribution.
These functions provide basic facilities for working with the digits of pi and
can be loaded into IPython by putting :file:`pidigits.py` in your current
working directory and then doing:

.. sourcecode:: ipython

    In [1]: run pidigits.py

Serial calculation
------------------

For the serial calculation, we will use `SymPy <http://www.sympy.org>`_ to
calculate 10,000 digits of pi and then look at the frequencies of the digits
0-9. Out of 10,000 digits, we expect each digit to occur 1,000 times. While
SymPy is capable of calculating many more digits of pi, our purpose here is to
set the stage for the much larger parallel calculation.

In this example, we use two functions from :file:`pidigits.py`:
:func:`one_digit_freqs` (which calculates how many times each digit occurs)
and :func:`plot_one_digit_freqs` (which uses Matplotlib to plot the result).
Here is an interactive IPython session that uses these functions with
SymPy:

.. sourcecode:: ipython

    In [7]: import sympy

    In [8]: pi = sympy.pi.evalf(40)

    In [9]: pi
    Out[9]: 3.141592653589793238462643383279502884197

    In [10]: pi = sympy.pi.evalf(10000)

    In [11]: digits = (d for d in str(pi)[2:]) # create a sequence of digits

    In [12]: run pidigits.py # load one_digit_freqs/plot_one_digit_freqs

    In [13]: freqs = one_digit_freqs(digits)

    In [14]: plot_one_digit_freqs(freqs)
    Out[14]: [<matplotlib.lines.Line2D object at 0x18a55290>]
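
For reference, a digit-counting function along these lines can be as simple as the
following sketch (illustrative; the actual :file:`pidigits.py` implementation may differ):

.. sourcecode:: python

    import numpy as np

    def one_digit_freqs(digits):
        """Count how often each digit 0-9 occurs in an iterable of digit chars."""
        freqs = np.zeros(10, dtype='i8')
        for d in digits:
            freqs[int(d)] += 1
        return freqs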

The resulting plot of the single digit counts shows that each digit occurs
approximately 1,000 times, but that with only 10,000 digits the
statistical fluctuations are still rather large:

.. image:: figs/single_digits.*

It is clear that to reduce the relative fluctuations in the counts, we need
to look at many more digits of pi. That brings us to the parallel calculation.

Parallel calculation
--------------------

Calculating many digits of pi is a challenging computational problem in itself.
Because we want to focus on the distribution of digits in this example, we
will use pre-computed digits of pi from the website of Professor Yasumasa
Kanada at the University of Tokyo (http://www.super-computing.org). These
digits come in a set of text files (ftp://pi.super-computing.org/.2/pi200m/)
that each have 10 million digits of pi.

For the parallel calculation, we have copied these files to the local hard
drives of the compute nodes. A total of 15 of these files will be used, for a
total of 150 million digits of pi. To make things a little more interesting we
will calculate the frequencies of all two-digit sequences (00-99) and then plot
the result using a 2D matrix in Matplotlib.

The overall idea of the calculation is simple: each IPython engine will
compute the two digit counts for the digits in a single file. Then in a final
step the counts from each engine will be added up. To perform this
calculation, we will need two top-level functions from :file:`pidigits.py`:

.. literalinclude:: ../../examples/parallel/pi/pidigits.py
   :language: python
   :lines: 47-62
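
Conceptually, these two functions can look like the following sketch (illustrative only;
``read_digits`` stands in for whatever helper loads one file's digits as a string):

.. sourcecode:: python

    import numpy as np

    def compute_two_digit_freqs(filename):
        """Count the 100 overlapping two-digit sequences in one file of pi digits."""
        digits = read_digits(filename)  # hypothetical helper
        freqs = np.zeros(100, dtype='i8')
        for first, second in zip(digits[:-1], digits[1:]):
            freqs[int(first) * 10 + int(second)] += 1
        return freqs

    def reduce_freqs(freqlist):
        """Add up the per-engine frequency arrays into a single array."""
        allfreqs = np.zeros_like(freqlist[0])
        for f in freqlist:
            allfreqs += f
        return allfreqs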

We will also use the :func:`plot_two_digit_freqs` function to plot the
results. The code to run this calculation in parallel is contained in
:file:`docs/examples/parallel/parallelpi.py`. This code can be run in parallel
using IPython by following these steps:

1. Use :command:`ipcluster` to start 15 engines. We used an 8 core (2 quad
   core CPUs) cluster with hyperthreading enabled, which makes the 8 cores
   look like 16 (1 controller + 15 engines) in the OS. However, the maximum
   speedup we can observe is still only 8x.
2. With the file :file:`parallelpi.py` in your current working directory, open
   up IPython in pylab mode and type ``run parallelpi.py``. This will download
   the pi files via ftp the first time you run it, if they are not
   present in the Engines' working directory.

When run on our 8 core cluster, we observe a speedup of 7.7x. This is slightly
less than linear scaling (8x) because the controller is also running on one of
the cores.

To emphasize the interactive nature of IPython, we now show how the
calculation can also be run by simply typing the commands from
:file:`parallelpi.py` interactively into IPython:

.. sourcecode:: ipython

    In [1]: from IPython.parallel import Client

    # The Client allows us to use the engines interactively.
    # We simply pass Client the name of the cluster profile we
    # are using.
    In [2]: c = Client(profile='mycluster')

    In [3]: view = c.load_balanced_view()

    In [4]: c.ids
    Out[4]: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14]

    In [5]: run pidigits.py

    In [6]: filestring = 'pi200m.ascii.%(i)02dof20'

    # Create the list of files to process.
    In [7]: files = [filestring % {'i':i} for i in range(1,16)]

    In [8]: files
    Out[8]:
    ['pi200m.ascii.01of20',
     'pi200m.ascii.02of20',
     'pi200m.ascii.03of20',
     'pi200m.ascii.04of20',
     'pi200m.ascii.05of20',
     'pi200m.ascii.06of20',
     'pi200m.ascii.07of20',
     'pi200m.ascii.08of20',
     'pi200m.ascii.09of20',
     'pi200m.ascii.10of20',
     'pi200m.ascii.11of20',
     'pi200m.ascii.12of20',
     'pi200m.ascii.13of20',
     'pi200m.ascii.14of20',
     'pi200m.ascii.15of20']

    # download the data files if they don't already exist:
    In [9]: view.map(fetch_pi_file, files)

    # This is the parallel calculation using the LoadBalancedView.map method
    # which applies compute_two_digit_freqs to each file in files in parallel.
    In [10]: freqs_all = view.map(compute_two_digit_freqs, files)

    # Add up the frequencies from each engine.
    In [11]: freqs = reduce_freqs(freqs_all)

    In [12]: plot_two_digit_freqs(freqs)
    Out[12]: <matplotlib.image.AxesImage object at 0x18beb110>

    In [13]: plt.title('2 digit counts of 150m digits of pi')
    Out[13]: <matplotlib.text.Text object at 0x18d1f9b0>

The resulting plot generated by Matplotlib is shown below. The colors indicate
which two digit sequences are more (red) or less (blue) likely to occur in the
first 150 million digits of pi. We clearly see that the sequence "41" is
most likely and that "06" and "07" are least likely. Further analysis would
show that the relative size of the statistical fluctuations has decreased
compared to the 10,000 digit calculation.

.. image:: figs/two_digit_counts.*


Parallel options pricing
========================

An option is a financial contract that gives the buyer of the contract the
right to buy (a "call") or sell (a "put") a secondary asset (a stock for
example) at a particular date in the future (the expiration date) for a
pre-agreed upon price (the strike price). For this right, the buyer pays the
seller a premium (the option price). There are a wide variety of flavors of
options (American, European, Asian, etc.) that are useful for different
purposes: hedging against risk, speculation, etc.

Much of modern finance is driven by the need to price these contracts
accurately based on what is known about the properties (such as volatility) of
the underlying asset. One method of pricing options is to use a Monte Carlo
simulation of the underlying asset price. In this example we use this approach
to price both European and Asian (path dependent) options for various strike
prices and volatilities.

The code for this example can be found in the :file:`docs/examples/parallel`
directory of the IPython source. The function :func:`price_options` in
:file:`mcpricer.py` implements the basic Monte Carlo pricing algorithm using
the NumPy package and is shown here:

.. literalinclude:: ../../examples/parallel/options/mcpricer.py
   :language: python
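
As a rough sketch of the underlying idea (simplified and hypothetical; the real
:func:`price_options` prices European and Asian calls and puts together), a Monte Carlo
Asian-call pricer simulates geometric Brownian motion paths and averages the discounted
payoff:

.. sourcecode:: python

    import numpy as np

    def asian_call_price(S0, K, sigma, r, T, n_steps, n_paths):
        """Illustrative Monte Carlo price of an Asian call option."""
        dt = T / float(n_steps)
        # risk-neutral log-price increments for all paths at once
        steps = ((r - 0.5 * sigma ** 2) * dt +
                 sigma * np.sqrt(dt) * np.random.standard_normal((n_paths, n_steps)))
        S = S0 * np.exp(np.cumsum(steps, axis=1))
        # the Asian payoff depends on the average price along each path
        payoff = np.maximum(S.mean(axis=1) - K, 0.0)
        return np.exp(-r * T) * payoff.mean()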

To run this code in parallel, we will use IPython's :class:`LoadBalancedView` class,
which distributes work to the engines using dynamic load balancing. This
view is a wrapper of the :class:`Client` class shown in
the previous example. The parallel calculation using :class:`LoadBalancedView` can
be found in the file :file:`mckernel.py`. The code in this file creates a
:class:`LoadBalancedView` instance and then submits a set of tasks using
:meth:`LoadBalancedView.apply` that calculate the option prices for different
volatilities and strike prices. The results are then plotted as a 2D contour
plot using Matplotlib.

.. literalinclude:: ../../examples/parallel/options/mckernel.py
   :language: python

To use this code, start an IPython cluster using :command:`ipcluster`, open
IPython in the pylab mode with the file :file:`mckernel.py` in your current
working directory and then type:

.. sourcecode:: ipython

    In [7]: run mckernel.py
    Submitted tasks: [0, 1, 2, ...]

Once all the tasks have finished, the results can be plotted using the
:func:`plot_options` function. Here we make contour plots of the Asian
call and Asian put options as functions of the volatility and strike price:

.. sourcecode:: ipython

    In [8]: plot_options(sigma_vals, K_vals, prices['acall'])

    In [9]: plt.figure()
    Out[9]: <matplotlib.figure.Figure object at 0x18c178d0>

    In [10]: plot_options(sigma_vals, K_vals, prices['aput'])

These results are shown in the two figures below. On an 8 core cluster the
entire calculation (10 strike prices, 10 volatilities, 100,000 paths for each)
took 30 seconds in parallel, giving a speedup of 7.7x, which is comparable
to the speedup observed in our previous example.

.. image:: figs/asian_call.*

.. image:: figs/asian_put.*

Conclusion
==========

To conclude these examples, we summarize the key features of IPython's
parallel architecture that have been demonstrated:

* Serial code can often be parallelized with only a few extra lines of code.
  We have used the :class:`DirectView` and :class:`LoadBalancedView` classes
  for this purpose.
* The resulting parallel code can be run without ever leaving IPython's
  interactive shell.
* Any data computed in parallel can be explored interactively through
  visualization or further numerical calculations.
* We have run these examples on a cluster running Windows HPC Server 2008.
  IPython's built-in support for the Windows HPC job scheduler makes it
  easy to get started with IPython's parallel capabilities.

.. note::

    The new parallel code has never been run on Windows HPC Server, so the last
    conclusion is untested.
@@ -1,442 +1,449 @@
.. _parallel_task:

==========================
The IPython task interface
==========================

The task interface to the cluster presents the engines as a fault tolerant,
dynamic load-balanced system of workers. Unlike the multiengine interface, in
the task interface the user has no direct access to individual engines. By
allowing the IPython scheduler to assign work, this interface is simultaneously
simpler and more powerful.

Best of all, the user can use both of these interfaces running at the same time
to take advantage of their respective strengths. When the user can break up
their work into segments that do not depend on previous execution, the
task interface is ideal. But it also has more power and flexibility, allowing
the user to guide the distribution of jobs, without having to assign tasks to
engines explicitly.

Starting the IPython controller and engines
===========================================

To follow along with this tutorial, you will need to start the IPython
controller and four IPython engines. The simplest way of doing this is to use
the :command:`ipcluster` command::

    $ ipcluster start -n 4

For more detailed information about starting the controller and engines, see
our :ref:`introduction <parallel_overview>` to using IPython for parallel computing.

Creating a ``Client`` instance
==============================

The first step is to import the :mod:`IPython.parallel`
module and then create a :class:`.Client` instance, and we will also be using
a :class:`LoadBalancedView`, here called `lview`:

.. sourcecode:: ipython

    In [1]: from IPython.parallel import Client

    In [2]: rc = Client()


This form assumes that the controller was started on localhost with default
configuration. If not, the location of the controller must be given as an
argument to the constructor:

.. sourcecode:: ipython

    # for a visible LAN controller listening on an external port:
    In [2]: rc = Client('tcp://192.168.1.16:10101')
    # or to connect with a specific profile you have set up:
    In [3]: rc = Client(profile='mpi')

For load-balanced execution, we will make use of a :class:`LoadBalancedView` object, which can
be constructed via the client's :meth:`load_balanced_view` method:

.. sourcecode:: ipython

    In [4]: lview = rc.load_balanced_view() # default load-balanced view

.. seealso::

    For more information, see the in-depth explanation of :ref:`Views <parallel_details>`.


Quick and easy parallelism
==========================

In many cases, you simply want to apply a Python function to a sequence of
objects, but *in parallel*. Like the multiengine interface, these can be
implemented via the task interface. The exact same tools can perform these
actions in load-balanced ways as well as multiplexed ways: a parallel version
of :func:`map` and a :func:`@parallel` function decorator. If one specifies the
argument `balanced=True`, then they are dynamically load balanced. Thus, if the
execution time per item varies significantly, you should use the versions in
the task interface.

Parallel map
------------

To load-balance :meth:`map`, simply use a LoadBalancedView:

.. sourcecode:: ipython

    In [62]: lview.block = True

    In [63]: serial_result = map(lambda x:x**10, range(32))

    In [64]: parallel_result = lview.map(lambda x:x**10, range(32))

    In [65]: serial_result==parallel_result
    Out[65]: True

Parallel function decorator
---------------------------

Parallel functions are just like normal functions, but they can be called on
sequences and *in parallel*. The multiengine interface provides a decorator
that turns any Python function into a parallel function:

.. sourcecode:: ipython

    In [10]: @lview.parallel()
       ....: def f(x):
       ....:     return 10.0*x**4
       ....:

    In [11]: f.map(range(32)) # this is done in parallel
    Out[11]: [0.0,10.0,160.0,...]


.. _parallel_taskmap:

The AsyncMapResult
==================

When you call ``lview.map_async(f, sequence)``, or just :meth:`map` with
`block=False`, then what you get in return will be an :class:`~AsyncMapResult`
object. These are similar to AsyncResult objects, but with one key difference:
an AsyncMapResult is iterable, and the individual results are yielded in order
as they become available.
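
A minimal sketch of that difference, assuming ``lview`` is the
:class:`LoadBalancedView` used above (the pace at which results arrive
depends on the engines):

.. sourcecode:: ipython

    In [66]: amr = lview.map_async(lambda x:x**10, range(32))

    In [67]: for result in amr:   # blocks until each result arrives, in order
       ....:     print result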

.. _parallel_dependencies:

Dependencies
============

Often, pure atomic load-balancing is too primitive for your work. In these cases, you
may want to associate some kind of `Dependency` that describes when, where, or whether
a task can be run. In IPython, we provide two types of dependencies:
`Functional Dependencies`_ and `Graph Dependencies`_.

.. note::

    It is important to note that the pure ZeroMQ scheduler does not support dependencies,
    and you will see errors or warnings if you try to use dependencies with the pure
    scheduler.

Functional Dependencies
-----------------------

Functional dependencies are used to determine whether a given engine is capable of running
a particular task. This is implemented via a special :class:`Exception` class,
:class:`UnmetDependency`, found in `IPython.parallel.error`. Its use is very simple:
if a task fails with an UnmetDependency exception, then the scheduler, instead of relaying
the error up to the client like any other error, catches the error, and submits the task
to a different engine. This will repeat as often as necessary, but a task will never be
submitted to a given engine a second time.

You can manually raise the :class:`UnmetDependency` yourself, but IPython has provided
some decorators for facilitating this behavior.
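
As a sketch of the manual approach (the decorators described below are usually
more convenient; ``do_work`` and :file:`data.bin` are hypothetical stand-ins
for your own code and data):

.. sourcecode:: ipython

    In [8]: def picky_task():
       ...:     import os
       ...:     from IPython.parallel.error import UnmetDependency
       ...:     # refuse this engine if the data file is missing:
       ...:     if not os.path.exists('data.bin'):
       ...:         raise UnmetDependency()
       ...:     return do_work()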

There are two decorators and a class used for functional dependencies:

.. sourcecode:: ipython

    In [9]: from IPython.parallel import depend, require, dependent

@require
********

The simplest sort of dependency is requiring that a Python module is available. The
``@require`` decorator lets you define a function that will only run on engines where names
you specify are importable:

.. sourcecode:: ipython

    In [10]: @require('numpy', 'zmq')
       ...: def myfunc():
       ...:     return dostuff()

Now, any time you apply :func:`myfunc`, the task will only run on a machine that has
numpy and pyzmq available, and when :func:`myfunc` is called, numpy and zmq will be imported.
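
For instance (a sketch; ``dostuff`` above is a stand-in for your own code):

.. sourcecode:: ipython

    In [11]: ar = lview.apply(myfunc)   # only runs where numpy and zmq import cleanly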

@depend
*******

The ``@depend`` decorator lets you decorate any function with any *other* function to
evaluate the dependency. The dependency function will be called at the start of the task,
and if it returns ``False``, then the dependency will be considered unmet, and the task
will be assigned to another engine. If the dependency returns *anything other than
``False``*, the rest of the task will continue.

.. sourcecode:: ipython

    In [10]: def platform_specific(plat):
       ...:     import sys
       ...:     return sys.platform == plat

    In [11]: @depend(platform_specific, 'darwin')
       ...: def mactask():
       ...:     do_mac_stuff()

    In [12]: @depend(platform_specific, 'nt')
       ...: def wintask():
       ...:     do_windows_stuff()

In this case, any time you apply ``mactask``, it will only run on an OSX machine, and
``wintask`` will only run on Windows. ``@depend`` is just like ``apply``, in that it has
a ``@depend(f,*args,**kwargs)`` signature.

dependents
**********

You don't have to use the decorators on your tasks. If, for instance, you want to run
tasks with a single function but varying dependencies, you can directly construct the
:class:`dependent` object that the decorators use:

.. sourcecode:: ipython

    In [13]: def mytask(*args):
       ...:     dostuff()

    In [14]: mactask = dependent(mytask, platform_specific, 'darwin')
    # this is the same as decorating the declaration of mytask with @depend
    # but you can do it again:

    In [15]: wintask = dependent(mytask, platform_specific, 'nt')

    # in general:
    In [16]: t = dependent(f, g, *dargs, **dkwargs)

    # is equivalent to:
    In [17]: @depend(g, *dargs, **dkwargs)
       ...: def t(a,b,c):
       ...:     # contents of f

Graph Dependencies
------------------

Sometimes you want to restrict the time and/or location to run a given task as a function
of the time and/or location of other tasks. This is implemented via a subclass of
:class:`set`, called a :class:`Dependency`. A Dependency is just a set of `msg_ids`
corresponding to tasks, and a few attributes to guide how to decide when the Dependency
has been met.

The switches we provide for interpreting whether a given dependency set has been met:

any|all
    Whether the dependency is considered met if *any* of the dependencies are done, or
    only after *all* of them have finished. This is set by a Dependency's :attr:`all`
    boolean attribute, which defaults to ``True``.

success [default: True]
    Whether to consider tasks that succeeded as fulfilling dependencies.

failure [default: False]
    Whether to consider tasks that failed as fulfilling dependencies.
    Using `failure=True,success=False` is useful for setting up cleanup tasks, to be run
    only when tasks have failed.

Sometimes you want to run a task after another, but only if that task succeeded. In this case,
``success`` should be ``True`` and ``failure`` should be ``False``. However, sometimes you may
not care whether the task succeeds, and always want the second task to run, in which case you
should use `success=failure=True`. The default behavior is to only use successes.
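
For instance, a minimal sketch of such a cleanup setup, assuming
:class:`Dependency` can be imported from :mod:`IPython.parallel` and ``ar1``,
``ar2`` are AsyncResults of previously submitted tasks; the resulting `dep` is
met only if at least one of the two tasks *failed*:

.. sourcecode:: ipython

    In [12]: from IPython.parallel import Dependency

    In [13]: dep = Dependency(ar1.msg_ids + ar2.msg_ids,
       ....:                  all=False, success=False, failure=True)

Such a Dependency can then be passed as the `after` or `follow` argument
described next.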

There are other switches for interpretation that are made at the *task* level. These are
specified via keyword arguments to the client's :meth:`apply` method.

after,follow
    You may want to run a task *after* a given set of dependencies have been run and/or
    run it *where* another set of dependencies are met. To support this, every task has an
    `after` dependency to restrict time, and a `follow` dependency to restrict
    destination.

timeout
    You may also want to set a time-limit for how long the scheduler should wait for a
    task's dependencies to be met. This is done via a `timeout`, which defaults to 0, which
    indicates that the task should never time out. If the timeout is reached, and the
    scheduler still hasn't been able to assign the task to an engine, the task will fail
    with a :class:`DependencyTimeout`.

.. note::

    Dependencies only work within the task scheduler. You cannot instruct a load-balanced
    task to run after a job submitted via the MUX interface.

The simplest form of Dependencies is with `all=True,success=True,failure=False`. In these cases,
you can skip using Dependency objects, and just pass msg_ids or AsyncResult objects as the
`follow` and `after` keywords to :meth:`client.apply`:

.. sourcecode:: ipython

    In [14]: client.block=False

    In [15]: ar = lview.apply(f, args, kwargs)

    In [16]: ar2 = lview.apply(f2)

    In [17]: ar3 = lview.apply_with_flags(f3, after=[ar,ar2])

    In [18]: ar4 = lview.apply_with_flags(f3, follow=[ar], timeout=2.5)

.. seealso::

    Some parallel workloads can be described as a `Directed Acyclic Graph
    <http://en.wikipedia.org/wiki/Directed_acyclic_graph>`_, or DAG. See :ref:`DAG
    Dependencies <dag_dependencies>` for an example demonstrating how to map a NetworkX DAG
    onto task dependencies.

Impossible Dependencies
***********************

The schedulers do perform some analysis on graph dependencies to determine whether they
are impossible to meet. If the scheduler does discover that a dependency cannot be
met, then the task will fail with an :class:`ImpossibleDependency` error. This way, if the
scheduler realizes that a task can never be run, it won't sit indefinitely in the
scheduler clogging the pipeline.

The basic cases that are checked:

* depending on nonexistent messages
* `follow` dependencies were run on more than one machine and `all=True`
* any dependencies failed and `all=True,success=True,failure=False`
* all dependencies failed and `all=False,success=True,failure=False`

.. warning::

    This analysis has not been proven to be rigorous, so it is possible for tasks
    to become impossible to run in obscure situations; a timeout may therefore be
    a good choice.

Retries and Resubmit
====================

Retries
-------

Another flag for tasks is `retries`. This is an integer, specifying how many times
a task should be resubmitted after failure. This is useful for tasks that should still run
if their engine was shut down, or that may have some statistical chance of failing. The
default is to not retry tasks.
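
A sketch, assuming `retries` is passed the same way as the other task flags
shown earlier, and ``flaky_function`` stands in for your own code:

.. sourcecode:: ipython

    In [19]: ar = lview.apply_with_flags(flaky_function, retries=5)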

Resubmit
--------

Sometimes you may want to re-run a task. This could be because it failed for some reason, and
you have fixed the error, or because you want to restore the cluster to an interrupted state.
For this, the :class:`Client` has a :meth:`rc.resubmit` method. This simply takes one or more
msg_ids, and returns an :class:`AsyncHubResult` for the result(s). You cannot resubmit
a task that is pending - only those that have finished, either successfully or unsuccessfully.
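
For instance (a sketch, assuming ``rc`` is your :class:`Client` and ``ar`` is
the AsyncResult of a finished task):

.. sourcecode:: ipython

    In [20]: ahr = rc.resubmit(ar.msg_ids)

    In [21]: ahr.get()   # wait for the re-run results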

.. _parallel_schedulers:

Schedulers
==========

There are a variety of valid ways to determine where jobs should be assigned in a
load-balancing situation. In IPython, we support several standard schemes, and
even make it easy to define your own. The scheme can be selected via the ``scheme``
argument to :command:`ipcontroller`, or in the :attr:`TaskScheduler.schemename` attribute
of a controller config object.
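
For instance, a sketch of the config-file route, using the attribute name
given above, in :file:`ipcontroller_config.py`::

    # select the routing scheme used by the task scheduler
    c.TaskScheduler.schemename = 'weighted'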

The built-in routing schemes:

To select one of these schemes, simply do::

    $ ipcontroller --scheme=<schemename>

for instance::

    $ ipcontroller --scheme=lru

lru: Least Recently Used

    Always assign work to the least-recently-used engine. A close relative of
    round-robin, it will be fair with respect to the number of tasks, agnostic
    with respect to runtime of each task.

plainrandom: Plain Random

    Randomly picks an engine on which to run.

twobin: Two-Bin Random

    **Requires numpy**

    Pick two engines at random, and use the LRU of the two. This is known to be better
    than plain random in many cases, but requires a small amount of computation.

leastload: Least Load

    **This is the default scheme**

    Always assign tasks to the engine with the fewest outstanding tasks (LRU breaks ties).

weighted: Weighted Two-Bin Random

    **Requires numpy**

    Pick two engines at random using the number of outstanding tasks as inverse weights,
    and use the one with the lower load.

Pure ZMQ Scheduler
------------------

For maximum throughput, the 'pure' scheme is not Python at all, but a C-level
:class:`MonitoredQueue` from PyZMQ, which uses a ZeroMQ ``DEALER`` socket to perform all
load-balancing. This scheduler does not support any of the advanced features of the Python
:class:`.Scheduler`.

Disabled features when using the ZMQ Scheduler:

* Engine unregistration
    Task farming will be disabled if an engine unregisters.
    Further, if an engine is unregistered during computation, the scheduler may not recover.
* Dependencies
    Since there is no Python logic inside the Scheduler, routing decisions cannot be made
    based on message content.
* Early destination notification
    The Python schedulers know which engine gets which task, and notify the Hub. This
    allows graceful handling of Engines coming and going. There is no way to know
    where ZeroMQ messages have gone, so there is no way to know what tasks are on which
    engine until they *finish*. This makes recovery from engine shutdown very difficult.

.. note::

    TODO: performance comparisons

More details
============

The :class:`LoadBalancedView` has many more powerful features that allow quite a bit
of flexibility in how tasks are defined and run. The next places to look are
in the following classes:

* :class:`~IPython.parallel.client.view.LoadBalancedView`
* :class:`~IPython.parallel.client.asyncresult.AsyncResult`
* :meth:`~IPython.parallel.client.view.LoadBalancedView.apply`
* :mod:`~IPython.parallel.controller.dependency`

The following is an overview of how to use these classes together:

1. Create a :class:`Client` and :class:`LoadBalancedView`
2. Define some functions to be run as tasks
3. Submit your tasks using the :meth:`apply` method of your
   :class:`LoadBalancedView` instance.
4. Use :meth:`Client.get_result` to get the results of the
   tasks, or use the :meth:`AsyncResult.get` method of the results to wait
   for and then receive the results.
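
Putting those steps together, a minimal sketch, assuming a cluster is already
running and ``task`` stands in for your own function:

.. sourcecode:: ipython

    In [1]: from IPython.parallel import Client

    In [2]: rc = Client()                    # step 1

    In [3]: lview = rc.load_balanced_view()

    In [4]: def task(x):                     # step 2
       ...:     return x**2

    In [5]: ar = lview.apply(task, 4)        # step 3

    In [6]: ar.get()                         # step 4
    Out[6]: 16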

.. seealso::

    A demo of :ref:`DAG Dependencies <dag_dependencies>` with NetworkX and IPython.

============================================
Getting started with Windows HPC Server 2008
============================================

.. note::

    Not adapted to zmq yet

Introduction
============

The Python programming language is an increasingly popular language for
numerical computing. This is due to a unique combination of factors. First,
Python is a high-level and *interactive* language that is well matched to
interactive numerical work. Second, it is easy (oftentimes trivial) to
integrate legacy C/C++/Fortran code into Python. Third, a large number of
high-quality open source projects provide all the needed building blocks for
numerical computing: numerical arrays (NumPy), algorithms (SciPy), 2D/3D
Visualization (Matplotlib, Mayavi, Chaco), Symbolic Mathematics (Sage, Sympy)
and others.

The IPython project is a core part of this open-source toolchain and is
focused on creating a comprehensive environment for interactive and
exploratory computing in the Python programming language. It enables all of
the above tools to be used interactively and consists of two main components:

* An enhanced interactive Python shell with support for interactive plotting
  and visualization.
* An architecture for interactive parallel computing.

With these components, it is possible to perform all aspects of a parallel
computation interactively. This type of workflow is particularly relevant in
scientific and numerical computing where algorithms, code and data are
continually evolving as the user/developer explores a problem. The broad
trends in computing (commodity clusters, multicore, cloud computing, etc.)
make these capabilities of IPython particularly relevant.

While IPython is a cross platform tool, it has particularly strong support for
Windows based compute clusters running Windows HPC Server 2008. This document
describes how to get started with IPython on Windows HPC Server 2008. The
content and emphasis here is practical: installing IPython, configuring
IPython to use the Windows job scheduler and running example parallel programs
interactively. A more complete description of IPython's parallel computing
capabilities can be found in IPython's online documentation
(http://ipython.org/documentation.html).

Setting up your Windows cluster
===============================

This document assumes that you already have a cluster running Windows
HPC Server 2008. Here is a broad overview of what is involved with setting up
such a cluster:

1. Install Windows Server 2008 on the head and compute nodes in the cluster.
2. Set up the network configuration on each host. Each host should have a
   static IP address.
3. On the head node, activate the "Active Directory Domain Services" role
   and make the head node the domain controller.
4. Join the compute nodes to the newly created Active Directory (AD) domain.
5. Set up user accounts in the domain with shared home directories.
6. Install the HPC Pack 2008 on the head node to create a cluster.
7. Install the HPC Pack 2008 on the compute nodes.

More details about installing and configuring Windows HPC Server 2008 can be
found on the Windows HPC Home Page (http://www.microsoft.com/hpc). Regardless
of what steps you follow to set up your cluster, the remainder of this
document will assume that:

* There are domain users that can log on to the AD domain and submit jobs
  to the cluster scheduler.
* These domain users have shared home directories. While shared home
  directories are not required to use IPython, they make it much easier to
  use IPython.

Installation of IPython and its dependencies
============================================

IPython and all of its dependencies are freely available and open source.
These packages provide a powerful and cost-effective approach to numerical and
scientific computing on Windows. The following dependencies are needed to run
IPython on Windows:

* Python 2.6 or 2.7 (http://www.python.org)
* pywin32 (http://sourceforge.net/projects/pywin32/)
* PyReadline (https://launchpad.net/pyreadline)
* pyzmq (http://github.com/zeromq/pyzmq/downloads)
* IPython (http://ipython.org)

In addition, the following dependencies are needed to run the demos described
in this document:

* NumPy and SciPy (http://www.scipy.org)
* Matplotlib (http://matplotlib.sourceforge.net/)

The easiest way of obtaining these dependencies is through the Enthought
Python Distribution (EPD) (http://www.enthought.com/products/epd.php). EPD is
produced by Enthought, Inc. and contains all of these packages and others in a
single installer and is available free for academic users. While it is also
possible to download and install each package individually, this is a tedious
process. Thus, we highly recommend using EPD to install these packages on
Windows.

Regardless of how you install the dependencies, here are the steps you will
need to follow:

1. Install all of the packages listed above, either individually or using EPD
   on the head node, compute nodes and user workstations.

2. Make sure that :file:`C:\\Python27` and :file:`C:\\Python27\\Scripts` are
   in the system :envvar:`%PATH%` variable on each node.

3. Install the latest development version of IPython. This can be done by
   downloading the development version from the IPython website
   (http://ipython.org) and following the installation instructions.

Further details about installing IPython or its dependencies can be found in
the online IPython documentation (http://ipython.org/documentation.html).
Once you are finished with the installation, you can try IPython out by
opening a Windows Command Prompt and typing ``ipython``. This will
start IPython's interactive shell and you should see something like the
following screenshot:

.. image:: figs/ipython_shell.*

Starting an IPython cluster
===========================

To use IPython's parallel computing capabilities, you will need to start an
IPython cluster. An IPython cluster consists of one controller and multiple
engines:

IPython controller
    The IPython controller manages the engines and acts as a gateway between
    the engines and the client, which runs in the user's interactive IPython
    session. The controller is started using the :command:`ipcontroller`
    command.

IPython engine
    IPython engines run a user's Python code in parallel on the compute nodes.
    Engines are started using the :command:`ipengine` command.

Once these processes are started, a user can run Python code interactively and
in parallel on the engines from within the IPython shell using an appropriate
client. This includes the ability to interact with, plot and visualize data
from the engines.

IPython has a command line program called :command:`ipcluster` that automates
all aspects of starting the controller and engines on the compute nodes.
:command:`ipcluster` has full support for the Windows HPC job scheduler,
meaning that :command:`ipcluster` can use this job scheduler to start the
controller and engines. In our experience, the Windows HPC job scheduler is
particularly well suited for interactive applications, such as IPython. Once
:command:`ipcluster` is configured properly, a user can start an IPython
cluster from their local workstation almost instantly, without having to log
on to the head node (as is typically required by Unix based job schedulers).
This enables a user to move seamlessly between serial and parallel
computations.

In this section we show how to use :command:`ipcluster` to start an IPython
cluster using the Windows HPC Server 2008 job scheduler. To make sure that
:command:`ipcluster` is installed and working properly, you should first try
to start an IPython cluster on your local host. To do this, open a Windows
Command Prompt and type the following command::

    ipcluster start -n 2

You should see a number of messages printed to the screen, ending with
"IPython cluster: started". The result should look something like the following
screenshot:

.. image:: figs/ipcluster_start.*

At this point, the controller and two engines are running on your local host.
This configuration is useful for testing and for situations where you want to
take advantage of multiple cores on your local computer.

Now that we have confirmed that :command:`ipcluster` is working properly, we
describe how to configure and run an IPython cluster on an actual compute
cluster running Windows HPC Server 2008. Here is an outline of the needed
steps:

1. Create a cluster profile using: ``ipython profile create --parallel --profile=mycluster``

2. Edit configuration files in the directory :file:`.ipython\\profile_mycluster`

3. Start the cluster using: ``ipcluster start --profile=mycluster -n 32``

Creating a cluster profile
--------------------------

In most cases, you will have to create a cluster profile to use IPython on a
cluster. A cluster profile is a name (like "mycluster") that is associated
with a particular cluster configuration. The profile name is used by
:command:`ipcluster` when working with the cluster.

Associated with each cluster profile is a cluster directory. This cluster
directory is a specially named directory (typically located in the
:file:`.ipython` subdirectory of your home directory) that contains the
configuration files for a particular cluster profile, as well as log files and
security keys. The naming convention for cluster directories is:
:file:`profile_<profile name>`. Thus, the cluster directory for a profile named
"foo" would be :file:`.ipython\\profile_foo`.

To create a new cluster profile (named "mycluster") and the associated cluster
directory, type the following command at the Windows Command Prompt::

    ipython profile create --parallel --profile=mycluster

The output of this command is shown in the screenshot below. Notice how
the command prints out the location of the newly created cluster
directory.

.. image:: figs/ipcluster_create.*

Configuring a cluster profile
-----------------------------

Next, you will need to configure the newly created cluster profile by editing
the following configuration files in the cluster directory:

* :file:`ipcluster_config.py`
* :file:`ipcontroller_config.py`
* :file:`ipengine_config.py`

When :command:`ipcluster` is run, these configuration files are used to
determine how the engines and controller will be started. In most cases,
you will only have to set a few of the attributes in these files.

To configure :command:`ipcluster` to use the Windows HPC job scheduler, you
will need to edit the following attributes in the file
:file:`ipcluster_config.py`::

    # Set these at the top of the file to tell ipcluster to use the
    # Windows HPC job scheduler.
    c.IPClusterStart.controller_launcher = \
        'IPython.parallel.apps.launcher.WindowsHPCControllerLauncher'
    c.IPClusterEngines.engine_launcher = \
        'IPython.parallel.apps.launcher.WindowsHPCEngineSetLauncher'

    # Set these to the host name of the scheduler (head node) of your cluster.
    c.WindowsHPCControllerLauncher.scheduler = 'HEADNODE'
    c.WindowsHPCEngineSetLauncher.scheduler = 'HEADNODE'
244 | There are a number of other configuration attributes that can be set, but |
|
244 | There are a number of other configuration attributes that can be set, but | |
245 | in most cases these will be sufficient to get you started. |
|
245 | in most cases these will be sufficient to get you started. | |
246 |
|
246 | |||
.. warning::
    If any of your configuration attributes involve specifying the location
    of shared directories or files, you must make sure that you use UNC paths
    like :file:`\\\\host\\share`. It is also important to specify these paths
    as raw Python strings, e.g. ``r'\\host\share'``, so that the backslashes
    are not interpreted as escape sequences.

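For example, an attribute pointing at a shared directory might be set like
this (a minimal sketch; ``work_dir`` is a hypothetical attribute name, used
here only to illustrate the raw-string UNC form)::

    # Hypothetical attribute, for illustration only. The r'' prefix keeps
    # the backslashes in the UNC path from being read as escape sequences.
    c.IPClusterStart.work_dir = r'\\HEADNODE\ipython\mycluster'
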
Starting the cluster profile
----------------------------

Once a cluster profile has been configured, starting an IPython cluster using
the profile is simple::

    ipcluster start --profile=mycluster -n 15

The ``-n`` option tells :command:`ipcluster` how many engines to start (in
this case 15, matching the interactive session shown below). Stopping the
cluster is as simple as typing Control-C.
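
If the cluster was started in the background rather than from an open console,
it can also be shut down with the matching ``stop`` subcommand (assuming your
version of :command:`ipcluster` provides it)::

    ipcluster stop --profile=mycluster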

Using the HPC Job Manager
-------------------------

When ``ipcluster start`` is run the first time, :command:`ipcluster` creates
two XML job description files in the cluster directory:

* :file:`ipcontroller_job.xml`
* :file:`ipengineset_job.xml`

Once these files have been created, they can be imported into the HPC Job
Manager application. Then, the controller and engines for that profile can be
started using the HPC Job Manager directly, without using :command:`ipcluster`.
However, anytime the cluster profile is re-configured, ``ipcluster start``
must be run again to regenerate the XML job description files. The
following screenshot shows what the HPC Job Manager interface looks like
with a running IPython cluster.

.. image:: figs/hpc_job_manager.*

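If you prefer the command line to the Job Manager GUI, the same XML files can
also be submitted with the ``job`` utility from the HPC Pack client tools (a
sketch; substitute your own head node for ``HEADNODE``)::

    job submit /jobfile:ipcontroller_job.xml /scheduler:HEADNODE
    job submit /jobfile:ipengineset_job.xml /scheduler:HEADNODE
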
Performing a simple interactive parallel computation
====================================================

Once you have started your IPython cluster, you can start to use it. To do
this, open up a new Windows Command Prompt and start up IPython's interactive
shell by typing::

    ipython

Then you can create a :class:`MultiEngineClient` instance for your profile and
use the resulting instance to do a simple interactive parallel computation. In
the code and screenshot that follow, we take a simple Python function and
apply it to each element of an array of integers in parallel using the
:meth:`MultiEngineClient.map` method:

.. sourcecode:: ipython

    In [1]: from IPython.parallel import *

    In [2]: mec = MultiEngineClient(profile='mycluster')

    In [3]: mec.get_ids()
    Out[3]: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14]

    In [4]: def f(x):
       ...:     return x**10

    In [5]: mec.map(f, range(15))  # f is applied in parallel
    Out[5]:
    [0,
     1,
     1024,
     59049,
     1048576,
     9765625,
     60466176,
     282475249,
     1073741824,
     3486784401L,
     10000000000L,
     25937424601L,
     61917364224L,
     137858491849L,
     289254654976L]

The :meth:`map` method has the same signature as Python's builtin :func:`map`
function, but runs the calculation in parallel. More involved uses of
:class:`MultiEngineClient` are shown in the examples that follow.

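For comparison, the same computation can be written serially with the builtin
:func:`map`; both calls return the same list, but only ``mec.map`` distributes
the work (a minimal sketch reusing ``f`` and ``mec`` from the session above):

.. sourcecode:: ipython

    In [6]: serial = map(f, range(15))        # runs in this one process
    In [7]: parallel = mec.map(f, range(15))  # distributed across the engines

    In [8]: serial == parallel
    Out[8]: True
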
.. image:: figs/mec_simple.*