@@ -68,17 +68,17 @@ Manual starting using :command:`mpiexec`
68 | 68 | If you want to start the IPython engines using the :command:`mpiexec`, just
69 | 69 | do::
70 | 70 |
71 |    | $ mpiexec -n 4 ipengine --mpi=mpi4py
   | 71 | $ mpiexec -n 4 ipenginez --mpi=mpi4py
72 | 72 |
73 | 73 | This requires that you already have a controller running and that the FURL
74 | 74 | files for the engines are in place. We also have built in support for
75 | 75 | PyTrilinos [PyTrilinos]_, which can be used (assuming is installed) by
76 | 76 | starting the engines with::
77 | 77 |
78 |    | $ mpiexec -n 4 ipengine --mpi=pytrilinos
   | 78 | $ mpiexec -n 4 ipenginez --mpi=pytrilinos
79 | 79 |
80 | 80 | Automatic starting using PBS and :command:`ipclusterz`
81 |    | -----------------------------------------------------
   | 81 | ------------------------------------------------------
82 | 82 |
83 | 83 | The :command:`ipclusterz` command also has built-in integration with PBS. For
84 | 84 | more information on this approach, see our documentation on :ref:`ipclusterz
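For anyone trying out the ``--mpi=mpi4py`` mode above: each engine started this way comes up as an ordinary MPI rank, so plain mpi4py calls work on the engines. A minimal sanity check, assuming only that mpi4py is installed (nothing here is IPython-specific), is to have an engine report its rank::

    # sketch: run on an engine that was launched via mpiexec with --mpi=mpi4py;
    # these are standard mpi4py names, not part of the patch under review
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    print("rank %i of %i" % (comm.Get_rank(), comm.Get_size()))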
@@ -108,9 +108,14 @@ distributed array. Save the following text in a file called :file:`psum.py`:
108 | 108 | op=MPI.SUM)
109 | 109 | return rcvBuf
110 | 110 |
111 |     | Now, start an IPython cluster
    | 111 | Now, start an IPython cluster::
112 | 112 |
113 |     | $ ipclusterz mpi
    | 113 | $ ipclusterz start -p mpi -n 4
    | 114 |
    | 115 | .. note::
    | 116 |
    | 117 | It is assumed here that the mpi profile has been set up, as described :ref:`here
    | 118 | <parallel_process>`.
114 | 119 |
115 | 120 | Finally, connect to the cluster and use this function interactively. In this
116 | 121 | case, we create a random array on each engine and sum up all the random arrays
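Only the last lines of :file:`psum.py` appear in the hunk above. For readers following along, here is a sketch of a complete file consistent with that fragment; the ``psum`` name, ``rcvBuf``, and ``op=MPI.SUM`` come from the diff itself, while the local-sum variable and buffer setup are assumptions::

    # psum.py -- sketch of a parallel sum across engines, assuming mpi4py + numpy
    from mpi4py import MPI
    import numpy as np

    def psum(a):
        # local sum on this engine, stored in a 0-d double array so it
        # exposes a buffer that Allreduce can read from
        locsum = np.array(np.sum(a), 'd')
        rcvBuf = np.array(0.0, 'd')
        # combine the local sums from all MPI ranks into rcvBuf
        MPI.COMM_WORLD.Allreduce([locsum, MPI.DOUBLE],
                                 [rcvBuf, MPI.DOUBLE],
                                 op=MPI.SUM)
        return rcvBuf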
@@ -120,61 +125,25 @@ using our :func:`psum` function:
120 | 125 |
121 | 126 | In [1]: from IPython.zmq.parallel import client
122 | 127 |
123 |     | In [2]: c = client.Client()
124 |     |
125 |     | In [3]: mec.activate()
    | 128 | In [2]: %load_ext parallel_magic
126 | 129 |
127 |     | In [4]: px import numpy as np
128 |     | Parallel execution on engines: all
129 |     | Out[4]:
130 |     | <Results List>
131 |     | [0] In [13]: import numpy as np
132 |     | [1] In [13]: import numpy as np
133 |     | [2] In [13]: import numpy as np
134 |     | [3] In [13]: import numpy as np
    | 130 | In [3]: c = client.Client(profile='mpi')
135 | 131 |
136 |     | In [6]: px a = np.random.rand(100)
137 |     | Parallel execution on engines: all
138 |     | Out[6]:
139 |     | <Results List>
140 |     | [0] In [15]: a = np.random.rand(100)
141 |     | [1] In [15]: a = np.random.rand(100)
142 |     | [2] In [15]: a = np.random.rand(100)
143 |     | [3] In [15]: a = np.random.rand(100)
    | 132 | In [4]: view = c[:]
144 | 133 |
145 |     | In [7]: px from psum import psum
146 |     | Parallel execution on engines: all
147 |     | Out[7]:
148 |     | <Results List>
149 |     | [0] In [16]: from psum import psum
150 |     | [1] In [16]: from psum import psum
151 |     | [2] In [16]: from psum import psum
152 |     | [3] In [16]: from psum import psum
    | 134 | In [5]: view.activate()
153 | 135 |
154 |     | In [8]: px s = psum(a)
155 |     | Parallel execution on engines: all
156 |     | Out[8]:
157 |     | <Results List>
158 |     | [0] In [17]: s = psum(a)
159 |     | [1] In [17]: s = psum(a)
160 |     | [2] In [17]: s = psum(a)
161 |     | [3] In [17]: s = psum(a)
    | 136 | # run the contents of the file on each engine:
    | 137 | In [6]: view.run('psum.py')
162 | 138 |
163 |     | In [9]: px print s
164 |     | Parallel execution on engines: all
165 |     | Out[9]:
166 |     | <Results List>
167 |     | [0] In [18]: print s
168 |     | [0] Out[18]: 187.451545803
    | 139 | In [6]: px a = np.random.rand(100)
    | 140 | Parallel execution on engines: [0,1,2,3]
169 | 141 |
170 |     | [1] In [18]: print s
171 |     | [1] Out[18]: 187.451545803
    | 142 | In [8]: px s = psum(a)
    | 143 | Parallel execution on engines: [0,1,2,3]
172 | 144 |
173 |     | [2] In [18]: print s
174 |     | [2] Out[18]: 187.451545803
175 |     |
176 |     | [3] In [18]: print s
177 |     | [3] Out[18]: 187.451545803
    | 145 | In [9]: view['s']
    | 146 | Out[9]: [187.451545803,187.451545803,187.451545803,187.451545803]
178 | 147 |
179 | 148 | Any Python code that makes calls to MPI can be used in this manner, including
180 | 149 | compiled C, C++ and Fortran libraries that have been exposed to Python.
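The updated session drives the engines through the ``%px`` magic after ``view.activate()``. The same computation can also be scripted directly against the view; the sketch below assumes the 0.11-era ``IPython.zmq.parallel`` client shown in the hunk, and the ``execute`` method on the view is an assumption not shown in the diff::

    # sketch: non-interactive form of the session above, assuming the mpi
    # profile is set up and the cluster is already running
    from IPython.zmq.parallel import client

    c = client.Client(profile='mpi')        # connect using the mpi profile
    view = c[:]                             # a view on all engines
    view.run('psum.py')                     # define psum on every engine
    view.execute('import numpy as np')      # harmless if psum.py already did this
    view.execute('a = np.random.rand(100)') # a different random array per engine
    view.execute('s = psum(a)')             # collective sum via MPI Allreduce
    print(view['s'])                        # one identical total per engine

Either way, every engine ends up holding the same global total in ``s``, which is what ``view['s']`` returns in the new session text.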
@@ -274,8 +274,7 @@ connections on all its interfaces, by adding in :file:`ipcontrollerz_config`:
274 | 274 |
275 | 275 | .. sourcecode:: python
276 | 276 |
277 |     | c.
278 |     | c.HubFactory.engine_ip = '*'
    | 277 | c.RegistrationFactory.ip = '*'
279 | 278 |
280 | 279 | You can now run the cluster with::
281 | 280 |
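For context on where the changed line lives: the setting goes into the profile's :file:`ipcontrollerz_config.py`. A minimal sketch follows; the ``get_config()`` call is the standard IPython configuration-file convention and is not part of this patch::

    # ipcontrollerz_config.py -- sketch showing only the line this patch touches
    c = get_config()

    # listen for registrations on all interfaces, not just localhost
    c.RegistrationFactory.ip = '*'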
@@ -447,7 +446,7 @@ any point in the future.
447 | 446 | To do this, the only thing you have to do is specify the `-r` flag, so that
448 | 447 | the connection information in the JSON files remains accurate::
449 | 448 |
450 |     | $ ipcontrollerz -r
    | 449 | $ ipcontrollerz -r
451 | 450 |
452 | 451 | Then, just copy the JSON files over the first time and you are set. You can
453 | 452 | start and stop the controller and engines any many times as you want in the