expand engine/controller_launcher_class helpstring and docs...
MinRK
@@ -255,11 +255,22 @@ class IPClusterEngines(BaseParallelApplication):
         MPIExecEngineSetLauncher : use mpiexec to launch in an MPI environment
         PBSEngineSetLauncher : use PBS (qsub) to submit engines to a batch queue
         SGEEngineSetLauncher : use SGE (qsub) to submit engines to a batch queue
+        LSFEngineSetLauncher : use LSF (bsub) to submit engines to a batch queue
         SSHEngineSetLauncher : use SSH to start the controller
                 Note that SSH does *not* move the connection files
                 around, so you will likely have to do this manually
                 unless the machines are on a shared file system.
         WindowsHPCEngineSetLauncher : use Windows HPC
+
+        If you are using one of IPython's builtin launchers, you can specify just the
+        prefix, e.g:
+
+            c.IPClusterEngines.engine_launcher_class = 'SSH'
+
+        or:
+
+            ipcluster start --engines 'MPIExec'
+
         """
         )
     daemonize = Bool(False, config=True,
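The prefix shorthand this hunk documents can be illustrated with a short sketch. Note that `expand_engine_launcher` and `LAUNCHER_PACKAGE` are hypothetical names used only for illustration, not IPython's actual implementation:

```python
# Hypothetical sketch of launcher-name expansion: a bare prefix such as
# 'SSH' grows the class suffix, a bare class name grows the package path,
# and a fully specified object name passes through unchanged.
LAUNCHER_PACKAGE = 'IPython.parallel.apps.launcher'

def expand_engine_launcher(name):
    """Expand 'SSH' -> 'SSHEngineSetLauncher' -> full import path."""
    if '.' in name:
        # already a fully specified object name
        return name
    if not name.endswith('EngineSetLauncher'):
        # bare prefix: append the class suffix
        name += 'EngineSetLauncher'
    return '%s.%s' % (LAUNCHER_PACKAGE, name)
```

This mirrors the three equivalent spellings shown in the documentation part of this changeset.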
@@ -420,8 +431,19 @@ class IPClusterStart(IPClusterEngines):
         MPIExecControllerLauncher : use mpiexec to launch engines in an MPI universe
         PBSControllerLauncher : use PBS (qsub) to submit engines to a batch queue
         SGEControllerLauncher : use SGE (qsub) to submit engines to a batch queue
+        LSFControllerLauncher : use LSF (bsub) to submit engines to a batch queue
         SSHControllerLauncher : use SSH to start the controller
         WindowsHPCControllerLauncher : use Windows HPC
+
+        If you are using one of IPython's builtin launchers, you can specify just the
+        prefix, e.g:
+
+            c.IPClusterStart.controller_launcher_class = 'SSH'
+
+        or:
+
+            ipcluster start --controller 'MPIExec'
+
         """
         )
     reset = Bool(False, config=True,
@@ -141,9 +141,39 @@ Using various batch systems with :command:`ipcluster`
 
 :command:`ipcluster` has a notion of Launchers that can start controllers
 and engines with various remote execution schemes.  Currently supported
-models include :command:`ssh`, :command:`mpiexec`, PBS-style (Torque, SGE),
+models include :command:`ssh`, :command:`mpiexec`, PBS-style (Torque, SGE, LSF),
 and Windows HPC Server.
 
+In general, these are configured by the :attr:`IPClusterEngines.engine_set_launcher_class`,
+and :attr:`IPClusterStart.controller_launcher_class` configurables, which can be the
+fully specified object name (e.g. ``'IPython.parallel.apps.launcher.LocalControllerLauncher'``),
+but if you are using IPython's builtin launchers, you can specify just the class name,
+or even just the prefix e.g:
+
+.. sourcecode:: python
+
+    c.IPClusterEngines.engine_launcher_class = 'SSH'
+    # equivalent to
+    c.IPClusterEngines.engine_launcher_class = 'SSHEngineSetLauncher'
+    # both of which expand to
+    c.IPClusterEngines.engine_launcher_class = 'IPython.parallel.apps.launcher.SSHEngineSetLauncher'
+
+The shortest form being of particular use on the command line, where all you need to do to
+get an IPython cluster running with engines started with MPI is:
+
+.. sourcecode:: bash
+
+    $> ipcluster start --engines=MPIExec
+
+Assuming that the default MPI config is sufficient.
+
+.. note::
+
+    shortcuts for builtin launcher names were added in 0.12, as was the ``_class`` suffix
+    on the configurable names.  If you use the old 0.11 names (e.g. ``engine_set_launcher``),
+    they will still work, but you will get a deprecation warning that the name has changed.
+
+
 .. note::
 
     The Launchers and configuration are designed in such a way that advanced
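The backwards-compatibility behaviour described in the note above can be sketched as follows. This is a simplified illustration with hypothetical helper names; the real mapping is done by IPython's traitlets configuration machinery:

```python
import warnings

# Hypothetical sketch: map a deprecated 0.11 config name to its 0.12
# replacement, emitting a DeprecationWarning as the note describes.
DEPRECATED_NAMES = {
    'engine_launcher': 'engine_launcher_class',
    'controller_launcher': 'controller_launcher_class',
}

def resolve_config_name(name):
    """Return the current config name, warning if a deprecated one is used."""
    if name in DEPRECATED_NAMES:
        new = DEPRECATED_NAMES[name]
        warnings.warn("%s is deprecated, use %s" % (name, new),
                      DeprecationWarning)
        return new
    return name
```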
@@ -170,7 +200,7 @@ There, instruct ipcluster to use the MPIExec launchers by adding the lines:
 
 .. sourcecode:: python
 
-    c.IPClusterEngines.engine_launcher = 'IPython.parallel.apps.launcher.MPIExecEngineSetLauncher'
+    c.IPClusterEngines.engine_launcher_class = 'MPIExecEngineSetLauncher'
 
 If the default MPI configuration is correct, then you can now start your cluster, with::
 
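When the default MPI setup is not sufficient, the MPIExec launchers expose their own configurables. A hedged sketch of an :file:`ipcluster_config.py` fragment; the trait name ``mpi_args`` is an assumption here, so verify it for your IPython version with ``ipcluster start --help-all``:

```python
# ipcluster_config.py -- assumed trait name (mpi_args); verify against
# your IPython version with `ipcluster start --help-all`.
c.IPClusterEngines.engine_launcher_class = 'MPIExec'
c.MPIExecEngineSetLauncher.mpi_args = ['--hostfile', 'hosts.txt']
```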
@@ -185,7 +215,7 @@ If you have a reason to also start the Controller with mpi, you can specify:
 
 .. sourcecode:: python
 
-    c.IPClusterStart.controller_launcher = 'IPython.parallel.apps.launcher.MPIExecControllerLauncher'
+    c.IPClusterStart.controller_launcher_class = 'MPIExecControllerLauncher'
 
 .. note::
 
@@ -226,10 +256,8 @@ and engines:
 
 .. sourcecode:: python
 
-    c.IPClusterStart.controller_launcher = \
-        'IPython.parallel.apps.launcher.PBSControllerLauncher'
-    c.IPClusterEngines.engine_launcher = \
-        'IPython.parallel.apps.launcher.PBSEngineSetLauncher'
+    c.IPClusterStart.controller_launcher_class = 'PBSControllerLauncher'
+    c.IPClusterEngines.engine_launcher_class = 'PBSEngineSetLauncher'
 
 .. note::
 
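The PBS launchers are usually paired with a batch submission template. A hedged :file:`ipcluster_config.py` sketch; the trait name ``batch_template_file`` is drawn from IPython's parallel documentation of this era, so confirm it for your version:

```python
# ipcluster_config.py -- assumed trait name (batch_template_file); the
# PBS launchers can also take an inline template string instead.
c.IPClusterStart.controller_launcher_class = 'PBS'
c.IPClusterEngines.engine_launcher_class = 'PBS'
c.PBSEngineSetLauncher.batch_template_file = 'pbs.engine.template'
```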
@@ -355,12 +383,11 @@ To use this mode, select the SSH launchers in :file:`ipcluster_config.py`:
 
 .. sourcecode:: python
 
-    c.IPClusterEngines.engine_launcher = \
-        'IPython.parallel.apps.launcher.SSHEngineSetLauncher'
+    c.IPClusterEngines.engine_launcher_class = 'SSHEngineSetLauncher'
     # and if the Controller is also to be remote:
-    c.IPClusterStart.controller_launcher = \
-        'IPython.parallel.apps.launcher.SSHControllerLauncher'
+    c.IPClusterStart.controller_launcher_class = 'SSHControllerLauncher'
 
 
 The controller's remote location and configuration can be specified:
 
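Selecting the SSH launchers is typically combined with telling them which hosts to use. A hedged sketch; the ``engines`` dict trait is taken from IPython's SSH-launcher documentation of this era, so check it against your version:

```python
# ipcluster_config.py -- assumed trait name (engines); the dict maps
# hostname -> number of engines to start on that host.
c.SSHEngineSetLauncher.engines = {
    'host1.example.com': 2,
    'host2.example.com': 4,
}
```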
@@ -232,10 +232,8 @@ will need to edit the following attributes in the file
 
     # Set these at the top of the file to tell ipcluster to use the
     # Windows HPC job scheduler.
-    c.IPClusterStart.controller_launcher = \
-        'IPython.parallel.apps.launcher.WindowsHPCControllerLauncher'
-    c.IPClusterEngines.engine_launcher = \
-        'IPython.parallel.apps.launcher.WindowsHPCEngineSetLauncher'
+    c.IPClusterStart.controller_launcher_class = 'WindowsHPCControllerLauncher'
+    c.IPClusterEngines.engine_launcher_class = 'WindowsHPCEngineSetLauncher'
 
     # Set these to the host name of the scheduler (head node) of your cluster.
     c.WindowsHPCControllerLauncher.scheduler = 'HEADNODE'