rename MPIExecLaunchers to MPILaunchers...
MinRK
@@ -242,25 +242,25 @@ class IPClusterEngines(BaseParallelApplication):
     engine_launcher_class = DottedObjectName('LocalEngineSetLauncher',
         config=True,
         help="""The class for launching a set of Engines. Change this value
-        to use various batch systems to launch your engines, such as PBS,SGE,MPIExec,etc.
+        to use various batch systems to launch your engines, such as PBS,SGE,MPI,etc.
         Each launcher class has its own set of configuration options, for making sure
        it will work in your environment.
 
         You can also write your own launcher, and specify it's absolute import path,
         as in 'mymodule.launcher.FTLEnginesLauncher`.
 
-        Examples include:
+        IPython's bundled examples include:
 
-            LocalEngineSetLauncher : start engines locally as subprocesses [default]
-            MPIExecEngineSetLauncher : use mpiexec to launch in an MPI environment
-            PBSEngineSetLauncher : use PBS (qsub) to submit engines to a batch queue
-            SGEEngineSetLauncher : use SGE (qsub) to submit engines to a batch queue
-            LSFEngineSetLauncher : use LSF (bsub) to submit engines to a batch queue
-            SSHEngineSetLauncher : use SSH to start the controller
+            Local : start engines locally as subprocesses [default]
+            MPI : use mpiexec to launch engines in an MPI environment
+            PBS : use PBS (qsub) to submit engines to a batch queue
+            SGE : use SGE (qsub) to submit engines to a batch queue
+            LSF : use LSF (bsub) to submit engines to a batch queue
+            SSH : use SSH to start the controller
                     Note that SSH does *not* move the connection files
                     around, so you will likely have to do this manually
                     unless the machines are on a shared file system.
-            WindowsHPCEngineSetLauncher : use Windows HPC
+            WindowsHPC : use Windows HPC
 
         If you are using one of IPython's builtin launchers, you can specify just the
         prefix, e.g:
@@ -269,7 +269,7 @@ class IPClusterEngines(BaseParallelApplication):
 
         or:
 
-            ipcluster start --engines 'MPIExec'
+            ipcluster start --engines=MPI
 
         """
     )
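
As a usage illustration of the help text above (not part of the diff), setting the engine launcher in a profile's ipcluster_config.py might look like the following sketch; the bare 'MPI' prefix form relies on the name-resolution logic shown in a later hunk:

```python
# Hypothetical ipcluster_config.py snippet illustrating the options above.
c = get_config()  # provided by IPython inside config files

# builtin launchers can be named by prefix alone:
c.IPClusterEngines.engine_launcher_class = 'MPI'

# custom launchers still need the full dotted import path, e.g.
# (module and class names here are illustrative):
# c.IPClusterEngines.engine_launcher_class = 'mymodule.launcher.FTLEnginesLauncher'
```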
@@ -307,7 +307,7 @@ class IPClusterEngines(BaseParallelApplication):
             # not a module, presume it's the raw name in apps.launcher
             if kind and kind not in clsname:
                 # doesn't match necessary full class name, assume it's
-                # just 'PBS' or 'MPIExec' prefix:
+                # just 'PBS' or 'MPI' prefix:
                 clsname = clsname + kind + 'Launcher'
             clsname = 'IPython.parallel.apps.launcher.'+clsname
         try:
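
A standalone sketch of the prefix resolution above, pulled out of its surrounding method so it can be run on its own (the function name is mine, not IPython's):

```python
def resolve_launcher(clsname, kind):
    """Expand a launcher name per the logic in the hunk above.

    kind is 'EngineSet' or 'Controller'; clsname may be a bare
    prefix ('MPI'), a full class name, or a dotted import path.
    """
    if '.' not in clsname:
        # not a module, presume it's the raw name in apps.launcher
        if kind and kind not in clsname:
            # just a 'PBS' or 'MPI' prefix:
            clsname = clsname + kind + 'Launcher'
        clsname = 'IPython.parallel.apps.launcher.' + clsname
    return clsname

assert resolve_launcher('MPI', 'EngineSet') == \
    'IPython.parallel.apps.launcher.MPIEngineSetLauncher'
```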
@@ -451,20 +451,23 @@ class IPClusterStart(IPClusterEngines):
     controller_launcher_class = DottedObjectName('LocalControllerLauncher',
         config=True,
         help="""The class for launching a Controller. Change this value if you want
-        your controller to also be launched by a batch system, such as PBS,SGE,MPIExec,etc.
+        your controller to also be launched by a batch system, such as PBS,SGE,MPI,etc.
 
         Each launcher class has its own set of configuration options, for making sure
         it will work in your environment.
+
+        Note that using a batch launcher for the controller *does not* put it
+        in the same batch job as the engines, so they will still start separately.
 
-        Examples include:
+        IPython's bundled examples include:
 
-            LocalControllerLauncher : start engines locally as subprocesses
-            MPIExecControllerLauncher : use mpiexec to launch engines in an MPI universe
-            PBSControllerLauncher : use PBS (qsub) to submit engines to a batch queue
-            SGEControllerLauncher : use SGE (qsub) to submit engines to a batch queue
-            LSFControllerLauncher : use LSF (bsub) to submit engines to a batch queue
-            SSHControllerLauncher : use SSH to start the controller
-            WindowsHPCControllerLauncher : use Windows HPC
+            Local : start engines locally as subprocesses
+            MPI : use mpiexec to launch the controller in an MPI universe
+            PBS : use PBS (qsub) to submit the controller to a batch queue
+            SGE : use SGE (qsub) to submit the controller to a batch queue
+            LSF : use LSF (bsub) to submit the controller to a batch queue
+            SSH : use SSH to start the controller
+            WindowsHPC : use Windows HPC
 
         If you are using one of IPython's builtin launchers, you can specify just the
         prefix, e.g:
@@ -473,7 +476,7 @@ class IPClusterStart(IPClusterEngines):
 
         or:
 
-            ipcluster start --controller 'MPIExec'
+            ipcluster start --controller=MPI
 
         """
     )
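
Combined with the engine option from the first hunk, launching the whole cluster under MPI is then a two-line profile config; a sketch (and note, per the new help text above, the controller still starts in its own job, separate from the engines):

```python
# Sketch: both launchers named by prefix in ipcluster_config.py.
c = get_config()
c.IPClusterStart.controller_launcher_class = 'MPI'  # -> MPIControllerLauncher
c.IPClusterEngines.engine_launcher_class = 'MPI'    # -> MPIEngineSetLauncher
```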
@@ -440,11 +440,11 @@ class LocalEngineSetLauncher(LocalEngineLauncher):
 
 
 #-----------------------------------------------------------------------------
-# MPIExec launchers
+# MPI launchers
 #-----------------------------------------------------------------------------
 
 
-class MPIExecLauncher(LocalProcessLauncher):
+class MPILauncher(LocalProcessLauncher):
     """Launch an external process using mpiexec."""
 
     mpi_cmd = List(['mpiexec'], config=True,
@@ -459,6 +459,18 @@ class MPIExecLauncher(LocalProcessLauncher):
         help="The command line argument to the program."
     )
     n = Integer(1)
+
+    def __init__(self, *args, **kwargs):
+        # deprecation for old MPIExec names:
+        config = kwargs.get('config', {})
+        for oldname in ('MPIExecLauncher', 'MPIExecControllerLauncher', 'MPIExecEngineSetLauncher'):
+            deprecated = config.get(oldname)
+            if deprecated:
+                newname = oldname.replace('MPIExec', 'MPI')
+                config[newname].update(deprecated)
+                self.log.warn("WARNING: %s name has been deprecated, use %s", oldname, newname)
+
+        super(MPILauncher, self).__init__(*args, **kwargs)
 
     def find_args(self):
         """Build self.args using all the fields."""
@@ -468,10 +480,10 @@ class MPIExecLauncher(LocalProcessLauncher):
     def start(self, n):
         """Start n instances of the program using mpiexec."""
         self.n = n
-        return super(MPIExecLauncher, self).start()
+        return super(MPILauncher, self).start()
 
 
-class MPIExecControllerLauncher(MPIExecLauncher, ControllerMixin):
+class MPIControllerLauncher(MPILauncher, ControllerMixin):
     """Launch a controller using mpiexec."""
 
     # alias back to *non-configurable* program[_args] for use in find_args()
@@ -487,11 +499,11 @@ class MPIExecControllerLauncher(MPIExecLauncher, ControllerMixin):
 
     def start(self):
         """Start the controller by profile_dir."""
-        self.log.info("Starting MPIExecControllerLauncher: %r" % self.args)
-        return super(MPIExecControllerLauncher, self).start(1)
+        self.log.info("Starting MPIControllerLauncher: %r", self.args)
+        return super(MPIControllerLauncher, self).start(1)
 
 
-class MPIExecEngineSetLauncher(MPIExecLauncher, EngineMixin):
+class MPIEngineSetLauncher(MPILauncher, EngineMixin):
     """Launch engines using mpiexec"""
 
     # alias back to *non-configurable* program[_args] for use in find_args()
@@ -508,8 +520,34 @@ class MPIExecEngineSetLauncher(MPIExecLauncher, EngineMixin):
     def start(self, n):
         """Start n engines by profile or profile_dir."""
         self.n = n
-        self.log.info('Starting MPIExecEngineSetLauncher: %r' % self.args)
-        return super(MPIExecEngineSetLauncher, self).start(n)
+        self.log.info('Starting MPIEngineSetLauncher: %r', self.args)
+        return super(MPIEngineSetLauncher, self).start(n)
+
+# deprecated MPIExec names
+class DeprecatedMPILauncher(object):
+    def warn(self):
+        oldname = self.__class__.__name__
+        newname = oldname.replace('MPIExec', 'MPI')
+        self.log.warn("WARNING: %s name is deprecated, use %s", oldname, newname)
+
+class MPIExecLauncher(MPILauncher, DeprecatedMPILauncher):
+    """Deprecated, use MPILauncher"""
+    def __init__(self, *args, **kwargs):
+        super(MPIExecLauncher, self).__init__(*args, **kwargs)
+        self.warn()
+
+class MPIExecControllerLauncher(MPIControllerLauncher, DeprecatedMPILauncher):
+    """Deprecated, use MPIControllerLauncher"""
+    def __init__(self, *args, **kwargs):
+        super(MPIExecControllerLauncher, self).__init__(*args, **kwargs)
+        self.warn()
+
+class MPIExecEngineSetLauncher(MPIEngineSetLauncher, DeprecatedMPILauncher):
+    """Deprecated, use MPIEngineSetLauncher"""
+    def __init__(self, *args, **kwargs):
+        super(MPIExecEngineSetLauncher, self).__init__(*args, **kwargs)
+        self.warn()
+
 
 #-----------------------------------------------------------------------------
 # SSH launchers
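
The shim pattern above, subclassing the new class plus a warning mixin, generalizes to any rename; a minimal self-contained demo (with a stand-in launcher, so it runs without IPython):

```python
# Minimal demo of the rename-with-deprecation pattern used above.
class MPILauncher(object):          # stand-in for the real launcher
    def start(self):
        return 'started'

class DeprecatedMPILauncher(object):
    def warn(self):
        oldname = self.__class__.__name__
        newname = oldname.replace('MPIExec', 'MPI')
        print("WARNING: %s name is deprecated, use %s" % (oldname, newname))

class MPIExecLauncher(MPILauncher, DeprecatedMPILauncher):
    """Deprecated, use MPILauncher"""
    def __init__(self, *args, **kwargs):
        super(MPIExecLauncher, self).__init__(*args, **kwargs)
        self.warn()

MPIExecLauncher().start()  # warns once at construction, then behaves as MPILauncher
```

Listing the new class first in the bases keeps behavior identical to the new launcher; the mixin contributes only the warning.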
@@ -1149,9 +1187,9 @@ local_launchers = [
     LocalEngineSetLauncher,
 ]
 mpi_launchers = [
-    MPIExecLauncher,
-    MPIExecControllerLauncher,
-    MPIExecEngineSetLauncher,
+    MPILauncher,
+    MPIControllerLauncher,
+    MPIEngineSetLauncher,
 ]
 ssh_launchers = [
     SSHLauncher,
@@ -48,11 +48,11 @@ these things to happen.
 Automatic starting using :command:`mpiexec` and :command:`ipcluster`
 --------------------------------------------------------------------
 
-The easiest approach is to use the `MPIExec` Launchers in :command:`ipcluster`,
+The easiest approach is to use the `MPI` Launchers in :command:`ipcluster`,
 which will first start a controller and then a set of engines using
 :command:`mpiexec`::
 
-    $ ipcluster start -n 4 --elauncher=MPIExecEngineSetLauncher
+    $ ipcluster start -n 4 --engines=MPIEngineSetLauncher
 
 This approach is best as interrupting :command:`ipcluster` will automatically
 stop and clean up the controller and engines.
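
Once such a cluster is up, a quick sanity check that the engines really share one MPI world is to ask each for its rank; a sketch using the IPython.parallel client API of this era (assumes mpi4py is importable on the engines):

```python
from IPython.parallel import Client

rc = Client()    # connect to the controller of the running cluster
view = rc[:]     # a DirectView on all engines

def mpi_rank():
    from mpi4py import MPI
    return MPI.COMM_WORLD.Get_rank()

# with 4 engines in one MPI world, this returns [0, 1, 2, 3] (in some order)
print(view.apply_sync(mpi_rank))
```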
@@ -63,14 +63,14 @@ Manual starting using :command:`mpiexec`
 If you want to start the IPython engines using the :command:`mpiexec`, just
 do::
 
-    $ mpiexec n=4 ipengine --mpi=mpi4py
+    $ mpiexec -n 4 ipengine --mpi=mpi4py
 
 This requires that you already have a controller running and that the FURL
 files for the engines are in place. We also have built in support for
 PyTrilinos [PyTrilinos]_, which can be used (assuming is installed) by
 starting the engines with::
 
-    $ mpiexec n=4 ipengine --mpi=pytrilinos
+    $ mpiexec -n 4 ipengine --mpi=pytrilinos
 
 Automatic starting using PBS and :command:`ipcluster`
 ------------------------------------------------------
@@ -163,7 +163,7 @@ get an IPython cluster running with engines started with MPI is:
 
 .. sourcecode:: bash
 
-    $> ipcluster start --engines=MPIExec
+    $> ipcluster start --engines=MPI
 
 Assuming that the default MPI config is sufficient.
 
169
@@ -196,11 +196,11 b' If these are satisfied, you can create a new profile::'
196
196
197 and edit the file :file:`IPYTHONDIR/profile_mpi/ipcluster_config.py`.
197 and edit the file :file:`IPYTHONDIR/profile_mpi/ipcluster_config.py`.
198
198
199 There, instruct ipcluster to use the MPIExec launchers by adding the lines:
199 There, instruct ipcluster to use the MPI launchers by adding the lines:
200
200
201 .. sourcecode:: python
201 .. sourcecode:: python
202
202
203 c.IPClusterEngines.engine_launcher_class = 'MPIExecEngineSetLauncher'
203 c.IPClusterEngines.engine_launcher_class = 'MPIEngineSetLauncher'
204
204
205 If the default MPI configuration is correct, then you can now start your cluster, with::
205 If the default MPI configuration is correct, then you can now start your cluster, with::
206
206
@@ -215,7 +215,7 @@ If you have a reason to also start the Controller with mpi, you can specify:
 
 .. sourcecode:: python
 
-    c.IPClusterStart.controller_launcher_class = 'MPIExecControllerLauncher'
+    c.IPClusterStart.controller_launcher_class = 'MPIControllerLauncher'
 
 .. note::
 
@@ -140,11 +140,15 @@ Backwards incompatible changes
 
 would now be specified as::
 
-    IPClusterEngines.engine_launcher_class = 'MPIExec'
+    IPClusterEngines.engine_launcher_class = 'MPI'
     IPClusterStart.controller_launcher_class = 'SSH'
 
 The full path will still work, and is necessary for using custom launchers not in
 IPython's launcher module.
+
+Further, MPIExec launcher names are now prefixed with just MPI, to better match
+other batch launchers, and be generally more intuitive. The MPIExec names are
+deprecated, but continue to work.
 
 * For embedding a shell, note that the parameter ``user_global_ns`` has been
   replaced by ``user_module``, and expects a module-like object, rather than