Adding support for mpiexec as well as mpirun....
Brian Granger
@@ -513,7 +513,8 @@ def main_local(args):
     dstart.addErrback(lambda f: f.raiseException())
 
 
-def main_mpirun(args):
+def main_mpi(args):
+    print vars(args)
     cont_args = []
     cont_args.append('--logfile=%s' % pjoin(args.logdir,'ipcontroller'))
 
@@ -524,7 +525,7 @@ def main_mpirun(args):
     cl = ControllerLauncher(extra_args=cont_args)
     dstart = cl.start()
     def start_engines(cont_pid):
-        raw_args = ['mpirun']
+        raw_args = [args.cmd]
        raw_args.extend(['-n',str(args.n)])
        raw_args.append('ipengine')
        raw_args.append('-l')
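With this change, the engine launch command is assembled from whichever launcher name the subcommand supplied, so ``mpirun`` and ``mpiexec`` share the same code path. A minimal standalone sketch of the idea (the helper below is illustrative only; ipcluster itself drives the process through its Twisted-based launchers)::

    from os.path import join as pjoin

    def build_engine_command(cmd, n, logdir):
        # cmd is 'mpirun' or 'mpiexec', whichever subcommand was used
        raw_args = [cmd]
        raw_args.extend(['-n', str(n)])   # number of engines to start
        raw_args.append('ipengine')
        raw_args.append('-l')
        raw_args.append(pjoin(logdir, 'ipengine'))
        return raw_args

    # build_engine_command('mpiexec', 4, '/tmp/logs')
    # -> ['mpiexec', '-n', '4', 'ipengine', '-l', '/tmp/logs/ipengine']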
@@ -665,7 +666,7 @@ def get_args():
 
     parser_mpirun = subparsers.add_parser(
         'mpirun',
-        help='run a cluster using mpirun',
+        help='run a cluster using mpirun (mpiexec also works)',
         parents=[base_parser]
     )
     parser_mpirun.add_argument(
@@ -674,7 +675,20 @@ def get_args():
         dest="mpi", # Don't put a default here to allow no MPI support
         help="how to call MPI_Init (default=mpi4py)"
     )
-    parser_mpirun.set_defaults(func=main_mpirun)
+    parser_mpirun.set_defaults(func=main_mpi, cmd='mpirun')
+
+    parser_mpiexec = subparsers.add_parser(
+        'mpiexec',
+        help='run a cluster using mpiexec (mpirun also works)',
+        parents=[base_parser]
+    )
+    parser_mpiexec.add_argument(
+        "--mpi",
+        type=str,
+        dest="mpi", # Don't put a default here to allow no MPI support
+        help="how to call MPI_Init (default=mpi4py)"
+    )
+    parser_mpiexec.set_defaults(func=main_mpi, cmd='mpiexec')
 
     parser_pbs = subparsers.add_parser(
         'pbs',
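The parser wiring above reuses a single handler for both subcommands and distinguishes them only through a ``cmd`` default set on each subparser. A self-contained sketch of that argparse pattern (simplified; the option names and defaults here are illustrative and not the full ipcluster option set)::

    import argparse

    def main_mpi(args):
        # One handler for both subcommands; args.cmd names the MPI
        # launcher binary to invoke.
        print('launching with %s -n %d' % (args.cmd, args.n))

    parser = argparse.ArgumentParser()
    subparsers = parser.add_subparsers()
    for name in ('mpirun', 'mpiexec'):
        sub = subparsers.add_parser(name, help='run a cluster using %s' % name)
        sub.add_argument('-n', type=int, default=2)
        sub.set_defaults(func=main_mpi, cmd=name)

    args = parser.parse_args(['mpiexec', '-n', '4'])
    args.func(args)   # prints: launching with mpiexec -n 4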
@@ -85,33 +85,40 @@ To see other command line options for the local mode, do::
 
     $ ipcluster local -h
 
-Using :command:`ipcluster` in mpirun mode
------------------------------------------
+Using :command:`ipcluster` in mpiexec/mpirun mode
+-------------------------------------------------
 
-The mpirun mode is useful if you:
+The mpiexec/mpirun mode is useful if you:
 
 1. Have MPI installed.
-2. Your systems are configured to use the :command:`mpirun` command to start
-   processes.
+2. Your systems are configured to use the :command:`mpiexec` or
+   :command:`mpirun` commands to start MPI processes.
+
+.. note::
+
+    The preferred command to use is :command:`mpiexec`. However, we also
+    support :command:`mpirun` for backwards compatibility. The underlying
+    logic used is exactly the same, the only difference being the name of the
+    command line program that is called.
 
 If these are satisfied, you can start an IPython cluster using::
 
-    $ ipcluster mpirun -n 4
+    $ ipcluster mpiexec -n 4
 
 This does the following:
 
 1. Starts the IPython controller on current host.
-2. Uses :command:`mpirun` to start 4 engines.
+2. Uses :command:`mpiexec` to start 4 engines.
 
 On newer MPI implementations (such as OpenMPI), this will work even if you don't make any calls to MPI or call :func:`MPI_Init`. However, older MPI implementations actually require each process to call :func:`MPI_Init` upon starting. The easiest way of having this done is to install the mpi4py [mpi4py]_ package and then call ipcluster with the ``--mpi`` option::
 
-    $ ipcluster mpirun -n 4 --mpi=mpi4py
+    $ ipcluster mpiexec -n 4 --mpi=mpi4py
 
 Unfortunately, even this won't work for some MPI implementations. If you are having problems with this, you will likely have to use a custom Python executable that itself calls :func:`MPI_Init` at the appropriate time. Fortunately, mpi4py comes with such a custom Python executable that is easy to install and use. However, this custom Python executable approach will not work with :command:`ipcluster` currently.
 
 Additional command line options for this mode can be found by doing::
 
-    $ ipcluster mpirun -h
+    $ ipcluster mpiexec -h
 
 More details on using MPI with IPython can be found :ref:`here <parallelmpi>`.
 
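As the updated docs explain, older MPI implementations require every process to call :func:`MPI_Init` on startup. The ``--mpi=mpi4py`` option satisfies this because importing mpi4py initializes MPI by default, after which each engine can query its rank and the size of the MPI job. A rough sketch of what this looks like on one engine (assumes mpi4py is installed; this is not the exact code IPython runs)::

    from mpi4py import MPI   # importing mpi4py calls MPI_Init by default

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()   # this engine's id within the MPI job
    size = comm.Get_size()   # total number of processes started by mpiexec
    print('engine %d of %d ready' % (rank, size))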