Simple usage of a set of MPI engines

This example assumes you've started a cluster of N engines (4 in this example) as part of an MPI world.

Our documentation describes how to create an MPI profile and explains basic MPI usage of the IPython cluster.
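
If you do not have an MPI profile yet, the parallel configuration files can be generated from the command line (a minimal sketch; the profile name mpi is just an example):

ipython profile create --parallel --profile=mpi

This writes ipcluster_config.py and the related controller/engine configuration files into the profile directory, which you can then edit.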

For the simplest possible way to start 4 engines that belong to the same MPI world, you can run this in a terminal:

ipcluster start --engines=MPI -n 4

or start an MPI cluster from the Clusters tab of the notebook dashboard if you have one configured.
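
Alternatively, the launcher can be set once in the profile so that a plain ipcluster start --profile=mpi uses MPI. A minimal sketch of the relevant lines in ipcluster_config.py (assuming the mpi profile above; 'MPI' is the shorthand for the MPI engine set launcher):

# ipcluster_config.py in the mpi profile directory
c = get_config()

# launch all engines as members of a single MPI world
c.IPClusterEngines.engine_launcher_class = 'MPI'
# default number of engines to start
c.IPClusterEngines.n = 4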

Once the cluster is running, we can connect to it and open a view into it:

In [1]:
from IPython.parallel import Client
c = Client()
view = c[:]
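
As an optional sanity check, the client exposes the IDs of the engines that have registered with the controller; the expected count of 4 matches the cluster started above:

# IDs of all engines currently registered with the controller
print(c.ids)
assert len(c.ids) == 4  # we started 4 engines above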

Let's define a simple function that gets the MPI rank from each engine.

In [2]:
@view.remote(block=True)
def mpi_rank():
    from mpi4py import MPI
    comm = MPI.COMM_WORLD
    return comm.Get_rank()
In [3]:
mpi_rank()
Out[3]:
[2, 3, 1, 0]

To get a mapping of IPython engine IDs to MPI ranks (the two do not always match), you can use the get_dict method on the AsyncResult.

In [4]:
mpi_rank.block = False
ar = mpi_rank()
ar.get_dict()
Out[4]:
{0: 2, 1: 3, 2: 1, 3: 0}
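
This mapping can also be inverted to address a specific MPI rank from IPython, for instance to build a view on the engine holding rank 0 (an illustrative sketch; rank_to_id and root_view are names introduced here):

# invert {engine_id: mpi_rank} into {mpi_rank: engine_id}
rank_to_id = {rank: eid for eid, rank in ar.get_dict().items()}
# a view containing only the engine that is MPI rank 0
root_view = c[rank_to_id[0]]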

With the %%px cell magic, the next cell will execute in its entirety on each engine:

In [5]:
%%px
from mpi4py import MPI

comm = MPI.COMM_WORLD
size = comm.Get_size()
rank = comm.Get_rank()

if rank == 0:
    data = [(i+1)**2 for i in range(size)]
else:
    data = None
data = comm.scatter(data, root=0)

assert data == (rank+1)**2, 'data=%s, rank=%s' % (data, rank)
In [6]:
view['data']
Out[6]:
[9, 16, 4, 1]