parallel_mpi.ipynb

Simple usage of a set of MPI engines

This example assumes you've started a cluster of N engines (4 in this example) as part of an MPI world.

Our documentation describes how to create an MPI profile and explains basic MPI usage of the IPython cluster.

For the simplest possible way to start 4 engines that belong to the same MPI world, you can run this in a terminal or another notebook:

ipcluster start --engines=MPI -n 4

Note: to run the above from a notebook, prepend the command with ! in a separate notebook. Do not run it in this notebook, as the command blocks until you shut down the cluster; to stop the cluster, use the 'Interrupt' button on the left, which is equivalent to sending Ctrl-C to the kernel.
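
For reference, the same command as a notebook cell would look like the sketch below; run it in a separate notebook, as noted above, since it blocks until the cluster shuts down.

!ipcluster start --engines=MPI -n 4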

Once the cluster is running, we can connect to it and open a view into it:

In [21]:
from IPython.parallel import Client
c = Client()
view = c[:]
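
Before going further, it can help to confirm that all four engines have registered with the controller; c.ids lists the connected engine ids. A minimal check (the expected count of 4 assumes the cluster started above):

# All four engines should have registered with the controller by now.
print(c.ids)            # e.g. [0, 1, 2, 3]
assert len(c.ids) == 4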

Let's define a simple function that gets the MPI rank from each engine.

In [22]:
@view.remote(block=True)
def mpi_rank():
    from mpi4py import MPI
    comm = MPI.COMM_WORLD
    return comm.Get_rank()
In [23]:
mpi_rank()
Out[23]:
[3, 0, 2, 1]
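
Note that each engine's MPI rank need not match its IPython engine id, as the unordered output above shows. If you need the correspondence, a hedged sketch is to zip the view's targets with the ranks just returned (this assumes view.targets is ordered consistently with the results of the blocking call, the default for a DirectView):

# Pair each engine id with the MPI rank it reported.
engine_to_rank = dict(zip(view.targets, mpi_rank()))
# For the output above this would give {0: 3, 1: 0, 2: 2, 3: 1}.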

For interactive convenience, we load the parallel magic extensions and make this view the active one for the automatic parallelism magics.

This is not necessary, and production code is unlikely to use it, as the engines will load their own MPI code separately. But it makes it easy to illustrate everything from within a single notebook here.

In [4]:
%load_ext parallelmagic
view.activate()
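
As an aside, the %px magic from the same extension runs a single statement on the engines without toggling a persistent mode; a minimal sketch:

%px from mpi4py import MPI
%px print(MPI.COMM_WORLD.Get_rank())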

Use the %autopx magic to make subsequent cells execute on the engines instead of locally.

In [24]:
view.block = True
In [32]:
%autopx
%autopx enabled

With autopx enabled, the next cell will actually execute entirely on each engine:

In [29]:
from mpi4py import MPI

comm = MPI.COMM_WORLD
size = comm.Get_size()
rank = comm.Get_rank()

# Rank 0 builds one value per process; the other ranks contribute nothing.
if rank == 0:
    data = [(i+1)**2 for i in range(size)]
else:
    data = None
# Scatter hands each rank its own element of the list.
data = comm.scatter(data, root=0)

assert data == (rank+1)**2, 'data=%s, rank=%s' % (data, rank)
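
While %autopx is still enabled, a companion cell could invert the operation with comm.gather, which collects every rank's value back on rank 0 (the other ranks receive None); a minimal sketch:

# Gather each rank's value back onto rank 0.
gathered = comm.gather(data, root=0)
if rank == 0:
    assert gathered == [(i+1)**2 for i in range(size)]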

Though the assertion at the end of the previous block validated the code, we can now pull the 'data' variable from all the engines for local inspection. First, don't forget to toggle autopx off so that code runs in the notebook again:

In [33]:
%autopx
%autopx disabled
In [34]:
view['data']
Out[34]:
[16, 1, 9, 4]
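
Remember that view['data'] returns results in engine-id order, not rank order. To line the values up by MPI rank, a hedged sketch reuses the mpi_rank() function defined earlier:

ranks = mpi_rank()       # e.g. [3, 0, 2, 1]
values = view['data']    # e.g. [16, 1, 9, 4]
by_rank = [v for _, v in sorted(zip(ranks, values))]
# by_rank == [1, 4, 9, 16], i.e. (rank+1)**2 for ranks 0..3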