{ "worksheets": [ { "cells": [ { "source": "# Simple usage of a set of MPI engines\n\nThis example assumes you've started a cluster of N engines (4 in this example) as part\nof an MPI world. \n\nOur documentation describes [how to create an MPI profile](http://ipython.org/ipython-doc/dev/parallel/parallel_process.html#using-ipcluster-in-mpiexec-mpirun-mode)\nand explains [basic MPI usage of the IPython cluster](http://ipython.org/ipython-doc/dev/parallel/parallel_mpi.html).\n\n\nFor the simplest possible way to start 4 engines that belong to the same MPI world, \nyou can run this in a terminal or antoher notebook:\n\n
    ipcluster start --engines=MPIExecEngineSetLauncher -n 4\n\nNote: to run the above in a notebook, use a *new* notebook and prepend the command with `!`, but do not run\nit in *this* notebook, as this command will block until you shut down the cluster. To stop the cluster, use \nthe 'Interrupt' button on the left, which is the equivalent of sending `Ctrl-C` to the kernel.\n\nOnce the cluster is running, we can connect to it and open a view into it:", "cell_type": "markdown" }, { "cell_type": "code", "language": "python", "outputs": [], "collapsed": true, "prompt_number": 21, "input": "from IPython.parallel import Client\nc = Client()\nview = c[:]" }, { "source": "Let's define a simple function that returns the MPI rank of each engine:", "cell_type": "markdown" }, { "cell_type": "code", "language": "python", "outputs": [], "collapsed": true, "prompt_number": 22, "input": "@view.remote(block=True)\ndef mpi_rank():\n    from mpi4py import MPI\n    comm = MPI.COMM_WORLD\n    return comm.Get_rank()" }, { "cell_type": "code", "language": "python", "outputs": [ { "output_type": "pyout", "prompt_number": 23, "text": "[3, 0, 2, 1]" } ], "collapsed": false, "prompt_number": 23, "input": "mpi_rank()" }, { "source": "For interactive convenience, we load the parallel magic extensions and make this view\nthe active one for the automatic parallelism magics.\n\nThis is not necessary, and likely won't be used in production code, as the engines will \nload their own MPI code separately. 
But it makes it easy to illustrate everything from\nwithin a single notebook here.", "cell_type": "markdown" }, { "cell_type": "code", "language": "python", "outputs": [], "collapsed": true, "prompt_number": 4, "input": "%load_ext parallelmagic\nview.activate()" }, { "source": "Use the `%autopx` magic so that the cells that follow execute on the engines instead\nof locally:", "cell_type": "markdown" }, { "cell_type": "code", "language": "python", "outputs": [], "collapsed": true, "prompt_number": 24, "input": "view.block = True" }, { "cell_type": "code", "language": "python", "outputs": [ { "output_type": "stream", "stream": "stdout", "text": "%autopx enabled\n\n" } ], "collapsed": false, "prompt_number": 32, "input": "%autopx" }, { "source": "With `%autopx` enabled, the next cell will actually execute *entirely on each engine*:", "cell_type": "markdown" }, { "cell_type": "code", "language": "python", "outputs": [], "collapsed": true, "prompt_number": 29, "input": "from mpi4py import MPI\n\ncomm = MPI.COMM_WORLD\nsize = comm.Get_size()\nrank = comm.Get_rank()\n\nif rank == 0:\n    data = [(i+1)**2 for i in range(size)]\nelse:\n    data = None\ndata = comm.scatter(data, root=0)\n\nassert data == (rank+1)**2, 'data=%s, rank=%s' % (data, rank)" }, { "source": "Though the assertion at the end of the previous block validated the code, we can now \npull the `data` variable from all the nodes for local inspection.\nFirst, don't forget to toggle off `autopx` mode so code runs again in the notebook:\n", "cell_type": "markdown" }, { "cell_type": "code", "language": "python", "outputs": [ { "output_type": "stream", "stream": "stdout", "text": "%autopx disabled\n\n" } ], "collapsed": false, "prompt_number": 33, "input": "%autopx" }, { "cell_type": "code", "language": "python", "outputs": [ { "output_type": "pyout", "prompt_number": 34, "text": "[16, 1, 9, 4]" } ], "collapsed": false, "prompt_number": 34, "input": "view['data']" }, { "input": "", "cell_type": "code", "collapsed": true, 
"language": "python", "outputs": [] } ] } ], "metadata": { "name": "parallel_mpi" }, "nbformat": 2 }