parallelz updates
MinRK
@@ -116,7 +116,7 @@ using IPython by following these steps:
116 1. Copy the text files with the digits of pi
116 1. Copy the text files with the digits of pi
117 (ftp://pi.super-computing.org/.2/pi200m/) to the working directory of the
117 (ftp://pi.super-computing.org/.2/pi200m/) to the working directory of the
118 engines on the compute nodes.
118 engines on the compute nodes.
119 2. Use :command:`ipcluster` to start 15 engines. We used an 8 core (2 quad
119 2. Use :command:`ipclusterz` to start 15 engines. We used an 8 core (2 quad
120 core CPUs) cluster with hyperthreading enabled which makes the 8 cores
120 core CPUs) cluster with hyperthreading enabled which makes the 8 cores
121 look like 16 (1 controller + 15 engines) in the OS. However, the maximum
121 look like 16 (1 controller + 15 engines) in the OS. However, the maximum
122 speedup we can observe is still only 8x.
122 speedup we can observe is still only 8x.
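A hedged sketch of what step 2 might look like on the command line, patterned on the ipclusterz invocations that appear later in this document (the profile name and the exact option spelling are assumptions)::

    $ ipclusterz start -p mycluster -n 15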
@@ -133,13 +133,13 @@ calculation can also be run by simply typing the commands from
133
133
134 .. sourcecode:: ipython
134 .. sourcecode:: ipython
135
135
136 In [1]: from IPython.kernel import client
136 In [1]: from IPython.zmq.parallel import client
137 2009-11-19 11:32:38-0800 [-] Log opened.
137 2009-11-19 11:32:38-0800 [-] Log opened.
138
138
139 # The MultiEngineClient allows us to use the engines interactively.
139 # The MultiEngineClient allows us to use the engines interactively.
140 # We simply pass MultiEngineClient the name of the cluster profile we
140 # We simply pass MultiEngineClient the name of the cluster profile we
141 # are using.
141 # are using.
142 In [2]: mec = client.MultiEngineClient(profile='mycluster')
142 In [2]: c = client.Client(profile='mycluster')
143 2009-11-19 11:32:44-0800 [-] Connecting [0]
143 2009-11-19 11:32:44-0800 [-] Connecting [0]
144 2009-11-19 11:32:44-0800 [Negotiation,client] Connected: ./ipcontroller-mec.furl
144 2009-11-19 11:32:44-0800 [Negotiation,client] Connected: ./ipcontroller-mec.furl
145
145
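For readers following along, a minimal connection check; the first two lines mirror the session above, while the `ids` attribute is an assumption about this transitional zmq client rather than something shown in the diff:

.. sourcecode:: python

    from IPython.zmq.parallel import client

    c = client.Client(profile='mycluster')
    print c.ids   # assumed attribute: ids of the engines that registered with the controller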
@@ -233,7 +233,7 @@ plot using Matplotlib.
233 .. literalinclude:: ../../examples/kernel/mcdriver.py
233 .. literalinclude:: ../../examples/kernel/mcdriver.py
234 :language: python
234 :language: python
235
235
236 To use this code, start an IPython cluster using :command:`ipcluster`, open
236 To use this code, start an IPython cluster using :command:`ipclusterz`, open
237 IPython in the pylab mode with the file :file:`mcdriver.py` in your current
237 IPython in the pylab mode with the file :file:`mcdriver.py` in your current
238 working directory and then type:
238 working directory and then type:
239
239
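The command to type is cut off at the hunk boundary; presumably it is along these lines (an assumption, not part of the diff):

.. sourcecode:: ipython

    %run mcdriver.py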
@@ -4,6 +4,10 @@
4 Using MPI with IPython
4 Using MPI with IPython
5 =======================
5 =======================
6
6
7 .. note::
8
9 Not adapted to zmq yet
10
7 Often, a parallel algorithm will require moving data between the engines. One
11 Often, a parallel algorithm will require moving data between the engines. One
8 way of accomplishing this is by doing a pull and then a push using the
12 way of accomplishing this is by doing a pull and then a push using the
9 multiengine client. However, this will be slow as all the data has to go
13 multiengine client. However, this will be slow as all the data has to go
@@ -45,16 +49,16 @@ To use code that calls MPI, there are typically two things that MPI requires.
45 There are a couple of ways that you can start the IPython engines and get
49 There are a couple of ways that you can start the IPython engines and get
46 these things to happen.
50 these things to happen.
47
51
48 Automatic starting using :command:`mpiexec` and :command:`ipcluster`
52 Automatic starting using :command:`mpiexec` and :command:`ipclusterz`
49 --------------------------------------------------------------------
53 --------------------------------------------------------------------
50
54
51 The easiest approach is to use the `mpiexec` mode of :command:`ipcluster`,
55 The easiest approach is to use the `mpiexec` mode of :command:`ipclusterz`,
52 which will first start a controller and then a set of engines using
56 which will first start a controller and then a set of engines using
53 :command:`mpiexec`::
57 :command:`mpiexec`::
54
58
55 $ ipcluster mpiexec -n 4
59 $ ipclusterz mpiexec -n 4
56
60
57 This approach is best as interrupting :command:`ipcluster` will automatically
61 This approach is best as interrupting :command:`ipclusterz` will automatically
58 stop and clean up the controller and engines.
62 stop and clean up the controller and engines.
59
63
60 Manual starting using :command:`mpiexec`
64 Manual starting using :command:`mpiexec`
@@ -72,11 +76,11 @@ starting the engines with::
72
76
73 mpiexec -n 4 ipengine --mpi=pytrilinos
77 mpiexec -n 4 ipengine --mpi=pytrilinos
74
78
75 Automatic starting using PBS and :command:`ipcluster`
79 Automatic starting using PBS and :command:`ipclusterz`
76 -----------------------------------------------------
80 -----------------------------------------------------
77
81
78 The :command:`ipcluster` command also has built-in integration with PBS. For
82 The :command:`ipclusterz` command also has built-in integration with PBS. For
79 more information on this approach, see our documentation on :ref:`ipcluster
83 more information on this approach, see our documentation on :ref:`ipclusterz
80 <parallel_process>`.
84 <parallel_process>`.
81
85
82 Actually using MPI
86 Actually using MPI
@@ -105,7 +109,7 @@ distributed array. Save the following text in a file called :file:`psum.py`:
105
109
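The :file:`psum.py` listing itself falls outside this hunk. For orientation, a hedged sketch of what such a function typically looks like with mpi4py (purely illustrative; not the actual file from the IPython examples):

.. sourcecode:: python

    from mpi4py import MPI
    import numpy as np

    def psum(a):
        # sum the local chunk, then combine the partial sums across all MPI processes
        local_sum = np.sum(a)
        return MPI.COMM_WORLD.allreduce(local_sum, op=MPI.SUM)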
106 Now, start an IPython cluster in the same directory as :file:`psum.py`::
110 Now, start an IPython cluster in the same directory as :file:`psum.py`::
107
111
108 $ ipcluster mpiexec -n 4
112 $ ipclusterz mpiexec -n 4
109
113
110 Finally, connect to the cluster and use this function interactively. In this
114 Finally, connect to the cluster and use this function interactively. In this
111 case, we create a random array on each engine and sum up all the random arrays
115 case, we create a random array on each engine and sum up all the random arrays
@@ -113,9 +117,9 @@ using our :func:`psum` function:
113
117
114 .. sourcecode:: ipython
118 .. sourcecode:: ipython
115
119
116 In [1]: from IPython.kernel import client
120 In [1]: from IPython.zmq.parallel import client
117
121
118 In [2]: mec = client.MultiEngineClient()
122 In [2]: c = client.Client()
119
123
120 In [3]: mec.activate()
124 In [3]: mec.activate()
121
125
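The remainder of the session is outside this hunk; a hedged sketch of how it presumably continues, using the %px magic enabled by activate() above (the import style and variable names are assumptions):

.. sourcecode:: ipython

    %px from psum import psum    # engines were started in the directory containing psum.py

    %px import numpy

    %px a = numpy.random.rand(100)

    %px s = psum(a)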
@@ -110,8 +110,7 @@ some decorators:
110 ....:
110 ....:
111
111
112 In [11]: map(f, range(32)) # this is done in parallel
112 In [11]: map(f, range(32)) # this is done in parallel
113 Out[11]:
113 Out[11]: [0.0,10.0,160.0,...]
114 [0.0,10.0,160.0,...]
115
114
116 See the docstring for the :func:`parallel` and :func:`remote` decorators for
115 See the docstring for the :func:`parallel` and :func:`remote` decorators for
117 options.
116 options.
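Judging from the output above (0.0, 10.0, 160.0, ...), the decorated function computes 10.0*x**4. For comparison, the serial equivalent:

.. sourcecode:: python

    def f(x):
        return 10.0 * x ** 4

    # the same values the parallel map returns, computed locally
    results = [f(x) for x in range(32)]   # [0.0, 10.0, 160.0, 810.0, ...]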
@@ -185,7 +184,7 @@ blocks until the engines are done executing the command:
185 In [5]: dview['b'] = 10
184 In [5]: dview['b'] = 10
186
185
187 In [6]: dview.apply_bound(lambda x: a+b+x, 27)
186 In [6]: dview.apply_bound(lambda x: a+b+x, 27)
188 Out[6]: {0: 42, 1: 42, 2: 42, 3: 42}
187 Out[6]: [42,42,42,42]
189
188
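As a sanity check on the value 42: with b set to 10 above, each engine must already hold a = 5 (presumably pushed earlier in the session, outside this hunk), so every engine evaluates the same expression:

.. sourcecode:: python

    a, b = 5, 10                 # assumed engine namespace at this point
    (lambda x: a + b + x)(27)    # -> 42, once per engine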
190 Python commands can be executed on specific engines by calling execute using
189 Python commands can be executed on specific engines by calling execute using
191 the ``targets`` keyword argument, or creating a :class:`DirectView` instance
190 the ``targets`` keyword argument, or creating a :class:`DirectView` instance
@@ -198,7 +197,7 @@ by index-access to the client:
198 In [7]: rc.execute('c=a-b',targets=[1,3])
197 In [7]: rc.execute('c=a-b',targets=[1,3])
199
198
200 In [8]: rc[:]['c'] # shorthand for rc.pull('c',targets='all')
199 In [8]: rc[:]['c'] # shorthand for rc.pull('c',targets='all')
201 Out[8]: {0: 15, 1: -5, 2: 15, 3: -5}
200 Out[8]: [15,-5,15,-5]
202
201
203 .. note::
202 .. note::
204
203
@@ -425,7 +424,7 @@ engines specified by the :attr:`targets` attribute of the
425
424
426 In [26]: %px import numpy
425 In [26]: %px import numpy
427 Parallel execution on engines: [0, 1, 2, 3]
426 Parallel execution on engines: [0, 1, 2, 3]
428 Out[26]:{0: None, 1: None, 2: None, 3: None}
427 Out[26]:[None,None,None,None]
429
428
430 In [27]: %px a = numpy.random.rand(2,2)
429 In [27]: %px a = numpy.random.rand(2,2)
431 Parallel execution on engines: [0, 1, 2, 3]
430 Parallel execution on engines: [0, 1, 2, 3]
@@ -434,10 +433,11 @@ engines specified by the :attr:`targets` attribute of the
434 Parallel execution on engines: [0, 1, 2, 3]
433 Parallel execution on engines: [0, 1, 2, 3]
435
434
436 In [28]: dv['ev']
435 In [28]: dv['ev']
437 Out[44]: {0: array([ 1.09522024, -0.09645227]),
436 Out[44]: [ array([ 1.09522024, -0.09645227]),
438 1: array([ 1.21435496, -0.35546712]),
437 array([ 1.21435496, -0.35546712]),
439 2: array([ 0.72180653, 0.07133042]),
438 array([ 0.72180653, 0.07133042]),
440 3: array([ 1.46384341e+00, 1.04353244e-04])}
439 array([ 1.46384341e+00, 1.04353244e-04])
440 ]
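The command that produced ev is not shown in this hunk; given the 2x2 random matrices created with %px above, it is presumably something along the lines of (an assumption):

.. sourcecode:: ipython

    %px ev = numpy.linalg.eigvals(a)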
441
441
442 .. Note::
442 .. Note::
443
443
@@ -496,10 +496,10 @@ on the engines given by the :attr:`targets` attribute:
496 Parallel execution on engines: [0, 1, 2, 3]
496 Parallel execution on engines: [0, 1, 2, 3]
497
497
498 In [37]: dv['ans']
498 In [37]: dv['ans']
499 Out[37]: {0 : 'Average max eigenvalue is: 10.1387247332',
499 Out[37]: [ 'Average max eigenvalue is: 10.1387247332',
500 1 : 'Average max eigenvalue is: 10.2076902286',
500 'Average max eigenvalue is: 10.2076902286',
501 2 : 'Average max eigenvalue is: 10.1891484655',
501 'Average max eigenvalue is: 10.1891484655',
502 3 : 'Average max eigenvalue is: 10.1158837784',}
502 'Average max eigenvalue is: 10.1158837784',]
503
503
504
504
505 .. Note::
505 .. Note::
@@ -524,23 +524,23 @@ Here are some examples of how you use :meth:`push` and :meth:`pull`:
524 .. sourcecode:: ipython
524 .. sourcecode:: ipython
525
525
526 In [38]: rc.push(dict(a=1.03234,b=3453))
526 In [38]: rc.push(dict(a=1.03234,b=3453))
527 Out[38]: {0: None, 1: None, 2: None, 3: None}
527 Out[38]: [None,None,None,None]
528
528
529 In [39]: rc.pull('a')
529 In [39]: rc.pull('a')
530 Out[39]: {0: 1.03234, 1: 1.03234, 2: 1.03234, 3: 1.03234}
530 Out[39]: [ 1.03234, 1.03234, 1.03234, 1.03234]
531
531
532 In [40]: rc.pull('b',targets=0)
532 In [40]: rc.pull('b',targets=0)
533 Out[40]: 3453
533 Out[40]: 3453
534
534
535 In [41]: rc.pull(('a','b'))
535 In [41]: rc.pull(('a','b'))
536 Out[41]: {0: [1.03234, 3453], 1: [1.03234, 3453], 2: [1.03234, 3453], 3:[1.03234, 3453]}
536 Out[41]: [ [1.03234, 3453], [1.03234, 3453], [1.03234, 3453], [1.03234, 3453] ]
537
537
538 # zmq client does not have zip_pull
538 # zmq client does not have zip_pull
539 In [42]: rc.zip_pull(('a','b'))
539 In [42]: rc.zip_pull(('a','b'))
540 Out[42]: [(1.03234, 1.03234, 1.03234, 1.03234), (3453, 3453, 3453, 3453)]
540 Out[42]: [(1.03234, 1.03234, 1.03234, 1.03234), (3453, 3453, 3453, 3453)]
541
541
542 In [43]: rc.push(dict(c='speed'))
542 In [43]: rc.push(dict(c='speed'))
543 Out[43]: {0: None, 1: None, 2: None, 3: None}
543 Out[43]: [None,None,None,None]
544
544
545 In non-blocking mode :meth:`push` and :meth:`pull` also return
545 In non-blocking mode :meth:`push` and :meth:`pull` also return
546 :class:`AsyncResult` objects:
546 :class:`AsyncResult` objects:
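A hedged sketch of the non-blocking pattern being referred to; the exact calls are outside this hunk, and the block attribute is an assumption about this client:

.. sourcecode:: python

    rc.block = False            # assumed: switch the client to non-blocking mode
    ar = rc.push(dict(d=15))    # returns an AsyncResult instead of the usual result
    ar.get()                    # block here until every engine has finished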
@@ -573,7 +573,7 @@ appear as a local dictionary. Underneath, this uses :meth:`push` and
573 In [51]: rc[:]['a']=['foo','bar']
573 In [51]: rc[:]['a']=['foo','bar']
574
574
575 In [52]: rc[:]['a']
575 In [52]: rc[:]['a']
576 Out[52]: {0: ['foo', 'bar'], 1: ['foo', 'bar'], 2: ['foo', 'bar'], 3: ['foo', 'bar']}
576 Out[52]: [ ['foo', 'bar'], ['foo', 'bar'], ['foo', 'bar'], ['foo', 'bar'] ]
577
577
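As the surrounding text says, this dictionary-style access is just sugar over push and pull; a hedged illustration of the equivalence:

.. sourcecode:: python

    rc[:]['a'] = ['foo', 'bar']       # roughly: rc.push(dict(a=['foo', 'bar']))
    rc[:]['a']                        # roughly: rc.pull('a', targets='all')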
578 Scatter and gather
578 Scatter and gather
579 ------------------
579 ------------------
@@ -589,13 +589,10 @@ between engines, MPI should be used:
589 .. sourcecode:: ipython
589 .. sourcecode:: ipython
590
590
591 In [58]: rc.scatter('a',range(16))
591 In [58]: rc.scatter('a',range(16))
592 Out[58]: {0: None, 1: None, 2: None, 3: None}
592 Out[58]: [None,None,None,None]
593
593
594 In [59]: rc[:]['a']
594 In [59]: rc[:]['a']
595 Out[59]: {0: [0, 1, 2, 3],
595 Out[59]: [ [0, 1, 2, 3], [4, 5, 6, 7], [8, 9, 10, 11], [12, 13, 14, 15] ]
596 1: [4, 5, 6, 7],
597 2: [8, 9, 10, 11],
598 3: [12, 13, 14, 15]}
599
596
600 In [60]: rc.gather('a')
597 In [60]: rc.gather('a')
601 Out[60]: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15]
598 Out[60]: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15]
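To make the pattern concrete, a hedged sketch of a simple parallel sum built from scatter, %px and pull (illustrative only):

.. sourcecode:: ipython

    rc.scatter('a', range(16))   # each of the 4 engines gets a 4-element slice

    %px s = sum(a)               # partial sums on the engines

    sum(rc.pull('s'))            # 0 + 1 + ... + 15 = 120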
@@ -613,7 +610,7 @@ basic effect using :meth:`scatter` and :meth:`gather`:
613 .. sourcecode:: ipython
610 .. sourcecode:: ipython
614
611
615 In [66]: rc.scatter('x',range(64))
612 In [66]: rc.scatter('x',range(64))
616 Out[66]: {0: None, 1: None, 2: None, 3: None}
613 Out[66]: [None,None,None,None]
617
614
618 In [67]: px y = [i**10 for i in x]
615 In [67]: px y = [i**10 for i in x]
619 Executing command on Controller
616 Executing command on Controller
@@ -772,10 +769,6 @@ instance:
772
769
773 ZeroDivisionError: integer division or modulo by zero
770 ZeroDivisionError: integer division or modulo by zero
774
771
775 .. note::
776
777 The above example appears to be broken right now because of a change in
778 how we are using Twisted.
779
772
780 All of this same error handling magic even works in non-blocking mode:
773 All of this same error handling magic even works in non-blocking mode:
781
774
@@ -4,6 +4,10 @@
4 Starting the IPython controller and engines
4 Starting the IPython controller and engines
5 ===========================================
5 ===========================================
6
6
7 .. note::
8
9 Not adapted to zmq yet
10
7 To use IPython for parallel computing, you need to start one instance of
11 To use IPython for parallel computing, you need to start one instance of
8 the controller and one or more instances of the engine. The controller
12 the controller and one or more instances of the engine. The controller
9 and each engine can run on different machines or on the same machine.
13 and each engine can run on different machines or on the same machine.
@@ -11,12 +15,12 @@ Because of this, there are many different possibilities.
11
15
12 Broadly speaking, there are two ways of going about starting a controller and engines:
16 Broadly speaking, there are two ways of going about starting a controller and engines:
13
17
14 * In an automated manner using the :command:`ipcluster` command.
18 * In an automated manner using the :command:`ipclusterz` command.
15 * In a more manual way using the :command:`ipcontroller` and
19 * In a more manual way using the :command:`ipcontroller` and
16 :command:`ipengine` commands.
20 :command:`ipengine` commands.
17
21
18 This document describes both of these methods. We recommend that new users
22 This document describes both of these methods. We recommend that new users
19 start with the :command:`ipcluster` command as it simplifies many common usage
23 start with the :command:`ipclusterz` command as it simplifies many common usage
20 cases.
24 cases.
21
25
22 General considerations
26 General considerations
@@ -51,10 +55,10 @@ be run. If these files are put into the :file:`~/.ipython/security` directory
51 of the client's host, they will be found automatically. Otherwise, the full
55 of the client's host, they will be found automatically. Otherwise, the full
52 path to them has to be passed to the client's constructor.
56 path to them has to be passed to the client's constructor.
53
57
54 Using :command:`ipcluster`
58 Using :command:`ipclusterz`
55 ==========================
59 ==========================
56
60
57 The :command:`ipcluster` command provides a simple way of starting a
61 The :command:`ipclusterz` command provides a simple way of starting a
58 controller and engines in the following situations:
62 controller and engines in the following situations:
59
63
60 1. When the controller and engines are all run on localhost. This is useful
64 1. When the controller and engines are all run on localhost. This is useful
@@ -68,33 +72,33 @@ controller and engines in the following situations:
68 .. note::
72 .. note::
69
73
70 It is also possible for advanced users to add support to
74 It is also possible for advanced users to add support to
71 :command:`ipcluster` for starting controllers and engines using other
75 :command:`ipclusterz` for starting controllers and engines using other
72 methods (like Sun's Grid Engine for example).
76 methods (like Sun's Grid Engine for example).
73
77
74 .. note::
78 .. note::
75
79
76 Currently :command:`ipcluster` requires that the
80 Currently :command:`ipclusterz` requires that the
77 :file:`~/.ipython/security` directory live on a shared filesystem that is
81 :file:`~/.ipython/security` directory live on a shared filesystem that is
78 seen by both the controller and engines. If you don't have a shared file
82 seen by both the controller and engines. If you don't have a shared file
79 system you will need to use :command:`ipcontroller` and
83 system you will need to use :command:`ipcontroller` and
80 :command:`ipengine` directly. This constraint can be relaxed if you are
84 :command:`ipengine` directly. This constraint can be relaxed if you are
81 using the :command:`ssh` method to start the cluster.
85 using the :command:`ssh` method to start the cluster.
82
86
83 Underneath the hood, :command:`ipcluster` just uses :command:`ipcontroller`
87 Underneath the hood, :command:`ipclusterz` just uses :command:`ipcontroller`
84 and :command:`ipengine` to perform the steps described above.
88 and :command:`ipengine` to perform the steps described above.
85
89
86 Using :command:`ipcluster` in local mode
90 Using :command:`ipclusterz` in local mode
87 ----------------------------------------
91 ----------------------------------------
88
92
89 To start one controller and 4 engines on localhost, just do::
93 To start one controller and 4 engines on localhost, just do::
90
94
91 $ ipcluster local -n 4
95 $ ipclusterz -n 4
92
96
93 To see other command line options for the local mode, do::
97 To see other command line options for the local mode, do::
94
98
95 $ ipcluster local -h
99 $ ipclusterz -h
96
100
97 Using :command:`ipcluster` in mpiexec/mpirun mode
101 Using :command:`ipclusterz` in mpiexec/mpirun mode
98 -------------------------------------------------
102 -------------------------------------------------
99
103
100 The mpiexec/mpirun mode is useful if you:
104 The mpiexec/mpirun mode is useful if you:
@@ -112,7 +116,7 @@ The mpiexec/mpirun mode is useful if you:
112
116
113 If these are satisfied, you can start an IPython cluster using::
117 If these are satisfied, you can start an IPython cluster using::
114
118
115 $ ipcluster mpiexec -n 4
119 $ ipclusterz mpiexec -n 4
116
120
117 This does the following:
121 This does the following:
118
122
@@ -123,25 +127,25 @@ On newer MPI implementations (such as OpenMPI), this will work even if you
123 don't make any calls to MPI or call :func:`MPI_Init`. However, older MPI
127 don't make any calls to MPI or call :func:`MPI_Init`. However, older MPI
124 implementations actually require each process to call :func:`MPI_Init` upon
128 implementations actually require each process to call :func:`MPI_Init` upon
125 starting. The easiest way of having this done is to install the mpi4py
129 starting. The easiest way of having this done is to install the mpi4py
126 [mpi4py]_ package and then call ipcluster with the ``--mpi`` option::
130 [mpi4py]_ package and then call ipclusterz with the ``--mpi`` option::
127
131
128 $ ipcluster mpiexec -n 4 --mpi=mpi4py
132 $ ipclusterz mpiexec -n 4 --mpi=mpi4py
129
133
130 Unfortunately, even this won't work for some MPI implementations. If you are
134 Unfortunately, even this won't work for some MPI implementations. If you are
131 having problems with this, you will likely have to use a custom Python
135 having problems with this, you will likely have to use a custom Python
132 executable that itself calls :func:`MPI_Init` at the appropriate time.
136 executable that itself calls :func:`MPI_Init` at the appropriate time.
133 Fortunately, mpi4py comes with such a custom Python executable that is easy to
137 Fortunately, mpi4py comes with such a custom Python executable that is easy to
134 install and use. However, this custom Python executable approach will not work
138 install and use. However, this custom Python executable approach will not work
135 with :command:`ipcluster` currently.
139 with :command:`ipclusterz` currently.
136
140
137 Additional command line options for this mode can be found by doing::
141 Additional command line options for this mode can be found by doing::
138
142
139 $ ipcluster mpiexec -h
143 $ ipclusterz mpiexec -h
140
144
141 More details on using MPI with IPython can be found :ref:`here <parallelmpi>`.
145 More details on using MPI with IPython can be found :ref:`here <parallelmpi>`.
142
146
143
147
144 Using :command:`ipcluster` in PBS mode
148 Using :command:`ipclusterz` in PBS mode
145 --------------------------------------
149 --------------------------------------
146
150
147 The PBS mode uses the Portable Batch System [PBS]_ to start the engines. To
151 The PBS mode uses the Portable Batch System [PBS]_ to start the engines. To
@@ -184,13 +188,13 @@ There are a few important points about this template:
184 Once you have created such a script, save it with a name like
188 Once you have created such a script, save it with a name like
185 :file:`pbs.template`. Now you are ready to start your job::
189 :file:`pbs.template`. Now you are ready to start your job::
186
190
187 $ ipcluster pbs -n 128 --pbs-script=pbs.template
191 $ ipclusterz pbs -n 128 --pbs-script=pbs.template
188
192
189 Additional command line options for this mode can be found by doing::
193 Additional command line options for this mode can be found by doing::
190
194
191 $ ipcluster pbs -h
195 $ ipclusterz pbs -h
192
196
193 Using :command:`ipcluster` in SSH mode
197 Using :command:`ipclusterz` in SSH mode
194 --------------------------------------
198 --------------------------------------
195
199
196 The SSH mode uses :command:`ssh` to execute :command:`ipengine` on remote
200 The SSH mode uses :command:`ssh` to execute :command:`ipengine` on remote
@@ -225,7 +229,7 @@ start your cluster like so:
225
229
226 .. sourcecode:: bash
230 .. sourcecode:: bash
227
231
228 $ ipcluster ssh --clusterfile /path/to/my/clusterfile.py
232 $ ipclusterz ssh --clusterfile /path/to/my/clusterfile.py
229
233
230
234
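For reference, the clusterfile referred to here is a short Python file mapping hostnames to engine counts; a hedged sketch of its typical shape (the hostnames, counts and the send_furl flag are illustrative assumptions):

.. sourcecode:: python

    send_furl = True
    engines = { 'host1.example.com' : 2,
                'host2.example.com' : 5,
                'host3.example.com' : 1 }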
231 Two helper shell scripts are used to start and stop :command:`ipengine` on
235 Two helper shell scripts are used to start and stop :command:`ipengine` on
@@ -235,7 +239,7 @@ remote hosts:
235 * engine_killer.sh
239 * engine_killer.sh
236
240
237 Defaults for both of these are contained in the source code for
241 Defaults for both of these are contained in the source code for
238 :command:`ipcluster`. The default scripts are written to a local file in a
242 :command:`ipclusterz`. The default scripts are written to a local file in a
239 temp directory and then copied to a temp directory on the remote host and
243 temp directory and then copied to a temp directory on the remote host and
240 executed from there. On most Unix, Linux and OS X systems this is /tmp.
244 executed from there. On most Unix, Linux and OS X systems this is /tmp.
241
245
@@ -256,9 +260,9 @@ For a detailed options list:
256
260
257 .. sourcecode:: bash
261 .. sourcecode:: bash
258
262
259 $ ipcluster ssh -h
263 $ ipclusterz ssh -h
260
264
261 Current limitations of the SSH mode of :command:`ipcluster` are:
265 Current limitations of the SSH mode of :command:`ipclusterz` are:
262
266
263 * Untested on Windows. Would require a working :command:`ssh` on Windows.
267 * Untested on Windows. Would require a working :command:`ssh` on Windows.
264 Also, we are using shell scripts to setup and execute commands on remote
268 Also, we are using shell scripts to setup and execute commands on remote
@@ -358,9 +362,9 @@ listen on for the engines and clients. This is done as follows::
358 $ ipcontroller -r --client-port=10101 --engine-port=10102
362 $ ipcontroller -r --client-port=10101 --engine-port=10102
359
363
360 These options also work with all of the various modes of
364 These options also work with all of the various modes of
361 :command:`ipcluster`::
365 :command:`ipclusterz`::
362
366
363 $ ipcluster local -n 2 -r --client-port=10101 --engine-port=10102
367 $ ipclusterz -n 2 -r --client-port=10101 --engine-port=10102
364
368
365 Then, just copy the furl files over the first time and you are set. You can
369 Then, just copy the furl files over the first time and you are set. You can
366 start and stop the controller and engines as many times as you want in the
370 start and stop the controller and engines as many times as you want in the
@@ -4,11 +4,15 @@
4 Security details of IPython
4 Security details of IPython
5 ===========================
5 ===========================
6
6
7 IPython's :mod:`IPython.kernel` package exposes the full power of the Python
7 .. note::
8 interpreter over a TCP/IP network for the purposes of parallel computing. This
8
9 feature brings up the important question of IPython's security model. This
9 Not adapted to zmq yet
10 document gives details about this model and how it is implemented in IPython's
10
11 architecture.
11 IPython's :mod:`IPython.zmq.parallel` package exposes the full power of the
12 Python interpreter over a TCP/IP network for the purposes of parallel
13 computing. This feature brings up the important question of IPython's security
14 model. This document gives details about this model and how it is implemented
15 in IPython's architecture.
12
16
13 Process and network topology
17 Process and network topology
14 =============================
18 =============================
@@ -2,6 +2,10 @@
2 Getting started with Windows HPC Server 2008
2 Getting started with Windows HPC Server 2008
3 ============================================
3 ============================================
4
4
5 .. note::
6
7 Not adapted to zmq yet
8
5 Introduction
9 Introduction
6 ============
10 ============
7
11
@@ -143,42 +147,42 @@ in parallel on the engines from within the IPython shell using an appropriate
143 client. This includes the ability to interact with, plot and visualize data
147 client. This includes the ability to interact with, plot and visualize data
144 from the engines.
148 from the engines.
145
149
146 IPython has a command line program called :command:`ipcluster` that automates
150 IPython has a command line program called :command:`ipclusterz` that automates
147 all aspects of starting the controller and engines on the compute nodes.
151 all aspects of starting the controller and engines on the compute nodes.
148 :command:`ipcluster` has full support for the Windows HPC job scheduler,
152 :command:`ipclusterz` has full support for the Windows HPC job scheduler,
149 meaning that :command:`ipcluster` can use this job scheduler to start the
153 meaning that :command:`ipclusterz` can use this job scheduler to start the
150 controller and engines. In our experience, the Windows HPC job scheduler is
154 controller and engines. In our experience, the Windows HPC job scheduler is
151 particularly well suited for interactive applications, such as IPython. Once
155 particularly well suited for interactive applications, such as IPython. Once
152 :command:`ipcluster` is configured properly, a user can start an IPython
156 :command:`ipclusterz` is configured properly, a user can start an IPython
153 cluster from their local workstation almost instantly, without having to log
157 cluster from their local workstation almost instantly, without having to log
154 on to the head node (as is typically required by Unix based job schedulers).
158 on to the head node (as is typically required by Unix based job schedulers).
155 This enables a user to move seamlessly between serial and parallel
159 This enables a user to move seamlessly between serial and parallel
156 computations.
160 computations.
157
161
158 In this section we show how to use :command:`ipcluster` to start an IPython
162 In this section we show how to use :command:`ipclusterz` to start an IPython
159 cluster using the Windows HPC Server 2008 job scheduler. To make sure that
163 cluster using the Windows HPC Server 2008 job scheduler. To make sure that
160 :command:`ipcluster` is installed and working properly, you should first try
164 :command:`ipclusterz` is installed and working properly, you should first try
161 to start an IPython cluster on your local host. To do this, open a Windows
165 to start an IPython cluster on your local host. To do this, open a Windows
162 Command Prompt and type the following command::
166 Command Prompt and type the following command::
163
167
164 ipcluster start -n 2
168 ipclusterz start -n 2
165
169
166 You should see a number of messages printed to the screen, ending with
170 You should see a number of messages printed to the screen, ending with
167 "IPython cluster: started". The result should look something like the following
171 "IPython cluster: started". The result should look something like the following
168 screenshot:
172 screenshot:
169
173
170 .. image:: ipcluster_start.*
174 .. image:: ipclusterz_start.*
171
175
172 At this point, the controller and two engines are running on your local host.
176 At this point, the controller and two engines are running on your local host.
173 This configuration is useful for testing and for situations where you want to
177 This configuration is useful for testing and for situations where you want to
174 take advantage of multiple cores on your local computer.
178 take advantage of multiple cores on your local computer.
175
179
176 Now that we have confirmed that :command:`ipcluster` is working properly, we
180 Now that we have confirmed that :command:`ipclusterz` is working properly, we
177 describe how to configure and run an IPython cluster on an actual compute
181 describe how to configure and run an IPython cluster on an actual compute
178 cluster running Windows HPC Server 2008. Here is an outline of the needed
182 cluster running Windows HPC Server 2008. Here is an outline of the needed
179 steps:
183 steps:
180
184
181 1. Create a cluster profile using: ``ipcluster create -p mycluster``
185 1. Create a cluster profile using: ``ipclusterz create -p mycluster``
182
186
183 2. Edit configuration files in the directory :file:`.ipython\\cluster_mycluster`
187 2. Edit configuration files in the directory :file:`.ipython\\cluster_mycluster`
184
188
@@ -190,7 +194,7 @@ Creating a cluster profile
190 In most cases, you will have to create a cluster profile to use IPython on a
194 In most cases, you will have to create a cluster profile to use IPython on a
191 cluster. A cluster profile is a name (like "mycluster") that is associated
195 cluster. A cluster profile is a name (like "mycluster") that is associated
192 with a particular cluster configuration. The profile name is used by
196 with a particular cluster configuration. The profile name is used by
193 :command:`ipcluster` when working with the cluster.
197 :command:`ipclusterz` when working with the cluster.
194
198
195 Associated with each cluster profile is a cluster directory. This cluster
199 Associated with each cluster profile is a cluster directory. This cluster
196 directory is a specially named directory (typically located in the
200 directory is a specially named directory (typically located in the
@@ -203,13 +207,13 @@ security keys. The naming convention for cluster directories is:
203 To create a new cluster profile (named "mycluster") and the associated cluster
207 To create a new cluster profile (named "mycluster") and the associated cluster
204 directory, type the following command at the Windows Command Prompt::
208 directory, type the following command at the Windows Command Prompt::
205
209
206 ipcluster create -p mycluster
210 ipclusterz create -p mycluster
207
211
208 The output of this command is shown in the screenshot below. Notice how
212 The output of this command is shown in the screenshot below. Notice how
209 :command:`ipcluster` prints out the location of the newly created cluster
213 :command:`ipclusterz` prints out the location of the newly created cluster
210 directory.
214 directory.
211
215
212 .. image:: ipcluster_create.*
216 .. image:: ipclusterz_create.*
213
217
214 Configuring a cluster profile
218 Configuring a cluster profile
215 -----------------------------
219 -----------------------------
@@ -217,24 +221,24 @@ Configuring a cluster profile
217 Next, you will need to configure the newly created cluster profile by editing
221 Next, you will need to configure the newly created cluster profile by editing
218 the following configuration files in the cluster directory:
222 the following configuration files in the cluster directory:
219
223
220 * :file:`ipcluster_config.py`
224 * :file:`ipclusterz_config.py`
221 * :file:`ipcontroller_config.py`
225 * :file:`ipcontroller_config.py`
222 * :file:`ipengine_config.py`
226 * :file:`ipengine_config.py`
223
227
224 When :command:`ipcluster` is run, these configuration files are used to
228 When :command:`ipclusterz` is run, these configuration files are used to
225 determine how the engines and controller will be started. In most cases,
229 determine how the engines and controller will be started. In most cases,
226 you will only have to set a few of the attributes in these files.
230 you will only have to set a few of the attributes in these files.
227
231
228 To configure :command:`ipcluster` to use the Windows HPC job scheduler, you
232 To configure :command:`ipclusterz` to use the Windows HPC job scheduler, you
229 will need to edit the following attributes in the file
233 will need to edit the following attributes in the file
230 :file:`ipcluster_config.py`::
234 :file:`ipclusterz_config.py`::
231
235
232 # Set these at the top of the file to tell ipcluster to use the
236 # Set these at the top of the file to tell ipclusterz to use the
233 # Windows HPC job scheduler.
237 # Windows HPC job scheduler.
234 c.Global.controller_launcher = \
238 c.Global.controller_launcher = \
235 'IPython.kernel.launcher.WindowsHPCControllerLauncher'
239 'IPython.zmq.parallel.launcher.WindowsHPCControllerLauncher'
236 c.Global.engine_launcher = \
240 c.Global.engine_launcher = \
237 'IPython.kernel.launcher.WindowsHPCEngineSetLauncher'
241 'IPython.zmq.parallel.launcher.WindowsHPCEngineSetLauncher'
238
242
239 # Set these to the host name of the scheduler (head node) of your cluster.
243 # Set these to the host name of the scheduler (head node) of your cluster.
240 c.WindowsHPCControllerLauncher.scheduler = 'HEADNODE'
244 c.WindowsHPCControllerLauncher.scheduler = 'HEADNODE'
@@ -256,15 +260,15 @@ Starting the cluster profile
256 Once a cluster profile has been configured, starting an IPython cluster using
260 Once a cluster profile has been configured, starting an IPython cluster using
257 the profile is simple::
261 the profile is simple::
258
262
259 ipcluster start -p mycluster -n 32
263 ipclusterz start -p mycluster -n 32
260
264
261 The ``-n`` option tells :command:`ipcluster` how many engines to start (in
265 The ``-n`` option tells :command:`ipclusterz` how many engines to start (in
262 this case 32). Stopping the cluster is as simple as typing Control-C.
266 this case 32). Stopping the cluster is as simple as typing Control-C.
263
267
264 Using the HPC Job Manager
268 Using the HPC Job Manager
265 -------------------------
269 -------------------------
266
270
267 When ``ipcluster start`` is run the first time, :command:`ipcluster` creates
271 When ``ipclusterz start`` is run the first time, :command:`ipclusterz` creates
268 two XML job description files in the cluster directory:
272 two XML job description files in the cluster directory:
269
273
270 * :file:`ipcontroller_job.xml`
274 * :file:`ipcontroller_job.xml`
@@ -272,8 +276,8 @@ two XML job description files in the cluster directory:
272
276
273 Once these files have been created, they can be imported into the HPC Job
277 Once these files have been created, they can be imported into the HPC Job
274 Manager application. Then, the controller and engines for that profile can be
278 Manager application. Then, the controller and engines for that profile can be
275 started using the HPC Job Manager directly, without using :command:`ipcluster`.
279 started using the HPC Job Manager directly, without using :command:`ipclusterz`.
276 However, anytime the cluster profile is re-configured, ``ipcluster start``
280 However, anytime the cluster profile is re-configured, ``ipclusterz start``
277 must be run again to regenerate the XML job description files. The
281 must be run again to regenerate the XML job description files. The
278 following screenshot shows what the HPC Job Manager interface looks like
282 following screenshot shows what the HPC Job Manager interface looks like
279 with a running IPython cluster.
283 with a running IPython cluster.
@@ -297,9 +301,9 @@ apply it to each element of an array of integers in parallel using the
297
301
298 .. sourcecode:: ipython
302 .. sourcecode:: ipython
299
303
300 In [1]: from IPython.kernel.client import *
304 In [1]: from IPython.zmq.parallel.client import *
301
305
302 In [2]: mec = MultiEngineClient(profile='mycluster')
306 In [2]: c = MultiEngineClient(profile='mycluster')
303
307
304 In [3]: mec.get_ids()
308 In [3]: mec.get_ids()
305 Out[3]: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14]
309 Out[3]: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14]
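The hunk stops before the actual parallel call. Presumably the simple function is then applied across the integers with the client's map, along these lines (an assumption; note also that the new version renames the client to c but still calls mec.get_ids() above):

.. sourcecode:: python

    def f(x):
        return x ** 10                    # hypothetical "simple function" for illustration

    results = mec.map(f, range(32))       # assumed: map distributes the calls over the engines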