@@ -103,7 +103,7 @@ calculation, we will need two top-level functions from :file:`pidigits.py`:
 
 .. literalinclude:: ../../examples/newparallel/pidigits.py
    :language: python
-   :lines: 4
+   :lines: 47-62
 
 We will also use the :func:`plot_two_digit_freqs` function to plot the
 results. The code to run this calculation in parallel is contained in
@@ -195,7 +195,7 @@ simply start a controller and engines on a single host using the
 :command:`ipcluster` command. To start a controller and 4 engines on your
 localhost, just do::
 
-    $ ipcluster start n=4
+    $ ipcluster start --n=4
 
 More details about starting the IPython controller and engines can be found
 :ref:`here <parallel_process>`
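The hunk above is one instance of the changeset's single recurring edit: legacy bare ``key=value`` options become GNU-style ``--key=value`` flags. As a rough illustration only (a hypothetical helper, not part of IPython), the rewrite applied throughout this changeset behaves like:

```python
# Hypothetical helper, NOT part of IPython: sketches the rewrite this
# changeset applies, turning legacy "key=value" tokens into "--key=value".
def modernize_args(argv):
    """Prefix bare key=value tokens with '--'; leave other tokens alone."""
    out = []
    for tok in argv:
        if "=" in tok and not tok.startswith("-"):
            out.append("--" + tok)
        else:
            out.append(tok)
    return out

print(modernize_args(["ipcluster", "start", "n=4"]))
# → ['ipcluster', 'start', '--n=4']
```

Tokens that already carry a ``-`` prefix pass through unchanged, which matches how the diff leaves flags like ``--parallel`` and ``--reuse`` alone.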
@@ -57,7 +57,7 @@ The easiest approach is to use the `MPIExec` Launchers in :command:`ipcluster`,
 which will first start a controller and then a set of engines using
 :command:`mpiexec`::
 
-    $ ipcluster start n=4 elauncher=MPIExecEngineSetLauncher
+    $ ipcluster start --n=4 --elauncher=MPIExecEngineSetLauncher
 
 This approach is best as interrupting :command:`ipcluster` will automatically
 stop and clean up the controller and engines.
@@ -68,14 +68,14 @@ Manual starting using :command:`mpiexec`
 If you want to start the IPython engines using the :command:`mpiexec`, just
 do::
 
-    $ mpiexec n=4 ipengine mpi=mpi4py
+    $ mpiexec n=4 ipengine --mpi=mpi4py
 
 This requires that you already have a controller running and that the FURL
 files for the engines are in place. We also have built in support for
 PyTrilinos [PyTrilinos]_, which can be used (assuming is installed) by
 starting the engines with::
 
-    $ mpiexec n=4 ipengine mpi=pytrilinos
+    $ mpiexec n=4 ipengine --mpi=pytrilinos
 
 Automatic starting using PBS and :command:`ipcluster`
 ------------------------------------------------------
@@ -110,7 +110,7 @@ distributed array. Save the following text in a file called :file:`psum.py`:
 
 Now, start an IPython cluster::
 
-    $ ipcluster start profile=mpi n=4
+    $ ipcluster start --profile=mpi --n=4
 
 .. note::
 
@@ -19,7 +19,7 @@ To follow along with this tutorial, you will need to start the IPython
 controller and four IPython engines. The simplest way of doing this is to use
 the :command:`ipcluster` command::
 
-    $ ipcluster start n=4
+    $ ipcluster start --n=4
 
 For more detailed information about starting the controller and engines, see
 our :ref:`introduction <ip1par>` to using IPython for parallel computing.
@@ -35,7 +35,7 @@ the ``ip`` argument on the command-line, or the ``HubFactory.ip`` configurable i
 If your machines are on a trusted network, you can safely instruct the controller to listen
 on all public interfaces with::
 
-    $> ipcontroller ip=*
+    $> ipcontroller --ip=*
 
 Or you can set the same behavior as the default by adding the following line to your :file:`ipcontroller_config.py`:
 
@@ -109,7 +109,7 @@ The simplest way to use ipcluster requires no configuration, and will
 launch a controller and a number of engines on the local machine. For instance,
 to start one controller and 4 engines on localhost, just do::
 
-    $ ipcluster start n=4
+    $ ipcluster start --n=4
 
 To see other command line options, do::
 
@@ -121,7 +121,7 @@ Configuring an IPython cluster
 
 Cluster configurations are stored as `profiles`. You can create a new profile with::
 
-    $ ipython profile create --parallel profile=myprofile
+    $ ipython profile create --parallel --profile=myprofile
 
 This will create the directory :file:`IPYTHONDIR/profile_myprofile`, and populate it
 with the default configuration files for the three IPython cluster commands. Once
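The ``profile_<name>`` naming convention mentioned in the hunk above is simple enough to sketch in a line of Python (a trivial illustration of the convention only, not IPython's internal code):

```python
import os

# Illustration of the "profile_<name>" naming convention only;
# IPython computes this path itself when the profile is created.
def profile_dir(ipython_dir, name):
    return os.path.join(ipython_dir, "profile_" + name)

# On POSIX this prints /home/me/.ipython/profile_myprofile
print(profile_dir("/home/me/.ipython", "myprofile"))
```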
@@ -162,7 +162,7 @@ The mpiexec/mpirun mode is useful if you:
 
 If these are satisfied, you can create a new profile::
 
-    $ ipython profile create --parallel profile=mpi
+    $ ipython profile create --parallel --profile=mpi
 
 and edit the file :file:`IPYTHONDIR/profile_mpi/ipcluster_config.py`.
 
@@ -174,7 +174,7 @@ There, instruct ipcluster to use the MPIExec launchers by adding the lines:
 
 If the default MPI configuration is correct, then you can now start your cluster, with::
 
-    $ ipcluster start n=4 profile=mpi
+    $ ipcluster start --n=4 --profile=mpi
 
 This does the following:
 
@@ -219,7 +219,7 @@ The PBS mode uses the Portable Batch System (PBS) to start the engines.
 
 As usual, we will start by creating a fresh profile::
 
-    $ ipython profile create --parallel profile=pbs
+    $ ipython profile create --parallel --profile=pbs
 
 And in :file:`ipcluster_config.py`, we will select the PBS launchers for the controller
 and engines:
@@ -253,7 +253,7 @@ to specify your own. Here is a sample PBS script template:
     cd $PBS_O_WORKDIR
     export PATH=$HOME/usr/local/bin
     export PYTHONPATH=$HOME/usr/local/lib/python2.7/site-packages
-    /usr/local/bin/mpiexec -n {n} ipengine profile_dir={profile_dir}
+    /usr/local/bin/mpiexec -n {n} ipengine --profile_dir={profile_dir}
 
 There are a few important points about this template:
 
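The ``{n}`` and ``{profile_dir}`` placeholders in the template above are filled in before the job is submitted. A minimal sketch of that substitution, using Python's ``str.format`` on the one changed line of the template (the values filled in here are illustrative, and the real expansion happens inside ipcluster):

```python
# One line of the PBS template from the diff above; {n} and {profile_dir}
# are str.format-style placeholders. The filled-in values are illustrative.
template = "/usr/local/bin/mpiexec -n {n} ipengine --profile_dir={profile_dir}"
script = template.format(n=128, profile_dir="/home/me/.ipython/profile_pbs")
print(script)
# → /usr/local/bin/mpiexec -n 128 ipengine --profile_dir=/home/me/.ipython/profile_pbs
```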
@@ -288,7 +288,7 @@ The controller template should be similar, but simpler:
     cd $PBS_O_WORKDIR
     export PATH=$HOME/usr/local/bin
     export PYTHONPATH=$HOME/usr/local/lib/python2.7/site-packages
-    ipcontroller profile_dir={profile_dir}
+    ipcontroller --profile_dir={profile_dir}
 
 
 Once you have created these scripts, save them with names like
@@ -324,7 +324,7 @@ connections on all its interfaces, by adding in :file:`ipcontroller_config`:
 
 You can now run the cluster with::
 
-    $ ipcluster start profile=pbs n=128
+    $ ipcluster start --profile=pbs --n=128
 
 Additional configuration options can be found in the PBS section of :file:`ipcluster_config`.
 
@@ -349,7 +349,7 @@ nodes and :command:`ipcontroller` can be run remotely as well, or on localhost.
 
 As usual, we start by creating a clean profile::
 
-    $ ipython profile create --parallel profile=ssh
+    $ ipython profile create --parallel --profile=ssh
 
 To use this mode, select the SSH launchers in :file:`ipcluster_config.py`:
 
@@ -374,7 +374,7 @@ The controller's remote location and configuration can be specified:
     # note that remotely launched ipcontroller will not get the contents of
     # the local ipcontroller_config.py unless it resides on the *remote host*
     # in the location specified by the `profile_dir` argument.
-    # c.SSHControllerLauncher.program_args = ['--reuse', 'ip=*', 'profile_dir=/path/to/cd']
+    # c.SSHControllerLauncher.program_args = ['--reuse', '--ip=*', '--profile_dir=/path/to/cd']
 
 .. note::
 
@@ -390,7 +390,7 @@ on that host.
 
     c.SSHEngineSetLauncher.engines = { 'host1.example.com' : 2,
                 'host2.example.com' : 5,
-                'host3.example.com' : (1, ['profile_dir=/home/different/location']),
+                'host3.example.com' : (1, ['--profile_dir=/home/different/location']),
                 'host4.example.com' : 8 }
 
 * The `engines` dict, where the keys are the host we want to run engines on and
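As the hunk above shows, each value in the ``engines`` dict is either a bare engine count or a ``(count, argument-list)`` tuple with per-host overrides. A hypothetical normalizer (not IPython API) makes the two accepted shapes explicit:

```python
# Hypothetical helper, NOT IPython API: normalizes the two value shapes
# the SSHEngineSetLauncher.engines dict accepts.
def normalize_engines(engines):
    normalized = {}
    for host, spec in engines.items():
        if isinstance(spec, tuple):
            count, extra_args = spec      # (count, per-host argument list)
        else:
            count, extra_args = spec, []  # bare engine count, no overrides
        normalized[host] = (count, list(extra_args))
    return normalized

engines = {'host1.example.com': 2,
           'host3.example.com': (1, ['--profile_dir=/home/different/location'])}
print(normalize_engines(engines))
```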
@@ -403,7 +403,7 @@ a single location:
 
 .. sourcecode:: python
 
-    c.SSHEngineSetLauncher.engine_args = ['profile_dir=/path/to/profile_ssh']
+    c.SSHEngineSetLauncher.engine_args = ['--profile_dir=/path/to/profile_ssh']
 
 Current limitations of the SSH mode of :command:`ipcluster` are:
 
@@ -471,12 +471,12 @@ can do this:
 
 * Put :file:`ipcontroller-engine.json` in the :file:`~/.ipython/profile_<name>/security`
   directory on the engine's host, where it will be found automatically.
-* Call :command:`ipengine` with the ``file=full_path_to_the_file``
+* Call :command:`ipengine` with the ``--file=full_path_to_the_file``
   flag.
 
 The ``file`` flag works like this::
 
-    $ ipengine file=/path/to/my/ipcontroller-engine.json
+    $ ipengine --file=/path/to/my/ipcontroller-engine.json
 
 .. note::
 
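The connection file named above is plain JSON, so moving and loading it is straightforward. The sketch below writes and reads a stand-in file; the keys shown are illustrative only, since the real file's contents are generated by ipcontroller:

```python
import json
import os
import tempfile

# Stand-in connection info; the real keys are written by ipcontroller
# and these are illustrative placeholders only.
info = {"location": "127.0.0.1", "registration": 12345}

path = os.path.join(tempfile.mkdtemp(), "ipcontroller-engine.json")
with open(path, "w") as f:
    json.dump(info, f)

# An engine pointed at the file via --file=<path> reads the same JSON back:
with open(path) as f:
    loaded = json.load(f)
print(loaded == info)
# → True
```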
@@ -24,7 +24,7 @@ To follow along with this tutorial, you will need to start the IPython
 controller and four IPython engines. The simplest way of doing this is to use
 the :command:`ipcluster` command::
 
-    $ ipcluster start n=4
+    $ ipcluster start --n=4
 
 For more detailed information about starting the controller and engines, see
 our :ref:`introduction <ip1par>` to using IPython for parallel computing.
@@ -350,9 +350,9 @@ The built-in routing schemes:
 
 To select one of these schemes, simply do::
 
-    $ ipcontroller scheme=<schemename>
+    $ ipcontroller --scheme=<schemename>
 for instance:
-    $ ipcontroller scheme=lru
+    $ ipcontroller --scheme=lru
 
 lru: Least Recently Used
 
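To make the ``lru`` scheme named above concrete, here is a hypothetical sketch (not the controller's actual implementation) of least-recently-used selection over a fixed set of engines:

```python
from collections import deque

# Hypothetical sketch of LRU engine selection, not the scheduler's real
# code: the engine assigned work longest ago is chosen next.
def pick_lru(queue):
    engine = queue.popleft()   # least recently used engine
    queue.append(engine)       # now the most recently used
    return engine

engines = deque([0, 1, 2, 3])
print([pick_lru(engines) for _ in range(6)])
# → [0, 1, 2, 3, 0, 1]
```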
@@ -204,7 +204,7 @@ security keys. The naming convention for cluster directories is:
 To create a new cluster profile (named "mycluster") and the associated cluster
 directory, type the following command at the Windows Command Prompt::
 
-    ipython profile create --parallel profile=mycluster
+    ipython profile create --parallel --profile=mycluster
 
 The output of this command is shown in the screenshot below. Notice how
 :command:`ipcluster` prints out the location of the newly created cluster
@@ -257,7 +257,7 @@ Starting the cluster profile
 Once a cluster profile has been configured, starting an IPython cluster using
 the profile is simple::
 
-    ipcluster start profile=mycluster n=32
+    ipcluster start --profile=mycluster --n=32
 
 The ``-n`` option tells :command:`ipcluster` how many engines to start (in
 this case 32). Stopping the cluster is as simple as typing Control-C.