Updating the Sphinx docs in preparation for the release....
Brian Granger
@@ -27,6 +27,12 @@ Release 0.9
27 27 New features
28 28 ------------
29 29
30 * All furl files and security certificates are now put in a read-only directory
31 named ~/.ipython/security.
32
33 * A single function :func:`get_ipython_dir`, in :mod:`IPython.genutils`, now
34 determines the user's IPython directory in a robust manner (see sketch below).
35
30 36 * Laurent's WX application has been given a top-level script called ipython-wx,
31 37 and it has received numerous fixes. We expect this code to be
32 38 architecturally better integrated with Gael's WX 'ipython widget' over the
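
A usage sketch for the :func:`get_ipython_dir` helper mentioned above (the
function and module names come from the release note; the variable names and
the use of :mod:`os.path` are just for illustration)::

    import os
    from IPython.genutils import get_ipython_dir

    # Resolve the per-user IPython directory (typically ~/.ipython) in a
    # platform-independent way.
    ipython_dir = get_ipython_dir()

    # Furl files and certificates live in the read-only 'security' subdirectory.
    security_dir = os.path.join(ipython_dir, 'security')
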
@@ -58,7 +64,8 @@ New features
58 64 time and report problems), but it now works for the developers. We are
59 65 working hard on continuing to improve it, as this was probably IPython's
60 66 major Achilles heel (the lack of proper test coverage made it effectively
61 impossible to do large-scale refactoring).
67 impossible to do large-scale refactoring). The full test suite can now
68 be run using the :command:`iptest` command line program.
62 69
63 70 * The notion of a task has been completely reworked. An `ITask` interface has
64 71 been created. This interface defines the methods that tasks need to implement.
@@ -120,6 +127,9 @@ New features
120 127 Bug fixes
121 128 ---------
122 129
130 * The Windows installer has been fixed. Now all IPython scripts have ``.bat``
131 versions created. Also, the Start Menu shortcuts have been updated.
132
123 133 * The colors escapes in the multiengine client are now turned off on win32 as they
124 134 don't print correctly.
125 135
@@ -128,7 +138,7 @@ Bug fixes
128 138
129 139 * A few subpackages has missing `__init__.py` files.
130 140
131 * The documentation is only created is Sphinx is found. Previously, the `setup.py`
141 * The documentation is only created if Sphinx is found. Previously, the `setup.py`
132 142 script would fail if it was missing.
133 143
134 144 * Greedy 'cd' completion has been disabled again (it was enabled in 0.8.4)
@@ -137,6 +147,13 @@ Bug fixes
137 147 Backwards incompatible changes
138 148 ------------------------------
139 149
150 * The ``clusterfile`` option of the :command:`ipcluster` command has been
151 removed as it was not working and it will be replaced soon by something much
152 more robust.
153
154 * The :mod:`IPython.kernel` configuration now properly finds the user's
155 IPython directory.
156
140 157 * In ipapi, the :func:`make_user_ns` function has been replaced with
141 158 :func:`make_user_namespaces`, to support dict subclasses in namespace
142 159 creation.
@@ -4,24 +4,25 @@
4 4 Credits
5 5 =======
6 6
7 IPython is mainly developed by Fernando PΓ©rez
8 <Fernando.Perez@colorado.edu>, but the project was born from mixing in
9 Fernando's code with the IPP project by Janko Hauser
10 <jhauser-AT-zscout.de> and LazyPython by Nathan Gray
11 <n8gray-AT-caltech.edu>. For all IPython-related requests, please
12 contact Fernando.
7 IPython is led by Fernando PΓ©rez.
13 8
14 9 As of early 2006, the following developers have joined the core team:
15 10
16 * [Robert Kern] <rkern-AT-enthought.com>: co-mentored the 2005
17 Google Summer of Code project to develop python interactive
18 notebooks (XML documents) and graphical interface. This project
19 was awarded to the students Tzanko Matev <tsanko-AT-gmail.com> and
20 Toni Alatalo <antont-AT-an.org>
21 * [Brian Granger] <bgranger-AT-scu.edu>: extending IPython to allow
22 support for interactive parallel computing.
23 * [Ville Vainio] <vivainio-AT-gmail.com>: Ville is the new
24 maintainer for the main trunk of IPython after version 0.7.1.
11 * [Robert Kern] <rkern-AT-enthought.com>: co-mentored the 2005
12 Google Summer of Code project to develop python interactive
13 notebooks (XML documents) and graphical interface. This project
14 was awarded to the students Tzanko Matev <tsanko-AT-gmail.com> and
15 Toni Alatalo <antont-AT-an.org>.
16
17 * [Brian Granger] <ellisonbg-AT-gmail.com>: extending IPython to allow
18 support for interactive parallel computing.
19
20 * [Benjamin (Min) Ragan-Kelley]: key work on IPython's parallel
21 computing infrastructure.
22
23 * [Ville Vainio] <vivainio-AT-gmail.com>: Ville has made many improvements
24 to the core of IPython and was the maintainer of the main IPython
25 trunk from version 0.7.1 to 0.8.4.
25 26
26 27 The IPython project is also very grateful to:
27 28
@@ -54,86 +55,134 @@ And last but not least, all the kind IPython users who have emailed new
54 55 code, bug reports, fixes, comments and ideas. A brief list follows,
55 56 please let me know if I have ommitted your name by accident:
56 57
57 * [Jack Moffit] <jack-AT-xiph.org> Bug fixes, including the infamous
58 color problem. This bug alone caused many lost hours and
59 frustration, many thanks to him for the fix. I've always been a
60 fan of Ogg & friends, now I have one more reason to like these folks.
61 Jack is also contributing with Debian packaging and many other
62 things.
63 * [Alexander Schmolck] <a.schmolck-AT-gmx.net> Emacs work, bug
64 reports, bug fixes, ideas, lots more. The ipython.el mode for
65 (X)Emacs is Alex's code, providing full support for IPython under
66 (X)Emacs.
67 * [Andrea Riciputi] <andrea.riciputi-AT-libero.it> Mac OSX
68 information, Fink package management.
69 * [Gary Bishop] <gb-AT-cs.unc.edu> Bug reports, and patches to work
70 around the exception handling idiosyncracies of WxPython. Readline
71 and color support for Windows.
72 * [Jeffrey Collins] <Jeff.Collins-AT-vexcel.com> Bug reports. Much
73 improved readline support, including fixes for Python 2.3.
74 * [Dryice Liu] <dryice-AT-liu.com.cn> FreeBSD port.
75 * [Mike Heeter] <korora-AT-SDF.LONESTAR.ORG>
76 * [Christopher Hart] <hart-AT-caltech.edu> PDB integration.
77 * [Milan Zamazal] <pdm-AT-zamazal.org> Emacs info.
78 * [Philip Hisley] <compsys-AT-starpower.net>
79 * [Holger Krekel] <pyth-AT-devel.trillke.net> Tab completion, lots
80 more.
81 * [Robin Siebler] <robinsiebler-AT-starband.net>
82 * [Ralf Ahlbrink] <ralf_ahlbrink-AT-web.de>
83 * [Thorsten Kampe] <thorsten-AT-thorstenkampe.de>
84 * [Fredrik Kant] <fredrik.kant-AT-front.com> Windows setup.
85 * [Syver Enstad] <syver-en-AT-online.no> Windows setup.
86 * [Richard] <rxe-AT-renre-europe.com> Global embedding.
87 * [Hayden Callow] <h.callow-AT-elec.canterbury.ac.nz> Gnuplot.py 1.6
88 compatibility.
89 * [Leonardo Santagada] <retype-AT-terra.com.br> Fixes for Windows
90 installation.
91 * [Christopher Armstrong] <radix-AT-twistedmatrix.com> Bugfixes.
92 * [Francois Pinard] <pinard-AT-iro.umontreal.ca> Code and
93 documentation fixes.
94 * [Cory Dodt] <cdodt-AT-fcoe.k12.ca.us> Bug reports and Windows
95 ideas. Patches for Windows installer.
96 * [Olivier Aubert] <oaubert-AT-bat710.univ-lyon1.fr> New magics.
97 * [King C. Shu] <kingshu-AT-myrealbox.com> Autoindent patch.
98 * [Chris Drexler] <chris-AT-ac-drexler.de> Readline packages for
99 Win32/CygWin.
100 * [Gustavo Cordova Avila] <gcordova-AT-sismex.com> EvalDict code for
101 nice, lightweight string interpolation.
102 * [Kasper Souren] <Kasper.Souren-AT-ircam.fr> Bug reports, ideas.
103 * [Gever Tulley] <gever-AT-helium.com> Code contributions.
104 * [Ralf Schmitt] <ralf-AT-brainbot.com> Bug reports & fixes.
105 * [Oliver Sander] <osander-AT-gmx.de> Bug reports.
106 * [Rod Holland] <rhh-AT-structurelabs.com> Bug reports and fixes to
107 logging module.
108 * [Daniel 'Dang' Griffith] <pythondev-dang-AT-lazytwinacres.net>
109 Fixes, enhancement suggestions for system shell use.
110 * [Viktor Ransmayr] <viktor.ransmayr-AT-t-online.de> Tests and
111 reports on Windows installation issues. Contributed a true Windows
112 binary installer.
113 * [Mike Salib] <msalib-AT-mit.edu> Help fixing a subtle bug related
114 to traceback printing.
115 * [W.J. van der Laan] <gnufnork-AT-hetdigitalegat.nl> Bash-like
116 prompt specials.
117 * [Antoon Pardon] <Antoon.Pardon-AT-rece.vub.ac.be> Critical fix for
118 the multithreaded IPython.
119 * [John Hunter] <jdhunter-AT-nitace.bsd.uchicago.edu> Matplotlib
120 author, helped with all the development of support for matplotlib
121 in IPyhton, including making necessary changes to matplotlib itself.
122 * [Matthew Arnison] <maffew-AT-cat.org.au> Bug reports, '%run -d' idea.
123 * [Prabhu Ramachandran] <prabhu_r-AT-users.sourceforge.net> Help
124 with (X)Emacs support, threading patches, ideas...
125 * [Norbert Tretkowski] <tretkowski-AT-inittab.de> help with Debian
126 packaging and distribution.
127 * [George Sakkis] <gsakkis-AT-eden.rutgers.edu> New matcher for
128 tab-completing named arguments of user-defined functions.
129 * [JΓΆrgen Stenarson] <jorgen.stenarson-AT-bostream.nu> Wildcard
130 support implementation for searching namespaces.
131 * [Vivian De Smedt] <vivian-AT-vdesmedt.com> Debugger enhancements,
132 so that when pdb is activated from within IPython, coloring, tab
133 completion and other features continue to work seamlessly.
134 * [Scott Tsai] <scottt958-AT-yahoo.com.tw> Support for automatic
135 editor invocation on syntax errors (see
136 http://www.scipy.net/roundup/ipython/issue36).
137 * [Alexander Belchenko] <bialix-AT-ukr.net> Improvements for win32
138 paging system.
139 * [Will Maier] <willmaier-AT-ml1.net> Official OpenBSD port. No newline at end of file
58 * Dan Milstein <danmil-AT-comcast.net>. A bold refactoring of the
59 core prefilter stuff in the IPython interpreter.
60
61 * [Jack Moffit] <jack-AT-xiph.org> Bug fixes, including the infamous
62 color problem. This bug alone caused many lost hours and
63 frustration, many thanks to him for the fix. I've always been a
64 fan of Ogg & friends, now I have one more reason to like these folks.
65 Jack is also contributing with Debian packaging and many other
66 things.
67
68 * [Alexander Schmolck] <a.schmolck-AT-gmx.net> Emacs work, bug
69 reports, bug fixes, ideas, lots more. The ipython.el mode for
70 (X)Emacs is Alex's code, providing full support for IPython under
71 (X)Emacs.
72
73 * [Andrea Riciputi] <andrea.riciputi-AT-libero.it> Mac OSX
74 information, Fink package management.
75
76 * [Gary Bishop] <gb-AT-cs.unc.edu> Bug reports, and patches to work
77 around the exception handling idiosyncracies of WxPython. Readline
78 and color support for Windows.
79
80 * [Jeffrey Collins] <Jeff.Collins-AT-vexcel.com> Bug reports. Much
81 improved readline support, including fixes for Python 2.3.
82
83 * [Dryice Liu] <dryice-AT-liu.com.cn> FreeBSD port.
84
85 * [Mike Heeter] <korora-AT-SDF.LONESTAR.ORG>
86
87 * [Christopher Hart] <hart-AT-caltech.edu> PDB integration.
88
89 * [Milan Zamazal] <pdm-AT-zamazal.org> Emacs info.
90
91 * [Philip Hisley] <compsys-AT-starpower.net>
92
93 * [Holger Krekel] <pyth-AT-devel.trillke.net> Tab completion, lots
94 more.
95
96 * [Robin Siebler] <robinsiebler-AT-starband.net>
97
98 * [Ralf Ahlbrink] <ralf_ahlbrink-AT-web.de>
99
100 * [Thorsten Kampe] <thorsten-AT-thorstenkampe.de>
101
102 * [Fredrik Kant] <fredrik.kant-AT-front.com> Windows setup.
103
104 * [Syver Enstad] <syver-en-AT-online.no> Windows setup.
105
106 * [Richard] <rxe-AT-renre-europe.com> Global embedding.
107
108 * [Hayden Callow] <h.callow-AT-elec.canterbury.ac.nz> Gnuplot.py 1.6
109 compatibility.
110
111 * [Leonardo Santagada] <retype-AT-terra.com.br> Fixes for Windows
112 installation.
113
114 * [Christopher Armstrong] <radix-AT-twistedmatrix.com> Bugfixes.
115
116 * [Francois Pinard] <pinard-AT-iro.umontreal.ca> Code and
117 documentation fixes.
118
119 * [Cory Dodt] <cdodt-AT-fcoe.k12.ca.us> Bug reports and Windows
120 ideas. Patches for Windows installer.
121
122 * [Olivier Aubert] <oaubert-AT-bat710.univ-lyon1.fr> New magics.
123
124 * [King C. Shu] <kingshu-AT-myrealbox.com> Autoindent patch.
125
126 * [Chris Drexler] <chris-AT-ac-drexler.de> Readline packages for
127 Win32/CygWin.
128
129 * [Gustavo Cordova Avila] <gcordova-AT-sismex.com> EvalDict code for
130 nice, lightweight string interpolation.
131
132 * [Kasper Souren] <Kasper.Souren-AT-ircam.fr> Bug reports, ideas.
133
134 * [Gever Tulley] <gever-AT-helium.com> Code contributions.
135
136 * [Ralf Schmitt] <ralf-AT-brainbot.com> Bug reports & fixes.
137
138 * [Oliver Sander] <osander-AT-gmx.de> Bug reports.
139
140 * [Rod Holland] <rhh-AT-structurelabs.com> Bug reports and fixes to
141 logging module.
142
143 * [Daniel 'Dang' Griffith] <pythondev-dang-AT-lazytwinacres.net>
144 Fixes, enhancement suggestions for system shell use.
145
146 * [Viktor Ransmayr] <viktor.ransmayr-AT-t-online.de> Tests and
147 reports on Windows installation issues. Contributed a true Windows
148 binary installer.
149
150 * [Mike Salib] <msalib-AT-mit.edu> Help fixing a subtle bug related
151 to traceback printing.
152
153 * [W.J. van der Laan] <gnufnork-AT-hetdigitalegat.nl> Bash-like
154 prompt specials.
155
156 * [Antoon Pardon] <Antoon.Pardon-AT-rece.vub.ac.be> Critical fix for
157 the multithreaded IPython.
158
159 * [John Hunter] <jdhunter-AT-nitace.bsd.uchicago.edu> Matplotlib
160 author, helped with all the development of support for matplotlib
161 in IPython, including making necessary changes to matplotlib itself.
162
163 * [Matthew Arnison] <maffew-AT-cat.org.au> Bug reports, '%run -d' idea.
164
165 * [Prabhu Ramachandran] <prabhu_r-AT-users.sourceforge.net> Help
166 with (X)Emacs support, threading patches, ideas...
167
168 * [Norbert Tretkowski] <tretkowski-AT-inittab.de> Help with Debian
169 packaging and distribution.
170
171 * [George Sakkis] <gsakkis-AT-eden.rutgers.edu> New matcher for
172 tab-completing named arguments of user-defined functions.
173
174 * [JΓΆrgen Stenarson] <jorgen.stenarson-AT-bostream.nu> Wildcard
175 support implementation for searching namespaces.
176
177 * [Vivian De Smedt] <vivian-AT-vdesmedt.com> Debugger enhancements,
178 so that when pdb is activated from within IPython, coloring, tab
179 completion and other features continue to work seamlessly.
180
181 * [Scott Tsai] <scottt958-AT-yahoo.com.tw> Support for automatic
182 editor invocation on syntax errors (see
183 http://www.scipy.net/roundup/ipython/issue36).
184
185 * [Alexander Belchenko] <bialix-AT-ukr.net> Improvements for win32
186 paging system.
187
188 * [Will Maier] <willmaier-AT-ml1.net> Official OpenBSD port. No newline at end of file
@@ -11,37 +11,39 @@ The :mod:`IPython.kernel.core.notification` module will provide a simple impleme
11 11 Functional Requirements
12 12 =======================
13 13 The notification center must:
14 * Provide synchronous notification of events to all registered observers.
15 * Provide typed or labeled notification types
16 * Allow observers to register callbacks for individual or all notification types
17 * Allow observers to register callbacks for events from individual or all notifying objects
18 * Notification to the observer consists of the notification type, notifying object and user-supplied extra information [implementation: as keyword parameters to the registered callback]
19 * Perform as O(1) in the case of no registered observers.
20 * Permit out-of-process or cross-network extension.
21
14 * Provide synchronous notification of events to all registered observers.
15 * Provide typed or labeled notification types
16 * Allow observers to register callbacks for individual or all notification types
17 * Allow observers to register callbacks for events from individual or all notifying objects
18 * Notification to the observer consists of the notification type, notifying object and user-supplied extra information [implementation: as keyword parameters to the registered callback]
19 * Perform as O(1) in the case of no registered observers.
20 * Permit out-of-process or cross-network extension.
21
22 22 What's not included
23 23 ==============================================================
24 24 As written, the :mod:`IPython.kernel.core.notificaiton` module does not:
25 * Provide out-of-process or network notifications [these should be handled by a separate, Twisted aware module in :mod:`IPython.kernel`].
26 * Provide zope.interface-style interfaces for the notification system [these should also be provided by the :mod:`IPython.kernel` module]
27
25 * Provide out-of-process or network notifications [these should be handled by a separate, Twisted aware module in :mod:`IPython.kernel`].
26 * Provide zope.interface-style interfaces for the notification system [these should also be provided by the :mod:`IPython.kernel` module]
27
28 28 Use Cases
29 29 =========
30 30 The following use cases describe the main intended uses of the notificaiton module and illustrate the main success scenario for each use case:
31 31
32 1. Dwight Schroot is writing a frontend for the IPython project. His frontend is stuck in the stone age and must communicate synchronously with an IPython.kernel.core.Interpreter instance. Because code is executed in blocks by the Interpreter, Dwight's UI freezes every time he executes a long block of code. To keep track of the progress of his long running block, Dwight adds the following code to his frontend's set-up code::
33 from IPython.kernel.core.notification import NotificationCenter
34 center = NotificationCenter.sharedNotificationCenter
35 center.registerObserver(self, type=IPython.kernel.core.Interpreter.STDOUT_NOTIFICATION_TYPE, notifying_object=self.interpreter, callback=self.stdout_notification)
36
37 and elsewhere in his front end::
38 def stdout_notification(self, type, notifying_object, out_string=None):
39 self.writeStdOut(out_string)
40
41 If everything works, the Interpreter will (according to its published API) fire a notification via the :data:`IPython.kernel.core.notification.sharedCenter` of type :const:`STD_OUT_NOTIFICATION_TYPE` before writing anything to stdout [it's up to the Intereter implementation to figure out when to do this]. The notificaiton center will then call the registered callbacks for that event type (in this case, Dwight's frontend's stdout_notification method). Again, according to its API, the Interpreter provides an additional keyword argument when firing the notificaiton of out_string, a copy of the string it will write to stdout.
42
43 Like magic, Dwight's frontend is able to provide output, even during long-running calculations. Now if Jim could just convince Dwight to use Twisted...
44
45 2. Boss Hog is writing a frontend for the IPython project. Because Boss Hog is stuck in the stone age, his frontend will be written in a new Fortran-like dialect of python and will run only from the command line. Because he doesn't need any fancy notification system and is used to worrying about every cycle on his rat-wheel powered mini, Boss Hog is adamant that the new notification system not produce any performance penalty. As they say in Hazard county, there's no such thing as a free lunch. If he wanted zero overhead, he should have kept using IPython 0.8. Instead, those tricky Duke boys slide in a suped-up bridge-out jumpin' awkwardly confederate-lovin' notification module that imparts only a constant (and small) performance penalty when the Interpreter (or any other object) fires an event for which there are no registered observers. Of course, the same notificaiton-enabled Interpreter can then be used in frontends that require notifications, thus saving the IPython project from a nasty civil war.
46
47 3. Barry is wrting a frontend for the IPython project. Because Barry's front end is the *new hotness*, it uses an asynchronous event model to communicate with a Twisted :mod:`~IPython.kernel.engineservice` that communicates with the IPython :class:`~IPython.kernel.core.interpreter.Interpreter`. Using the :mod:`IPython.kernel.notification` module, an asynchronous wrapper on the :mod:`IPython.kernel.core.notification` module, Barry's frontend can register for notifications from the interpreter that are delivered asynchronously. Even if Barry's frontend is running on a separate process or even host from the Interpreter, the notifications are delivered, as if by dark and twisted magic. Just like Dwight's frontend, Barry's frontend can now recieve notifications of e.g. writing to stdout/stderr, opening/closing an external file, an exception in the executing code, etc. No newline at end of file
32 1. Dwight Schroot is writing a frontend for the IPython project. His frontend is stuck in the stone age and must communicate synchronously with an IPython.kernel.core.Interpreter instance. Because code is executed in blocks by the Interpreter, Dwight's UI freezes every time he executes a long block of code. To keep track of the progress of his long running block, Dwight adds the following code to his frontend's set-up code::
33
34 from IPython.kernel.core.notification import NotificationCenter
35 center = NotificationCenter.sharedNotificationCenter
36 center.registerObserver(self, type=IPython.kernel.core.Interpreter.STDOUT_NOTIFICATION_TYPE, notifying_object=self.interpreter, callback=self.stdout_notification)
37
38 and elsewhere in his front end::
39
40 def stdout_notification(self, type, notifying_object, out_string=None):
41 self.writeStdOut(out_string)
42
43 If everything works, the Interpreter will (according to its published API) fire a notification via the :data:`IPython.kernel.core.notification.sharedCenter` of type :const:`STDOUT_NOTIFICATION_TYPE` before writing anything to stdout [it's up to the Interpreter implementation to figure out when to do this]. The notification center will then call the registered callbacks for that event type (in this case, Dwight's frontend's stdout_notification method). Again, according to its API, the Interpreter provides an additional keyword argument, out_string, when firing the notification: a copy of the string it will write to stdout.
44
45 Like magic, Dwight's frontend is able to provide output, even during long-running calculations. Now if Jim could just convince Dwight to use Twisted...
46
45 2. Boss Hog is writing a frontend for the IPython project. Because Boss Hog is stuck in the stone age, his frontend will be written in a new Fortran-like dialect of Python and will run only from the command line. Because he doesn't need any fancy notification system and is used to worrying about every cycle on his rat-wheel powered mini, Boss Hog is adamant that the new notification system not produce any performance penalty. As they say in Hazzard County, there's no such thing as a free lunch. If he wanted zero overhead, he should have kept using IPython 0.8. Instead, those tricky Duke boys slide in a souped-up bridge-out jumpin' awkwardly confederate-lovin' notification module that imparts only a constant (and small) performance penalty when the Interpreter (or any other object) fires an event for which there are no registered observers. Of course, the same notification-enabled Interpreter can then be used in frontends that require notifications, thus saving the IPython project from a nasty civil war.
48
49 3. Barry is writing a frontend for the IPython project. Because Barry's frontend is the *new hotness*, it uses an asynchronous event model to communicate with a Twisted :mod:`~IPython.kernel.engineservice` that communicates with the IPython :class:`~IPython.kernel.core.interpreter.Interpreter`. Using the :mod:`IPython.kernel.notification` module, an asynchronous wrapper on the :mod:`IPython.kernel.core.notification` module, Barry's frontend can register for notifications from the interpreter that are delivered asynchronously. Even if Barry's frontend is running on a separate process or even host from the Interpreter, the notifications are delivered, as if by dark and twisted magic. Just like Dwight's frontend, Barry's frontend can now receive notifications of e.g. writing to stdout/stderr, opening/closing an external file, an exception in the executing code, etc. No newline at end of file
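
To make the requirements and use cases above concrete, here is a minimal,
illustrative observer-pattern sketch. It is not the actual
:mod:`IPython.kernel.core.notification` implementation; the method names mirror
the API used in the use cases, but the internals are assumptions::

    class NotificationCenter(object):
        """Toy synchronous notification center (illustration only)."""

        def __init__(self):
            # Maps (notification_type, notifying_object) -> list of callbacks.
            # None in either slot acts as a wildcard.
            self.observers = {}

        def registerObserver(self, observer, type=None, notifying_object=None,
                             callback=None):
            # The observer handle is accepted for API parity but unused here.
            key = (type, notifying_object)
            self.observers.setdefault(key, []).append(callback)

        def postNotification(self, type, notifying_object, **extra):
            if not self.observers:
                # O(1) fast path when nothing is registered.
                return
            keys = [(type, notifying_object), (type, None),
                    (None, notifying_object), (None, None)]
            for key in keys:
                for callback in self.observers.get(key, []):
                    # Deliver the type, the notifying object and any
                    # user-supplied extra information as keyword arguments.
                    callback(type, notifying_object, **extra)

    sharedNotificationCenter = NotificationCenter()
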
@@ -32,16 +32,21 @@ IPython is implemented using a distributed set of processes that communicate usi
32 32
33 33 We need to build a system that makes it trivial for users to start and manage IPython processes. This system should have the following properties:
34 34
35 * It should possible to do everything through an extremely simple API that users
36 can call from their own Python script. No shell commands should be needed.
37 * This simple API should be configured using standard .ini files.
38 * The system should make it possible to start processes using a number of different
39 approaches: SSH, PBS/Torque, Xgrid, Windows Server, mpirun, etc.
40 * The controller and engine processes should each have a daemon for monitoring,
41 signaling and clean up.
42 * The system should be secure.
43 * The system should work under all the major operating systems, including
44 Windows.
35 * It should be possible to do everything through an extremely simple API that users
36 can call from their own Python script. No shell commands should be needed.
37
38 * This simple API should be configured using standard .ini files.
39
40 * The system should make it possible to start processes using a number of different
41 approaches: SSH, PBS/Torque, Xgrid, Windows Server, mpirun, etc.
42
43 * The controller and engine processes should each have a daemon for monitoring,
44 signaling and clean up.
45
46 * The system should be secure.
47
48 * The system should work under all the major operating systems, including
49 Windows.
45 50
46 51 Initial work has begun on the daemon infrastructure, and some of the needed logic is contained in the ipcluster script.
47 52
@@ -57,12 +62,15 @@ Security
57 62
58 63 Currently, IPython has no built in security or security model. Because we would like IPython to be usable on public computer systems and over wide area networks, we need to come up with a robust solution for security. Here are some of the specific things that need to be included:
59 64
60 * User authentication between all processes (engines, controller and clients).
61 * Optional TSL/SSL based encryption of all communication channels.
62 * A good way of picking network ports so multiple users on the same system can
63 run their own controller and engines without interfering with those of others.
64 * A clear model for security that enables users to evaluate the security risks
65 associated with using IPython in various manners.
65 * User authentication between all processes (engines, controller and clients).
66
67 * Optional TLS/SSL based encryption of all communication channels.
68
69 * A good way of picking network ports so multiple users on the same system can
70 run their own controller and engines without interfering with those of others.
71
72 * A clear model for security that enables users to evaluate the security risks
73 associated with using IPython in various manners.
66 74
67 75 For the implementation of this, we plan on using Twisted's support for SSL and authentication. One things that we really should look at is the `Foolscap`_ network protocol, which provides many of these things out of the box.
68 76
@@ -70,6 +78,9 @@ For the implementation of this, we plan on using Twisted's support for SSL and a
70 78
71 79 The security work needs to be done in conjunction with other network protocol stuff.
72 80
81 As of the 0.9 release of IPython, we are using Foolscap and we have implemented
82 a full security model.
83
73 84 Latent performance issues
74 85 -------------------------
75 86
@@ -82,7 +93,7 @@ Currently, we have a number of performance issues that are waiting to bite users
82 93 * Currently, the client to controller connections are done through XML-RPC using
83 94 HTTP 1.0. This is very inefficient as XML-RPC is a very verbose protocol and
84 95 each request must be handled with a new connection. We need to move these network
85 connections over to PB or Foolscap.
96 connections over to PB or Foolscap. Done!
86 97 * We currently don't have a good way of handling large objects in the controller.
87 98 The biggest problem is that because we don't have any way of streaming objects,
88 99 we get lots of temporary copies in the low-level buffers. We need to implement
@@ -16,10 +16,13 @@ Will IPython speed my Python code up?
16 16 Yes and no. When converting a serial code to run in parallel, there often many
17 17 difficulty questions that need to be answered, such as:
18 18
19 * How should data be decomposed onto the set of processors?
20 * What are the data movement patterns?
21 * Can the algorithm be structured to minimize data movement?
22 * Is dynamic load balancing important?
19 * How should data be decomposed onto the set of processors?
20
21 * What are the data movement patterns?
22
23 * Can the algorithm be structured to minimize data movement?
24
25 * Is dynamic load balancing important?
23 26
24 27 We can't answer such questions for you. This is the hard (but fun) work of parallel
25 28 computing. But, once you understand these things IPython will make it easier for you to
@@ -28,9 +31,7 @@ resulting parallel code interactively.
28 31
29 32 With that said, if your problem is trivial to parallelize, IPython has a number of
30 33 different interfaces that will enable you to parallelize things is almost no time at
31 all. A good place to start is the ``map`` method of our `multiengine interface`_.
32
33 .. _multiengine interface: ./parallel_multiengine
34 all. A good place to start is the ``map`` method of our :class:`MultiEngineClient`.
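
For instance, a trivially parallel map could look like the following sketch,
which assumes the 0.9-style :mod:`IPython.kernel.client` API and a controller
with engines already running (e.g. started with :command:`ipcluster`)::

    from IPython.kernel import client

    # Connect to the running controller.
    mec = client.MultiEngineClient()

    # Split the sequence across the engines and gather the results.
    squares = mec.map(lambda x: x**2, range(32))
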
34 35
35 36 What is the best way to use MPI from Python?
36 37 --------------------------------------------
@@ -40,26 +41,33 @@ What about all the other parallel computing packages in Python?
40 41
41 42 Some of the unique characteristic of IPython are:
42 43
43 * IPython is the only architecture that abstracts out the notion of a
44 parallel computation in such a way that new models of parallel computing
45 can be explored quickly and easily. If you don't like the models we
46 provide, you can simply create your own using the capabilities we provide.
47 * IPython is asynchronous from the ground up (we use `Twisted`_).
48 * IPython's architecture is designed to avoid subtle problems
49 that emerge because of Python's global interpreter lock (GIL).
50 * While IPython'1 architecture is designed to support a wide range
51 of novel parallel computing models, it is fully interoperable with
52 traditional MPI applications.
53 * IPython has been used and tested extensively on modern supercomputers.
54 * IPython's networking layers are completely modular. Thus, is
55 straightforward to replace our existing network protocols with
56 high performance alternatives (ones based upon Myranet/Infiniband).
57 * IPython is designed from the ground up to support collaborative
58 parallel computing. This enables multiple users to actively develop
59 and run the *same* parallel computation.
60 * Interactivity is a central goal for us. While IPython does not have
61 to be used interactivly, is can be.
62
44 * IPython is the only architecture that abstracts out the notion of a
45 parallel computation in such a way that new models of parallel computing
46 can be explored quickly and easily. If you don't like the models we
47 provide, you can simply create your own using the capabilities we provide.
48
49 * IPython is asynchronous from the ground up (we use `Twisted`_).
50
51 * IPython's architecture is designed to avoid subtle problems
52 that emerge because of Python's global interpreter lock (GIL).
53
54 * While IPython's architecture is designed to support a wide range
55 of novel parallel computing models, it is fully interoperable with
56 traditional MPI applications.
57
58 * IPython has been used and tested extensively on modern supercomputers.
59
60 * IPython's networking layers are completely modular. Thus, it is
61 straightforward to replace our existing network protocols with
62 high-performance alternatives (ones based upon Myrinet/InfiniBand).
63
64 * IPython is designed from the ground up to support collaborative
65 parallel computing. This enables multiple users to actively develop
66 and run the *same* parallel computation.
67
68 * Interactivity is a central goal for us. While IPython does not have
69 to be used interactively, it can be.
70
63 71 .. _Twisted: http://www.twistedmatrix.com
64 72
65 73 Why The IPython controller a bottleneck in my parallel calculation?
@@ -71,13 +79,17 @@ too much data is being pushed and pulled to and from the engines. If your algori
71 79 is structured in this way, you really should think about alternative ways of
72 80 handling the data movement. Here are some ideas:
73 81
74 1. Have the engines write data to files on the locals disks of the engines.
75 2. Have the engines write data to files on a file system that is shared by
76 the engines.
77 3. Have the engines write data to a database that is shared by the engines.
78 4. Simply keep data in the persistent memory of the engines and move the
79 computation to the data (rather than the data to the computation).
80 5. See if you can pass data directly between engines using MPI.
82 1. Have the engines write data to files on the local disks of the engines.
83
84 2. Have the engines write data to files on a file system that is shared by
85 the engines.
86
87 3. Have the engines write data to a database that is shared by the engines.
88
89 4. Simply keep data in the persistent memory of the engines and move the
90 computation to the data (rather than the data to the computation).
91
92 5. See if you can pass data directly between engines using MPI.
81 93
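
As an illustration of idea 4 (keep the data on the engines and move the
computation to it), a sketch using the 0.9-style multiengine API (the variable
names are made up) might be::

    from IPython.kernel import client

    mec = client.MultiEngineClient()

    # Push the large dataset to the engines once...
    mec.push(dict(data=range(1000000)))

    # ...then run computations where the data already lives, pulling back
    # only small summary results (one value per engine).
    mec.execute('partial = sum(data)')
    partials = mec.pull('partial')
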
82 94 Isn't Python slow to be used for high-performance parallel computing?
83 95 ---------------------------------------------------------------------
@@ -7,50 +7,32 @@ History
7 7 Origins
8 8 =======
9 9
10 The current IPython system grew out of the following three projects:
11
12 * [ipython] by Fernando PΓ©rez. I was working on adding
13 Mathematica-type prompts and a flexible configuration system
14 (something better than $PYTHONSTARTUP) to the standard Python
15 interactive interpreter.
16 * [IPP] by Janko Hauser. Very well organized, great usability. Had
17 an old help system. IPP was used as the 'container' code into
18 which I added the functionality from ipython and LazyPython.
19 * [LazyPython] by Nathan Gray. Simple but very powerful. The quick
20 syntax (auto parens, auto quotes) and verbose/colored tracebacks
21 were all taken from here.
22
23 When I found out about IPP and LazyPython I tried to join all three
24 into a unified system. I thought this could provide a very nice
25 working environment, both for regular programming and scientific
26 computing: shell-like features, IDL/Matlab numerics, Mathematica-type
27 prompt history and great object introspection and help facilities. I
28 think it worked reasonably well, though it was a lot more work than I
29 had initially planned.
30
31
32 Current status
33 ==============
34
35 The above listed features work, and quite well for the most part. But
36 until a major internal restructuring is done (see below), only bug
37 fixing will be done, no other features will be added (unless very minor
38 and well localized in the cleaner parts of the code).
39
40 IPython consists of some 18000 lines of pure python code, of which
41 roughly two thirds is reasonably clean. The rest is, messy code which
42 needs a massive restructuring before any further major work is done.
43 Even the messy code is fairly well documented though, and most of the
44 problems in the (non-existent) class design are well pointed to by a
45 PyChecker run. So the rewriting work isn't that bad, it will just be
46 time-consuming.
47
48
49 Future
50 ------
51
52 See the separate new_design document for details. Ultimately, I would
53 like to see IPython become part of the standard Python distribution as a
54 'big brother with batteries' to the standard Python interactive
55 interpreter. But that will never happen with the current state of the
56 code, so all contributions are welcome. No newline at end of file
10 IPython was started in 2001 by Fernando Pérez. IPython as we know it
11 today grew out of the following three projects:
12
13 * ipython by Fernando PΓ©rez. I was working on adding
14 Mathematica-type prompts and a flexible configuration system
15 (something better than $PYTHONSTARTUP) to the standard Python
16 interactive interpreter.
17 * IPP by Janko Hauser. Very well organized, great usability. Had
18 an old help system. IPP was used as the 'container' code into
19 which I added the functionality from ipython and LazyPython.
20 * LazyPython by Nathan Gray. Simple but very powerful. The quick
21 syntax (auto parens, auto quotes) and verbose/colored tracebacks
22 were all taken from here.
23
24 Here is how Fernando describes it:
25
26 When I found out about IPP and LazyPython I tried to join all three
27 into a unified system. I thought this could provide a very nice
28 working environment, both for regular programming and scientific
29 computing: shell-like features, IDL/Matlab numerics, Mathematica-type
30 prompt history and great object introspection and help facilities. I
31 think it worked reasonably well, though it was a lot more work than I
32 had initially planned.
33
34 Today and how we got here
35 =========================
36
37 This needs to be filled in.
38
@@ -1,56 +1,82 @@
1 1 .. _license:
2 2
3 =============================
4 License and Copyright
5 =============================
3 =====================
4 License and Copyright
5 =====================
6 6
7 This files needs to be updated to reflect what the new COPYING.txt files says about our license and copyright!
7 License
8 =======
8 9
9 IPython is released under the terms of the BSD license, whose general
10 form can be found at: http://www.opensource.org/licenses/bsd-license.php. The full text of the
11 IPython license is reproduced below::
10 IPython is licensed under the terms of the new or revised BSD license, as follows::
12 11
13 IPython is released under a BSD-type license.
12 Copyright (c) 2008, IPython Development Team
14 13
15 Copyright (c) 2001, 2002, 2003, 2004 Fernando Perez
16 <fperez@colorado.edu>.
14 All rights reserved.
17 15
18 Copyright (c) 2001 Janko Hauser <jhauser@zscout.de> and
19 Nathaniel Gray <n8gray@caltech.edu>.
16 Redistribution and use in source and binary forms, with or without modification,
17 are permitted provided that the following conditions are met:
20 18
21 All rights reserved.
19 Redistributions of source code must retain the above copyright notice, this list of
20 conditions and the following disclaimer.
21
22 Redistributions in binary form must reproduce the above copyright notice, this list
23 of conditions and the following disclaimer in the documentation and/or other
24 materials provided with the distribution.
25
26 Neither the name of the IPython Development Team nor the names of its contributors
27 may be used to endorse or promote products derived from this software without
28 specific prior written permission.
29
30 THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY
31 EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
32 WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
33 IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT,
34 INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
35 NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
36 PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
37 WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
38 ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
39 POSSIBILITY OF SUCH DAMAGE.
40
41 About the IPython Development Team
42 ==================================
43
44 Fernando Perez began IPython in 2001 based on code from Janko Hauser <jhauser@zscout.de>
45 and Nathaniel Gray <n8gray@caltech.edu>. Fernando is still the project lead.
46
47 The IPython Development Team is the set of all contributors to the IPython project.
48 This includes all of the IPython subprojects. Here is a list of the currently active contributors:
49
50 * Matthieu Brucher
51 * Ondrej Certik
52 * Laurent Dufrechou
53 * Robert Kern
54 * Brian E. Granger
55 * Fernando Perez (project leader)
56 * Benjamin Ragan-Kelley
57 * Ville M. Vainio
58 * Gael Varoquaux
59 * Stefan van der Walt
60 * Tech-X Corporation
61 * Barry Wark
62
63 If your name is missing, please add it.
64
65 Our Copyright Policy
66 ====================
67
68 IPython uses a shared copyright model. Each contributor maintains copyright over
69 their contributions to IPython. But, it is important to note that these
70 contributions are typically only changes to the repositories. Thus, the IPython
71 source code, in its entirety, is not the copyright of any single person or
72 institution. Instead, it is the collective copyright of the entire IPython
73 Development Team. If individual contributors want to maintain a record of what
74 changes/contributions they have specific copyright on, they should indicate their
75 copyright in the commit message of the change, when they commit the change to
76 one of the IPython repositories.
22 77
23 Redistribution and use in source and binary forms, with or without
24 modification, are permitted provided that the following conditions
25 are met:
26
27 a. Redistributions of source code must retain the above copyright
28 notice, this list of conditions and the following disclaimer.
29
30 b. Redistributions in binary form must reproduce the above copyright
31 notice, this list of conditions and the following disclaimer in the
32 documentation and/or other materials provided with the distribution.
33
34 c. Neither the name of the copyright holders nor the names of any
35 contributors to this software may be used to endorse or promote
36 products derived from this software without specific prior written
37 permission.
38
39 THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
40 "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
41 LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
42 FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
43 REGENTS OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
44 INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
45 BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
46 LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
47 CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
48 LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN
49 ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
50 POSSIBILITY OF SUCH DAMAGE.
51
52 Individual authors are the holders of the copyright for their code and
53 are listed in each file.
78 Miscellaneous
79 =============
54 80
55 81 Some files (DPyGetOpt.py, for example) may be licensed under different
56 82 conditions. Ultimately each file indicates clearly the conditions under
@@ -17,133 +17,161 @@ The goal of IPython is to create a comprehensive environment for
17 17 interactive and exploratory computing. To support, this goal, IPython
18 18 has two main components:
19 19
20 * An enhanced interactive Python shell.
21 * An architecture for interactive parallel computing.
20 * An enhanced interactive Python shell.
21 * An architecture for interactive parallel computing.
22 22
23 23 All of IPython is open source (released under the revised BSD license).
24 24
25 25 Enhanced interactive Python shell
26 26 =================================
27 27
28 IPython's interactive shell (`ipython`), has the following goals:
29
30 1. Provide an interactive shell superior to Python's default. IPython
31 has many features for object introspection, system shell access,
32 and its own special command system for adding functionality when
33 working interactively. It tries to be a very efficient environment
34 both for Python code development and for exploration of problems
35 using Python objects (in situations like data analysis).
36 2. Serve as an embeddable, ready to use interpreter for your own
37 programs. IPython can be started with a single call from inside
38 another program, providing access to the current namespace. This
39 can be very useful both for debugging purposes and for situations
40 where a blend of batch-processing and interactive exploration are
41 needed.
42 3. Offer a flexible framework which can be used as the base
43 environment for other systems with Python as the underlying
44 language. Specifically scientific environments like Mathematica,
45 IDL and Matlab inspired its design, but similar ideas can be
46 useful in many fields.
47 4. Allow interactive testing of threaded graphical toolkits. IPython
48 has support for interactive, non-blocking control of GTK, Qt and
49 WX applications via special threading flags. The normal Python
50 shell can only do this for Tkinter applications.
28 IPython's interactive shell (:command:`ipython`) has the following goals,
29 amongst others:
30
31 1. Provide an interactive shell superior to Python's default. IPython
32 has many features for object introspection, system shell access,
33 and its own special command system for adding functionality when
34 working interactively. It tries to be a very efficient environment
35 both for Python code development and for exploration of problems
36 using Python objects (in situations like data analysis).
37
38 2. Serve as an embeddable, ready to use interpreter for your own
39 programs. IPython can be started with a single call from inside
40 another program, providing access to the current namespace. This
41 can be very useful both for debugging purposes and for situations
42 where a blend of batch-processing and interactive exploration are
43 needed. New in the 0.9 version of IPython is a reusable wxPython
44 based IPython widget.
45
46 3. Offer a flexible framework which can be used as the base
47 environment for other systems with Python as the underlying
48 language. Specifically scientific environments like Mathematica,
49 IDL and Matlab inspired its design, but similar ideas can be
50 useful in many fields.
51
52 4. Allow interactive testing of threaded graphical toolkits. IPython
53 has support for interactive, non-blocking control of GTK, Qt and
54 WX applications via special threading flags. The normal Python
55 shell can only do this for Tkinter applications.
51 56
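
As an illustration of goal 2 (embedding), a minimal sketch assuming the
0.x-series :class:`IPython.Shell.IPShellEmbed` entry point (the surrounding
function is invented for the example) is::

    from IPython.Shell import IPShellEmbed

    # Create the embedded shell once, near program start-up.
    ipshell = IPShellEmbed(banner='Dropping into IPython',
                           exit_msg='Leaving IPython, back to the program.')

    def inspect_results(results):
        # Opens an interactive IPython shell with access to this local
        # namespace; execution resumes when the user exits the shell.
        ipshell()
        return results
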
52 57 Main features of the interactive shell
53 58 --------------------------------------
54 59
55 * Dynamic object introspection. One can access docstrings, function
56 definition prototypes, source code, source files and other details
57 of any object accessible to the interpreter with a single
58 keystroke (:samp:`?`, and using :samp:`??` provides additional detail).
59 * Searching through modules and namespaces with :samp:`*` wildcards, both
60 when using the :samp:`?` system and via the :samp:`%psearch` command.
61 * Completion in the local namespace, by typing :kbd:`TAB` at the prompt.
62 This works for keywords, modules, methods, variables and files in the
63 current directory. This is supported via the readline library, and
64 full access to configuring readline's behavior is provided.
65 Custom completers can be implemented easily for different purposes
66 (system commands, magic arguments etc.)
67 * Numbered input/output prompts with command history (persistent
68 across sessions and tied to each profile), full searching in this
69 history and caching of all input and output.
70 * User-extensible 'magic' commands. A set of commands prefixed with
71 :samp:`%` is available for controlling IPython itself and provides
72 directory control, namespace information and many aliases to
73 common system shell commands.
74 * Alias facility for defining your own system aliases.
75 * Complete system shell access. Lines starting with :samp:`!` are passed
76 directly to the system shell, and using :samp:`!!` or :samp:`var = !cmd`
77 captures shell output into python variables for further use.
78 * Background execution of Python commands in a separate thread.
79 IPython has an internal job manager called jobs, and a
80 conveninence backgrounding magic function called :samp:`%bg`.
81 * The ability to expand python variables when calling the system
82 shell. In a shell command, any python variable prefixed with :samp:`$` is
83 expanded. A double :samp:`$$` allows passing a literal :samp:`$` to the shell (for
84 access to shell and environment variables like :envvar:`PATH`).
85 * Filesystem navigation, via a magic :samp:`%cd` command, along with a
86 persistent bookmark system (using :samp:`%bookmark`) for fast access to
87 frequently visited directories.
88 * A lightweight persistence framework via the :samp:`%store` command, which
89 allows you to save arbitrary Python variables. These get restored
90 automatically when your session restarts.
91 * Automatic indentation (optional) of code as you type (through the
92 readline library).
93 * Macro system for quickly re-executing multiple lines of previous
94 input with a single name. Macros can be stored persistently via
95 :samp:`%store` and edited via :samp:`%edit`.
96 * Session logging (you can then later use these logs as code in your
97 programs). Logs can optionally timestamp all input, and also store
98 session output (marked as comments, so the log remains valid
99 Python source code).
100 * Session restoring: logs can be replayed to restore a previous
101 session to the state where you left it.
102 * Verbose and colored exception traceback printouts. Easier to parse
103 visually, and in verbose mode they produce a lot of useful
104 debugging information (basically a terminal version of the cgitb
105 module).
106 * Auto-parentheses: callable objects can be executed without
107 parentheses: :samp:`sin 3` is automatically converted to :samp:`sin(3)`.
108 * Auto-quoting: using :samp:`,`, or :samp:`;` as the first character forces
109 auto-quoting of the rest of the line: :samp:`,my_function a b` becomes
110 automatically :samp:`my_function("a","b")`, while :samp:`;my_function a b`
111 becomes :samp:`my_function("a b")`.
112 * Extensible input syntax. You can define filters that pre-process
113 user input to simplify input in special situations. This allows
114 for example pasting multi-line code fragments which start with
115 :samp:`>>>` or :samp:`...` such as those from other python sessions or the
116 standard Python documentation.
117 * Flexible configuration system. It uses a configuration file which
118 allows permanent setting of all command-line options, module
119 loading, code and file execution. The system allows recursive file
120 inclusion, so you can have a base file with defaults and layers
121 which load other customizations for particular projects.
122 * Embeddable. You can call IPython as a python shell inside your own
123 python programs. This can be used both for debugging code or for
124 providing interactive abilities to your programs with knowledge
125 about the local namespaces (very useful in debugging and data
126 analysis situations).
127 * Easy debugger access. You can set IPython to call up an enhanced
128 version of the Python debugger (pdb) every time there is an
129 uncaught exception. This drops you inside the code which triggered
130 the exception with all the data live and it is possible to
131 navigate the stack to rapidly isolate the source of a bug. The
132 :samp:`%run` magic command (with the :samp:`-d` option) can run any script under
133 pdb's control, automatically setting initial breakpoints for you.
134 This version of pdb has IPython-specific improvements, including
135 tab-completion and traceback coloring support. For even easier
136 debugger access, try :samp:`%debug` after seeing an exception. winpdb is
137 also supported, see ipy_winpdb extension.
138 * Profiler support. You can run single statements (similar to
139 :samp:`profile.run()`) or complete programs under the profiler's control.
140 While this is possible with standard cProfile or profile modules,
141 IPython wraps this functionality with magic commands (see :samp:`%prun`
142 and :samp:`%run -p`) convenient for rapid interactive work.
143 * Doctest support. The special :samp:`%doctest_mode` command toggles a mode
144 that allows you to paste existing doctests (with leading :samp:`>>>`
145 prompts and whitespace) and uses doctest-compatible prompts and
146 output, so you can use IPython sessions as doctest code.
60 * Dynamic object introspection. One can access docstrings, function
61 definition prototypes, source code, source files and other details
62 of any object accessible to the interpreter with a single
63 keystroke (:samp:`?`, and using :samp:`??` provides additional detail).
64
65 * Searching through modules and namespaces with :samp:`*` wildcards, both
66 when using the :samp:`?` system and via the :samp:`%psearch` command.
67
68 * Completion in the local namespace, by typing :kbd:`TAB` at the prompt.
69 This works for keywords, modules, methods, variables and files in the
70 current directory. This is supported via the readline library, and
71 full access to configuring readline's behavior is provided.
72 Custom completers can be implemented easily for different purposes
73 (system commands, magic arguments etc.)
74
75 * Numbered input/output prompts with command history (persistent
76 across sessions and tied to each profile), full searching in this
77 history and caching of all input and output.
78
79 * User-extensible 'magic' commands. A set of commands prefixed with
80 :samp:`%` is available for controlling IPython itself and provides
81 directory control, namespace information and many aliases to
82 common system shell commands.
83
84 * Alias facility for defining your own system aliases.
85
86 * Complete system shell access. Lines starting with :samp:`!` are passed
87 directly to the system shell, and using :samp:`!!` or :samp:`var = !cmd`
88 captures shell output into python variables for further use.
89
90 * Background execution of Python commands in a separate thread.
91 IPython has an internal job manager called jobs, and a
92 convenience backgrounding magic function called :samp:`%bg`.
93
94 * The ability to expand python variables when calling the system
95 shell. In a shell command, any python variable prefixed with :samp:`$` is
96 expanded. A double :samp:`$$` allows passing a literal :samp:`$` to the shell (for
97 access to shell and environment variables like :envvar:`PATH`).
98
99 * Filesystem navigation, via a magic :samp:`%cd` command, along with a
100 persistent bookmark system (using :samp:`%bookmark`) for fast access to
101 frequently visited directories.
102
103 * A lightweight persistence framework via the :samp:`%store` command, which
104 allows you to save arbitrary Python variables. These get restored
105 automatically when your session restarts.
106
107 * Automatic indentation (optional) of code as you type (through the
108 readline library).
109
110 * Macro system for quickly re-executing multiple lines of previous
111 input with a single name. Macros can be stored persistently via
112 :samp:`%store` and edited via :samp:`%edit`.
113
114 * Session logging (you can then later use these logs as code in your
115 programs). Logs can optionally timestamp all input, and also store
116 session output (marked as comments, so the log remains valid
117 Python source code).
118
119 * Session restoring: logs can be replayed to restore a previous
120 session to the state where you left it.
121
122 * Verbose and colored exception traceback printouts. Easier to parse
123 visually, and in verbose mode they produce a lot of useful
124 debugging information (basically a terminal version of the cgitb
125 module).
126
127 * Auto-parentheses: callable objects can be executed without
128 parentheses: :samp:`sin 3` is automatically converted to :samp:`sin(3)`.
129
130 * Auto-quoting: using :samp:`,`, or :samp:`;` as the first character forces
131 auto-quoting of the rest of the line: :samp:`,my_function a b` becomes
132 automatically :samp:`my_function("a","b")`, while :samp:`;my_function a b`
133 becomes :samp:`my_function("a b")`.
134
135 * Extensible input syntax. You can define filters that pre-process
136 user input to simplify input in special situations. This allows
137 for example pasting multi-line code fragments which start with
138 :samp:`>>>` or :samp:`...` such as those from other python sessions or the
139 standard Python documentation.
140
141 * Flexible configuration system. It uses a configuration file which
142 allows permanent setting of all command-line options, module
143 loading, code and file execution. The system allows recursive file
144 inclusion, so you can have a base file with defaults and layers
145 which load other customizations for particular projects.
146
147 * Embeddable. You can call IPython as a python shell inside your own
148 python programs. This can be used both for debugging code or for
149 providing interactive abilities to your programs with knowledge
150 about the local namespaces (very useful in debugging and data
151 analysis situations).
152
153 * Easy debugger access. You can set IPython to call up an enhanced
154 version of the Python debugger (pdb) every time there is an
155 uncaught exception. This drops you inside the code which triggered
156 the exception with all the data live and it is possible to
157 navigate the stack to rapidly isolate the source of a bug. The
158 :samp:`%run` magic command (with the :samp:`-d` option) can run any script under
159 pdb's control, automatically setting initial breakpoints for you.
160 This version of pdb has IPython-specific improvements, including
161 tab-completion and traceback coloring support. For even easier
162 debugger access, try :samp:`%debug` after seeing an exception. winpdb is
163 also supported; see the ipy_winpdb extension.
164
165 * Profiler support. You can run single statements (similar to
166 :samp:`profile.run()`) or complete programs under the profiler's control.
167 While this is possible with standard cProfile or profile modules,
168 IPython wraps this functionality with magic commands (see :samp:`%prun`
169 and :samp:`%run -p`), making it convenient for rapid interactive work.
170
171 * Doctest support. The special :samp:`%doctest_mode` command toggles a mode
172 that allows you to paste existing doctests (with leading :samp:`>>>`
173 prompts and whitespace) and uses doctest-compatible prompts and
174 output, so you can use IPython sessions as doctest code.
147 175
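As a small, hypothetical illustration of the shell access and variable
expansion items above (the directory name ``docs`` and its contents are
assumed, not part of IPython)::

    In [1]: d = 'docs'

    In [2]: files = !ls $d                # $d is expanded to the python variable d

    In [3]: !echo "Your path is $$PATH"   # a double $$ passes a literal $ to the shell
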
148 176 Interactive parallel computing
149 177 ==============================
@@ -153,6 +181,37 b' architecture within IPython that allows such hardware to be used quickly and eas'
153 181 from Python. Moreover, this architecture is designed to support interactive and
154 182 collaborative parallel computing.
155 183
184 The main features of this system are:
185
186 * Quickly parallelize Python code from an interactive Python/IPython session.
187
188 * A flexible and dynamic process model that can be deployed on anything from
189 multicore workstations to supercomputers.
190
191 * An architecture that supports many different styles of parallelism, from
192 message passing to task farming, all of which can be handled
193 interactively.
194
195 * Both blocking and fully asynchronous interfaces.
196
197 * High level APIs that enable many things to be parallelized in a few lines
198 of code.
199
200 * Write parallel code that will run unchanged on everything from multicore
201 workstations to supercomputers.
202
203 * Full integration with Message Passing libraries (MPI).
204
205 * Capabilities-based security model with full encryption of network connections.
206
207 * Share live parallel jobs with other users securely. We call this collaborative
208 parallel computing.
209
210 * Dynamically load balanced task farming system.
211
212 * Robust error handling. Python exceptions raised in parallel execution are
213 gathered and presented to the top-level code.
214
156 215 For more information, see our :ref:`overview <parallel_index>` of using IPython for
157 216 parallel computing.
158 217
@@ -1,12 +1,9 b''
1 1 .. _parallel_index:
2 2
3 3 ====================================
4 Using IPython for Parallel computing
4 Using IPython for parallel computing
5 5 ====================================
6 6
7 User Documentation
8 ==================
9
10 7 .. toctree::
11 8 :maxdepth: 2
12 9
@@ -9,49 +9,60 b' Using IPython for parallel computing'
9 9 Introduction
10 10 ============
11 11
12 This file gives an overview of IPython. IPython has a sophisticated and
12 This file gives an overview of IPython's sophisticated and
13 13 powerful architecture for parallel and distributed computing. This
14 14 architecture abstracts out parallelism in a very general way, which
15 15 enables IPython to support many different styles of parallelism
16 16 including:
17 17
18 * Single program, multiple data (SPMD) parallelism.
19 * Multiple program, multiple data (MPMD) parallelism.
20 * Message passing using ``MPI``.
21 * Task farming.
22 * Data parallel.
23 * Combinations of these approaches.
24 * Custom user defined approaches.
18 * Single program, multiple data (SPMD) parallelism.
19 * Multiple program, multiple data (MPMD) parallelism.
20 * Message passing using ``MPI``.
21 * Task farming.
22 * Data parallel.
23 * Combinations of these approaches.
24 * Custom user defined approaches.
25 25
26 26 Most importantly, IPython enables all types of parallel applications to
27 27 be developed, executed, debugged and monitored *interactively*. Hence,
28 28 the ``I`` in IPython. The following are some example usage cases for IPython:
29 29
30 * Quickly parallelize algorithms that are embarrassingly parallel
31 using a number of simple approaches. Many simple things can be
32 parallelized interactively in one or two lines of code.
33 * Steer traditional MPI applications on a supercomputer from an
34 IPython session on your laptop.
35 * Analyze and visualize large datasets (that could be remote and/or
36 distributed) interactively using IPython and tools like
37 matplotlib/TVTK.
38 * Develop, test and debug new parallel algorithms
39 (that may use MPI) interactively.
40 * Tie together multiple MPI jobs running on different systems into
41 one giant distributed and parallel system.
42 * Start a parallel job on your cluster and then have a remote
43 collaborator connect to it and pull back data into their
44 local IPython session for plotting and analysis.
45 * Run a set of tasks on a set of CPUs using dynamic load balancing.
30 * Quickly parallelize algorithms that are embarrassingly parallel
31 using a number of simple approaches. Many simple things can be
32 parallelized interactively in one or two lines of code.
33
34 * Steer traditional MPI applications on a supercomputer from an
35 IPython session on your laptop.
36
37 * Analyze and visualize large datasets (that could be remote and/or
38 distributed) interactively using IPython and tools like
39 matplotlib/TVTK.
40
41 * Develop, test and debug new parallel algorithms
42 (that may use MPI) interactively.
43
44 * Tie together multiple MPI jobs running on different systems into
45 one giant distributed and parallel system.
46
47 * Start a parallel job on your cluster and then have a remote
48 collaborator connect to it and pull back data into their
49 local IPython session for plotting and analysis.
50
51 * Run a set of tasks on a set of CPUs using dynamic load balancing.
46 52
47 53 Architecture overview
48 54 =====================
49 55
50 56 The IPython architecture consists of three components:
51 57
52 * The IPython engine.
53 * The IPython controller.
54 * Various controller Clients.
58 * The IPython engine.
59 * The IPython controller.
60 * Various controller clients.
61
62 These components live in the :mod:`IPython.kernel` package and are
63 installed with IPython. They do, however, have additional dependencies
64 that must be installed. For more information, see our
65 :ref:`installation documentation <install_index>`.
55 66
56 67 IPython engine
57 68 ---------------
@@ -75,16 +86,21 b' IPython engines can connect. For each connected engine, the controller'
75 86 manages a queue. All actions that can be performed on the engine go
76 87 through this queue. While the engines themselves block when user code is
77 88 run, the controller hides that from the user to provide a fully
78 asynchronous interface to a set of engines. Because the controller
79 listens on a network port for engines to connect to it, it must be
80 started before any engines are started.
89 asynchronous interface to a set of engines.
90
91 .. note::
92
93 Because the controller listens on a network port for engines to
94 connect to it, it must be started *before* any engines are started.
81 95
82 96 The controller also provides a single point of contact for users who wish
83 97 to utilize the engines connected to the controller. There are different
84 98 ways of working with a controller. In IPython these ways correspond to different interfaces that the controller is adapted to. Currently we have two default interfaces to the controller:
85 99
86 * The MultiEngine interface.
87 * The Task interface.
100 * The MultiEngine interface, which provides the simplest possible way of working
101 with engines interactively.
102 * The Task interface, which provides presents the engines as a load balanced
103 task farming system.
88 104
89 105 Advanced users can easily add new custom interfaces to enable other
90 106 styles of parallelism.
@@ -100,18 +116,37 b' Controller clients'
100 116
101 117 For each controller interface, there is a corresponding client. These
102 118 clients allow users to interact with a set of engines through the
103 interface.
119 interface. Here are the two default clients:
120
121 * The :class:`MultiEngineClient` class.
122 * The :class:`TaskClient` class.
104 123
105 124 Security
106 125 --------
107 126
108 By default (as long as `pyOpenSSL` is installed) all network connections between the controller and engines and the controller and clients are secure. What does this mean? First of all, all of the connections will be encrypted using SSL. Second, the connections are authenticated. We handle authentication in a `capabilities`__ based security model. In this model, a "capability (known in some systems as a key) is a communicable, unforgeable token of authority". Put simply, a capability is like a key to your house. If you have the key to your house, you can get in, if not you can't.
127 By default (as long as `pyOpenSSL` is installed) all network connections between the controller and engines and the controller and clients are secure. What does this mean? First of all, all of the connections will be encrypted using SSL. Second, the connections are authenticated. We handle authentication in a `capabilities`__ based security model. In this model, a "capability (known in some systems as a key) is a communicable, unforgeable token of authority". Put simply, a capability is like a key to your house. If you have the key to your house, you can get in. If not, you can't.
109 128
110 129 .. __: http://en.wikipedia.org/wiki/Capability-based_security
111 130
112 In our architecture, the controller is the only process that listens on network ports, and is thus responsible to creating these keys. In IPython, these keys are known as Foolscap URLs, or FURLs, because of the underlying network protocol we are using. As a user, you don't need to know anything about the details of these FURLs, other than that when the controller starts, it saves a set of FURLs to files named something.furl. The default location of these files is your ~./ipython directory.
131 In our architecture, the controller is the only process that listens on network ports, and is thus responsible for creating these keys. In IPython, these keys are known as Foolscap URLs, or FURLs, because of the underlying network protocol we are using. As a user, you don't need to know anything about the details of these FURLs, other than that when the controller starts, it saves a set of FURLs to files named :file:`something.furl`. The default location of these files is the :file:`~/.ipython/security` directory.
113 132
114 To connect and authenticate to the controller an engine or client simply needs to present an appropriate furl (that was originally created by the controller) to the controller. Thus, the .furl files need to be copied to a location where the clients and engines can find them. Typically, this is the ~./ipython directory on the host where the client/engine is running (which could be a different host than the controller). Once the .furl files are copied over, everything should work fine.
133 To connect and authenticate to the controller, an engine or client simply needs to present an appropriate FURL (originally created by that controller). Thus, the ``.furl`` files need to be copied to a location where the clients and engines can find them. Typically, this is the :file:`~/.ipython/security` directory on the host where the client/engine is running (which could be a different host than the controller). Once the ``.furl`` files are copied over, everything should work fine.
134
135 Currently, there are three .furl files that the controller creates:
136
137 ipcontroller-engine.furl
138 This ``.furl`` file is the key that gives an engine the ability to connect
139 to a controller.
140
141 ipcontroller-tc.furl
142 This ``.furl`` file is the key that a :class:`TaskClient` must use to
143 connect to the task interface of a controller.
144
145 ipcontroller-mec.furl
146 This ``.furl`` file is the key that a :class:`MultiEngineClient` must use to
147 connect to the multiengine interface of a controller.
148
149 More details of how these ``.furl`` files are used are given below.
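
For example, after a controller has started, the security directory might
contain something like the following (a sketch; the default location is
:file:`~/.ipython/security`)::

    $ ls ~/.ipython/security
    ipcontroller-engine.furl  ipcontroller-mec.furl  ipcontroller-tc.furl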
115 150
116 151 Getting Started
117 152 ===============
@@ -127,28 +162,40 b' Starting the controller and engine on your local machine'
127 162
128 163 This is the simplest configuration that can be used and is useful for
129 164 testing the system and on machines that have multiple cores and/or
130 multple CPUs. The easiest way of doing this is using the ``ipcluster``
165 multiple CPUs. The easiest way of getting started is to use the :command:`ipcluster`
131 166 command::
132 167
133 168 $ ipcluster -n 4
134
169
135 170 This will start an IPython controller and then 4 engines that connect to
136 171 the controller. Lastly, the script will print out the Python commands
137 172 that you can use to connect to the controller. It is that easy.
138 173
139 Underneath the hood, the ``ipcluster`` script uses two other top-level
174 .. warning::
175
176 The :command:`ipcluster` command does not currently work on Windows. We are
177 working on it, though.
178
179 Underneath the hood, the controller creates ``.furl`` files in the
180 :file:`~/.ipython/security` directory. Because the engines are on the
181 same host, they automatically find the needed :file:`ipcontroller-engine.furl`
182 there and use it to connect to the controller.
183
184 The :command:`ipcluster` script uses two other top-level
140 185 scripts that you can also use yourself. These scripts are
141 ``ipcontroller``, which starts the controller and ``ipengine`` which
186 :command:`ipcontroller`, which starts the controller, and :command:`ipengine`, which
142 187 starts one engine. To use these scripts to start things on your local
143 188 machine, do the following.
144 189
145 190 First start the controller::
146 191
147 $ ipcontroller &
192 $ ipcontroller
148 193
149 194 Next, start as many engine instances as you want by (repeatedly) running the command::
150 195
151 $ ipengine &
196 $ ipengine
197
198 The engines should start and automatically connect to the controller using the ``.furl`` files in :file:`~/.ipython/security`. You are now ready to use the controller and engines from IPython.
152 199
153 200 .. warning::
154 201
@@ -156,47 +203,71 b' Next, start however many instances of the engine you want using (repeatedly) the'
156 203 start the controller before the engines, since the engines connect
157 204 to the controller as they get started.
158 205
159 On some platforms you may need to give these commands in the form
160 ``(ipcontroller &)`` and ``(ipengine &)`` for them to work properly. The
161 engines should start and automatically connect to the controller on the
162 default ports, which are chosen for this type of setup. You are now ready
163 to use the controller and engines from IPython.
206 .. note::
164 207
165 Starting the controller and engines on different machines
166 ---------------------------------------------------------
208 On some platforms (OS X), to put the controller and engine into the background
209 you may need to give these commands in the form ``(ipcontroller &)``
210 and ``(ipengine &)`` (with the parentheses) for them to work properly.
167 211
168 This section needs to be updated to reflect the new Foolscap capabilities based
169 model.
170 212
171 Using ``ipcluster`` with ``ssh``
172 --------------------------------
213 Starting the controller and engines on different hosts
214 ------------------------------------------------------
173 215
174 The ``ipcluster`` command can also start a controller and engines using
175 ``ssh``. We need more documentation on this, but for now here is any
176 example startup script::
216 When the controller and engines are running on different hosts, things are
217 slightly more complicated, but the underlying ideas are the same:
177 218
178 controller = dict(host='myhost',
179 engine_port=None, # default is 10105
180 control_port=None,
181 )
219 1. Start the controller on a host using :command:`ipcontroller`.
220 2. Copy :file:`ipcontroller-engine.furl` from :file:`~/.ipython/security` on the controller's host to the host where the engines will run (see the example below).
221 3. Use :command:`ipengine` on the engines' hosts to start the engines.
182 222
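For step 2, any file copying mechanism will do. Here is a sketch using
:command:`scp`, run on the controller's host, with ``enginehost`` standing in
for the name of the engines' host::

    $ scp ~/.ipython/security/ipcontroller-engine.furl \
          enginehost:.ipython/security/
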
183 # keys are hostnames, values are the number of engine on that host
184 engines = dict(node1=2,
185 node2=2,
186 node3=2,
187 node3=2,
188 )
223 The only thing you have to be careful of is to tell :command:`ipengine` where the :file:`ipcontroller-engine.furl` file is located. There are two ways you can do this:
224
225 * Put :file:`ipcontroller-engine.furl` in the :file:`~/.ipython/security` directory
226 on the engine's host, where it will be found automatically.
227 * Call :command:`ipengine` with the ``--furl-file=full_path_to_the_file`` flag.
228
229 The ``--furl-file`` flag works like this::
230
231 $ ipengine --furl-file=/path/to/my/ipcontroller-engine.furl
232
233 .. note::
234
235 If the controller's and engines' hosts all have a shared file system
236 (:file:`~/.ipython/security` is the same on all of them), then things
237 will just work!
238
239 Make .furl files persistent
240 ---------------------------
241
242 At first glance it may seem that managing the ``.furl`` files is a bit annoying. Going back to the house and key analogy, copying the ``.furl`` files around each time you start the controller is like having to make a new key every time you want to unlock the door and enter your house. As with your house, you want to be able to create the key (or ``.furl`` file) once and then simply use it at any point in the future.
243
244 This is possible. The only thing you have to do is decide what ports the controller will listen on for the engines and clients. This is done as follows::
245
246 $ ipcontroller --client-port=10101 --engine-port=10102
247
248 Then, just copy the ``.furl`` files over the first time and you are set. You can start and stop the controller and engines as many times as you want in the future; just make sure to tell the controller to use the *same* ports.
249
250 .. note::
251
252 You may ask: what ports does the controller listen on if you
253 don't tell it to use specific ones? The default is to use high random port
254 numbers. We do this for two reasons: i) to increase security through obscurity
255 and ii) to allow multiple controllers on a given host to start and automatically
256 use different ports.
189 257
190 258 Starting engines using ``mpirun``
191 259 ---------------------------------
192 260
193 261 The IPython engines can be started using ``mpirun``/``mpiexec``, even if
194 the engines don't call MPI_Init() or use the MPI API in any way. This is
262 the engines don't call ``MPI_Init()`` or use the MPI API in any way. This is
195 263 supported on modern MPI implementations like `Open MPI`_. This provides
196 264 a really nice way of starting a bunch of engines. On a system with MPI
197 265 installed you can do::
198 266
199 mpirun -n 4 ipengine --controller-port=10000 --controller-ip=host0
267 mpirun -n 4 ipengine
268
269 to start 4 engines on a cluster. This works even if you don't have any
270 Python-MPI bindings installed.
200 271
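If the engines cannot find :file:`ipcontroller-engine.furl` automatically (for
example, when the cluster nodes do not share a home directory), the
``--furl-file`` flag described earlier can be combined with ``mpirun``::

    mpirun -n 4 ipengine --furl-file=/path/to/ipcontroller-engine.furl
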
201 272 .. _Open MPI: http://www.open-mpi.org/
202 273
@@ -214,12 +285,12 b' Next Steps'
214 285 ==========
215 286
216 287 Once you have started the IPython controller and one or more engines, you
217 are ready to use the engines to do somnething useful. To make sure
288 are ready to use the engines to do something useful. To make sure
218 289 everything is working correctly, try the following commands::
219 290
220 291 In [1]: from IPython.kernel import client
221 292
222 In [2]: mec = client.MultiEngineClient() # This looks for .furl files in ~./ipython
293 In [2]: mec = client.MultiEngineClient()
223 294
224 295 In [4]: mec.get_ids()
225 296 Out[4]: [0, 1, 2, 3]
@@ -239,4 +310,18 b' everything is working correctly, try the following commands::'
239 310 [3] In [1]: print "Hello World"
240 311 [3] Out[1]: Hello World
241 312
242 If this works, you are ready to learn more about the :ref:`MultiEngine <parallelmultiengine>` and :ref:`Task <paralleltask>` interfaces to the controller.
313 Remember, a client also needs to present a ``.furl`` file to the controller. How does this happen? When a multiengine client is created with no arguments, the client tries to find the corresponding ``.furl`` file in the local :file:`~/.ipython/security` directory. If it finds it, you are set. If you have put the ``.furl`` file in a different location or it has a different name, create the client like this::
314
315 mec = client.MultiEngineClient('/path/to/my/ipcontroller-mec.furl')
316
317 The same holds true when creating a task client::
318
319 tc = client.TaskClient('/path/to/my/ipcontroller-tc.furl')
320
321 You are now ready to learn more about the :ref:`MultiEngine <parallelmultiengine>` and :ref:`Task <paralleltask>` interfaces to the controller.
322
323 .. note::
324
325 Don't forget that the engine, multiengine client and task client all have
326 *different* ``.furl`` files. You must copy *each* of them to an appropriate
327 location so that the engines and clients can use them to connect to the controller.