@@ -1,240 +1,240 @@
|
1 | 1 | .. _paralleltask: |
|
2 | 2 | |
|
3 | 3 | ========================== |
|
4 | 4 | The IPython task interface |
|
5 | 5 | ========================== |
|
6 | 6 | |
|
7 | 7 | .. contents:: |
|
8 | 8 | |
|
9 | 9 | The ``Task`` interface to the controller presents the engines as a fault tolerant, dynamic, load-balanced system of workers. Unlike the ``MultiEngine`` interface, the ``Task`` interface gives the user no direct access to individual engines. In some ways this interface is simpler, but in other ways it is more powerful. Best of all, the user can use both of these interfaces at the same time to take advantage of both of their strengths. The ``Task`` interface is ideal when the user's work can be broken into segments that do not depend on previous execution. But it also has more power and flexibility, allowing the user to guide the distribution of jobs without having to assign Tasks to engines explicitly. |
|
10 | 10 | |
|
11 | 11 | Starting the IPython controller and engines |
|
12 | 12 | =========================================== |
|
13 | 13 | |
|
14 | 14 | To follow along with this tutorial, the user will need to start the IPython |
|
15 | 15 | controller and four IPython engines. The simplest way of doing this is to |
|
16 | 16 | use the ``ipcluster`` command:: |
|
17 | 17 | |
|
18 | 18 | $ ipcluster -n 4 |
|
19 | 19 | |
|
20 | 20 | For more detailed information about starting the controller and engines, see our :ref:`introduction <parallel_overview>` to using IPython for parallel computing. |
|
21 | 21 | |
|
22 | 22 | The magic here is that this single controller and set of engines is running both the MultiEngine and ``Task`` interfaces simultaneously. |
|
23 | 23 | |
|
24 | 24 | QuickStart Task Farming |
|
25 | 25 | ======================= |
|
26 | 26 | |
|
27 | 27 | First, a quick example of how to start running the most basic Tasks. |
|
28 | 28 | The first step is to import the IPython ``client`` module and then create a ``TaskClient`` instance:: |
|
29 | 29 | |
|
30 | 30 | In [1]: from IPython.kernel import client |
|
31 | 31 | |
|
32 | 32 | In [2]: tc = client.TaskClient() |
|
33 | 33 | |
|
34 | 34 | Then the user wraps the commands they want to run in Tasks:: |
|
35 | 35 | |
|
36 | 36 | In [3]: tasklist = [] |
|
37 | 37 | In [4]: for n in range(1000): |
|
38 | 38 | ... tasklist.append(client.Task("a = %i"%n, pull="a")) |
|
39 | 39 | |
|
40 | 40 | The first argument of the ``Task`` constructor is a string, the command to be executed. The most important optional keyword argument is ``pull``, which can be a string or list of strings, and it specifies the variable names to be saved as results of the ``Task``. |
|
41 | 41 | |
|
42 | 42 | Next, the user needs to submit the Tasks to the ``TaskController`` with the ``TaskClient``:: |
|
43 | 43 | |
|
44 | 44 | In [5]: taskids = [ tc.run(t) for t in tasklist ] |
|
45 | 45 | |
|
46 | 46 | This will give the user a list of the TaskIDs used by the controller to keep track of the Tasks and their results. At some point the user is going to want to get those results back. The ``barrier`` method allows the user to wait for the Tasks to finish running:: |
|
47 | 47 | |
|
48 | 48 | In [6]: tc.barrier(taskids) |
|
49 | 49 | |
|
50 | 50 | This command will block until all the Tasks in ``taskids`` have finished. Now, the user will probably want to look at the results:: |
|
51 | 51 | |
|
52 | 52 | In [7]: task_results = [ tc.get_task_result(taskid) for taskid in taskids ] |
|
53 | 53 | |
|
54 | 54 | Now the user has a list of ``TaskResult`` objects, which hold the actual result as a dictionary, but also keep track of some useful metadata about each ``Task``:: |
|
55 | 55 | |
|
56 | 56 | In [8]: tr = task_results[73] |
|
57 | 57 | |
|
58 | 58 | In [9]: tr |
|
59 | 59 | Out[9]: TaskResult[ID:73]:{'a':73} |
|
60 | 60 | |
|
61 | 61 | In [10]: tr.engineid |
|
62 | 62 | Out[10]: 1 |
|
63 | 63 | |
|
64 | 64 | In [11]: tr.submitted, tr.completed, tr.duration |
|
65 | 65 | Out[11]: ("2008/03/08 03:41:42", "2008/03/08 03:41:44", 2.12345) |
|
66 | 66 | |
|
67 | 67 | The actual results are stored in a dictionary, ``tr.results``, and in a namespace object, ``tr.ns``, which exposes the result keys as attributes:: |
|
68 | 68 | |
|
69 | 69 | In [12]: tr.results['a'] |
|
70 | 70 | Out[12]: 73 |
|
71 | 71 | |
|
72 | 72 | In [13]: tr.ns.a |
|
73 | 73 | Out[13]: 73 |
|
74 | 74 | |
|
75 | 75 | That should cover the basics of running simple Tasks. There are several more powerful things the user can do with Tasks, covered later. Probably the most useful is using a ``MultiEngineClient`` interface to initialize all the engines with the import dependencies necessary to run the user's Tasks; a brief sketch of that follows below. |
|
76 | 76 | |
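For instance, here is a minimal sketch of that pattern, assuming the ``IPython.kernel`` API described in this document (``MultiEngineClient`` and its ``execute`` method); the variable names and the ``numpy`` import are purely illustrative::

    In [14]: from IPython.kernel import client

    In [15]: mec = client.MultiEngineClient()    # talks to the same controller

    In [16]: mec.execute("import numpy")         # run the import on every engine

    In [17]: tc = client.TaskClient()

    In [18]: tid = tc.run(client.Task("a = numpy.random.rand(10).sum()", pull="a"))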
|
77 | There are many options for running and managing Tasks. The best way to learn further about the ``Task`` interface is to study the examples in ``
|
|
77 | There are many options for running and managing Tasks. The best way to learn further about the ``Task`` interface is to study the examples in ``examples``. If the user does so and learns a lot about this interface, we encourage them to expand this documentation about the ``Task`` system. |
|
78 | 78 | |
|
79 | 79 | Overview of the Task System |
|
80 | 80 | =========================== |
|
81 | 81 | |
|
82 | 82 | The user's view of the ``Task`` system has three basic objects: the ``TaskClient``, the ``Task``, and the ``TaskResult``. The names of these three objects indicate their roles well. |
|
83 | 83 | |
|
84 | 84 | The ``TaskClient`` is the user's ``Task`` farming connection to the IPython cluster. Unlike with the ``MultiEngineClient``, the ``TaskController`` handles all the scheduling and distribution of work, so the ``TaskClient`` has no notion of engines; it just submits Tasks and requests their results. The Tasks are described as ``Task`` objects, and their results are wrapped in ``TaskResult`` objects. Thus, there are very few necessary methods for the user to manage. |
|
85 | 85 | |
|
86 | 86 | Inside the task system is a Scheduler object, which assigns tasks to workers. The default scheduler is a simple FIFO queue. Subclassing the Scheduler should be easy; it is just a matter of implementing your own priority system. |
|
87 | 87 | |
|
88 | 88 | The TaskClient |
|
89 | 89 | ============== |
|
90 | 90 | |
|
91 | 91 | The ``TaskClient`` is the object the user uses to connect to the ``Controller`` that is managing the user's Tasks. It is the analog of the ``MultiEngineClient`` for the standard IPython multiplexing interface. As with all client interfaces, the first step is to import the IPython client module:: |
|
92 | 92 | |
|
93 | 93 | In [1]: from IPython.kernel import client |
|
94 | 94 | |
|
95 | 95 | Just as with the ``MultiEngineClient``, the user creates the ``TaskClient`` with a tuple containing the IP address and port of the ``Controller``. The ``client`` module conveniently provides the default address of the ``Task`` interface of the controller. Creating a default ``TaskClient`` object would be done with this:: |
|
96 | 96 | |
|
97 | 97 | In [2]: tc = client.TaskClient(client.default_task_address) |
|
98 | 98 | |
|
99 | 99 | or, if the user wants to specify a non-default location of the ``Controller``, it can be given explicitly:: |
|
100 | 100 | |
|
101 | 101 | In [3]: tc = client.TaskClient(("192.168.1.1", 10113)) |
|
102 | 102 | |
|
103 | 103 | As discussed earlier, the ``TaskClient`` only has a few basic methods. |
|
104 | 104 | |
|
105 | 105 | * ``tc.run(task)`` |
|
106 | 106 | ``run`` is the method by which the user submits Tasks. It takes exactly one argument, a ``Task`` object. All the advanced control of ``Task`` behavior is handled by properties of the ``Task`` object, rather than the submission command, so they will be discussed later in the `Task`_ section. ``run`` returns an integer, the ``taskID`` by which the ``Task`` and its results can be tracked and retrieved:: |
|
107 | 107 | |
|
108 | 108 | In [4]: taskID = tc.run(task) |
|
109 | 109 | |
|
110 | 110 | * ``tc.get_task_result(taskid, block=False)`` |
|
111 | 111 | ``get_task_result`` is the method by which results are retrieved. It takes a single integer argument, the ``taskID`` of the result the user wishes to retrieve. ``get_task_result`` also takes a keyword argument ``block``. ``block`` specifies whether the user actually wants to wait for the result. If ``block`` is ``False``, as it is by default, ``get_task_result`` will return immediately. If the ``Task`` has completed, it will return the ``TaskResult`` object for that ``Task``. But if the ``Task`` has not completed, it will return ``None``. If the user specifies ``block=True``, then ``get_task_result`` will wait for the ``Task`` to complete, and always return the ``TaskResult`` for the requested ``Task``. |
|
112 | 112 | * ``tc.barrier(taskid(s))`` |
|
113 | 113 | ``barrier`` is a synchronization method. It takes exactly one argument, a taskID or list of taskIDs. ``barrier`` will block until all the specified Tasks have completed. In practice, a barrier is often called between the ``Task`` submission section of the code and the result gathering section:: |
|
114 | 114 | |
|
115 | 115 | In [5]: taskIDs = [ tc.run(t) for t in myTasks ] |
|
116 | 116 | |
|
117 | 117 | In [6]: tc.get_task_result(taskIDs[-1]) is None |
|
118 | 118 | Out[6]: True |
|
119 | 119 | |
|
120 | 120 | In [7]: tc.barrier(taskIDs) |
|
121 | 121 | |
|
122 | 122 | In [8]: results = [ tc.get_task_result(tid) for tid in taskIDs ] |
|
123 | 123 | |
|
124 | 124 | * ``tc.queue_status(verbose=False)`` |
|
125 | 125 | ``queue_status`` is a method for querying the state of the ``TaskController``. ``queue_status`` returns a dict of the form:: |
|
126 | 126 | |
|
127 | 127 | {'scheduled': Tasks that have been submitted but not yet run |
|
128 | 128 | 'pending' : Tasks that are currently running |
|
129 | 129 | 'succeeded': Tasks that have completed successfully |
|
130 | 130 | 'failed' : Tasks that have finished with a failure |
|
131 | 131 | } |
|
132 | 132 | |
|
133 | 133 | If ``verbose`` is not specified (or is ``False``), then the values of the dict are integers, giving the number of Tasks in each state. If ``verbose`` is ``True``, then each element in the dict is a list of the taskIDs in that state:: |
|
134 | 134 | |
|
135 | 135 | In [8]: tc.queue_status() |
|
136 | 136 | Out[8]: {'scheduled': 4, |
|
137 | 137 | 'pending' : 2, |
|
138 | 138 | 'succeeded': 5, |
|
139 | 139 | 'failed' : 1 |
|
140 | 140 | } |
|
141 | 141 | |
|
142 | 142 | In [9]: tc.queue_status(verbose=True) |
|
143 | 143 | Out[9]: {'scheduled': [8,9,10,11], |
|
144 | 144 | 'pending' : [6,7], |
|
145 | 145 | 'succeeded': [0,1,2,4,5], |
|
146 | 146 | 'failed' : [3] |
|
147 | 147 | } |
|
148 | 148 | |
|
149 | 149 | * ``tc.abort(taskid)`` |
|
150 | 150 | ``abort`` allows the user to abort Tasks that have already been submitted. ``abort`` will always return immediately. If the ``Task`` has already completed, ``abort`` will raise an ``IndexError`` indicating that the ``Task`` has already completed. An obvious case for ``abort`` would be where the user submits a long-running ``Task`` with a number of retries (see the `Task`_ section for how to specify retries) in an interactive session, but realizes there has been a typo. The user can then abort the ``Task``, preventing certain failures from cluttering up the queue. It can also be used for parallel search-type problems, where only one ``Task`` will give the solution, so once the user finds the solution, they would want to abort all remaining Tasks to prevent wasted work (a brief sketch follows this list). |
|
151 | 151 | * ``tc.spin()`` |
|
152 | 152 | ``spin`` simply triggers the scheduler in the ``TaskController``. Under most normal circumstances, this will do nothing. The primary known usage case involves the ``Task`` dependency (see `Dependencies`_). The dependency is a function of an Engine's ``properties``, but changing the ``properties`` via the ``MultiEngineClient`` does not trigger a reschedule event. The main example case for this requires the following event sequence: |
|
153 | 153 | * ``engine`` is available, ``Task`` is submitted, but ``engine`` does not have ``Task``'s dependencies. |
|
154 | 154 | * ``engine`` gets necessary dependencies while no new Tasks are submitted or completed. |
|
155 | 155 | * now ``engine`` can run ``Task``, but a ``Task`` event is required for the ``TaskController`` to try scheduling ``Task`` again. |
|
156 | 156 | |
|
157 | 157 | ``spin`` is just an empty ping method to ensure that the Controller has scheduled all available Tasks, and should not be needed under most normal circumstances. |
|
158 | 158 | |
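As a sketch of the search-type pattern just mentioned: this is hedged, since ``search`` is a hypothetical function the engines would already need to have defined, and only ``run``, ``get_task_result`` and ``abort`` come from this document::

    In [9]: taskids = [ tc.run(client.Task("r = search(%i)" % i, pull="r")) for i in range(100) ]

    In [10]: # ... suppose polling shows one Task has already returned the solution ...

    In [11]: for tid in taskids:
       ....:     if tc.get_task_result(tid) is None:   # still scheduled or pending
       ....:         tc.abort(tid)                     # no longer needed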
|
159 | 159 | That covers the ``TaskClient``, a simple interface to the cluster. With this, the user can submit jobs (and abort them if necessary), request their results, and synchronize on arbitrary subsets of jobs. |
|
160 | 160 | |
|
161 | 161 | .. _task: |
|
162 | 162 | |
|
163 | 163 | The Task Object |
|
164 | 164 | =============== |
|
165 | 165 | |
|
166 | 166 | The ``Task`` is the basic object for describing a job. It can be used in a very simple manner, where the user just specifies a command string to be executed as the ``Task``. The usage of this first argument is exactly the same as the ``execute`` method of the ``MultiEngine`` (in fact, ``execute`` is called to run the code):: |
|
167 | 167 | |
|
168 | 168 | In [1]: t = client.Task("a = str(id)") |
|
169 | 169 | |
|
170 | 170 | This ``Task`` would run and store the string representation of ``id`` in ``a`` in the namespace of the worker it ran on, but it is fairly useless because the user does not know anything about the state of that ``worker`` at the time the results are retrieved. It is important that each ``Task`` not expect the state of the ``worker`` to persist after the ``Task`` is completed. |
|
171 | 171 | There are many different situations for using ``Task`` Farming, and the ``Task`` object has many attributes for use in customizing the ``Task`` behavior. All of a ``Task``'s attributes may be specified in the constructor, through keyword arguments, or after ``Task`` construction through attribute assignment. |
|
172 | 172 | |
|
173 | 173 | Data Attributes |
|
174 | 174 | *************** |
|
175 | 175 | The user will likely want to move data around before or after executing the ``Task``. We provide methods for sending data to initialize the worker's namespace, and for specifying what data to bring back as the ``Task``'s results. |
|
176 | 176 | |
|
177 | 177 | * pull = [] |
|
178 | 178 | The obvious case is as above, where ``t`` would execute and store a result in ``a``; it is likely that the user would want to bring ``a`` back to their namespace. This is done through the ``pull`` attribute. ``pull`` can be a string or list of strings, and it specifies the names of variables to be retrieved. The ``TaskResult`` object retrieved by ``get_task_result`` will have a dictionary of keys and values, and the ``Task``'s ``pull`` attribute determines what goes into it:: |
|
179 | 179 | |
|
180 | 180 | In [2]: t = client.Task("a = str(id)", pull = "a") |
|
181 | 181 | |
|
182 | 182 | In [3]: t = client.Task("a = str(id)", pull = ["a", "id"]) |
|
183 | 183 | |
|
184 | 184 | * push = {} |
|
185 | 185 | A user might also want to initialize some data in the namespace before the code part of the ``Task`` is run. Enter ``push``. ``push`` is a dictionary of key/value pairs to be loaded from the user's namespace into the worker's namespace immediately before execution:: |
|
186 | 186 | |
|
187 | 187 | In [4]: t = client.Task("a = f(submitted)", push=dict(submitted=time.time()), pull="a") |
|
188 | 188 | |
|
189 | 189 | ``push`` and ``pull`` result directly in calls to an ``engine``'s ``push`` and ``pull`` methods before and after ``Task`` execution, respectively, and thus their API is the same. |
|
190 | 190 | |
|
191 | 191 | Namespace Cleaning |
|
192 | 192 | ****************** |
|
193 | 193 | When a user is running a large number of Tasks, it is likely that the workers' namespaces could become cluttered. Some Tasks might be sensitive to clutter, while others might be known to cause namespace pollution. For these reasons, Tasks have two boolean attributes for cleaning up the namespace. |
|
194 | 194 | |
|
195 | 195 | * ``clear_after`` |
|
196 | 196 | If ``clear_after`` is ``True``, the worker on which the ``Task`` was run will be reset (via ``engine.reset``) upon completion of the ``Task``. This can be useful both for Tasks that produce clutter and for Tasks whose intermediate data one might wish to keep private:: |
|
197 | 197 | |
|
198 | 198 | In [5]: t = client.Task("a = range(1e10)", pull = "a",clear_after=True) |
|
199 | 199 | |
|
200 | 200 | |
|
201 | 201 | * ``clear_before`` |
|
202 | 202 | As one might guess, ``clear_before`` is identical to ``clear_after``, but it takes place before the ``Task`` is run. This ensures that the ``Task`` runs on a fresh worker:: |
|
203 | 203 | |
|
204 | 204 | In [6]: t = client.Task("a = globals()", pull = "a",clear_before=True) |
|
205 | 205 | |
|
206 | 206 | Of course, a user can use both at the same time, ensuring that all workers are clear except while they are currently running a job. Both of these default to ``False``. |
|
207 | 207 | |
|
208 | 208 | Fault Tolerance |
|
209 | 209 | *************** |
|
210 | 210 | It is possible that Tasks might fail, and there are a variety of reasons this could happen. One might be that the worker a ``Task`` was running on disconnected, even though there was nothing wrong with the ``Task`` itself. With the fault tolerance attributes of the ``Task``, the user can specify how many times to resubmit the ``Task``, and what to do if it never succeeds. |
|
211 | 211 | |
|
212 | 212 | * ``retries`` |
|
213 | 213 | ``retries`` is an integer, specifying the number of times a ``Task`` is to be retried. It defaults to zero. It is often a good idea for this number to be 1 or 2, to protect the ``Task`` from disconnecting engines, but not a large number. If a ``Task`` fails 100 times, there is probably something wrong with the ``Task``. The canonical bad example:: |
|
214 | 214 | |
|
215 | 215 | In [7]: t = client.Task("os.kill(os.getpid(), 9)", retries=99) |
|
216 | 216 | |
|
217 | 217 | This would actually take down 100 workers. |
|
218 | 218 | |
|
219 | 219 | * ``recovery_task`` |
|
220 | 220 | ``recovery_task`` is another ``Task`` object, to be run in the event that the original ``Task`` still fails after running out of retries. Since ``recovery_task`` is itself a ``Task`` object, it can have its own ``recovery_task``. The chain of Tasks can be arbitrarily long, except that loops are not allowed (that would be bad!). A brief sketch follows this list. |
|
221 | 221 | |
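For example, a hedged sketch of combining ``retries`` with a ``recovery_task``; ``flaky_compute`` and ``fallback`` are hypothetical functions that the engines would already need to have defined::

    In [8]: backup = client.Task("a = fallback()", pull="a")

    In [9]: t = client.Task("a = flaky_compute()", pull="a", retries=2, recovery_task=backup)

    In [10]: tid = tc.run(t)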
|
222 | 222 | Dependencies |
|
223 | 223 | ************ |
|
224 | 224 | Dependencies are the most powerful part of the ``Task`` farming system, because they allow the user to classify the workers and to guide the ``Task`` distribution without meddling with the controller directly. They make use of two objects: the ``Task``'s ``depend`` attribute, and the engine's ``properties``. See the `MultiEngine`_ reference for how to use engine properties. The engine properties API exists for extending IPython, allowing conditional execution and new controllers that make decisions based on the properties of their engines. Currently the ``Task`` dependency is the only internal use of the properties API. |
|
225 | 225 | |
|
226 | 226 | .. _MultiEngine: ./parallel_multiengine |
|
227 | 227 | |
|
228 | 228 | The ``depend`` attribute of a ``Task`` must be a function of exactly one argument, the worker's properties dictionary, and it should return ``True`` if the ``Task`` should be allowed to run on the worker and ``False`` if not. The usage in the controller is fault tolerant, so exceptions raised by ``Task.depend`` will be ignored and are functionally equivalent to always returning ``False``. Tasks with invalid ``depend`` functions will never be assigned to a worker:: |
|
229 | 229 | |
|
230 | 230 | In [8]: def dep(properties): |
|
231 | 231 | ... return properties["RAM"] > 2**32 # have at least 4GB |
|
232 | 232 | In [9]: t = client.Task("a = bigfunc()", depend=dep) |
|
233 | 233 | |
|
234 | 234 | It is important to note that assignment of values to the properties dict is done entirely by the user, either locally (in the engine) using the EngineAPI, or remotely, through the ``MultiEngineClient``'s get/set_properties methods. |
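For example, a rough sketch of the remote route; the exact ``set_properties`` signature is an assumption (a dictionary plus an optional ``targets`` argument) and may differ in your version::

    In [10]: mec = client.MultiEngineClient()

    In [11]: # assumed signature: tag engine 0 as a big-memory node
    In [12]: mec.set_properties(dict(RAM=2**34), targets=[0])

Only Tasks whose ``depend`` function returns ``True`` for that properties dictionary will then be scheduled onto engine 0.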
|
235 | 235 | |
|
236 | 236 | |
|
237 | 237 | |
|
238 | 238 | |
|
239 | 239 | |
|
240 | 240 |
@@ -1,1165 +1,1165 @@
|
1 | 1 | ================= |
|
2 | 2 | IPython reference |
|
3 | 3 | ================= |
|
4 | 4 | |
|
5 | 5 | .. _command_line_options: |
|
6 | 6 | |
|
7 | 7 | Command-line usage |
|
8 | 8 | ================== |
|
9 | 9 | |
|
10 | 10 | You start IPython with the command:: |
|
11 | 11 | |
|
12 | 12 | $ ipython [options] files |
|
13 | 13 | |
|
14 | 14 | .. note:: |
|
15 | 15 | |
|
16 | 16 | For IPython on Python 3, use ``ipython3`` in place of ``ipython``. |
|
17 | 17 | |
|
18 | 18 | If invoked with no options, it executes all the files listed in sequence |
|
19 | 19 | and drops you into the interpreter while still acknowledging any options |
|
20 | 20 | you may have set in your ipython_config.py. This behavior is different from |
|
21 | 21 | standard Python, which when called as python -i will only execute one |
|
22 | 22 | file and ignore your configuration setup. |
|
23 | 23 | |
|
24 | 24 | Please note that some of the configuration options are not available at |
|
25 | 25 | the command line, simply because they are not practical here. Look into |
|
26 | 26 | your configuration files for details on those. There are separate configuration |
|
27 | 27 | files for each profile, and the files look like "ipython_config.py" or |
|
28 | 28 | "ipython_config_<frontendname>.py". Profile directories look like |
|
29 | 29 | "profile_profilename" and are typically installed in the IPYTHONDIR directory. |
|
30 | 30 | For Linux users, this will be $HOME/.config/ipython, and for other users it |
|
31 | 31 | will be $HOME/.ipython. For Windows users, $HOME resolves to C:\\Documents and |
|
32 | 32 | Settings\\YourUserName in most instances. |
|
33 | 33 | |
|
34 | 34 | |
|
35 | 35 | Eventloop integration |
|
36 | 36 | --------------------- |
|
37 | 37 | |
|
38 | 38 | Previously IPython had command line options for controlling GUI event loop |
|
39 | 39 | integration (-gthread, -qthread, -q4thread, -wthread, -pylab). As of IPython |
|
40 | 40 | version 0.11, these have been removed. Please see the new ``%gui`` |
|
41 | 41 | magic command or :ref:`this section <gui_support>` for details on the new |
|
42 | 42 | interface, or specify the gui at the commandline:: |
|
43 | 43 | |
|
44 | 44 | $ ipython --gui=qt |
|
45 | 45 | |
|
46 | 46 | |
|
47 | 47 | Command-line Options |
|
48 | 48 | -------------------- |
|
49 | 49 | |
|
50 | 50 | To see the options IPython accepts, use ``ipython --help`` (and you probably |
|
51 | 51 | should run the output through a pager such as ``ipython --help | less`` for |
|
52 | 52 | more convenient reading). This shows all the options that have a single-word |
|
53 | 53 | alias to control them, but IPython lets you configure all of its objects from |
|
54 | 54 | the command-line by passing the full class name and a corresponding value; type |
|
55 | 55 | ``ipython --help-all`` to see this full list. For example:: |
|
56 | 56 | |
|
57 | 57 | ipython --matplotlib qt |
|
58 | 58 | |
|
59 | 59 | is equivalent to:: |
|
60 | 60 | |
|
61 | 61 | ipython --TerminalIPythonApp.matplotlib='qt' |
|
62 | 62 | |
|
63 | 63 | Note that in the second form, you *must* use the equal sign, as the expression |
|
64 | 64 | is evaluated as an actual Python assignment. While in the above example the |
|
65 | 65 | short form is more convenient, only the most common options have a short form, |
|
66 | 66 | while any configurable variable in IPython can be set at the command-line by |
|
67 | 67 | using the long form. This long form is the same syntax used in the |
|
68 | 68 | configuration files, if you want to set these options permanently. |
|
69 | 69 | |
|
70 | 70 | |
|
71 | 71 | Interactive use |
|
72 | 72 | =============== |
|
73 | 73 | |
|
74 | 74 | IPython is meant to work as a drop-in replacement for the standard interactive |
|
75 | 75 | interpreter. As such, any code which is valid python should execute normally |
|
76 | 76 | under IPython (cases where this is not true should be reported as bugs). It |
|
77 | 77 | does, however, offer many features which are not available at a standard python |
|
78 | 78 | prompt. What follows is a list of these. |
|
79 | 79 | |
|
80 | 80 | |
|
81 | 81 | Caution for Windows users |
|
82 | 82 | ------------------------- |
|
83 | 83 | |
|
84 | 84 | Windows, unfortunately, uses the '\\' character as a path separator. This is a |
|
85 | 85 | terrible choice, because '\\' also represents the escape character in most |
|
86 | 86 | modern programming languages, including Python. For this reason, using '/' |
|
87 | 87 | character is recommended if you have problems with ``\``. However, in Windows |
|
88 | 88 | commands '/' flags options, so you cannot use it for the root directory. This |
|
89 | 89 | means that paths beginning at the root must be typed in a contrived manner |
|
90 | 90 | like: ``%copy \opt/foo/bar.txt \tmp`` |
|
91 | 91 | |
|
92 | 92 | .. _magic: |
|
93 | 93 | |
|
94 | 94 | Magic command system |
|
95 | 95 | -------------------- |
|
96 | 96 | |
|
97 | 97 | IPython will treat any line whose first character is a % as a special |
|
98 | 98 | call to a 'magic' function. These allow you to control the behavior of |
|
99 | 99 | IPython itself, plus a lot of system-type features. They are all |
|
100 | 100 | prefixed with a % character, but parameters are given without |
|
101 | 101 | parentheses or quotes. |
|
102 | 102 | |
|
103 | 103 | Lines that begin with ``%%`` signal a *cell magic*: they take as arguments not |
|
104 | 104 | only the rest of the current line, but all lines below them as well, in the |
|
105 | 105 | current execution block. Cell magics can in fact make arbitrary modifications |
|
106 | 106 | to the input they receive, which need not even be valid Python code at all. |
|
107 | 107 | They receive the whole block as a single string. |
|
108 | 108 | |
|
109 | 109 | As a line magic example, the ``%cd`` magic works just like the OS command of |
|
110 | 110 | the same name:: |
|
111 | 111 | |
|
112 | 112 | In [8]: %cd |
|
113 | 113 | /home/fperez |
|
114 | 114 | |
|
115 | 115 | The following uses the builtin ``timeit`` in cell mode:: |
|
116 | 116 | |
|
117 | 117 | In [10]: %%timeit x = range(10000) |
|
118 | 118 | ...: min(x) |
|
119 | 119 | ...: max(x) |
|
120 | 120 | ...: |
|
121 | 121 | 1000 loops, best of 3: 438 us per loop |
|
122 | 122 | |
|
123 | 123 | In this case, ``x = range(10000)`` is called as the line argument, and the |
|
124 | 124 | block with ``min(x)`` and ``max(x)`` is called as the cell body. The |
|
125 | 125 | ``timeit`` magic receives both. |
|
126 | 126 | |
|
127 | 127 | If you have 'automagic' enabled (as it is by default), you don't need to type in |
|
128 | 128 | the single ``%`` explicitly for line magics; IPython will scan its internal |
|
129 | 129 | list of magic functions and call one if it exists. With automagic on you can |
|
130 | 130 | then just type ``cd mydir`` to go to directory 'mydir':: |
|
131 | 131 | |
|
132 | 132 | In [9]: cd mydir |
|
133 | 133 | /home/fperez/mydir |
|
134 | 134 | |
|
135 | 135 | Note that cell magics *always* require an explicit ``%%`` prefix, automagic |
|
136 | 136 | calling only works for line magics. |
|
137 | 137 | |
|
138 | 138 | The automagic system has the lowest possible precedence in name searches, so |
|
139 | 139 | defining an identifier with the same name as an existing magic function will |
|
140 | 140 | shadow it for automagic use. You can still access the shadowed magic function |
|
141 | 141 | by explicitly using the ``%`` character at the beginning of the line. |
|
142 | 142 | |
|
143 | 143 | An example (with automagic on) should clarify all this: |
|
144 | 144 | |
|
145 | 145 | .. sourcecode:: ipython |
|
146 | 146 | |
|
147 | 147 | In [1]: cd ipython # %cd is called by automagic |
|
148 | 148 | /home/fperez/ipython |
|
149 | 149 | |
|
150 | 150 | In [2]: cd=1 # now cd is just a variable |
|
151 | 151 | |
|
152 | 152 | In [3]: cd .. # and doesn't work as a function anymore |
|
153 | 153 | File "<ipython-input-3-9fedb3aff56c>", line 1 |
|
154 | 154 | cd .. |
|
155 | 155 | ^ |
|
156 | 156 | SyntaxError: invalid syntax |
|
157 | 157 | |
|
158 | 158 | |
|
159 | 159 | In [4]: %cd .. # but %cd always works |
|
160 | 160 | /home/fperez |
|
161 | 161 | |
|
162 | 162 | In [5]: del cd # if you remove the cd variable, automagic works again |
|
163 | 163 | |
|
164 | 164 | In [6]: cd ipython |
|
165 | 165 | |
|
166 | 166 | /home/fperez/ipython |
|
167 | 167 | |
|
168 | 168 | Defining your own magics |
|
169 | 169 | ++++++++++++++++++++++++ |
|
170 | 170 | |
|
171 | 171 | There are two main ways to define your own magic functions: from standalone |
|
172 | 172 | functions and by inheriting from a base class provided by IPython: |
|
173 | 173 | :class:`IPython.core.magic.Magics`. Below we show code you can place in a file |
|
174 | 174 | that you load from your configuration, such as any file in the ``startup`` |
|
175 | 175 | subdirectory of your default IPython profile. |
|
176 | 176 | |
|
177 | 177 | First, let us see the simplest case. The following shows how to create a line |
|
178 | 178 | magic, a cell one and one that works in both modes, using just plain functions: |
|
179 | 179 | |
|
180 | 180 | .. sourcecode:: python |
|
181 | 181 | |
|
182 | 182 | from IPython.core.magic import (register_line_magic, register_cell_magic, |
|
183 | 183 | register_line_cell_magic) |
|
184 | 184 | |
|
185 | 185 | @register_line_magic |
|
186 | 186 | def lmagic(line): |
|
187 | 187 | "my line magic" |
|
188 | 188 | return line |
|
189 | 189 | |
|
190 | 190 | @register_cell_magic |
|
191 | 191 | def cmagic(line, cell): |
|
192 | 192 | "my cell magic" |
|
193 | 193 | return line, cell |
|
194 | 194 | |
|
195 | 195 | @register_line_cell_magic |
|
196 | 196 | def lcmagic(line, cell=None): |
|
197 | 197 | "Magic that works both as %lcmagic and as %%lcmagic" |
|
198 | 198 | if cell is None: |
|
199 | 199 | print "Called as line magic" |
|
200 | 200 | return line |
|
201 | 201 | else: |
|
202 | 202 | print "Called as cell magic" |
|
203 | 203 | return line, cell |
|
204 | 204 | |
|
205 | 205 | # We delete these to avoid name conflicts for automagic to work |
|
206 | 206 | del lmagic, lcmagic |
|
207 | 207 | |
|
208 | 208 | |
|
209 | 209 | You can also create magics of all three kinds by inheriting from the |
|
210 | 210 | :class:`IPython.core.magic.Magics` class. This lets you create magics that can |
|
211 | 211 | potentially hold state in between calls, and that have full access to the main |
|
212 | 212 | IPython object: |
|
213 | 213 | |
|
214 | 214 | .. sourcecode:: python |
|
215 | 215 | |
|
216 | 216 | # This code can be put in any Python module, it does not require IPython |
|
217 | 217 | # itself to be running already. It only creates the magics subclass but |
|
218 | 218 | # doesn't instantiate it yet. |
|
219 | 219 | from IPython.core.magic import (Magics, magics_class, line_magic, |
|
220 | 220 | cell_magic, line_cell_magic) |
|
221 | 221 | |
|
222 | 222 | # The class MUST call this class decorator at creation time |
|
223 | 223 | @magics_class |
|
224 | 224 | class MyMagics(Magics): |
|
225 | 225 | |
|
226 | 226 | @line_magic |
|
227 | 227 | def lmagic(self, line): |
|
228 | 228 | "my line magic" |
|
229 | 229 | print "Full access to the main IPython object:", self.shell |
|
230 | 230 | print "Variables in the user namespace:", self.shell.user_ns.keys() |
|
231 | 231 | return line |
|
232 | 232 | |
|
233 | 233 | @cell_magic |
|
234 | 234 | def cmagic(self, line, cell): |
|
235 | 235 | "my cell magic" |
|
236 | 236 | return line, cell |
|
237 | 237 | |
|
238 | 238 | @line_cell_magic |
|
239 | 239 | def lcmagic(self, line, cell=None): |
|
240 | 240 | "Magic that works both as %lcmagic and as %%lcmagic" |
|
241 | 241 | if cell is None: |
|
242 | 242 | print "Called as line magic" |
|
243 | 243 | return line |
|
244 | 244 | else: |
|
245 | 245 | print "Called as cell magic" |
|
246 | 246 | return line, cell |
|
247 | 247 | |
|
248 | 248 | |
|
249 | 249 | # In order to actually use these magics, you must register them with a |
|
250 | 250 | # running IPython. This code must be placed in a file that is loaded once |
|
251 | 251 | # IPython is up and running: |
|
252 | 252 | ip = get_ipython() |
|
253 | 253 | # You can register the class itself without instantiating it. IPython will |
|
254 | 254 | # call the default constructor on it. |
|
255 | 255 | ip.register_magics(MyMagics) |
|
256 | 256 | |
|
257 | 257 | If you want to create a class with a different constructor that holds |
|
258 | 258 | additional state, then you should always call the parent constructor and |
|
259 | 259 | instantiate the class yourself before registration: |
|
260 | 260 | |
|
261 | 261 | .. sourcecode:: python |
|
262 | 262 | |
|
263 | 263 | @magics_class |
|
264 | 264 | class StatefulMagics(Magics): |
|
265 | 265 | "Magics that hold additional state" |
|
266 | 266 | |
|
267 | 267 | def __init__(self, shell, data): |
|
268 | 268 | # You must call the parent constructor |
|
269 | 269 | super(StatefulMagics, self).__init__(shell) |
|
270 | 270 | self.data = data |
|
271 | 271 | |
|
272 | 272 | # etc... |
|
273 | 273 | |
|
274 | 274 | # This class must then be registered with a manually created instance, |
|
275 | 275 | # since its constructor has different arguments from the default: |
|
276 | 276 | ip = get_ipython() |
|
277 | 277 | magics = StatefulMagics(ip, some_data) |
|
278 | 278 | ip.register_magics(magics) |
|
279 | 279 | |
|
280 | 280 | |
|
281 | 281 | In earlier versions, IPython had an API for the creation of line magics (cell |
|
282 | 282 | magics did not exist at the time) that required you to create functions with a |
|
283 | 283 | method-looking signature and to manually pass both the function and the name. |
|
284 | 284 | While this API is no longer recommended, it remains indefinitely supported for |
|
285 | 285 | backwards compatibility purposes. With the old API, you'd create a magic as |
|
286 | 286 | follows: |
|
287 | 287 | |
|
288 | 288 | .. sourcecode:: python |
|
289 | 289 | |
|
290 | 290 | def func(self, line): |
|
291 | 291 | print "Line magic called with line:", line |
|
292 | 292 | print "IPython object:", self.shell |
|
293 | 293 | |
|
294 | 294 | ip = get_ipython() |
|
295 | 295 | # Declare this function as the magic %mycommand |
|
296 | 296 | ip.define_magic('mycommand', func) |
|
297 | 297 | |
|
298 | 298 | Type ``%magic`` for more information, including a list of all available magic |
|
299 | 299 | functions at any time and their docstrings. You can also type |
|
300 | 300 | ``%magic_function_name?`` (see :ref:`below <dynamic_object_info>` for |
|
301 | 301 | information on the '?' system) to get information about any particular magic |
|
302 | 302 | function you are interested in. |
|
303 | 303 | |
|
304 | 304 | The API documentation for the :mod:`IPython.core.magic` module contains the full |
|
305 | 305 | docstrings of all currently available magic commands. |
|
306 | 306 | |
|
307 | 307 | |
|
308 | 308 | Access to the standard Python help |
|
309 | 309 | ---------------------------------- |
|
310 | 310 | |
|
311 | 311 | Simply type ``help()`` to access Python's standard help system. You can |
|
312 | 312 | also type ``help(object)`` for information about a given object, or |
|
313 | 313 | ``help('keyword')`` for information on a keyword. You may need to configure your |
|
314 | 314 | PYTHONDOCS environment variable for this feature to work correctly. |
|
315 | 315 | |
|
316 | 316 | .. _dynamic_object_info: |
|
317 | 317 | |
|
318 | 318 | Dynamic object information |
|
319 | 319 | -------------------------- |
|
320 | 320 | |
|
321 | 321 | Typing ``?word`` or ``word?`` prints detailed information about an object. If |
|
322 | 322 | certain strings in the object are too long (e.g. function signatures) they get |
|
323 | 323 | snipped in the center for brevity. This system gives access to variable types and |
|
324 | 324 | values, docstrings, function prototypes and other useful information. |
|
325 | 325 | |
|
326 | 326 | If the information will not fit in the terminal, it is displayed in a pager |
|
327 | 327 | (``less`` if available, otherwise a basic internal pager). |
|
328 | 328 | |
|
329 | 329 | Typing ``??word`` or ``word??`` gives access to the full information, including |
|
330 | 330 | the source code where possible. Long strings are not snipped. |
|
331 | 331 | |
|
332 | 332 | The following magic functions are particularly useful for gathering |
|
333 | 333 | information about your working environment. You can get more details by |
|
334 | 334 | typing ``%magic`` or querying them individually (``%function_name?``); |
|
335 | 335 | this is just a summary: |
|
336 | 336 | |
|
337 | 337 | * **%pdoc <object>**: Print (or run through a pager if too long) the |
|
338 | 338 | docstring for an object. If the given object is a class, it will |
|
339 | 339 | print both the class and the constructor docstrings. |
|
340 | 340 | * **%pdef <object>**: Print the call signature for any callable |
|
341 | 341 | object. If the object is a class, print the constructor information. |
|
342 | 342 | * **%psource <object>**: Print (or run through a pager if too long) |
|
343 | 343 | the source code for an object. |
|
344 | 344 | * **%pfile <object>**: Show the entire source file where an object was |
|
345 | 345 | defined via a pager, opening it at the line where the object |
|
346 | 346 | definition begins. |
|
347 | 347 | * **%who/%whos**: These functions give information about identifiers |
|
348 | 348 | you have defined interactively (not things you loaded or defined |
|
349 | 349 | in your configuration files). %who just prints a list of |
|
350 | 350 | identifiers and %whos prints a table with some basic details about |
|
351 | 351 | each identifier. |
|
352 | 352 | |
|
353 | 353 | Note that the dynamic object information functions (?/??, ``%pdoc``, |
|
354 | 354 | ``%pfile``, ``%pdef``, ``%psource``) work on object attributes, as well as |
|
355 | 355 | directly on variables. For example, after doing ``import os``, you can use |
|
356 | 356 | ``os.path.abspath??``. |
|
357 | 357 | |
|
358 | 358 | .. _readline: |
|
359 | 359 | |
|
360 | 360 | Readline-based features |
|
361 | 361 | ----------------------- |
|
362 | 362 | |
|
363 | 363 | These features require the GNU readline library, so they won't work if your |
|
364 | 364 | Python installation lacks readline support. We will first describe the default |
|
365 | 365 | behavior IPython uses, and then how to change it to suit your preferences. |
|
366 | 366 | |
|
367 | 367 | |
|
368 | 368 | Command line completion |
|
369 | 369 | +++++++++++++++++++++++ |
|
370 | 370 | |
|
371 | 371 | At any time, hitting TAB will complete any available python commands or |
|
372 | 372 | variable names, and show you a list of the possible completions if |
|
373 | 373 | there's no unambiguous one. It will also complete filenames in the |
|
374 | 374 | current directory if no python names match what you've typed so far. |
|
375 | 375 | |
|
376 | 376 | |
|
377 | 377 | Search command history |
|
378 | 378 | ++++++++++++++++++++++ |
|
379 | 379 | |
|
380 | 380 | IPython provides two ways to search through previous input and thus |
|
381 | 381 | reduce the need for repetitive typing: |
|
382 | 382 | |
|
383 | 383 | 1. Start typing, and then use Ctrl-p (previous,up) and Ctrl-n |
|
384 | 384 | (next,down) to search through only the history items that match |
|
385 | 385 | what you've typed so far. If you use Ctrl-p/Ctrl-n at a blank |
|
386 | 386 | prompt, they just behave like normal arrow keys. |
|
387 | 387 | 2. Hit Ctrl-r: opens a search prompt. Begin typing and the system |
|
388 | 388 | searches your history for lines that contain what you've typed so |
|
389 | 389 | far, completing as much as it can. |
|
390 | 390 | |
|
391 | 391 | |
|
392 | 392 | Persistent command history across sessions |
|
393 | 393 | ++++++++++++++++++++++++++++++++++++++++++ |
|
394 | 394 | |
|
395 | 395 | IPython will save your input history when it leaves and reload it next |
|
396 | 396 | time you restart it. By default, the history file is named |
|
397 | 397 | $IPYTHONDIR/profile_<name>/history.sqlite. This allows you to keep |
|
398 | 398 | separate histories related to various tasks: commands related to |
|
399 | 399 | numerical work will not be clobbered by a system shell history, for |
|
400 | 400 | example. |
|
401 | 401 | |
|
402 | 402 | |
|
403 | 403 | Autoindent |
|
404 | 404 | ++++++++++ |
|
405 | 405 | |
|
406 | 406 | IPython can recognize lines ending in ':' and indent the next line, |
|
407 | 407 | while also un-indenting automatically after 'raise' or 'return'. |
|
408 | 408 | |
|
409 | 409 | This feature uses the readline library, so it will honor your |
|
410 | 410 | :file:`~/.inputrc` configuration (or whatever file your INPUTRC variable points |
|
411 | 411 | to). Adding the following lines to your :file:`.inputrc` file can make |
|
412 | 412 | indenting/unindenting more convenient (M-i indents, M-u unindents):: |
|
413 | 413 | |
|
414 | 414 | # if you don't already have a ~/.inputrc file, you need this include: |
|
415 | 415 | $include /etc/inputrc |
|
416 | 416 | |
|
417 | 417 | $if Python |
|
418 | 418 | "\M-i": " " |
|
419 | 419 | "\M-u": "\d\d\d\d" |
|
420 | 420 | $endif |
|
421 | 421 | |
|
422 | 422 | Note that there are 4 spaces between the quote marks after "M-i" above. |
|
423 | 423 | |
|
424 | 424 | .. warning:: |
|
425 | 425 | |
|
426 | 426 | Setting the above indents will cause problems with unicode text entry in |
|
427 | 427 | the terminal. |
|
428 | 428 | |
|
429 | 429 | .. warning:: |
|
430 | 430 | |
|
431 | 431 | Autoindent is ON by default, but it can cause problems with the pasting of |
|
432 | 432 | multi-line indented code (the pasted code gets re-indented on each line). A |
|
433 | 433 | magic function %autoindent allows you to toggle it on/off at runtime. You |
|
434 | 434 | can also disable it permanently in your :file:`ipython_config.py` file |
|
435 | 435 | (set TerminalInteractiveShell.autoindent=False). |
|
436 | 436 | |
|
437 | 437 | If you want to paste multiple lines in the terminal, it is recommended that |
|
438 | 438 | you use ``%paste``. |
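For reference, disabling it permanently amounts to one line in :file:`ipython_config.py`, using the setting named in the warning above:

.. sourcecode:: python

    c = get_config()
    # Turn off automatic indentation in the terminal frontend
    c.TerminalInteractiveShell.autoindent = False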
|
439 | 439 | |
|
440 | 440 | |
|
441 | 441 | Customizing readline behavior |
|
442 | 442 | +++++++++++++++++++++++++++++ |
|
443 | 443 | |
|
444 | 444 | All these features are based on the GNU readline library, which has an |
|
445 | 445 | extremely customizable interface. Normally, readline is configured via a |
|
446 | 446 | file which defines the behavior of the library; the details of the |
|
447 | 447 | syntax for this can be found in the readline documentation available |
|
448 | 448 | with your system or on the Internet. IPython doesn't read this file (if |
|
449 | 449 | it exists) directly, but it does support passing to readline valid |
|
450 | 450 | options via a simple interface. In brief, you can customize readline by |
|
451 | 451 | setting the following options in your configuration file (note |
|
452 | 452 | that these options can not be specified at the command line): |
|
453 | 453 | |
|
454 | 454 | * **readline_parse_and_bind**: this holds a list of strings to be executed |
|
455 | 455 | via a readline.parse_and_bind() command. The syntax for valid commands |
|
456 | 456 | of this kind can be found by reading the documentation for the GNU |
|
457 | 457 | readline library, as these commands are of the kind which readline |
|
458 | 458 | accepts in its configuration file. |
|
459 | 459 | * **readline_remove_delims**: a string of characters to be removed |
|
460 | 460 | from the default word-delimiters list used by readline, so that |
|
461 | 461 | completions may be performed on strings which contain them. Do not |
|
462 | 462 | change the default value unless you know what you're doing. |
|
463 | 463 | |
|
464 | 464 | You will find the default values in your configuration file. |
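As a hedged illustration of what such settings can look like in :file:`ipython_config.py` (the option names come from the list above; attaching them to ``TerminalInteractiveShell``, and the particular bindings, are assumptions that may vary by version):

.. sourcecode:: python

    c = get_config()
    # Strings handed verbatim to readline.parse_and_bind()
    c.TerminalInteractiveShell.readline_parse_and_bind = [
        'tab: complete',
        '"\\C-l": clear-screen',
    ]
    # Characters dropped from readline's default word-delimiter list
    c.TerminalInteractiveShell.readline_remove_delims = '-/~'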
|
465 | 465 | |
|
466 | 466 | |
|
467 | 467 | Session logging and restoring |
|
468 | 468 | ----------------------------- |
|
469 | 469 | |
|
470 | 470 | You can log all input from a session either by starting IPython with the |
|
471 | 471 | command line switch ``--logfile=foo.py`` (see :ref:`here <command_line_options>`) |
|
472 | 472 | or by activating the logging at any moment with the magic function %logstart. |
|
473 | 473 | |
|
474 | 474 | Log files can later be reloaded by running them as scripts and IPython |
|
475 | 475 | will attempt to 'replay' the log by executing all the lines in it, thus |
|
476 | 476 | restoring the state of a previous session. This feature is not quite |
|
477 | 477 | perfect, but can still be useful in many cases. |
|
478 | 478 | |
|
479 | 479 | The log files can also be used as a way to have a permanent record of |
|
480 | 480 | any code you wrote while experimenting. Log files are regular text files |
|
481 | 481 | which you can later open in your favorite text editor to extract code or |
|
482 | 482 | to 'clean them up' before using them to replay a session. |
|
483 | 483 | |
|
484 | 484 | The `%logstart` function for activating logging in mid-session is used as |
|
485 | 485 | follows:: |
|
486 | 486 | |
|
487 | 487 | %logstart [log_name [log_mode]] |
|
488 | 488 | |
|
489 | 489 | If no name is given, it defaults to a file named 'ipython_log.py' in your |
|
490 | 490 | current working directory, in 'rotate' mode (see below). |
|
491 | 491 | |
|
492 | 492 | '%logstart name' saves to file 'name' in 'backup' mode. It saves your |
|
493 | 493 | history up to that point and then continues logging. |
|
494 | 494 | |
|
495 | 495 | %logstart takes a second optional parameter: logging mode. This can be |
|
496 | 496 | one of (note that the modes are given unquoted): |
|
497 | 497 | |
|
498 | 498 | * [over:] overwrite existing log_name. |
|
499 | 499 | * [backup:] rename (if exists) to log_name~ and start log_name. |
|
500 | 500 | * [append:] well, that says it. |
|
501 | 501 | * [rotate:] create rotating logs log_name.1~, log_name.2~, etc. |
|
502 | 502 | |
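A small illustration of the two-argument form, where ``mysession.py`` is just an example file name::

    In [1]: %logstart mysession.py append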
|
503 | 503 | The %logoff and %logon functions allow you to temporarily stop and |
|
504 | 504 | resume logging to a file which had previously been started with |
|
505 | 505 | %logstart. They will fail (with an explanation) if you try to use them |
|
506 | 506 | before logging has been started. |
|
507 | 507 | |
|
508 | 508 | .. _system_shell_access: |
|
509 | 509 | |
|
510 | 510 | System shell access |
|
511 | 511 | ------------------- |
|
512 | 512 | |
|
513 | 513 | Any input line beginning with a ! character is passed verbatim (minus |
|
514 | 514 | the !, of course) to the underlying operating system. For example, |
|
515 | 515 | typing ``!ls`` will run 'ls' in the current directory. |
|
516 | 516 | |
|
517 | 517 | Manual capture of command output |
|
518 | 518 | -------------------------------- |
|
519 | 519 | |
|
520 | 520 | You can assign the result of a system command to a Python variable with the |
|
521 | 521 | syntax ``myfiles = !ls``. This gets machine readable output from stdout |
|
522 | 522 | (e.g. without colours), and splits on newlines. To explicitly get this sort of |
|
523 | 523 | output without assigning to a variable, use two exclamation marks (``!!ls``) or |
|
524 | 524 | the ``%sx`` magic command. |
|
525 | 525 | |
|
526 | 526 | The captured list has some convenience features. ``myfiles.n`` or ``myfiles.s`` |
|
527 | 527 | returns a string delimited by newlines or spaces, respectively. ``myfiles.p`` |
|
528 | 528 | produces `path objects <http://pypi.python.org/pypi/path.py>`_ from the list items. |
|
529 | 529 | See :ref:`string_lists` for details. |
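A brief illustration of those attributes (whatever ``ls`` happens to return is, of course, system dependent)::

    In [8]: myfiles = !ls

    In [9]: myfiles.s      # the same listing as one space-separated string

    In [10]: myfiles.n     # ... or newline-separated

    In [11]: myfiles.p     # ... or as a list of path objects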
|
530 | 530 | |
|
531 | 531 | IPython also allows you to expand the value of python variables when |
|
532 | 532 | making system calls. Wrap variables or expressions in {braces}:: |
|
533 | 533 | |
|
534 | 534 | In [1]: pyvar = 'Hello world' |
|
535 | 535 | In [2]: !echo "A python variable: {pyvar}" |
|
536 | 536 | A python variable: Hello world |
|
537 | 537 | In [3]: import math |
|
538 | 538 | In [4]: x = 8 |
|
539 | 539 | In [5]: !echo {math.factorial(x)} |
|
540 | 540 | 40320 |
|
541 | 541 | |
|
542 | 542 | For simple cases, you can alternatively prepend $ to a variable name:: |
|
543 | 543 | |
|
544 | 544 | In [6]: !echo $sys.argv |
|
545 | 545 | [/home/fperez/usr/bin/ipython] |
|
546 | 546 | In [7]: !echo "A system variable: $$HOME" # Use $$ for literal $ |
|
547 | 547 | A system variable: /home/fperez |
|
548 | 548 | |
|
549 | 549 | System command aliases |
|
550 | 550 | ---------------------- |
|
551 | 551 | |
|
552 | 552 | The %alias magic function allows you to define magic functions which are in fact |
|
553 | 553 | system shell commands. These aliases can have parameters. |
|
554 | 554 | |
|
555 | 555 | ``%alias alias_name cmd`` defines 'alias_name' as an alias for 'cmd' |
|
556 | 556 | |
|
557 | 557 | Then, typing ``alias_name params`` will execute the system command 'cmd |
|
558 | 558 | params' (from your underlying operating system). |
|
559 | 559 | |
|
560 | 560 | You can also define aliases with parameters using %s specifiers (one per |
|
561 | 561 | parameter). The following example defines the parts function as an |
|
562 | 562 | alias to the command 'echo first %s second %s' where each %s will be |
|
563 | 563 | replaced by a positional parameter to the call to %parts:: |
|
564 | 564 | |
|
565 | 565 | In [1]: %alias parts echo first %s second %s |
|
566 | 566 | In [2]: parts A B |
|
567 | 567 | first A second B |
|
568 | 568 | In [3]: parts A |
|
569 | 569 | ERROR: Alias <parts> requires 2 arguments, 1 given. |
|
570 | 570 | |
|
571 | 571 | If called with no parameters, %alias prints the table of currently |
|
572 | 572 | defined aliases. |
|
573 | 573 | |
|
574 | 574 | The %rehashx magic allows you to load your entire $PATH as |
|
575 | 575 | ipython aliases. See its docstring for further details. |
|
576 | 576 | |
|
577 | 577 | |
|
578 | 578 | .. _dreload: |
|
579 | 579 | |
|
580 | 580 | Recursive reload |
|
581 | 581 | ---------------- |
|
582 | 582 | |
|
583 | 583 | The :mod:`IPython.lib.deepreload` module allows you to recursively reload a |
|
584 | 584 | module: changes made to any of its dependencies will be reloaded without |
|
585 | 585 | having to exit. To start using it, do:: |
|
586 | 586 | |
|
587 | 587 | from IPython.lib.deepreload import reload as dreload |
|
588 | 588 | |
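and then call it on an already-imported module in place of the builtin ``reload`` (``mymodule`` here stands for whatever module you are working on)::

    In [1]: import mymodule

    In [2]: dreload(mymodule)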
|
589 | 589 | |
|
590 | 590 | Verbose and colored exception traceback printouts |
|
591 | 591 | ------------------------------------------------- |
|
592 | 592 | |
|
593 | 593 | IPython provides the option to see very detailed exception tracebacks, |
|
594 | 594 | which can be especially useful when debugging large programs. You can |
|
595 | 595 | run any Python file with the %run function to benefit from these |
|
596 | 596 | detailed tracebacks. Furthermore, both normal and verbose tracebacks can |
|
597 | 597 | be colored (if your terminal supports it) which makes them much easier |
|
598 | 598 | to parse visually. |
|
599 | 599 | |
|
600 | 600 | See the magic xmode and colors functions for details (just type %magic). |
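For instance, switching to the verbose traceback mode and a dark-background color scheme (both magics are the ones named above; the mode and scheme names are the common built-in ones)::

    In [1]: %xmode Verbose

    In [2]: %colors Linux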
|
601 | 601 | |
|
602 | 602 | These features are basically a terminal version of Ka-Ping Yee's cgitb |
|
603 | 603 | module, now part of the standard Python library. |
|
604 | 604 | |
|
605 | 605 | |
|
606 | 606 | .. _input_caching: |
|
607 | 607 | |
|
608 | 608 | Input caching system |
|
609 | 609 | -------------------- |
|
610 | 610 | |
|
611 | 611 | IPython offers numbered prompts (In/Out) with input and output caching |
|
612 | 612 | (also referred to as 'input history'). All input is saved and can be |
|
613 | 613 | retrieved as variables (besides the usual arrow key recall), in |
|
614 | 614 | addition to the %rep magic command that brings a history entry |
|
615 | 615 | up for editing on the next command line. |
|
616 | 616 | |
|
617 | 617 | The following GLOBAL variables always exist (so don't overwrite them!): |
|
618 | 618 | |
|
619 | 619 | * _i, _ii, _iii: store previous, next previous and next-next previous inputs. |
|
620 | 620 | * In, _ih : a list of all inputs; _ih[n] is the input from line n. If you |
|
621 | 621 | overwrite In with a variable of your own, you can remake the assignment to the |
|
622 | 622 | internal list with a simple ``In=_ih``. |
|
623 | 623 | |
|
624 | 624 | Additionally, global variables named _i<n> are dynamically created (<n> |
|
625 | 625 | being the prompt counter), so ``_i<n> == _ih[<n>] == In[<n>]``. |
|
626 | 626 | |
|
627 | 627 | For example, what you typed at prompt 14 is available as _i14, _ih[14] |
|
628 | 628 | and In[14]. |
|
629 | 629 | |
|
630 | 630 | This allows you to easily cut and paste multi line interactive prompts |
|
631 | 631 | by printing them out: they print like a clean string, without prompt |
|
632 | 632 | characters. You can also manipulate them like regular variables (they |
|
633 | 633 | are strings), modify or exec them (typing ``exec _i9`` will re-execute the |
|
634 | 634 | contents of input prompt 9). |
|
635 | 635 | |
|
636 | 636 | You can also re-execute multiple lines of input easily by using the |
|
637 | 637 | magic %rerun or %macro functions. The macro system also allows you to re-execute |
|
638 | 638 | previous lines which include magic function calls (which require special |
|
639 | 639 | processing). Type %macro? for more details on the macro system. |
|
640 | 640 | |
|
641 | 641 | A history function %hist allows you to see any part of your input |
|
642 | 642 | history by printing a range of the _i variables. |
|
643 | 643 | |
|
644 | 644 | You can also search ('grep') through your history by typing |
|
645 | 645 | ``%hist -g somestring``. This is handy for searching for URLs, IP addresses, |
|
646 | 646 | etc. You can bring history entries listed by '%hist -g' up for editing |
|
647 | 647 | with the %recall command, or run them immediately with %rerun. |
|
648 | 648 | |
|
649 | 649 | .. _output_caching: |
|
650 | 650 | |
|
651 | 651 | Output caching system |
|
652 | 652 | --------------------- |
|
653 | 653 | |
|
654 | 654 | For output that is returned from actions, a system similar to the input |
|
655 | 655 | cache exists but using _ instead of _i. Only actions that produce a |
|
656 | 656 | result (NOT assignments, for example) are cached. If you are familiar |
|
657 | 657 | with Mathematica, IPython's _ variables behave exactly like |
|
658 | 658 | Mathematica's % variables. |
|
659 | 659 | |
|
660 | 660 | The following GLOBAL variables always exist (so don't overwrite them!): |
|
661 | 661 | |
|
662 | 662 | * [_] (a single underscore) : stores previous output, like Python's |
|
663 | 663 | default interpreter. |
|
664 | 664 | * [__] (two underscores): next previous. |
|
665 | 665 | * [___] (three underscores): next-next previous. |
|
666 | 666 | |
|
667 | 667 | Additionally, global variables named _<n> are dynamically created (<n> |
|
668 | 668 | being the prompt counter), such that the result of output <n> is always |
|
669 | 669 | available as _<n> (don't use the angle brackets, just the number, e.g. |
|
670 | 670 | _21). |
|
671 | 671 | |
|
672 | 672 | These variables are also stored in a global dictionary (not a |
|
673 | 673 | list, since it only has entries for lines which returned a result) |
|
674 | 674 | available under the names _oh and Out (similar to _ih and In). So the |
|
675 | 675 | output from line 12 can be obtained as _12, Out[12] or _oh[12]. If you |
|
676 | 676 | accidentally overwrite the Out variable you can recover it by typing |
|
677 | 677 | 'Out=_oh' at the prompt. |
|
678 | 678 | |
|
679 | 679 | This system obviously can potentially put heavy memory demands on your |
|
680 | 680 | system, since it prevents Python's garbage collector from removing any |
|
681 | 681 | previously computed results. You can control how many results are kept |
|
682 | 682 | in memory with the option (at the command line or in your configuration |
|
683 | 683 | file) cache_size. If you set it to 0, the whole system is completely |
|
684 | 684 | disabled and the prompts revert to the classic '>>>' of normal Python. |
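For instance, a hedged sketch of both ways of setting it (the exact flag and trait spelling may vary with your IPython version)::

    $ ipython --cache-size=0

or, in :file:`ipython_config.py`:

.. sourcecode:: python

    c = get_config()
    c.InteractiveShell.cache_size = 1000   # keep at most 1000 results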
|
685 | 685 | |
|
686 | 686 | |
|
687 | 687 | Directory history |
|
688 | 688 | ----------------- |
|
689 | 689 | |
|
690 | 690 | Your history of visited directories is kept in the global list _dh, and |
|
691 | 691 | the magic %cd command can be used to go to any entry in that list. The |
|
692 | 692 | %dhist command allows you to view this history. Do ``cd -<TAB>`` to |
|
693 | 693 | conveniently view the directory history. |
|
694 | 694 | |
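For example (the paths shown are illustrative)::

    In [1]: %cd /tmp
    /tmp

    In [2]: %dhist
    Directory history (kept in _dh)
    0: /home/me
    1: /tmp

    In [3]: %cd -0       # go back to entry 0 of the history
    /home/me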
|
695 | 695 | |
|
696 | 696 | Automatic parentheses and quotes |
|
697 | 697 | -------------------------------- |
|
698 | 698 | |
|
699 | 699 | These features were adapted from Nathan Gray's LazyPython. They are |
|
700 | 700 | meant to allow less typing for common situations. |
|
701 | 701 | |
|
702 | 702 | |
|
703 | 703 | Automatic parentheses |
|
704 | 704 | +++++++++++++++++++++ |
|
705 | 705 | |
|
706 | 706 | Callable objects (i.e. functions, methods, etc.) can be invoked like this
|
707 | 707 | (notice the commas between the arguments):: |
|
708 | 708 | |
|
709 | 709 | In [1]: callable_ob arg1, arg2, arg3 |
|
710 | 710 | ------> callable_ob(arg1, arg2, arg3) |
|
711 | 711 | |
|
712 | 712 | You can force automatic parentheses by using '/' as the first character |
|
713 | 713 | of a line. For example:: |
|
714 | 714 | |
|
715 | 715 | In [2]: /globals # becomes 'globals()' |
|
716 | 716 | |
|
717 | 717 | Note that the '/' MUST be the first character on the line! This won't work:: |
|
718 | 718 | |
|
719 | 719 | In [3]: print /globals # syntax error |
|
720 | 720 | |
|
721 | 721 | In most cases the automatic algorithm should work, so you should rarely |
|
722 | 722 | need to explicitly invoke /. One notable exception is if you are trying |
|
723 | 723 | to call a function with a list of tuples as arguments (the parentheses
|
724 | 724 | will confuse IPython):: |
|
725 | 725 | |
|
726 | 726 | In [4]: zip (1,2,3),(4,5,6) # won't work |
|
727 | 727 | |
|
728 | 728 | but this will work:: |
|
729 | 729 | |
|
730 | 730 | In [5]: /zip (1,2,3),(4,5,6) |
|
731 | 731 | ------> zip ((1,2,3),(4,5,6)) |
|
732 | 732 | Out[5]: [(1, 4), (2, 5), (3, 6)] |
|
733 | 733 | |
|
734 | 734 | IPython tells you that it has altered your command line by displaying |
|
735 | 735 | the new command line preceded by ->. e.g.:: |
|
736 | 736 | |
|
737 | 737 | In [6]: callable list |
|
738 | 738 | ------> callable(list) |
|
739 | 739 | |
|
740 | 740 | |
|
741 | 741 | Automatic quoting |
|
742 | 742 | +++++++++++++++++ |
|
743 | 743 | |
|
744 | 744 | You can force automatic quoting of a function's arguments by using ',' |
|
745 | 745 | or ';' as the first character of a line. For example:: |
|
746 | 746 | |
|
747 | 747 | In [1]: ,my_function /home/me # becomes my_function("/home/me") |
|
748 | 748 | |
|
749 | 749 | If you use ';' the whole argument is quoted as a single string, while ',' splits |
|
750 | 750 | on whitespace:: |
|
751 | 751 | |
|
752 | 752 | In [2]: ,my_function a b c # becomes my_function("a","b","c") |
|
753 | 753 | |
|
754 | 754 | In [3]: ;my_function a b c # becomes my_function("a b c") |
|
755 | 755 | |
|
756 | 756 | Note that the ',' or ';' MUST be the first character on the line! This |
|
757 | 757 | won't work:: |
|
758 | 758 | |
|
759 | 759 | In [4]: x = ,my_function /home/me # syntax error |
|
760 | 760 | |
|
761 | 761 | IPython as your default Python environment |
|
762 | 762 | ========================================== |
|
763 | 763 | |
|
764 | 764 | Python honors the environment variable PYTHONSTARTUP and will execute at |
|
765 | 765 | startup the file referenced by this variable. If you put the following code at |
|
766 | 766 | the end of that file, then IPython will be your working environment anytime you |
|
767 | 767 | start Python:: |
|
768 | 768 | |
|
769 | 769 | from IPython.frontend.terminal.ipapp import launch_new_instance |
|
770 | 770 | launch_new_instance() |
|
771 | 771 | raise SystemExit |
|
772 | 772 | |
|
773 | 773 | The ``raise SystemExit`` is needed to exit Python when |
|
774 | 774 | it finishes, otherwise you'll be back at the normal Python '>>>' |
|
775 | 775 | prompt. |
|
776 | 776 | |
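For example, if you save that snippet in a file of your choosing, say
:file:`~/.pythonrc.py` (the name is arbitrary), you can point Python at it
from your shell::

    $ export PYTHONSTARTUP=~/.pythonrc.py
    $ python             # the plain interpreter now hands control to IPython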
|
777 | 777 | This is probably useful to developers who manage multiple Python |
|
778 | 778 | versions and don't want to have correspondingly multiple IPython |
|
779 | 779 | versions. Note that in this mode, there is no way to pass IPython any |
|
780 | 780 | command-line options, as those are trapped first by Python itself. |
|
781 | 781 | |
|
782 | 782 | .. _Embedding: |
|
783 | 783 | |
|
784 | 784 | Embedding IPython |
|
785 | 785 | ================= |
|
786 | 786 | |
|
787 | 787 | You can start a regular IPython session with |
|
788 | 788 | |
|
789 | 789 | .. sourcecode:: python |
|
790 | 790 | |
|
791 | 791 | import IPython |
|
792 | 792 | IPython.start_ipython() |
|
793 | 793 | |
|
794 | 794 | at any point in your program. This will load IPython configuration, |
|
795 | 795 | startup files, and everything, just as if it were a normal IPython session. |
|
796 | 796 | In addition to this, |
|
797 | 797 | it is possible to embed an IPython instance inside your own Python programs. |
|
798 | 798 | This allows you to dynamically evaluate the state of your code,
|
799 | 799 | operate with your variables, analyze them, etc. Note however that |
|
800 | 800 | any changes you make to values while in the shell do not propagate back |
|
801 | 801 | to the running code, so it is safe to modify your values because you |
|
802 | 802 | won't break your code in bizarre ways by doing so. |
|
803 | 803 | |
|
804 | 804 | .. note:: |
|
805 | 805 | |
|
806 | 806 | At present, embedding IPython cannot be done from inside IPython. |
|
807 | 807 | Run the code samples below outside IPython. |
|
808 | 808 | |
|
809 | 809 | This feature allows you to easily have a fully functional Python
|
810 | 810 | environment for doing object introspection anywhere in your code with a |
|
811 | 811 | simple function call. In some cases a simple print statement is enough, |
|
812 | 812 | but if you need to do more detailed analysis of a code fragment this |
|
813 | 813 | feature can be very valuable. |
|
814 | 814 | |
|
815 | 815 | It can also be useful in scientific computing situations where it is |
|
816 | 816 | common to need to do some automatic, computationally intensive part and |
|
817 | 817 | then stop to look at data, plots, etc. |
|
818 | 818 | Opening an IPython instance will give you full access to your data and |
|
819 | 819 | functions, and you can resume program execution once you are done with |
|
820 | 820 | the interactive part (perhaps to stop again later, as many times as |
|
821 | 821 | needed). |
|
822 | 822 | |
|
823 | 823 | The following code snippet is the bare minimum you need to include in |
|
824 | 824 | your Python programs for this to work (detailed examples follow later):: |
|
825 | 825 | |
|
826 | 826 | from IPython import embed |
|
827 | 827 | |
|
828 | 828 | embed() # this call anywhere in your program will start IPython |
|
829 | 829 | |
|
830 | 830 | .. note:: |
|
831 | 831 | |
|
832 | 832 | As of 0.13, you can embed an IPython *kernel*, for use with qtconsole, |
|
833 | 833 | etc. via ``IPython.embed_kernel()`` instead of ``IPython.embed()``. |
|
834 | 834 | It should function just the same as regular embed, but you connect

835 | 835 | an external frontend to it, rather than having IPython start up in the

836 | 836 | local terminal.
|
837 | 837 | |
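A minimal sketch of the kernel-embedding variant (the surrounding function is
hypothetical; you would then attach an external frontend to the kernel, e.g.
with ``ipython qtconsole --existing``):

.. sourcecode:: python

    from IPython import embed_kernel

    def process(data):
        # ... do some work ...
        # pause here and expose this function's namespace to a frontend
        embed_kernel(local_ns=locals())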
|
838 | 838 | You can run embedded instances even in code which is itself being run at |
|
839 | 839 | the IPython interactive prompt with '%run <filename>'. Since it's easy |
|
840 | 840 | to get lost as to where you are (in your top-level IPython or in your |
|
841 | 841 | embedded one), it's a good idea in such cases to set the in/out prompts |
|
842 | 842 | to something different for the embedded instances. The code examples |
|
843 | 843 | below illustrate this. |
|
844 | 844 | |
|
845 | 845 | You can also have multiple IPython instances in your program and open |
|
846 | 846 | them separately, for example with different options for data |
|
847 | 847 | presentation. If you close and open the same instance multiple times, |
|
848 | 848 | its prompt counters simply continue from each execution to the next. |
|
849 | 849 | |
|
850 | 850 | Please look at the docstrings in the :mod:`~IPython.frontend.terminal.embed` |
|
851 | 851 | module for more details on the use of this system. |
|
852 | 852 | |
|
853 | 853 | The following sample file illustrating how to use the embedding |
|
854 | 854 | functionality is provided in the examples directory as example-embed.py. |
|
855 | 855 | It should be fairly self-explanatory: |
|
856 | 856 | |
|
857 | 857 | .. literalinclude:: ../../../examples/core/example-embed.py |
|
858 | 858 | :language: python |
|
859 | 859 | |
|
860 | 860 | Once you understand how the system functions, you can use the following |
|
861 | 861 | code fragments in your programs which are ready for cut and paste: |
|
862 | 862 | |
|
863 | 863 | .. literalinclude:: ../../../examples/core/example-embed-short.py |
|
864 | 864 | :language: python |
|
865 | 865 | |
|
866 | 866 | Using the Python debugger (pdb) |
|
867 | 867 | =============================== |
|
868 | 868 | |
|
869 | 869 | Running entire programs via pdb |
|
870 | 870 | ------------------------------- |
|
871 | 871 | |
|
872 | 872 | pdb, the Python debugger, is a powerful interactive debugger which |
|
873 | 873 | allows you to step through code, set breakpoints, watch variables, |
|
874 | 874 | etc. IPython makes it very easy to start any script under the control |
|
875 | 875 | of pdb, regardless of whether you have wrapped it into a 'main()' |
|
876 | 876 | function or not. For this, simply type '%run -d myscript' at an |
|
877 | 877 | IPython prompt. See the %run command's documentation (via '%run?' or |
|
878 | 878 | in Sec. magic_) for more details, including how to control where pdb
|
879 | 879 | will stop execution first. |
|
880 | 880 | |
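For example (the file name and line number are illustrative)::

    In [1]: %run -d myscript.py        # stop at the first line of the script

    In [2]: %run -d -b40 myscript.py   # stop instead at a breakpoint on line 40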
|
881 | 881 | For more information on the use of the pdb debugger, read the included |
|
882 | 882 | pdb.doc file (part of the standard Python distribution). On a stock |
|
883 | 883 | Linux system it is located at /usr/lib/python2.3/pdb.doc, but the |
|
884 | 884 | easiest way to read it is by using the help() function of the pdb module |
|
885 | 885 | as follows (in an IPython prompt):: |
|
886 | 886 | |
|
887 | 887 | In [1]: import pdb |
|
888 | 888 | In [2]: pdb.help() |
|
889 | 889 | |
|
890 | 890 | This will load the pdb.doc document in a file viewer for you automatically. |
|
891 | 891 | |
|
892 | 892 | |
|
893 | 893 | Automatic invocation of pdb on exceptions |
|
894 | 894 | ----------------------------------------- |
|
895 | 895 | |
|
896 | 896 | IPython, if started with the ``--pdb`` option (or if the option is set in |
|
897 | 897 | your config file), can call the Python pdb debugger every time your code
|
898 | 898 | triggers an uncaught exception. This feature |
|
899 | 899 | can also be toggled at any time with the %pdb magic command. This can be |
|
900 | 900 | extremely useful in order to find the origin of subtle bugs, because pdb |
|
901 | 901 | opens up at the point in your code which triggered the exception, and |
|
902 | 902 | while your program is at this point 'dead', all the data is still |
|
903 | 903 | available and you can walk up and down the stack frame and understand |
|
904 | 904 | the origin of the problem. |
|
905 | 905 | |
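For example, toggling the behavior in a running session::

    In [1]: %pdb on
    Automatic pdb calling has been turned ON

    In [2]: %pdb off
    Automatic pdb calling has been turned OFF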
|
906 | 906 | Furthermore, you can use these debugging facilities both with the |
|
907 | 907 | embedded IPython mode and without IPython at all. For an embedded shell |
|
908 | 908 | (see sec. Embedding_), simply call the constructor with |
|
909 | 909 | ``--pdb`` in the argument string and pdb will automatically be called if an |
|
910 | 910 | uncaught exception is triggered by your code. |
|
911 | 911 | |
|
912 | 912 | For stand-alone use of the feature in your programs which do not use |
|
913 | 913 | IPython at all, put the following lines toward the top of your 'main' |
|
914 | 914 | routine:: |
|
915 | 915 | |
|
916 | 916 | import sys |
|
917 | 917 | from IPython.core import ultratb |
|
918 | 918 | sys.excepthook = ultratb.FormattedTB(mode='Verbose', |
|
919 | 919 | color_scheme='Linux', call_pdb=1) |
|
920 | 920 | |
|
921 | 921 | The mode keyword can be either 'Verbose' or 'Plain', giving either very |
|
922 | 922 | detailed or normal tracebacks respectively. The color_scheme keyword can |
|
923 | 923 | be one of 'NoColor', 'Linux' (default) or 'LightBG'. These are the same |
|
924 | 924 | options which can be set in IPython with ``--colors`` and ``--xmode``. |
|
925 | 925 | |
|
926 | 926 | This will give any of your programs detailed, colored tracebacks with |
|
927 | 927 | automatic invocation of pdb. |
|
928 | 928 | |
|
929 | 929 | |
|
930 | 930 | Extensions for syntax processing |
|
931 | 931 | ================================ |
|
932 | 932 | |
|
933 | 933 | This isn't for the faint of heart, because the potential for breaking |
|
934 | 934 | things is quite high. But it can be a very powerful and useful feature. |
|
935 | 935 | In a nutshell, you can redefine the way IPython processes the user input |
|
936 | 936 | line to accept new, special extensions to the syntax without needing to |
|
937 | 937 | change any of IPython's own code. |
|
938 | 938 | |
|
939 | 939 | In the IPython/extensions directory you will find some examples |
|
940 | 940 | supplied, which we will briefly describe now. These can be used 'as is' |
|
941 | 941 | (and both provide very useful functionality), or you can use them as a |
|
942 | 942 | starting point for writing your own extensions. |
|
943 | 943 | |
|
944 | 944 | .. _pasting_with_prompts: |
|
945 | 945 | |
|
946 | 946 | Pasting of code starting with Python or IPython prompts |
|
947 | 947 | ------------------------------------------------------- |
|
948 | 948 | |
|
949 | 949 | IPython is smart enough to filter out input prompts, be they plain Python ones |
|
950 | 950 | (``>>>`` and ``...``) or IPython ones (``In [N]:`` and ``...:``). You can |
|
951 | 951 | therefore copy and paste from existing interactive sessions without worry. |
|
952 | 952 | |
|
953 | 953 | The following is a 'screenshot' of how things work, copying an example from the |
|
954 | 954 | standard Python tutorial:: |
|
955 | 955 | |
|
956 | 956 | In [1]: >>> # Fibonacci series: |
|
957 | 957 | |
|
958 | 958 | In [2]: ... # the sum of two elements defines the next |
|
959 | 959 | |
|
960 | 960 | In [3]: ... a, b = 0, 1 |
|
961 | 961 | |
|
962 | 962 | In [4]: >>> while b < 10: |
|
963 | 963 | ...: ... print b |
|
964 | 964 | ...: ... a, b = b, a+b |
|
965 | 965 | ...: |
|
966 | 966 | 1 |
|
967 | 967 | 1 |
|
968 | 968 | 2 |
|
969 | 969 | 3 |
|
970 | 970 | 5 |
|
971 | 971 | 8 |
|
972 | 972 | |
|
973 | 973 | And pasting from IPython sessions works equally well:: |
|
974 | 974 | |
|
975 | 975 | In [1]: In [5]: def f(x): |
|
976 | 976 | ...: ...: "A simple function" |
|
977 | 977 | ...: ...: return x**2 |
|
978 | 978 | ...: ...: |
|
979 | 979 | |
|
980 | 980 | In [2]: f(3) |
|
981 | 981 | Out[2]: 9 |
|
982 | 982 | |
|
983 | 983 | .. _gui_support: |
|
984 | 984 | |
|
985 | 985 | GUI event loop support |
|
986 | 986 | ====================== |
|
987 | 987 | |
|
988 | 988 | .. versionadded:: 0.11 |
|
989 | 989 | The ``%gui`` magic and :mod:`IPython.lib.inputhook`. |
|
990 | 990 | |
|
991 | 991 | IPython has excellent support for working interactively with Graphical User |
|
992 | 992 | Interface (GUI) toolkits, such as wxPython, PyQt4/PySide, PyGTK and Tk. This is |
|
993 | 993 | implemented using Python's builtin ``PyOSInputHook``. This implementation
|
994 | 994 | is extremely robust compared to our previous thread-based version. The |
|
995 | 995 | advantages of this are: |
|
996 | 996 | |
|
997 | 997 | * GUIs can be enabled and disabled dynamically at runtime. |
|
998 | 998 | * The active GUI can be switched dynamically at runtime. |
|
999 | 999 | * In some cases, multiple GUIs can run simultaneously with no problems. |
|
1000 | 1000 | * There is a developer API in :mod:`IPython.lib.inputhook` for customizing |
|
1001 | 1001 | all of these things. |
|
1002 | 1002 | |
|
1003 | 1003 | For users, enabling GUI event loop integration is simple. You simply use the
|
1004 | 1004 | ``%gui`` magic as follows:: |
|
1005 | 1005 | |
|
1006 | 1006 | %gui [GUINAME] |
|
1007 | 1007 | |
|
1008 | 1008 | With no arguments, ``%gui`` removes all GUI support. Valid ``GUINAME`` |
|
1009 | 1009 | arguments are ``wx``, ``qt``, ``gtk`` and ``tk``. |
|
1010 | 1010 | |
|
1011 | 1011 | Thus, to use wxPython interactively and create a running :class:`wx.App` |
|
1012 | 1012 | object, do:: |
|
1013 | 1013 | |
|
1014 | 1014 | %gui wx |
|
1015 | 1015 | |
|
1016 | 1016 | For information on IPython's Matplotlib integration (and the ``matplotlib`` |
|
1017 | 1017 | mode) see :ref:`this section <matplotlib_support>`. |
|
1018 | 1018 | |
|
1019 | 1019 | For developers who want to use IPython's GUI event loop integration from

1020 | 1020 | their own code, these capabilities are exposed in the
|
1021 | 1021 | :mod:`IPython.lib.inputhook` and :mod:`IPython.lib.guisupport` modules. |
|
1022 | 1022 | Interested developers should see the module docstrings for more information, |
|
1023 | 1023 | but there are a few points that should be mentioned here. |
|
1024 | 1024 | |
|
1025 | 1025 | First, the ``PyOSInputHook`` approach only works in command-line settings

1026 | 1026 | where readline is activated. The integration with the various event loops
|
1027 | 1027 | is handled somewhat differently (and more simply) when using the standalone |
|
1028 | 1028 | kernel, as in the qtconsole and notebook. |
|
1029 | 1029 | |
|
1030 | 1030 | Second, when using the ``PyOSInputHook`` approach, a GUI application should |
|
1031 | 1031 | *not* start its event loop. Instead, all of this is handled by the
|
1032 | 1032 | ``PyOSInputHook``. This means that applications that are meant to be used both |
|
1033 | 1033 | in IPython and as standalone apps need to have special code to detect how the
|
1034 | 1034 | application is being run. We highly recommend using IPython's support for this. |
|
1035 | 1035 | Since the details vary slightly between toolkits, we point you to the various |
|
1036 | examples in our source directory :file:`

1036 | examples in our source directory :file:`examples/lib` that demonstrate
|
1037 | 1037 | these capabilities. |
|
1038 | 1038 | |
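As a rough sketch of the pattern those examples follow (not their exact
contents), a wx application can use the helpers in
:mod:`IPython.lib.guisupport` like this:

.. sourcecode:: python

    import wx
    from IPython.lib.guisupport import (get_app_wx,
        is_event_loop_running_wx, start_event_loop_wx)

    def main():
        app = get_app_wx()                  # reuse IPython's wx.App if it exists
        frame = wx.Frame(None, title='Hello from IPython')
        frame.Show()
        if not is_event_loop_running_wx():  # standalone: run the loop ourselves
            start_event_loop_wx(app)        # under %gui wx, the inputhook runs it

    if __name__ == '__main__':
        main()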
|
1039 | 1039 | Third, unlike previous versions of IPython, we no longer "hijack" (replace |
|
1040 | 1040 | them with no-ops) the event loops. This is done to allow applications that |
|
1041 | 1041 | actually need to run the real event loops to do so. This is often needed to |
|
1042 | 1042 | process pending events at critical points. |
|
1043 | 1043 | |
|
1044 | 1044 | Finally, we also have a number of examples in our source directory |
|
1045 | :file:`

1045 | :file:`examples/lib` that demonstrate these capabilities.
|
1046 | 1046 | |
|
1047 | 1047 | PyQt and PySide |
|
1048 | 1048 | --------------- |
|
1049 | 1049 | |
|
1050 | 1050 | .. attempt at explanation of the complete mess that is Qt support |
|
1051 | 1051 | |
|
1052 | 1052 | When you use ``--gui=qt`` or ``--matplotlib=qt``, IPython can work with either |
|
1053 | 1053 | PyQt4 or PySide. There are three options for configuration here, because |
|
1054 | 1054 | PyQt4 has two APIs for QString and QVariant - v1, which is the default on |
|
1055 | 1055 | Python 2, and the more natural v2, which is the only API supported by PySide. |
|
1056 | 1056 | v2 is also the default for PyQt4 on Python 3. IPython's code for the QtConsole |
|
1057 | 1057 | uses v2, but you can still use any interface in your code, since the |
|
1058 | 1058 | Qt frontend is in a different process. |
|
1059 | 1059 | |
|
1060 | 1060 | The default will be to import PyQt4 without configuration of the APIs, thus |
|
1061 | 1061 | matching what most applications would expect. It will fall back to PySide if
|
1062 | 1062 | PyQt4 is unavailable. |
|
1063 | 1063 | |
|
1064 | 1064 | If specified, IPython will respect the environment variable ``QT_API`` used |
|
1065 | 1065 | by ETS. ETS 4.0 also works with both PyQt4 and PySide, but it requires |
|
1066 | 1066 | PyQt4 to use its v2 API. So if ``QT_API=pyside``, PySide will be used,
|
1067 | 1067 | and if ``QT_API=pyqt`` then PyQt4 will be used *with the v2 API* for |
|
1068 | 1068 | QString and QVariant, so ETS codes like MayaVi will also work with IPython. |
|
1069 | 1069 | |
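For example, from the system shell::

    $ QT_API=pyside ipython --gui=qt    # use PySide
    $ QT_API=pyqt ipython --gui=qt      # use PyQt4 with the v2 API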
|
1070 | 1070 | If you launch IPython in matplotlib mode with ``ipython --matplotlib=qt``, |
|
1071 | 1071 | then IPython will ask matplotlib which Qt library to use (only if QT_API is |
|
1072 | 1072 | *not set*), via the 'backend.qt4' rcParam. If matplotlib is version 1.0.1 or |
|
1073 | 1073 | older, then IPython will always use PyQt4 without setting the v2 APIs, since |
|
1074 | 1074 | neither v2 PyQt nor PySide work. |
|
1075 | 1075 | |
|
1076 | 1076 | .. warning:: |
|
1077 | 1077 | |
|
1078 | 1078 | Note that this means for ETS 4 to work with PyQt4, ``QT_API`` *must* be set |
|
1079 | 1079 | to work with IPython's qt integration, because otherwise PyQt4 will be |
|
1080 | 1080 | loaded in an incompatible mode. |
|
1081 | 1081 | |
|
1082 | 1082 | It also means that you must *not* have ``QT_API`` set if you want to |
|
1083 | 1083 | use ``--gui=qt`` with code that requires PyQt4 API v1. |
|
1084 | 1084 | |
|
1085 | 1085 | |
|
1086 | 1086 | .. _matplotlib_support: |
|
1087 | 1087 | |
|
1088 | 1088 | Plotting with matplotlib |
|
1089 | 1089 | ======================== |
|
1090 | 1090 | |
|
1091 | 1091 | `Matplotlib`_ provides high quality 2D and 3D plotting for Python. Matplotlib |
|
1092 | 1092 | can produce plots on screen using a variety of GUI toolkits, including Tk, |
|
1093 | 1093 | PyGTK, PyQt4 and wxPython. It also provides a number of commands useful for |
|
1094 | 1094 | scientific computing, all with a syntax compatible with that of the popular |
|
1095 | 1095 | Matlab program. |
|
1096 | 1096 | |
|
1097 | 1097 | To start IPython with matplotlib support, use the ``--matplotlib`` switch. If |
|
1098 | 1098 | IPython is already running, you can run the ``%matplotlib`` magic. If no |
|
1099 | 1099 | arguments are given, IPython will automatically detect your choice of |
|
1100 | 1100 | matplotlib backend. You can also request a specific backend with |
|
1101 | 1101 | ``%matplotlib backend``, where ``backend`` must be one of: 'tk', 'qt', 'wx', |
|
1102 | 1102 | 'gtk', 'osx'. In the web notebook and Qt console, 'inline' is also a valid |
|
1103 | 1103 | backend value, which produces static figures inlined inside the application |
|
1104 | 1104 | window instead of matplotlib's interactive figures that live in separate |
|
1105 | 1105 | windows. |
|
1106 | 1106 | |
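For example, to get interactive Qt figures you can start IPython with::

    $ ipython --matplotlib=qt

or, from within a running session, type::

    In [1]: %matplotlib qt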
|
1107 | 1107 | .. _Matplotlib: http://matplotlib.sourceforge.net |
|
1108 | 1108 | |
|
1109 | 1109 | .. _interactive_demos: |
|
1110 | 1110 | |
|
1111 | 1111 | Interactive demos with IPython |
|
1112 | 1112 | ============================== |
|
1113 | 1113 | |
|
1114 | 1114 | IPython ships with a basic system for running scripts interactively in |
|
1115 | 1115 | sections, useful when presenting code to audiences. A few tags embedded |
|
1116 | 1116 | in comments (so that the script remains valid Python code) divide a file |
|
1117 | 1117 | into separate blocks, and the demo can be run one block at a time, with |
|
1118 | 1118 | IPython printing (with syntax highlighting) the block before executing |
|
1119 | 1119 | it, and returning to the interactive prompt after each block. The |
|
1120 | 1120 | interactive namespace is updated after each block is run with the |
|
1121 | 1121 | contents of the demo's namespace. |
|
1122 | 1122 | |
|
1123 | 1123 | This allows you to show a piece of code, run it, and then interactively

1124 | 1124 | execute commands based on the variables just created. Once you
|
1125 | 1125 | want to continue, you simply execute the next block of the demo. The |
|
1126 | 1126 | following listing shows the markup necessary for dividing a script into |
|
1127 | 1127 | sections for execution as a demo: |
|
1128 | 1128 | |
|
1129 | 1129 | .. literalinclude:: ../../../examples/lib/example-demo.py |
|
1130 | 1130 | :language: python |
|
1131 | 1131 | |
|
1132 | 1132 | In order to run a file as a demo, you must first make a Demo object out |
|
1133 | 1133 | of it. If the file is named myscript.py, the following code will make a |
|
1134 | 1134 | demo:: |
|
1135 | 1135 | |
|
1136 | 1136 | from IPython.lib.demo import Demo |
|
1137 | 1137 | |
|
1138 | 1138 | mydemo = Demo('myscript.py') |
|
1139 | 1139 | |
|
1140 | 1140 | This creates the mydemo object, whose blocks you run one at a time by |
|
1141 | 1141 | simply calling the object with no arguments. If you have autocall active |
|
1142 | 1142 | in IPython (the default), all you need to do is type:: |
|
1143 | 1143 | |
|
1144 | 1144 | mydemo |
|
1145 | 1145 | |
|
1146 | 1146 | and IPython will call it, executing each block. Demo objects can be |
|
1147 | 1147 | restarted; you can move forward or back, skip blocks, re-execute the
|
1148 | 1148 | last block, etc. Simply use the Tab key on a demo object to see its |
|
1149 | 1149 | methods, and call '?' on them to see their docstrings for more usage |
|
1150 | 1150 | details. In addition, the demo module itself contains a comprehensive |
|
1151 | 1151 | docstring, which you can access via:: |
|
1152 | 1152 | |
|
1153 | 1153 | from IPython.lib import demo |
|
1154 | 1154 | |
|
1155 | 1155 | demo? |
|
1156 | 1156 | |
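As a quick sketch, a few of the navigation methods look like this (see the
docstrings for the full list)::

    mydemo()           # run the next block
    mydemo.again()     # re-execute the last block
    mydemo.back(1)     # move back one block
    mydemo.jump(2)     # jump forward, skipping blocks
    mydemo.reset()     # rewind the demo to the beginning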
|
1157 | 1157 | Limitations: It is important to note that these demos are limited to |
|
1158 | 1158 | fairly simple uses. In particular, you cannot break up sections within |
|
1159 | 1159 | indented code (loops, if statements, function definitions, etc.).
|
1160 | 1160 | Supporting something like this would basically require tracking the |
|
1161 | 1161 | internal execution state of the Python interpreter, so only top-level |
|
1162 | 1162 | divisions are allowed. If you want to be able to open an IPython |
|
1163 | 1163 | instance at an arbitrary point in a program, you can use IPython's |
|
1164 | 1164 | embedding facilities; see :func:`IPython.embed` for details.
|
1165 | 1165 |
@@ -1,150 +1,150 b'' | |||
|
1 | 1 | .. _parallel_asyncresult: |
|
2 | 2 | |
|
3 | 3 | ====================== |
|
4 | 4 | The AsyncResult object |
|
5 | 5 | ====================== |
|
6 | 6 | |
|
7 | 7 | In non-blocking mode, :meth:`apply` submits the command to be executed and |
|
8 | 8 | then returns a :class:`~.AsyncResult` object immediately. The |
|
9 | 9 | AsyncResult object gives you a way of getting a result at a later |
|
10 | 10 | time through its :meth:`get` method, but it also collects metadata |
|
11 | 11 | on execution. |
|
12 | 12 | |
|
13 | 13 | |
|
14 | 14 | Beyond multiprocessing's AsyncResult |
|
15 | 15 | ==================================== |
|
16 | 16 | |
|
17 | 17 | .. Note:: |
|
18 | 18 | |
|
19 | 19 | The :class:`~.AsyncResult` object provides a superset of the interface in |
|
20 | 20 | :py:class:`multiprocessing.pool.AsyncResult`. See the |
|
21 | 21 | `official Python documentation <http://docs.python.org/library/multiprocessing#multiprocessing.pool.AsyncResult>`_ |
|
22 | 22 | for more on the basics of this interface. |
|
23 | 23 | |
|
24 | 24 | Our AsyncResult objects add a number of convenient features for working with |
|
25 | 25 | parallel results, beyond what is provided by the original AsyncResult. |
|
26 | 26 | |
|
27 | 27 | |
|
28 | 28 | get_dict |
|
29 | 29 | -------- |
|
30 | 30 | |
|
31 | 31 | First is :meth:`.AsyncResult.get_dict`, which pulls results as a dictionary
|
32 | 32 | keyed by engine_id, rather than a flat list. This is useful for quickly |
|
33 | 33 | coordinating or distributing information about all of the engines. |
|
34 | 34 | |
|
35 | 35 | As an example, here is a quick call that gives every engine a dict showing |
|
36 | 36 | the PID of every other engine: |
|
37 | 37 | |
|
38 | 38 | .. sourcecode:: ipython |
|
39 | 39 | |
|
40 | 40 | In [10]: ar = rc[:].apply_async(os.getpid) |
|
41 | 41 | In [11]: pids = ar.get_dict() |
|
42 | 42 | In [12]: rc[:]['pid_map'] = pids |
|
43 | 43 | |
|
44 | 44 | This trick is particularly useful when setting up inter-engine communication, |
|
45 | 45 | as in IPython's :file:`examples/parallel/interengine` examples. |
|
46 | 46 | |
|
47 | 47 | |
|
48 | 48 | Metadata |
|
49 | 49 | ======== |
|
50 | 50 | |
|
51 | 51 | IPython.parallel tracks some metadata about the tasks, which is stored |
|
52 | 52 | in the :attr:`.Client.metadata` dict. The AsyncResult object gives you an |
|
53 | 53 | interface for this information as well, including timestamps, stdout/err,
|
54 | 54 | and engine IDs. |
|
55 | 55 | |
|
56 | 56 | |
|
57 | 57 | Timing |
|
58 | 58 | ------ |
|
59 | 59 | |
|
60 | 60 | IPython tracks various timestamps as :py:class:`.datetime` objects, |
|
61 | 61 | and the AsyncResult object has a few properties that turn these into useful |
|
62 | 62 | times (in seconds as floats). |
|
63 | 63 | |
|
64 | 64 | For use while the tasks are still pending (a short example follows this list):
|
65 | 65 | |
|
66 | 66 | * :attr:`ar.elapsed` is just the elapsed seconds since submission, for use |
|
67 | 67 | before the AsyncResult is complete. |
|
68 | 68 | * :attr:`ar.progress` is the number of tasks that have completed. Fractional progress |
|
69 | 69 | would be:: |
|
70 | 70 | |
|
71 | 71 | 1.0 * ar.progress / len(ar) |
|
72 | 72 | |
|
73 | 73 | * :meth:`AsyncResult.wait_interactive` will wait for the result to finish, but |
|
74 | 74 | print out status updates on progress and elapsed time while it waits. |
|
75 | 75 | |
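For instance, a crude progress monitor, reusing the ``rc`` client used above
(the numbers shown are illustrative):

.. sourcecode:: ipython

    In [14]: import time

    In [15]: ar = rc[:].apply_async(time.sleep, 3)

    In [16]: ar.elapsed, ar.progress      # seconds since submission, tasks done
    Out[16]: (1.1242, 2)

    In [17]: ar.wait_interactive()        # block, printing progress as it goes
       4/4 tasks finished after    3 s
    done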
|
76 | 76 | For use after the tasks are done: |
|
77 | 77 | |
|
78 | 78 | * :attr:`ar.serial_time` is the sum of the computation time of all of the tasks |
|
79 | 79 | done in parallel. |
|
80 | 80 | * :attr:`ar.wall_time` is the time between the first task submitted and last result |
|
81 | 81 | received. This is the actual cost of computation, including IPython overhead. |
|
82 | 82 | |
|
83 | 83 | |
|
84 | 84 | .. note:: |
|
85 | 85 | |
|
86 | 86 | wall_time is only precise if the Client is waiting for results when |
|
87 | 87 | the task finished, because the `received` timestamp is made when the result is |
|
88 | 88 | unpacked by the Client, triggered by the :meth:`~Client.spin` call. If you |
|
89 | 89 | are doing work in the Client, and not waiting/spinning, then `received` might |
|
90 | 90 | be artificially high. |
|
91 | 91 | |
|
92 | 92 | An often interesting metric is the time it actually took to do the work in parallel

93 | 93 | relative to the serial computation, and this can be computed simply with
|
94 | 94 | |
|
95 | 95 | .. sourcecode:: python |
|
96 | 96 | |
|
97 | 97 | speedup = ar.serial_time / ar.wall_time |
|
98 | 98 | |
|
99 | 99 | |
|
100 | 100 | Map results are iterable! |
|
101 | 101 | ========================= |
|
102 | 102 | |
|
103 | 103 | When an AsyncResult object has multiple results (e.g. the :class:`~AsyncMapResult` |
|
104 | 104 | object), you can actually iterate through the results themselves, and act on them as they arrive:
|
105 | 105 | |
|
106 | 106 | .. literalinclude:: ../../../examples/parallel/itermapresult.py |
|
107 | 107 | :language: python |
|
108 | 108 | :lines: 20-67 |
|
109 | 109 | |
|
110 | 110 | That is to say, if you treat an AsyncMapResult as if it were a list of your actual |
|
111 | 111 | results, it should behave as you would expect, with the only difference being |
|
112 | 112 | that you can start iterating through the results before they have even been computed. |
|
113 | 113 | |
|
114 | 114 | This lets you do a dumb version of map/reduce with the builtin Python functions, |
|
115 | 115 | and the only difference between doing this locally and doing it remotely in parallel |
|
116 | 116 | is using the asynchronous view.map instead of the builtin map. |
|
117 | 117 | |
|
118 | 118 | |
|
119 | 119 | Here is a simple one-line RMS (root-mean-square) implemented with Python's builtin map/reduce. |
|
120 | 120 | |
|
121 | 121 | .. sourcecode:: ipython |
|
122 | 122 | |
|
123 | 123 | In [38]: X = np.linspace(0,100) |
|
124 | 124 | |
|
125 | 125 | In [39]: from math import sqrt |
|
126 | 126 | |
|
127 | 127 | In [40]: add = lambda a,b: a+b |
|
128 | 128 | |
|
129 | 129 | In [41]: sq = lambda x: x*x |
|
130 | 130 | |
|
131 | 131 | In [42]: sqrt(reduce(add, map(sq, X)) / len(X)) |
|
132 | 132 | Out[42]: 58.028845747399714 |
|
133 | 133 | |
|
134 | 134 | In [43]: sqrt(reduce(add, view.map(sq, X)) / len(X)) |
|
135 | 135 | Out[43]: 58.028845747399714 |
|
136 | 136 | |
|
137 | 137 | To break that down: |
|
138 | 138 | |
|
139 | 139 | 1. ``map(sq, X)`` Compute the square of each element in the list (locally, or in parallel) |
|
140 | 140 | 2. ``reduce(add, sqX) / len(X)`` compute the mean by summing over the list (or AsyncMapResult) |
|
141 | 141 | and dividing by the size |
|
142 | 142 | 3. take the square root of the resulting number |
|
143 | 143 | |
|
144 | 144 | .. seealso:: |
|
145 | 145 | |
|
146 | 146 | When AsyncResult or the AsyncMapResult don't provide what you need (for instance, |
|
147 | 147 | handling individual results as they arrive, but with metadata), you can always |
|
148 | 148 | just split the original result's ``msg_ids`` attribute, and handle them as you like. |
|
149 | 149 | |
|
150 | For an example of this, see :file:`

150 | For an example of this, see :file:`examples/parallel/customresult.py`
@@ -1,177 +1,177 b'' | |||
|
1 | 1 | .. _dag_dependencies: |
|
2 | 2 | |
|
3 | 3 | ================ |
|
4 | 4 | DAG Dependencies |
|
5 | 5 | ================ |
|
6 | 6 | |
|
7 | 7 | Often, a parallel workflow is described in terms of a `Directed Acyclic Graph

8 | 8 | <http://en.wikipedia.org/wiki/Directed_acyclic_graph>`_ or DAG. A popular library

9 | 9 | for working with graphs is NetworkX_. Here, we will walk through a demo mapping

10 | 10 | an nx DAG to task dependencies.
|
11 | 11 | |
|
12 | 12 | The full script that runs this demo can be found in |
|
13 | :file:`

13 | :file:`examples/parallel/dagdeps.py`.
|
14 | 14 | |
|
15 | 15 | Why are DAGs good for task dependencies? |
|
16 | 16 | ---------------------------------------- |
|
17 | 17 | |
|
18 | 18 | The 'G' in DAG is 'Graph'. A Graph is a collection of **nodes** and **edges** that connect |
|
19 | 19 | the nodes. For our purposes, each node would be a task, and each edge would be a |
|
20 | 20 | dependency. The 'D' in DAG stands for 'Directed'. This means that each edge has a |
|
21 | 21 | direction associated with it. So we can interpret the edge (a,b) as meaning that b depends |
|
22 | 22 | on a, whereas the edge (b,a) would mean a depends on b. The 'A' is 'Acyclic', meaning that |
|
23 | 23 | there must not be any closed loops in the graph. This is important for dependencies, |
|
24 | 24 | because if a loop were closed, then a task could ultimately depend on itself, and never be |
|
25 | 25 | able to run. If your workflow can be described as a DAG, then it is impossible for your |
|
26 | 26 | dependencies to cause a deadlock. |
|
27 | 27 | |
|
28 | 28 | A Sample DAG |
|
29 | 29 | ------------ |
|
30 | 30 | |
|
31 | 31 | Here, we have a very simple 5-node DAG: |
|
32 | 32 | |
|
33 | 33 | .. figure:: figs/simpledag.* |
|
34 | 34 | :width: 600px |
|
35 | 35 | |
|
36 | 36 | With NetworkX, an arrow is just a fattened bit on the edge. Here, we can see that task 0 |
|
37 | 37 | depends on nothing, and can run immediately. 1 and 2 depend on 0; 3 depends on |
|
38 | 38 | 1 and 2; and 4 depends only on 1. |
|
39 | 39 | |
|
40 | 40 | A possible sequence of events for this workflow: |
|
41 | 41 | |
|
42 | 42 | 0. Task 0 can run right away |
|
43 | 43 | 1. 0 finishes, so 1,2 can start |
|
44 | 44 | 2. 1 finishes, 3 is still waiting on 2, but 4 can start right away |
|
45 | 45 | 3. 2 finishes, and 3 can finally start |
|
46 | 46 | |
|
47 | 47 | |
|
48 | 48 | Further, taking failures into account, assuming all dependencies are run with the default |
|
49 | 49 | `success=True,failure=False`, the following cases would occur for each node's failure: |
|
50 | 50 | |
|
51 | 51 | 0. fails: all other tasks fail as Impossible |
|
52 | 52 | 1. 2 can still succeed, but 3,4 are unreachable |
|
53 | 53 | 2. 3 becomes unreachable, but 4 is unaffected |
|
54 | 54 | 3. and 4. are terminal, and can have no effect on other nodes |
|
55 | 55 | |
|
56 | 56 | The code to generate the simple DAG: |
|
57 | 57 | |
|
58 | 58 | .. sourcecode:: python |
|
59 | 59 | |
|
60 | 60 | import networkx as nx |
|
61 | 61 | |
|
62 | 62 | G = nx.DiGraph() |
|
63 | 63 | |
|
64 | 64 | # add 5 nodes, labeled 0-4: |
|
65 | 65 | map(G.add_node, range(5)) |
|
66 | 66 | # 1,2 depend on 0: |
|
67 | 67 | G.add_edge(0,1) |
|
68 | 68 | G.add_edge(0,2) |
|
69 | 69 | # 3 depends on 1,2 |
|
70 | 70 | G.add_edge(1,3) |
|
71 | 71 | G.add_edge(2,3) |
|
72 | 72 | # 4 depends on 1 |
|
73 | 73 | G.add_edge(1,4) |
|
74 | 74 | |
|
75 | 75 | # now draw the graph: |
|
76 | 76 | pos = { 0 : (0,0), 1 : (1,1), 2 : (-1,1), |
|
77 | 77 | 3 : (0,2), 4 : (2,2)} |
|
78 | 78 | nx.draw(G, pos, edge_color='r') |
|
79 | 79 | |
|
80 | 80 | |
|
81 | 81 | For demonstration purposes, we have a function that generates a random DAG with a given |
|
82 | 82 | number of nodes and edges. |
|
83 | 83 | |
|
84 | 84 | .. literalinclude:: ../../../examples/parallel/dagdeps.py |
|
85 | 85 | :language: python |
|
86 | 86 | :lines: 20-36 |
|
87 | 87 | |
|
88 | 88 | So first, we start with a graph of 32 nodes, with 128 edges: |
|
89 | 89 | |
|
90 | 90 | .. sourcecode:: ipython |
|
91 | 91 | |
|
92 | 92 | In [2]: G = random_dag(32,128) |
|
93 | 93 | |
|
94 | 94 | Now, we need to build our dict of jobs corresponding to the nodes on the graph: |
|
95 | 95 | |
|
96 | 96 | .. sourcecode:: ipython |
|
97 | 97 | |
|
98 | 98 | In [3]: jobs = {} |
|
99 | 99 | |
|
100 | 100 | # in reality, each job would presumably be different |
|
101 | 101 | # randomwait is just a function that sleeps for a random interval |
|
102 | 102 | In [4]: for node in G: |
|
103 | 103 | ...: jobs[node] = randomwait |
|
104 | 104 | |
|
105 | 105 | Once we have a dict of jobs matching the nodes on the graph, we can start submitting jobs, |
|
106 | 106 | and linking up the dependencies. Since we don't know a job's msg_id until it is submitted, |
|
107 | 107 | which is necessary for building dependencies, it is critical that we don't submit any jobs |
|
108 | 108 | before the jobs they may depend on. Fortunately, NetworkX provides a

109 | 109 | :meth:`topological_sort` method which ensures exactly this. It presents an iterable that

110 | 110 | guarantees that when you arrive at a node, you have already visited all the nodes

111 | 111 | on which it depends:
|
112 | 112 | |
|
113 | 113 | .. sourcecode:: ipython |
|
114 | 114 | |
|
115 | 115 | In [5]: rc = Client() |
|
116 | 116 | In [5]: view = rc.load_balanced_view() |
|
117 | 117 | |
|
118 | 118 | In [6]: results = {} |
|
119 | 119 | |
|
120 | 120 | In [7]: for node in G.topological_sort(): |
|
121 | 121 | ...: # get list of AsyncResult objects from nodes |
|
122 | 122 | ...: # leading into this one as dependencies |
|
123 | 123 | ...: deps = [ results[n] for n in G.predecessors(node) ] |
|
124 | 124 | ...: # submit and store AsyncResult object |
|
125 | 125 | ...: with view.temp_flags(after=deps, block=False): |
|
126 | 126 | ...: results[node] = view.apply_with_flags(jobs[node]) |
|
127 | 127 | |
|
128 | 128 | |
|
129 | 129 | Now that we have submitted all the jobs, we can wait for the results: |
|
130 | 130 | |
|
131 | 131 | .. sourcecode:: ipython |
|
132 | 132 | |
|
133 | 133 | In [8]: view.wait(results.values()) |
|
134 | 134 | |
|
135 | 135 | Now, at least we know that all the jobs ran and did not fail (``r.get()`` would have |
|
136 | 136 | raised an error if a task failed). But we don't know that the ordering was properly |
|
137 | 137 | respected. For this, we can use the :attr:`metadata` attribute of each AsyncResult. |
|
138 | 138 | |
|
139 | 139 | These objects store a variety of metadata about each task, including various timestamps. |
|
140 | 140 | We can validate that the dependencies were respected by checking that each task was |
|
141 | 141 | started after all of its predecessors were completed: |
|
142 | 142 | |
|
143 | 143 | .. literalinclude:: ../../../examples/parallel/dagdeps.py |
|
144 | 144 | :language: python |
|
145 | 145 | :lines: 64-70 |
|
146 | 146 | |
|
147 | 147 | We can also validate the graph visually. By drawing the graph with each node's x-position |
|
148 | 148 | as its start time, all arrows must be pointing to the right if dependencies were respected. |
|
149 | 149 | For spreading, the y-position will be the runtime of the task, so long tasks |
|
150 | 150 | will be at the top, and quick, small tasks will be at the bottom. |
|
151 | 151 | |
|
152 | 152 | .. sourcecode:: ipython |
|
153 | 153 | |
|
154 | 154 | In [10]: from matplotlib.dates import date2num |
|
155 | 155 | |
|
156 | 156 | In [11]: from matplotlib.cm import gist_rainbow |
|
157 | 157 | |
|
158 | 158 | In [12]: pos = {}; colors = {} |
|
159 | 159 | |
|
160 | 160 | In [12]: for node in G: |
|
161 | 161 | ....: md = results[node].metadata |
|
162 | 162 | ....: start = date2num(md.started) |
|
163 | 163 | ....: runtime = date2num(md.completed) - start |
|
164 | 164 | ....: pos[node] = (start, runtime) |
|
165 | 165 | ....: colors[node] = md.engine_id |
|
166 | 166 | |
|
167 | 167 | In [13]: nx.draw(G, pos, node_list=colors.keys(), node_color=colors.values(), |
|
168 | 168 | ....: cmap=gist_rainbow) |
|
169 | 169 | |
|
170 | 170 | .. figure:: figs/dagdeps.* |
|
171 | 171 | :width: 600px |
|
172 | 172 | |
|
173 | 173 | Time started on x, runtime on y, and color-coded by engine-id (in this case there |
|
174 | 174 | were four engines). Edges denote dependencies. |
|
175 | 175 | |
|
176 | 176 | |
|
177 | 177 | .. _NetworkX: http://networkx.lanl.gov/ |
@@ -1,208 +1,208 b'' | |||
|
1 | 1 | .. _parallel_examples: |
|
2 | 2 | |
|
3 | 3 | ================= |
|
4 | 4 | Parallel examples |
|
5 | 5 | ================= |
|
6 | 6 | |
|
7 | 7 | In this section we describe two more involved examples of using an IPython |
|
8 | 8 | cluster to perform a parallel computation. We will be doing some plotting, |
|
9 | 9 | so we start IPython with matplotlib integration by typing:: |
|
10 | 10 | |
|
11 | 11 | ipython --matplotlib |
|
12 | 12 | |
|
13 | 13 | at the system command line. |
|
14 | 14 | Or you can enable matplotlib integration at any point with: |
|
15 | 15 | |
|
16 | 16 | .. sourcecode:: ipython |
|
17 | 17 | |
|
18 | 18 | In [1]: %matplotlib |
|
19 | 19 | |
|
20 | 20 | |
|
21 | 21 | 150 million digits of pi |
|
22 | 22 | ======================== |
|
23 | 23 | |
|
24 | 24 | In this example we would like to study the distribution of digits in the |
|
25 | 25 | number pi (in base 10). While it is not known if pi is a normal number (a |
|
26 | 26 | number is normal in base 10 if 0-9 occur with equal likelihood), numerical
|
27 | 27 | investigations suggest that it is. We will begin with a serial calculation on |
|
28 | 28 | 10,000 digits of pi and then perform a parallel calculation involving 150 |
|
29 | 29 | million digits. |
|
30 | 30 | |
|
31 | 31 | In both the serial and parallel calculation we will be using functions defined |
|
32 | 32 | in the :file:`pidigits.py` file, which is available in the |
|
33 | :file:`

33 | :file:`examples/parallel` directory of the IPython source distribution.
|
34 | 34 | These functions provide basic facilities for working with the digits of pi and |
|
35 | 35 | can be loaded into IPython by putting :file:`pidigits.py` in your current |
|
36 | 36 | working directory and then doing: |
|
37 | 37 | |
|
38 | 38 | .. sourcecode:: ipython |
|
39 | 39 | |
|
40 | 40 | In [1]: run pidigits.py |
|
41 | 41 | |
|
42 | 42 | Serial calculation |
|
43 | 43 | ------------------ |
|
44 | 44 | |
|
45 | 45 | For the serial calculation, we will use `SymPy <http://www.sympy.org>`_ to |
|
46 | 46 | calculate 10,000 digits of pi and then look at the frequencies of the digits |
|
47 | 47 | 0-9. Out of 10,000 digits, we expect each digit to occur 1,000 times. While |
|
48 | 48 | SymPy is capable of calculating many more digits of pi, our purpose here is to |
|
49 | 49 | set the stage for the much larger parallel calculation. |
|
50 | 50 | |
|
51 | 51 | In this example, we use two functions from :file:`pidigits.py`: |
|
52 | 52 | :func:`one_digit_freqs` (which calculates how many times each digit occurs) |
|
53 | 53 | and :func:`plot_one_digit_freqs` (which uses Matplotlib to plot the result). |
|
54 | 54 | Here is an interactive IPython session that uses these functions with |
|
55 | 55 | SymPy: |
|
56 | 56 | |
|
57 | 57 | .. sourcecode:: ipython |
|
58 | 58 | |
|
59 | 59 | In [7]: import sympy |
|
60 | 60 | |
|
61 | 61 | In [8]: pi = sympy.pi.evalf(40) |
|
62 | 62 | |
|
63 | 63 | In [9]: pi |
|
64 | 64 | Out[9]: 3.141592653589793238462643383279502884197 |
|
65 | 65 | |
|
66 | 66 | In [10]: pi = sympy.pi.evalf(10000) |
|
67 | 67 | |
|
68 | 68 | In [11]: digits = (d for d in str(pi)[2:]) # create a sequence of digits |
|
69 | 69 | |
|
70 | 70 | In [13]: freqs = one_digit_freqs(digits) |
|
71 | 71 | |
|
72 | 72 | In [14]: plot_one_digit_freqs(freqs) |
|
73 | 73 | Out[14]: [<matplotlib.lines.Line2D object at 0x18a55290>] |
|
74 | 74 | |
|
75 | 75 | The resulting plot of the single digit counts shows that each digit occurs |
|
76 | 76 | approximately 1,000 times, but that with only 10,000 digits the |
|
77 | 77 | statistical fluctuations are still rather large: |
|
78 | 78 | |
|
79 | 79 | .. image:: figs/single_digits.* |
|
80 | 80 | |
|
81 | 81 | It is clear that to reduce the relative fluctuations in the counts, we need |
|
82 | 82 | to look at many more digits of pi. That brings us to the parallel calculation. |
|
83 | 83 | |
|
84 | 84 | Parallel calculation |
|
85 | 85 | -------------------- |
|
86 | 86 | |
|
87 | 87 | Calculating many digits of pi is a challenging computational problem in itself. |
|
88 | 88 | Because we want to focus on the distribution of digits in this example, we |
|
89 | 89 | will use pre-computed digits of pi from the website of Professor Yasumasa
|
90 | 90 | Kanada at the University of Tokyo (http://www.super-computing.org). These |
|
91 | 91 | digits come in a set of text files (ftp://pi.super-computing.org/.2/pi200m/) |
|
92 | 92 | that each have 10 million digits of pi. |
|
93 | 93 | |
|
94 | 94 | For the parallel calculation, we have copied these files to the local hard |
|
95 | 95 | drives of the compute nodes. A total of 15 of these files will be used, for a |
|
96 | 96 | total of 150 million digits of pi. To make things a little more interesting, we

97 | 97 | will calculate the frequencies of all two-digit sequences (00-99) and then plot
|
98 | 98 | the result using a 2D matrix in Matplotlib. |
|
99 | 99 | |
|
100 | 100 | The overall idea of the calculation is simple: each IPython engine will |
|
101 | 101 | compute the two digit counts for the digits in a single file. Then in a final |
|
102 | 102 | step the counts from each engine will be added up. To perform this |
|
103 | 103 | calculation, we will need two top-level functions from :file:`pidigits.py`: |
|
104 | 104 | |
|
105 | 105 | .. literalinclude:: ../../../examples/parallel/pi/pidigits.py |
|
106 | 106 | :language: python |
|
107 | 107 | :lines: 47-62 |
|
108 | 108 | |
|
109 | 109 | We will also use the :func:`plot_two_digit_freqs` function to plot the |
|
110 | 110 | results. The code to run this calculation in parallel is contained in |
|
111 | :file:`

111 | :file:`examples/parallel/parallelpi.py`. This code can be run in parallel
|
112 | 112 | using IPython by following these steps: |
|
113 | 113 | |
|
114 | 114 | 1. Use :command:`ipcluster` to start 15 engines. We used 16 cores of an SGE linux |
|
115 | 115 | cluster (1 controller + 15 engines). |
|
116 | 116 | 2. With the file :file:`parallelpi.py` in your current working directory, open |
|
117 | 117 | up IPython, enable matplotlib, and type ``run parallelpi.py``. This will download |
|
118 | 118 | the pi files via ftp the first time you run it, if they are not |
|
119 | 119 | present in the Engines' working directory. |
|
120 | 120 | |
|
121 | 121 | When run on our 16 cores, we observe a speedup of 14.2x. This is slightly |
|
122 | 122 | less than linear scaling (16x) because the controller is also running on one of |
|
123 | 123 | the cores. |
|
124 | 124 | |
|
125 | 125 | To emphasize the interactive nature of IPython, we now show how the |
|
126 | 126 | calculation can also be run by simply typing the commands from |
|
127 | 127 | :file:`parallelpi.py` interactively into IPython: |
|
128 | 128 | |
|
129 | 129 | .. sourcecode:: ipython |
|
130 | 130 | |
|
131 | 131 | In [1]: from IPython.parallel import Client |
|
132 | 132 | |
|
133 | 133 | # The Client allows us to use the engines interactively. |
|
134 | 134 | # We simply pass Client the name of the cluster profile we |
|
135 | 135 | # are using. |
|
136 | 136 | In [2]: c = Client(profile='mycluster') |
|
137 | 137 | In [3]: v = c[:] |
|
138 | 138 | |
|
139 | 139 | In [3]: c.ids |
|
140 | 140 | Out[3]: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14] |
|
141 | 141 | |
|
142 | 142 | In [4]: run pidigits.py |
|
143 | 143 | |
|
144 | 144 | In [5]: filestring = 'pi200m.ascii.%(i)02dof20' |
|
145 | 145 | |
|
146 | 146 | # Create the list of files to process. |
|
147 | 147 | In [6]: files = [filestring % {'i':i} for i in range(1,16)] |
|
148 | 148 | |
|
149 | 149 | In [7]: files |
|
150 | 150 | Out[7]: |
|
151 | 151 | ['pi200m.ascii.01of20', |
|
152 | 152 | 'pi200m.ascii.02of20', |
|
153 | 153 | 'pi200m.ascii.03of20', |
|
154 | 154 | 'pi200m.ascii.04of20', |
|
155 | 155 | 'pi200m.ascii.05of20', |
|
156 | 156 | 'pi200m.ascii.06of20', |
|
157 | 157 | 'pi200m.ascii.07of20', |
|
158 | 158 | 'pi200m.ascii.08of20', |
|
159 | 159 | 'pi200m.ascii.09of20', |
|
160 | 160 | 'pi200m.ascii.10of20', |
|
161 | 161 | 'pi200m.ascii.11of20', |
|
162 | 162 | 'pi200m.ascii.12of20', |
|
163 | 163 | 'pi200m.ascii.13of20', |
|
164 | 164 | 'pi200m.ascii.14of20', |
|
165 | 165 | 'pi200m.ascii.15of20'] |
|
166 | 166 | |
|
167 | 167 | # download the data files if they don't already exist: |
|
168 | 168 | In [8]: v.map(fetch_pi_file, files) |
|
169 | 169 | |
|
170 | 170 | # This is the parallel calculation using the Client.map method |
|
171 | 171 | # which applies compute_two_digit_freqs to each file in files in parallel. |
|
172 | 172 | In [9]: freqs_all = v.map(compute_two_digit_freqs, files) |
|
173 | 173 | |
|
174 | 174 | # Add up the frequencies from each engine. |
|
175 | 175 | In [10]: freqs = reduce_freqs(freqs_all) |
|
176 | 176 | |
|
177 | 177 | In [11]: plot_two_digit_freqs(freqs) |
|
178 | 178 | Out[11]: <matplotlib.image.AxesImage object at 0x18beb110> |
|
179 | 179 | |
|
180 | 180 | In [12]: plt.title('2 digit counts of 150m digits of pi') |
|
181 | 181 | Out[12]: <matplotlib.text.Text object at 0x18d1f9b0> |
|
182 | 182 | |
|
183 | 183 | The resulting plot generated by Matplotlib is shown below. The colors indicate |
|
184 | 184 | which two-digit sequences are more (red) or less (blue) likely to occur in the
|
185 | 185 | first 150 million digits of pi. We clearly see that the sequence "41" is |
|
186 | 186 | most likely and that "06" and "07" are least likely. Further analysis would |
|
187 | 187 | show that the relative size of the statistical fluctuations has decreased
|
188 | 188 | compared to the 10,000 digit calculation. |
|
189 | 189 | |
|
190 | 190 | .. image:: figs/two_digit_counts.* |
|
191 | 191 | |
|
192 | 192 | Conclusion |
|
193 | 193 | ========== |
|
194 | 194 | |
|
195 | 195 | To conclude these examples, we summarize the key features of IPython's |
|
196 | 196 | parallel architecture that have been demonstrated: |
|
197 | 197 | |
|
198 | 198 | * Serial code can often be parallelized with only a few extra lines of code.
|
199 | 199 | We have used the :class:`DirectView` and :class:`LoadBalancedView` classes |
|
200 | 200 | for this purpose. |
|
201 | 201 | * The resulting parallel code can be run without ever leaving IPython's
|
202 | 202 | interactive shell. |
|
203 | 203 | * Any data computed in parallel can be explored interactively through |
|
204 | 204 | visualization or further numerical calculations. |
|
205 | 205 | * We have run these examples on a cluster running RHEL 5 and Sun GridEngine. |
|
206 | 206 | IPython's built-in support for SGE (and other batch systems) makes it easy
|
207 | 207 | to get started with IPython's parallel capabilities. |
|
208 | 208 |
@@ -1,306 +1,306 b'' | |||
|
1 | 1 | .. _parallel_overview: |
|
2 | 2 | |
|
3 | 3 | ============================ |
|
4 | 4 | Overview and getting started |
|
5 | 5 | ============================ |
|
6 | 6 | |
|
7 | 7 | |
|
8 | 8 | Examples |
|
9 | 9 | ======== |
|
10 | 10 | |
|
11 | 11 | We have various example scripts and notebooks for using IPython.parallel in our |
|
12 | :file:`

12 | :file:`examples/parallel` directory, or they can be found `on GitHub`__.
|
13 | 13 | Some of these are covered in more detail in the :ref:`examples |
|
14 | 14 | <parallel_examples>` section. |
|
15 | 15 | |
|
16 | .. __: https://github.com/ipython/ipython/tree/master/

16 | .. __: https://github.com/ipython/ipython/tree/master/examples/parallel
|
17 | 17 | |
|
18 | 18 | Introduction |
|
19 | 19 | ============ |
|
20 | 20 | |
|
21 | 21 | This section gives an overview of IPython's sophisticated and powerful |
|
22 | 22 | architecture for parallel and distributed computing. This architecture |
|
23 | 23 | abstracts out parallelism in a very general way, which enables IPython to |
|
24 | 24 | support many different styles of parallelism, including:
|
25 | 25 | |
|
26 | 26 | * Single program, multiple data (SPMD) parallelism. |
|
27 | 27 | * Multiple program, multiple data (MPMD) parallelism. |
|
28 | 28 | * Message passing using MPI. |
|
29 | 29 | * Task farming. |
|
30 | 30 | * Data parallel. |
|
31 | 31 | * Combinations of these approaches. |
|
32 | 32 | * Custom user defined approaches. |
|
33 | 33 | |
|
34 | 34 | Most importantly, IPython enables all types of parallel applications to |
|
35 | 35 | be developed, executed, debugged and monitored *interactively*. Hence, |
|
36 | 36 | the ``I`` in IPython. The following are some example use cases for IPython:
|
37 | 37 | |
|
38 | 38 | * Quickly parallelize algorithms that are embarrassingly parallel |
|
39 | 39 | using a number of simple approaches. Many simple things can be |
|
40 | 40 | parallelized interactively in one or two lines of code. |
|
41 | 41 | |
|
42 | 42 | * Steer traditional MPI applications on a supercomputer from an |
|
43 | 43 | IPython session on your laptop. |
|
44 | 44 | |
|
45 | 45 | * Analyze and visualize large datasets (that could be remote and/or |
|
46 | 46 | distributed) interactively using IPython and tools like |
|
47 | 47 | matplotlib/TVTK. |
|
48 | 48 | |
|
49 | 49 | * Develop, test and debug new parallel algorithms |
|
50 | 50 | (that may use MPI) interactively. |
|
51 | 51 | |
|
52 | 52 | * Tie together multiple MPI jobs running on different systems into |
|
53 | 53 | one giant distributed and parallel system. |
|
54 | 54 | |
|
55 | 55 | * Start a parallel job on your cluster and then have a remote |
|
56 | 56 | collaborator connect to it and pull back data into their |
|
57 | 57 | local IPython session for plotting and analysis. |
|
58 | 58 | |
|
59 | 59 | * Run a set of tasks on a set of CPUs using dynamic load balancing. |
|
60 | 60 | |
|
61 | 61 | .. tip:: |
|
62 | 62 | |
|
63 | 63 | At the SciPy 2011 conference in Austin, Min Ragan-Kelley presented a |
|
64 | 64 | complete 4-hour tutorial on the use of these features, and all the materials |
|
65 | 65 | for the tutorial are now `available online`__. That tutorial provides an |
|
66 | 66 | excellent, hands-on oriented complement to the reference documentation |
|
67 | 67 | presented here. |
|
68 | 68 | |
|
69 | 69 | .. __: http://minrk.github.com/scipy-tutorial-2011 |
|
70 | 70 | |
|
71 | 71 | Architecture overview |
|
72 | 72 | ===================== |
|
73 | 73 | |
|
74 | 74 | .. figure:: figs/wideView.png |
|
75 | 75 | :width: 300px |
|
76 | 76 | |
|
77 | 77 | |
|
78 | 78 | The IPython architecture consists of four components: |
|
79 | 79 | |
|
80 | 80 | * The IPython engine. |
|
81 | 81 | * The IPython hub. |
|
82 | 82 | * The IPython schedulers. |
|
83 | 83 | * The controller client. |
|
84 | 84 | |
|
85 | 85 | These components live in the :mod:`IPython.parallel` package and are |
|
86 | 86 | installed with IPython. They do, however, have additional dependencies |
|
87 | 87 | that must be installed. For more information, see our |
|
88 | 88 | :ref:`installation documentation <install_index>`. |
|
89 | 89 | |
|
90 | 90 | .. TODO: include zmq in install_index |
|
91 | 91 | |
|
92 | 92 | IPython engine |
|
93 | 93 | --------------- |
|
94 | 94 | |
|
95 | 95 | The IPython engine is a Python instance that takes Python commands over a |
|
96 | 96 | network connection. Eventually, the IPython engine will be a full IPython |
|
97 | 97 | interpreter, but for now, it is a regular Python interpreter. The engine |
|
98 | 98 | can also handle incoming and outgoing Python objects sent over a network |
|
99 | 99 | connection. When multiple engines are started, parallel and distributed |
|
100 | 100 | computing becomes possible. An important feature of an IPython engine is |
|
101 | 101 | that it blocks while user code is being executed. Read on for how the |
|
102 | 102 | IPython controller solves this problem to expose a clean asynchronous API |
|
103 | 103 | to the user. |
|
104 | 104 | |
|
105 | 105 | IPython controller |
|
106 | 106 | ------------------ |
|
107 | 107 | |
|
108 | 108 | The IPython controller processes provide an interface for working with a set of engines. |
|
109 | 109 | At a general level, the controller is a collection of processes to which IPython engines |
|
110 | 110 | and clients can connect. The controller is composed of a :class:`Hub` and a collection of |
|
:class:`Schedulers`. These Schedulers typically run as separate processes on the same
machine as the Hub, but they can also run as local threads or on remote machines.
|
113 | 113 | |
|
114 | 114 | The controller also provides a single point of contact for users who wish to |
|
115 | 115 | utilize the engines connected to the controller. There are different ways of |
|
116 | 116 | working with a controller. In IPython, all of these models are implemented via |
|
117 | 117 | the :meth:`.View.apply` method, after |
|
118 | 118 | constructing :class:`.View` objects to represent subsets of engines. The two |
|
119 | 119 | primary models for interacting with engines are: |
|
120 | 120 | |
|
121 | 121 | * A **Direct** interface, where engines are addressed explicitly. |
|
122 | 122 | * A **LoadBalanced** interface, where the Scheduler is trusted with assigning work to |
|
123 | 123 | appropriate engines. |
|
124 | 124 | |
|
125 | 125 | Advanced users can readily extend the View models to enable other |
|
126 | 126 | styles of parallelism. |
|
127 | 127 | |
|
128 | 128 | .. note:: |
|
129 | 129 | |
|
130 | 130 | A single controller and set of engines can be used with multiple models |
|
131 | 131 | simultaneously. This opens the door for lots of interesting things. |
|
132 | 132 | |
|
133 | 133 | |
|
134 | 134 | The Hub |
|
135 | 135 | ******* |
|
136 | 136 | |
|
137 | 137 | The center of an IPython cluster is the Hub. This is the process that keeps |
|
138 | 138 | track of engine connections, schedulers, clients, as well as all task requests and |
|
139 | 139 | results. The primary role of the Hub is to facilitate queries of the cluster state, and |
|
minimize the information required to establish the many connections involved in
|
141 | 141 | connecting new clients and engines. |
|
142 | 142 | |
|
143 | 143 | |
|
144 | 144 | Schedulers |
|
145 | 145 | ********** |
|
146 | 146 | |
|
147 | 147 | All actions that can be performed on the engine go through a Scheduler. While the engines |
|
148 | 148 | themselves block when user code is run, the schedulers hide that from the user to provide |
|
149 | 149 | a fully asynchronous interface to a set of engines. |
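
As a minimal sketch of what this asynchronous interface looks like in practice
(using a :class:`LoadBalancedView`, described in the next section, and assuming a
cluster started with ``ipcluster start -n 4``):

.. sourcecode:: ipython

    In [1]: from IPython.parallel import Client

    In [2]: rc = Client()

    In [3]: lview = rc.load_balanced_view()

    In [4]: ar = lview.apply_async(lambda x: x**2, 10)  # returns an AsyncResult immediately

    In [5]: ar.get()  # block until an engine has produced the result
    Out[5]: 100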
|
150 | 150 | |
|
151 | 151 | |
|
152 | 152 | IPython client and views |
|
153 | 153 | ------------------------ |
|
154 | 154 | |
|
155 | 155 | There is one primary object, the :class:`~.parallel.Client`, for connecting to a cluster. |
|
156 | 156 | For each execution model, there is a corresponding :class:`~.parallel.View`. These views |
|
allow users to interact with a set of engines through that interface. Here are the two default
|
158 | 158 | views: |
|
159 | 159 | |
|
160 | 160 | * The :class:`DirectView` class for explicit addressing. |
|
161 | 161 | * The :class:`LoadBalancedView` class for destination-agnostic scheduling. |
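
Both views are obtained from a :class:`~.parallel.Client`; a brief sketch, assuming a
running cluster with four engines on the default profile:

.. sourcecode:: ipython

    In [1]: from IPython.parallel import Client

    In [2]: rc = Client()

    In [3]: dview = rc[:]                    # DirectView of all connected engines

    In [4]: lview = rc.load_balanced_view()  # LoadBalancedView over the same engines

    In [5]: dview.apply_sync(lambda: "hi")   # runs on every engine in the DirectView
    Out[5]: ['hi', 'hi', 'hi', 'hi']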
|
162 | 162 | |
|
163 | 163 | Security |
|
164 | 164 | -------- |
|
165 | 165 | |
|
166 | 166 | IPython uses ZeroMQ for networking, which has provided many advantages, but |
|
167 | 167 | one of the setbacks is its utter lack of security [ZeroMQ]_. By default, no IPython |
|
connections are encrypted, but open ports only listen on localhost. The only
source of security for IPython is SSH tunneling; IPython supports both shell
(`openssh`) and `paramiko` based tunnels for connections. A key is required
to submit requests, but due to the lack of encryption, it does not provide
|
172 | 172 | significant security if loopback traffic is compromised. |
|
173 | 173 | |
|
174 | 174 | In our architecture, the controller is the only process that listens on |
|
175 | 175 | network ports, and is thus the main point of vulnerability. The standard model |
|
176 | 176 | for secure connections is to designate that the controller listen on |
|
177 | 177 | localhost, and use ssh-tunnels to connect clients and/or |
|
178 | 178 | engines. |
|
179 | 179 | |
|
180 | 180 | To connect and authenticate to the controller an engine or client needs |
|
181 | 181 | some information that the controller has stored in a JSON file. |
|
182 | 182 | Thus, the JSON files need to be copied to a location where |
|
183 | 183 | the clients and engines can find them. Typically, this is the |
|
184 | 184 | :file:`~/.ipython/profile_default/security` directory on the host where the |
|
185 | 185 | client/engine is running (which could be a different host than the controller). |
|
186 | 186 | Once the JSON files are copied over, everything should work fine. |
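
For example, an engine running on another host could be given its connection file with
something like the following (a sketch only; ``controllerhost`` and the profile paths are
placeholders for your own setup)::

    $ scp controllerhost:~/.ipython/profile_default/security/ipcontroller-engine.json \
          ~/.ipython/profile_default/security/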
|
187 | 187 | |
|
188 | 188 | Currently, there are two JSON files that the controller creates: |
|
189 | 189 | |
|
190 | 190 | ipcontroller-engine.json |
|
191 | 191 | This JSON file has the information necessary for an engine to connect |
|
192 | 192 | to a controller. |
|
193 | 193 | |
|
194 | 194 | ipcontroller-client.json |
|
195 | 195 | The client's connection information. This may not differ from the engine's, |
|
196 | 196 | but since the controller may listen on different ports for clients and |
|
197 | 197 | engines, it is stored separately. |
|
198 | 198 | |
|
199 | 199 | ipcontroller-client.json will look something like this, under default localhost |
|
200 | 200 | circumstances: |
|
201 | 201 | |
|
202 | 202 | .. sourcecode:: python |
|
203 | 203 | |
|
204 | 204 | { |
|
205 | 205 | "url":"tcp:\/\/127.0.0.1:54424", |
|
206 | 206 | "exec_key":"a361fe89-92fc-4762-9767-e2f0a05e3130", |
|
207 | 207 | "ssh":"", |
|
208 | 208 | "location":"10.19.1.135" |
|
209 | 209 | } |
|
210 | 210 | |
|
If, however, you are running the controller on a worker node of a cluster, you will likely
|
212 | 212 | need to use ssh tunnels to connect clients from your laptop to it. You will also |
|
probably need to instruct the controller to listen for engines coming from other worker nodes
|
214 | 214 | on the cluster. An example of ipcontroller-client.json, as created by:: |
|
215 | 215 | |
|
216 | 216 | $> ipcontroller --ip=* --ssh=login.mycluster.com |
|
217 | 217 | |
|
218 | 218 | |
|
219 | 219 | .. sourcecode:: python |
|
220 | 220 | |
|
221 | 221 | { |
|
222 | 222 | "url":"tcp:\/\/*:54424", |
|
223 | 223 | "exec_key":"a361fe89-92fc-4762-9767-e2f0a05e3130", |
|
224 | 224 | "ssh":"login.mycluster.com", |
|
225 | 225 | "location":"10.0.0.2" |
|
226 | 226 | } |
|

More details of how these JSON files are used are given below.
|
228 | 228 | |
|
229 | 229 | A detailed description of the security model and its implementation in IPython |
|
230 | 230 | can be found :ref:`here <parallelsecurity>`. |
|
231 | 231 | |
|
232 | 232 | .. warning:: |
|
233 | 233 | |
|
234 | 234 | Even at its most secure, the Controller listens on ports on localhost, and |
|
235 | 235 | every time you make a tunnel, you open a localhost port on the connecting |
|
236 | 236 | machine that points to the Controller. If localhost on the Controller's |
|
237 | 237 | machine, or the machine of any client or engine, is untrusted, then your |
|
238 | 238 | Controller is insecure. There is no way around this with ZeroMQ. |
|
239 | 239 | |
|
240 | 240 | |
|
241 | 241 | |
|
242 | 242 | Getting Started |
|
243 | 243 | =============== |
|
244 | 244 | |
|
245 | 245 | To use IPython for parallel computing, you need to start one instance of the |
|
246 | 246 | controller and one or more instances of the engine. Initially, it is best to |
|
247 | 247 | simply start a controller and engines on a single host using the |
|
248 | 248 | :command:`ipcluster` command. To start a controller and 4 engines on your |
|
249 | 249 | localhost, just do:: |
|
250 | 250 | |
|
251 | 251 | $ ipcluster start -n 4 |
|
252 | 252 | |
|
253 | 253 | More details about starting the IPython controller and engines can be found |
|
:ref:`here <parallel_process>`.
|
255 | 255 | |
|
256 | 256 | Once you have started the IPython controller and one or more engines, you |
|
257 | 257 | are ready to use the engines to do something useful. To make sure |
|
258 | 258 | everything is working correctly, try the following commands: |
|
259 | 259 | |
|
260 | 260 | .. sourcecode:: ipython |
|
261 | 261 | |
|
262 | 262 | In [1]: from IPython.parallel import Client |
|
263 | 263 | |
|
264 | 264 | In [2]: c = Client() |
|
265 | 265 | |
|
266 | 266 | In [4]: c.ids |
|
267 | 267 | Out[4]: set([0, 1, 2, 3]) |
|
268 | 268 | |
|
269 | 269 | In [5]: c[:].apply_sync(lambda : "Hello, World") |
|
270 | 270 | Out[5]: [ 'Hello, World', 'Hello, World', 'Hello, World', 'Hello, World' ] |
|
271 | 271 | |
|
272 | 272 | |
|
When a client is created with no arguments, it tries to find the corresponding JSON file
in the local `~/.ipython/profile_default/security` directory. If you created the cluster
with a specific profile, pass that profile name to the Client instead. This should cover
most cases:
|
276 | 276 | |
|
277 | 277 | .. sourcecode:: ipython |
|
278 | 278 | |
|
279 | 279 | In [2]: c = Client(profile='myprofile') |
|
280 | 280 | |
|
281 | 281 | If you have put the JSON file in a different location or it has a different name, create the |
|
282 | 282 | client like this: |
|
283 | 283 | |
|
284 | 284 | .. sourcecode:: ipython |
|
285 | 285 | |
|
286 | 286 | In [2]: c = Client('/path/to/my/ipcontroller-client.json') |
|
287 | 287 | |
|
Remember that a client needs to be able to see the Hub's ports to connect. So if the Hub
is on a different machine, you may need to tunnel access through an SSH server on that
machine, in which case you would connect with:
|
291 | 291 | |
|
292 | 292 | .. sourcecode:: ipython |
|
293 | 293 | |
|
294 | 294 | In [2]: c = Client('/path/to/my/ipcontroller-client.json', sshserver='me@myhub.example.com') |
|
295 | 295 | |
|
Where 'myhub.example.com' is the URL or IP address of the machine on
|
297 | 297 | which the Hub process is running (or another machine that has direct access to the Hub's ports). |
|
298 | 298 | |
|
The SSH server may already be specified in ipcontroller-client.json, if the controller was
so instructed at launch time (as in the ``--ssh`` example above).
|
301 | 301 | |
|
302 | 302 | You are now ready to learn more about the :ref:`Direct |
|
303 | 303 | <parallel_multiengine>` and :ref:`LoadBalanced <parallel_task>` interfaces to the |
|
304 | 304 | controller. |
|
305 | 305 | |
|
306 | 306 | .. [ZeroMQ] ZeroMQ. http://www.zeromq.org |
|
1 | 1 | ============= |
|
2 | 2 | 0.11 Series |
|
3 | 3 | ============= |
|
4 | 4 | |
|
5 | 5 | Release 0.11 |
|
6 | 6 | ============ |
|
7 | 7 | |
|
8 | 8 | IPython 0.11 is a *major* overhaul of IPython, two years in the making. Most |
|
9 | 9 | of the code base has been rewritten or at least reorganized, breaking backward |
|
10 | 10 | compatibility with several APIs in previous versions. It is the first major |
|
11 | 11 | release in two years, and probably the most significant change to IPython since |
|
12 | 12 | its inception. We plan to have a relatively quick succession of releases, as |
|
13 | 13 | people discover new bugs and regressions. Once we iron out any significant |
|
14 | 14 | bugs in this process and settle down the new APIs, this series will become |
|
15 | 15 | IPython 1.0. We encourage feedback now on the core APIs, which we hope to |
|
16 | 16 | maintain stable during the 1.0 series. |
|
17 | 17 | |
|
18 | 18 | Since the internal APIs have changed so much, projects using IPython as a |
|
19 | 19 | library (as opposed to end-users of the application) are the most likely to |
|
20 | 20 | encounter regressions or changes that break their existing use patterns. We |
|
21 | 21 | will make every effort to provide updated versions of the APIs to facilitate |
|
22 | 22 | the transition, and we encourage you to contact us on the `development mailing |
|
23 | 23 | list`__ with questions and feedback. |
|
24 | 24 | |
|
25 | 25 | .. __: http://mail.scipy.org/mailman/listinfo/ipython-dev |
|
26 | 26 | |
|
27 | 27 | Chris Fonnesbeck recently wrote an `excellent post`__ that highlights some of |
|
28 | 28 | our major new features, with examples and screenshots. We encourage you to |
|
29 | 29 | read it as it provides an illustrated, high-level overview complementing the |
|
30 | 30 | detailed feature breakdown in this document. |
|
31 | 31 | |
|
32 | 32 | .. __: http://fonnesbeck.calepin.co/innovations-in-ipython.html |
|
33 | 33 | |
|
34 | 34 | A quick summary of the major changes (see below for details): |
|
35 | 35 | |
|
36 | 36 | * **Standalone Qt console**: a new rich console has been added to IPython, |
|
37 | 37 | started with `ipython qtconsole`. In this application we have tried to |
|
38 | 38 | retain the feel of a terminal for fast and efficient workflows, while adding |
|
39 | 39 | many features that a line-oriented terminal simply can not support, such as |
|
40 | 40 | inline figures, full multiline editing with syntax highlighting, graphical |
|
41 | 41 | tooltips for function calls and much more. This development was sponsored by |
|
42 | 42 | `Enthought Inc.`__. See :ref:`below <qtconsole_011>` for details. |
|
43 | 43 | |
|
44 | 44 | .. __: http://enthought.com |
|
45 | 45 | |
|
46 | 46 | * **High-level parallel computing with ZeroMQ**. Using the same architecture |
|
47 | 47 | that our Qt console is based on, we have completely rewritten our high-level |
|
48 | 48 | parallel computing machinery that in prior versions used the Twisted |
|
49 | 49 | networking framework. While this change will require users to update their |
|
50 | 50 | codes, the improvements in performance, memory control and internal |
|
51 | 51 | consistency across our codebase convinced us it was a price worth paying. We |
|
52 | 52 | have tried to explain how to best proceed with this update, and will be happy |
|
53 | 53 | to answer questions that may arise. A full tutorial describing these |
|
54 | 54 | features `was presented at SciPy'11`__, more details :ref:`below |
|
55 | 55 | <parallel_011>`. |
|
56 | 56 | |
|
57 | 57 | .. __: http://minrk.github.com/scipy-tutorial-2011 |
|
58 | 58 | |
|
59 | 59 | * **New model for GUI/plotting support in the terminal**. Now instead of the |
|
60 | 60 | various `-Xthread` flags we had before, GUI support is provided without the |
|
61 | 61 | use of any threads, by directly integrating GUI event loops with Python's |
|
62 | 62 | `PyOS_InputHook` API. A new command-line flag `--gui` controls GUI support, |
|
63 | 63 | and it can also be enabled after IPython startup via the new `%gui` magic. |
|
64 | 64 | This requires some changes if you want to execute GUI-using scripts inside |
|
65 | 65 | IPython, see :ref:`the GUI support section <gui_support>` for more details. |
|
66 | 66 | |
|
67 | 67 | * **A two-process architecture.** The Qt console is the first use of a new |
|
68 | 68 | model that splits IPython between a kernel process where code is executed and |
|
69 | 69 | a client that handles user interaction. We plan on also providing terminal |
|
70 | 70 | and web-browser based clients using this infrastructure in future releases. |
|
71 | 71 | This model allows multiple clients to interact with an IPython process |
|
72 | 72 | through a :ref:`well-documented messaging protocol <messaging>` using the |
|
73 | 73 | ZeroMQ networking library. |
|
74 | 74 | |
|
75 | 75 | * **Refactoring.** the entire codebase has been refactored, in order to make it |
|
76 | 76 | more modular and easier to contribute to. IPython has traditionally been a |
|
77 | 77 | hard project to participate because the old codebase was very monolithic. We |
|
78 | 78 | hope this (ongoing) restructuring will make it easier for new developers to |
|
79 | 79 | join us. |
|
80 | 80 | |
|
81 | 81 | * **Vim integration**. Vim can be configured to seamlessly control an IPython |
|
82 | 82 | kernel, see the files in :file:`docs/examples/vim` for the full details. |
|
83 | 83 | This work was done by Paul Ivanov, who prepared a nice `video |
|
84 | 84 | demonstration`__ of the features it provides. |
|
85 | 85 | |
|
86 | 86 | .. __: http://pirsquared.org/blog/2011/07/28/vim-ipython/ |
|
87 | 87 | |
|
88 | 88 | * **Integration into Microsoft Visual Studio**. Thanks to the work of the |
|
89 | 89 | Microsoft `Python Tools for Visual Studio`__ team, this version of IPython |
|
90 | 90 | has been integrated into Microsoft Visual Studio's Python tools open source |
|
91 | 91 | plug-in. `Details below`_ |
|
92 | 92 | |
|
93 | 93 | .. __: http://pytools.codeplex.com |
|
94 | 94 | .. _details below: ms_visual_studio_011_ |
|
95 | 95 | |
|
96 | 96 | * **Improved unicode support**. We closed many bugs related to unicode input. |
|
97 | 97 | |
|
98 | 98 | * **Python 3**. IPython now runs on Python 3.x. See :ref:`python3_011` for |
|
99 | 99 | details. |
|
100 | 100 | |
|
101 | 101 | * **New profile model**. Profiles are now directories that contain all relevant |
|
102 | 102 | information for that session, and thus better isolate IPython use-cases. |
|
103 | 103 | |
|
104 | 104 | * **SQLite storage for history**. All history is now stored in a SQLite |
|
105 | 105 | database, providing support for multiple simultaneous sessions that won't |
|
106 | 106 | clobber each other as well as the ability to perform queries on all stored |
|
107 | 107 | data. |
|
108 | 108 | |
|
109 | 109 | * **New configuration system**. All parts of IPython are now configured via a |
|
110 | 110 | mechanism inspired by the Enthought Traits library. Any configurable element |
|
111 | 111 | can have its attributes set either via files that now use real Python syntax |
|
112 | 112 | or from the command-line. |
|
113 | 113 | |
|
114 | 114 | * **Pasting of code with prompts**. IPython now intelligently strips out input |
|
115 | 115 | prompts , be they plain Python ones (``>>>`` and ``...``) or IPython ones |
|
116 | 116 | (``In [N]:`` and ``...:``). More details :ref:`here <pasting_with_prompts>`. |
|
117 | 117 | |
|
118 | 118 | |
|
119 | 119 | Authors and support |
|
120 | 120 | ------------------- |
|
121 | 121 | |
|
122 | 122 | Over 60 separate authors have contributed to this release, see :ref:`below |
|
123 | 123 | <credits_011>` for a full list. In particular, we want to highlight the |
|
124 | 124 | extremely active participation of two new core team members: Evan Patterson |
|
125 | 125 | implemented the Qt console, and Thomas Kluyver started with our Python 3 port |
|
126 | 126 | and by now has made major contributions to just about every area of IPython. |
|
127 | 127 | |
|
128 | 128 | We are also grateful for the support we have received during this development |
|
129 | 129 | cycle from several institutions: |
|
130 | 130 | |
|
131 | 131 | - `Enthought Inc`__ funded the development of our new Qt console, an effort that |
|
132 | 132 | required developing major pieces of underlying infrastructure, which now |
|
133 | 133 | power not only the Qt console but also our new parallel machinery. We'd like |
|
134 | 134 | to thank Eric Jones and Travis Oliphant for their support, as well as Ilan |
|
135 | 135 | Schnell for his tireless work integrating and testing IPython in the |
|
136 | 136 | `Enthought Python Distribution`_. |
|
137 | 137 | |
|
138 | 138 | .. __: http://enthought.com |
|
139 | 139 | .. _Enthought Python Distribution: http://www.enthought.com/products/epd.php |
|
140 | 140 | |
|
141 | 141 | - Nipy/NIH: funding via the `NiPy project`__ (NIH grant 5R01MH081909-02) helped |
|
142 | 142 | us jumpstart the development of this series by restructuring the entire |
|
143 | 143 | codebase two years ago in a way that would make modular development and |
|
144 | 144 | testing more approachable. Without this initial groundwork, all the new |
|
145 | 145 | features we have added would have been impossible to develop. |
|
146 | 146 | |
|
147 | 147 | .. __: http://nipy.org |
|
148 | 148 | |
|
149 | 149 | - Sage/NSF: funding via the grant `Sage: Unifying Mathematical Software for |
|
150 | 150 | Scientists, Engineers, and Mathematicians`__ (NSF grant DMS-1015114) |
|
151 | 151 | supported a meeting in spring 2011 of several of the core IPython developers |
|
152 | 152 | where major progress was made integrating the last key pieces leading to this |
|
153 | 153 | release. |
|
154 | 154 | |
|
155 | 155 | .. __: http://modular.math.washington.edu/grants/compmath09 |
|
156 | 156 | |
|
157 | 157 | - Microsoft's team working on `Python Tools for Visual Studio`__ developed the |
|
158 | 158 | integraton of IPython into the Python plugin for Visual Studio 2010. |
|
159 | 159 | |
|
160 | 160 | .. __: http://pytools.codeplex.com |
|
161 | 161 | |
|
162 | 162 | - Google Summer of Code: in 2010, we had two students developing prototypes of |
|
163 | 163 | the new machinery that is now maturing in this release: `Omar Zapata`_ and |
|
164 | 164 | `Gerardo GutiΓ©rrez`_. |
|
165 | 165 | |
|
166 | 166 | .. _Omar Zapata: http://ipythonzmq.blogspot.com/2010/08/ipython-zmq-status.html |
|
167 | 167 | .. _Gerardo GutiΓ©rrez: http://ipythonqt.blogspot.com/2010/04/ipython-qt-interface-gsoc-2010-proposal.html> |
|
168 | 168 | |
|
169 | 169 | |
|
170 | 170 | Development summary: moving to Git and Github |
|
171 | 171 | --------------------------------------------- |
|
172 | 172 | |
|
173 | 173 | In April 2010, after `one breakage too many with bzr`__, we decided to move our |
|
174 | 174 | entire development process to Git and Github.com. This has proven to be one of |
|
175 | 175 | the best decisions in the project's history, as the combination of git and |
|
176 | 176 | github have made us far, far more productive than we could be with our previous |
|
177 | 177 | tools. We first converted our bzr repo to a git one without losing history, |
|
178 | 178 | and a few weeks later ported all open Launchpad bugs to github issues with |
|
179 | 179 | their comments mostly intact (modulo some formatting changes). This ensured a |
|
180 | 180 | smooth transition where no development history or submitted bugs were lost. |
|
181 | 181 | Feel free to use our little Launchpad to Github issues `porting script`_ if you |
|
182 | 182 | need to make a similar transition. |
|
183 | 183 | |
|
184 | 184 | .. __: http://mail.scipy.org/pipermail/ipython-dev/2010-April/005944.html |
|
185 | 185 | .. _porting script: https://gist.github.com/835577 |
|
186 | 186 | |
|
187 | 187 | These simple statistics show how much work has been done on the new release, by |
|
188 | 188 | comparing the current code to the last point it had in common with the 0.10 |
|
189 | 189 | series. A huge diff and ~2200 commits make up this cycle:: |
|
190 | 190 | |
|
191 | 191 | git diff $(git merge-base 0.10.2 HEAD) | wc -l |
|
192 | 192 | 288019 |
|
193 | 193 | |
|
194 | 194 | git log $(git merge-base 0.10.2 HEAD)..HEAD --oneline | wc -l |
|
195 | 195 | 2200 |
|
196 | 196 | |
|
197 | 197 | Since our move to github, 511 issues were closed, 226 of which were pull |
|
198 | 198 | requests and 285 regular issues (:ref:`a full list with links |
|
199 | 199 | <issues_list_011>` is available for those interested in the details). Github's |
|
200 | 200 | pull requests are a fantastic mechanism for reviewing code and building a |
|
201 | 201 | shared ownership of the project, and we are making enthusiastic use of it. |
|
202 | 202 | |
|
203 | 203 | .. Note:: |
|
204 | 204 | |
|
205 | 205 | This undercounts the number of issues closed in this development cycle, |
|
206 | 206 | since we only moved to github for issue tracking in May 2010, but we have no |
|
207 | 207 | way of collecting statistics on the number of issues closed in the old |
|
208 | 208 | Launchpad bug tracker prior to that. |
|
209 | 209 | |
|
210 | 210 | |
|
211 | 211 | .. _qtconsole_011: |
|
212 | 212 | |
|
213 | 213 | Qt Console |
|
214 | 214 | ---------- |
|
215 | 215 | |
|
216 | 216 | IPython now ships with a Qt application that feels very much like a terminal, |
|
217 | 217 | but is in fact a rich GUI that runs an IPython client but supports inline |
|
218 | 218 | figures, saving sessions to PDF and HTML, multiline editing with syntax |
|
219 | 219 | highlighting, graphical calltips and much more: |
|
220 | 220 | |
|
221 | 221 | .. figure:: ../_images/qtconsole.png |
|
222 | 222 | :width: 400px |
|
223 | 223 | :alt: IPython Qt console with embedded plots |
|
224 | 224 | :align: center |
|
225 | 225 | :target: ../_images/qtconsole.png |
|
226 | 226 | |
|
227 | 227 | The Qt console for IPython, using inline matplotlib plots. |
|
228 | 228 | |
|
229 | 229 | We hope that many projects will embed this widget, which we've kept |
|
230 | 230 | deliberately very lightweight, into their own environments. In the future we |
|
231 | 231 | may also offer a slightly more featureful application (with menus and other GUI |
|
232 | 232 | elements), but we remain committed to always shipping this easy to embed |
|
233 | 233 | widget. |
|
234 | 234 | |
|
235 | 235 | See the :ref:`Qt console section <qtconsole>` of the docs for a detailed |
|
236 | 236 | description of the console's features and use. |
|
237 | 237 | |
|
238 | 238 | |
|
239 | 239 | .. _parallel_011: |
|
240 | 240 | |
|
241 | 241 | High-level parallel computing with ZeroMQ |
|
242 | 242 | ----------------------------------------- |
|
243 | 243 | |
|
244 | 244 | We have completely rewritten the Twisted-based code for high-level parallel |
|
245 | 245 | computing to work atop our new ZeroMQ architecture. While we realize this will |
|
246 | 246 | break compatibility for a number of users, we hope to make the transition as |
|
247 | 247 | easy as possible with our docs, and we are convinced the change is worth it. |
|
248 | 248 | ZeroMQ provides us with much tighter control over memory, higher performance, |
|
249 | 249 | and its communications are impervious to the Python Global Interpreter Lock |
|
250 | 250 | because they take place in a system-level C++ thread. The impact of the GIL in |
|
251 | 251 | our previous code was something we could simply not work around, given that |
|
252 | 252 | Twisted is itself a Python library. So while Twisted is a very capable |
|
253 | 253 | framework, we think ZeroMQ fits our needs much better and we hope you will find |
|
254 | 254 | the change to be a significant improvement in the long run. |
|
255 | 255 | |
|
256 | 256 | Our manual contains :ref:`a full description of how to use IPython for parallel |
|
257 | 257 | computing <parallel_overview>`, and the `tutorial`__ presented by Min |
|
258 | 258 | Ragan-Kelley at the SciPy 2011 conference provides a hands-on complement to the |
|
259 | 259 | reference docs. |
|
260 | 260 | |
|
261 | 261 | .. __: http://minrk.github.com/scipy-tutorial-2011 |
|
262 | 262 | |
|
263 | 263 | |
|
264 | 264 | Refactoring |
|
265 | 265 | ----------- |
|
266 | 266 | |
|
267 | 267 | As of this release, a signifiant portion of IPython has been refactored. This |
|
268 | 268 | refactoring is founded on a number of new abstractions. The main new classes |
|
269 | 269 | that implement these abstractions are: |
|
270 | 270 | |
|
271 | 271 | * :class:`IPython.utils.traitlets.HasTraits`. |
|
272 | 272 | * :class:`IPython.config.configurable.Configurable`. |
|
273 | 273 | * :class:`IPython.config.application.Application`. |
|
274 | 274 | * :class:`IPython.config.loader.ConfigLoader`. |
|
275 | 275 | * :class:`IPython.config.loader.Config` |
|
276 | 276 | |
|
277 | 277 | We are still in the process of writing developer focused documentation about |
|
278 | 278 | these classes, but for now our :ref:`configuration documentation |
|
279 | 279 | <config_overview>` contains a high level overview of the concepts that these |
|
280 | 280 | classes express. |
|
281 | 281 | |
|
282 | 282 | The biggest user-visible change is likely the move to using the config system |
|
283 | 283 | to determine the command-line arguments for IPython applications. The benefit |
|
284 | 284 | of this is that *all* configurable values in IPython are exposed on the |
|
285 | 285 | command-line, but the syntax for specifying values has changed. The gist is |
|
286 | 286 | that assigning values is pure Python assignment. Simple flags exist for |
|
287 | 287 | commonly used options, these are always prefixed with '--'. |
|
288 | 288 | |
|
289 | 289 | The IPython command-line help has the details of all the options (via |
|
290 | 290 | ``ipythyon --help``), but a simple example should clarify things; the ``pylab`` |
|
291 | 291 | flag can be used to start in pylab mode with the qt4 backend:: |
|
292 | 292 | |
|
293 | 293 | ipython --pylab=qt |
|
294 | 294 | |
|
295 | 295 | which is equivalent to using the fully qualified form:: |
|
296 | 296 | |
|
297 | 297 | ipython --TerminalIPythonApp.pylab=qt |
|
298 | 298 | |
|
299 | 299 | The long-form options can be listed via ``ipython --help-all``. |
|
300 | 300 | |
|
301 | 301 | |
|
302 | 302 | ZeroMQ architecture |
|
303 | 303 | ------------------- |
|
304 | 304 | |
|
305 | 305 | There is a new GUI framework for IPython, based on a client-server model in |
|
306 | 306 | which multiple clients can communicate with one IPython kernel, using the |
|
307 | 307 | ZeroMQ messaging framework. There is already a Qt console client, which can |
|
308 | 308 | be started by calling ``ipython qtconsole``. The protocol is :ref:`documented |
|
309 | 309 | <messaging>`. |
|
310 | 310 | |
|
311 | 311 | The parallel computing framework has also been rewritten using ZMQ. The |
|
312 | 312 | protocol is described :ref:`here <parallel_messages>`, and the code is in the |
|
313 | 313 | new :mod:`IPython.parallel` module. |
|
314 | 314 | |
|
315 | 315 | .. _python3_011: |
|
316 | 316 | |
|
317 | 317 | Python 3 support |
|
318 | 318 | ---------------- |
|
319 | 319 | |
|
320 | 320 | A Python 3 version of IPython has been prepared. For the time being, this is |
|
321 | 321 | maintained separately and updated from the main codebase. Its code can be found |
|
322 | 322 | `here <https://github.com/ipython/ipython-py3k>`_. The parallel computing |
|
323 | 323 | components are not perfect on Python3, but most functionality appears to be |
|
324 | 324 | working. As this work is evolving quickly, the best place to find updated |
|
325 | 325 | information about it is our `Python 3 wiki page`__. |
|
326 | 326 | |
|
327 | 327 | .. __: http://wiki.ipython.org/index.php?title=Python_3 |
|
328 | 328 | |
|
329 | 329 | |
|
330 | 330 | Unicode |
|
331 | 331 | ------- |
|
332 | 332 | |
|
333 | 333 | Entering non-ascii characters in unicode literals (``u"β¬ΓΈ"``) now works |
|
334 | 334 | properly on all platforms. However, entering these in byte/string literals |
|
335 | 335 | (``"β¬ΓΈ"``) will not work as expected on Windows (or any platform where the |
|
336 | 336 | terminal encoding is not UTF-8, as it typically is for Linux & Mac OS X). You |
|
337 | 337 | can use escape sequences (``"\xe9\x82"``) to get bytes above 128, or use |
|
338 | 338 | unicode literals and encode them. This is a limitation of Python 2 which we |
|
339 | 339 | cannot easily work around. |
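
For instance, such a byte string can be built by encoding a unicode literal explicitly
(a small sketch; the exact bytes depend on the encoding you choose):

.. sourcecode:: python

    s = u"€ø".encode("utf-8")  # explicit encoding works regardless of the terminal encoding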
|
340 | 340 | |
|
341 | 341 | .. _ms_visual_studio_011: |
|
342 | 342 | |
|
343 | 343 | Integration with Microsoft Visual Studio |
|
344 | 344 | ---------------------------------------- |
|
345 | 345 | |
|
346 | 346 | IPython can be used as the interactive shell in the `Python plugin for |
|
347 | 347 | Microsoft Visual Studio`__, as seen here: |
|
348 | 348 | |
|
349 | 349 | .. figure:: ../_images/ms_visual_studio.png |
|
350 | 350 | :width: 500px |
|
351 | 351 | :alt: IPython console embedded in Microsoft Visual Studio. |
|
352 | 352 | :align: center |
|
353 | 353 | :target: ../_images/ms_visual_studio.png |
|
354 | 354 | |
|
355 | 355 | IPython console embedded in Microsoft Visual Studio. |
|
356 | 356 | |
|
357 | 357 | The Microsoft team developing this currently has a release candidate out using |
|
358 | 358 | IPython 0.11. We will continue to collaborate with them to ensure that as they |
|
359 | 359 | approach their final release date, the integration with IPython remains smooth. |
|
360 | 360 | We'd like to thank Dino Viehland and Shahrokh Mortazavi for the work they have |
|
361 | 361 | done towards this feature, as well as Wenming Ye for his support of our WinHPC |
|
362 | 362 | capabilities. |
|
363 | 363 | |
|
364 | 364 | .. __: http://pytools.codeplex.com |
|
365 | 365 | |
|
366 | 366 | |
|
367 | 367 | Additional new features |
|
368 | 368 | ----------------------- |
|
369 | 369 | |
|
370 | 370 | * Added ``Bytes`` traitlet, removing ``Str``. All 'string' traitlets should |
|
371 | 371 | either be ``Unicode`` if a real string, or ``Bytes`` if a C-string. This |
|
372 | 372 | removes ambiguity and helps the Python 3 transition. |
|
373 | 373 | |
|
374 | 374 | * New magic ``%loadpy`` loads a python file from disk or web URL into |
|
375 | 375 | the current input buffer. |
|
376 | 376 | |
|
377 | 377 | * New magic ``%pastebin`` for sharing code via the 'Lodge it' pastebin. |
|
378 | 378 | |
|
379 | 379 | * New magic ``%precision`` for controlling float and numpy pretty printing. |
|
380 | 380 | |
|
381 | 381 | * IPython applications initiate logging, so any object can gain access to |
|
382 | 382 | a the logger of the currently running Application with: |
|
383 | 383 | |
|
384 | 384 | .. sourcecode:: python |
|
385 | 385 | |
|
386 | 386 | from IPython.config.application import Application |
|
387 | 387 | logger = Application.instance().log |
|
388 | 388 | |
|
389 | 389 | * You can now get help on an object halfway through typing a command. For |
|
390 | 390 | instance, typing ``a = zip?`` shows the details of :func:`zip`. It also |
|
391 | 391 | leaves the command at the next prompt so you can carry on with it. |
|
392 | 392 | |
|
393 | 393 | * The input history is now written to an SQLite database. The API for |
|
394 | 394 | retrieving items from the history has also been redesigned. |
|
395 | 395 | |
|
396 | 396 | * The :mod:`IPython.extensions.pretty` extension has been moved out of |
|
397 | 397 | quarantine and fully updated to the new extension API. |
|
398 | 398 | |
|
399 | 399 | * New magics for loading/unloading/reloading extensions have been added: |
|
400 | 400 | ``%load_ext``, ``%unload_ext`` and ``%reload_ext``. |
|
401 | 401 | |
|
402 | 402 | * The configuration system and configuration files are brand new. See the |
|
403 | 403 | configuration system :ref:`documentation <config_index>` for more details. |
|
404 | 404 | |
|
405 | 405 | * The :class:`~IPython.core.interactiveshell.InteractiveShell` class is now a |
|
406 | 406 | :class:`~IPython.config.configurable.Configurable` subclass and has traitlets |
|
407 | 407 | that determine the defaults and runtime environment. The ``__init__`` method |
|
408 | 408 | has also been refactored so this class can be instantiated and run without |
|
409 | 409 | the old :mod:`ipmaker` module. |
|
410 | 410 | |
|
411 | 411 | * The methods of :class:`~IPython.core.interactiveshell.InteractiveShell` have |
|
412 | 412 | been organized into sections to make it easier to turn more sections |
|
413 | 413 | of functionality into components. |
|
414 | 414 | |
|
415 | 415 | * The embedded shell has been refactored into a truly standalone subclass of |
|
416 | 416 | :class:`InteractiveShell` called :class:`InteractiveShellEmbed`. All |
|
417 | 417 | embedding logic has been taken out of the base class and put into the |
|
418 | 418 | embedded subclass. |
|
419 | 419 | |
|
420 | 420 | * Added methods of :class:`~IPython.core.interactiveshell.InteractiveShell` to |
|
421 | 421 | help it cleanup after itself. The :meth:`cleanup` method controls this. We |
|
422 | 422 | couldn't do this in :meth:`__del__` because we have cycles in our object |
|
423 | 423 | graph that prevent it from being called. |
|
424 | 424 | |
|
425 | 425 | * Created a new module :mod:`IPython.utils.importstring` for resolving |
|
426 | 426 | strings like ``foo.bar.Bar`` to the actual class. |
|
427 | 427 | |
|
428 | 428 | * Completely refactored the :mod:`IPython.core.prefilter` module into |
|
429 | 429 | :class:`~IPython.config.configurable.Configurable` subclasses. Added a new |
|
430 | 430 | layer into the prefilter system, called "transformations" that all new |
|
431 | 431 | prefilter logic should use (rather than the older "checker/handler" |
|
432 | 432 | approach). |
|
433 | 433 | |
|
434 | 434 | * Aliases are now components (:mod:`IPython.core.alias`). |
|
435 | 435 | |
|
436 | 436 | * New top level :func:`~IPython.frontend.terminal.embed.embed` function that can |
|
437 | 437 | be called to embed IPython at any place in user's code. On the first call it |
|
438 | 438 | will create an :class:`~IPython.frontend.terminal.embed.InteractiveShellEmbed` |
|
439 | 439 | instance and call it. In later calls, it just calls the previously created |
|
440 | 440 | :class:`~IPython.frontend.terminal.embed.InteractiveShellEmbed`. |
|
441 | 441 | |
|
442 | 442 | * Created a configuration system (:mod:`IPython.config.configurable`) that is |
|
443 | 443 | based on :mod:`IPython.utils.traitlets`. Configurables are arranged into a |
|
444 | 444 | runtime containment tree (not inheritance) that i) automatically propagates |
|
445 | 445 | configuration information and ii) allows singletons to discover each other in |
|
446 | 446 | a loosely coupled manner. In the future all parts of IPython will be |
|
447 | 447 | subclasses of :class:`~IPython.config.configurable.Configurable`. All IPython |
|
448 | 448 | developers should become familiar with the config system. |
|
449 | 449 | |
|
450 | 450 | * Created a new :class:`~IPython.config.loader.Config` for holding |
|
451 | 451 | configuration information. This is a dict like class with a few extras: i) |
|
452 | 452 | it supports attribute style access, ii) it has a merge function that merges |
|
453 | 453 | two :class:`~IPython.config.loader.Config` instances recursively and iii) it |
|
454 | 454 | will automatically create sub-:class:`~IPython.config.loader.Config` |
|
455 | 455 | instances for attributes that start with an uppercase character. |
|
456 | 456 | |
|
457 | 457 | * Created new configuration loaders in :mod:`IPython.config.loader`. These |
|
458 | 458 | loaders provide a unified loading interface for all configuration |
|
459 | 459 | information including command line arguments and configuration files. We |
|
460 | 460 | have two default implementations based on :mod:`argparse` and plain python |
|
461 | 461 | files. These are used to implement the new configuration system. |
|
462 | 462 | |
|
463 | 463 | * Created a top-level :class:`Application` class in |
|
464 | 464 | :mod:`IPython.core.application` that is designed to encapsulate the starting |
|
465 | 465 | of any basic Python program. An application loads and merges all the |
|
466 | 466 | configuration objects, constructs the main application, configures and |
|
467 | 467 | initiates logging, and creates and configures any :class:`Configurable` |
|
468 | 468 | instances and then starts the application running. An extended |
|
469 | 469 | :class:`BaseIPythonApplication` class adds logic for handling the |
|
470 | 470 | IPython directory as well as profiles, and all IPython entry points |
|
471 | 471 | extend it. |
|
472 | 472 | |
|
473 | 473 | * The :class:`Type` and :class:`Instance` traitlets now handle classes given |
|
474 | 474 | as strings, like ``foo.bar.Bar``. This is needed for forward declarations. |
|
475 | 475 | But, this was implemented in a careful way so that string to class |
|
476 | 476 | resolution is done at a single point, when the parent |
|
477 | 477 | :class:`~IPython.utils.traitlets.HasTraitlets` is instantiated. |
|
478 | 478 | |
|
479 | 479 | * :mod:`IPython.utils.ipstruct` has been refactored to be a subclass of |
|
480 | 480 | dict. It also now has full docstrings and doctests. |
|
481 | 481 | |
|
482 | 482 | * Created a Traits like implementation in :mod:`IPython.utils.traitlets`. This |
|
483 | 483 | is a pure Python, lightweight version of a library that is similar to |
|
484 | 484 | Enthought's Traits project, but has no dependencies on Enthought's code. We |
|
485 | 485 | are using this for validation, defaults and notification in our new component |
|
486 | 486 | system. Although it is not 100% API compatible with Enthought's Traits, we |
|
487 | 487 | plan on moving in this direction so that eventually our implementation could |
|
488 | 488 | be replaced by a (yet to exist) pure Python version of Enthought Traits. |
|
489 | 489 | |
|
490 | 490 | * Added a new module :mod:`IPython.lib.inputhook` to manage the integration |
|
491 | 491 | with GUI event loops using `PyOS_InputHook`. See the docstrings in this |
|
492 | 492 | module or the main IPython docs for details. |
|
493 | 493 | |
|
494 | 494 | * For users, GUI event loop integration is now handled through the new |
|
495 | 495 | :command:`%gui` magic command. Type ``%gui?`` at an IPython prompt for |
|
496 | 496 | documentation. |
|
497 | 497 | |
|
498 | 498 | * For developers :mod:`IPython.lib.inputhook` provides a simple interface |
|
499 | 499 | for managing the event loops in their interactive GUI applications. |
|
Examples can be found in our :file:`examples/lib` directory.
|
501 | 501 | |
|
502 | 502 | Backwards incompatible changes |
|
503 | 503 | ------------------------------ |
|
504 | 504 | |
|
505 | 505 | * The Twisted-based :mod:`IPython.kernel` has been removed, and completely |
|
506 | 506 | rewritten as :mod:`IPython.parallel`, using ZeroMQ. |
|
507 | 507 | |
|
508 | 508 | * Profiles are now directories. Instead of a profile being a single config file, |
|
509 | 509 | profiles are now self-contained directories. By default, profiles get their |
|
510 | 510 | own IPython history, log files, and everything. To create a new profile, do |
|
511 | 511 | ``ipython profile create <name>``. |
|
512 | 512 | |
|
513 | 513 | * All IPython applications have been rewritten to use |
|
514 | 514 | :class:`~IPython.config.loader.KeyValueConfigLoader`. This means that |
|
515 | 515 | command-line options have changed. Now, all configurable values are accessible |
|
516 | 516 | from the command-line with the same syntax as in a configuration file. |
|
517 | 517 | |
|
518 | 518 | * The command line options ``-wthread``, ``-qthread`` and |
|
519 | 519 | ``-gthread`` have been removed. Use ``--gui=wx``, ``--gui=qt``, ``--gui=gtk`` |
|
520 | 520 | instead. |
|
521 | 521 | |
|
522 | 522 | * The extension loading functions have been renamed to |
|
523 | 523 | :func:`load_ipython_extension` and :func:`unload_ipython_extension`. |
|
524 | 524 | |
|
525 | 525 | * :class:`~IPython.core.interactiveshell.InteractiveShell` no longer takes an |
|
526 | 526 | ``embedded`` argument. Instead just use the |
|
527 | 527 | :class:`~IPython.core.interactiveshell.InteractiveShellEmbed` class. |
|
528 | 528 | |
|
529 | 529 | * ``__IPYTHON__`` is no longer injected into ``__builtin__``. |
|
530 | 530 | |
|
531 | 531 | * :meth:`Struct.__init__` no longer takes `None` as its first argument. It |
|
532 | 532 | must be a :class:`dict` or :class:`Struct`. |
|
533 | 533 | |
|
534 | 534 | * :meth:`~IPython.core.interactiveshell.InteractiveShell.ipmagic` has been |
|
535 | 535 | renamed :meth:`~IPython.core.interactiveshell.InteractiveShell.magic.` |
|
536 | 536 | |
|
537 | 537 | * The functions :func:`ipmagic` and :func:`ipalias` have been removed from |
|
538 | 538 | :mod:`__builtins__`. |
|
539 | 539 | |
|
540 | 540 | * The references to the global |
|
541 | 541 | :class:`~IPython.core.interactivehell.InteractiveShell` instance (``_ip``, and |
|
542 | 542 | ``__IP``) have been removed from the user's namespace. They are replaced by a |
|
543 | 543 | new function called :func:`get_ipython` that returns the current |
|
544 | 544 | :class:`~IPython.core.interactiveshell.InteractiveShell` instance. This |
|
545 | 545 | function is injected into the user's namespace and is now the main way of |
|
546 | 546 | accessing the running IPython. |
|
547 | 547 | |
|
548 | 548 | * Old style configuration files :file:`ipythonrc` and :file:`ipy_user_conf.py` |
|
549 | 549 | are no longer supported. Users should migrate there configuration files to |
|
550 | 550 | the new format described :ref:`here <config_overview>` and :ref:`here |
|
551 | 551 | <configuring_ipython>`. |
|
552 | 552 | |
|
553 | 553 | * The old IPython extension API that relied on :func:`ipapi` has been |
|
554 | 554 | completely removed. The new extension API is described :ref:`here |
|
555 | 555 | <configuring_ipython>`. |
|
556 | 556 | |
|
557 | 557 | * Support for ``qt3`` has been dropped. Users who need this should use |
|
558 | 558 | previous versions of IPython. |
|
559 | 559 | |
|
560 | 560 | * Removed :mod:`shellglobals` as it was obsolete. |
|
561 | 561 | |
|
562 | 562 | * Removed all the threaded shells in :mod:`IPython.core.shell`. These are no |
|
563 | 563 | longer needed because of the new capabilities in |
|
564 | 564 | :mod:`IPython.lib.inputhook`. |
|
565 | 565 | |
|
566 | 566 | * New top-level sub-packages have been created: :mod:`IPython.core`, |
|
567 | 567 | :mod:`IPython.lib`, :mod:`IPython.utils`, :mod:`IPython.deathrow`, |
|
568 | 568 | :mod:`IPython.quarantine`. All existing top-level modules have been |
|
569 | 569 | moved to appropriate sub-packages. All internal import statements |
|
570 | 570 | have been updated and tests have been added. The build system (setup.py |
|
571 | 571 | and friends) have been updated. See :ref:`this section <module_reorg>` of the |
|
572 | 572 | documentation for descriptions of these new sub-packages. |
|
573 | 573 | |
|
574 | 574 | * :mod:`IPython.ipapi` has been moved to :mod:`IPython.core.ipapi`. |
|
575 | 575 | :mod:`IPython.Shell` and :mod:`IPython.iplib` have been split and removed as |
|
576 | 576 | part of the refactor. |
|
577 | 577 | |
|
578 | 578 | * :mod:`Extensions` has been moved to :mod:`extensions` and all existing |
|
579 | 579 | extensions have been moved to either :mod:`IPython.quarantine` or |
|
580 | 580 | :mod:`IPython.deathrow`. :mod:`IPython.quarantine` contains modules that we |
|
581 | 581 | plan on keeping but that need to be updated. :mod:`IPython.deathrow` contains |
|
582 | 582 | modules that are either dead or that should be maintained as third party |
|
583 | 583 | libraries. More details about this can be found :ref:`here <module_reorg>`. |
|
584 | 584 | |
|
585 | 585 | * Previous IPython GUIs in :mod:`IPython.frontend` and :mod:`IPython.gui` are |
|
586 | 586 | likely broken, and have been removed to :mod:`IPython.deathrow` because of the |
|
587 | 587 | refactoring in the core. With proper updates, these should still work. |
|
588 | 588 | |
|
589 | 589 | |
|
590 | 590 | Known Regressions |
|
591 | 591 | ----------------- |
|
592 | 592 | |
|
593 | 593 | We do our best to improve IPython, but there are some known regressions in 0.11 |
|
594 | 594 | relative to 0.10.2. First of all, there are features that have yet to be |
|
595 | 595 | ported to the new APIs, and in order to ensure that all of the installed code |
|
596 | 596 | runs for our users, we have moved them to two separate directories in the |
|
597 | 597 | source distribution, `quarantine` and `deathrow`. Finally, we have some other |
|
598 | 598 | miscellaneous regressions that we hope to fix as soon as possible. We now |
|
599 | 599 | describe all of these in more detail. |
|
600 | 600 | |
|
601 | 601 | Quarantine |
|
602 | 602 | ~~~~~~~~~~ |
|
603 | 603 | |
|
604 | 604 | These are tools and extensions that we consider relatively easy to update to |
|
605 | 605 | the new classes and APIs, but that we simply haven't had time for. Any user |
|
606 | 606 | who is interested in one of these is encouraged to help us by porting it and |
|
607 | 607 | submitting a pull request on our `development site`_. |
|
608 | 608 | |
|
609 | 609 | .. _development site: http://github.com/ipython/ipython |
|
610 | 610 | |
|
611 | 611 | Currently, the quarantine directory contains:: |
|
612 | 612 | |
|
613 | 613 | clearcmd.py ipy_fsops.py ipy_signals.py |
|
614 | 614 | envpersist.py ipy_gnuglobal.py ipy_synchronize_with.py |
|
615 | 615 | ext_rescapture.py ipy_greedycompleter.py ipy_system_conf.py |
|
616 | 616 | InterpreterExec.py ipy_jot.py ipy_which.py |
|
617 | 617 | ipy_app_completers.py ipy_lookfor.py ipy_winpdb.py |
|
618 | 618 | ipy_autoreload.py ipy_profile_doctest.py ipy_workdir.py |
|
619 | 619 | ipy_completers.py ipy_pydb.py jobctrl.py |
|
620 | 620 | ipy_editors.py ipy_rehashdir.py ledit.py |
|
621 | 621 | ipy_exportdb.py ipy_render.py pspersistence.py |
|
622 | 622 | ipy_extutil.py ipy_server.py win32clip.py |
|
623 | 623 | |
|
624 | 624 | Deathrow |
|
625 | 625 | ~~~~~~~~ |
|
626 | 626 | |
|
627 | 627 | These packages may be harder to update or make most sense as third-party |
|
628 | 628 | libraries. Some of them are completely obsolete and have been already replaced |
|
629 | 629 | by better functionality (we simply haven't had the time to carefully weed them |
|
630 | 630 | out so they are kept here for now). Others simply require fixes to code that |
|
631 | 631 | the current core team may not be familiar with. If a tool you were used to is |
|
632 | 632 | included here, we encourage you to contact the dev list and we can discuss |
|
633 | 633 | whether it makes sense to keep it in IPython (if it can be maintained). |
|
634 | 634 | |
|
635 | 635 | Currently, the deathrow directory contains:: |
|
636 | 636 | |
|
637 | 637 | astyle.py ipy_defaults.py ipy_vimserver.py |
|
638 | 638 | dtutils.py ipy_kitcfg.py numeric_formats.py |
|
639 | 639 | Gnuplot2.py ipy_legacy.py numutils.py |
|
640 | 640 | GnuplotInteractive.py ipy_p4.py outputtrap.py |
|
641 | 641 | GnuplotRuntime.py ipy_profile_none.py PhysicalQInput.py |
|
642 | 642 | ibrowse.py ipy_profile_numpy.py PhysicalQInteractive.py |
|
643 | 643 | igrid.py ipy_profile_scipy.py quitter.py* |
|
644 | 644 | ipipe.py ipy_profile_sh.py scitedirector.py |
|
645 | 645 | iplib.py ipy_profile_zope.py Shell.py |
|
646 | 646 | ipy_constants.py ipy_traits_completer.py twshell.py |
|
647 | 647 | |
|
648 | 648 | |
|
649 | 649 | Other regressions |
|
650 | 650 | ~~~~~~~~~~~~~~~~~ |
|
651 | 651 | |
|
652 | 652 | * The machinery that adds functionality to the 'sh' profile for using IPython |
|
653 | 653 | as your system shell has not been updated to use the new APIs. As a result, |
|
654 | 654 | only the aesthetic (prompt) changes are still implemented. We intend to fix |
|
655 | 655 | this by 0.12. Tracked as issue 547_. |
|
656 | 656 | |
|
657 | 657 | .. _547: https://github.com/ipython/ipython/issues/547 |
|
658 | 658 | |
|
659 | 659 | * The installation of scripts on Windows was broken without setuptools, so we |
|
660 | 660 | now depend on setuptools on Windows. We hope to fix setuptools-less |
|
661 | 661 | installation, and then remove the setuptools dependency. Issue 539_. |
|
662 | 662 | |
|
663 | 663 | .. _539: https://github.com/ipython/ipython/issues/539 |
|
664 | 664 | |
|
665 | 665 | * The directory history `_dh` is not saved between sessions. Issue 634_. |
|
666 | 666 | |
|
667 | 667 | .. _634: https://github.com/ipython/ipython/issues/634 |
|
668 | 668 | |
|
669 | 669 | |
|
670 | 670 | Removed Features |
|
671 | 671 | ---------------- |
|
672 | 672 | |
|
673 | 673 | As part of the updating of IPython, we have removed a few features for the |
|
674 | 674 | purposes of cleaning up the codebase and interfaces. These removals are |
|
675 | 675 | permanent, but for any item listed below, equivalent functionality is |
|
676 | 676 | available. |
|
677 | 677 | |
|
678 | 678 | * The magics Exit and Quit have been dropped as ways to exit IPython. Instead, |
|
679 | 679 | the lowercase forms of both work either as a bare name (``exit``) or a |
|
680 | 680 | function call (``exit()``). You can assign these to other names using |
|
681 | 681 | exec_lines in the config file. |
|
682 | 682 | |
|
683 | 683 | |
|
684 | 684 | .. _credits_011: |
|
685 | 685 | |
|
686 | 686 | Credits |
|
687 | 687 | ------- |
|
688 | 688 | |
|
689 | 689 | Many users and developers contributed code, features, bug reports and ideas to |
|
690 | 690 | this release. Please do not hesitate in contacting us if we've failed to |
|
691 | 691 | acknowledge your contribution here. In particular, for this release we have |
|
692 | 692 | contribution from the following people, a mix of new and regular names (in |
|
693 | 693 | alphabetical order by first name): |
|
694 | 694 | |
|
695 | 695 | * Aenugu Sai Kiran Reddy <saikrn08-at-gmail.com> |
|
696 | 696 | * andy wilson <wilson.andrew.j+github-at-gmail.com> |
|
697 | 697 | * Antonio Cuni <antocuni> |
|
698 | 698 | * Barry Wark <barrywark-at-gmail.com> |
|
699 | 699 | * Beetoju Anuradha <anu.beethoju-at-gmail.com> |
|
700 | 700 | * Benjamin Ragan-Kelley <minrk-at-Mercury.local> |
|
701 | 701 | * Brad Reisfeld |
|
702 | 702 | * Brian E. Granger <ellisonbg-at-gmail.com> |
|
703 | 703 | * Christoph Gohlke <cgohlke-at-uci.edu> |
|
704 | 704 | * Cody Precord |
|
705 | 705 | * dan.milstein |
|
706 | 706 | * Darren Dale <dsdale24-at-gmail.com> |
|
707 | 707 | * Dav Clark <davclark-at-berkeley.edu> |
|
708 | 708 | * David Warde-Farley <wardefar-at-iro.umontreal.ca> |
|
709 | 709 | * epatters <ejpatters-at-gmail.com> |
|
710 | 710 | * epatters <epatters-at-caltech.edu> |
|
711 | 711 | * epatters <epatters-at-enthought.com> |
|
712 | 712 | * Eric Firing <efiring-at-hawaii.edu> |
|
713 | 713 | * Erik Tollerud <erik.tollerud-at-gmail.com> |
|
714 | 714 | * Evan Patterson <epatters-at-enthought.com> |
|
715 | 715 | * Fernando Perez <Fernando.Perez-at-berkeley.edu> |
|
716 | 716 | * Gael Varoquaux <gael.varoquaux-at-normalesup.org> |
|
717 | 717 | * Gerardo <muzgash-at-Muzpelheim> |
|
718 | 718 | * Jason Grout <jason.grout-at-drake.edu> |
|
719 | 719 | * John Hunter <jdh2358-at-gmail.com> |
|
720 | 720 | * Jens Hedegaard Nielsen <jenshnielsen-at-gmail.com> |
|
721 | 721 | * Johann Cohen-Tanugi <johann.cohentanugi-at-gmail.com> |
|
* Jörgen Stenarson <jorgen.stenarson-at-bostream.nu>
|
723 | 723 | * Justin Riley <justin.t.riley-at-gmail.com> |
|
724 | 724 | * Kiorky |
|
725 | 725 | * Laurent Dufrechou <laurent.dufrechou-at-gmail.com> |
|
726 | 726 | * Luis Pedro Coelho <lpc-at-cmu.edu> |
|
727 | 727 | * Mani chandra <mchandra-at-iitk.ac.in> |
|
728 | 728 | * Mark E. Smith |
|
729 | 729 | * Mark Voorhies <mark.voorhies-at-ucsf.edu> |
|
730 | 730 | * Martin Spacek <git-at-mspacek.mm.st> |
|
731 | 731 | * Michael Droettboom <mdroe-at-stsci.edu> |
|
732 | 732 | * MinRK <benjaminrk-at-gmail.com> |
|
733 | 733 | * muzuiget <muzuiget-at-gmail.com> |
|
734 | 734 | * Nick Tarleton <nick-at-quixey.com> |
|
735 | 735 | * Nicolas Rougier <Nicolas.rougier-at-inria.fr> |
|
736 | 736 | * Omar Andres Zapata Mesa <andresete.chaos-at-gmail.com> |
|
737 | 737 | * Paul Ivanov <pivanov314-at-gmail.com> |
|
738 | 738 | * Pauli Virtanen <pauli.virtanen-at-iki.fi> |
|
739 | 739 | * Prabhu Ramachandran |
|
740 | 740 | * Ramana <sramana9-at-gmail.com> |
|
741 | 741 | * Robert Kern <robert.kern-at-gmail.com> |
|
742 | 742 | * Sathesh Chandra <satheshchandra88-at-gmail.com> |
|
743 | 743 | * Satrajit Ghosh <satra-at-mit.edu> |
|
744 | 744 | * Sebastian Busch |
|
745 | 745 | * Skipper Seabold <jsseabold-at-gmail.com> |
|
746 | 746 | * Stefan van der Walt <bzr-at-mentat.za.net> |
|
747 | 747 | * Stephan Peijnik <debian-at-sp.or.at> |
|
748 | 748 | * Steven Bethard |
|
749 | 749 | * Thomas Kluyver <takowl-at-gmail.com> |
|
750 | 750 | * Thomas Spura <tomspur-at-fedoraproject.org> |
|
751 | 751 | * Tom Fetherston <tfetherston-at-aol.com> |
|
752 | 752 | * Tom MacWright |
|
753 | 753 | * tzanko |
|
754 | 754 | * vankayala sowjanya <hai.sowjanya-at-gmail.com> |
|
755 | 755 | * Vivian De Smedt <vds2212-at-VIVIAN> |
|
756 | 756 | * Ville M. Vainio <vivainio-at-gmail.com> |
|
757 | 757 | * Vishal Vatsa <vishal.vatsa-at-gmail.com> |
|
758 | 758 | * Vishnu S G <sgvishnu777-at-gmail.com> |
|
759 | 759 | * Walter Doerwald <walter-at-livinglogic.de> |
|
760 | 760 | |
|
761 | 761 | .. note:: |
|
762 | 762 | |
|
763 | 763 | This list was generated with the output of |
|
764 | 764 | ``git log dev-0.11 HEAD --format='* %aN <%aE>' | sed 's/@/\-at\-/' | sed 's/<>//' | sort -u`` |
|
765 | 765 | after some cleanup. If you should be on this list, please add yourself. |
@@ -1,673 +1,673 b'' | |||
|
1 | 1 | ============= |
|
2 | 2 | 0.13 Series |
|
3 | 3 | ============= |
|
4 | 4 | |
|
5 | 5 | Release 0.13 |
|
6 | 6 | ============ |
|
7 | 7 | |
|
8 | 8 | IPython 0.13 contains several major new features, as well as a large amount of |
|
9 | 9 | bug and regression fixes. The previous version (0.12) was released on December |
|
10 | 10 | 19 2011, and in this development cycle we had: |
|
11 | 11 | |
|
12 | 12 | - ~6 months of work. |
|
13 | 13 | - 373 pull requests merged. |
|
14 | 14 | - 742 issues closed (non-pull requests). |
|
15 | 15 | - contributions from 62 authors. |
|
16 | 16 | - 1760 commits. |
|
17 | 17 | - a diff of 114226 lines. |
|
18 | 18 | |
|
19 | 19 | The amount of work included in this release is so large, that we can only cover |
|
20 | 20 | here the main highlights; please see our :ref:`detailed release statistics |
|
21 | 21 | <issues_list_013>` for links to every issue and pull request closed on GitHub |
|
22 | 22 | as well as a full list of individual contributors. |
|
23 | 23 | |
|
24 | 24 | |
|
25 | 25 | Major Notebook improvements: new user interface and more |
|
26 | 26 | -------------------------------------------------------- |
|
27 | 27 | |
|
28 | 28 | The IPython Notebook, which has proven since its release to be wildly popular, |
|
29 | 29 | has seen a massive amount of work in this release cycle, leading to a |
|
30 | 30 | significantly improved user experience as well as many new features. |
|
31 | 31 | |
|
32 | 32 | The first user-visible change is a reorganization of the user interface; the |
|
33 | 33 | left panel has been removed and was replaced by a real menu system and a |
|
34 | 34 | toolbar with icons. Both the toolbar and the header above the menu can be |
|
35 | 35 | collapsed to leave an unobstructed working area: |
|
36 | 36 | |
|
37 | 37 | .. image:: ../_images/ipy_013_notebook_spectrogram.png |
|
38 | 38 | :width: 460px |
|
39 | 39 | :alt: New user interface for Notebook |
|
40 | 40 | :align: center |
|
41 | 41 | :target: ../_images/ipy_013_notebook_spectrogram.png |
|
42 | 42 | |
|
43 | 43 | The notebook handles very long outputs much better than before (this was a |
|
44 | 44 | serious usability issue when running processes that generated massive amounts |
|
45 | 45 | of output). Now, in the presence of outputs longer than ~100 lines, the |
|
46 | 46 | notebook will automatically collapse the output to a scrollable area, and the entire left
|
47 | 47 | part of this area controls the display: one click in this area will expand the |
|
48 | 48 | output region completely, and a double-click will hide it completely. This |
|
49 | 49 | figure shows both the scrolled and hidden modes: |
|
50 | 50 | |
|
51 | 51 | .. image:: ../_images/ipy_013_notebook_long_out.png |
|
52 | 52 | :width: 460px |
|
53 | 53 | :alt: Scrolling and hiding of long output in the notebook. |
|
54 | 54 | :align: center |
|
55 | 55 | :target: ../_images/ipy_013_notebook_long_out.png |
|
56 | 56 | |
|
57 | 57 | .. note:: |
|
58 | 58 | |
|
59 | 59 | The auto-folding of long outputs is disabled in Firefox due to bugs in its |
|
60 | 60 | scrolling behavior. See :ghpull:`2047` for details. |
|
61 | 61 | |
|
62 | 62 | Uploading notebooks to the dashboard is now easier: in addition to drag and |
|
63 | 63 | drop (which can be finicky sometimes), you can now click on the upload text and |
|
64 | 64 | use a regular file dialog box to select notebooks to upload. Furthermore, the |
|
65 | 65 | notebook dashboard now auto-refreshes its contents and offers buttons to shut |
|
66 | 66 | down any running kernels (:ghpull:`1739`): |
|
67 | 67 | |
|
68 | 68 | .. image:: ../_images/ipy_013_dashboard.png |
|
69 | 69 | :width: 460px |
|
70 | 70 | :alt: Improved dashboard |
|
71 | 71 | :align: center |
|
72 | 72 | :target: ../_images/ipy_013_dashboard.png |
|
73 | 73 | |
|
74 | 74 | |
|
75 | 75 | Cluster management |
|
76 | 76 | ~~~~~~~~~~~~~~~~~~ |
|
77 | 77 | |
|
78 | 78 | The notebook dashboard can now also start and stop clusters, thanks to a new |
|
79 | 79 | tab in the dashboard user interface: |
|
80 | 80 | |
|
81 | 81 | .. image:: ../_images/ipy_013_dashboard_cluster.png |
|
82 | 82 | :width: 460px |
|
83 | 83 | :alt: Cluster management from the notebook dashboard |
|
84 | 84 | :align: center |
|
85 | 85 | :target: ../_images/ipy_013_dashboard_cluster.png |
|
86 | 86 | |
|
87 | 87 | This interface allows you, for each profile you have configured, to start and stop
|
88 | 88 | a cluster (and optionally override the default number of engines corresponding |
|
89 | 89 | to that configuration). While this hides all error reporting, once you have a |
|
90 | 90 | configuration that you know works smoothly, it is a very convenient interface |
|
91 | 91 | for controlling your parallel resources. |
|
92 | 92 | |
|
93 | 93 | |
|
94 | 94 | New notebook format |
|
95 | 95 | ~~~~~~~~~~~~~~~~~~~ |
|
96 | 96 | |
|
97 | 97 | The notebooks saved now use version 3 of our format, which supports heading |
|
98 | 98 | levels as well as the concept of 'raw' text cells that are not rendered as |
|
99 | 99 | Markdown. These will be useful with converters_ we are developing, to pass raw |
|
100 | 100 | markup (say LaTeX). That conversion code is still under heavy development and |
|
101 | 101 | not quite ready for prime time, but we welcome help on this front so that we |
|
102 | 102 | can merge it for full production use as soon as possible. |
|
103 | 103 | |
|
104 | 104 | .. _converters: https://github.com/ipython/nbconvert |
|
105 | 105 | |
|
106 | 106 | .. note:: |
|
107 | 107 | |
|
108 | 108 | v3 notebooks can *not* be read by older versions of IPython, but we provide |
|
109 | 109 | a `simple script`_ that you can use in case you need to export a v3 |
|
110 | 110 | notebook to share with a v2 user. |
|
111 | 111 | |
|
112 | 112 | .. _simple script: https://gist.github.com/1935808 |
|
113 | 113 | |
|
114 | 114 | |
|
115 | 115 | JavaScript refactoring |
|
116 | 116 | ~~~~~~~~~~~~~~~~~~~~~~ |
|
117 | 117 | |
|
118 | 118 | All the client-side JavaScript has been decoupled to ease reuse of parts of the |
|
119 | 119 | machinery without having to build a full-blown notebook. This will make it much |
|
120 | 120 | easier to communicate with an IPython kernel from existing web pages and to |
|
121 | 121 | integrate single cells into other sites, without loading the full notebook |
|
122 | 122 | document-like UI. :ghpull:`1711`. |
|
123 | 123 | |
|
124 | 124 | This refactoring also makes it possible to write dynamic JavaScript

125 | 125 | widgets that are returned from Python code and that present an interactive view

126 | 126 | to the user, with JavaScript callbacks executing calls to the kernel. This

127 | 127 | will let users add many interactive elements to their notebooks.
|
128 | 128 | |
|
129 | 129 | An example of this capability has been provided as a proof of concept in

130 | :file:`examples/widgets` that lets you directly communicate with one or more

131 | 131 | parallel engines, acting as a mini-console for parallel debugging and |
|
132 | 132 | introspection. |
|
133 | 133 | |
|
134 | 134 | |
|
135 | 135 | Improved tooltips |
|
136 | 136 | ~~~~~~~~~~~~~~~~~ |
|
137 | 137 | |
|
138 | 138 | The object tooltips have gained some new functionality. By pressing tab several |
|
139 | 139 | times, you can expand them to see more of a docstring, keep them visible as you |
|
140 | 140 | fill in a function's parameters, or transfer the information to the pager at the |
|
141 | 141 | bottom of the screen. For the details, look at the example notebook |
|
142 | 142 | :file:`01_notebook_introduction.ipynb`. |
|
143 | 143 | |
|
144 | 144 | .. figure:: ../_images/ipy_013_notebook_tooltip.png |
|
145 | 145 | :width: 460px |
|
146 | 146 | :alt: Improved tooltips in the notebook. |
|
147 | 147 | :align: center |
|
148 | 148 | :target: ../_images/ipy_013_notebook_tooltip.png |
|
149 | 149 | |
|
150 | 150 | The new notebook tooltips. |
|
151 | 151 | |
|
152 | 152 | Other improvements to the Notebook |
|
153 | 153 | ---------------------------------- |
|
154 | 154 | |
|
155 | 155 | These are some other notable small improvements to the notebook, in addition to |
|
156 | 156 | many bug fixes and minor changes to add polish and robustness throughout: |
|
157 | 157 | |
|
158 | 158 | * The notebook pager (the area at the bottom) is now resizeable by dragging its |
|
159 | 159 | divider handle, a feature that had been requested many times by just about |
|
160 | 160 | anyone who had used the notebook system. :ghpull:`1705`. |
|
161 | 161 | |
|
162 | 162 | * It is now possible to open notebooks directly from the command line; for |
|
163 | 163 | example: ``ipython notebook path/`` will automatically set ``path/`` as the |
|
164 | 164 | notebook directory, and ``ipython notebook path/foo.ipynb`` will further |
|
165 | 165 | start with the ``foo.ipynb`` notebook opened. :ghpull:`1686`. |
|
166 | 166 | |
|
167 | 167 | * If a notebook directory is specified with ``--notebook-dir`` (or with the |
|
168 | 168 | corresponding configuration flag ``NotebookManager.notebook_dir``), all |
|
169 | 169 | kernels start in this directory. |
|
170 | 170 | |
|
171 | 171 | * Fix codemirror clearing of cells with ``Ctrl-Z``; :ghpull:`1965`. |
|
172 | 172 | |
|
173 | 173 | * Text (markdown) cells now line wrap correctly in the notebook, making them |
|
174 | 174 | much easier to edit :ghpull:`1330`. |
|
175 | 175 | |
|
176 | 176 | * PNG and JPEG figures returned from plots can be interactively resized in the |
|
177 | 177 | notebook, by dragging them from their lower left corner. :ghpull:`1832`. |
|
178 | 178 | |
|
179 | 179 | * Clear ``In []`` prompt numbers on "Clear All Output". For more |
|
180 | 180 | version-control-friendly ``.ipynb`` files, we now strip all prompt numbers |
|
181 | 181 | when doing a "Clear all output". This reduces the amount of noise in |
|
182 | 182 | commit-to-commit diffs that would otherwise show the (highly variable) prompt |
|
183 | 183 | number changes. :ghpull:`1621`. |
|
184 | 184 | |
|
185 | 185 | * The notebook server now requires *two* consecutive ``Ctrl-C`` within 5 |
|
186 | 186 | seconds (or an interactive confirmation) to terminate operation. This makes |
|
187 | 187 | it less likely that you will accidentally kill a long-running server by |
|
188 | 188 | typing ``Ctrl-C`` in the wrong terminal. :ghpull:`1609`. |
|
189 | 189 | |
|
190 | 190 | * Using ``Ctrl-S`` (or ``Cmd-S`` on a Mac) actually saves the notebook rather |
|
191 | 191 | than providing the fairly useless browser html save dialog. :ghpull:`1334`. |
|
192 | 192 | |
|
193 | 193 | * Allow accessing local files from the notebook (in urls), by serving any local |
|
194 | 194 | file as the url ``files/<relativepath>``. This makes it possible to, for |
|
195 | 195 | example, embed local images in a notebook. :ghpull:`1211`. |
|
196 | 196 | |
|
197 | 197 | |
|
198 | 198 | Cell magics |
|
199 | 199 | ----------- |
|
200 | 200 | |
|
201 | 201 | We have completely refactored the magic system, finally moving the magic |
|
202 | 202 | objects to standalone, independent objects instead of being the mixin class |
|
203 | 203 | we'd had since the beginning of IPython (:ghpull:`1732`). Now, a separate base |
|
204 | 204 | class is provided in :class:`IPython.core.magic.Magics` that users can subclass |
|
205 | 205 | to create their own magics. Decorators are also provided to create magics from |
|
206 | 206 | simple functions without the need for object orientation. Please see the |
|
207 | 207 | :ref:`magic` docs for further details. |
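
As a quick illustration, here is a minimal sketch of the class-based API (not
taken verbatim from the docs; it assumes it is run inside an interactive
IPython session)::

    from IPython.core.magic import (Magics, magics_class,
                                    line_magic, cell_magic)

    @magics_class
    class MyMagics(Magics):
        """Related magics grouped in a standalone class (no mixin required)."""

        @line_magic
        def reverse(self, line):
            # %reverse some text  ->  'txet emos'
            return line[::-1]

        @cell_magic
        def countlines(self, line, cell):
            # A cell magic receives both the magic line and the cell body.
            return len(cell.splitlines())

    # Register with the running shell; %reverse and %%countlines now work.
    get_ipython().register_magics(MyMagics)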
|
208 | 208 | |
|
209 | 209 | All builtin magics now exist in a few subclasses that group together related |
|
210 | 210 | functionality, and the new :mod:`IPython.core.magics` package has been created |
|
211 | 211 | to organize this into smaller files. |
|
212 | 212 | |
|
213 | 213 | This cleanup was the last major piece of deep refactoring needed from the |
|
214 | 214 | original 2001 codebase. |
|
215 | 215 | |
|
216 | 216 | We have also introduced a new type of magic function, prefixed with `%%` |
|
217 | 217 | instead of `%`, which operates at the whole-cell level. A cell magic receives |
|
218 | 218 | two arguments: the line it is called on (like a line magic) and the body of the |
|
219 | 219 | cell below it. |
|
220 | 220 | |
|
221 | 221 | Cell magics are most natural in the notebook, but they also work in the |
|
222 | 222 | terminal and qt console, with the usual approach of using a blank line to |
|
223 | 223 | signal cell termination. |
|
224 | 224 | |
|
225 | 225 | For example, to time the execution of several statements:: |
|
226 | 226 | |
|
227 | 227 | %%timeit x = 0 # setup |
|
228 | 228 | for i in range(100000): |
|
229 | 229 | x += i**2 |
|
230 | 230 | |
|
231 | 231 | This is particularly useful to integrate code in another language, and cell |
|
232 | 232 | magics already exist for shell scripts, Cython, R and Octave. Using ``%%script |
|
233 | 233 | /usr/bin/foo``, you can run a cell in any interpreter that accepts code via |
|
234 | 234 | stdin. |
|
235 | 235 | |
|
236 | 236 | Another handy cell magic makes it easy to write short text files: ``%%file |
|
237 | 237 | ~/save/to/here.txt``. |
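
For instance, a small sketch of how these look in practice (the file path here
is just a placeholder), with each snippet entered in its own cell::

    %%file /tmp/notes.txt
    a couple of lines
    written straight to disk

    %%script python
    print("this body is piped on stdin to the interpreter named above")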
|
238 | 238 | |
|
239 | 239 | The following cell magics are now included by default; all those that use |
|
240 | 240 | special interpreters (Perl, Ruby, bash, etc.) assume you have the requisite |
|
241 | 241 | interpreter installed: |
|
242 | 242 | |
|
243 | 243 | * ``%%!``: run cell body with the underlying OS shell; this is similar to |
|
244 | 244 | prefixing every line in the cell with ``!``. |
|
245 | 245 | |
|
246 | 246 | * ``%%bash``: run cell body under bash. |
|
247 | 247 | |
|
248 | 248 | * ``%%capture``: capture the output of the code in the cell (and stderr as |
|
249 | 249 | well). Useful for running code that produces so much output that you don't

250 | 250 | even want it scrolled.
|
251 | 251 | |
|
252 | 252 | * ``%%file``: save cell body as a file. |
|
253 | 253 | |
|
254 | 254 | * ``%%perl``: run cell body using Perl. |
|
255 | 255 | |
|
256 | 256 | * ``%%prun``: run cell body with profiler (cell extension of ``%prun``). |
|
257 | 257 | |
|
258 | 258 | * ``%%python3``: run cell body using Python 3. |
|
259 | 259 | |
|
260 | 260 | * ``%%ruby``: run cell body using Ruby. |
|
261 | 261 | |
|
262 | 262 | * ``%%script``: run cell body with the script specified in the first line. |
|
263 | 263 | |
|
264 | 264 | * ``%%sh``: run cell body using sh. |
|
265 | 265 | |
|
266 | 266 | * ``%%sx``: run cell with system shell and capture process output (cell |
|
267 | 267 | extension of ``%sx``). |
|
268 | 268 | |
|
269 | 269 | * ``%%system``: run cell with system shell (``%%!`` is an alias to this). |
|
270 | 270 | |
|
271 | 271 | * ``%%timeit``: time the execution of the cell (extension of ``%timeit``). |
|
272 | 272 | |
|
273 | 273 | This is what some of the script-related magics look like in action: |
|
274 | 274 | |
|
275 | 275 | .. image:: ../_images/ipy_013_notebook_script_cells.png |
|
276 | 276 | :width: 460px |
|
277 | 277 | :alt: Script-related cell magics in the notebook
|
278 | 278 | :align: center |
|
279 | 279 | :target: ../_images/ipy_013_notebook_script_cells.png |
|
280 | 280 | |
|
281 | 281 | In addition, we have also a number of :ref:`extensions <extensions_overview>` |
|
282 | 282 | that provide specialized magics. These typically require additional software |
|
283 | 283 | to run and must be manually loaded via ``%load_ext <extension name>``, but are |
|
284 | 284 | extremely useful. The following extensions are provided: |
|
285 | 285 | |
|
286 | 286 | **Cython magics** (extension :ref:`cythonmagic <extensions_cythonmagic>`) |
|
287 | 287 | This extension provides magics to automatically build and compile Python |
|
288 | 288 | extension modules using the Cython_ language. You must install Cython |
|
289 | 289 | separately, as well as a C compiler, for this to work. The examples |
|
290 | 290 | directory in the source distribution ships with a full notebook |
|
291 | 291 | demonstrating these capabilities: |
|
292 | 292 | |
|
293 | 293 | .. image:: ../_images/ipy_013_notebook_cythonmagic.png |
|
294 | 294 | :width: 460px |
|
295 | 295 | :alt: Cython magic |
|
296 | 296 | :align: center |
|
297 | 297 | :target: ../_images/ipy_013_notebook_cythonmagic.png |
|
298 | 298 | |
|
299 | 299 | .. _cython: http://cython.org |
|
300 | 300 | |
|
301 | 301 | **Octave magics** (extension :ref:`octavemagic <extensions_octavemagic>`) |
|
302 | 302 | This extension provides several magics that support calling code written in |
|
303 | 303 | the Octave_ language for numerical computing. You can execute single-lines |
|
304 | 304 | or whole blocks of Octave code, capture both output and figures inline |
|
305 | 305 | (just like matplotlib plots), and have variables automatically converted |
|
306 | 306 | between the two languages. To use this extension, you must have Octave |
|
307 | 307 | installed as well as the oct2py_ package. The examples |
|
308 | 308 | directory in the source distribution ships with a full notebook |
|
309 | 309 | demonstrating these capabilities: |
|
310 | 310 | |
|
311 | 311 | .. image:: ../_images/ipy_013_notebook_octavemagic.png |
|
312 | 312 | :width: 460px |
|
313 | 313 | :alt: Octave magic |
|
314 | 314 | :align: center |
|
315 | 315 | :target: ../_images/ipy_013_notebook_octavemagic.png |
|
316 | 316 | |
|
317 | 317 | .. _octave: http://www.gnu.org/software/octave |
|
318 | 318 | .. _oct2py: http://pypi.python.org/pypi/oct2py |
|
319 | 319 | |
|
320 | 320 | **R magics** (extension :ref:`rmagic <extensions_rmagic>`) |
|
321 | 321 | This extension provides several magics that support calling code written in |
|
322 | 322 | the R_ language for statistical data analysis. You can execute |
|
323 | 323 | single-lines or whole blocks of R code, capture both output and figures |
|
324 | 324 | inline (just like matplotlib plots), and have variables automatically |
|
325 | 325 | converted between the two languages. To use this extension, you must have |
|
326 | 326 | R installed as well as the rpy2_ package that bridges Python and R. The |
|
327 | 327 | examples directory in the source distribution ships with a full notebook |
|
328 | 328 | demonstrating these capabilities: |
|
329 | 329 | |
|
330 | 330 | .. image:: ../_images/ipy_013_notebook_rmagic.png |
|
331 | 331 | :width: 460px |
|
332 | 332 | :alt: R magic |
|
333 | 333 | :align: center |
|
334 | 334 | :target: ../_images/ipy_013_notebook_rmagic.png |
|
335 | 335 | |
|
336 | 336 | .. _R: http://www.r-project.org |
|
337 | 337 | .. _rpy2: http://rpy.sourceforge.net/rpy2.html |
|
338 | 338 | |
|
339 | 339 | |
|
340 | 340 | Tab completer improvements |
|
341 | 341 | -------------------------- |
|
342 | 342 | |
|
343 | 343 | Useful tab-completion based on live inspection of objects is one of the most |
|
344 | 344 | popular features of IPython. To make this process even more user-friendly, the |
|
345 | 345 | completers of both the Qt console and the Notebook have been reworked. |
|
346 | 346 | |
|
347 | 347 | The Qt console comes with a new ncurses-like tab completer, activated by |
|
348 | 348 | default, which lets you cycle through the available completions by pressing tab, |
|
349 | 349 | or select a completion with the arrow keys (:ghpull:`1851`). |
|
350 | 350 | |
|
351 | 351 | .. figure:: ../_images/ipy_013_qtconsole_completer.png |
|
352 | 352 | :width: 460px |
|
353 | 353 | :alt: ncurses-like completer, with highlighted selection. |
|
354 | 354 | :align: center |
|
355 | 355 | :target: ../_images/ipy_013_qtconsole_completer.png |
|
356 | 356 | |
|
357 | 357 | The new improved Qt console's ncurses-like completer lets you easily

358 | 358 | navigate through long lists of completions.
|
359 | 359 | |
|
360 | 360 | In the notebook, completions are now sourced both from object introspection and |
|
361 | 361 | analysis of surrounding code, so limited completions can be offered for |
|
362 | 362 | variables defined in the current cell, or while the kernel is busy |
|
363 | 363 | (:ghpull:`1711`). |
|
364 | 364 | |
|
365 | 365 | |
|
366 | 366 | We have implemented a new configurable flag to control tab completion on |
|
367 | 367 | modules that provide the ``__all__`` attribute:: |
|
368 | 368 | |
|
369 | 369 | IPCompleter.limit_to__all__= Boolean |
|
370 | 370 | |
|
371 | 371 | This instructs the completer to honor ``__all__`` for the completion. |
|
372 | 372 | Specifically, when completing on ``object.<tab>``, if True, only the names

373 | 373 | listed in ``obj.__all__`` will be included; when False (the default), the ``__all__``

374 | 374 | attribute is ignored. :ghpull:`1529`.
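
For example, a minimal sketch of setting this in a profile's
:file:`ipython_config.py` (passing it on the command line as
``--IPCompleter.limit_to__all__=True`` should work as well)::

    c = get_config()
    c.IPCompleter.limit_to__all__ = True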
|
375 | 375 | |
|
376 | 376 | |
|
377 | 377 | Improvements to the Qt console |
|
378 | 378 | ------------------------------ |
|
379 | 379 | |
|
380 | 380 | The Qt console continues to receive improvements and refinements, despite the |
|
381 | 381 | fact that it is by now a fairly mature and robust component. Lots of small |
|
382 | 382 | polish has gone into it, here are a few highlights: |
|
383 | 383 | |
|
384 | 384 | * A number of changes were made to the underlying code for easier integration |
|
385 | 385 | into other projects such as Spyder_ (:ghpull:`2007`, :ghpull:`2024`). |
|
386 | 386 | |
|
387 | 387 | * Improved menus with a new Magic menu that is organized by magic groups (this |
|
388 | 388 | was made possible by the reorganization of the magic system |
|
389 | 389 | internals). :ghpull:`1782`. |
|
390 | 390 | |
|
391 | 391 | * Allow for restarting kernels without clearing the qtconsole, while leaving a |
|
392 | 392 | visible indication that the kernel has restarted. :ghpull:`1681`. |
|
393 | 393 | |
|
394 | 394 | * Allow the native display of jpeg images in the qtconsole. :ghpull:`1643`. |
|
395 | 395 | |
|
396 | 396 | .. _spyder: https://code.google.com/p/spyderlib |
|
397 | 397 | |
|
398 | 398 | |
|
399 | 399 | |
|
400 | 400 | Parallel |
|
401 | 401 | -------- |
|
402 | 402 | |
|
403 | 403 | The parallel tools have been improved and fine-tuned on multiple fronts. Now, |
|
404 | 404 | the creation of an :class:`IPython.parallel.Client` object automatically |
|
405 | 405 | activates a line and cell magic function ``px`` that sends its code to all the |
|
406 | 406 | engines. Further magics can be easily created with the :meth:`.Client.activate` |
|
407 | 407 | method, to conveniently execute code on any subset of engines. :ghpull:`1893`. |
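
As a rough sketch of that workflow (it assumes a cluster is already running,
for example one started with ``ipcluster start -n 4``)::

    from IPython.parallel import Client

    rc = Client()      # creating the Client activates the %px / %%px magics
    dview = rc[:]      # a DirectView over all engines

    # In the interactive session you can now type, e.g.:
    #   %px import numpy as np
    #   %%px
    #   a = np.random.rand(3)
    # and rc.activate() can register further px-style magics bound to a
    # subset of the engines.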
|
408 | 408 | |
|
409 | 409 | The ``%%px`` cell magic can also be given an optional ``--targets`` argument, as well
|
410 | 410 | as a ``--out`` argument for storing its output. |
|
411 | 411 | |
|
412 | 412 | A new magic has also been added, ``%pxconfig``, that lets you configure various |
|
413 | 413 | defaults of the parallel magics. As usual, type ``%pxconfig?`` for details. |
|
414 | 414 | |
|
415 | 415 | The exception reporting in parallel contexts has been improved to be easier to |
|
416 | 416 | read. Now, IPython directly reports the remote exceptions without showing any |
|
417 | 417 | of the internal execution parts: |
|
418 | 418 | |
|
419 | 419 | .. image:: ../_images/ipy_013_par_tb.png |
|
420 | 420 | :width: 460px |
|
421 | 421 | :alt: Improved parallel exceptions. |
|
422 | 422 | :align: center |
|
423 | 423 | :target: ../_images/ipy_013_par_tb.png |
|
424 | 424 | |
|
425 | 425 | The parallel tools now default to using ``NoDB`` as the storage backend for |
|
426 | 426 | intermediate results. This means that the default usage case will have a |
|
427 | 427 | significantly reduced memory footprint, though certain advanced features are |
|
428 | 428 | not available with this backend. For more details, see :ref:`parallel_db`. |
|
429 | 429 | |
|
430 | 430 | The parallel magics now display all output, so you can do parallel plotting or |
|
431 | 431 | other actions with complex display. The ``px`` magic now has both line and cell

432 | 432 | modes, and in cell mode finer control has been added over how to collate
|
433 | 433 | output from multiple engines. :ghpull:`1768`. |
|
434 | 434 | |
|
435 | 435 | There have also been incremental improvements to the SSH launchers: |
|
436 | 436 | |
|
437 | 437 | * add to_send/fetch steps for moving connection files around. |
|
438 | 438 | |
|
439 | 439 | * add SSHProxyEngineSetLauncher, for invoking `ipcluster engines` on a
|
440 | 440 | remote host. This can be used to start a set of engines via PBS/SGE/MPI |
|
441 | 441 | *remotely*. |
|
442 | 442 | |
|
443 | 443 | This makes the SSHLauncher usable on machines without shared filesystems. |
|
444 | 444 | |
|
445 | 445 | A number of 'sugar' methods/properties were added to AsyncResult that are

446 | 446 | quite useful (:ghpull:`1548`) for everyday work; a short usage sketch follows the list:
|
447 | 447 | |
|
448 | 448 | * ``ar.wall_time`` = received - submitted |
|
449 | 449 | * ``ar.serial_time`` = sum of serial computation time |
|
450 | 450 | * ``ar.elapsed`` = time since submission (wall_time if done) |
|
451 | 451 | * ``ar.progress`` = (int) number of sub-tasks that have completed |
|
452 | 452 | * ``len(ar)`` = # of tasks |
|
453 | 453 | * ``ar.wait_interactive()``: prints progress |
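
A rough usage sketch of these conveniences (again assuming a running cluster)::

    from IPython.parallel import Client

    rc = Client()
    view = rc.load_balanced_view()

    ar = view.map_async(lambda x: x ** 2, range(32))
    ar.wait_interactive()        # prints progress while the tasks run

    print("%d tasks, %d completed" % (len(ar), ar.progress))
    print("wall_time: %s seconds" % ar.wall_time)
    print("serial_time: %s seconds" % ar.serial_time)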
|
454 | 454 | |
|
455 | 455 | Added :meth:`.Client.spin_thread` / :meth:`~.Client.stop_spin_thread` for |
|
456 | 456 | running spin in a background thread, to keep zmq queue clear. This can be used |
|
457 | 457 | to ensure that timing information is as accurate as possible (at the cost of |
|
458 | 458 | having a background thread active). |
|
459 | 459 | |
|
460 | 460 | Set TaskScheduler.hwm default to 1 instead of 0. 1 has more |
|
461 | 461 | predictable/intuitive behavior, if often slower, and thus a more logical |
|
462 | 462 | default. Users whose workloads require maximum throughput and are largely |
|
463 | 463 | homogeneous in time per task can make the optimization themselves, but now the |
|
464 | 464 | behavior will be less surprising to new users. :ghpull:`1294`. |
|
465 | 465 | |
|
466 | 466 | |
|
467 | 467 | Kernel/Engine unification |
|
468 | 468 | ------------------------- |
|
469 | 469 | |
|
470 | 470 | This is mostly work 'under the hood', but it is actually a *major* achievement |
|
471 | 471 | for the project that has deep implications in the long term: at last, we have |
|
472 | 472 | unified the main object that executes as the user's interactive shell (which we |
|
473 | 473 | refer to as the *IPython kernel*) with the objects that run in all the worker |
|
474 | 474 | nodes of the parallel computing facilities (the *IPython engines*). Ever since |
|
475 | 475 | the first implementation of IPython's parallel code back in 2006, we had wanted |
|
476 | 476 | to have these two roles be played by the same machinery, but a number of |
|
477 | 477 | technical reasons had prevented that from being true. |
|
478 | 478 | |
|
479 | 479 | In this release we have now merged them, and this has a number of important |
|
480 | 480 | consequences: |
|
481 | 481 | |
|
482 | 482 | * It is now possible to connect any of our clients (qtconsole or terminal |
|
483 | 483 | console) to any individual parallel engine, with the *exact* behavior of |
|
484 | 484 | working at a 'regular' IPython console/qtconsole. This makes debugging, |
|
485 | 485 | plotting, etc. in parallel scenarios vastly easier. |
|
486 | 486 | |
|
487 | 487 | * Parallel engines can always execute arbitrary 'IPython code', that is, code |
|
488 | 488 | that has magics, shell extensions, etc. In combination with the ``%%px`` |
|
489 | 489 | magics, it is thus extremely natural for example to send to all engines a |
|
490 | 490 | block of Cython or R code to be executed via the new Cython and R magics. For |
|
491 | 491 | example, this snippet would send the R block to all active engines in a |
|
492 | 492 | cluster:: |
|
493 | 493 | |
|
494 | 494 | %%px |
|
495 | 495 | %%R |
|
496 | 496 | ... R code goes here |
|
497 | 497 | |
|
498 | 498 | * It is possible to embed not only an interactive shell with the |
|
499 | 499 | :func:`IPython.embed` call as always, but now you can also embed a *kernel* |
|
500 | 500 | with :func:`IPython.embed_kernel()`. Embedding an IPython kernel in an |
|
501 | 501 | application is useful when you want to use :func:`IPython.embed` but don't |
|
502 | 502 | have a terminal attached on stdin and stdout (a short sketch follows this list).
|
503 | 503 | |
|
504 | 504 | * The new :func:`IPython.parallel.bind_kernel` allows you to promote Engines to |
|
505 | 505 | listening Kernels, and connect QtConsoles to an Engine and debug it |
|
506 | 506 | directly. |
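
For the embedding case mentioned above, a minimal sketch of a plain Python
script serving a kernel (which you could then attach to with
``ipython qtconsole --existing``) might look like::

    import IPython

    data = {"answer": 42}   # local variables become visible to attached frontends

    # Blocks here and serves a full IPython kernel; connected frontends can
    # execute code and inspect `data`.
    IPython.embed_kernel()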
|
507 | 507 | |
|
508 | 508 | In addition, having a single core object through our entire architecture also |
|
509 | 509 | makes the project conceptually cleaner, easier to maintain and more robust. |
|
510 | 510 | This took a lot of work to get in place, but we are thrilled to have this major |
|
511 | 511 | piece of architecture finally where we'd always wanted it to be. |
|
512 | 512 | |
|
513 | 513 | |
|
514 | 514 | Official Public API |
|
515 | 515 | ------------------- |
|
516 | 516 | |
|
517 | 517 | We have begun organizing our API for easier public use, with an eye towards an |
|
518 | 518 | official IPython 1.0 release, which will keep this API stable and compatible for
|
519 | 519 | its entire lifecycle. There is now an :mod:`IPython.display` module that |
|
520 | 520 | aggregates all display routines, and the :mod:`IPython.config` namespace has |
|
521 | 521 | all public configuration tools. We will continue improving our public API |
|
522 | 522 | layout so that users only need to import names one level deeper than the main |
|
523 | 523 | ``IPython`` package to access all public namespaces. |
|
524 | 524 | |
|
525 | 525 | |
|
526 | 526 | IPython notebook file icons |
|
527 | 527 | --------------------------- |
|
528 | 528 | |
|
529 | 529 | The directory ``docs/resources`` in the source distribution contains SVG and |
|
530 | 530 | PNG versions of our file icons, as well as an ``Info.plist.example`` file with |
|
531 | 531 | instructions to install them on Mac OSX. This is a first draft of our icons, |
|
532 | 532 | and we encourage contributions from users with graphic talent to improve them |
|
533 | 533 | in the future: |
|
534 | 534 | |
|
535 | 535 | .. image:: ../../resources/ipynb_icon_128x128.png |
|
536 | 536 | :alt: IPython notebook file icon. |
|
537 | 537 | |
|
538 | 538 | |
|
539 | 539 | New top-level `locate` command |
|
540 | 540 | ------------------------------ |
|
541 | 541 | |
|
542 | 542 | Add `locate` entry points; these would be useful for quickly locating IPython |
|
543 | 543 | directories and profiles from other (non-Python) applications. :ghpull:`1762`. |
|
544 | 544 | |
|
545 | 545 | Examples:: |
|
546 | 546 | |
|
547 | 547 | $> ipython locate |
|
548 | 548 | /Users/me/.ipython |
|
549 | 549 | |
|
550 | 550 | $> ipython locate profile foo |
|
551 | 551 | /Users/me/.ipython/profile_foo |
|
552 | 552 | |
|
553 | 553 | $> ipython locate profile |
|
554 | 554 | /Users/me/.ipython/profile_default |
|
555 | 555 | |
|
556 | 556 | $> ipython locate profile dne |
|
557 | 557 | [ProfileLocate] Profile u'dne' not found. |
|
558 | 558 | |
|
559 | 559 | |
|
560 | 560 | Other new features and improvements |
|
561 | 561 | ----------------------------------- |
|
562 | 562 | |
|
563 | 563 | * **%install_ext**: A new magic function to install an IPython extension from |
|
564 | 564 | a URL. E.g. ``%install_ext |
|
565 | 565 | https://bitbucket.org/birkenfeld/ipython-physics/raw/default/physics.py``. |
|
566 | 566 | |
|
567 | 567 | * The ``%loadpy`` magic is no longer restricted to Python files, and has been |
|
568 | 568 | renamed ``%load``. The old name remains as an alias. |
|
569 | 569 | |
|
570 | 570 | * New command line arguments will help external programs find IPython folders: |
|
571 | 571 | ``ipython locate`` finds the user's IPython directory, and ``ipython locate |
|
572 | 572 | profile foo`` finds the folder for the 'foo' profile (if it exists). |
|
573 | 573 | |
|
574 | 574 | * The :envvar:`IPYTHON_DIR` environment variable, introduced in the Great |
|
575 | 575 | Reorganization of 0.11 and existing only in versions 0.11-0.13, has been |
|
576 | 576 | deprecated. As described in :ghpull:`1167`, the complexity and confusion of |
|
577 | 577 | migrating to this variable is not worth the aesthetic improvement. Please use |
|
578 | 578 | the historical :envvar:`IPYTHONDIR` environment variable instead. |
|
579 | 579 | |
|
580 | 580 | * The default value of *interactivity* passed from |
|
581 | 581 | :meth:`~IPython.core.interactiveshell.InteractiveShell.run_cell` to |
|
582 | 582 | :meth:`~IPython.core.interactiveshell.InteractiveShell.run_ast_nodes` |
|
583 | 583 | is now configurable. |
|
584 | 584 | |
|
585 | 585 | * New ``%alias_magic`` function to conveniently create aliases of existing |
|
586 | 586 | magics, if you prefer to have shorter names for personal use. |
|
587 | 587 | |
|
588 | 588 | * We ship unminified versions of the JavaScript libraries we use, to better |
|
589 | 589 | comply with Debian's packaging policies. |
|
590 | 590 | |
|
591 | 591 | * Simplify the information presented by ``obj?/obj??`` to eliminate a few |
|
592 | 592 | redundant fields when possible. :ghpull:`2038`. |
|
593 | 593 | |
|
594 | 594 | * Improved continuous integration for IPython. We now have automated test runs |
|
595 | 595 | on `Shining Panda <https://jenkins.shiningpanda.com/ipython>`_ and `Travis-CI |
|
596 | 596 | <http://travis-ci.org/#!/ipython/ipython>`_, as well as `Tox support |
|
597 | 597 | <http://tox.testrun.org>`_. |
|
598 | 598 | |
|
599 | 599 | * The `vim-ipython`_ functionality (externally developed) has been updated to |
|
600 | 600 | the latest version. |
|
601 | 601 | |
|
602 | 602 | .. _vim-ipython: https://github.com/ivanov/vim-ipython |
|
603 | 603 | |
|
604 | 604 | * The ``%save`` magic now has a ``-f`` flag to force overwriting, which makes |
|
605 | 605 | it much more usable in the notebook where it is not possible to reply to |
|
606 | 606 | interactive questions from the kernel. :ghpull:`1937`. |
|
607 | 607 | |
|
608 | 608 | * Use dvipng to format sympy.Matrix, enabling display of matrices in the Qt |
|
609 | 609 | console with the sympy printing extension. :ghpull:`1861`. |
|
610 | 610 | |
|
611 | 611 | * Our messaging protocol now has a reasonable test suite, helping ensure that |
|
612 | 612 | we don't accidentally deviate from the spec and possibly break third-party |
|
613 | 613 | applications that may have been using it. We encourage users to contribute |
|
614 | 614 | more stringent tests to this part of the test suite. :ghpull:`1627`. |
|
615 | 615 | |
|
616 | 616 | * Use LaTeX to display, on output, various built-in types with the SymPy |
|
617 | 617 | printing extension. :ghpull:`1399`. |
|
618 | 618 | |
|
619 | 619 | * Add Gtk3 event loop integration and example. :ghpull:`1588`. |
|
620 | 620 | |
|
621 | 621 | * ``clear_output`` improvements, which allow things like progress bars and other |
|
622 | 622 | simple animations to work well in the notebook (:ghpull:`1563`): |
|
623 | 623 | |
|
624 | 624 | * `clear_output()` clears the line, even in terminal IPython, the QtConsole |
|
625 | 625 | and plain Python as well, by printing `\r` to streams. |
|
626 | 626 | |
|
627 | 627 | * `clear_output()` avoids the flicker in the notebook by adding a delay, |
|
628 | 628 | and firing immediately upon the next actual display message. |
|
629 | 629 | |
|
630 | 630 | * `display_javascript` hides its `output_area` element, so using display to |
|
631 | 631 | run a bunch of javascript doesn't result in ever-growing vertical space. |
|
632 | 632 | |
|
633 | 633 | * Add simple support for running inside a virtualenv. While this doesn't |
|
634 | 634 | supplant proper installation (as users should do), it helps ad-hoc calling of |
|
635 | 635 | IPython from inside a virtualenv. :ghpull:`1388`. |
|
636 | 636 | |
|
637 | 637 | |
|
638 | 638 | Major Bugs fixed |
|
639 | 639 | ---------------- |
|
640 | 640 | |
|
641 | 641 | In this cycle, we have :ref:`closed over 740 issues <issues_list_013>`, but a |
|
642 | 642 | few major ones merit special mention: |
|
643 | 643 | |
|
644 | 644 | * The ``%pastebin`` magic has been updated to point to gist.github.com, since |
|
645 | 645 | unfortunately http://paste.pocoo.org has closed down. We also added a -d flag |
|
646 | 646 | for the user to provide a gist description string. :ghpull:`1670`. |
|
647 | 647 | |
|
648 | 648 | * Fix ``%paste`` that would reject certain valid inputs. :ghpull:`1258`. |
|
649 | 649 | |
|
650 | 650 | * Fix sending and receiving of Numpy structured arrays (those with composite |
|
651 | 651 | dtypes, often used as recarrays). :ghpull:`2034`. |
|
652 | 652 | |
|
653 | 653 | * Reconnect when the websocket connection closes unexpectedly. :ghpull:`1577`. |
|
654 | 654 | |
|
655 | 655 | * Fix truncated representation of objects in the debugger by showing at least |
|
656 | 656 | 80 characters' worth of information. :ghpull:`1793`. |
|
657 | 657 | |
|
658 | 658 | * Fix logger to be Unicode-aware: logging could crash ipython if there was |
|
659 | 659 | unicode in the input. :ghpull:`1792`. |
|
660 | 660 | |
|
661 | 661 | * Fix images missing from XML/SVG export in the Qt console. :ghpull:`1449`. |
|
662 | 662 | |
|
663 | 663 | * Fix deepreload on Python 3. :ghpull:`1625`, as well as having a much cleaner |
|
664 | 664 | and more robust implementation of deepreload in general. :ghpull:`1457`. |
|
665 | 665 | |
|
666 | 666 | |
|
667 | 667 | Backwards incompatible changes |
|
668 | 668 | ------------------------------ |
|
669 | 669 | |
|
670 | 670 | * The exception :exc:`IPython.core.error.TryNext` previously accepted |
|
671 | 671 | arguments and keyword arguments to be passed to the next implementation |
|
672 | 672 | of the hook. This feature was removed as it made error message propagation |
|
673 | 673 | difficult and violated the principle of loose coupling. |