=========================
IPython GUI Support Notes
=========================

IPython allows GUI event loops to be run in an interactive IPython session.
This is done using Python's PyOS_InputHook hook, which Python calls when the
:func:`raw_input` function is called and is waiting for user input.
IPython has versions of this hook for wx, pyqt4 and pygtk.

When a GUI program is used interactively within IPython, the event loop of
the GUI should *not* be started. This is because the PyOS_InputHook itself
is responsible for iterating the GUI event loop.

IPython has facilities for installing the needed input hook for each GUI
toolkit and for creating the needed main GUI application object. Usually,
these main application objects should be created only once, and for some
GUI toolkits, special options have to be passed to the application object
to enable it to function properly in IPython.

We need to answer the following questions:

* Who is responsible for creating the main GUI application object, IPython
  or third parties (matplotlib, enthought.traits, etc.)?

* What is the proper way for third party code to detect if a GUI application
  object has already been created? If one has been created, how should the
  existing instance be retrieved?

* If a GUI application object has been created, how should third party code
  detect if the GUI event loop is running? It is not sufficient to call the
  relevant methods of the GUI toolkits (like ``IsMainLoopRunning``) because
  those don't know if the GUI event loop is running through the input hook.

* We might need a way for third party code to determine if it is running in
  IPython or not. Currently, the only way of running GUI code in IPython is
  by using the input hook, but eventually, GUI based versions of IPython will
  run the GUI event loop in the more traditional manner. We will need a way
  for third party code to distinguish between these two cases.

Here is some sample code I have been using to debug this issue::

    from matplotlib import pyplot as plt
    from enthought.traits import api as traits

    class Foo(traits.HasTraits):
        a = traits.Float()

    f = Foo()
    f.configure_traits()

    plt.plot(range(10))

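The "created only once" requirement above amounts to a get-or-create accessor around the toolkit's application object. Here is a toolkit-neutral sketch of that pattern; the names ``App`` and ``get_app`` are hypothetical stand-ins, not an IPython or toolkit API:

```python
_app = None  # module-level singleton holding the one application object

class App(object):
    """Stand-in for a toolkit application object (e.g. wx.App, QApplication)."""
    def __init__(self, **options):
        # Toolkit-specific options would be passed through here.
        self.options = options

def get_app(**options):
    """Return the existing application object, creating it on first use.

    If both IPython and third party code (matplotlib, enthought.traits,
    etc.) call an accessor like this instead of instantiating the
    application directly, "who creates it" becomes: whoever asks first.
    """
    global _app
    if _app is None:
        _app = App(**options)
    return _app

# Second and later callers get the same instance back.
app1 = get_app(redirect=False)
app2 = get_app()
```

With wx, for example, the real accessor would consult ``wx.GetApp()``; detecting whether the event loop is running would additionally need a flag maintained by the input hook, since ``IsMainLoopRunning`` alone is insufficient as noted above.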
========================================
 Design proposal for :mod:`IPython.core`
========================================

Currently :mod:`IPython.core` is not well suited for use in GUI applications.
The purpose of this document is to describe a design that will resolve this
limitation.

Process and thread model
========================

The design described here is based on a two process model. These two processes
are:

1. The IPython engine/kernel. This process contains the user's namespace and
   is responsible for executing user code. If user code uses
   :mod:`enthought.traits` or uses a GUI toolkit to perform plotting, the GUI
   event loop will run in this process.

2. The GUI application. The user facing GUI application will run in a second
   process that communicates directly with the IPython engine using
   asynchronous messaging. The GUI application will not execute any user code.
   The canonical example of a GUI application that talks to the IPython
   engine would be a GUI based IPython terminal. However, the GUI application
   could provide a more sophisticated interface such as a notebook.

We now describe the threading model of the IPython engine. Two threads will be
used to implement the IPython engine: a main thread that executes user code
and a networking thread that communicates with the outside world. This
specific design is required by a number of different factors.

First, the IPython engine must run the GUI event loop if the user wants to
perform interactive plotting. Because of the design of most GUIs, this means
that the user code (which will make GUI calls) must live in the main thread.

Second, networking code in the engine (Twisted or otherwise) must be able to
communicate with the outside world while user code runs. An example would be
if user code does the following::

    import time
    for i in range(10):
        print i
        time.sleep(2)

We would like the result of each ``print i`` to be seen by the GUI application
before the entire code block completes. We call this asynchronous printing.
For this to be possible, the networking code has to be able to communicate
the current value of ``sys.stdout`` to the GUI application while user code is
run. Another example is using :mod:`IPython.kernel.client` in user code to
perform a parallel computation by talking to an IPython controller and a set
of engines (these engines are separate from the one we are discussing here).
This module requires the Twisted event loop to be run in a different thread
than user code.

For the GUI application, threads are optional. However, the GUI application
does need to be able to perform network communications asynchronously (without
blocking the GUI itself). With this in mind, there are two options:

* Use Twisted (or another non-blocking socket library) in the same thread as
  the GUI event loop.

* Don't use Twisted, but instead run networking code in the GUI application
  using blocking sockets in threads. This would require the usage of polling
  and queues to manage the networking in the GUI application.

Thus, for the GUI application, there is a choice between non-blocking sockets
(Twisted) or threads.

Asynchronous messaging
======================

The GUI application will use asynchronous message queues to communicate with
the networking thread of the engine. Because this communication will typically
happen over localhost, a simple, one way network protocol like XML-RPC or
JSON-RPC can be used to implement this messaging. These options will also make
it easy to implement the required networking in the GUI application using the
standard library. In applications where secure communications are required,
Twisted and Foolscap will probably be the best way to go for now, but HTTP is
also an option.

There is some flexibility as to where the message queues are located. One
option is that we could create a third process (like the IPython controller)
that only manages the message queues. This is attractive, but does require
an additional process.

Using this communication channel, the GUI application and kernel/engine will
be able to send messages back and forth. For the most part, these messages
will have a request/reply form, but it will be possible for the kernel/engine
to send multiple replies for a single request.

The GUI application will use these messages to control the engine/kernel.
Examples of the types of things that will be possible are:

* Pass code (as a string) to be executed by the engine in the user's
  namespace.

* Get the current value of stdout and stderr.

* Get the ``repr`` of an object returned (``Out []:``).

* Pass a string to the engine to be completed when the GUI application
  receives a tab completion event.

* Get a list of all variable names in the user's namespace.

The in memory format of a message should be a Python dictionary, as this
will be easy to serialize using virtually any network protocol. The
message dict should only contain basic types, such as strings, floats,
ints, lists, tuples and other dicts.

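Because a message is a dict of basic types, serialization is a one-liner with any of the protocols mentioned above. A sketch using the standard library's ``json`` module (the message fields are illustrative, not a fixed schema):

```python
import json

# An illustrative request message built only from basic types.
request = {
    'method': 'execute',
    'id': 24,
    'parent_id': None,
    'source_code': 'a = my_func()',
}

# Serialize for the wire and reconstruct on the other side.
wire = json.dumps(request)
received = json.loads(wire)

assert received == request  # basic-type dicts round-trip losslessly
```

The same dict could just as easily be carried by XML-RPC, since it contains only strings, numbers, ``None``, lists and nested dicts.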
Each message will have a unique id, which will probably be assigned by the
messaging system and returned when something is queued in the message
system. This unique id will be used to pair replies with requests.

Each message should have a header of key value pairs that can be introspected
by the message system and a body, or payload, that is opaque. The queues
themselves will be purpose agnostic, so the purpose of the message will have
to be encoded in the message itself. While we are getting started, we
probably don't need to distinguish between the header and body.

Here are some examples::

    m1 = dict(
        method='execute',
        id=24,            # added by the message system
        parent_id=None,   # not a reply
        source_code='a=my_func()'
    )

This single message could generate a number of reply messages::

    m2 = dict(
        method='stdout',
        id=25,            # my id, added by the message system
        parent_id=24,     # the message id of the request
        value='This was printed by my_func()'
    )

    m3 = dict(
        method='stdout',
        id=26,            # my id, added by the message system
        parent_id=24,     # the message id of the request
        value='This too was printed by my_func() at a later time.'
    )

    m4 = dict(
        method='execute_finished',
        id=27,
        parent_id=24
        # Not sure what else should come back with this message,
        # but we will need a way for the GUI app to tell that an execute
        # is done.
    )

We should probably use flags for the method and other purposes::

    EXECUTE = '0'
    EXECUTE_REPLY = '1'

This will keep our network traffic down and enable us to easily change the
actual value that is sent.

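Pairing replies with requests then reduces to grouping messages by ``parent_id``. A minimal sketch of how a client might do this (the helper ``replies_for`` is hypothetical):

```python
def replies_for(request_id, messages):
    """Return all reply messages whose parent_id matches a request id."""
    return [m for m in messages if m.get('parent_id') == request_id]

# Messages shaped like the m1..m4 examples above.
messages = [
    {'method': 'stdout', 'id': 25, 'parent_id': 24, 'value': 'first'},
    {'method': 'stdout', 'id': 26, 'parent_id': 24, 'value': 'later'},
    {'method': 'execute_finished', 'id': 27, 'parent_id': 24},
    {'method': 'stdout', 'id': 30, 'parent_id': 29, 'value': 'other request'},
]

replies = replies_for(24, messages)
# The GUI app knows the execute is done when 'execute_finished' arrives.
done = any(m['method'] == 'execute_finished' for m in replies)
```

Note that this works whether the replies arrive interleaved with replies to other requests or not, which matters given the multiple-replies-per-request design above.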
Engine details
==============

As discussed above, the engine will consist of two threads: a main thread and
a networking thread. These two threads will communicate using a pair of
queues: one for data and requests passing to the main thread (the main
thread's "input queue") and another for data and requests passing out of the
main thread (the main thread's "output queue"). Both threads will have an
event loop that will enqueue elements on one queue and dequeue elements on
the other queue.

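A minimal sketch of this two-queue arrangement, using the standard library's ``queue`` and ``threading`` modules (modern Python syntax; the loop runs in a helper thread here purely for illustration, and names like ``engine_main_loop`` are hypothetical):

```python
import queue
import threading

input_queue = queue.Queue()   # requests flowing into the main thread
output_queue = queue.Queue()  # data flowing out to the networking thread

def engine_main_loop():
    """Simplified main-thread event loop: block on the input queue."""
    while True:
        msg = input_queue.get()
        if msg['method'] == 'shutdown':
            break
        if msg['method'] == 'execute':
            # A real engine would execute in the user's namespace.
            ns = {}
            exec(msg['source_code'], ns)
            output_queue.put({'method': 'execute_finished',
                              'parent_id': msg['id'],
                              'result': ns.get('a')})

# The networking thread would enqueue requests like this one.
main = threading.Thread(target=engine_main_loop)
main.start()
input_queue.put({'method': 'execute', 'id': 24, 'source_code': 'a = 2 + 2'})
input_queue.put({'method': 'shutdown'})
main.join()

reply = output_queue.get()
```

The networking side would dequeue ``reply`` and forward it over the wire; the executing side never touches a socket.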
The event loop of the main thread will be of a different nature depending on
whether the user wants to perform interactive plotting. If they do want to
perform interactive plotting, the main thread's event loop will simply be the
GUI event loop. In that case, GUI timers will be used to monitor the main
thread's input queue. When elements appear on that queue, the main thread
will respond appropriately. For example, if the queue contains an element
that consists of user code to execute, the main thread will call the
appropriate method of its IPython instance. If the user does not want to
perform interactive plotting, the main thread will have a simpler event loop
that will simply block on the input queue. When something appears on that
queue, the main thread will awake and handle the request.

The event loop of the networking thread will typically be the Twisted event
loop. While it is possible to implement the engine's networking without using
Twisted, at this point, Twisted provides the best solution. Note that the GUI
application does not need to use Twisted in this case. The Twisted event loop
will contain an XML-RPC or JSON-RPC server that takes requests over the
network and handles those requests by enqueueing elements on the main
thread's input queue or dequeuing elements on the main thread's output queue.

Because of the asynchronous nature of the network communication, a single
input and output queue will be used to handle the interaction with the main
thread. It is also possible to use multiple queues to isolate the different
types of requests, but our feeling is that this is more complicated than it
needs to be.

One of the main issues is how stdout/stderr will be handled. Our idea is to
replace sys.stdout/sys.stderr by custom classes that will immediately write
data to the main thread's output queue when user code writes to these streams
(by doing print). Once on the main thread's output queue, the networking
thread will make the data available to the GUI application over the network.

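The stream replacement described above can be sketched as a small file-like class (a sketch only; the name ``QueueStream`` is hypothetical):

```python
import sys
import queue

class QueueStream(object):
    """File-like object that forwards writes to an output queue."""
    def __init__(self, out_queue, name):
        self.out_queue = out_queue
        self.name = name  # 'stdout' or 'stderr'

    def write(self, data):
        if data:
            self.out_queue.put({'method': self.name, 'value': data})

    def flush(self):
        pass  # every write is already pushed immediately

output_queue = queue.Queue()
original = sys.stdout
sys.stdout = QueueStream(output_queue, 'stdout')
try:
    print('hello from user code')
finally:
    sys.stdout = original  # always restore the real stream

msg = output_queue.get_nowait()
```

Each ``print`` lands on the queue immediately, which is exactly what the asynchronous printing requirement above needs; the networking thread only has to drain the queue.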
One unavoidable limitation in this design is that if user code does a print
and then enters non-GIL-releasing extension code, the networking thread will
go silent until the GIL is again released. During this time, the networking
thread will not be able to process the GUI application's requests of the
engine. Thus, the values of stdout/stderr will be unavailable during this
time. This goes beyond stdout/stderr, however. Anytime the main thread is
holding the GIL, the networking thread will go silent and be unable to handle
requests.

GUI Application details
=======================

The GUI application will also have two threads. While this is not a strict
requirement, it probably makes sense and is a good place to start. The main
thread will be the GUI thread. The other thread will be a networking thread
and will handle the messages that are sent to and from the engine process.

Like the engine, we will use two queues to control the flow of messages
between the main thread and networking thread. One of these queues will be
used for messages sent from the GUI application to the engine. When the GUI
application needs to send a message to the engine, it will simply enqueue the
appropriate message on this queue. The networking thread will watch this
queue and forward messages to the engine using an appropriate network
protocol.

The other queue will be used for incoming messages from the engine. The
networking thread will poll for incoming messages from the engine. When it
receives any message, it will simply put that message on this other queue.
The GUI application will periodically see if there are any messages on this
queue and, if there are, it will handle them.

The GUI application must be prepared to handle any incoming message at any
time. For a variety of reasons, the one or more reply messages associated
with a request may appear at any time in the future, and possibly out of
order. It is also possible that a reply might never appear. An example of
this would be a request for a tab completion event. If the engine is busy, it
won't be possible to fulfill the request for a while. While the tab
completion request will eventually be handled, the GUI application has to be
prepared to abandon waiting for the reply if the user moves on or a certain
timeout expires.

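The abandon-on-timeout behavior maps naturally onto the timeout support in the standard library's ``queue`` module. A sketch (``wait_for_reply`` is a hypothetical helper, not a proposed API):

```python
import queue

incoming = queue.Queue()  # replies placed here by the networking thread

def wait_for_reply(request_id, timeout=0.1):
    """Wait briefly for a reply to request_id; give up on timeout.

    Returns the reply message, or None if the GUI should move on.
    """
    try:
        msg = incoming.get(timeout=timeout)
    except queue.Empty:
        return None  # abandon the request, e.g. a stale tab completion
    if msg.get('parent_id') == request_id:
        return msg
    return None  # unrelated message; a real app would dispatch it instead

# No reply has arrived, so this call times out and returns None.
result = wait_for_reply(42, timeout=0.05)
```

A real GUI would not block even for 50 ms on its main thread; it would poll from a timer callback, but the give-up logic is the same.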
245 | ||||
|
246 | Prototype details | |||
|
247 | ================= | |||
|
248 | ||||
|
249 | With this design, it should be possible to develop a relatively complete GUI | |||
|
250 | application, while using a mock engine. This prototype should use the two | |||
|
251 | process design described above, but instead of making actual network calls, | |||
|
252 | the network thread of the GUI application should have an object that fakes the | |||
|
253 | network traffic. This mock object will consume messages off of one queue, | |||
|
254 | pause for a short while (to model network and other latencies) and then place | |||
|
255 | reply messages on the other queue. | |||
|
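The mock object described above might look like this (a sketch assuming the dict message format; ``MockEngine`` is a hypothetical name):

```python
import queue
import threading
import time

class MockEngine(object):
    """Fakes network traffic: consumes requests, replies after a delay."""
    def __init__(self, requests, replies, latency=0.01):
        self.requests = requests
        self.replies = replies
        self.latency = latency

    def run(self):
        while True:
            msg = self.requests.get()
            if msg['method'] == 'shutdown':
                break
            time.sleep(self.latency)  # model network and other latencies
            # No code is actually executed; the reply is simply faked.
            self.replies.put({'method': 'execute_finished',
                              'parent_id': msg['id']})

requests, replies = queue.Queue(), queue.Queue()
engine = MockEngine(requests, replies)
t = threading.Thread(target=engine.run)
t.start()
requests.put({'method': 'execute', 'id': 1, 'source_code': 'pass'})
requests.put({'method': 'shutdown'})
t.join()
reply = replies.get_nowait()
```

From the GUI application's point of view, this object is indistinguishable from a real networking thread, which is what lets the message formats be designed first.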
256 | ||||
|
257 | This simple design will allow us to determine exactly what the message types | |||
|
258 | and formats should be as well as how the GUI application should interact with | |||
|
259 | the two message queues. Note, it is not required that the mock object actually | |||
|
260 | be able to execute Python code or actually complete strings in the users | |||
|
261 | namespace. All of these things can simply be faked. This will also help us to | |||
|
262 | understand what the interface needs to look like that handles the network | |||
|
263 | traffic. This will also help us to understand the design of the engine better. | |||
|
264 | ||||
|
265 | The GUI application should be developed using IPython's component, application | |||
|
266 | and configuration system. It may take some work to see what the best way of | |||
|
267 | integrating these things with PyQt are. | |||
|
268 | ||||
|
269 | After this stage is done, we can move onto creating a real IPython engine for | |||
|
270 | the GUI application to communicate with. This will likely be more work that | |||
|
271 | the GUI application itself, but having a working GUI application will make it | |||
|
272 | *much* easier to design and implement the engine. | |||
|
273 | ||||
|
274 | We also might want to introduce a third process into the mix. Basically, this | |||
|
275 | would be a central messaging hub that both the engine and GUI application | |||
|
276 | would use to send and retrieve messages. This is not required, but it might be | |||
|
277 | a really good idea. | |||
|
278 | ||||
|
279 | Also, I have some ideas on the best way to handle notebook saving and | |||
|
280 | persistence. | |||
144 |
|
281 | |||
Refactoring of IPython.core
===========================

We need to go through IPython.core and describe what specifically needs to be
done.

====================================================
Notes on code execution in :class:`InteractiveShell`
====================================================

Overview
========

This section contains information and notes about the code execution
system in :class:`InteractiveShell`. This system needs to be refactored
and we are keeping notes about this process here.

Current design
==============

Here is a script that shows the relationships between the various
methods in :class:`InteractiveShell` that manage code execution::

    import networkx as nx
    import matplotlib.pyplot as plt

    exec_init_cmd = 'exec_init_cmd'
    interact = 'interact'
    runlines = 'runlines'
    runsource = 'runsource'
    runcode = 'runcode'
    push_line = 'push_line'
    mainloop = 'mainloop'
    embed_mainloop = 'embed_mainloop'
    ri = 'raw_input'
    prefilter = 'prefilter'

    g = nx.DiGraph()

    g.add_node(exec_init_cmd)
    g.add_node(interact)
    g.add_node(runlines)
    g.add_node(runsource)
    g.add_node(runcode)
    g.add_node(push_line)
    g.add_node(mainloop)
    g.add_node(embed_mainloop)
    g.add_node(ri)
    g.add_node(prefilter)

    g.add_edge(exec_init_cmd, push_line)
    g.add_edge(exec_init_cmd, prefilter)
    g.add_edge(mainloop, exec_init_cmd)
    g.add_edge(mainloop, interact)
    g.add_edge(embed_mainloop, interact)
    g.add_edge(interact, ri)
    g.add_edge(interact, push_line)
    g.add_edge(push_line, runsource)
    g.add_edge(runlines, push_line)
    g.add_edge(runlines, prefilter)
    g.add_edge(runsource, runcode)
    g.add_edge(ri, prefilter)

    nx.draw_spectral(g, node_size=100, alpha=0.6, node_color='r',
                     font_size=10, node_shape='o')
    plt.show()

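For quick inspection without plotting (or a networkx dependency), the same call relationships can be checked with a plain adjacency dict; this is a throwaway sketch, not IPython code:

```python
# The same edges as the networkx script above, as an adjacency dict.
calls = {
    'exec_init_cmd': ['push_line', 'prefilter'],
    'mainloop': ['exec_init_cmd', 'interact'],
    'embed_mainloop': ['interact'],
    'interact': ['raw_input', 'push_line'],
    'push_line': ['runsource'],
    'runlines': ['push_line', 'prefilter'],
    'runsource': ['runcode'],
    'raw_input': ['prefilter'],
}

def reachable(start):
    """All methods transitively called from start (depth-first walk)."""
    seen, stack = set(), [start]
    while stack:
        node = stack.pop()
        for nxt in calls.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen

# Both entry points ultimately funnel into runcode via runsource.
```

This makes the shape of the refactoring target explicit: every execution path converges on ``runsource`` and then ``runcode``.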