Porting IPython to a two-process model using ZeroMQ
====================================================

Abstract
--------

IPython's execution in a command-line environment will be ported to a two-process model, using the ZeroMQ library for inter-process communication. This will:

[...]

- allow IPython to reuse code for local execution and distributed computing (DC)
- give us a path to Python 3 support, since ZeroMQ supports Python 3, while Twisted (which we use today for DC) does not.

Deliverables
------------

* A user-facing frontend that provides an environment like today's command-line IPython, but running over two processes, with the code-execution kernel living in a separate process and communicating with the frontend via the ZeroMQ library.

* A kernel that supports IPython's features (tab completion, code introspection, exception reporting with different levels of detail, etc.), but listening for requests over a network port and returning results as JSON-formatted messages over the network (a sketch of such a message exchange is shown after this list).

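To make the JSON-based protocol concrete, a request/reply exchange could look roughly like the sketch below. The field names header, msg_type and msg_id are illustrative assumptions, not a finalized wire format; only the content/code field and the status values are taken from the prototype code shown later::

    # Hypothetical sketch of the JSON messages exchanged with the kernel.
    execute_request = {
        u'header'  : {u'username' : u'frontend', u'msg_id' : 1},   # assumed fields
        u'msg_type': u'execute_request',
        u'content' : {u'code' : u'a = 10 * 2\nprint a'},
    }

    execute_reply = {
        u'parent_header': execute_request[u'header'],
        u'msg_type'     : u'execute_reply',
        u'content'      : {u'status' : u'ok'},   # or u'error', with a traceback
    }
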
Project description
-------------------

Currently, IPython provides a command-line client that executes all code in a single process, and a set of tools for distributed and parallel computing that execute code in multiple processes (possibly, but not necessarily, on different hosts), using the Twisted asynchronous framework for communication between nodes. For a number of reasons it is desirable to unify the architecture of local execution with that of distributed computing, since ultimately many of the underlying abstractions are similar and should be reused. In particular, we would like to:

[...]

Once this port is complete, the resulting tools will be the foundation for allowing the distributed-computing parts of IPython to use the same code as the command-line client, and for the whole system to be ported to Python 3 (though, as part of this proposal, I do not expect to undertake either of those tasks). So while I do not intend to tackle here the removal of Twisted and the unification of the local and distributed parts of IPython, my proposal is a necessary step before those become possible.

Project details
---------------
As part of the ZeroMQ bindings, the IPython developers have already developed a simple prototype example that provides a Python execution kernel (with none of IPython's code or features, just plain code execution) that listens on ZMQ sockets, and a frontend based on the InteractiveConsole class of the code module from the Python standard library. This example is capable of executing code, propagating errors, performing tab completion over the network, and having multiple frontends connect and disconnect simultaneously to a single kernel, with all inputs and outputs made available to all connected clients (thanks to ZMQ's PUB sockets, which provide multicasting capabilities for the kernel and to which the frontends subscribe via SUB sockets).

This simple code shows how the kernel starts::

    def main():
        # ZeroMQ context and the two endpoints the kernel binds to.
        c = zmq.Context(1, 1)
        ip = '127.0.0.1'
        port_base = 5555
        connection = ('tcp://%s' % ip) + ':%i'
        rep_conn = connection % port_base
        pub_conn = connection % (port_base + 1)
        print >>sys.__stdout__, "starting the kernel..."
        print >>sys.__stdout__, "on:", rep_conn, pub_conn
        # Session, OutStream, DisplayHook and Kernel are defined elsewhere in
        # the prototype (see the example code linked below).
        session = Session(username=u'kernel')
        # XREP socket for request/reply traffic with the frontends.
        reply_socket = c.socket(zmq.XREP)
        reply_socket.bind(rep_conn)
        # PUB socket that multicasts inputs and outputs to all connected frontends.
        pub_socket = c.socket(zmq.PUB)
        pub_socket.bind(pub_conn)
        # Redirect stdout/stderr and the display hook so all output is published
        # over the PUB socket instead of the kernel's own terminal.
        stdout = OutStream(session, pub_socket, u'stdout')
        stderr = OutStream(session, pub_socket, u'stderr')
        sys.stdout = stdout
        sys.stderr = stderr
        display_hook = DisplayHook(session, pub_socket)
        sys.displayhook = display_hook
        kernel = Kernel(session, reply_socket, pub_socket)

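The prototype's frontend is not reproduced here; as a rough sketch (illustrative only, not the prototype's actual frontend code), a client could connect to this kernel by pairing a request socket with a SUB subscription, mirroring the ports and socket types used above::

    import zmq

    def connect_frontend(ip='127.0.0.1', port_base=5555):
        # Hypothetical sketch: one socket for requests/replies, plus a SUB socket
        # that receives everything the kernel publishes (pyin, pyerr, stdout, ...).
        c = zmq.Context()
        request_socket = c.socket(zmq.XREQ)        # peer of the kernel's XREP socket
        request_socket.connect('tcp://%s:%i' % (ip, port_base))
        sub_socket = c.socket(zmq.SUB)
        sub_socket.setsockopt(zmq.SUBSCRIBE, '')   # subscribe to all published messages
        sub_socket.connect('tcp://%s:%i' % (ip, port_base + 1))
        return request_socket, sub_socket
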
And the heart of the kernel is the method that executes code and posts the results back over the necessary sockets, using JSON-formatted messages::

    def execute_request(self, ident, parent):
        try:
            code = parent[u'content'][u'code']
        except:
            print >>sys.__stderr__, "got bad msg: "
            print >>sys.__stderr__, Message(parent)
            return
        # Re-broadcast the input so every connected frontend sees what is being run.
        pyin_msg = self.session.msg(u'pyin', {u'code': code}, parent=parent)
        self.pub_socket.send_json(pyin_msg)
        try:
            comp_code = self.compiler(code, '<zmq-kernel>')
            sys.displayhook.set_parent(parent)
            exec comp_code in self.user_ns, self.user_ns
        except:
            # On error, publish a 'pyerr' message with the formatted traceback and
            # reuse its content for the reply.
            etype, evalue, tb = sys.exc_info()
            tb = traceback.format_exception(etype, evalue, tb)
            exc_content = {
                u'status'    : u'error',
                u'traceback' : tb,
                u'etype'     : unicode(etype),
                u'evalue'    : unicode(evalue),
            }
            exc_msg = self.session.msg(u'pyerr', exc_content, parent)
            self.pub_socket.send_json(exc_msg)
            reply_content = exc_content
        else:
            reply_content = {u'status' : u'ok'}
        # Reply to the requesting frontend over the XREP socket.
        reply_msg = self.session.msg(u'execute_reply', reply_content, parent)
        print >>sys.__stdout__, Message(reply_msg)
        self.reply_socket.send_json(reply_msg, ident=ident)
        if reply_msg['content']['status'] == u'error':
            self.abort_queue()

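For illustration, a frontend could drive this method by sending a JSON message like the execute_request sketched earlier and then reading both the direct reply and the published side effects. This is a hypothetical usage sketch built on the frontend sockets outlined above, not code from the prototype::

    # Assumes request_socket / sub_socket from the frontend sketch, and an
    # execute_request dict shaped like the earlier message sketch.
    request_socket.send_json(execute_request)
    reply = request_socket.recv_json()       # execute_reply: status u'ok' or u'error'
    pyin_echo = sub_socket.recv_json()       # the kernel re-publishes the input as 'pyin'
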
And a small handler for tab-completion, which answers completion requests arriving over the same request socket.

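The prototype's completion handler is not reproduced here. As a minimal sketch of what such a handler might look like, assuming a completer object with a complete(text) method and the same Session and socket helpers used by execute_request above (names and message fields are illustrative, not the prototype's actual code)::

    def complete_request(self, ident, parent):
        # Hypothetical sketch only: extract the text to complete, ask a
        # completer for matches, and reply over the request socket.
        try:
            text = parent[u'content'][u'text']
        except:
            print >>sys.__stderr__, "got bad msg: "
            print >>sys.__stderr__, Message(parent)
            return
        matches = self.completer.complete(text)
        reply_content = {u'status' : u'ok', u'matches' : matches}
        reply_msg = self.session.msg(u'complete_reply', reply_content, parent)
        self.reply_socket.send_json(reply_msg, ident=ident)
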
We have all the example code in:

* http://github.com/ellisonbg/pyzmq/blob/completer/examples/kernel/kernel.py

[...]

* http://github.com/ellisonbg/pyzmq/blob/completer/examples/kernel

Based on this work, I expect to write a stable system for the IPython kernel following IPython's standards, with error control, a crash-recovery system, and general configuration options, and also to standardize default ports and an authentication system for remote connections, etc.

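As a purely hypothetical illustration of what such configuration defaults might look like (the names and values below are invented for this sketch, except the ports, which match the prototype above)::

    # Hypothetical configuration defaults; not an agreed format.
    kernel_defaults = {
        'ip'        : '127.0.0.1',
        'rep_port'  : 5555,            # request/reply (XREP) socket
        'pub_port'  : 5556,            # publishing (PUB) socket
        'auth_key'  : None,            # shared secret for remote connections, if any
        'log_file'  : 'kernel.log',
        'lock_file' : 'kernel.lock',
    }
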
The crash-recovery system is an IPython kernel module for when the kernel fails unexpectedly: it lets you retrieve the information from the interrupted session. It will be based on a log, plus a lock file that indicates when the kernel was not shut down properly.
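
As a rough illustration of the lock-file idea (a hypothetical sketch, not a committed design), the kernel would create a lock file at startup and remove it on clean shutdown, so that a leftover lock file at the next start signals an unclean exit and triggers recovery from the log::

    import os

    LOCK_FILE = 'kernel.lock'   # hypothetical locations
    LOG_FILE  = 'kernel.log'

    def start_kernel_with_recovery(run_kernel):
        # A leftover lock file means the previous kernel did not shut down
        # cleanly, so the logged session can be offered for recovery.
        if os.path.exists(LOCK_FILE):
            print "previous kernel did not exit cleanly; recovering from", LOG_FILE
            # ... replay or display the logged session here ...
        open(LOCK_FILE, 'w').close()      # mark the kernel as running
        try:
            run_kernel()                  # the kernel's normal main loop
        finally:
            os.remove(LOCK_FILE)          # a clean shutdown removes the lock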