@@ -1,148 +1,151 @@
|
1 | 1 | # -*- coding: utf-8 -*- |
|
2 | 2 | """Release data for the IPython project.""" |
|
3 | 3 | |
|
4 | 4 | #----------------------------------------------------------------------------- |
|
5 | 5 | # Copyright (c) 2008, IPython Development Team. |
|
6 | 6 | # Copyright (c) 2001, Fernando Perez <fernando.perez@colorado.edu> |
|
7 | 7 | # Copyright (c) 2001, Janko Hauser <jhauser@zscout.de> |
|
8 | 8 | # Copyright (c) 2001, Nathaniel Gray <n8gray@caltech.edu> |
|
9 | 9 | # |
|
10 | 10 | # Distributed under the terms of the Modified BSD License. |
|
11 | 11 | # |
|
12 | 12 | # The full license is in the file COPYING.txt, distributed with this software. |
|
13 | 13 | #----------------------------------------------------------------------------- |
|
14 | 14 | |
|
15 | 15 | # Name of the package for release purposes. This is the name which labels |
|
16 | 16 | # the tarballs and RPMs made by distutils, so it's best to lowercase it. |
|
17 | 17 | name = 'ipython' |
|
18 | 18 | |
|
19 | 19 | # IPython version information. An empty _version_extra corresponds to a full |
|
20 | 20 | # release. 'dev' as a _version_extra string means this is a development |
|
21 | 21 | # version |
|
22 | 22 | _version_major = 0 |
|
23 | 23 | _version_minor = 14 |
|
24 | 24 | _version_micro = 0 # use 0 for first of series, number for 1 and above |
|
25 | 25 | _version_extra = 'dev' |
|
26 | 26 | #_version_extra = 'rc1' |
|
27 | 27 | # _version_extra = '' # Uncomment this for full releases |
|
28 | 28 | |
|
29 | 29 | # Construct full version string from these. |
|
30 | 30 | _ver = [_version_major, _version_minor] |
|
31 | 31 | if _version_micro: |
|
32 | 32 | _ver.append(_version_micro) |
|
33 | 33 | if _version_extra: |
|
34 | 34 | _ver.append(_version_extra) |
|
35 | 35 | |
|
36 | 36 | __version__ = '.'.join(map(str, _ver)) |
|
37 | 37 | |
|
38 | 38 | version = __version__ # backwards compatibility name |
|
39 | 39 | version_info = (_version_major, _version_minor, _version_micro, _version_extra) |
|
40 | 40 | |
|
41 | # Change this when incrementing the kernel protocol version | |
|
42 | kernel_protocol_version_info = (4, 0) | |
|
43 | ||
|
41 | 44 | description = "IPython: Productive Interactive Computing" |
|
42 | 45 | |
|
43 | 46 | long_description = \ |
|
44 | 47 | """ |
|
45 | 48 | IPython provides a rich toolkit to help you make the most out of using Python |
|
46 | 49 | interactively. Its main components are: |
|
47 | 50 | |
|
48 | 51 | * Powerful interactive Python shells (terminal- and Qt-based). |
|
49 | 52 | * A web-based interactive notebook environment with all shell features plus |
|
50 | 53 | support for embedded figures, animations and rich media. |
|
51 | 54 | * Support for interactive data visualization and use of GUI toolkits. |
|
52 | 55 | * Flexible, embeddable interpreters to load into your own projects. |
|
53 | 56 | * A high-performance library for high level and interactive parallel computing |
|
54 | 57 | that works in multicore systems, clusters, supercomputing and cloud scenarios. |
|
55 | 58 | |
|
56 | 59 | The enhanced interactive Python shells have the following main features: |
|
57 | 60 | |
|
58 | 61 | * Comprehensive object introspection. |
|
59 | 62 | |
|
60 | 63 | * Input history, persistent across sessions. |
|
61 | 64 | |
|
62 | 65 | * Caching of output results during a session with automatically generated |
|
63 | 66 | references. |
|
64 | 67 | |
|
65 | 68 | * Extensible tab completion, with support by default for completion of python |
|
66 | 69 | variables and keywords, filenames and function keywords. |
|
67 | 70 | |
|
68 | 71 | * Extensible system of 'magic' commands for controlling the environment and |
|
69 | 72 | performing many tasks related either to IPython or the operating system. |
|
70 | 73 | |
|
71 | 74 | * A rich configuration system with easy switching between different setups |
|
72 | 75 | (simpler than changing $PYTHONSTARTUP environment variables every time). |
|
73 | 76 | |
|
74 | 77 | * Session logging and reloading. |
|
75 | 78 | |
|
76 | 79 | * Extensible syntax processing for special purpose situations. |
|
77 | 80 | |
|
78 | 81 | * Access to the system shell with user-extensible alias system. |
|
79 | 82 | |
|
80 | 83 | * Easily embeddable in other Python programs and GUIs. |
|
81 | 84 | |
|
82 | 85 | * Integrated access to the pdb debugger and the Python profiler. |
|
83 | 86 | |
|
84 | 87 | The parallel computing architecture has the following main features: |
|
85 | 88 | |
|
86 | 89 | * Quickly parallelize Python code from an interactive Python/IPython session. |
|
87 | 90 | |
|
88 | 91 | * A flexible and dynamic process model that can be deployed on anything from |
|
89 | 92 | multicore workstations to supercomputers. |
|
90 | 93 | |
|
91 | 94 | * An architecture that supports many different styles of parallelism, from |
|
92 | 95 | message passing to task farming. |
|
93 | 96 | |
|
94 | 97 | * Both blocking and fully asynchronous interfaces. |
|
95 | 98 | |
|
96 | 99 | * High level APIs that enable many things to be parallelized in a few lines |
|
97 | 100 | of code. |
|
98 | 101 | |
|
99 | 102 | * Share live parallel jobs with other users securely. |
|
100 | 103 | |
|
101 | 104 | * Dynamically load balanced task farming system. |
|
102 | 105 | |
|
103 | 106 | * Robust error handling in parallel code. |
|
104 | 107 | |
|
105 | 108 | The latest development version is always available from IPython's `GitHub |
|
106 | 109 | site <http://github.com/ipython>`_. |
|
107 | 110 | """ |
|
108 | 111 | |
|
109 | 112 | license = 'BSD' |
|
110 | 113 | |
|
111 | 114 | authors = {'Fernando' : ('Fernando Perez','fperez.net@gmail.com'), |
|
112 | 115 | 'Janko' : ('Janko Hauser','jhauser@zscout.de'), |
|
113 | 116 | 'Nathan' : ('Nathaniel Gray','n8gray@caltech.edu'), |
|
114 | 117 | 'Ville' : ('Ville Vainio','vivainio@gmail.com'), |
|
115 | 118 | 'Brian' : ('Brian E Granger', 'ellisonbg@gmail.com'), |
|
116 | 119 | 'Min' : ('Min Ragan-Kelley', 'benjaminrk@gmail.com'), |
|
117 | 120 | 'Thomas' : ('Thomas A. Kluyver', 'takowl@gmail.com'), |
|
118 | 121 | 'Jorgen' : ('Jorgen Stenarson', 'jorgen.stenarson@bostream.nu'), |
|
119 | 122 | 'Matthias' : ('Matthias Bussonnier', 'bussonniermatthias@gmail.com'), |
|
120 | 123 | } |
|
121 | 124 | |
|
122 | 125 | author = 'The IPython Development Team' |
|
123 | 126 | |
|
124 | 127 | author_email = 'ipython-dev@scipy.org' |
|
125 | 128 | |
|
126 | 129 | url = 'http://ipython.org' |
|
127 | 130 | |
|
128 | 131 | download_url = 'https://github.com/ipython/ipython/downloads' |
|
129 | 132 | |
|
130 | 133 | platforms = ['Linux','Mac OSX','Windows XP/2000/NT/Vista/7'] |
|
131 | 134 | |
|
132 | 135 | keywords = ['Interactive','Interpreter','Shell','Parallel','Distributed', |
|
133 | 136 | 'Web-based computing', 'Qt console', 'Embedding'] |
|
134 | 137 | |
|
135 | 138 | classifiers = [ |
|
136 | 139 | 'Intended Audience :: Developers', |
|
137 | 140 | 'Intended Audience :: Science/Research', |
|
138 | 141 | 'License :: OSI Approved :: BSD License', |
|
139 | 142 | 'Programming Language :: Python', |
|
140 | 143 | 'Programming Language :: Python :: 2', |
|
141 | 144 | 'Programming Language :: Python :: 2.6', |
|
142 | 145 | 'Programming Language :: Python :: 2.7', |
|
143 | 146 | 'Programming Language :: Python :: 3', |
|
144 | 147 | 'Programming Language :: Python :: 3.1', |
|
145 | 148 | 'Programming Language :: Python :: 3.2', |
|
146 | 149 | 'Topic :: System :: Distributed Computing', |
|
147 | 150 | 'Topic :: System :: Shells' |
|
148 | 151 | ] |
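
The hunk above (IPython's release metadata, `release.py`) adds only the `kernel_protocol_version_info = (4, 0)` tuple, so the messaging-protocol version can be reported independently of the IPython release number. As a minimal sketch of how such a tuple is turned into a dotted string, mirroring the `__version__` construction already in the file (the `kernel_protocol_version` name below is illustrative and not defined in this diff):

```python
# Sketch only: derive a display string from the new tuple, the same way
# __version__ is built from _ver above.
kernel_protocol_version_info = (4, 0)
kernel_protocol_version = '.'.join(map(str, kernel_protocol_version_info))
print(kernel_protocol_version)  # -> "4.0"
```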
@@ -1,932 +1,950 @@
|
1 | 1 | #!/usr/bin/env python |
|
2 | 2 | """A simple interactive kernel that talks to a frontend over 0MQ. |
|
3 | 3 | |
|
4 | 4 | Things to do: |
|
5 | 5 | |
|
6 | 6 | * Implement `set_parent` logic. Right before doing exec, the Kernel should |
|
7 | 7 | call set_parent on all the PUB objects with the message about to be executed. |
|
8 | 8 | * Implement random port and security key logic. |
|
9 | 9 | * Implement control messages. |
|
10 | 10 | * Implement event loop and poll version. |
|
11 | 11 | """ |
|
12 | 12 | |
|
13 | 13 | #----------------------------------------------------------------------------- |
|
14 | 14 | # Imports |
|
15 | 15 | #----------------------------------------------------------------------------- |
|
16 | 16 | from __future__ import print_function |
|
17 | 17 | |
|
18 | 18 | # Standard library imports |
|
19 | 19 | import __builtin__ |
|
20 | 20 | import atexit |
|
21 | 21 | import sys |
|
22 | 22 | import time |
|
23 | 23 | import traceback |
|
24 | 24 | import logging |
|
25 | 25 | import uuid |
|
26 | 26 | |
|
27 | 27 | from datetime import datetime |
|
28 | 28 | from signal import ( |
|
29 | 29 | signal, getsignal, default_int_handler, SIGINT, SIG_IGN |
|
30 | 30 | ) |
|
31 | 31 | |
|
32 | 32 | # System library imports |
|
33 | 33 | import zmq |
|
34 | 34 | from zmq.eventloop import ioloop |
|
35 | 35 | from zmq.eventloop.zmqstream import ZMQStream |
|
36 | 36 | |
|
37 | 37 | # Local imports |
|
38 | 38 | from IPython.config.configurable import Configurable |
|
39 | 39 | from IPython.config.application import boolean_flag, catch_config_error |
|
40 | 40 | from IPython.core.application import ProfileDir |
|
41 | 41 | from IPython.core.error import StdinNotImplementedError |
|
42 | from IPython.core import release | |
|
42 | 43 | from IPython.core.shellapp import ( |
|
43 | 44 | InteractiveShellApp, shell_flags, shell_aliases |
|
44 | 45 | ) |
|
45 | 46 | from IPython.utils import io |
|
46 | 47 | from IPython.utils import py3compat |
|
47 | 48 | from IPython.utils.frame import extract_module_locals |
|
48 | 49 | from IPython.utils.jsonutil import json_clean |
|
49 | 50 | from IPython.utils.traitlets import ( |
|
50 | 51 | Any, Instance, Float, Dict, CaselessStrEnum, List, Set, Integer, Unicode |
|
51 | 52 | ) |
|
52 | 53 | |
|
53 | 54 | from entry_point import base_launch_kernel |
|
54 | 55 | from kernelapp import KernelApp, kernel_flags, kernel_aliases |
|
55 | 56 | from serialize import serialize_object, unpack_apply_message |
|
56 | 57 | from session import Session, Message |
|
57 | 58 | from zmqshell import ZMQInteractiveShell |
|
58 | 59 | |
|
59 | 60 | |
|
60 | 61 | #----------------------------------------------------------------------------- |
|
61 | 62 | # Main kernel class |
|
62 | 63 | #----------------------------------------------------------------------------- |
|
63 | 64 | |
|
65 | protocol_version = list(release.kernel_protocol_version_info) | |
|
66 | ipython_version = list(release.version_info) | |
|
67 | language_version = list(sys.version_info[:3]) | |
|
68 | ||
|
69 | ||
|
64 | 70 | class Kernel(Configurable): |
|
65 | 71 | |
|
66 | 72 | #--------------------------------------------------------------------------- |
|
67 | 73 | # Kernel interface |
|
68 | 74 | #--------------------------------------------------------------------------- |
|
69 | 75 | |
|
70 | 76 | # attribute to override with a GUI |
|
71 | 77 | eventloop = Any(None) |
|
72 | 78 | def _eventloop_changed(self, name, old, new): |
|
73 | 79 | """schedule call to eventloop from IOLoop""" |
|
74 | 80 | loop = ioloop.IOLoop.instance() |
|
75 | 81 | loop.add_timeout(time.time()+0.1, self.enter_eventloop) |
|
76 | 82 | |
|
77 | 83 | shell = Instance('IPython.core.interactiveshell.InteractiveShellABC') |
|
78 | 84 | session = Instance(Session) |
|
79 | 85 | profile_dir = Instance('IPython.core.profiledir.ProfileDir') |
|
80 | 86 | shell_streams = List() |
|
81 | 87 | control_stream = Instance(ZMQStream) |
|
82 | 88 | iopub_socket = Instance(zmq.Socket) |
|
83 | 89 | stdin_socket = Instance(zmq.Socket) |
|
84 | 90 | log = Instance(logging.Logger) |
|
85 | 91 | |
|
86 | 92 | user_module = Any() |
|
87 | 93 | def _user_module_changed(self, name, old, new): |
|
88 | 94 | if self.shell is not None: |
|
89 | 95 | self.shell.user_module = new |
|
90 | 96 | |
|
91 | 97 | user_ns = Dict(default_value=None) |
|
92 | 98 | def _user_ns_changed(self, name, old, new): |
|
93 | 99 | if self.shell is not None: |
|
94 | 100 | self.shell.user_ns = new |
|
95 | 101 | self.shell.init_user_ns() |
|
96 | 102 | |
|
97 | 103 | # identities: |
|
98 | 104 | int_id = Integer(-1) |
|
99 | 105 | ident = Unicode() |
|
100 | 106 | |
|
101 | 107 | def _ident_default(self): |
|
102 | 108 | return unicode(uuid.uuid4()) |
|
103 | 109 | |
|
104 | 110 | |
|
105 | 111 | # Private interface |
|
106 | 112 | |
|
107 | 113 | # Time to sleep after flushing the stdout/err buffers in each execute |
|
108 | 114 | # cycle. While this introduces a hard limit on the minimal latency of the |
|
109 | 115 | # execute cycle, it helps prevent output synchronization problems for |
|
110 | 116 | # clients. |
|
111 | 117 | # Units are in seconds. The minimum zmq latency on local host is probably |
|
112 | 118 | # ~150 microseconds, set this to 500us for now. We may need to increase it |
|
113 | 119 | # a little if it's not enough after more interactive testing. |
|
114 | 120 | _execute_sleep = Float(0.0005, config=True) |
|
115 | 121 | |
|
116 | 122 | # Frequency of the kernel's event loop. |
|
117 | 123 | # Units are in seconds, kernel subclasses for GUI toolkits may need to |
|
118 | 124 | # adapt to milliseconds. |
|
119 | 125 | _poll_interval = Float(0.05, config=True) |
|
120 | 126 | |
|
121 | 127 | # If the shutdown was requested over the network, we leave here the |
|
122 | 128 | # necessary reply message so it can be sent by our registered atexit |
|
123 | 129 | # handler. This ensures that the reply is only sent to clients truly at |
|
124 | 130 | # the end of our shutdown process (which happens after the underlying |
|
125 | 131 | # IPython shell's own shutdown). |
|
126 | 132 | _shutdown_message = None |
|
127 | 133 | |
|
128 | 134 | # This is a dict of port numbers that the kernel is listening on. It is set |
|
129 | 135 | # by record_ports and used by connect_request. |
|
130 | 136 | _recorded_ports = Dict() |
|
131 | 137 | |
|
132 | 138 | # set of aborted msg_ids |
|
133 | 139 | aborted = Set() |
|
134 | 140 | |
|
135 | 141 | |
|
136 | 142 | def __init__(self, **kwargs): |
|
137 | 143 | super(Kernel, self).__init__(**kwargs) |
|
138 | 144 | |
|
139 | 145 | # Initialize the InteractiveShell subclass |
|
140 | 146 | self.shell = ZMQInteractiveShell.instance(config=self.config, |
|
141 | 147 | profile_dir = self.profile_dir, |
|
142 | 148 | user_module = self.user_module, |
|
143 | 149 | user_ns = self.user_ns, |
|
144 | 150 | ) |
|
145 | 151 | self.shell.displayhook.session = self.session |
|
146 | 152 | self.shell.displayhook.pub_socket = self.iopub_socket |
|
147 | 153 | self.shell.displayhook.topic = self._topic('pyout') |
|
148 | 154 | self.shell.display_pub.session = self.session |
|
149 | 155 | self.shell.display_pub.pub_socket = self.iopub_socket |
|
150 | 156 | self.shell.data_pub.session = self.session |
|
151 | 157 | self.shell.data_pub.pub_socket = self.iopub_socket |
|
152 | 158 | |
|
153 | 159 | # TMP - hack while developing |
|
154 | 160 | self.shell._reply_content = None |
|
155 | 161 | |
|
156 | 162 | # Build dict of handlers for message types |
|
157 | 163 | msg_types = [ 'execute_request', 'complete_request', |
|
158 | 164 | 'object_info_request', 'history_request', |
|
165 | 'kernel_info_request', | |
|
159 | 166 | 'connect_request', 'shutdown_request', |
|
160 | 167 | 'apply_request', |
|
161 | 168 | ] |
|
162 | 169 | self.shell_handlers = {} |
|
163 | 170 | for msg_type in msg_types: |
|
164 | 171 | self.shell_handlers[msg_type] = getattr(self, msg_type) |
|
165 | 172 | |
|
166 | 173 | control_msg_types = msg_types + [ 'clear_request', 'abort_request' ] |
|
167 | 174 | self.control_handlers = {} |
|
168 | 175 | for msg_type in control_msg_types: |
|
169 | 176 | self.control_handlers[msg_type] = getattr(self, msg_type) |
|
170 | 177 | |
|
171 | 178 | def dispatch_control(self, msg): |
|
172 | 179 | """dispatch control requests""" |
|
173 | 180 | idents,msg = self.session.feed_identities(msg, copy=False) |
|
174 | 181 | try: |
|
175 | 182 | msg = self.session.unserialize(msg, content=True, copy=False) |
|
176 | 183 | except: |
|
177 | 184 | self.log.error("Invalid Control Message", exc_info=True) |
|
178 | 185 | return |
|
179 | 186 | |
|
180 | 187 | self.log.debug("Control received: %s", msg) |
|
181 | 188 | |
|
182 | 189 | header = msg['header'] |
|
183 | 190 | msg_id = header['msg_id'] |
|
184 | 191 | msg_type = header['msg_type'] |
|
185 | 192 | |
|
186 | 193 | handler = self.control_handlers.get(msg_type, None) |
|
187 | 194 | if handler is None: |
|
188 | 195 | self.log.error("UNKNOWN CONTROL MESSAGE TYPE: %r", msg_type) |
|
189 | 196 | else: |
|
190 | 197 | try: |
|
191 | 198 | handler(self.control_stream, idents, msg) |
|
192 | 199 | except Exception: |
|
193 | 200 | self.log.error("Exception in control handler:", exc_info=True) |
|
194 | 201 | |
|
195 | 202 | def dispatch_shell(self, stream, msg): |
|
196 | 203 | """dispatch shell requests""" |
|
197 | 204 | # flush control requests first |
|
198 | 205 | if self.control_stream: |
|
199 | 206 | self.control_stream.flush() |
|
200 | 207 | |
|
201 | 208 | idents,msg = self.session.feed_identities(msg, copy=False) |
|
202 | 209 | try: |
|
203 | 210 | msg = self.session.unserialize(msg, content=True, copy=False) |
|
204 | 211 | except: |
|
205 | 212 | self.log.error("Invalid Message", exc_info=True) |
|
206 | 213 | return |
|
207 | 214 | |
|
208 | 215 | header = msg['header'] |
|
209 | 216 | msg_id = header['msg_id'] |
|
210 | 217 | msg_type = msg['header']['msg_type'] |
|
211 | 218 | |
|
212 | 219 | # Print some info about this message and leave a '--->' marker, so it's |
|
213 | 220 | # easier to visually trace the message chain when debugging. Each |
|
214 | 221 | # handler prints its message at the end. |
|
215 | 222 | self.log.debug('\n*** MESSAGE TYPE:%s***', msg_type) |
|
216 | 223 | self.log.debug(' Content: %s\n --->\n ', msg['content']) |
|
217 | 224 | |
|
218 | 225 | if msg_id in self.aborted: |
|
219 | 226 | self.aborted.remove(msg_id) |
|
220 | 227 | # is it safe to assume a msg_id will not be resubmitted? |
|
221 | 228 | reply_type = msg_type.split('_')[0] + '_reply' |
|
222 | 229 | status = {'status' : 'aborted'} |
|
223 | 230 | md = {'engine' : self.ident} |
|
224 | 231 | md.update(status) |
|
225 | 232 | reply_msg = self.session.send(stream, reply_type, metadata=md, |
|
226 | 233 | content=status, parent=msg, ident=idents) |
|
227 | 234 | return |
|
228 | 235 | |
|
229 | 236 | handler = self.shell_handlers.get(msg_type, None) |
|
230 | 237 | if handler is None: |
|
231 | 238 | self.log.error("UNKNOWN MESSAGE TYPE: %r", msg_type) |
|
232 | 239 | else: |
|
233 | 240 | # ensure default_int_handler during handler call |
|
234 | 241 | sig = signal(SIGINT, default_int_handler) |
|
235 | 242 | try: |
|
236 | 243 | handler(stream, idents, msg) |
|
237 | 244 | except Exception: |
|
238 | 245 | self.log.error("Exception in message handler:", exc_info=True) |
|
239 | 246 | finally: |
|
240 | 247 | signal(SIGINT, sig) |
|
241 | 248 | |
|
242 | 249 | def enter_eventloop(self): |
|
243 | 250 | """enter eventloop""" |
|
244 | 251 | self.log.info("entering eventloop") |
|
245 | 252 | # restore default_int_handler |
|
246 | 253 | signal(SIGINT, default_int_handler) |
|
247 | 254 | while self.eventloop is not None: |
|
248 | 255 | try: |
|
249 | 256 | self.eventloop(self) |
|
250 | 257 | except KeyboardInterrupt: |
|
251 | 258 | # Ctrl-C shouldn't crash the kernel |
|
252 | 259 | self.log.error("KeyboardInterrupt caught in kernel") |
|
253 | 260 | continue |
|
254 | 261 | else: |
|
255 | 262 | # eventloop exited cleanly, this means we should stop (right?) |
|
256 | 263 | self.eventloop = None |
|
257 | 264 | break |
|
258 | 265 | self.log.info("exiting eventloop") |
|
259 | 266 | |
|
260 | 267 | def start(self): |
|
261 | 268 | """register dispatchers for streams""" |
|
262 | 269 | self.shell.exit_now = False |
|
263 | 270 | if self.control_stream: |
|
264 | 271 | self.control_stream.on_recv(self.dispatch_control, copy=False) |
|
265 | 272 | |
|
266 | 273 | def make_dispatcher(stream): |
|
267 | 274 | def dispatcher(msg): |
|
268 | 275 | return self.dispatch_shell(stream, msg) |
|
269 | 276 | return dispatcher |
|
270 | 277 | |
|
271 | 278 | for s in self.shell_streams: |
|
272 | 279 | s.on_recv(make_dispatcher(s), copy=False) |
|
273 | 280 | |
|
274 | 281 | def do_one_iteration(self): |
|
275 | 282 | """step eventloop just once""" |
|
276 | 283 | if self.control_stream: |
|
277 | 284 | self.control_stream.flush() |
|
278 | 285 | for stream in self.shell_streams: |
|
279 | 286 | # handle at most one request per iteration |
|
280 | 287 | stream.flush(zmq.POLLIN, 1) |
|
281 | 288 | stream.flush(zmq.POLLOUT) |
|
282 | 289 | |
|
283 | 290 | |
|
284 | 291 | def record_ports(self, ports): |
|
285 | 292 | """Record the ports that this kernel is using. |
|
286 | 293 | |
|
287 | 294 | The creator of the Kernel instance must call this methods if they |
|
288 | 295 | want the :meth:`connect_request` method to return the port numbers. |
|
289 | 296 | """ |
|
290 | 297 | self._recorded_ports = ports |
|
291 | 298 | |
|
292 | 299 | #--------------------------------------------------------------------------- |
|
293 | 300 | # Kernel request handlers |
|
294 | 301 | #--------------------------------------------------------------------------- |
|
295 | 302 | |
|
296 | 303 | def _make_metadata(self, other=None): |
|
297 | 304 | """init metadata dict, for execute/apply_reply""" |
|
298 | 305 | new_md = { |
|
299 | 306 | 'dependencies_met' : True, |
|
300 | 307 | 'engine' : self.ident, |
|
301 | 308 | 'started': datetime.now(), |
|
302 | 309 | } |
|
303 | 310 | if other: |
|
304 | 311 | new_md.update(other) |
|
305 | 312 | return new_md |
|
306 | 313 | |
|
307 | 314 | def _publish_pyin(self, code, parent, execution_count): |
|
308 | 315 | """Publish the code request on the pyin stream.""" |
|
309 | 316 | |
|
310 | 317 | self.session.send(self.iopub_socket, u'pyin', |
|
311 | 318 | {u'code':code, u'execution_count': execution_count}, |
|
312 | 319 | parent=parent, ident=self._topic('pyin') |
|
313 | 320 | ) |
|
314 | 321 | |
|
315 | 322 | def _publish_status(self, status, parent=None): |
|
316 | 323 | """send status (busy/idle) on IOPub""" |
|
317 | 324 | self.session.send(self.iopub_socket, |
|
318 | 325 | u'status', |
|
319 | 326 | {u'execution_state': status}, |
|
320 | 327 | parent=parent, |
|
321 | 328 | ident=self._topic('status'), |
|
322 | 329 | ) |
|
323 | 330 | |
|
324 | 331 | |
|
325 | 332 | def execute_request(self, stream, ident, parent): |
|
326 | 333 | """handle an execute_request""" |
|
327 | 334 | |
|
328 | 335 | self._publish_status(u'busy', parent) |
|
329 | 336 | |
|
330 | 337 | try: |
|
331 | 338 | content = parent[u'content'] |
|
332 | 339 | code = content[u'code'] |
|
333 | 340 | silent = content[u'silent'] |
|
334 | 341 | store_history = content.get(u'store_history', not silent) |
|
335 | 342 | except: |
|
336 | 343 | self.log.error("Got bad msg: ") |
|
337 | 344 | self.log.error("%s", parent) |
|
338 | 345 | return |
|
339 | 346 | |
|
340 | 347 | md = self._make_metadata(parent['metadata']) |
|
341 | 348 | |
|
342 | 349 | shell = self.shell # we'll need this a lot here |
|
343 | 350 | |
|
344 | 351 | # Replace raw_input. Note that it is not sufficient to replace |
|
345 | 352 | # raw_input in the user namespace. |
|
346 | 353 | if content.get('allow_stdin', False): |
|
347 | 354 | raw_input = lambda prompt='': self._raw_input(prompt, ident, parent) |
|
348 | 355 | else: |
|
349 | 356 | raw_input = lambda prompt='' : self._no_raw_input() |
|
350 | 357 | |
|
351 | 358 | if py3compat.PY3: |
|
352 | 359 | __builtin__.input = raw_input |
|
353 | 360 | else: |
|
354 | 361 | __builtin__.raw_input = raw_input |
|
355 | 362 | |
|
356 | 363 | # Set the parent message of the display hook and out streams. |
|
357 | 364 | shell.displayhook.set_parent(parent) |
|
358 | 365 | shell.display_pub.set_parent(parent) |
|
359 | 366 | shell.data_pub.set_parent(parent) |
|
360 | 367 | sys.stdout.set_parent(parent) |
|
361 | 368 | sys.stderr.set_parent(parent) |
|
362 | 369 | |
|
363 | 370 | # Re-broadcast our input for the benefit of listening clients, and |
|
364 | 371 | # start computing output |
|
365 | 372 | if not silent: |
|
366 | 373 | self._publish_pyin(code, parent, shell.execution_count) |
|
367 | 374 | |
|
368 | 375 | reply_content = {} |
|
369 | 376 | try: |
|
370 | 377 | # FIXME: the shell calls the exception handler itself. |
|
371 | 378 | shell.run_cell(code, store_history=store_history, silent=silent) |
|
372 | 379 | except: |
|
373 | 380 | status = u'error' |
|
374 | 381 | # FIXME: this code right now isn't being used yet by default, |
|
375 | 382 | # because the run_cell() call above directly fires off exception |
|
376 | 383 | # reporting. This code, therefore, is only active in the scenario |
|
377 | 384 | # where runlines itself has an unhandled exception. We need to |
|
378 | 385 | # uniformize this, for all exception construction to come from a |
|
379 | 386 | # single location in the codebase. |
|
380 | 387 | etype, evalue, tb = sys.exc_info() |
|
381 | 388 | tb_list = traceback.format_exception(etype, evalue, tb) |
|
382 | 389 | reply_content.update(shell._showtraceback(etype, evalue, tb_list)) |
|
383 | 390 | else: |
|
384 | 391 | status = u'ok' |
|
385 | 392 | |
|
386 | 393 | reply_content[u'status'] = status |
|
387 | 394 | |
|
388 | 395 | # Return the execution counter so clients can display prompts |
|
389 | 396 | reply_content['execution_count'] = shell.execution_count - 1 |
|
390 | 397 | |
|
391 | 398 | # FIXME - fish exception info out of shell, possibly left there by |
|
392 | 399 | # runlines. We'll need to clean up this logic later. |
|
393 | 400 | if shell._reply_content is not None: |
|
394 | 401 | reply_content.update(shell._reply_content) |
|
395 | 402 | e_info = dict(engine_uuid=self.ident, engine_id=self.int_id, method='execute') |
|
396 | 403 | reply_content['engine_info'] = e_info |
|
397 | 404 | # reset after use |
|
398 | 405 | shell._reply_content = None |
|
399 | 406 | |
|
400 | 407 | # At this point, we can tell whether the main code execution succeeded |
|
401 | 408 | # or not. If it did, we proceed to evaluate user_variables/expressions |
|
402 | 409 | if reply_content['status'] == 'ok': |
|
403 | 410 | reply_content[u'user_variables'] = \ |
|
404 | 411 | shell.user_variables(content.get(u'user_variables', [])) |
|
405 | 412 | reply_content[u'user_expressions'] = \ |
|
406 | 413 | shell.user_expressions(content.get(u'user_expressions', {})) |
|
407 | 414 | else: |
|
408 | 415 | # If there was an error, don't even try to compute variables or |
|
409 | 416 | # expressions |
|
410 | 417 | reply_content[u'user_variables'] = {} |
|
411 | 418 | reply_content[u'user_expressions'] = {} |
|
412 | 419 | |
|
413 | 420 | # Payloads should be retrieved regardless of outcome, so we can both |
|
414 | 421 | # recover partial output (that could have been generated early in a |
|
415 | 422 | # block, before an error) and clear the payload system always. |
|
416 | 423 | reply_content[u'payload'] = shell.payload_manager.read_payload() |
|
417 | 424 | # Be aggressive about clearing the payload because we don't want |
|
418 | 425 | # it to sit in memory until the next execute_request comes in. |
|
419 | 426 | shell.payload_manager.clear_payload() |
|
420 | 427 | |
|
421 | 428 | # Flush output before sending the reply. |
|
422 | 429 | sys.stdout.flush() |
|
423 | 430 | sys.stderr.flush() |
|
424 | 431 | # FIXME: on rare occasions, the flush doesn't seem to make it to the |
|
425 | 432 | # clients... This seems to mitigate the problem, but we definitely need |
|
426 | 433 | # to better understand what's going on. |
|
427 | 434 | if self._execute_sleep: |
|
428 | 435 | time.sleep(self._execute_sleep) |
|
429 | 436 | |
|
430 | 437 | # Send the reply. |
|
431 | 438 | reply_content = json_clean(reply_content) |
|
432 | 439 | |
|
433 | 440 | md['status'] = reply_content['status'] |
|
434 | 441 | if reply_content['status'] == 'error' and \ |
|
435 | 442 | reply_content['ename'] == 'UnmetDependency': |
|
436 | 443 | md['dependencies_met'] = False |
|
437 | 444 | |
|
438 | 445 | reply_msg = self.session.send(stream, u'execute_reply', |
|
439 | 446 | reply_content, parent, metadata=md, |
|
440 | 447 | ident=ident) |
|
441 | 448 | |
|
442 | 449 | self.log.debug("%s", reply_msg) |
|
443 | 450 | |
|
444 | 451 | if not silent and reply_msg['content']['status'] == u'error': |
|
445 | 452 | self._abort_queues() |
|
446 | 453 | |
|
447 | 454 | self._publish_status(u'idle', parent) |
|
448 | 455 | |
|
449 | 456 | def complete_request(self, stream, ident, parent): |
|
450 | 457 | txt, matches = self._complete(parent) |
|
451 | 458 | matches = {'matches' : matches, |
|
452 | 459 | 'matched_text' : txt, |
|
453 | 460 | 'status' : 'ok'} |
|
454 | 461 | matches = json_clean(matches) |
|
455 | 462 | completion_msg = self.session.send(stream, 'complete_reply', |
|
456 | 463 | matches, parent, ident) |
|
457 | 464 | self.log.debug("%s", completion_msg) |
|
458 | 465 | |
|
459 | 466 | def object_info_request(self, stream, ident, parent): |
|
460 | 467 | content = parent['content'] |
|
461 | 468 | object_info = self.shell.object_inspect(content['oname'], |
|
462 | 469 | detail_level = content.get('detail_level', 0) |
|
463 | 470 | ) |
|
464 | 471 | # Before we send this object over, we scrub it for JSON usage |
|
465 | 472 | oinfo = json_clean(object_info) |
|
466 | 473 | msg = self.session.send(stream, 'object_info_reply', |
|
467 | 474 | oinfo, parent, ident) |
|
468 | 475 | self.log.debug("%s", msg) |
|
469 | 476 | |
|
470 | 477 | def history_request(self, stream, ident, parent): |
|
471 | 478 | # We need to pull these out, as passing **kwargs doesn't work with |
|
472 | 479 | # unicode keys before Python 2.6.5. |
|
473 | 480 | hist_access_type = parent['content']['hist_access_type'] |
|
474 | 481 | raw = parent['content']['raw'] |
|
475 | 482 | output = parent['content']['output'] |
|
476 | 483 | if hist_access_type == 'tail': |
|
477 | 484 | n = parent['content']['n'] |
|
478 | 485 | hist = self.shell.history_manager.get_tail(n, raw=raw, output=output, |
|
479 | 486 | include_latest=True) |
|
480 | 487 | |
|
481 | 488 | elif hist_access_type == 'range': |
|
482 | 489 | session = parent['content']['session'] |
|
483 | 490 | start = parent['content']['start'] |
|
484 | 491 | stop = parent['content']['stop'] |
|
485 | 492 | hist = self.shell.history_manager.get_range(session, start, stop, |
|
486 | 493 | raw=raw, output=output) |
|
487 | 494 | |
|
488 | 495 | elif hist_access_type == 'search': |
|
489 | 496 | n = parent['content'].get('n') |
|
490 | 497 | pattern = parent['content']['pattern'] |
|
491 | 498 | hist = self.shell.history_manager.search(pattern, raw=raw, |
|
492 | 499 | output=output, n=n) |
|
493 | 500 | |
|
494 | 501 | else: |
|
495 | 502 | hist = [] |
|
496 | 503 | hist = list(hist) |
|
497 | 504 | content = {'history' : hist} |
|
498 | 505 | content = json_clean(content) |
|
499 | 506 | msg = self.session.send(stream, 'history_reply', |
|
500 | 507 | content, parent, ident) |
|
501 | 508 | self.log.debug("Sending history reply with %i entries", len(hist)) |
|
502 | 509 | |
|
503 | 510 | def connect_request(self, stream, ident, parent): |
|
504 | 511 | if self._recorded_ports is not None: |
|
505 | 512 | content = self._recorded_ports.copy() |
|
506 | 513 | else: |
|
507 | 514 | content = {} |
|
508 | 515 | msg = self.session.send(stream, 'connect_reply', |
|
509 | 516 | content, parent, ident) |
|
510 | 517 | self.log.debug("%s", msg) |
|
511 | 518 | |
|
519 | def kernel_info_request(self, stream, ident, parent): | |
|
520 | vinfo = { | |
|
521 | 'protocol_version': protocol_version, | |
|
522 | 'ipython_version': ipython_version, | |
|
523 | 'language_version': language_version, | |
|
524 | 'language': 'python', | |
|
525 | } | |
|
526 | msg = self.session.send(stream, 'kernel_info_reply', | |
|
527 | vinfo, parent, ident) | |
|
528 | self.log.debug("%s", msg) | |
|
529 | ||
|
512 | 530 | def shutdown_request(self, stream, ident, parent): |
|
513 | 531 | self.shell.exit_now = True |
|
514 | 532 | content = dict(status='ok') |
|
515 | 533 | content.update(parent['content']) |
|
516 | 534 | self.session.send(stream, u'shutdown_reply', content, parent, ident=ident) |
|
517 | 535 | # same content, but different msg_id for broadcasting on IOPub |
|
518 | 536 | self._shutdown_message = self.session.msg(u'shutdown_reply', |
|
519 | 537 | content, parent |
|
520 | 538 | ) |
|
521 | 539 | |
|
522 | 540 | self._at_shutdown() |
|
523 | 541 | # call sys.exit after a short delay |
|
524 | 542 | loop = ioloop.IOLoop.instance() |
|
525 | 543 | loop.add_timeout(time.time()+0.1, loop.stop) |
|
526 | 544 | |
|
527 | 545 | #--------------------------------------------------------------------------- |
|
528 | 546 | # Engine methods |
|
529 | 547 | #--------------------------------------------------------------------------- |
|
530 | 548 | |
|
531 | 549 | def apply_request(self, stream, ident, parent): |
|
532 | 550 | try: |
|
533 | 551 | content = parent[u'content'] |
|
534 | 552 | bufs = parent[u'buffers'] |
|
535 | 553 | msg_id = parent['header']['msg_id'] |
|
536 | 554 | except: |
|
537 | 555 | self.log.error("Got bad msg: %s", parent, exc_info=True) |
|
538 | 556 | return |
|
539 | 557 | |
|
540 | 558 | self._publish_status(u'busy', parent) |
|
541 | 559 | |
|
542 | 560 | # Set the parent message of the display hook and out streams. |
|
543 | 561 | shell = self.shell |
|
544 | 562 | shell.displayhook.set_parent(parent) |
|
545 | 563 | shell.display_pub.set_parent(parent) |
|
546 | 564 | shell.data_pub.set_parent(parent) |
|
547 | 565 | sys.stdout.set_parent(parent) |
|
548 | 566 | sys.stderr.set_parent(parent) |
|
549 | 567 | |
|
550 | 568 | # pyin_msg = self.session.msg(u'pyin',{u'code':code}, parent=parent) |
|
551 | 569 | # self.iopub_socket.send(pyin_msg) |
|
552 | 570 | # self.session.send(self.iopub_socket, u'pyin', {u'code':code},parent=parent) |
|
553 | 571 | md = self._make_metadata(parent['metadata']) |
|
554 | 572 | try: |
|
555 | 573 | working = shell.user_ns |
|
556 | 574 | |
|
557 | 575 | prefix = "_"+str(msg_id).replace("-","")+"_" |
|
558 | 576 | |
|
559 | 577 | f,args,kwargs = unpack_apply_message(bufs, working, copy=False) |
|
560 | 578 | |
|
561 | 579 | fname = getattr(f, '__name__', 'f') |
|
562 | 580 | |
|
563 | 581 | fname = prefix+"f" |
|
564 | 582 | argname = prefix+"args" |
|
565 | 583 | kwargname = prefix+"kwargs" |
|
566 | 584 | resultname = prefix+"result" |
|
567 | 585 | |
|
568 | 586 | ns = { fname : f, argname : args, kwargname : kwargs , resultname : None } |
|
569 | 587 | # print ns |
|
570 | 588 | working.update(ns) |
|
571 | 589 | code = "%s = %s(*%s,**%s)" % (resultname, fname, argname, kwargname) |
|
572 | 590 | try: |
|
573 | 591 | exec code in shell.user_global_ns, shell.user_ns |
|
574 | 592 | result = working.get(resultname) |
|
575 | 593 | finally: |
|
576 | 594 | for key in ns.iterkeys(): |
|
577 | 595 | working.pop(key) |
|
578 | 596 | |
|
579 | 597 | result_buf = serialize_object(result, |
|
580 | 598 | buffer_threshold=self.session.buffer_threshold, |
|
581 | 599 | item_threshold=self.session.item_threshold, |
|
582 | 600 | ) |
|
583 | 601 | |
|
584 | 602 | except: |
|
585 | 603 | # invoke IPython traceback formatting |
|
586 | 604 | shell.showtraceback() |
|
587 | 605 | # FIXME - fish exception info out of shell, possibly left there by |
|
588 | 606 | # run_code. We'll need to clean up this logic later. |
|
589 | 607 | reply_content = {} |
|
590 | 608 | if shell._reply_content is not None: |
|
591 | 609 | reply_content.update(shell._reply_content) |
|
592 | 610 | e_info = dict(engine_uuid=self.ident, engine_id=self.int_id, method='apply') |
|
593 | 611 | reply_content['engine_info'] = e_info |
|
594 | 612 | # reset after use |
|
595 | 613 | shell._reply_content = None |
|
596 | 614 | |
|
597 | 615 | self.session.send(self.iopub_socket, u'pyerr', reply_content, parent=parent, |
|
598 | 616 | ident=self._topic('pyerr')) |
|
599 | 617 | result_buf = [] |
|
600 | 618 | |
|
601 | 619 | if reply_content['ename'] == 'UnmetDependency': |
|
602 | 620 | md['dependencies_met'] = False |
|
603 | 621 | else: |
|
604 | 622 | reply_content = {'status' : 'ok'} |
|
605 | 623 | |
|
606 | 624 | # put 'ok'/'error' status in header, for scheduler introspection: |
|
607 | 625 | md['status'] = reply_content['status'] |
|
608 | 626 | |
|
609 | 627 | # flush i/o |
|
610 | 628 | sys.stdout.flush() |
|
611 | 629 | sys.stderr.flush() |
|
612 | 630 | |
|
613 | 631 | reply_msg = self.session.send(stream, u'apply_reply', reply_content, |
|
614 | 632 | parent=parent, ident=ident,buffers=result_buf, metadata=md) |
|
615 | 633 | |
|
616 | 634 | self._publish_status(u'idle', parent) |
|
617 | 635 | |
|
618 | 636 | #--------------------------------------------------------------------------- |
|
619 | 637 | # Control messages |
|
620 | 638 | #--------------------------------------------------------------------------- |
|
621 | 639 | |
|
622 | 640 | def abort_request(self, stream, ident, parent): |
|
623 | 641 | """abort a specifig msg by id""" |
|
624 | 642 | msg_ids = parent['content'].get('msg_ids', None) |
|
625 | 643 | if isinstance(msg_ids, basestring): |
|
626 | 644 | msg_ids = [msg_ids] |
|
627 | 645 | if not msg_ids: |
|
627 | 645 | self._abort_queues() |
|
629 | 647 | for mid in msg_ids: |
|
630 | 648 | self.aborted.add(str(mid)) |
|
631 | 649 | |
|
632 | 650 | content = dict(status='ok') |
|
633 | 651 | reply_msg = self.session.send(stream, 'abort_reply', content=content, |
|
634 | 652 | parent=parent, ident=ident) |
|
635 | 653 | self.log.debug("%s", reply_msg) |
|
636 | 654 | |
|
637 | 655 | def clear_request(self, stream, idents, parent): |
|
638 | 656 | """Clear our namespace.""" |
|
639 | 657 | self.shell.reset(False) |
|
640 | 658 | msg = self.session.send(stream, 'clear_reply', ident=idents, parent=parent, |
|
641 | 659 | content = dict(status='ok')) |
|
642 | 660 | |
|
643 | 661 | |
|
644 | 662 | #--------------------------------------------------------------------------- |
|
645 | 663 | # Protected interface |
|
646 | 664 | #--------------------------------------------------------------------------- |
|
647 | 665 | |
|
648 | 666 | |
|
649 | 667 | def _wrap_exception(self, method=None): |
|
650 | 668 | # import here, because _wrap_exception is only used in parallel, |
|
651 | 669 | # and parallel has higher min pyzmq version |
|
652 | 670 | from IPython.parallel.error import wrap_exception |
|
653 | 671 | e_info = dict(engine_uuid=self.ident, engine_id=self.int_id, method=method) |
|
654 | 672 | content = wrap_exception(e_info) |
|
655 | 673 | return content |
|
656 | 674 | |
|
657 | 675 | def _topic(self, topic): |
|
658 | 676 | """prefixed topic for IOPub messages""" |
|
659 | 677 | if self.int_id >= 0: |
|
660 | 678 | base = "engine.%i" % self.int_id |
|
661 | 679 | else: |
|
662 | 680 | base = "kernel.%s" % self.ident |
|
663 | 681 | |
|
664 | 682 | return py3compat.cast_bytes("%s.%s" % (base, topic)) |
|
665 | 683 | |
|
666 | 684 | def _abort_queues(self): |
|
667 | 685 | for stream in self.shell_streams: |
|
668 | 686 | if stream: |
|
669 | 687 | self._abort_queue(stream) |
|
670 | 688 | |
|
671 | 689 | def _abort_queue(self, stream): |
|
672 | 690 | poller = zmq.Poller() |
|
673 | 691 | poller.register(stream.socket, zmq.POLLIN) |
|
674 | 692 | while True: |
|
675 | 693 | idents,msg = self.session.recv(stream, zmq.NOBLOCK, content=True) |
|
676 | 694 | if msg is None: |
|
677 | 695 | return |
|
678 | 696 | |
|
679 | 697 | self.log.info("Aborting:") |
|
680 | 698 | self.log.info("%s", msg) |
|
681 | 699 | msg_type = msg['header']['msg_type'] |
|
682 | 700 | reply_type = msg_type.split('_')[0] + '_reply' |
|
683 | 701 | |
|
684 | 702 | status = {'status' : 'aborted'} |
|
685 | 703 | md = {'engine' : self.ident} |
|
686 | 704 | md.update(status) |
|
687 | 705 | reply_msg = self.session.send(stream, reply_type, metadata=md, |
|
688 | 706 | content=status, parent=msg, ident=idents) |
|
689 | 707 | self.log.debug("%s", reply_msg) |
|
690 | 708 | # We need to wait a bit for requests to come in. This can probably |
|
691 | 709 | # be set shorter for true asynchronous clients. |
|
692 | 710 | poller.poll(50) |
|
693 | 711 | |
|
694 | 712 | |
|
695 | 713 | def _no_raw_input(self): |
|
696 | 714 | """Raise StdinNotImplentedError if active frontend doesn't support |
|
697 | 715 | stdin.""" |
|
698 | 716 | raise StdinNotImplementedError("raw_input was called, but this " |
|
699 | 717 | "frontend does not support stdin.") |
|
700 | 718 | |
|
701 | 719 | def _raw_input(self, prompt, ident, parent): |
|
702 | 720 | # Flush output before making the request. |
|
703 | 721 | sys.stderr.flush() |
|
704 | 722 | sys.stdout.flush() |
|
705 | 723 | |
|
706 | 724 | # Send the input request. |
|
707 | 725 | content = json_clean(dict(prompt=prompt)) |
|
708 | 726 | self.session.send(self.stdin_socket, u'input_request', content, parent, |
|
709 | 727 | ident=ident) |
|
710 | 728 | |
|
711 | 729 | # Await a response. |
|
712 | 730 | while True: |
|
713 | 731 | try: |
|
714 | 732 | ident, reply = self.session.recv(self.stdin_socket, 0) |
|
715 | 733 | except Exception: |
|
716 | 734 | self.log.warn("Invalid Message:", exc_info=True) |
|
717 | 735 | else: |
|
718 | 736 | break |
|
719 | 737 | try: |
|
720 | 738 | value = reply['content']['value'] |
|
721 | 739 | except: |
|
722 | 740 | self.log.error("Got bad raw_input reply: ") |
|
723 | 741 | self.log.error("%s", parent) |
|
724 | 742 | value = '' |
|
725 | 743 | if value == '\x04': |
|
726 | 744 | # EOF |
|
727 | 745 | raise EOFError |
|
728 | 746 | return value |
|
729 | 747 | |
|
730 | 748 | def _complete(self, msg): |
|
731 | 749 | c = msg['content'] |
|
732 | 750 | try: |
|
733 | 751 | cpos = int(c['cursor_pos']) |
|
734 | 752 | except: |
|
735 | 753 | # If we don't get something that we can convert to an integer, at |
|
736 | 754 | # least attempt the completion, guessing the cursor is at the end of |
|
737 | 755 | # the text if there is any, and otherwise at the end of the line |
|
738 | 756 | cpos = len(c['text']) |
|
739 | 757 | if cpos==0: |
|
740 | 758 | cpos = len(c['line']) |
|
741 | 759 | return self.shell.complete(c['text'], c['line'], cpos) |
|
742 | 760 | |
|
743 | 761 | def _object_info(self, context): |
|
744 | 762 | symbol, leftover = self._symbol_from_context(context) |
|
745 | 763 | if symbol is not None and not leftover: |
|
746 | 764 | doc = getattr(symbol, '__doc__', '') |
|
747 | 765 | else: |
|
748 | 766 | doc = '' |
|
749 | 767 | object_info = dict(docstring = doc) |
|
750 | 768 | return object_info |
|
751 | 769 | |
|
752 | 770 | def _symbol_from_context(self, context): |
|
753 | 771 | if not context: |
|
754 | 772 | return None, context |
|
755 | 773 | |
|
756 | 774 | base_symbol_string = context[0] |
|
757 | 775 | symbol = self.shell.user_ns.get(base_symbol_string, None) |
|
758 | 776 | if symbol is None: |
|
759 | 777 | symbol = __builtin__.__dict__.get(base_symbol_string, None) |
|
760 | 778 | if symbol is None: |
|
761 | 779 | return None, context |
|
762 | 780 | |
|
763 | 781 | context = context[1:] |
|
764 | 782 | for i, name in enumerate(context): |
|
765 | 783 | new_symbol = getattr(symbol, name, None) |
|
766 | 784 | if new_symbol is None: |
|
767 | 785 | return symbol, context[i:] |
|
768 | 786 | else: |
|
769 | 787 | symbol = new_symbol |
|
770 | 788 | |
|
771 | 789 | return symbol, [] |
|
772 | 790 | |
|
773 | 791 | def _at_shutdown(self): |
|
774 | 792 | """Actions taken at shutdown by the kernel, called by python's atexit. |
|
775 | 793 | """ |
|
776 | 794 | # io.rprint("Kernel at_shutdown") # dbg |
|
777 | 795 | if self._shutdown_message is not None: |
|
778 | 796 | self.session.send(self.iopub_socket, self._shutdown_message, ident=self._topic('shutdown')) |
|
779 | 797 | self.log.debug("%s", self._shutdown_message) |
|
780 | 798 | [ s.flush(zmq.POLLOUT) for s in self.shell_streams ] |
|
781 | 799 | |
|
782 | 800 | #----------------------------------------------------------------------------- |
|
783 | 801 | # Aliases and Flags for the IPKernelApp |
|
784 | 802 | #----------------------------------------------------------------------------- |
|
785 | 803 | |
|
786 | 804 | flags = dict(kernel_flags) |
|
787 | 805 | flags.update(shell_flags) |
|
788 | 806 | |
|
789 | 807 | addflag = lambda *args: flags.update(boolean_flag(*args)) |
|
790 | 808 | |
|
791 | 809 | flags['pylab'] = ( |
|
792 | 810 | {'IPKernelApp' : {'pylab' : 'auto'}}, |
|
793 | 811 | """Pre-load matplotlib and numpy for interactive use with |
|
794 | 812 | the default matplotlib backend.""" |
|
795 | 813 | ) |
|
796 | 814 | |
|
797 | 815 | aliases = dict(kernel_aliases) |
|
798 | 816 | aliases.update(shell_aliases) |
|
799 | 817 | |
|
800 | 818 | #----------------------------------------------------------------------------- |
|
801 | 819 | # The IPKernelApp class |
|
802 | 820 | #----------------------------------------------------------------------------- |
|
803 | 821 | |
|
804 | 822 | class IPKernelApp(KernelApp, InteractiveShellApp): |
|
805 | 823 | name = 'ipkernel' |
|
806 | 824 | |
|
807 | 825 | aliases = Dict(aliases) |
|
808 | 826 | flags = Dict(flags) |
|
809 | 827 | classes = [Kernel, ZMQInteractiveShell, ProfileDir, Session] |
|
810 | 828 | |
|
811 | 829 | @catch_config_error |
|
812 | 830 | def initialize(self, argv=None): |
|
813 | 831 | super(IPKernelApp, self).initialize(argv) |
|
814 | 832 | self.init_path() |
|
815 | 833 | self.init_shell() |
|
816 | 834 | self.init_gui_pylab() |
|
817 | 835 | self.init_extensions() |
|
818 | 836 | self.init_code() |
|
819 | 837 | |
|
820 | 838 | def init_kernel(self): |
|
821 | 839 | |
|
822 | 840 | shell_stream = ZMQStream(self.shell_socket) |
|
823 | 841 | |
|
824 | 842 | kernel = Kernel(config=self.config, session=self.session, |
|
825 | 843 | shell_streams=[shell_stream], |
|
826 | 844 | iopub_socket=self.iopub_socket, |
|
827 | 845 | stdin_socket=self.stdin_socket, |
|
828 | 846 | log=self.log, |
|
829 | 847 | profile_dir=self.profile_dir, |
|
830 | 848 | ) |
|
831 | 849 | self.kernel = kernel |
|
832 | 850 | kernel.record_ports(self.ports) |
|
833 | 851 | shell = kernel.shell |
|
834 | 852 | |
|
835 | 853 | def init_gui_pylab(self): |
|
836 | 854 | """Enable GUI event loop integration, taking pylab into account.""" |
|
837 | 855 | |
|
838 | 856 | # Provide a wrapper for :meth:`InteractiveShellApp.init_gui_pylab` |
|
839 | 857 | # to ensure that any exception is printed straight to stderr. |
|
840 | 858 | # Normally _showtraceback associates the reply with an execution, |
|
841 | 859 | # which means frontends will never draw it, as this exception |
|
842 | 860 | # is not associated with any execute request. |
|
843 | 861 | |
|
844 | 862 | shell = self.shell |
|
845 | 863 | _showtraceback = shell._showtraceback |
|
846 | 864 | try: |
|
847 | 865 | # replace pyerr-sending traceback with stderr |
|
848 | 866 | def print_tb(etype, evalue, stb): |
|
849 | 867 | print ("GUI event loop or pylab initialization failed", |
|
850 | 868 | file=io.stderr) |
|
851 | 869 | print (shell.InteractiveTB.stb2text(stb), file=io.stderr) |
|
852 | 870 | shell._showtraceback = print_tb |
|
853 | 871 | InteractiveShellApp.init_gui_pylab(self) |
|
854 | 872 | finally: |
|
855 | 873 | shell._showtraceback = _showtraceback |
|
856 | 874 | |
|
857 | 875 | def init_shell(self): |
|
858 | 876 | self.shell = self.kernel.shell |
|
859 | 877 | self.shell.configurables.append(self) |
|
860 | 878 | |
|
861 | 879 | |
|
862 | 880 | #----------------------------------------------------------------------------- |
|
863 | 881 | # Kernel main and launch functions |
|
864 | 882 | #----------------------------------------------------------------------------- |
|
865 | 883 | |
|
866 | 884 | def launch_kernel(*args, **kwargs): |
|
867 | 885 | """Launches a localhost IPython kernel, binding to the specified ports. |
|
868 | 886 | |
|
869 | 887 | This function simply calls entry_point.base_launch_kernel with the right |
|
870 | 888 | first command to start an ipkernel. See base_launch_kernel for arguments. |
|
871 | 889 | |
|
872 | 890 | Returns |
|
873 | 891 | ------- |
|
874 | 892 | A tuple of form: |
|
875 | 893 | (kernel_process, shell_port, iopub_port, stdin_port, hb_port) |
|
876 | 894 | where kernel_process is a Popen object and the ports are integers. |
|
877 | 895 | """ |
|
878 | 896 | return base_launch_kernel('from IPython.zmq.ipkernel import main; main()', |
|
879 | 897 | *args, **kwargs) |
|
880 | 898 | |
|
881 | 899 | |
|
882 | 900 | def embed_kernel(module=None, local_ns=None, **kwargs): |
|
883 | 901 | """Embed and start an IPython kernel in a given scope. |
|
884 | 902 | |
|
885 | 903 | Parameters |
|
886 | 904 | ---------- |
|
887 | 905 | module : ModuleType, optional |
|
888 | 906 | The module to load into IPython globals (default: caller) |
|
889 | 907 | local_ns : dict, optional |
|
890 | 908 | The namespace to load into IPython user namespace (default: caller) |
|
891 | 909 | |
|
892 | 910 | kwargs : various, optional |
|
893 | 911 | Further keyword args are relayed to the KernelApp constructor, |
|
894 | 912 | allowing configuration of the Kernel. Will only have an effect |
|
895 | 913 | on the first embed_kernel call for a given process. |
|
896 | 914 | |
|
897 | 915 | """ |
|
898 | 916 | # get the app if it exists, or set it up if it doesn't |
|
899 | 917 | if IPKernelApp.initialized(): |
|
900 | 918 | app = IPKernelApp.instance() |
|
901 | 919 | else: |
|
902 | 920 | app = IPKernelApp.instance(**kwargs) |
|
903 | 921 | app.initialize([]) |
|
904 | 922 | # Undo unnecessary sys module mangling from init_sys_modules. |
|
905 | 923 | # This would not be necessary if we could prevent it |
|
906 | 924 | # in the first place by using a different InteractiveShell |
|
907 | 925 | # subclass, as in the regular embed case. |
|
908 | 926 | main = app.kernel.shell._orig_sys_modules_main_mod |
|
909 | 927 | if main is not None: |
|
910 | 928 | sys.modules[app.kernel.shell._orig_sys_modules_main_name] = main |
|
911 | 929 | |
|
912 | 930 | # load the calling scope if not given |
|
913 | 931 | (caller_module, caller_locals) = extract_module_locals(1) |
|
914 | 932 | if module is None: |
|
915 | 933 | module = caller_module |
|
916 | 934 | if local_ns is None: |
|
917 | 935 | local_ns = caller_locals |
|
918 | 936 | |
|
919 | 937 | app.kernel.user_module = module |
|
920 | 938 | app.kernel.user_ns = local_ns |
|
921 | 939 | app.shell.set_completer_frame() |
|
922 | 940 | app.start() |
|
923 | 941 | |
|
924 | 942 | def main(): |
|
925 | 943 | """Run an IPKernel as an application""" |
|
926 | 944 | app = IPKernelApp.instance() |
|
927 | 945 | app.initialize() |
|
928 | 946 | app.start() |
|
929 | 947 | |
|
930 | 948 | |
|
931 | 949 | if __name__ == '__main__': |
|
932 | 950 | main() |
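
The main addition in this file (the ZMQ kernel, `ipkernel.py`) is the `kernel_info_request` handler, registered alongside the existing shell message types, which lets a frontend ask a running kernel which protocol, IPython, and Python versions it speaks. The reply content built by the handler has the shape below; the concrete Python version is illustrative (a 0.14.dev kernel on Python 2.7.3) and is not taken from this diff:

```python
# Shape of the kernel_info_reply content produced by kernel_info_request above.
vinfo = {
    'protocol_version': [4, 0],            # list(release.kernel_protocol_version_info)
    'ipython_version': [0, 14, 0, 'dev'],  # list(release.version_info)
    'language_version': [2, 7, 3],         # list(sys.version_info[:3]), illustrative
    'language': 'python',
}
```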
@@ -1,1042 +1,1048 @@
|
1 | 1 | """Base classes to manage the interaction with a running kernel. |
|
2 | 2 | |
|
3 | 3 | TODO |
|
4 | 4 | * Create logger to handle debugging and console messages. |
|
5 | 5 | """ |
|
6 | 6 | |
|
7 | 7 | #----------------------------------------------------------------------------- |
|
8 | 8 | # Copyright (C) 2008-2011 The IPython Development Team |
|
9 | 9 | # |
|
10 | 10 | # Distributed under the terms of the BSD License. The full license is in |
|
11 | 11 | # the file COPYING, distributed as part of this software. |
|
12 | 12 | #----------------------------------------------------------------------------- |
|
13 | 13 | |
|
14 | 14 | #----------------------------------------------------------------------------- |
|
15 | 15 | # Imports |
|
16 | 16 | #----------------------------------------------------------------------------- |
|
17 | 17 | |
|
18 | 18 | # Standard library imports. |
|
19 | 19 | import atexit |
|
20 | 20 | import errno |
|
21 | 21 | import json |
|
22 | 22 | from subprocess import Popen |
|
23 | 23 | import os |
|
24 | 24 | import signal |
|
25 | 25 | import sys |
|
26 | 26 | from threading import Thread |
|
27 | 27 | import time |
|
28 | 28 | |
|
29 | 29 | # System library imports. |
|
30 | 30 | import zmq |
|
31 | 31 | # import ZMQError in top-level namespace, to avoid ugly attribute-error messages |
|
32 | 32 | # during garbage collection of threads at exit: |
|
33 | 33 | from zmq import ZMQError |
|
34 | 34 | from zmq.eventloop import ioloop, zmqstream |
|
35 | 35 | |
|
36 | 36 | # Local imports. |
|
37 | 37 | from IPython.config.loader import Config |
|
38 | 38 | from IPython.utils.localinterfaces import LOCALHOST, LOCAL_IPS |
|
39 | 39 | from IPython.utils.traitlets import ( |
|
40 | 40 | HasTraits, Any, Instance, Type, Unicode, Integer, Bool, CaselessStrEnum |
|
41 | 41 | ) |
|
42 | 42 | from IPython.utils.py3compat import str_to_bytes |
|
43 | 43 | from IPython.zmq.entry_point import write_connection_file |
|
44 | 44 | from session import Session |
|
45 | 45 | |
|
46 | 46 | #----------------------------------------------------------------------------- |
|
47 | 47 | # Constants and exceptions |
|
48 | 48 | #----------------------------------------------------------------------------- |
|
49 | 49 | |
|
50 | 50 | class InvalidPortNumber(Exception): |
|
51 | 51 | pass |
|
52 | 52 | |
|
53 | 53 | #----------------------------------------------------------------------------- |
|
54 | 54 | # Utility functions |
|
55 | 55 | #----------------------------------------------------------------------------- |
|
56 | 56 | |
|
57 | 57 | # some utilities to validate message structure, these might get moved elsewhere |
|
58 | 58 | # if they prove to have more generic utility |
|
59 | 59 | |
|
60 | 60 | def validate_string_list(lst): |
|
61 | 61 | """Validate that the input is a list of strings. |
|
62 | 62 | |
|
63 | 63 | Raises ValueError if not.""" |
|
64 | 64 | if not isinstance(lst, list): |
|
65 | 65 | raise ValueError('input %r must be a list' % lst) |
|
66 | 66 | for x in lst: |
|
67 | 67 | if not isinstance(x, basestring): |
|
68 | 68 | raise ValueError('element %r in list must be a string' % x) |
|
69 | 69 | |
|
70 | 70 | |
|
71 | 71 | def validate_string_dict(dct): |
|
72 | 72 | """Validate that the input is a dict with string keys and values. |
|
73 | 73 | |
|
74 | 74 | Raises ValueError if not.""" |
|
75 | 75 | for k,v in dct.iteritems(): |
|
76 | 76 | if not isinstance(k, basestring): |
|
77 | 77 | raise ValueError('key %r in dict must be a string' % k) |
|
78 | 78 | if not isinstance(v, basestring): |
|
79 | 79 | raise ValueError('value %r in dict must be a string' % v) |
|
80 | 80 | |
|
81 | 81 | |
|
82 | 82 | #----------------------------------------------------------------------------- |
|
83 | 83 | # ZMQ Socket Channel classes |
|
84 | 84 | #----------------------------------------------------------------------------- |
|
85 | 85 | |
|
86 | 86 | class ZMQSocketChannel(Thread): |
|
87 | 87 | """The base class for the channels that use ZMQ sockets. |
|
88 | 88 | """ |
|
89 | 89 | context = None |
|
90 | 90 | session = None |
|
91 | 91 | socket = None |
|
92 | 92 | ioloop = None |
|
93 | 93 | stream = None |
|
94 | 94 | _address = None |
|
95 | 95 | _exiting = False |
|
96 | 96 | |
|
97 | 97 | def __init__(self, context, session, address): |
|
98 | 98 | """Create a channel |
|
99 | 99 | |
|
100 | 100 | Parameters |
|
101 | 101 | ---------- |
|
102 | 102 | context : :class:`zmq.Context` |
|
103 | 103 | The ZMQ context to use. |
|
104 | 104 | session : :class:`session.Session` |
|
105 | 105 | The session to use. |
|
106 | 106 | address : zmq url |
|
107 | 107 | Standard (ip, port) tuple that the kernel is listening on. |
|
108 | 108 | """ |
|
109 | 109 | super(ZMQSocketChannel, self).__init__() |
|
110 | 110 | self.daemon = True |
|
111 | 111 | |
|
112 | 112 | self.context = context |
|
113 | 113 | self.session = session |
|
114 | 114 | if isinstance(address, tuple): |
|
115 | 115 | if address[1] == 0: |
|
116 | 116 | message = 'The port number for a channel cannot be 0.' |
|
117 | 117 | raise InvalidPortNumber(message) |
|
118 | 118 | address = "tcp://%s:%i" % address |
|
119 | 119 | self._address = address |
|
120 | 120 | atexit.register(self._notice_exit) |
|
121 | 121 | |
|
122 | 122 | def _notice_exit(self): |
|
123 | 123 | self._exiting = True |
|
124 | 124 | |
|
125 | 125 | def _run_loop(self): |
|
126 | 126 | """Run my loop, ignoring EINTR events in the poller""" |
|
127 | 127 | while True: |
|
128 | 128 | try: |
|
129 | 129 | self.ioloop.start() |
|
130 | 130 | except ZMQError as e: |
|
131 | 131 | if e.errno == errno.EINTR: |
|
132 | 132 | continue |
|
133 | 133 | else: |
|
134 | 134 | raise |
|
135 | 135 | except Exception: |
|
136 | 136 | if self._exiting: |
|
137 | 137 | break |
|
138 | 138 | else: |
|
139 | 139 | raise |
|
140 | 140 | else: |
|
141 | 141 | break |
|
142 | 142 | |
|
143 | 143 | def stop(self): |
|
144 | 144 | """Stop the channel's activity. |
|
145 | 145 | |
|
146 | 146 | This calls :method:`Thread.join` and returns when the thread |
|
147 | 147 | terminates. :class:`RuntimeError` will be raised if |
|
148 | 148 | :method:`self.start` is called again. |
|
149 | 149 | """ |
|
150 | 150 | self.join() |
|
151 | 151 | |
|
152 | 152 | @property |
|
153 | 153 | def address(self): |
|
154 | 154 | """Get the channel's address as a zmq url string ('tcp://127.0.0.1:5555'). |
|
155 | 155 | """ |
|
156 | 156 | return self._address |
|
157 | 157 | |
|
158 | 158 | def _queue_send(self, msg): |
|
159 | 159 | """Queue a message to be sent from the IOLoop's thread. |
|
160 | 160 | |
|
161 | 161 | Parameters |
|
162 | 162 | ---------- |
|
163 | 163 | msg : message to send |
|
164 | 164 | |
|
165 | 165 | This is threadsafe, as it uses IOLoop.add_callback to give the loop's |
|
166 | 166 | thread control of the action. |
|
167 | 167 | """ |
|
168 | 168 | def thread_send(): |
|
169 | 169 | self.session.send(self.stream, msg) |
|
170 | 170 | self.ioloop.add_callback(thread_send) |
|
171 | 171 | |
|
172 | 172 | def _handle_recv(self, msg): |
|
173 | 173 | """callback for stream.on_recv |
|
174 | 174 | |
|
175 | 175 | unpacks message, and calls handlers with it. |
|
176 | 176 | """ |
|
177 | 177 | ident,smsg = self.session.feed_identities(msg) |
|
178 | 178 | self.call_handlers(self.session.unserialize(smsg)) |
|
179 | 179 | |
|
180 | 180 | |
|
181 | 181 | |
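For reference, a small sketch of the base-class constructor's address handling: an (ip, port) tuple is normalized to a tcp url, and port 0 is rejected. The port number below is arbitrary and a default Session is assumed.

    import zmq
    from IPython.zmq.session import Session

    ctx = zmq.Context.instance()
    chan = ZMQSocketChannel(ctx, Session(), ('127.0.0.1', 53001))
    print(chan.address)                     # tcp://127.0.0.1:53001

    try:
        ZMQSocketChannel(ctx, Session(), ('127.0.0.1', 0))
    except InvalidPortNumber as e:
        print(e)                            # "The port number for a channel cannot be 0."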
|
182 | 182 | class ShellSocketChannel(ZMQSocketChannel): |
|
183 | 183 | """The DEALER channel for issuing requests/replies to the kernel.
|
184 | 184 | """ |
|
185 | 185 | |
|
186 | 186 | command_queue = None |
|
187 | 187 | # flag for whether execute requests should be allowed to call raw_input: |
|
188 | 188 | allow_stdin = True |
|
189 | 189 | |
|
190 | 190 | def __init__(self, context, session, address): |
|
191 | 191 | super(ShellSocketChannel, self).__init__(context, session, address) |
|
192 | 192 | self.ioloop = ioloop.IOLoop() |
|
193 | 193 | |
|
194 | 194 | def run(self): |
|
195 | 195 | """The thread's main activity. Call start() instead.""" |
|
196 | 196 | self.socket = self.context.socket(zmq.DEALER) |
|
197 | 197 | self.socket.setsockopt(zmq.IDENTITY, self.session.bsession) |
|
198 | 198 | self.socket.connect(self.address) |
|
199 | 199 | self.stream = zmqstream.ZMQStream(self.socket, self.ioloop) |
|
200 | 200 | self.stream.on_recv(self._handle_recv) |
|
201 | 201 | self._run_loop() |
|
202 | 202 | try: |
|
203 | 203 | self.socket.close() |
|
204 | 204 | except: |
|
205 | 205 | pass |
|
206 | 206 | |
|
207 | 207 | def stop(self): |
|
208 | 208 | self.ioloop.stop() |
|
209 | 209 | super(ShellSocketChannel, self).stop() |
|
210 | 210 | |
|
211 | 211 | def call_handlers(self, msg): |
|
212 | 212 | """This method is called in the ioloop thread when a message arrives. |
|
213 | 213 | |
|
214 | 214 | Subclasses should override this method to handle incoming messages. |
|
215 | 215 | It is important to remember that this method is called in the thread |
|
216 | 216 | so that some logic must be done to ensure that the application level
|
217 | 217 | handlers are called in the application thread. |
|
218 | 218 | """ |
|
219 | 219 | raise NotImplementedError('call_handlers must be defined in a subclass.') |
|
220 | 220 | |
|
221 | 221 | def execute(self, code, silent=False, store_history=True, |
|
222 | 222 | user_variables=None, user_expressions=None, allow_stdin=None): |
|
223 | 223 | """Execute code in the kernel. |
|
224 | 224 | |
|
225 | 225 | Parameters |
|
226 | 226 | ---------- |
|
227 | 227 | code : str |
|
228 | 228 | A string of Python code. |
|
229 | 229 | |
|
230 | 230 | silent : bool, optional (default False) |
|
231 | 231 | If set, the kernel will execute the code as quietly as possible, and
|
232 | 232 | will force store_history to be False. |
|
233 | 233 | |
|
234 | 234 | store_history : bool, optional (default True) |
|
235 | 235 | If set, the kernel will store command history. This is forced |
|
236 | 236 | to be False if silent is True. |
|
237 | 237 | |
|
238 | 238 | user_variables : list, optional |
|
239 | 239 | A list of variable names to pull from the user's namespace. They |
|
240 | 240 | will come back as a dict with these names as keys and their |
|
241 | 241 | :func:`repr` as values. |
|
242 | 242 | |
|
243 | 243 | user_expressions : dict, optional |
|
244 | 244 | A dict mapping names to expressions to be evaluated in the user's |
|
245 | 245 | dict. The expression values are returned as strings formatted using |
|
246 | 246 | :func:`repr`. |
|
247 | 247 | |
|
248 | 248 | allow_stdin : bool, optional (default self.allow_stdin) |
|
249 | 249 | Flag for whether the kernel can send stdin requests to frontends. |
|
250 | 250 | |
|
251 | 251 | Some frontends (e.g. the Notebook) do not support stdin requests. |
|
252 | 252 | If raw_input is called from code executed from such a frontend, a |
|
253 | 253 | StdinNotImplementedError will be raised. |
|
254 | 254 | |
|
255 | 255 | Returns |
|
256 | 256 | ------- |
|
257 | 257 | The msg_id of the message sent. |
|
258 | 258 | """ |
|
259 | 259 | if user_variables is None: |
|
260 | 260 | user_variables = [] |
|
261 | 261 | if user_expressions is None: |
|
262 | 262 | user_expressions = {} |
|
263 | 263 | if allow_stdin is None: |
|
264 | 264 | allow_stdin = self.allow_stdin |
|
265 | 265 | |
|
266 | 266 | |
|
267 | 267 | # Don't waste network traffic if inputs are invalid |
|
268 | 268 | if not isinstance(code, basestring): |
|
269 | 269 | raise ValueError('code %r must be a string' % code) |
|
270 | 270 | validate_string_list(user_variables) |
|
271 | 271 | validate_string_dict(user_expressions) |
|
272 | 272 | |
|
273 | 273 | # Create class for content/msg creation. Related to, but possibly |
|
274 | 274 | # not in Session. |
|
275 | 275 | content = dict(code=code, silent=silent, store_history=store_history, |
|
276 | 276 | user_variables=user_variables, |
|
277 | 277 | user_expressions=user_expressions, |
|
278 | 278 | allow_stdin=allow_stdin, |
|
279 | 279 | ) |
|
280 | 280 | msg = self.session.msg('execute_request', content) |
|
281 | 281 | self._queue_send(msg) |
|
282 | 282 | return msg['header']['msg_id'] |
|
283 | 283 | |
|
284 | 284 | def complete(self, text, line, cursor_pos, block=None): |
|
285 | 285 | """Tab complete text in the kernel's namespace. |
|
286 | 286 | |
|
287 | 287 | Parameters |
|
288 | 288 | ---------- |
|
289 | 289 | text : str |
|
290 | 290 | The text to complete. |
|
291 | 291 | line : str |
|
292 | 292 | The full line of text that is the surrounding context for the |
|
293 | 293 | text to complete. |
|
294 | 294 | cursor_pos : int |
|
295 | 295 | The position of the cursor in the line where the completion was |
|
296 | 296 | requested. |
|
297 | 297 | block : str, optional |
|
298 | 298 | The full block of code in which the completion is being requested. |
|
299 | 299 | |
|
300 | 300 | Returns |
|
301 | 301 | ------- |
|
302 | 302 | The msg_id of the message sent. |
|
303 | 303 | """ |
|
304 | 304 | content = dict(text=text, line=line, block=block, cursor_pos=cursor_pos) |
|
305 | 305 | msg = self.session.msg('complete_request', content) |
|
306 | 306 | self._queue_send(msg) |
|
307 | 307 | return msg['header']['msg_id'] |
|
308 | 308 | |
|
309 | 309 | def object_info(self, oname, detail_level=0): |
|
310 | 310 | """Get metadata information about an object. |
|
311 | 311 | |
|
312 | 312 | Parameters |
|
313 | 313 | ---------- |
|
314 | 314 | oname : str |
|
315 | 315 | A string specifying the object name. |
|
316 | 316 | detail_level : int, optional |
|
317 | 317 | The level of detail for the introspection (0-2) |
|
318 | 318 | |
|
319 | 319 | Returns |
|
320 | 320 | ------- |
|
321 | 321 | The msg_id of the message sent. |
|
322 | 322 | """ |
|
323 | 323 | content = dict(oname=oname, detail_level=detail_level) |
|
324 | 324 | msg = self.session.msg('object_info_request', content) |
|
325 | 325 | self._queue_send(msg) |
|
326 | 326 | return msg['header']['msg_id'] |
|
327 | 327 | |
|
328 | 328 | def history(self, raw=True, output=False, hist_access_type='range', **kwargs): |
|
329 | 329 | """Get entries from the history list. |
|
330 | 330 | |
|
331 | 331 | Parameters |
|
332 | 332 | ---------- |
|
333 | 333 | raw : bool |
|
334 | 334 | If True, return the raw input. |
|
335 | 335 | output : bool |
|
336 | 336 | If True, then return the output as well. |
|
337 | 337 | hist_access_type : str |
|
338 | 338 | 'range' (fill in session, start and stop params), 'tail' (fill in n) |
|
339 | 339 | or 'search' (fill in pattern param). |
|
340 | 340 | |
|
341 | 341 | session : int |
|
342 | 342 | For a range request, the session from which to get lines. Session |
|
343 | 343 | numbers are positive integers; negative ones count back from the |
|
344 | 344 | current session. |
|
345 | 345 | start : int |
|
346 | 346 | The first line number of a history range. |
|
347 | 347 | stop : int |
|
348 | 348 | The final (excluded) line number of a history range. |
|
349 | 349 | |
|
350 | 350 | n : int |
|
351 | 351 | The number of lines of history to get for a tail request. |
|
352 | 352 | |
|
353 | 353 | pattern : str |
|
354 | 354 | The glob-syntax pattern for a search request. |
|
355 | 355 | |
|
356 | 356 | Returns |
|
357 | 357 | ------- |
|
358 | 358 | The msg_id of the message sent. |
|
359 | 359 | """ |
|
360 | 360 | content = dict(raw=raw, output=output, hist_access_type=hist_access_type, |
|
361 | 361 | **kwargs) |
|
362 | 362 | msg = self.session.msg('history_request', content) |
|
363 | 363 | self._queue_send(msg) |
|
364 | 364 | return msg['header']['msg_id'] |
|
365 | 365 | |
|
366 | def kernel_info(self): | |
|
367 | """Request kernel info.""" | |
|
368 | msg = self.session.msg('kernel_info_request') | |
|
369 | self._queue_send(msg) | |
|
370 | return msg['header']['msg_id'] | |
|
371 | ||
|
366 | 372 | def shutdown(self, restart=False): |
|
367 | 373 | """Request an immediate kernel shutdown. |
|
368 | 374 | |
|
369 | 375 | Upon receipt of the (empty) reply, client code can safely assume that |
|
370 | 376 | the kernel has shut down and it's safe to forcefully terminate it if |
|
371 | 377 | it's still alive. |
|
372 | 378 | |
|
373 | 379 | The kernel will send the reply via a function registered with Python's |
|
374 | 380 | atexit module, ensuring it's truly done as the kernel is done with all |
|
375 | 381 | normal operation. |
|
376 | 382 | """ |
|
377 | 383 | # Send quit message to kernel. Once we implement kernel-side setattr, |
|
378 | 384 | # this should probably be done that way, but for now this will do. |
|
379 | 385 | msg = self.session.msg('shutdown_request', {'restart':restart}) |
|
380 | 386 | self._queue_send(msg) |
|
381 | 387 | return msg['header']['msg_id'] |
|
382 | 388 | |
|
383 | 389 | |
|
384 | 390 | |
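A hedged usage sketch of the request methods above: each call only queues a message on the DEALER socket and returns its msg_id; the matching reply is delivered later to call_handlers in the channel's thread, which is why a concrete frontend subclasses the channel. The subclass name and address below are illustrative, and a kernel is assumed to be listening there.

    import zmq
    from IPython.zmq.session import Session

    class PrintingShellChannel(ShellSocketChannel):
        def call_handlers(self, msg):
            # runs in the ioloop thread; a real frontend hands the message
            # off to its application thread here
            print(msg['msg_type'])          # e.g. 'execute_reply'

    shell = PrintingShellChannel(zmq.Context.instance(), Session(),
                                 ('127.0.0.1', 53001))
    shell.start()
    mid = shell.execute("a = 1 + 1", user_variables=['a'])
    shell.complete('ran', 'ran', 3)         # complete 'ran' at cursor position 3
    shell.history(hist_access_type='tail', n=5)
    shell.stop()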
|
385 | 391 | class SubSocketChannel(ZMQSocketChannel): |
|
386 | 392 | """The SUB channel which listens for messages that the kernel publishes. |
|
387 | 393 | """ |
|
388 | 394 | |
|
389 | 395 | def __init__(self, context, session, address): |
|
390 | 396 | super(SubSocketChannel, self).__init__(context, session, address) |
|
391 | 397 | self.ioloop = ioloop.IOLoop() |
|
392 | 398 | |
|
393 | 399 | def run(self): |
|
394 | 400 | """The thread's main activity. Call start() instead.""" |
|
395 | 401 | self.socket = self.context.socket(zmq.SUB) |
|
396 | 402 | self.socket.setsockopt(zmq.SUBSCRIBE,b'') |
|
397 | 403 | self.socket.setsockopt(zmq.IDENTITY, self.session.bsession) |
|
398 | 404 | self.socket.connect(self.address) |
|
399 | 405 | self.stream = zmqstream.ZMQStream(self.socket, self.ioloop) |
|
400 | 406 | self.stream.on_recv(self._handle_recv) |
|
401 | 407 | self._run_loop() |
|
402 | 408 | try: |
|
403 | 409 | self.socket.close() |
|
404 | 410 | except: |
|
405 | 411 | pass |
|
406 | 412 | |
|
407 | 413 | def stop(self): |
|
408 | 414 | self.ioloop.stop() |
|
409 | 415 | super(SubSocketChannel, self).stop() |
|
410 | 416 | |
|
411 | 417 | def call_handlers(self, msg): |
|
412 | 418 | """This method is called in the ioloop thread when a message arrives. |
|
413 | 419 | |
|
414 | 420 | Subclasses should override this method to handle incoming messages. |
|
415 | 421 | It is important to remember that this method is called in the thread |
|
416 | 422 | so that some logic must be done to ensure that the application level
|
417 | 423 | handlers are called in the application thread. |
|
418 | 424 | """ |
|
419 | 425 | raise NotImplementedError('call_handlers must be defined in a subclass.') |
|
420 | 426 | |
|
421 | 427 | def flush(self, timeout=1.0): |
|
422 | 428 | """Immediately processes all pending messages on the SUB channel. |
|
423 | 429 | |
|
424 | 430 | Callers should use this method to ensure that :method:`call_handlers` |
|
425 | 431 | has been called for all messages that have been received on the |
|
426 | 432 | 0MQ SUB socket of this channel. |
|
427 | 433 | |
|
428 | 434 | This method is thread safe. |
|
429 | 435 | |
|
430 | 436 | Parameters |
|
431 | 437 | ---------- |
|
432 | 438 | timeout : float, optional |
|
433 | 439 | The maximum amount of time to spend flushing, in seconds. The |
|
434 | 440 | default is one second. |
|
435 | 441 | """ |
|
436 | 442 | # We do the IOLoop callback process twice to ensure that the IOLoop |
|
437 | 443 | # gets to perform at least one full poll. |
|
438 | 444 | stop_time = time.time() + timeout |
|
439 | 445 | for i in xrange(2): |
|
440 | 446 | self._flushed = False |
|
441 | 447 | self.ioloop.add_callback(self._flush) |
|
442 | 448 | while not self._flushed and time.time() < stop_time: |
|
443 | 449 | time.sleep(0.01) |
|
444 | 450 | |
|
445 | 451 | def _flush(self): |
|
446 | 452 | """Callback for :method:`self.flush`.""" |
|
447 | 453 | self.stream.flush() |
|
448 | 454 | self._flushed = True |
|
449 | 455 | |
|
450 | 456 | |
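A note on the design of flush() above: _flush is scheduled on the channel's own ioloop twice because a single callback could run before the loop has performed a full poll of the SUB socket; the calling thread then simply waits for the ioloop thread to set the flag. A hedged usage sketch (the subclass name and address are illustrative, and a kernel is assumed to be publishing on its IOPub socket):

    import zmq
    from IPython.zmq.session import Session

    class CollectingSubChannel(SubSocketChannel):
        def __init__(self, *args, **kwargs):
            super(CollectingSubChannel, self).__init__(*args, **kwargs)
            self.messages = []
        def call_handlers(self, msg):
            self.messages.append(msg)       # runs in the ioloop thread

    sub = CollectingSubChannel(zmq.Context.instance(), Session(),
                               ('127.0.0.1', 53002))
    sub.start()
    sub.flush()                             # drain anything already published
    print(len(sub.messages))
    sub.stop()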
|
451 | 457 | class StdInSocketChannel(ZMQSocketChannel): |
|
452 | 458 | """A reply channel to handle raw_input requests that the kernel makes.""" |
|
453 | 459 | |
|
454 | 460 | msg_queue = None |
|
455 | 461 | |
|
456 | 462 | def __init__(self, context, session, address): |
|
457 | 463 | super(StdInSocketChannel, self).__init__(context, session, address) |
|
458 | 464 | self.ioloop = ioloop.IOLoop() |
|
459 | 465 | |
|
460 | 466 | def run(self): |
|
461 | 467 | """The thread's main activity. Call start() instead.""" |
|
462 | 468 | self.socket = self.context.socket(zmq.DEALER) |
|
463 | 469 | self.socket.setsockopt(zmq.IDENTITY, self.session.bsession) |
|
464 | 470 | self.socket.connect(self.address) |
|
465 | 471 | self.stream = zmqstream.ZMQStream(self.socket, self.ioloop) |
|
466 | 472 | self.stream.on_recv(self._handle_recv) |
|
467 | 473 | self._run_loop() |
|
468 | 474 | try: |
|
469 | 475 | self.socket.close() |
|
470 | 476 | except: |
|
471 | 477 | pass |
|
472 | 478 | |
|
473 | 479 | |
|
474 | 480 | def stop(self): |
|
475 | 481 | self.ioloop.stop() |
|
476 | 482 | super(StdInSocketChannel, self).stop() |
|
477 | 483 | |
|
478 | 484 | def call_handlers(self, msg): |
|
479 | 485 | """This method is called in the ioloop thread when a message arrives. |
|
480 | 486 | |
|
481 | 487 | Subclasses should override this method to handle incoming messages. |
|
482 | 488 | It is important to remember that this method is called in the thread |
|
483 | 489 | so that some logic must be done to ensure that the application level
|
484 | 490 | handlers are called in the application thread. |
|
485 | 491 | """ |
|
486 | 492 | raise NotImplementedError('call_handlers must be defined in a subclass.') |
|
487 | 493 | |
|
488 | 494 | def input(self, string): |
|
489 | 495 | """Send a string of raw input to the kernel.""" |
|
490 | 496 | content = dict(value=string) |
|
491 | 497 | msg = self.session.msg('input_reply', content) |
|
492 | 498 | self._queue_send(msg) |
|
493 | 499 | |
|
494 | 500 | |
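A hedged sketch of answering a raw_input request on this channel: the kernel's input_request arrives via call_handlers, and the frontend replies with input(). The subclass name, address, and canned reply are illustrative.

    import zmq
    from IPython.zmq.session import Session

    class ConsoleStdInChannel(StdInSocketChannel):
        def call_handlers(self, msg):
            if msg['msg_type'] == 'input_request':
                # a real frontend would prompt the user in its own thread
                self.input('some user text')

    stdin = ConsoleStdInChannel(zmq.Context.instance(), Session(),
                                ('127.0.0.1', 53003))
    stdin.start()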
|
495 | 501 | class HBSocketChannel(ZMQSocketChannel): |
|
496 | 502 | """The heartbeat channel which monitors the kernel heartbeat. |
|
497 | 503 | |
|
498 | 504 | Note that the heartbeat channel is paused by default. As long as you start |
|
499 | 505 | this channel, the kernel manager will ensure that it is paused and un-paused |
|
500 | 506 | as appropriate. |
|
501 | 507 | """ |
|
502 | 508 | |
|
503 | 509 | time_to_dead = 3.0 |
|
504 | 510 | socket = None |
|
505 | 511 | poller = None |
|
506 | 512 | _running = None |
|
507 | 513 | _pause = None |
|
508 | 514 | _beating = None |
|
509 | 515 | |
|
510 | 516 | def __init__(self, context, session, address): |
|
511 | 517 | super(HBSocketChannel, self).__init__(context, session, address) |
|
512 | 518 | self._running = False |
|
513 | 519 | self._pause = True
|
514 | 520 | self.poller = zmq.Poller() |
|
515 | 521 | |
|
516 | 522 | def _create_socket(self): |
|
517 | 523 | if self.socket is not None: |
|
518 | 524 | # close previous socket, before opening a new one |
|
519 | 525 | self.poller.unregister(self.socket) |
|
520 | 526 | self.socket.close() |
|
521 | 527 | self.socket = self.context.socket(zmq.REQ) |
|
522 | 528 | self.socket.setsockopt(zmq.LINGER, 0) |
|
523 | 529 | self.socket.connect(self.address) |
|
524 | 530 | |
|
525 | 531 | self.poller.register(self.socket, zmq.POLLIN) |
|
526 | 532 | |
|
527 | 533 | def _poll(self, start_time): |
|
528 | 534 | """poll for heartbeat replies until we reach self.time_to_dead |
|
529 | 535 | |
|
530 | 536 | Ignores interrupts, and returns the result of poll(), which |
|
531 | 537 | will be an empty list if no messages arrived before the timeout, |
|
532 | 538 | or the event tuple if there is a message to receive. |
|
533 | 539 | """ |
|
534 | 540 | |
|
535 | 541 | until_dead = self.time_to_dead - (time.time() - start_time) |
|
536 | 542 | # ensure poll at least once |
|
537 | 543 | until_dead = max(until_dead, 1e-3) |
|
538 | 544 | events = [] |
|
539 | 545 | while True: |
|
540 | 546 | try: |
|
541 | 547 | events = self.poller.poll(1000 * until_dead) |
|
542 | 548 | except ZMQError as e: |
|
543 | 549 | if e.errno == errno.EINTR: |
|
544 | 550 | # ignore interrupts during heartbeat |
|
545 | 551 | # this may never actually happen |
|
546 | 552 | until_dead = self.time_to_dead - (time.time() - start_time) |
|
547 | 553 | until_dead = max(until_dead, 1e-3) |
|
548 | 554 | pass |
|
549 | 555 | else: |
|
550 | 556 | raise |
|
551 | 557 | except Exception: |
|
552 | 558 | if self._exiting: |
|
553 | 559 | break |
|
554 | 560 | else: |
|
555 | 561 | raise |
|
556 | 562 | else: |
|
557 | 563 | break |
|
558 | 564 | return events |
|
559 | 565 | |
|
560 | 566 | def run(self): |
|
561 | 567 | """The thread's main activity. Call start() instead.""" |
|
562 | 568 | self._create_socket() |
|
563 | 569 | self._running = True |
|
564 | 570 | self._beating = True |
|
565 | 571 | |
|
566 | 572 | while self._running: |
|
567 | 573 | if self._pause: |
|
568 | 574 | # just sleep, and skip the rest of the loop |
|
569 | 575 | time.sleep(self.time_to_dead) |
|
570 | 576 | continue |
|
571 | 577 | |
|
572 | 578 | since_last_heartbeat = 0.0 |
|
573 | 579 | # io.rprint('Ping from HB channel') # dbg |
|
574 | 580 | # no need to catch EFSM here, because the previous event was |
|
575 | 581 | # either a recv or connect, which cannot be followed by EFSM |
|
576 | 582 | self.socket.send(b'ping') |
|
577 | 583 | request_time = time.time() |
|
578 | 584 | ready = self._poll(request_time) |
|
579 | 585 | if ready: |
|
580 | 586 | self._beating = True |
|
581 | 587 | # the poll above guarantees we have something to recv |
|
582 | 588 | self.socket.recv() |
|
583 | 589 | # sleep the remainder of the cycle |
|
584 | 590 | remainder = self.time_to_dead - (time.time() - request_time) |
|
585 | 591 | if remainder > 0: |
|
586 | 592 | time.sleep(remainder) |
|
587 | 593 | continue |
|
588 | 594 | else: |
|
589 | 595 | # nothing was received within the time limit, signal heart failure |
|
590 | 596 | self._beating = False |
|
591 | 597 | since_last_heartbeat = time.time() - request_time |
|
592 | 598 | self.call_handlers(since_last_heartbeat) |
|
593 | 599 | # and close/reopen the socket, because the REQ/REP cycle has been broken |
|
594 | 600 | self._create_socket() |
|
595 | 601 | continue |
|
596 | 602 | try: |
|
597 | 603 | self.socket.close() |
|
598 | 604 | except: |
|
599 | 605 | pass |
|
600 | 606 | |
|
601 | 607 | def pause(self): |
|
602 | 608 | """Pause the heartbeat.""" |
|
603 | 609 | self._pause = True |
|
604 | 610 | |
|
605 | 611 | def unpause(self): |
|
606 | 612 | """Unpause the heartbeat.""" |
|
607 | 613 | self._pause = False |
|
608 | 614 | |
|
609 | 615 | def is_beating(self): |
|
610 | 616 | """Is the heartbeat running and responsive (and not paused).""" |
|
611 | 617 | if self.is_alive() and not self._pause and self._beating: |
|
612 | 618 | return True |
|
613 | 619 | else: |
|
614 | 620 | return False |
|
615 | 621 | |
|
616 | 622 | def stop(self): |
|
617 | 623 | self._running = False |
|
618 | 624 | super(HBSocketChannel, self).stop() |
|
619 | 625 | |
|
620 | 626 | def call_handlers(self, since_last_heartbeat): |
|
621 | 627 | """This method is called in the ioloop thread when a message arrives. |
|
622 | 628 | |
|
623 | 629 | Subclasses should override this method to handle incoming messages. |
|
624 | 630 | It is important to remember that this method is called in the thread |
|
625 | 631 | so that some logic must be done to ensure that the application level |
|
626 | 632 | handlers are called in the application thread. |
|
627 | 633 | """ |
|
628 | 634 | raise NotImplementedError('call_handlers must be defined in a subclass.') |
|
629 | 635 | |
|
630 | 636 | |
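A hedged sketch of monitoring the heartbeat: the channel starts paused, so a frontend unpauses it and overrides call_handlers, which is only invoked when a ping goes unanswered for time_to_dead seconds. The subclass name and address are illustrative; with no kernel listening, the warning fires after roughly three seconds.

    import time
    import zmq
    from IPython.zmq.session import Session

    class WarningHBChannel(HBSocketChannel):
        def call_handlers(self, since_last_heartbeat):
            print('no heartbeat for %.1f s' % since_last_heartbeat)

    hb = WarningHBChannel(zmq.Context.instance(), Session(),
                          ('127.0.0.1', 53004))
    hb.start()
    hb.unpause()
    time.sleep(5)
    print(hb.is_beating())                  # False if no kernel ever answered
    hb.stop()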
|
631 | 637 | #----------------------------------------------------------------------------- |
|
632 | 638 | # Main kernel manager class |
|
633 | 639 | #----------------------------------------------------------------------------- |
|
634 | 640 | |
|
635 | 641 | class KernelManager(HasTraits): |
|
636 | 642 | """ Manages a kernel for a frontend. |
|
637 | 643 | |
|
638 | 644 | The SUB channel is for the frontend to receive messages published by the |
|
639 | 645 | kernel. |
|
640 | 646 | |
|
641 | 647 | The REQ channel is for the frontend to make requests of the kernel. |
|
642 | 648 | |
|
643 | 649 | The REP channel is for the kernel to request stdin (raw_input) from the |
|
644 | 650 | frontend. |
|
645 | 651 | """ |
|
646 | 652 | # config object for passing to child configurables |
|
647 | 653 | config = Instance(Config) |
|
648 | 654 | |
|
649 | 655 | # The PyZMQ Context to use for communication with the kernel. |
|
650 | 656 | context = Instance(zmq.Context) |
|
651 | 657 | def _context_default(self): |
|
652 | 658 | return zmq.Context.instance() |
|
653 | 659 | |
|
654 | 660 | # The Session to use for communication with the kernel. |
|
655 | 661 | session = Instance(Session) |
|
656 | 662 | |
|
657 | 663 | # The kernel process with which the KernelManager is communicating. |
|
658 | 664 | kernel = Instance(Popen) |
|
659 | 665 | |
|
660 | 666 | # The addresses for the communication channels. |
|
661 | 667 | connection_file = Unicode('') |
|
662 | 668 | |
|
663 | 669 | transport = CaselessStrEnum(['tcp', 'ipc'], default_value='tcp') |
|
664 | 670 | |
|
665 | 671 | |
|
666 | 672 | ip = Unicode(LOCALHOST) |
|
667 | 673 | def _ip_changed(self, name, old, new): |
|
668 | 674 | if new == '*': |
|
669 | 675 | self.ip = '0.0.0.0' |
|
670 | 676 | shell_port = Integer(0) |
|
671 | 677 | iopub_port = Integer(0) |
|
672 | 678 | stdin_port = Integer(0) |
|
673 | 679 | hb_port = Integer(0) |
|
674 | 680 | |
|
675 | 681 | # The classes to use for the various channels. |
|
676 | 682 | shell_channel_class = Type(ShellSocketChannel) |
|
677 | 683 | sub_channel_class = Type(SubSocketChannel) |
|
678 | 684 | stdin_channel_class = Type(StdInSocketChannel) |
|
679 | 685 | hb_channel_class = Type(HBSocketChannel) |
|
680 | 686 | |
|
681 | 687 | # Protected traits. |
|
682 | 688 | _launch_args = Any |
|
683 | 689 | _shell_channel = Any |
|
684 | 690 | _sub_channel = Any |
|
685 | 691 | _stdin_channel = Any |
|
686 | 692 | _hb_channel = Any |
|
687 | 693 | _connection_file_written=Bool(False) |
|
688 | 694 | |
|
689 | 695 | def __init__(self, **kwargs): |
|
690 | 696 | super(KernelManager, self).__init__(**kwargs) |
|
691 | 697 | if self.session is None: |
|
692 | 698 | self.session = Session(config=self.config) |
|
693 | 699 | |
|
694 | 700 | def __del__(self): |
|
695 | 701 | self.cleanup_connection_file() |
|
696 | 702 | |
|
697 | 703 | |
|
698 | 704 | #-------------------------------------------------------------------------- |
|
699 | 705 | # Channel management methods: |
|
700 | 706 | #-------------------------------------------------------------------------- |
|
701 | 707 | |
|
702 | 708 | def start_channels(self, shell=True, sub=True, stdin=True, hb=True): |
|
703 | 709 | """Starts the channels for this kernel. |
|
704 | 710 | |
|
705 | 711 | This will create the channels if they do not exist and then start |
|
706 | 712 | them. If port numbers of 0 are being used (random ports) then you |
|
707 | 713 | must first call :method:`start_kernel`. If the channels have been |
|
708 | 714 | stopped and you call this, :class:`RuntimeError` will be raised. |
|
709 | 715 | """ |
|
710 | 716 | if shell: |
|
711 | 717 | self.shell_channel.start() |
|
712 | 718 | if sub: |
|
713 | 719 | self.sub_channel.start() |
|
714 | 720 | if stdin: |
|
715 | 721 | self.stdin_channel.start() |
|
716 | 722 | self.shell_channel.allow_stdin = True |
|
717 | 723 | else: |
|
718 | 724 | self.shell_channel.allow_stdin = False |
|
719 | 725 | if hb: |
|
720 | 726 | self.hb_channel.start() |
|
721 | 727 | |
|
722 | 728 | def stop_channels(self): |
|
723 | 729 | """Stops all the running channels for this kernel. |
|
724 | 730 | """ |
|
725 | 731 | if self.shell_channel.is_alive(): |
|
726 | 732 | self.shell_channel.stop() |
|
727 | 733 | if self.sub_channel.is_alive(): |
|
728 | 734 | self.sub_channel.stop() |
|
729 | 735 | if self.stdin_channel.is_alive(): |
|
730 | 736 | self.stdin_channel.stop() |
|
731 | 737 | if self.hb_channel.is_alive(): |
|
732 | 738 | self.hb_channel.stop() |
|
733 | 739 | |
|
734 | 740 | @property |
|
735 | 741 | def channels_running(self): |
|
736 | 742 | """Are any of the channels created and running?""" |
|
737 | 743 | return (self.shell_channel.is_alive() or self.sub_channel.is_alive() or |
|
738 | 744 | self.stdin_channel.is_alive() or self.hb_channel.is_alive()) |
|
739 | 745 | |
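    # Channel usage sketch (hedged): with a kernel already started via
    # start_kernel(), a frontend typically does
    #
    #   km.start_channels()        # spawn the shell/sub/stdin/hb threads
    #   ...                        # issue requests via km.shell_channel
    #   km.stop_channels()         # join the threads when done
    #
    # A fuller end-to-end example follows the class definition below.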
|
740 | 746 | #-------------------------------------------------------------------------- |
|
741 | 747 | # Kernel process management methods: |
|
742 | 748 | #-------------------------------------------------------------------------- |
|
743 | 749 | |
|
744 | 750 | def cleanup_connection_file(self): |
|
745 | 751 | """cleanup connection file *if we wrote it* |
|
746 | 752 | |
|
747 | 753 | Will not raise if the connection file was already removed somehow. |
|
748 | 754 | """ |
|
749 | 755 | if self._connection_file_written: |
|
750 | 756 | # cleanup connection files on full shutdown of kernel we started |
|
751 | 757 | self._connection_file_written = False |
|
752 | 758 | try: |
|
753 | 759 | os.remove(self.connection_file) |
|
754 | 760 | except (IOError, OSError): |
|
755 | 761 | pass |
|
756 | 762 | |
|
757 | 763 | self._cleanup_ipc_files() |
|
758 | 764 | |
|
759 | 765 | def _cleanup_ipc_files(self): |
|
760 | 766 | """cleanup ipc files if we wrote them""" |
|
761 | 767 | if self.transport != 'ipc': |
|
762 | 768 | return |
|
763 | 769 | for port in (self.shell_port, self.iopub_port, self.stdin_port, self.hb_port): |
|
764 | 770 | ipcfile = "%s-%i" % (self.ip, port) |
|
765 | 771 | try: |
|
766 | 772 | os.remove(ipcfile) |
|
767 | 773 | except (IOError, OSError): |
|
768 | 774 | pass |
|
769 | 775 | |
|
770 | 776 | def load_connection_file(self): |
|
771 | 777 | """load connection info from JSON dict in self.connection_file""" |
|
772 | 778 | with open(self.connection_file) as f: |
|
773 | 779 | cfg = json.loads(f.read()) |
|
774 | 780 | |
|
775 | 781 | from pprint import pprint |
|
776 | 782 | pprint(cfg) |
|
777 | 783 | self.transport = cfg.get('transport', 'tcp') |
|
778 | 784 | self.ip = cfg['ip'] |
|
779 | 785 | self.shell_port = cfg['shell_port'] |
|
780 | 786 | self.stdin_port = cfg['stdin_port'] |
|
781 | 787 | self.iopub_port = cfg['iopub_port'] |
|
782 | 788 | self.hb_port = cfg['hb_port'] |
|
783 | 789 | self.session.key = str_to_bytes(cfg['key']) |
|
784 | 790 | |
|
785 | 791 | def write_connection_file(self): |
|
786 | 792 | """write connection info to JSON dict in self.connection_file""" |
|
787 | 793 | if self._connection_file_written: |
|
788 | 794 | return |
|
789 | 795 | self.connection_file,cfg = write_connection_file(self.connection_file, |
|
790 | 796 | transport=self.transport, ip=self.ip, key=self.session.key, |
|
791 | 797 | stdin_port=self.stdin_port, iopub_port=self.iopub_port, |
|
792 | 798 | shell_port=self.shell_port, hb_port=self.hb_port) |
|
793 | 799 | # write_connection_file also sets default ports: |
|
794 | 800 | self.shell_port = cfg['shell_port'] |
|
795 | 801 | self.stdin_port = cfg['stdin_port'] |
|
796 | 802 | self.iopub_port = cfg['iopub_port'] |
|
797 | 803 | self.hb_port = cfg['hb_port'] |
|
798 | 804 | |
|
799 | 805 | self._connection_file_written = True |
|
800 | 806 | |
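    # For reference, the JSON that load_connection_file() reads and
    # write_connection_file() produces looks roughly like the following
    # (all values are illustrative; 'key' is empty when signing is disabled):
    #
    #   {
    #     "transport": "tcp",
    #     "ip": "127.0.0.1",
    #     "shell_port": 53001,
    #     "iopub_port": 53002,
    #     "stdin_port": 53003,
    #     "hb_port": 53004,
    #     "key": "c0f31de4-68a4-4cc4-8da8-6f04de5b4e1f"
    #   }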
|
801 | 807 | def start_kernel(self, **kw): |
|
802 | 808 | """Starts a kernel process and configures the manager to use it. |
|
803 | 809 | |
|
804 | 810 | If random ports (port=0) are being used, this method must be called |
|
805 | 811 | before the channels are created. |
|
806 | 812 | |
|
807 | 813 | Parameters
|
808 | 814 | ----------
|
809 | 815 | launcher : callable, optional (default None) |
|
810 | 816 | A custom function for launching the kernel process (generally a |
|
811 | 817 | wrapper around ``entry_point.base_launch_kernel``). In most cases, |
|
812 | 818 | it should not be necessary to use this parameter. |
|
813 | 819 | |
|
814 | 820 | **kw : optional |
|
815 | 821 | See respective options for IPython and Python kernels. |
|
816 | 822 | """ |
|
817 | 823 | if self.transport == 'tcp' and self.ip not in LOCAL_IPS: |
|
818 | 824 | raise RuntimeError("Can only launch a kernel on a local interface. " |
|
819 | 825 | "Make sure that the '*_address' attributes are " |
|
820 | 826 | "configured properly. " |
|
821 | 827 | "Currently valid addresses are: %s"%LOCAL_IPS |
|
822 | 828 | ) |
|
823 | 829 | |
|
824 | 830 | # write connection file / get default ports |
|
825 | 831 | self.write_connection_file() |
|
826 | 832 | |
|
827 | 833 | self._launch_args = kw.copy() |
|
828 | 834 | launch_kernel = kw.pop('launcher', None) |
|
829 | 835 | if launch_kernel is None: |
|
830 | 836 | from ipkernel import launch_kernel |
|
831 | 837 | self.kernel = launch_kernel(fname=self.connection_file, **kw) |
|
832 | 838 | |
|
833 | 839 | def shutdown_kernel(self, restart=False): |
|
834 | 840 | """ Attempts to stop the kernel process cleanly.
|
835 | 841 | |
|
836 | 842 | If the kernel cannot be stopped and the kernel is local, it is killed. |
|
837 | 843 | """ |
|
838 | 844 | # FIXME: Shutdown does not work on Windows due to ZMQ errors! |
|
839 | 845 | if sys.platform == 'win32': |
|
840 | 846 | self.kill_kernel() |
|
841 | 847 | return |
|
842 | 848 | |
|
843 | 849 | # Pause the heart beat channel if it exists. |
|
844 | 850 | if self._hb_channel is not None: |
|
845 | 851 | self._hb_channel.pause() |
|
846 | 852 | |
|
847 | 853 | # Don't send any additional kernel kill messages immediately, to give |
|
848 | 854 | # the kernel a chance to properly execute shutdown actions. Wait for at |
|
849 | 855 | # most 1s, checking every 0.1s. |
|
850 | 856 | self.shell_channel.shutdown(restart=restart) |
|
851 | 857 | for i in range(10): |
|
852 | 858 | if self.is_alive: |
|
853 | 859 | time.sleep(0.1) |
|
854 | 860 | else: |
|
855 | 861 | break |
|
856 | 862 | else: |
|
857 | 863 | # OK, we've waited long enough. |
|
858 | 864 | if self.has_kernel: |
|
859 | 865 | self.kill_kernel() |
|
860 | 866 | |
|
861 | 867 | if not restart and self._connection_file_written: |
|
862 | 868 | # cleanup connection files on full shutdown of kernel we started |
|
863 | 869 | self._connection_file_written = False |
|
864 | 870 | try: |
|
865 | 871 | os.remove(self.connection_file) |
|
866 | 872 | except IOError: |
|
867 | 873 | pass |
|
868 | 874 | |
|
869 | 875 | def restart_kernel(self, now=False, **kw): |
|
870 | 876 | """Restarts a kernel with the arguments that were used to launch it. |
|
871 | 877 | |
|
872 | 878 | If the old kernel was launched with random ports, the same ports will be |
|
873 | 879 | used for the new kernel. |
|
874 | 880 | |
|
875 | 881 | Parameters |
|
876 | 882 | ---------- |
|
877 | 883 | now : bool, optional |
|
878 | 884 | If True, the kernel is forcefully restarted *immediately*, without |
|
879 | 885 | having a chance to do any cleanup action. Otherwise the kernel is |
|
880 | 886 | given 1s to clean up before a forceful restart is issued. |
|
881 | 887 | |
|
882 | 888 | In all cases the kernel is restarted, the only difference is whether |
|
883 | 889 | it is given a chance to perform a clean shutdown or not. |
|
884 | 890 | |
|
885 | 891 | **kw : optional |
|
886 | 892 | Any options specified here will replace those used to launch the |
|
887 | 893 | kernel. |
|
888 | 894 | """ |
|
889 | 895 | if self._launch_args is None: |
|
890 | 896 | raise RuntimeError("Cannot restart the kernel. " |
|
891 | 897 | "No previous call to 'start_kernel'.") |
|
892 | 898 | else: |
|
893 | 899 | # Stop currently running kernel. |
|
894 | 900 | if self.has_kernel: |
|
895 | 901 | if now: |
|
896 | 902 | self.kill_kernel() |
|
897 | 903 | else: |
|
898 | 904 | self.shutdown_kernel(restart=True) |
|
899 | 905 | |
|
900 | 906 | # Start new kernel. |
|
901 | 907 | self._launch_args.update(kw) |
|
902 | 908 | self.start_kernel(**self._launch_args) |
|
903 | 909 | |
|
904 | 910 | # FIXME: Messages get dropped in Windows due to probable ZMQ bug |
|
905 | 911 | # unless there is some delay here. |
|
906 | 912 | if sys.platform == 'win32': |
|
907 | 913 | time.sleep(0.2) |
|
908 | 914 | |
|
909 | 915 | @property |
|
910 | 916 | def has_kernel(self): |
|
911 | 917 | """Returns whether a kernel process has been specified for the kernel |
|
912 | 918 | manager. |
|
913 | 919 | """ |
|
914 | 920 | return self.kernel is not None |
|
915 | 921 | |
|
916 | 922 | def kill_kernel(self): |
|
917 | 923 | """ Kill the running kernel. |
|
918 | 924 | |
|
919 | 925 | This method blocks until the kernel process has terminated. |
|
920 | 926 | """ |
|
921 | 927 | if self.has_kernel: |
|
922 | 928 | # Pause the heart beat channel if it exists. |
|
923 | 929 | if self._hb_channel is not None: |
|
924 | 930 | self._hb_channel.pause() |
|
925 | 931 | |
|
926 | 932 | # Signal the kernel to terminate (sends SIGKILL on Unix and calls |
|
927 | 933 | # TerminateProcess() on Win32). |
|
928 | 934 | try: |
|
929 | 935 | self.kernel.kill() |
|
930 | 936 | except OSError as e: |
|
931 | 937 | # In Windows, we will get an Access Denied error if the process |
|
932 | 938 | # has already terminated. Ignore it. |
|
933 | 939 | if sys.platform == 'win32': |
|
934 | 940 | if e.winerror != 5: |
|
935 | 941 | raise |
|
936 | 942 | # On Unix, we may get an ESRCH error if the process has already |
|
937 | 943 | # terminated. Ignore it. |
|
938 | 944 | else: |
|
939 | 945 | from errno import ESRCH |
|
940 | 946 | if e.errno != ESRCH: |
|
941 | 947 | raise |
|
942 | 948 | |
|
943 | 949 | # Block until the kernel terminates. |
|
944 | 950 | self.kernel.wait() |
|
945 | 951 | self.kernel = None |
|
946 | 952 | else: |
|
947 | 953 | raise RuntimeError("Cannot kill kernel. No kernel is running!") |
|
948 | 954 | |
|
949 | 955 | def interrupt_kernel(self): |
|
950 | 956 | """ Interrupts the kernel. |
|
951 | 957 | |
|
952 | 958 | Unlike ``signal_kernel``, this operation is well supported on all |
|
953 | 959 | platforms. |
|
954 | 960 | """ |
|
955 | 961 | if self.has_kernel: |
|
956 | 962 | if sys.platform == 'win32': |
|
957 | 963 | from parentpoller import ParentPollerWindows as Poller |
|
958 | 964 | Poller.send_interrupt(self.kernel.win32_interrupt_event) |
|
959 | 965 | else: |
|
960 | 966 | self.kernel.send_signal(signal.SIGINT) |
|
961 | 967 | else: |
|
962 | 968 | raise RuntimeError("Cannot interrupt kernel. No kernel is running!") |
|
963 | 969 | |
|
964 | 970 | def signal_kernel(self, signum): |
|
965 | 971 | """ Sends a signal to the kernel. |
|
966 | 972 | |
|
967 | 973 | Note that since only SIGTERM is supported on Windows, this function is |
|
968 | 974 | only useful on Unix systems. |
|
969 | 975 | """ |
|
970 | 976 | if self.has_kernel: |
|
971 | 977 | self.kernel.send_signal(signum) |
|
972 | 978 | else: |
|
973 | 979 | raise RuntimeError("Cannot signal kernel. No kernel is running!") |
|
974 | 980 | |
|
975 | 981 | @property |
|
976 | 982 | def is_alive(self): |
|
977 | 983 | """Is the kernel process still running?""" |
|
978 | 984 | if self.has_kernel: |
|
979 | 985 | if self.kernel.poll() is None: |
|
980 | 986 | return True |
|
981 | 987 | else: |
|
982 | 988 | return False |
|
983 | 989 | elif self._hb_channel is not None: |
|
984 | 990 | # We didn't start the kernel with this KernelManager so we |
|
985 | 991 | # use the heartbeat. |
|
986 | 992 | return self._hb_channel.is_beating() |
|
987 | 993 | else: |
|
988 | 994 | # no heartbeat and not local, we can't tell if it's running, |
|
989 | 995 | # so naively return True |
|
990 | 996 | return True |
|
991 | 997 | |
|
992 | 998 | #-------------------------------------------------------------------------- |
|
993 | 999 | # Channels used for communication with the kernel: |
|
994 | 1000 | #-------------------------------------------------------------------------- |
|
995 | 1001 | |
|
996 | 1002 | def _make_url(self, port): |
|
997 | 1003 | """make a zmq url with a port""" |
|
998 | 1004 | if self.transport == 'tcp': |
|
999 | 1005 | return "tcp://%s:%i" % (self.ip, port) |
|
1000 | 1006 | else: |
|
1001 | 1007 | return "%s://%s-%s" % (self.transport, self.ip, port) |
|
1002 | 1008 | |
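    # For example (values illustrative): with transport 'tcp' and ip
    # '127.0.0.1', _make_url(53001) returns 'tcp://127.0.0.1:53001'; with
    # transport 'ipc' and ip '/tmp/kernel', it returns 'ipc:///tmp/kernel-53001'.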
|
1003 | 1009 | @property |
|
1004 | 1010 | def shell_channel(self): |
|
1005 | 1011 | """Get the REQ socket channel object to make requests of the kernel.""" |
|
1006 | 1012 | if self._shell_channel is None: |
|
1007 | 1013 | self._shell_channel = self.shell_channel_class(self.context, |
|
1008 | 1014 | self.session, |
|
1009 | 1015 | self._make_url(self.shell_port), |
|
1010 | 1016 | ) |
|
1011 | 1017 | return self._shell_channel |
|
1012 | 1018 | |
|
1013 | 1019 | @property |
|
1014 | 1020 | def sub_channel(self): |
|
1015 | 1021 | """Get the SUB socket channel object.""" |
|
1016 | 1022 | if self._sub_channel is None: |
|
1017 | 1023 | self._sub_channel = self.sub_channel_class(self.context, |
|
1018 | 1024 | self.session, |
|
1019 | 1025 | self._make_url(self.iopub_port), |
|
1020 | 1026 | ) |
|
1021 | 1027 | return self._sub_channel |
|
1022 | 1028 | |
|
1023 | 1029 | @property |
|
1024 | 1030 | def stdin_channel(self): |
|
1025 | 1031 | """Get the REP socket channel object to handle stdin (raw_input).""" |
|
1026 | 1032 | if self._stdin_channel is None: |
|
1027 | 1033 | self._stdin_channel = self.stdin_channel_class(self.context, |
|
1028 | 1034 | self.session, |
|
1029 | 1035 | self._make_url(self.stdin_port), |
|
1030 | 1036 | ) |
|
1031 | 1037 | return self._stdin_channel |
|
1032 | 1038 | |
|
1033 | 1039 | @property |
|
1034 | 1040 | def hb_channel(self): |
|
1035 | 1041 | """Get the heartbeat socket channel object to check that the |
|
1036 | 1042 | kernel is alive.""" |
|
1037 | 1043 | if self._hb_channel is None: |
|
1038 | 1044 | self._hb_channel = self.hb_channel_class(self.context, |
|
1039 | 1045 | self.session, |
|
1040 | 1046 | self._make_url(self.hb_port), |
|
1041 | 1047 | ) |
|
1042 | 1048 | return self._hb_channel |
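Putting the manager and channel pieces together, a hedged end-to-end sketch. It assumes this module is importable as IPython.zmq.kernelmanager, that an IPython kernel can be launched locally, and that random tcp ports are acceptable; the channel subclasses are illustrative.

    import time
    from IPython.zmq.kernelmanager import (KernelManager, ShellSocketChannel,
                                           SubSocketChannel)

    class QuietShell(ShellSocketChannel):
        def call_handlers(self, msg):
            print('shell: %s' % msg['msg_type'])    # e.g. execute_reply

    class QuietSub(SubSocketChannel):
        def call_handlers(self, msg):
            print('iopub: %s' % msg['msg_type'])    # e.g. pyin, pyout, status

    km = KernelManager(shell_channel_class=QuietShell,
                       sub_channel_class=QuietSub)
    km.start_kernel()            # writes the connection file, launches the kernel
    km.start_channels()          # shell, sub, stdin and hb threads
    km.shell_channel.execute('1 + 1')
    time.sleep(1)                # crude; a real frontend reacts in call_handlers
    km.shutdown_kernel()         # shutdown_request first, kill only if needed
    km.stop_channels()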
@@ -1,769 +1,766 b'' | |||
|
1 | 1 | """Session object for building, serializing, sending, and receiving messages in |
|
2 | 2 | IPython. The Session object supports serialization, HMAC signatures, and |
|
3 | 3 | metadata on messages. |
|
4 | 4 | |
|
5 | 5 | Also defined here are utilities for working with Sessions: |
|
6 | 6 | * A SessionFactory to be used as a base class for configurables that work with |
|
7 | 7 | Sessions. |
|
8 | 8 | * A Message object for convenience that allows attribute-access to the msg dict. |
|
9 | 9 | |
|
10 | 10 | Authors: |
|
11 | 11 | |
|
12 | 12 | * Min RK |
|
13 | 13 | * Brian Granger |
|
14 | 14 | * Fernando Perez |
|
15 | 15 | """ |
|
16 | 16 | #----------------------------------------------------------------------------- |
|
17 | 17 | # Copyright (C) 2010-2011 The IPython Development Team |
|
18 | 18 | # |
|
19 | 19 | # Distributed under the terms of the BSD License. The full license is in |
|
20 | 20 | # the file COPYING, distributed as part of this software. |
|
21 | 21 | #----------------------------------------------------------------------------- |
|
22 | 22 | |
|
23 | 23 | #----------------------------------------------------------------------------- |
|
24 | 24 | # Imports |
|
25 | 25 | #----------------------------------------------------------------------------- |
|
26 | 26 | |
|
27 | 27 | import hmac |
|
28 | 28 | import logging |
|
29 | 29 | import os |
|
30 | 30 | import pprint |
|
31 | 31 | import uuid |
|
32 | 32 | from datetime import datetime |
|
33 | 33 | |
|
34 | 34 | try: |
|
35 | 35 | import cPickle |
|
36 | 36 | pickle = cPickle |
|
37 | 37 | except: |
|
38 | 38 | cPickle = None |
|
39 | 39 | import pickle |
|
40 | 40 | |
|
41 | 41 | import zmq |
|
42 | 42 | from zmq.utils import jsonapi |
|
43 | 43 | from zmq.eventloop.ioloop import IOLoop |
|
44 | 44 | from zmq.eventloop.zmqstream import ZMQStream |
|
45 | 45 | |
|
46 | import IPython | |
|
47 | 46 | from IPython.config.application import Application, boolean_flag |
|
48 | 47 | from IPython.config.configurable import Configurable, LoggingConfigurable |
|
49 | 48 | from IPython.utils.importstring import import_item |
|
50 | 49 | from IPython.utils.jsonutil import extract_dates, squash_dates, date_default |
|
51 | 50 | from IPython.utils.py3compat import str_to_bytes |
|
52 | 51 | from IPython.utils.traitlets import (CBytes, Unicode, Bool, Any, Instance, Set, |
|
53 | 52 | DottedObjectName, CUnicode, Dict, Integer) |
|
54 | 53 | from IPython.zmq.serialize import MAX_ITEMS, MAX_BYTES |
|
55 | 54 | |
|
56 | 55 | #----------------------------------------------------------------------------- |
|
57 | 56 | # utility functions |
|
58 | 57 | #----------------------------------------------------------------------------- |
|
59 | 58 | |
|
60 | 59 | def squash_unicode(obj): |
|
61 | 60 | """coerce unicode back to bytestrings.""" |
|
62 | 61 | if isinstance(obj,dict): |
|
63 | 62 | for key in obj.keys(): |
|
64 | 63 | obj[key] = squash_unicode(obj[key]) |
|
65 | 64 | if isinstance(key, unicode): |
|
66 | 65 | obj[squash_unicode(key)] = obj.pop(key) |
|
67 | 66 | elif isinstance(obj, list): |
|
68 | 67 | for i,v in enumerate(obj): |
|
69 | 68 | obj[i] = squash_unicode(v) |
|
70 | 69 | elif isinstance(obj, unicode): |
|
71 | 70 | obj = obj.encode('utf8') |
|
72 | 71 | return obj |
|
73 | 72 | |
|
74 | 73 | #----------------------------------------------------------------------------- |
|
75 | 74 | # globals and defaults |
|
76 | 75 | #----------------------------------------------------------------------------- |
|
77 | 76 | |
|
78 | _version_info_list = list(IPython.version_info) | |
|
79 | 77 | # ISO8601-ify datetime objects |
|
80 | 78 | json_packer = lambda obj: jsonapi.dumps(obj, default=date_default) |
|
81 | 79 | json_unpacker = lambda s: extract_dates(jsonapi.loads(s)) |
|
82 | 80 | |
|
83 | 81 | pickle_packer = lambda o: pickle.dumps(o,-1) |
|
84 | 82 | pickle_unpacker = pickle.loads |
|
85 | 83 | |
|
86 | 84 | default_packer = json_packer |
|
87 | 85 | default_unpacker = json_unpacker |
|
88 | 86 | |
|
89 | 87 | DELIM = b"<IDS|MSG>" |
|
90 | 88 | # singleton dummy tracker, which will always report as done |
|
91 | 89 | DONE = zmq.MessageTracker() |
|
92 | 90 | |
|
93 | 91 | #----------------------------------------------------------------------------- |
|
94 | 92 | # Mixin tools for apps that use Sessions |
|
95 | 93 | #----------------------------------------------------------------------------- |
|
96 | 94 | |
|
97 | 95 | session_aliases = dict( |
|
98 | 96 | ident = 'Session.session', |
|
99 | 97 | user = 'Session.username', |
|
100 | 98 | keyfile = 'Session.keyfile', |
|
101 | 99 | ) |
|
102 | 100 | |
|
103 | 101 | session_flags = { |
|
104 | 102 | 'secure' : ({'Session' : { 'key' : str_to_bytes(str(uuid.uuid4())), |
|
105 | 103 | 'keyfile' : '' }}, |
|
106 | 104 | """Use HMAC digests for authentication of messages. |
|
107 | 105 | Setting this flag will generate a new UUID to use as the HMAC key. |
|
108 | 106 | """), |
|
109 | 107 | 'no-secure' : ({'Session' : { 'key' : b'', 'keyfile' : '' }}, |
|
110 | 108 | """Don't authenticate messages."""), |
|
111 | 109 | } |
|
112 | 110 | |
|
113 | 111 | def default_secure(cfg): |
|
114 | 112 | """Set the default behavior for a config environment to be secure. |
|
115 | 113 | |
|
116 | 114 | If Session.key/keyfile have not been set, set Session.key to |
|
117 | 115 | a new random UUID. |
|
118 | 116 | """ |
|
119 | 117 | |
|
120 | 118 | if 'Session' in cfg: |
|
121 | 119 | if 'key' in cfg.Session or 'keyfile' in cfg.Session: |
|
122 | 120 | return |
|
123 | 121 | # key/keyfile not specified, generate new UUID: |
|
124 | 122 | cfg.Session.key = str_to_bytes(str(uuid.uuid4())) |
|
125 | 123 | |
|
126 | 124 | |
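A short sketch of default_secure in use; it assumes the standard Config class from IPython.config.loader, and the generated key is random.

    from IPython.config.loader import Config

    cfg = Config()
    default_secure(cfg)
    print(cfg.Session.key)       # a fresh UUID, as bytes, used as the HMAC key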
|
127 | 125 | #----------------------------------------------------------------------------- |
|
128 | 126 | # Classes |
|
129 | 127 | #----------------------------------------------------------------------------- |
|
130 | 128 | |
|
131 | 129 | class SessionFactory(LoggingConfigurable): |
|
132 | 130 | """The Base class for configurables that have a Session, Context, logger, |
|
133 | 131 | and IOLoop. |
|
134 | 132 | """ |
|
135 | 133 | |
|
136 | 134 | logname = Unicode('') |
|
137 | 135 | def _logname_changed(self, name, old, new): |
|
138 | 136 | self.log = logging.getLogger(new) |
|
139 | 137 | |
|
140 | 138 | # not configurable: |
|
141 | 139 | context = Instance('zmq.Context') |
|
142 | 140 | def _context_default(self): |
|
143 | 141 | return zmq.Context.instance() |
|
144 | 142 | |
|
145 | 143 | session = Instance('IPython.zmq.session.Session') |
|
146 | 144 | |
|
147 | 145 | loop = Instance('zmq.eventloop.ioloop.IOLoop', allow_none=False) |
|
148 | 146 | def _loop_default(self): |
|
149 | 147 | return IOLoop.instance() |
|
150 | 148 | |
|
151 | 149 | def __init__(self, **kwargs): |
|
152 | 150 | super(SessionFactory, self).__init__(**kwargs) |
|
153 | 151 | |
|
154 | 152 | if self.session is None: |
|
155 | 153 | # construct the session |
|
156 | 154 | self.session = Session(**kwargs) |
|
157 | 155 | |
|
158 | 156 | |
|
159 | 157 | class Message(object): |
|
160 | 158 | """A simple message object that maps dict keys to attributes. |
|
161 | 159 | |
|
162 | 160 | A Message can be created from a dict and a dict from a Message instance |
|
163 | 161 | simply by calling dict(msg_obj).""" |
|
164 | 162 | |
|
165 | 163 | def __init__(self, msg_dict): |
|
166 | 164 | dct = self.__dict__ |
|
167 | 165 | for k, v in dict(msg_dict).iteritems(): |
|
168 | 166 | if isinstance(v, dict): |
|
169 | 167 | v = Message(v) |
|
170 | 168 | dct[k] = v |
|
171 | 169 | |
|
172 | 170 | # Having this iterator lets dict(msg_obj) work out of the box. |
|
173 | 171 | def __iter__(self): |
|
174 | 172 | return iter(self.__dict__.iteritems()) |
|
175 | 173 | |
|
176 | 174 | def __repr__(self): |
|
177 | 175 | return repr(self.__dict__) |
|
178 | 176 | |
|
179 | 177 | def __str__(self): |
|
180 | 178 | return pprint.pformat(self.__dict__) |
|
181 | 179 | |
|
182 | 180 | def __contains__(self, k): |
|
183 | 181 | return k in self.__dict__ |
|
184 | 182 | |
|
185 | 183 | def __getitem__(self, k): |
|
186 | 184 | return self.__dict__[k] |
|
187 | 185 | |
|
188 | 186 | |
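A quick illustration of the Message wrapper (values illustrative): nested dicts become attribute-accessible, and dict() turns it back into a mapping of the top-level keys.

    m = Message({'header': {'msg_id': 'abc'}, 'content': {}})
    print(m.header.msg_id)       # abc
    print('content' in m)        # True
    print(dict(m)['content'])    # {}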
|
189 | 187 | def msg_header(msg_id, msg_type, username, session): |
|
190 | 188 | date = datetime.now() |
|
191 | version = _version_info_list | |
|
192 | 189 | return locals() |
|
193 | 190 | |
|
194 | 191 | def extract_header(msg_or_header): |
|
195 | 192 | """Given a message or header, return the header.""" |
|
196 | 193 | if not msg_or_header: |
|
197 | 194 | return {} |
|
198 | 195 | try: |
|
199 | 196 | # See if msg_or_header is the entire message. |
|
200 | 197 | h = msg_or_header['header'] |
|
201 | 198 | except KeyError: |
|
202 | 199 | try: |
|
203 | 200 | # See if msg_or_header is just the header |
|
204 | 201 | h = msg_or_header['msg_id'] |
|
205 | 202 | except KeyError: |
|
206 | 203 | raise |
|
207 | 204 | else: |
|
208 | 205 | h = msg_or_header |
|
209 | 206 | if not isinstance(h, dict): |
|
210 | 207 | h = dict(h) |
|
211 | 208 | return h |
|
212 | 209 | |
|
213 | 210 | class Session(Configurable): |
|
214 | 211 | """Object for handling serialization and sending of messages. |
|
215 | 212 | |
|
216 | 213 | The Session object handles building messages and sending them |
|
217 | 214 | with ZMQ sockets or ZMQStream objects. Objects can communicate with each |
|
218 | 215 | other over the network via Session objects, and only need to work with the |
|
219 | 216 | dict-based IPython message spec. The Session will handle |
|
220 | 217 | serialization/deserialization, security, and metadata. |
|
221 | 218 | |
|
222 | 219 | Sessions support configurable serialization via packer/unpacker traits,
|
223 | 220 | and signing with HMAC digests via the key/keyfile traits. |
|
224 | 221 | |
|
225 | 222 | Parameters |
|
226 | 223 | ---------- |
|
227 | 224 | |
|
228 | 225 | debug : bool |
|
229 | 226 | whether to trigger extra debugging statements |
|
230 | 227 | packer/unpacker : str : 'json', 'pickle' or import_string |
|
231 | 228 | importstrings for methods to serialize message parts. If just |
|
232 | 229 | 'json' or 'pickle', predefined JSON and pickle packers will be used. |
|
233 | 230 | Otherwise, the entire importstring must be used. |
|
234 | 231 | |
|
235 | 232 | The functions must accept at least valid JSON input, and output *bytes*. |
|
236 | 233 | |
|
237 | 234 | For example, to use msgpack: |
|
238 | 235 | packer = 'msgpack.packb', unpacker='msgpack.unpackb' |
|
239 | 236 | pack/unpack : callables |
|
240 | 237 | You can also set the pack/unpack callables for serialization directly. |
|
241 | 238 | session : bytes |
|
242 | 239 | the ID of this Session object. The default is to generate a new UUID. |
|
243 | 240 | username : unicode |
|
244 | 241 | username added to message headers. The default is to ask the OS. |
|
245 | 242 | key : bytes |
|
246 | 243 | The key used to initialize an HMAC signature. If unset, messages |
|
247 | 244 | will not be signed or checked. |
|
248 | 245 | keyfile : filepath |
|
249 | 246 | The file containing a key. If this is set, `key` will be initialized |
|
250 | 247 | to the contents of the file. |
|
251 | 248 | |
|
252 | 249 | """ |
|
253 | 250 | |
|
254 | 251 | debug=Bool(False, config=True, help="""Debug output in the Session""") |
|
255 | 252 | |
|
256 | 253 | packer = DottedObjectName('json',config=True, |
|
257 | 254 | help="""The name of the packer for serializing messages. |
|
258 | 255 | Should be one of 'json', 'pickle', or an import name |
|
259 | 256 | for a custom callable serializer.""") |
|
260 | 257 | def _packer_changed(self, name, old, new): |
|
261 | 258 | if new.lower() == 'json': |
|
262 | 259 | self.pack = json_packer |
|
263 | 260 | self.unpack = json_unpacker |
|
264 | 261 | self.unpacker = new |
|
265 | 262 | elif new.lower() == 'pickle': |
|
266 | 263 | self.pack = pickle_packer |
|
267 | 264 | self.unpack = pickle_unpacker |
|
268 | 265 | self.unpacker = new |
|
269 | 266 | else: |
|
270 | 267 | self.pack = import_item(str(new)) |
|
271 | 268 | |
|
272 | 269 | unpacker = DottedObjectName('json', config=True, |
|
273 | 270 | help="""The name of the unpacker for unserializing messages. |
|
274 | 271 | Only used with custom functions for `packer`.""") |
|
275 | 272 | def _unpacker_changed(self, name, old, new): |
|
276 | 273 | if new.lower() == 'json': |
|
277 | 274 | self.pack = json_packer |
|
278 | 275 | self.unpack = json_unpacker |
|
279 | 276 | self.packer = new |
|
280 | 277 | elif new.lower() == 'pickle': |
|
281 | 278 | self.pack = pickle_packer |
|
282 | 279 | self.unpack = pickle_unpacker |
|
283 | 280 | self.packer = new |
|
284 | 281 | else: |
|
285 | 282 | self.unpack = import_item(str(new)) |
|
286 | 283 | |
|
287 | 284 | session = CUnicode(u'', config=True, |
|
288 | 285 | help="""The UUID identifying this session.""") |
|
289 | 286 | def _session_default(self): |
|
290 | 287 | u = unicode(uuid.uuid4()) |
|
291 | 288 | self.bsession = u.encode('ascii') |
|
292 | 289 | return u |
|
293 | 290 | |
|
294 | 291 | def _session_changed(self, name, old, new): |
|
295 | 292 | self.bsession = self.session.encode('ascii') |
|
296 | 293 | |
|
297 | 294 | # bsession is the session as bytes |
|
298 | 295 | bsession = CBytes(b'') |
|
299 | 296 | |
|
300 | 297 | username = Unicode(os.environ.get('USER',u'username'), config=True, |
|
301 | 298 | help="""Username for the Session. Default is your system username.""") |
|
302 | 299 | |
|
303 | 300 | metadata = Dict({}, config=True, |
|
304 | 301 | help="""Metadata dictionary, which serves as the default top-level metadata dict for each message.""") |
|
305 | 302 | |
|
306 | 303 | # message signature related traits: |
|
307 | 304 | |
|
308 | 305 | key = CBytes(b'', config=True, |
|
309 | 306 | help="""execution key, for extra authentication.""") |
|
310 | 307 | def _key_changed(self, name, old, new): |
|
311 | 308 | if new: |
|
312 | 309 | self.auth = hmac.HMAC(new) |
|
313 | 310 | else: |
|
314 | 311 | self.auth = None |
|
315 | 312 | auth = Instance(hmac.HMAC) |
|
316 | 313 | digest_history = Set() |
|
317 | 314 | |
|
318 | 315 | keyfile = Unicode('', config=True, |
|
319 | 316 | help="""path to file containing execution key.""") |
|
320 | 317 | def _keyfile_changed(self, name, old, new): |
|
321 | 318 | with open(new, 'rb') as f: |
|
322 | 319 | self.key = f.read().strip() |
|
323 | 320 | |
|
324 | 321 | # serialization traits: |
|
325 | 322 | |
|
326 | 323 | pack = Any(default_packer) # the actual packer function |
|
327 | 324 | def _pack_changed(self, name, old, new): |
|
328 | 325 | if not callable(new): |
|
329 | 326 | raise TypeError("packer must be callable, not %s"%type(new)) |
|
330 | 327 | |
|
331 | 328 | unpack = Any(default_unpacker) # the actual unpacker function
|
332 | 329 | def _unpack_changed(self, name, old, new): |
|
333 | 330 | # unpacker is not checked - it is assumed to be the inverse of pack
|
334 | 331 | if not callable(new): |
|
335 | 332 | raise TypeError("unpacker must be callable, not %s"%type(new)) |
|
336 | 333 | |
|
337 | 334 | # thresholds: |
|
338 | 335 | copy_threshold = Integer(2**16, config=True, |
|
339 | 336 | help="Threshold (in bytes) beyond which a buffer should be sent without copying.") |
|
340 | 337 | buffer_threshold = Integer(MAX_BYTES, config=True, |
|
341 | 338 | help="Threshold (in bytes) beyond which an object's buffer should be extracted to avoid pickling.") |
|
342 | 339 | item_threshold = Integer(MAX_ITEMS, config=True, |
|
343 | 340 | help="""The maximum number of items for a container to be introspected for custom serialization. |
|
344 | 341 | Containers larger than this are pickled outright. |
|
345 | 342 | """ |
|
346 | 343 | ) |
|
347 | 344 | |
|
348 | 345 | def __init__(self, **kwargs): |
|
349 | 346 | """create a Session object |
|
350 | 347 | |
|
351 | 348 | Parameters |
|
352 | 349 | ---------- |
|
353 | 350 | |
|
354 | 351 | debug : bool |
|
355 | 352 | whether to trigger extra debugging statements |
|
356 | 353 | packer/unpacker : str : 'json', 'pickle' or import_string |
|
357 | 354 | importstrings for methods to serialize message parts. If just |
|
358 | 355 | 'json' or 'pickle', predefined JSON and pickle packers will be used. |
|
359 | 356 | Otherwise, the entire importstring must be used. |
|
360 | 357 | |
|
361 | 358 | The functions must accept at least valid JSON input, and output |
|
362 | 359 | *bytes*. |
|
363 | 360 | |
|
364 | 361 | For example, to use msgpack: |
|
365 | 362 | packer = 'msgpack.packb', unpacker='msgpack.unpackb' |
|
366 | 363 | pack/unpack : callables |
|
367 | 364 | You can also set the pack/unpack callables for serialization |
|
368 | 365 | directly. |
|
369 | 366 | session : unicode (must be ascii) |
|
370 | 367 | the ID of this Session object. The default is to generate a new |
|
371 | 368 | UUID. |
|
372 | 369 | bsession : bytes |
|
373 | 370 | The session as bytes |
|
374 | 371 | username : unicode |
|
375 | 372 | username added to message headers. The default is to ask the OS. |
|
376 | 373 | key : bytes |
|
377 | 374 | The key used to initialize an HMAC signature. If unset, messages |
|
378 | 375 | will not be signed or checked. |
|
379 | 376 | keyfile : filepath |
|
380 | 377 | The file containing a key. If this is set, `key` will be |
|
381 | 378 | initialized to the contents of the file. |
|
382 | 379 | """ |
|
383 | 380 | super(Session, self).__init__(**kwargs) |
|
384 | 381 | self._check_packers() |
|
385 | 382 | self.none = self.pack({}) |
|
386 | 383 | # ensure self._session_default() if necessary, so bsession is defined: |
|
387 | 384 | self.session |
|
388 | 385 | |
|
389 | 386 | @property |
|
390 | 387 | def msg_id(self): |
|
391 | 388 | """always return new uuid""" |
|
392 | 389 | return str(uuid.uuid4()) |
|
393 | 390 | |
|
394 | 391 | def _check_packers(self): |
|
395 | 392 | """check packers for binary data and datetime support.""" |
|
396 | 393 | pack = self.pack |
|
397 | 394 | unpack = self.unpack |
|
398 | 395 | |
|
399 | 396 | # check simple serialization |
|
400 | 397 | msg = dict(a=[1,'hi']) |
|
401 | 398 | try: |
|
402 | 399 | packed = pack(msg) |
|
403 | 400 | except Exception: |
|
404 | 401 | raise ValueError("packer could not serialize a simple message") |
|
405 | 402 | |
|
406 | 403 | # ensure packed message is bytes |
|
407 | 404 | if not isinstance(packed, bytes): |
|
408 | 405 | raise ValueError("message packed to %r, but bytes are required"%type(packed)) |
|
409 | 406 | |
|
410 | 407 | # check that unpack is pack's inverse |
|
411 | 408 | try: |
|
412 | 409 | unpacked = unpack(packed) |
|
413 | 410 | except Exception: |
|
414 | 411 | raise ValueError("unpacker could not handle the packer's output") |
|
415 | 412 | |
|
416 | 413 | # check datetime support |
|
417 | 414 | msg = dict(t=datetime.now()) |
|
418 | 415 | try: |
|
419 | 416 | unpacked = unpack(pack(msg)) |
|
420 | 417 | except Exception: |
|
421 | 418 | self.pack = lambda o: pack(squash_dates(o)) |
|
422 | 419 | self.unpack = lambda s: extract_dates(unpack(s)) |
|
423 | 420 | |
|
424 | 421 | def msg_header(self, msg_type): |
|
425 | 422 | return msg_header(self.msg_id, msg_type, self.username, self.session) |
|
426 | 423 | |
|
427 | 424 | def msg(self, msg_type, content=None, parent=None, header=None, metadata=None): |
|
428 | 425 | """Return the nested message dict. |
|
429 | 426 | |
|
430 | 427 | This format is different from what is sent over the wire. The |
|
431 | 428 | serialize/unserialize methods convert this nested message dict to the wire |
|
432 | 429 | format, which is a list of message parts. |
|
433 | 430 | """ |
|
434 | 431 | msg = {} |
|
435 | 432 | header = self.msg_header(msg_type) if header is None else header |
|
436 | 433 | msg['header'] = header |
|
437 | 434 | msg['msg_id'] = header['msg_id'] |
|
438 | 435 | msg['msg_type'] = header['msg_type'] |
|
439 | 436 | msg['parent_header'] = {} if parent is None else extract_header(parent) |
|
440 | 437 | msg['content'] = {} if content is None else content |
|
441 | 438 | msg['metadata'] = self.metadata.copy() |
|
442 | 439 | if metadata is not None: |
|
443 | 440 | msg['metadata'].update(metadata) |
|
444 | 441 | return msg |
|
445 | 442 | |
|
446 | 443 | def sign(self, msg_list): |
|
447 | 444 | """Sign a message with HMAC digest. If no auth, return b''. |
|
448 | 445 | |
|
449 | 446 | Parameters |
|
450 | 447 | ---------- |
|
451 | 448 | msg_list : list |
|
452 | 449 | The [p_header,p_parent,p_metadata,p_content] part of the message list. |
|
453 | 450 | """ |
|
454 | 451 | if self.auth is None: |
|
455 | 452 | return b'' |
|
456 | 453 | h = self.auth.copy() |
|
457 | 454 | for m in msg_list: |
|
458 | 455 | h.update(m) |
|
459 | 456 | return str_to_bytes(h.hexdigest()) |
|
460 | 457 | |
|
461 | 458 | def serialize(self, msg, ident=None): |
|
462 | 459 | """Serialize the message components to bytes. |
|
463 | 460 | |
|
464 | 461 | This is roughly the inverse of unserialize. The serialize/unserialize |
|
465 | 462 | methods work with full message lists, whereas pack/unpack work with |
|
466 | 463 | the individual message parts in the message list. |
|
467 | 464 | |
|
468 | 465 | Parameters |
|
469 | 466 | ---------- |
|
470 | 467 | msg : dict or Message |
|
471 | 468 | The nested message dict as returned by the self.msg method. |
|
472 | 469 | |
|
473 | 470 | Returns |
|
474 | 471 | ------- |
|
475 | 472 | msg_list : list |
|
476 | 473 | The list of bytes objects to be sent with the format: |
|
477 | 474 | [ident1,ident2,...,DELIM,HMAC,p_header,p_parent,p_metadata,p_content, |
|
478 | 475 | buffer1,buffer2,...]. In this list, the p_* entities are |
|
479 | 476 | the packed or serialized versions, so if JSON is used, these |
|
480 | 477 | are utf8 encoded JSON strings. |
|
481 | 478 | """ |
|
482 | 479 | content = msg.get('content', {}) |
|
483 | 480 | if content is None: |
|
484 | 481 | content = self.none |
|
485 | 482 | elif isinstance(content, dict): |
|
486 | 483 | content = self.pack(content) |
|
487 | 484 | elif isinstance(content, bytes): |
|
488 | 485 | # content is already packed, as in a relayed message |
|
489 | 486 | pass |
|
490 | 487 | elif isinstance(content, unicode): |
|
491 | 488 | # should be bytes, but JSON often spits out unicode |
|
492 | 489 | content = content.encode('utf8') |
|
493 | 490 | else: |
|
494 | 491 | raise TypeError("Content incorrect type: %s"%type(content)) |
|
495 | 492 | |
|
496 | 493 | real_message = [self.pack(msg['header']), |
|
497 | 494 | self.pack(msg['parent_header']), |
|
498 | 495 | self.pack(msg['metadata']), |
|
499 | 496 | content, |
|
500 | 497 | ] |
|
501 | 498 | |
|
502 | 499 | to_send = [] |
|
503 | 500 | |
|
504 | 501 | if isinstance(ident, list): |
|
505 | 502 | # accept list of idents |
|
506 | 503 | to_send.extend(ident) |
|
507 | 504 | elif ident is not None: |
|
508 | 505 | to_send.append(ident) |
|
509 | 506 | to_send.append(DELIM) |
|
510 | 507 | |
|
511 | 508 | signature = self.sign(real_message) |
|
512 | 509 | to_send.append(signature) |
|
513 | 510 | |
|
514 | 511 | to_send.extend(real_message) |
|
515 | 512 | |
|
516 | 513 | return to_send |
|
517 | 514 | |
|
518 | 515 | def send(self, stream, msg_or_type, content=None, parent=None, ident=None, |
|
519 | 516 | buffers=None, track=False, header=None, metadata=None): |
|
520 | 517 | """Build and send a message via stream or socket. |
|
521 | 518 | |
|
522 | 519 | The message format used by this function internally is as follows: |
|
523 | 520 | |
|
524 | 521 | [ident1,ident2,...,DELIM,HMAC,p_header,p_parent,p_metadata,p_content, |
|
525 | 522 | buffer1,buffer2,...] |
|
526 | 523 | |
|
527 | 524 | The serialize/unserialize methods convert the nested message dict into this |
|
528 | 525 | format. |
|
529 | 526 | |
|
530 | 527 | Parameters |
|
531 | 528 | ---------- |
|
532 | 529 | |
|
533 | 530 | stream : zmq.Socket or ZMQStream |
|
534 | 531 | The socket-like object used to send the data. |
|
535 | 532 | msg_or_type : str or Message/dict |
|
536 | 533 | Normally, msg_or_type will be a msg_type unless a message is being |
|
537 | 534 | sent more than once. If a header is supplied, this can be set to |
|
538 | 535 | None and the msg_type will be pulled from the header. |
|
539 | 536 | |
|
540 | 537 | content : dict or None |
|
541 | 538 | The content of the message (ignored if msg_or_type is a message). |
|
542 | 539 | header : dict or None |
|
543 | 540 | The header dict for the message (ignored if msg_or_type is a message). |
|
544 | 541 | parent : Message or dict or None |
|
545 | 542 | The parent or parent header describing the parent of this message |
|
546 | 543 | (ignored if msg_or_type is a message). |
|
547 | 544 | ident : bytes or list of bytes |
|
548 | 545 | The zmq.IDENTITY routing path. |
|
549 | 546 | metadata : dict or None |
|
550 | 547 | The metadata describing the message |
|
551 | 548 | buffers : list or None |
|
552 | 549 | The already-serialized buffers to be appended to the message. |
|
553 | 550 | track : bool |
|
554 | 551 | Whether to track. Only for use with Sockets, because ZMQStream |
|
555 | 552 | objects cannot track messages. |
|
556 | 553 | |
|
557 | 554 | |
|
558 | 555 | Returns |
|
559 | 556 | ------- |
|
560 | 557 | msg : dict |
|
561 | 558 | The constructed message. |
|
562 | 559 | """ |
|
563 | 560 | |
|
564 | 561 | if not isinstance(stream, (zmq.Socket, ZMQStream)): |
|
565 | 562 | raise TypeError("stream must be Socket or ZMQStream, not %r"%type(stream)) |
|
566 | 563 | elif track and isinstance(stream, ZMQStream): |
|
567 | 564 | raise TypeError("ZMQStream cannot track messages") |
|
568 | 565 | |
|
569 | 566 | if isinstance(msg_or_type, (Message, dict)): |
|
570 | 567 | # We got a Message or message dict, not a msg_type so don't |
|
571 | 568 | # build a new Message. |
|
572 | 569 | msg = msg_or_type |
|
573 | 570 | else: |
|
574 | 571 | msg = self.msg(msg_or_type, content=content, parent=parent, |
|
575 | 572 | header=header, metadata=metadata) |
|
576 | 573 | |
|
577 | 574 | buffers = [] if buffers is None else buffers |
|
578 | 575 | to_send = self.serialize(msg, ident) |
|
579 | 576 | to_send.extend(buffers) |
|
580 | 577 | longest = max([ len(s) for s in to_send ]) |
|
581 | 578 | copy = (longest < self.copy_threshold) |
|
582 | 579 | |
|
583 | 580 | if buffers and track and not copy: |
|
584 | 581 | # only really track when we are doing zero-copy buffers |
|
585 | 582 | tracker = stream.send_multipart(to_send, copy=False, track=True) |
|
586 | 583 | else: |
|
587 | 584 | # use dummy tracker, which will be done immediately |
|
588 | 585 | tracker = DONE |
|
589 | 586 | stream.send_multipart(to_send, copy=copy) |
|
590 | 587 | |
|
591 | 588 | if self.debug: |
|
592 | 589 | pprint.pprint(msg) |
|
593 | 590 | pprint.pprint(to_send) |
|
594 | 591 | pprint.pprint(buffers) |
|
595 | 592 | |
|
596 | 593 | msg['tracker'] = tracker |
|
597 | 594 | |
|
598 | 595 | return msg |
|
599 | 596 | |
|
600 | 597 | def send_raw(self, stream, msg_list, flags=0, copy=True, ident=None): |
|
601 | 598 | """Send a raw message via ident path. |
|
602 | 599 | |
|
603 | 600 | This method is used to send an already serialized message. |
|
604 | 601 | |
|
605 | 602 | Parameters |
|
606 | 603 | ---------- |
|
607 | 604 | stream : ZMQStream or Socket |
|
608 | 605 | The ZMQ stream or socket to use for sending the message. |
|
609 | 606 | msg_list : list |
|
610 | 607 | The serialized list of messages to send. This only includes the |
|
611 | 608 | [p_header,p_parent,p_metadata,p_content,buffer1,buffer2,...] portion of |
|
612 | 609 | the message. |
|
613 | 610 | ident : ident or list |
|
614 | 611 | A single ident or a list of idents to use in sending. |
|
615 | 612 | """ |
|
616 | 613 | to_send = [] |
|
617 | 614 | if isinstance(ident, bytes): |
|
618 | 615 | ident = [ident] |
|
619 | 616 | if ident is not None: |
|
620 | 617 | to_send.extend(ident) |
|
621 | 618 | |
|
622 | 619 | to_send.append(DELIM) |
|
623 | 620 | to_send.append(self.sign(msg_list)) |
|
624 | 621 | to_send.extend(msg_list) |
|
625 | 622 | stream.send_multipart(to_send, flags, copy=copy) |
|
626 | 623 | |
|
627 | 624 | def recv(self, socket, mode=zmq.NOBLOCK, content=True, copy=True): |
|
628 | 625 | """Receive and unpack a message. |
|
629 | 626 | |
|
630 | 627 | Parameters |
|
631 | 628 | ---------- |
|
632 | 629 | socket : ZMQStream or Socket |
|
633 | 630 | The socket or stream to use in receiving. |
|
634 | 631 | |
|
635 | 632 | Returns |
|
636 | 633 | ------- |
|
637 | 634 | [idents], msg |
|
638 | 635 | [idents] is a list of idents and msg is a nested message dict of |
|
639 | 636 | same format as self.msg returns. |
|
640 | 637 | """ |
|
641 | 638 | if isinstance(socket, ZMQStream): |
|
642 | 639 | socket = socket.socket |
|
643 | 640 | try: |
|
644 | 641 | msg_list = socket.recv_multipart(mode, copy=copy) |
|
645 | 642 | except zmq.ZMQError as e: |
|
646 | 643 | if e.errno == zmq.EAGAIN: |
|
647 | 644 | # We can convert EAGAIN to None as we know in this case |
|
648 | 645 | # recv_multipart won't return None. |
|
649 | 646 | return None,None |
|
650 | 647 | else: |
|
651 | 648 | raise |
|
652 | 649 | # split multipart message into identity list and message dict |
|
653 | 650 | # invalid large messages can cause very expensive string comparisons |
|
654 | 651 | idents, msg_list = self.feed_identities(msg_list, copy) |
|
655 | 652 | try: |
|
656 | 653 | return idents, self.unserialize(msg_list, content=content, copy=copy) |
|
657 | 654 | except Exception as e: |
|
658 | 655 | # TODO: handle it |
|
659 | 656 | raise e |
|
660 | 657 | |
|
661 | 658 | def feed_identities(self, msg_list, copy=True): |
|
662 | 659 | """Split the identities from the rest of the message. |
|
663 | 660 | |
|
664 | 661 | Feed until DELIM is reached, then return the prefix as idents and |
|
665 | 662 | remainder as msg_list. This is easily broken by setting an IDENT to DELIM, |
|
666 | 663 | but that would be silly. |
|
667 | 664 | |
|
668 | 665 | Parameters |
|
669 | 666 | ---------- |
|
670 | 667 | msg_list : a list of Message or bytes objects |
|
671 | 668 | The message to be split. |
|
672 | 669 | copy : bool |
|
673 | 670 | flag determining whether the arguments are bytes or Messages |
|
674 | 671 | |
|
675 | 672 | Returns |
|
676 | 673 | ------- |
|
677 | 674 | (idents, msg_list) : two lists |
|
678 | 675 | idents will always be a list of bytes, each of which is a ZMQ |
|
679 | 676 | identity. msg_list will be a list of bytes or zmq.Messages of the |
|
680 | 677 | form [HMAC,p_header,p_parent,p_metadata,p_content,buffer1,buffer2,...] and |
|
681 | 678 | should be unpackable/unserializable via self.unserialize at this |
|
682 | 679 | point. |
|
683 | 680 | """ |
|
684 | 681 | if copy: |
|
685 | 682 | idx = msg_list.index(DELIM) |
|
686 | 683 | return msg_list[:idx], msg_list[idx+1:] |
|
687 | 684 | else: |
|
688 | 685 | failed = True |
|
689 | 686 | for idx,m in enumerate(msg_list): |
|
690 | 687 | if m.bytes == DELIM: |
|
691 | 688 | failed = False |
|
692 | 689 | break |
|
693 | 690 | if failed: |
|
694 | 691 | raise ValueError("DELIM not in msg_list") |
|
695 | 692 | idents, msg_list = msg_list[:idx], msg_list[idx+1:] |
|
696 | 693 | return [m.bytes for m in idents], msg_list |
|
697 | 694 | |
|
698 | 695 | def unserialize(self, msg_list, content=True, copy=True): |
|
699 | 696 | """Unserialize a msg_list to a nested message dict. |
|
700 | 697 | |
|
701 | 698 | This is roughly the inverse of serialize. The serialize/unserialize |
|
702 | 699 | methods work with full message lists, whereas pack/unpack work with |
|
703 | 700 | the individual message parts in the message list. |
|
704 | 701 | |
|
705 | 702 | Parameters |
|
706 | 703 | ---------- |
|
707 | 704 | msg_list : list of bytes or Message objects |
|
708 | 705 | The list of message parts of the form [HMAC,p_header,p_parent, |
|
709 | 706 | p_metadata,p_content,buffer1,buffer2,...]. |
|
710 | 707 | content : bool (True) |
|
711 | 708 | Whether to unpack the content dict (True), or leave it packed |
|
712 | 709 | (False). |
|
713 | 710 | copy : bool (True) |
|
714 | 711 | Whether to return the bytes (True), or the non-copying Message |
|
715 | 712 | object in each place (False). |
|
716 | 713 | |
|
717 | 714 | Returns |
|
718 | 715 | ------- |
|
719 | 716 | msg : dict |
|
720 | 717 | The nested message dict with top-level keys [header, parent_header, |
|
721 | 718 | metadata, content, buffers]. |
|
722 | 719 | """ |
|
723 | 720 | minlen = 5 |
|
724 | 721 | message = {} |
|
725 | 722 | if not copy: |
|
726 | 723 | for i in range(minlen): |
|
727 | 724 | msg_list[i] = msg_list[i].bytes |
|
728 | 725 | if self.auth is not None: |
|
729 | 726 | signature = msg_list[0] |
|
730 | 727 | if not signature: |
|
731 | 728 | raise ValueError("Unsigned Message") |
|
732 | 729 | if signature in self.digest_history: |
|
733 | 730 | raise ValueError("Duplicate Signature: %r"%signature) |
|
734 | 731 | self.digest_history.add(signature) |
|
735 | 732 | check = self.sign(msg_list[1:5]) |
|
736 | 733 | if not signature == check: |
|
737 | 734 | raise ValueError("Invalid Signature: %r" % signature) |
|
738 | 735 | if not len(msg_list) >= minlen: |
|
739 | 736 | raise TypeError("malformed message, must have at least %i elements"%minlen) |
|
740 | 737 | header = self.unpack(msg_list[1]) |
|
741 | 738 | message['header'] = header |
|
742 | 739 | message['msg_id'] = header['msg_id'] |
|
743 | 740 | message['msg_type'] = header['msg_type'] |
|
744 | 741 | message['parent_header'] = self.unpack(msg_list[2]) |
|
745 | 742 | message['metadata'] = self.unpack(msg_list[3]) |
|
746 | 743 | if content: |
|
747 | 744 | message['content'] = self.unpack(msg_list[4]) |
|
748 | 745 | else: |
|
749 | 746 | message['content'] = msg_list[4] |
|
750 | 747 | |
|
751 | 748 | message['buffers'] = msg_list[5:] |
|
752 | 749 | return message |
|
753 | 750 | |
|
754 | 751 | def test_msg2obj(): |
|
755 | 752 | am = dict(x=1) |
|
756 | 753 | ao = Message(am) |
|
757 | 754 | assert ao.x == am['x'] |
|
758 | 755 | |
|
759 | 756 | am['y'] = dict(z=1) |
|
760 | 757 | ao = Message(am) |
|
761 | 758 | assert ao.y.z == am['y']['z'] |
|
762 | 759 | |
|
763 | 760 | k1, k2 = 'y', 'z' |
|
764 | 761 | assert ao[k1][k2] == am[k1][k2] |
|
765 | 762 | |
|
766 | 763 | am2 = dict(ao) |
|
767 | 764 | assert am['x'] == am2['x'] |
|
768 | 765 | assert am['y']['z'] == am2['y']['z'] |
|
769 | 766 |
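To make the machinery above concrete, here is a minimal, self-contained round trip using the Session class defined in this file. Everything in it is illustrative: the inproc endpoint, the key, the message content, and the import path (which assumes this class lives in IPython.zmq.session) are made-up values, not something defined by the module itself:

    import zmq
    from IPython.zmq.session import Session   # assumed module path

    ctx = zmq.Context.instance()
    a = ctx.socket(zmq.PAIR)
    b = ctx.socket(zmq.PAIR)
    a.bind('inproc://session-demo')            # made-up, in-process endpoint
    b.connect('inproc://session-demo')

    # A non-empty key enables HMAC signing; an empty key disables signing/checking.
    session = Session(key=b'demo-secret', username=u'demo')

    # send() builds the nested message dict, serializes it to the wire format
    # [DELIM, HMAC, p_header, p_parent, p_metadata, p_content] and sends it.
    session.send(a, 'execute_request', content={'code': 'x=1', 'silent': False})

    # recv() splits off any idents, verifies the signature, and unserializes.
    idents, msg = session.recv(b, mode=0)      # mode=0 means a blocking receive
    print msg['msg_type'], msg['content']      # execute_request {u'code': u'x=1', ...}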
@@ -1,452 +1,497 b'' | |||
|
1 | 1 | """Test suite for our zeromq-based messaging specification. |
|
2 | 2 | """ |
|
3 | 3 | #----------------------------------------------------------------------------- |
|
4 | 4 | # Copyright (C) 2010-2011 The IPython Development Team |
|
5 | 5 | # |
|
6 | 6 | # Distributed under the terms of the BSD License. The full license is in |
|
7 | 7 | # the file COPYING.txt, distributed as part of this software. |
|
8 | 8 | #----------------------------------------------------------------------------- |
|
9 | 9 | |
|
10 | 10 | import re |
|
11 | 11 | import sys |
|
12 | 12 | import time |
|
13 | 13 | from subprocess import PIPE |
|
14 | 14 | from Queue import Empty |
|
15 | 15 | |
|
16 | 16 | import nose.tools as nt |
|
17 | 17 | |
|
18 | 18 | from ..blockingkernelmanager import BlockingKernelManager |
|
19 | 19 | |
|
20 | 20 | |
|
21 | 21 | from IPython.testing import decorators as dec |
|
22 | 22 | from IPython.utils import io |
|
23 | 23 | from IPython.utils.traitlets import ( |
|
24 | HasTraits, TraitError, Bool, Unicode, Dict, Integer, List, Enum, | |
|
24 | HasTraits, TraitError, Bool, Unicode, Dict, Integer, List, Enum, Any, | |
|
25 | 25 | ) |
|
26 | 26 | |
|
27 | 27 | #----------------------------------------------------------------------------- |
|
28 | 28 | # Global setup and utilities |
|
29 | 29 | #----------------------------------------------------------------------------- |
|
30 | 30 | |
|
31 | 31 | def setup(): |
|
32 | 32 | global KM |
|
33 | 33 | KM = BlockingKernelManager() |
|
34 | 34 | |
|
35 | 35 | KM.start_kernel(stdout=PIPE, stderr=PIPE) |
|
36 | 36 | KM.start_channels() |
|
37 | 37 | |
|
38 | 38 | # wait for kernel to be ready |
|
39 | 39 | KM.shell_channel.execute("pass") |
|
40 | 40 | KM.shell_channel.get_msg(block=True, timeout=5) |
|
41 | 41 | flush_channels() |
|
42 | 42 | |
|
43 | 43 | |
|
44 | 44 | def teardown(): |
|
45 | 45 | KM.stop_channels() |
|
46 | 46 | KM.shutdown_kernel() |
|
47 | 47 | |
|
48 | 48 | |
|
49 | 49 | def flush_channels(): |
|
50 | 50 | """flush any messages waiting on the queue""" |
|
51 | 51 | for channel in (KM.shell_channel, KM.sub_channel): |
|
52 | 52 | while True: |
|
53 | 53 | try: |
|
54 | 54 | msg = channel.get_msg(block=True, timeout=0.1) |
|
55 | 55 | except Empty: |
|
56 | 56 | break |
|
57 | 57 | else: |
|
58 | 58 | list(validate_message(msg)) |
|
59 | 59 | |
|
60 | 60 | |
|
61 | 61 | def execute(code='', **kwargs): |
|
62 | 62 | """wrapper for doing common steps for validating an execution request""" |
|
63 | 63 | shell = KM.shell_channel |
|
64 | 64 | sub = KM.sub_channel |
|
65 | 65 | |
|
66 | 66 | msg_id = shell.execute(code=code, **kwargs) |
|
67 | 67 | reply = shell.get_msg(timeout=2) |
|
68 | 68 | list(validate_message(reply, 'execute_reply', msg_id)) |
|
69 | 69 | busy = sub.get_msg(timeout=2) |
|
70 | 70 | list(validate_message(busy, 'status', msg_id)) |
|
71 | 71 | nt.assert_equal(busy['content']['execution_state'], 'busy') |
|
72 | 72 | |
|
73 | 73 | if not kwargs.get('silent'): |
|
74 | 74 | pyin = sub.get_msg(timeout=2) |
|
75 | 75 | list(validate_message(pyin, 'pyin', msg_id)) |
|
76 | 76 | nt.assert_equal(pyin['content']['code'], code) |
|
77 | 77 | |
|
78 | 78 | return msg_id, reply['content'] |
|
79 | 79 | |
|
80 | 80 | #----------------------------------------------------------------------------- |
|
81 | 81 | # MSG Spec References |
|
82 | 82 | #----------------------------------------------------------------------------- |
|
83 | 83 | |
|
84 | 84 | |
|
85 | 85 | class Reference(HasTraits): |
|
86 | ||
|
86 | ||
|
87 | """ | |
|
88 | Base class for message specification testing. | |
|
89 | ||
|
90 | This class is the core of the message specification test. The | |
|
91 | idea is that child classes implement trait attributes for each | |
|
92 | message key, so that message keys can be tested against these | |
|
93 | traits using the :meth:`check` method. | |
|
94 | ||
|
95 | """ | |
|
96 | ||
|
87 | 97 | def check(self, d): |
|
88 | 98 | """validate a dict against our traits""" |
|
89 | 99 | for key in self.trait_names(): |
|
90 | 100 | yield nt.assert_true(key in d, "Missing key: %r, should be found in %s" % (key, d)) |
|
91 | 101 | # FIXME: always allow None, probably not a good idea |
|
92 | 102 | if d[key] is None: |
|
93 | 103 | continue |
|
94 | 104 | try: |
|
95 | 105 | setattr(self, key, d[key]) |
|
96 | 106 | except TraitError as e: |
|
97 | 107 | yield nt.assert_true(False, str(e)) |
|
98 | 108 | |
|
99 | 109 | |
|
100 | 110 | class RMessage(Reference): |
|
101 | 111 | msg_id = Unicode() |
|
102 | 112 | msg_type = Unicode() |
|
103 | 113 | header = Dict() |
|
104 | 114 | parent_header = Dict() |
|
105 | 115 | content = Dict() |
|
106 | 116 | |
|
107 | 117 | class RHeader(Reference): |
|
108 | 118 | msg_id = Unicode() |
|
109 | 119 | msg_type = Unicode() |
|
110 | 120 | session = Unicode() |
|
111 | 121 | username = Unicode() |
|
112 | 122 | |
|
113 | 123 | class RContent(Reference): |
|
114 | 124 | status = Enum((u'ok', u'error')) |
|
115 | 125 | |
|
116 | 126 | |
|
117 | 127 | class ExecuteReply(Reference): |
|
118 | 128 | execution_count = Integer() |
|
119 | 129 | status = Enum((u'ok', u'error')) |
|
120 | 130 | |
|
121 | 131 | def check(self, d): |
|
122 | 132 | for tst in Reference.check(self, d): |
|
123 | 133 | yield tst |
|
124 | 134 | if d['status'] == 'ok': |
|
125 | 135 | for tst in ExecuteReplyOkay().check(d): |
|
126 | 136 | yield tst |
|
127 | 137 | elif d['status'] == 'error': |
|
128 | 138 | for tst in ExecuteReplyError().check(d): |
|
129 | 139 | yield tst |
|
130 | 140 | |
|
131 | 141 | |
|
132 | 142 | class ExecuteReplyOkay(Reference): |
|
133 | 143 | payload = List(Dict) |
|
134 | 144 | user_variables = Dict() |
|
135 | 145 | user_expressions = Dict() |
|
136 | 146 | |
|
137 | 147 | |
|
138 | 148 | class ExecuteReplyError(Reference): |
|
139 | 149 | ename = Unicode() |
|
140 | 150 | evalue = Unicode() |
|
141 | 151 | traceback = List(Unicode) |
|
142 | 152 | |
|
143 | 153 | |
|
144 | 154 | class OInfoReply(Reference): |
|
145 | 155 | name = Unicode() |
|
146 | 156 | found = Bool() |
|
147 | 157 | ismagic = Bool() |
|
148 | 158 | isalias = Bool() |
|
149 | 159 | namespace = Enum((u'builtin', u'magics', u'alias', u'Interactive')) |
|
150 | 160 | type_name = Unicode() |
|
151 | 161 | string_form = Unicode() |
|
152 | 162 | base_class = Unicode() |
|
153 | 163 | length = Integer() |
|
154 | 164 | file = Unicode() |
|
155 | 165 | definition = Unicode() |
|
156 | 166 | argspec = Dict() |
|
157 | 167 | init_definition = Unicode() |
|
158 | 168 | docstring = Unicode() |
|
159 | 169 | init_docstring = Unicode() |
|
160 | 170 | class_docstring = Unicode() |
|
161 | 171 | call_def = Unicode() |
|
162 | 172 | call_docstring = Unicode() |
|
163 | 173 | source = Unicode() |
|
164 | 174 | |
|
165 | 175 | def check(self, d): |
|
166 | 176 | for tst in Reference.check(self, d): |
|
167 | 177 | yield tst |
|
168 | 178 | if d['argspec'] is not None: |
|
169 | 179 | for tst in ArgSpec().check(d['argspec']): |
|
170 | 180 | yield tst |
|
171 | 181 | |
|
172 | 182 | |
|
173 | 183 | class ArgSpec(Reference): |
|
174 | 184 | args = List(Unicode) |
|
175 | 185 | varargs = Unicode() |
|
176 | 186 | varkw = Unicode() |
|
177 | 187 | defaults = List() |
|
178 | 188 | |
|
179 | 189 | |
|
180 | 190 | class Status(Reference): |
|
181 | 191 | execution_state = Enum((u'busy', u'idle')) |
|
182 | 192 | |
|
183 | 193 | |
|
184 | 194 | class CompleteReply(Reference): |
|
185 | 195 | matches = List(Unicode) |
|
186 | 196 | |
|
187 | 197 | |
|
198 | def Version(num, trait=Integer): | |
|
199 | return List(trait, default_value=[0] * num, minlen=num, maxlen=num) | |
|
200 | ||
|
201 | ||
|
202 | class KernelInfoReply(Reference): | |
|
203 | ||
|
204 | protocol_version = Version(2) | |
|
205 | ipython_version = Version(4, Any) | |
|
206 | language_version = Version(3) | |
|
207 | language = Unicode() | |
|
208 | ||
|
209 | def _ipython_version_changed(self, name, old, new): | |
|
210 | for v in new: | |
|
211 | nt.assert_true( | |
|
212 | isinstance(v, int) or isinstance(v, basestring), | |
|
213 | 'expected int or string as version component, got {0!r}' | |
|
214 | .format(v)) | |
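# Illustrative only: a content dict of the shape the KernelInfoReply reference
# above accepts.  The version numbers are hypothetical, not what any particular
# kernel actually reports.
_example_kernel_info_content = {
    'protocol_version': [4, 0],
    'ipython_version': [0, 14, 0, 'dev'],
    'language_version': [2, 7, 3],
    'language': u'python',
}
# e.g. list(KernelInfoReply().check(_example_kernel_info_content)) should pass.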
|
215 | ||
|
216 | ||
|
188 | 217 | # IOPub messages |
|
189 | 218 | |
|
190 | 219 | class PyIn(Reference): |
|
191 | 220 | code = Unicode() |
|
192 | 221 | execution_count = Integer() |
|
193 | 222 | |
|
194 | 223 | |
|
195 | 224 | PyErr = ExecuteReplyError |
|
196 | 225 | |
|
197 | 226 | |
|
198 | 227 | class Stream(Reference): |
|
199 | 228 | name = Enum((u'stdout', u'stderr')) |
|
200 | 229 | data = Unicode() |
|
201 | 230 | |
|
202 | 231 | |
|
203 | 232 | mime_pat = re.compile(r'\w+/\w+') |
|
204 | 233 | |
|
205 | 234 | class DisplayData(Reference): |
|
206 | 235 | source = Unicode() |
|
207 | 236 | metadata = Dict() |
|
208 | 237 | data = Dict() |
|
209 | 238 | def _data_changed(self, name, old, new): |
|
210 | 239 | for k,v in new.iteritems(): |
|
211 | 240 | nt.assert_true(mime_pat.match(k)) |
|
212 | 241 | nt.assert_true(isinstance(v, basestring), "expected string data, got %r" % v) |
|
213 | 242 | |
|
214 | 243 | |
|
215 | 244 | class PyOut(Reference): |
|
216 | 245 | execution_count = Integer() |
|
217 | 246 | data = Dict() |
|
218 | 247 | def _data_changed(self, name, old, new): |
|
219 | 248 | for k,v in new.iteritems(): |
|
220 | 249 | nt.assert_true(mime_pat.match(k)) |
|
221 | 250 | nt.assert_true(isinstance(v, basestring), "expected string data, got %r" % v) |
|
222 | 251 | |
|
223 | 252 | |
|
224 | 253 | references = { |
|
225 | 254 | 'execute_reply' : ExecuteReply(), |
|
226 | 255 | 'object_info_reply' : OInfoReply(), |
|
227 | 256 | 'status' : Status(), |
|
228 | 257 | 'complete_reply' : CompleteReply(), |
|
258 | 'kernel_info_reply': KernelInfoReply(), | |
|
229 | 259 | 'pyin' : PyIn(), |
|
230 | 260 | 'pyout' : PyOut(), |
|
231 | 261 | 'pyerr' : PyErr(), |
|
232 | 262 | 'stream' : Stream(), |
|
233 | 263 | 'display_data' : DisplayData(), |
|
234 | 264 | } |
|
265 | """ | |
|
266 | Specifications of the `content` part of the reply messages. | |
|
267 | """ | |
|
235 | 268 | |
|
236 | 269 | |
|
237 | 270 | def validate_message(msg, msg_type=None, parent=None): |
|
238 | 271 | """validate a message |
|
239 | 272 | |
|
240 | 273 | This is a generator, and must be iterated through to actually |
|
241 | 274 | trigger each test. |
|
242 | 275 | |
|
243 | 276 | If msg_type and/or parent are given, the msg_type and/or parent msg_id |
|
244 | 277 | are compared with the given values. |
|
245 | 278 | """ |
|
246 | 279 | list(RMessage().check(msg))  # consume the generator so its checks actually run |
|
247 | 280 | if msg_type: |
|
248 | 281 | yield nt.assert_equal(msg['msg_type'], msg_type) |
|
249 | 282 | if parent: |
|
250 | 283 | yield nt.assert_equal(msg['parent_header']['msg_id'], parent) |
|
251 | 284 | content = msg['content'] |
|
252 | 285 | ref = references[msg['msg_type']] |
|
253 | 286 | for tst in ref.check(content): |
|
254 | 287 | yield tst |
|
255 | 288 | |
|
256 | 289 | |
|
257 | 290 | #----------------------------------------------------------------------------- |
|
258 | 291 | # Tests |
|
259 | 292 | #----------------------------------------------------------------------------- |
|
260 | 293 | |
|
261 | 294 | # Shell channel |
|
262 | 295 | |
|
263 | 296 | @dec.parametric |
|
264 | 297 | def test_execute(): |
|
265 | 298 | flush_channels() |
|
266 | 299 | |
|
267 | 300 | shell = KM.shell_channel |
|
268 | 301 | msg_id = shell.execute(code='x=1') |
|
269 | 302 | reply = shell.get_msg(timeout=2) |
|
270 | 303 | for tst in validate_message(reply, 'execute_reply', msg_id): |
|
271 | 304 | yield tst |
|
272 | 305 | |
|
273 | 306 | |
|
274 | 307 | @dec.parametric |
|
275 | 308 | def test_execute_silent(): |
|
276 | 309 | flush_channels() |
|
277 | 310 | msg_id, reply = execute(code='x=1', silent=True) |
|
278 | 311 | |
|
279 | 312 | # flush status=idle |
|
280 | 313 | status = KM.sub_channel.get_msg(timeout=2) |
|
281 | 314 | for tst in validate_message(status, 'status', msg_id): |
|
282 | 315 | yield tst |
|
283 | 316 | nt.assert_equal(status['content']['execution_state'], 'idle') |
|
284 | 317 | |
|
285 | 318 | yield nt.assert_raises(Empty, KM.sub_channel.get_msg, timeout=0.1) |
|
286 | 319 | count = reply['execution_count'] |
|
287 | 320 | |
|
288 | 321 | msg_id, reply = execute(code='x=2', silent=True) |
|
289 | 322 | |
|
290 | 323 | # flush status=idle |
|
291 | 324 | status = KM.sub_channel.get_msg(timeout=2) |
|
292 | 325 | for tst in validate_message(status, 'status', msg_id): |
|
293 | 326 | yield tst |
|
294 | 327 | yield nt.assert_equal(status['content']['execution_state'], 'idle') |
|
295 | 328 | |
|
296 | 329 | yield nt.assert_raises(Empty, KM.sub_channel.get_msg, timeout=0.1) |
|
297 | 330 | count_2 = reply['execution_count'] |
|
298 | 331 | yield nt.assert_equal(count_2, count) |
|
299 | 332 | |
|
300 | 333 | |
|
301 | 334 | @dec.parametric |
|
302 | 335 | def test_execute_error(): |
|
303 | 336 | flush_channels() |
|
304 | 337 | |
|
305 | 338 | msg_id, reply = execute(code='1/0') |
|
306 | 339 | yield nt.assert_equal(reply['status'], 'error') |
|
307 | 340 | yield nt.assert_equal(reply['ename'], 'ZeroDivisionError') |
|
308 | 341 | |
|
309 | 342 | pyerr = KM.sub_channel.get_msg(timeout=2) |
|
310 | 343 | for tst in validate_message(pyerr, 'pyerr', msg_id): |
|
311 | 344 | yield tst |
|
312 | 345 | |
|
313 | 346 | |
|
314 | 347 | def test_execute_inc(): |
|
315 | 348 | """execute request should increment execution_count""" |
|
316 | 349 | flush_channels() |
|
317 | 350 | |
|
318 | 351 | msg_id, reply = execute(code='x=1') |
|
319 | 352 | count = reply['execution_count'] |
|
320 | 353 | |
|
321 | 354 | flush_channels() |
|
322 | 355 | |
|
323 | 356 | msg_id, reply = execute(code='x=2') |
|
324 | 357 | count_2 = reply['execution_count'] |
|
325 | 358 | nt.assert_equal(count_2, count+1) |
|
326 | 359 | |
|
327 | 360 | |
|
328 | 361 | def test_user_variables(): |
|
329 | 362 | flush_channels() |
|
330 | 363 | |
|
331 | 364 | msg_id, reply = execute(code='x=1', user_variables=['x']) |
|
332 | 365 | user_variables = reply['user_variables'] |
|
333 | 366 | nt.assert_equal(user_variables, {u'x' : u'1'}) |
|
334 | 367 | |
|
335 | 368 | |
|
336 | 369 | def test_user_expressions(): |
|
337 | 370 | flush_channels() |
|
338 | 371 | |
|
339 | 372 | msg_id, reply = execute(code='x=1', user_expressions=dict(foo='x+1')) |
|
340 | 373 | user_expressions = reply['user_expressions'] |
|
341 | 374 | nt.assert_equal(user_expressions, {u'foo' : u'2'}) |
|
342 | 375 | |
|
343 | 376 | |
|
344 | 377 | @dec.parametric |
|
345 | 378 | def test_oinfo(): |
|
346 | 379 | flush_channels() |
|
347 | 380 | |
|
348 | 381 | shell = KM.shell_channel |
|
349 | 382 | |
|
350 | 383 | msg_id = shell.object_info('a') |
|
351 | 384 | reply = shell.get_msg(timeout=2) |
|
352 | 385 | for tst in validate_message(reply, 'object_info_reply', msg_id): |
|
353 | 386 | yield tst |
|
354 | 387 | |
|
355 | 388 | |
|
356 | 389 | @dec.parametric |
|
357 | 390 | def test_oinfo_found(): |
|
358 | 391 | flush_channels() |
|
359 | 392 | |
|
360 | 393 | shell = KM.shell_channel |
|
361 | 394 | |
|
362 | 395 | msg_id, reply = execute(code='a=5') |
|
363 | 396 | |
|
364 | 397 | msg_id = shell.object_info('a') |
|
365 | 398 | reply = shell.get_msg(timeout=2) |
|
366 | 399 | for tst in validate_message(reply, 'object_info_reply', msg_id): |
|
367 | 400 | yield tst |
|
368 | 401 | content = reply['content'] |
|
369 | 402 | yield nt.assert_true(content['found']) |
|
370 | 403 | argspec = content['argspec'] |
|
371 | 404 | yield nt.assert_true(argspec is None, "didn't expect argspec dict, got %r" % argspec) |
|
372 | 405 | |
|
373 | 406 | |
|
374 | 407 | @dec.parametric |
|
375 | 408 | def test_oinfo_detail(): |
|
376 | 409 | flush_channels() |
|
377 | 410 | |
|
378 | 411 | shell = KM.shell_channel |
|
379 | 412 | |
|
380 | 413 | msg_id, reply = execute(code='ip=get_ipython()') |
|
381 | 414 | |
|
382 | 415 | msg_id = shell.object_info('ip.object_inspect', detail_level=2) |
|
383 | 416 | reply = shell.get_msg(timeout=2) |
|
384 | 417 | for tst in validate_message(reply, 'object_info_reply', msg_id): |
|
385 | 418 | yield tst |
|
386 | 419 | content = reply['content'] |
|
387 | 420 | yield nt.assert_true(content['found']) |
|
388 | 421 | argspec = content['argspec'] |
|
389 | 422 | yield nt.assert_true(isinstance(argspec, dict), "expected non-empty argspec dict, got %r" % argspec) |
|
390 | 423 | yield nt.assert_equal(argspec['defaults'], [0]) |
|
391 | 424 | |
|
392 | 425 | |
|
393 | 426 | @dec.parametric |
|
394 | 427 | def test_oinfo_not_found(): |
|
395 | 428 | flush_channels() |
|
396 | 429 | |
|
397 | 430 | shell = KM.shell_channel |
|
398 | 431 | |
|
399 | 432 | msg_id = shell.object_info('dne') |
|
400 | 433 | reply = shell.get_msg(timeout=2) |
|
401 | 434 | for tst in validate_message(reply, 'object_info_reply', msg_id): |
|
402 | 435 | yield tst |
|
403 | 436 | content = reply['content'] |
|
404 | 437 | yield nt.assert_false(content['found']) |
|
405 | 438 | |
|
406 | 439 | |
|
407 | 440 | @dec.parametric |
|
408 | 441 | def test_complete(): |
|
409 | 442 | flush_channels() |
|
410 | 443 | |
|
411 | 444 | shell = KM.shell_channel |
|
412 | 445 | |
|
413 | 446 | msg_id, reply = execute(code="alpha = albert = 5") |
|
414 | 447 | |
|
415 | 448 | msg_id = shell.complete('al', 'al', 2) |
|
416 | 449 | reply = shell.get_msg(timeout=2) |
|
417 | 450 | for tst in validate_message(reply, 'complete_reply', msg_id): |
|
418 | 451 | yield tst |
|
419 | 452 | matches = reply['content']['matches'] |
|
420 | 453 | for name in ('alpha', 'albert'): |
|
421 | 454 | yield nt.assert_true(name in matches, "Missing match: %r" % name) |
|
422 | 455 | |
|
423 | 456 | |
|
457 | @dec.parametric | |
|
458 | def test_kernel_info_request(): | |
|
459 | flush_channels() | |
|
460 | ||
|
461 | shell = KM.shell_channel | |
|
462 | ||
|
463 | msg_id = shell.kernel_info() | |
|
464 | reply = shell.get_msg(timeout=2) | |
|
465 | for tst in validate_message(reply, 'kernel_info_reply', msg_id): | |
|
466 | yield tst | |
|
467 | ||
|
468 | ||
|
424 | 469 | # IOPub channel |
|
425 | 470 | |
|
426 | 471 | |
|
427 | 472 | @dec.parametric |
|
428 | 473 | def test_stream(): |
|
429 | 474 | flush_channels() |
|
430 | 475 | |
|
431 | 476 | msg_id, reply = execute("print('hi')") |
|
432 | 477 | |
|
433 | 478 | stdout = KM.sub_channel.get_msg(timeout=2) |
|
434 | 479 | for tst in validate_message(stdout, 'stream', msg_id): |
|
435 | 480 | yield tst |
|
436 | 481 | content = stdout['content'] |
|
437 | 482 | yield nt.assert_equal(content['name'], u'stdout') |
|
438 | 483 | yield nt.assert_equal(content['data'], u'hi\n') |
|
439 | 484 | |
|
440 | 485 | |
|
441 | 486 | @dec.parametric |
|
442 | 487 | def test_display_data(): |
|
443 | 488 | flush_channels() |
|
444 | 489 | |
|
445 | 490 | msg_id, reply = execute("from IPython.core.display import display; display(1)") |
|
446 | 491 | |
|
447 | 492 | display = KM.sub_channel.get_msg(timeout=2) |
|
448 | 493 | for tst in validate_message(display, 'display_data', parent=msg_id): |
|
449 | 494 | yield tst |
|
450 | 495 | data = display['content']['data'] |
|
451 | 496 | yield nt.assert_equal(data['text/plain'], u'1') |
|
452 | 497 |
@@ -1,1002 +1,1043 b'' | |||
|
1 | 1 | .. _messaging: |
|
2 | 2 | |
|
3 | 3 | ====================== |
|
4 | 4 | Messaging in IPython |
|
5 | 5 | ====================== |
|
6 | 6 | |
|
7 | 7 | |
|
8 | 8 | Introduction |
|
9 | 9 | ============ |
|
10 | 10 | |
|
11 | 11 | This document explains the basic communications design and messaging |
|
12 | 12 | specification for how the various IPython objects interact over a network |
|
13 | 13 | transport. The current implementation uses the ZeroMQ_ library for messaging |
|
14 | 14 | within and between hosts. |
|
15 | 15 | |
|
16 | 16 | .. Note:: |
|
17 | 17 | |
|
18 | 18 | This document should be considered the authoritative description of the |
|
19 | 19 | IPython messaging protocol, and all developers are strongly encouraged to |
|
20 | 20 | keep it updated as the implementation evolves, so that we have a single |
|
21 | 21 | common reference for all protocol details. |
|
22 | 22 | |
|
23 | 23 | The basic design is explained in the following diagram: |
|
24 | 24 | |
|
25 | 25 | .. image:: figs/frontend-kernel.png |
|
26 | 26 | :width: 450px |
|
27 | 27 | :alt: IPython kernel/frontend messaging architecture. |
|
28 | 28 | :align: center |
|
29 | 29 | :target: ../_images/frontend-kernel.png |
|
30 | 30 | |
|
31 | 31 | A single kernel can be simultaneously connected to one or more frontends. The |
|
32 | 32 | kernel has three sockets that serve the following functions: |
|
33 | 33 | |
|
34 | 34 | 1. stdin: this ROUTER socket is connected to all frontends, and it allows |
|
35 | 35 | the kernel to request input from the active frontend when :func:`raw_input` is called. |
|
36 | 36 | The frontend that executed the code has a DEALER socket that acts as a 'virtual keyboard' |
|
37 | 37 | for the kernel while this communication is happening (illustrated in the |
|
38 | 38 | figure by the black outline around the central keyboard). In practice, |
|
39 | 39 | frontends may display such kernel requests using a special input widget or |
|
40 | 40 | otherwise indicating that the user is to type input for the kernel instead |
|
41 | 41 | of normal commands in the frontend. |
|
42 | 42 | |
|
43 | 43 | 2. Shell: this single ROUTER socket allows multiple incoming connections from |
|
44 | 44 | frontends, and this is the socket where requests for code execution, object |
|
45 | 45 | information, prompts, etc. are made to the kernel by any frontend. The |
|
46 | 46 | communication on this socket is a sequence of request/reply actions from |
|
47 | 47 | each frontend and the kernel. |
|
48 | 48 | |
|
49 | 49 | 3. IOPub: this socket is the 'broadcast channel' where the kernel publishes all |
|
50 | 50 | side effects (stdout, stderr, etc.) as well as the requests coming from any |
|
51 | 51 | client over the shell socket and its own requests on the stdin socket. There |
|
52 | 52 | are a number of actions in Python which generate side effects: :func:`print` |
|
53 | 53 | writes to ``sys.stdout``, errors generate tracebacks, etc. Additionally, in |
|
54 | 54 | a multi-client scenario, we want all frontends to be able to know what each |
|
55 | 55 | other has sent to the kernel (this can be useful in collaborative scenarios, |
|
56 | 56 | for example). This socket allows both side effects and the information |
|
57 | 57 | about communications taking place with one client over the shell channel |
|
58 | 58 | to be made available to all clients in a uniform manner. |
|
59 | 59 | |
|
60 | 60 | All messages are tagged with enough information (details below) for clients |
|
61 | 61 | to know which messages come from their own interaction with the kernel and |
|
62 | 62 | which ones are from other clients, so they can display each type |
|
63 | 63 | appropriately. |
|
64 | 64 | |
|
65 | 65 | The actual format of the messages allowed on each of these channels is |
|
66 | 66 | specified below. Messages are dicts of dicts with string keys and values that |
|
67 | 67 | are reasonably representable in JSON. Our current implementation uses JSON |
|
68 | 68 | explicitly as its message format, but this shouldn't be considered a permanent |
|
69 | 69 | feature. As we've discovered that JSON has non-trivial performance issues due |
|
70 | 70 | to excessive copying, we may in the future move to a pure pickle-based raw |
|
71 | 71 | message format. However, it should be possible to easily convert from the raw |
|
72 | 72 | objects to JSON, since we may have non-python clients (e.g. a web frontend). |
|
73 | 73 | As long as it's easy to make a JSON version of the objects that is a faithful |
|
74 | 74 | representation of all the data, we can communicate with such clients. |
|
75 | 75 | |
|
76 | 76 | .. Note:: |
|
77 | 77 | |
|
78 | 78 | Not all of these have yet been fully fleshed out, but the key ones are; see |
|
79 | 79 | the kernel and frontend files for actual implementation details. |
|
80 | 80 | |
|
81 | 81 | General Message Format |
|
82 | 82 | ====================== |
|
83 | 83 | |
|
84 | 84 | A message is defined by the following four-dictionary structure:: |
|
85 | 85 | |
|
86 | 86 | { |
|
87 | 87 | # The message header contains a pair of unique identifiers for the |
|
88 | 88 | # originating session and the actual message id, in addition to the |
|
89 | 89 | # username for the process that generated the message. This is useful in |
|
90 | 90 | # collaborative settings where multiple users may be interacting with the |
|
91 | 91 | # same kernel simultaneously, so that frontends can label the various |
|
92 | 92 | # messages in a meaningful way. |
|
93 | 93 | 'header' : { |
|
94 | 94 | 'msg_id' : uuid, |
|
95 | 95 | 'username' : str, |
|
96 | 96 | 'session' : uuid |
|
97 | 97 | # All recognized message type strings are listed below. |
|
98 | 98 | 'msg_type' : str, |
|
99 | 99 | }, |
|
100 | 100 | |
|
101 | 101 | # In a chain of messages, the header from the parent is copied so that |
|
102 | 102 | # clients can track where messages come from. |
|
103 | 103 | 'parent_header' : dict, |
|
104 | 104 | |
|
105 | 105 | # The actual content of the message must be a dict, whose structure |
|
106 | 106 | # depends on the message type. |
|
107 | 107 | 'content' : dict, |
|
108 | 108 | |
|
109 | 109 | # Any metadata associated with the message. |
|
110 | 110 | 'metadata' : dict, |
|
111 | 111 | } |
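For illustration, a fully populated message of this shape could look as follows; every value here is a made-up placeholder rather than output from an actual kernel or frontend::

    {
        'header' : {
            'msg_id' : '<uuid of this message>',
            'username' : 'some-user',
            'session' : '<uuid of the session>',
            'msg_type' : 'execute_request',
        },
        'parent_header' : {},
        'content' : { 'code' : 'x = 1', 'silent' : False },
        'metadata' : {},
    }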
|
112 | 112 | |
|
113 | 113 | |
|
114 | 114 | Python functional API |
|
115 | 115 | ===================== |
|
116 | 116 | |
|
117 | 117 | As messages are dicts, they map naturally to a ``func(**kw)`` call form. We |
|
118 | 118 | should develop, at a few key points, functional forms of all the requests that |
|
119 | 119 | take arguments in this manner and automatically construct the necessary dict |
|
120 | 120 | for sending. |
|
121 | 121 | |
|
122 | 122 | In addition, the Python implementation of the message specification extends |
|
123 | 123 | messages upon deserialization to the following form for convenience:: |
|
124 | 124 | |
|
125 | 125 | { |
|
126 | 126 | 'header' : dict, |
|
127 | 127 | # The msg's unique identifier and type are always stored in the header, |
|
128 | 128 | # but the Python implementation copies them to the top level. |
|
129 | 129 | 'msg_id' : uuid, |
|
130 | 130 | 'msg_type' : str, |
|
131 | 131 | 'parent_header' : dict, |
|
132 | 132 | 'content' : dict, |
|
133 | 133 | 'metadata' : dict, |
|
134 | 134 | } |
|
135 | 135 | |
|
136 | 136 | All messages sent to or received by any IPython process should have this |
|
137 | 137 | extended structure. |
|
138 | 138 | |
|
139 | 139 | |
|
140 | 140 | Messages on the shell ROUTER/DEALER sockets |
|
141 | 141 | =========================================== |
|
142 | 142 | |
|
143 | 143 | .. _execute: |
|
144 | 144 | |
|
145 | 145 | Execute |
|
146 | 146 | ------- |
|
147 | 147 | |
|
148 | 148 | This message type is used by frontends to ask the kernel to execute code on |
|
149 | 149 | behalf of the user, in a namespace reserved to the user's variables (and thus |
|
150 | 150 | separate from the kernel's own internal code and variables). |
|
151 | 151 | |
|
152 | 152 | Message type: ``execute_request``:: |
|
153 | 153 | |
|
154 | 154 | content = { |
|
155 | 155 | # Source code to be executed by the kernel, one or more lines. |
|
156 | 156 | 'code' : str, |
|
157 | 157 | |
|
158 | 158 | # A boolean flag which, if True, signals the kernel to execute |
|
159 | 159 | # this code as quietly as possible. This means that the kernel |
|
160 | 160 | # will compile the code with 'exec' instead of 'single' (so |
|
161 | 161 | # sys.displayhook will not fire), forces store_history to be False, |
|
162 | 162 | # and will *not*: |
|
163 | 163 | # - broadcast exceptions on the PUB socket |
|
164 | 164 | # - do any logging |
|
165 | 165 | # |
|
166 | 166 | # The default is False. |
|
167 | 167 | 'silent' : bool, |
|
168 | 168 | |
|
169 | 169 | # A boolean flag which, if True, signals the kernel to populate history |
|
170 | 170 | # The default is True if silent is False. If silent is True, store_history |
|
171 | 171 | # is forced to be False. |
|
172 | 172 | 'store_history' : bool, |
|
173 | 173 | |
|
174 | 174 | # A list of variable names from the user's namespace to be retrieved. What |
|
175 | 175 | # returns is a JSON string of the variable's repr(), not a python object. |
|
176 | 176 | 'user_variables' : list, |
|
177 | 177 | |
|
178 | 178 | # Similarly, a dict mapping names to expressions to be evaluated in the |
|
179 | 179 | # user's dict. |
|
180 | 180 | 'user_expressions' : dict, |
|
181 | 181 | |
|
182 | 182 | # Some frontends (e.g. the Notebook) do not support stdin requests. If |
|
183 | 183 | # raw_input is called from code executed from such a frontend, a |
|
184 | 184 | # StdinNotImplementedError will be raised. |
|
185 | 185 | 'allow_stdin' : True, |
|
186 | 186 | |
|
187 | 187 | } |
|
188 | 188 | |
|
189 | 189 | The ``code`` field contains a single string (possibly multiline). The kernel |
|
190 | 190 | is responsible for splitting this into one or more independent execution blocks |
|
191 | 191 | and deciding whether to compile these in 'single' or 'exec' mode (see below for |
|
192 | 192 | detailed execution semantics). |
|
193 | 193 | |
|
194 | 194 | The ``user_`` fields deserve a detailed explanation. In the past, IPython had |
|
195 | 195 | the notion of a prompt string that allowed arbitrary code to be evaluated, and |
|
196 | 196 | this was put to good use by many in creating prompts that displayed system |
|
197 | 197 | status, path information, and even more esoteric uses like remote instrument |
|
198 | 198 | status acquired over the network. But now that IPython has a clean separation |
|
199 | 199 | between the kernel and the clients, the kernel has no prompt knowledge; prompts |
|
200 | 200 | are a frontend-side feature, and it should be even possible for different |
|
201 | 201 | frontends to display different prompts while interacting with the same kernel. |
|
202 | 202 | |
|
203 | 203 | The kernel now provides the ability to retrieve data from the user's namespace |
|
204 | 204 | after the execution of the main ``code``, thanks to two fields in the |
|
205 | 205 | ``execute_request`` message: |
|
206 | 206 | |
|
207 | 207 | - ``user_variables``: If only variables from the user's namespace are needed, a |
|
208 | 208 | list of variable names can be passed and a dict with these names as keys and |
|
209 | 209 | their :func:`repr()` as values will be returned. |
|
210 | 210 | |
|
211 | 211 | - ``user_expressions``: For more complex expressions that require function |
|
212 | 212 | evaluations, a dict can be provided with string keys and arbitrary python |
|
213 | 213 | expressions as values. The return message will also contain a dict with the |
|
214 | 214 | same keys and the :func:`repr()` of the evaluated expressions as values. |
|
215 | 215 | |
|
216 | 216 | With this information, frontends can display any status information they wish |
|
217 | 217 | in the form that best suits each frontend (a status line, a popup, inline for a |
|
218 | 218 | terminal, etc). |
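As a concrete illustration (all values here are made up), a frontend that wants the repr of ``x`` and the result of ``x + 1`` after running an assignment could send an ``execute_request`` whose content is::

    content = {
        'code' : 'x = 10',
        'silent' : False,
        'store_history' : True,
        'user_variables' : ['x'],
        'user_expressions' : {'foo' : 'x + 1'},
        'allow_stdin' : True,
    }

The matching ``execute_reply`` would then carry ``'user_variables' : {'x' : '10'}`` and ``'user_expressions' : {'foo' : '11'}``, i.e. the :func:`repr()` of each result as a string.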
|
219 | 219 | |
|
220 | 220 | .. Note:: |
|
221 | 221 | |
|
222 | 222 | In order to obtain the current execution counter for the purposes of |
|
223 | 223 | displaying input prompts, frontends simply make an execution request with an |
|
224 | 224 | empty code string and ``silent=True``. |
|
225 | 225 | |
|
226 | 226 | Execution semantics |
|
227 | 227 | ~~~~~~~~~~~~~~~~~~~ |
|
228 | 228 | |
|
229 | 229 | When the silent flag is false, the execution of user code consists of the |
|
230 | 230 | following phases (in silent mode, only the ``code`` field is executed): |
|
231 | 231 | |
|
232 | 232 | 1. Run the ``pre_runcode_hook``. |
|
233 | 233 | |
|
234 | 234 | 2. Execute the ``code`` field, see below for details. |
|
235 | 235 | |
|
236 | 236 | 3. If #2 succeeds, ``user_variables`` and ``user_expressions`` are |
|
237 | 237 | computed. This ensures that any error in the latter doesn't harm the main |
|
238 | 238 | code execution. |
|
239 | 239 | |
|
240 | 240 | 4. Call any method registered with :meth:`register_post_execute`. |
|
241 | 241 | |
|
242 | 242 | .. warning:: |
|
243 | 243 | |
|
244 | 244 | The API for running code before/after the main code block is likely to |
|
245 | 245 | change soon. Both the ``pre_runcode_hook`` and the |
|
246 | 246 | :meth:`register_post_execute` are susceptible to modification, as we find a |
|
247 | 247 | consistent model for both. |
|
248 | 248 | |
|
249 | 249 | To understand how the ``code`` field is executed, one must know that Python |
|
250 | 250 | code can be compiled in one of three modes (controlled by the ``mode`` argument |
|
251 | 251 | to the :func:`compile` builtin): |
|
252 | 252 | |
|
253 | 253 | *single* |
|
254 | 254 | Valid for a single interactive statement (though the source can contain |
|
255 | 255 | multiple lines, such as a for loop). When compiled in this mode, the |
|
256 | 256 | generated bytecode contains special instructions that trigger the calling of |
|
257 | 257 | :func:`sys.displayhook` for any expression in the block that returns a value. |
|
258 | 258 | This means that a single statement can actually produce multiple calls to |
|
259 | 259 | :func:`sys.displayhook`; for example, a loop where each iteration computes |
|
260 | 260 | an unassigned expression would generate 10 calls:: |
|
261 | 261 | |
|
262 | 262 | for i in range(10): |
|
263 | 263 | i**2 |
|
264 | 264 | |
|
265 | 265 | *exec* |
|
266 | 266 | An arbitrary amount of source code, this is how modules are compiled. |
|
267 | 267 | :func:`sys.displayhook` is *never* implicitly called. |
|
268 | 268 | |
|
269 | 269 | *eval* |
|
270 | 270 | A single expression that returns a value. :func:`sys.displayhook` is *never* |
|
271 | 271 | implicitly called. |
|
272 | 272 | |
|
273 | 273 | |
|
274 | 274 | The ``code`` field is split into individual blocks each of which is valid for |
|
275 | 275 | execution in 'single' mode, and then: |
|
276 | 276 | |
|
277 | 277 | - If there is only a single block: it is executed in 'single' mode. |
|
278 | 278 | |
|
279 | 279 | - If there is more than one block: |
|
280 | 280 | |
|
281 | 281 | * if the last one is no more than two lines long, run all but the last in |
|
282 | 282 | 'exec' mode and the very last one in 'single' mode.  This makes it easy to |
|
283 | 283 | type simple expressions at the end to see computed values. |
|
289 | 289 | |
|
290 | 290 | * otherwise (last one is also multiline), run all in 'exec' mode as a single |
|
291 | 291 | unit. |
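For illustration (this example is not part of the spec itself), a ``code`` field such as::

    for i in range(3):
        x = i**2
    x

would be split into two blocks: the loop runs in 'exec' mode, and the final single-line expression ``x`` runs in 'single' mode, so its value is displayed through :func:`sys.displayhook`.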
|
292 | 292 | |
|
293 | 293 | Any error in retrieving the ``user_variables`` or evaluating the |
|
294 | 294 | ``user_expressions`` will result in a simple error message in the return fields |
|
295 | 295 | of the form:: |
|
296 | 296 | |
|
297 | 297 | [ERROR] ExceptionType: Exception message |
|
298 | 298 | |
|
299 | 299 | The user can simply send the same variable name or expression for evaluation to |
|
300 | 300 | see a regular traceback. |
|
301 | 301 | |
|
302 | 302 | Errors in any registered post_execute functions are also reported similarly, |
|
303 | 303 | and the failing function is removed from the post_execution set so that it does |
|
304 | 304 | not continue triggering failures. |
|
305 | 305 | |
|
306 | 306 | Upon completion of the execution request, the kernel *always* sends a reply, |
|
307 | 307 | with a status code indicating what happened and additional data depending on |
|
308 | 308 | the outcome. See :ref:`below <execution_results>` for the possible return |
|
309 | 309 | codes and associated data. |
|
310 | 310 | |
|
311 | 311 | |
|
312 | 312 | Execution counter (old prompt number) |
|
313 | 313 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ |
|
314 | 314 | |
|
315 | 315 | The kernel has a single, monotonically increasing counter of all execution |
|
316 | 316 | requests that are made with ``store_history=True``. This counter is used to populate |
|
317 | 317 | the ``In[n]``, ``Out[n]`` and ``_n`` variables, so clients will likely want to |
|
318 | 318 | display it in some form to the user, which will typically (but not necessarily) |
|
319 | 319 | be done in the prompts. The value of this counter will be returned as the |
|
320 | 320 | ``execution_count`` field of all ``execute_reply`` messages. |
|
321 | 321 | |
|
322 | 322 | .. _execution_results: |
|
323 | 323 | |
|
324 | 324 | Execution results |
|
325 | 325 | ~~~~~~~~~~~~~~~~~ |
|
326 | 326 | |
|
327 | 327 | Message type: ``execute_reply``:: |
|
328 | 328 | |
|
329 | 329 | content = { |
|
330 | 330 | # One of: 'ok' OR 'error' OR 'abort' |
|
331 | 331 | 'status' : str, |
|
332 | 332 | |
|
333 | 333 | # The global kernel counter that increases by one with each request that |
|
334 | 334 | # stores history. This will typically be used by clients to display |
|
335 | 335 | # prompt numbers to the user. If the request did not store history, this will |
|
336 | 336 | # be the current value of the counter in the kernel. |
|
337 | 337 | 'execution_count' : int, |
|
338 | 338 | } |
|
339 | 339 | |
|
340 | 340 | When status is 'ok', the following extra fields are present:: |
|
341 | 341 | |
|
342 | 342 | { |
|
343 | 343 | # 'payload' will be a list of payload dicts. |
|
344 | 344 | # Each execution payload is a dict with string keys that may have been |
|
345 | 345 | # produced by the code being executed. It is retrieved by the kernel at |
|
346 | 346 | # the end of the execution and sent back to the front end, which can take |
|
347 | 347 | # action on it as needed. See main text for further details. |
|
348 | 348 | 'payload' : list(dict), |
|
349 | 349 | |
|
350 | 350 | # Results for the user_variables and user_expressions. |
|
351 | 351 | 'user_variables' : dict, |
|
352 | 352 | 'user_expressions' : dict, |
|
353 | 353 | } |
|
354 | 354 | |
|
355 | 355 | .. admonition:: Execution payloads |
|
356 | 356 | |
|
357 | 357 | The notion of an 'execution payload' is different from a return value of a |
|
358 | 358 | given set of code, which normally is just displayed on the pyout stream |
|
359 | 359 | through the PUB socket. The idea of a payload is to allow special types of |
|
360 | 360 | code, typically magics, to populate a data container in the IPython kernel |
|
361 | 361 | that will be shipped back to the caller via this channel. The kernel |
|
362 | 362 | has an API for this in the PayloadManager:: |
|
363 | 363 | |
|
364 | 364 | ip.payload_manager.write_payload(payload_dict) |
|
365 | 365 | |
|
366 | 366 | which appends a dictionary to the list of payloads. |
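
   For illustration only, a magic running inside the kernel could write a
   payload like the sketch below; ``ip`` is assumed to be the active
   ``InteractiveShell`` instance, and the payload keys shown are hypothetical,
   not mandated by this spec.

   .. sourcecode:: python

       # Hedged sketch: the keys of this dict are illustrative only.
       payload = {
           'source' : 'my_magic',   # hypothetical identifier of the producer
           'text'   : 'extra data for the frontend to act on',
       }
       # write_payload appends the dict to the list returned in 'payload'.
       ip.payload_manager.write_payload(payload)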
|
367 | 367 | |
|
368 | 368 | |
|
369 | 369 | When status is 'error', the following extra fields are present:: |
|
370 | 370 | |
|
371 | 371 | { |
|
372 | 372 | 'ename' : str, # Exception name, as a string |
|
373 | 373 | 'evalue' : str, # Exception value, as a string |
|
374 | 374 | |
|
375 | 375 | # The traceback will contain a list of frames, represented each as a |
|
376 | 376 | # string. For now we'll stick to the existing design of ultraTB, which |
|
377 | 377 | # controls exception level of detail statefully. But eventually we'll |
|
378 | 378 | # want to grow into a model where more information is collected and |
|
379 | 379 | # packed into the traceback object, with clients deciding how little or |
|
380 | 380 | # how much of it to unpack. But for now, let's start with a simple list |
|
381 | 381 | # of strings, since that requires only minimal changes to ultratb as |
|
382 | 382 | # written. |
|
383 | 383 | 'traceback' : list, |
|
384 | 384 | } |
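
For illustration, an ``execute_reply`` whose status is 'error' might carry
content like the following (all values are made up for a simple NameError)::

    content = {
        'status' : 'error',
        'execution_count' : 3,
        'ename' : 'NameError',
        'evalue' : "name 'x' is not defined",
        'traceback' : ["NameError: name 'x' is not defined"],
    }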
|
385 | 385 | |
|
386 | 386 | |
|
387 | 387 | When status is 'abort', there are for now no additional data fields. This |
|
388 | 388 | happens when the kernel was interrupted by a signal. |
|
389 | 389 | |
|
390 | 390 | Kernel attribute access |
|
391 | 391 | ----------------------- |
|
392 | 392 | |
|
393 | 393 | .. warning:: |
|
394 | 394 | |
|
395 | 395 | This part of the messaging spec is not actually implemented in the kernel |
|
396 | 396 | yet. |
|
397 | 397 | |
|
398 | 398 | While this protocol does not specify full RPC access to arbitrary methods of |
|
399 | 399 | the kernel object, the kernel does allow read (and in some cases write) access |
|
400 | 400 | to certain attributes. |
|
401 | 401 | |
|
402 | 402 | The policy for which attributes can be read is: any attribute of the kernel, or |
|
403 | 403 | its sub-objects, that belongs to a :class:`Configurable` object and has been |
|
404 | 404 | declared at the class-level with Traits validation, is in principle accessible |
|
405 | 405 | as long as its name does not begin with a leading underscore. The attribute |
|
406 | 406 | itself will have metadata indicating whether it allows remote read and/or write |
|
407 | 407 | access. The message spec follows for attribute read and write requests. |
|
408 | 408 | |
|
409 | 409 | Message type: ``getattr_request``:: |
|
410 | 410 | |
|
411 | 411 | content = { |
|
412 | 412 | # The (possibly dotted) name of the attribute |
|
413 | 413 | 'name' : str, |
|
414 | 414 | } |
|
415 | 415 | |
|
416 | 416 | When a ``getattr_request`` fails, there are two possible error types: |
|
417 | 417 | |
|
418 | 418 | - AttributeError: this type of error was raised by the kernel itself when

419 | 419 | trying to access the given name. This means that the attribute likely
|
420 | 420 | doesn't exist. |
|
421 | 421 | |
|
422 | 422 | - AccessError: the attribute exists but its value is not readable remotely. |
|
423 | 423 | |
|
424 | 424 | |
|
425 | 425 | Message type: ``getattr_reply``:: |
|
426 | 426 | |
|
427 | 427 | content = { |
|
428 | 428 | # One of ['ok', 'AttributeError', 'AccessError']. |
|
429 | 429 | 'status' : str, |
|
430 | 430 | # If status is 'ok', a JSON object. |
|
431 | 431 | 'value' : object, |
|
432 | 432 | } |
|
433 | 433 | |
|
434 | 434 | Message type: ``setattr_request``:: |
|
435 | 435 | |
|
436 | 436 | content = { |
|
437 | 437 | # The (possibly dotted) name of the attribute |
|
438 | 438 | 'name' : str, |
|
439 | 439 | |
|
440 | 440 | # A JSON-encoded object, that will be validated by the Traits |
|
441 | 441 | # information in the kernel |
|
442 | 442 | 'value' : object, |
|
443 | 443 | } |
|
444 | 444 | |
|
445 | 445 | When a ``setattr_request`` fails, there are also two possible error types with |
|
446 | 446 | similar meanings as those of the ``getattr_request`` case, but for writing. |
|
447 | 447 | |
|
448 | 448 | Message type: ``setattr_reply``:: |
|
449 | 449 | |
|
450 | 450 | content = { |
|
451 | 451 | # One of ['ok', 'AttributeError', 'AccessError']. |
|
452 | 452 | 'status' : str, |
|
453 | 453 | } |
|
454 | 454 | |
|
455 | 455 | |
|
456 | 456 | |
|
457 | 457 | Object information |
|
458 | 458 | ------------------ |
|
459 | 459 | |
|
460 | 460 | One of IPython's most used capabilities is the introspection of Python objects |
|
461 | 461 | in the user's namespace, typically invoked via the ``?`` and ``??`` characters |
|
462 | 462 | (which in reality are shorthands for the ``%pinfo`` magic). This is used often |
|
463 | 463 | enough that it warrants an explicit message type, especially because frontends |
|
464 | 464 | may want to get object information in response to user keystrokes (like Tab or |
|
465 | 465 | F1), in addition to the user explicitly typing code like ``x??``.
|
466 | 466 | |
|
467 | 467 | Message type: ``object_info_request``:: |
|
468 | 468 | |
|
469 | 469 | content = { |
|
470 | 470 | # The (possibly dotted) name of the object to be searched in all |
|
471 | 471 | # relevant namespaces |
|
472 | 472 | 'name' : str, |
|
473 | 473 | |
|
474 | 474 | # The level of detail desired. The default (0) is equivalent to typing |
|
475 | 475 | # 'x?' at the prompt, 1 is equivalent to 'x??'. |
|
476 | 476 | 'detail_level' : int, |
|
477 | 477 | } |
|
478 | 478 | |
|
479 | 479 | The returned information will be a dictionary with keys very similar to the |
|
480 | 480 | field names that IPython prints at the terminal. |
|
481 | 481 | |
|
482 | 482 | Message type: ``object_info_reply``:: |
|
483 | 483 | |
|
484 | 484 | content = { |
|
485 | 485 | # The name the object was requested under |
|
486 | 486 | 'name' : str, |
|
487 | 487 | |
|
488 | 488 | # Boolean flag indicating whether the named object was found or not. If |
|
489 | 489 | # it's false, all other fields will be empty. |
|
490 | 490 | 'found' : bool, |
|
491 | 491 | |
|
492 | 492 | # Flags for magics and system aliases |
|
493 | 493 | 'ismagic' : bool, |
|
494 | 494 | 'isalias' : bool, |
|
495 | 495 | |
|
496 | 496 | # The name of the namespace where the object was found ('builtin', |
|
497 | 497 | # 'magics', 'alias', 'interactive', etc.) |
|
498 | 498 | 'namespace' : str, |
|
499 | 499 | |
|
500 | 500 | # The type name will be type.__name__ for normal Python objects, but it |
|
501 | 501 | # can also be a string like 'Magic function' or 'System alias' |
|
502 | 502 | 'type_name' : str, |
|
503 | 503 | |
|
504 | 504 | # The string form of the object, possibly truncated for length if |
|
505 | 505 | # detail_level is 0 |
|
506 | 506 | 'string_form' : str, |
|
507 | 507 | |
|
508 | 508 | # For objects with a __class__ attribute this will be set |
|
509 | 509 | 'base_class' : str, |
|
510 | 510 | |
|
511 | 511 | # For objects with a __len__ attribute this will be set |
|
512 | 512 | 'length' : int, |
|
513 | 513 | |
|
514 | 514 | # If the object is a function, class or method whose file we can find, |
|
515 | 515 | # we give its full path |
|
516 | 516 | 'file' : str, |
|
517 | 517 | |
|
518 | 518 | # For pure Python callable objects, we can reconstruct the object |
|
519 | 519 | # definition line which provides its call signature. For convenience this |
|
520 | 520 | # is returned as a single 'definition' field, but below the raw parts that |
|
521 | 521 | # compose it are also returned as the argspec field. |
|
522 | 522 | 'definition' : str, |
|
523 | 523 | |
|
524 | 524 | # The individual parts that together form the definition string. Clients |
|
525 | 525 | # with rich display capabilities may use this to provide a richer and more |
|
526 | 526 | # precise representation of the definition line (e.g. by highlighting |
|
527 | 527 | # arguments based on the user's cursor position). For non-callable |
|
528 | 528 | # objects, this field is empty. |
|
529 | 529 | 'argspec' : { # The names of all the arguments |
|
530 | 530 | args : list, |
|
531 | 531 | # The name of the varargs (*args), if any |
|
532 | 532 | varargs : str, |
|
533 | 533 | # The name of the varkw (**kw), if any |
|
534 | 534 | varkw : str, |
|
535 | 535 | # The values (as strings) of all default arguments. Note |
|
536 | 536 | # that these must be matched *in reverse* with the 'args' |
|
537 | 537 | # list above, since the first positional args have no default |
|
538 | 538 | # value at all. |
|
539 | 539 | defaults : list, |
|
540 | 540 | }, |
|
541 | 541 | |
|
542 | 542 | # For instances, provide the constructor signature (the definition of |
|
543 | 543 | # the __init__ method): |
|
544 | 544 | 'init_definition' : str, |
|
545 | 545 | |
|
546 | 546 | # Docstrings: for any object (function, method, module, package) with a |
|
547 | 547 | # docstring, we show it. In addition, we may provide other
|
548 | 548 | # docstrings. For example, for instances we will show the constructor |
|
549 | 549 | # and class docstrings as well, if available. |
|
550 | 550 | 'docstring' : str, |
|
551 | 551 | |
|
552 | 552 | # For instances, provide the constructor and class docstrings |
|
553 | 553 | 'init_docstring' : str, |
|
554 | 554 | 'class_docstring' : str, |
|
555 | 555 | |
|
556 | 556 | # If it's a callable object whose call method has a separate docstring and |
|
557 | 557 | # definition line: |
|
558 | 558 | 'call_def' : str, |
|
559 | 559 | 'call_docstring' : str, |
|
560 | 560 | |
|
561 | 561 | # If detail_level was 1, we also try to find the source code that |
|
562 | 562 | # defines the object, if possible. The string 'None' will indicate |
|
563 | 563 | # that no source was found. |
|
564 | 564 | 'source' : str, |
|
565 | 565 | } |
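
As a sketch of how a client might consume the ``argspec`` field, the helper
below rebuilds a definition-like string, matching ``defaults`` against the
*last* entries of ``args`` as described above. It is purely illustrative and
not part of the protocol.

.. sourcecode:: python

    def format_definition(name, argspec):
        """Rebuild e.g. 'f(a, b=1, **kw)' from an argspec dict."""
        args = list(argspec.get('args') or [])
        defaults = list(argspec.get('defaults') or [])
        n_plain = len(args) - len(defaults)   # leading args have no default
        parts = args[:n_plain]
        parts += ['%s=%s' % (a, d) for a, d in zip(args[n_plain:], defaults)]
        if argspec.get('varargs'):
            parts.append('*' + argspec['varargs'])
        if argspec.get('varkw'):
            parts.append('**' + argspec['varkw'])
        return '%s(%s)' % (name, ', '.join(parts))

    # format_definition('f', {'args': ['a', 'b'], 'varargs': '',
    #                         'varkw': 'kw', 'defaults': ['1']})
    # -> 'f(a, b=1, **kw)'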
|
566 | 566 | |
|
567 | 567 | |
|
568 | 568 | Complete |
|
569 | 569 | -------- |
|
570 | 570 | |
|
571 | 571 | Message type: ``complete_request``:: |
|
572 | 572 | |
|
573 | 573 | content = { |
|
574 | 574 | # The text to be completed, such as 'a.is' |
|
575 | 575 | 'text' : str, |
|
576 | 576 | |
|
577 | 577 | # The full line, such as 'print a.is'. This allows completers to |
|
578 | 578 | # make decisions that may require information about more than just the |
|
579 | 579 | # current word. |
|
580 | 580 | 'line' : str, |
|
581 | 581 | |
|
582 | 582 | # The entire block of text where the line is. This may be useful in the |
|
583 | 583 | # case of multiline completions where more context may be needed. Note: if |
|
584 | 584 | # in practice this field proves unnecessary, remove it to lighten the |
|
585 | 585 | # messages. |
|
586 | 586 | |
|
587 | 587 | 'block' : str, |
|
588 | 588 | |
|
589 | 589 | # The position of the cursor where the user hit 'TAB' on the line. |
|
590 | 590 | 'cursor_pos' : int, |
|
591 | 591 | } |
|
592 | 592 | |
|
593 | 593 | Message type: ``complete_reply``:: |
|
594 | 594 | |
|
595 | 595 | content = { |
|
596 | 596 | # The list of all matches to the completion request, such as |
|
597 | 597 | # ['a.isalnum', 'a.isalpha'] for the above example. |
|
598 | 598 | 'matches' : list |
|
599 | 599 | } |
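
For the ``print a.is`` example used above, the request and reply contents
could look roughly like this (the list of matches is an illustrative subset)::

    # complete_request
    content = {
        'text' : 'a.is',
        'line' : 'print a.is',
        'block' : 'print a.is',
        'cursor_pos' : 10,
    }

    # complete_reply
    content = {
        'matches' : ['a.isalnum', 'a.isalpha', 'a.isdigit'],
    }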
|
600 | 600 | |
|
601 | 601 | |
|
602 | 602 | History |
|
603 | 603 | ------- |
|
604 | 604 | |
|
605 | 605 | This message type is for clients to explicitly request history from a kernel. The kernel has all
|
606 | 606 | the actual execution history stored in a single location, so clients can |
|
607 | 607 | request it from the kernel when needed. |
|
608 | 608 | |
|
609 | 609 | Message type: ``history_request``:: |
|
610 | 610 | |
|
611 | 611 | content = { |
|
612 | 612 | |
|
613 | 613 | # If True, also return output history in the resulting dict. |
|
614 | 614 | 'output' : bool, |
|
615 | 615 | |
|
616 | 616 | # If True, return the raw input history, else the transformed input. |
|
617 | 617 | 'raw' : bool, |
|
618 | 618 | |
|
619 | 619 | # So far, this can be 'range', 'tail' or 'search'. |
|
620 | 620 | 'hist_access_type' : str, |
|
621 | 621 | |
|
622 | 622 | # If hist_access_type is 'range', get a range of input cells. session can |
|
623 | 623 | # be a positive session number, or a negative number to count back from |
|
624 | 624 | # the current session. |
|
625 | 625 | 'session' : int, |
|
626 | 626 | # start and stop are line numbers within that session. |
|
627 | 627 | 'start' : int, |
|
628 | 628 | 'stop' : int, |
|
629 | 629 | |
|
630 | 630 | # If hist_access_type is 'tail' or 'search', get the last n cells. |
|
631 | 631 | 'n' : int, |
|
632 | 632 | |
|
633 | 633 | # If hist_access_type is 'search', get cells matching the specified glob |
|
634 | 634 | # pattern (with * and ? as wildcards). |
|
635 | 635 | 'pattern' : str, |
|
636 | 636 | |
|
637 | 637 | } |
|
638 | 638 | |
|
639 | 639 | Message type: ``history_reply``:: |
|
640 | 640 | |
|
641 | 641 | content = { |
|
642 | 642 | # A list of 3 tuples, either: |
|
643 | 643 | # (session, line_number, input) or |
|
644 | 644 | # (session, line_number, (input, output)), |
|
645 | 645 | # depending on whether output was False or True, respectively. |
|
646 | 646 | 'history' : list, |
|
647 | 647 | } |
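
As an example, a client asking for the last two raw input cells, without
output, might exchange contents like the following (all values made up)::

    # history_request
    content = {
        'output' : False,
        'raw' : True,
        'hist_access_type' : 'tail',
        'n' : 2,
    }

    # history_reply
    content = {
        'history' : [(1, 7, 'a = 1'),
                     (1, 8, 'print a')],
    }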
|
648 | 648 | |
|
649 | 649 | |
|
650 | 650 | Connect |
|
651 | 651 | ------- |
|
652 | 652 | |
|
653 | 653 | When a client connects to the request/reply socket of the kernel, it can issue |
|
654 | 654 | a connect request to get basic information about the kernel, such as the ports |
|
655 | 655 | the other ZeroMQ sockets are listening on. This allows clients to only have |
|
656 | 656 | to know about a single port (the shell channel) to connect to a kernel. |
|
657 | 657 | |
|
658 | 658 | Message type: ``connect_request``:: |
|
659 | 659 | |
|
660 | 660 | content = { |
|
661 | 661 | } |
|
662 | 662 | |
|
663 | 663 | Message type: ``connect_reply``:: |
|
664 | 664 | |
|
665 | 665 | content = { |
|
666 | 666 | 'shell_port' : int,  # The port the shell ROUTER socket is listening on.

667 | 667 | 'iopub_port' : int,  # The port the PUB socket is listening on.

668 | 668 | 'stdin_port' : int,  # The port the stdin ROUTER socket is listening on.

669 | 669 | 'hb_port' : int,     # The port the heartbeat socket is listening on.
|
670 | 670 | } |
|
671 | 671 | |
|
672 | 672 | |
|
673 | Kernel info | |
|
674 | ----------- | |
|
675 | ||
|
676 | If a client needs to know which protocol the kernel supports, it can | |

677 | ask for the version number of the messaging protocol supported by the kernel. | |

678 | This message can also be used to fetch other core information about the | |

679 | kernel, such as the language (e.g., Python), the language version number and | |

680 | the IPython version number. | |
|
681 | ||
|
682 | Message type: ``kernel_info_request``:: | |
|
683 | ||
|
684 | content = { | |
|
685 | } | |
|
686 | ||
|
687 | Message type: ``kernel_info_reply``:: | |
|
688 | ||
|
689 | content = { | |
|
690 | # Version of messaging protocol (mandatory). | |
|
691 | # The first integer indicates major version. It is incremented when | |
|
692 | # there is any backward incompatible change. | |
|
693 | # The second integer indicates minor version. It is incremented when | |
|
694 | # there is any backward compatible change. | |
|
695 | 'protocol_version': [int, int], | |
|
696 | ||
|
697 | # IPython version number (optional). | |
|
698 | # A non-Python kernel backend may not have this version number. | |

699 | # The last component is an extra field, which may be 'dev' or | |

700 | # 'rc1' in a development version. It is an empty string for | |

701 | # a released version. | |
|
702 | 'ipython_version': [int, int, int, str], | |
|
703 | ||
|
704 | # Language version number (mandatory). | |
|
705 | # It is Python version number (e.g., [2, 7, 3]) for the kernel | |
|
706 | # included in IPython. | |
|
707 | 'language_version': [int, ...], | |
|
708 | ||
|
709 | # Programming language in which kernel is implemented (mandatory). | |
|
710 | # Kernel included in IPython returns 'python'. | |
|
711 | 'language': str, | |
|
712 | } | |
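
A client might use this reply to check compatibility, for example by comparing
the major protocol version against the one it was written for. The sketch
below is illustrative only; the required major version is an assumption.

.. sourcecode:: python

    CLIENT_PROTOCOL_MAJOR = 4   # assumed version this client implements

    def protocol_compatible(reply_content):
        """Return True if the kernel speaks a compatible protocol."""
        major, minor = reply_content['protocol_version']
        # Major bumps are backward incompatible; minor bumps are not.
        return major == CLIENT_PROTOCOL_MAJOR

    protocol_compatible({'protocol_version': [4, 0],
                         'language': 'python',
                         'language_version': [2, 7, 3]})   # -> True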
|
713 | ||
|
673 | 714 | |
|
674 | 715 | Kernel shutdown |
|
675 | 716 | --------------- |
|
676 | 717 | |
|
677 | 718 | The clients can request the kernel to shut itself down; this is used in |
|
678 | 719 | multiple cases: |
|
679 | 720 | |
|
680 | 721 | - when the user chooses to close the client application via a menu or window |
|
681 | 722 | control. |
|
682 | 723 | - when the user types 'exit' or 'quit' (or their uppercase magic equivalents). |
|
683 | 724 | - when the user chooses a GUI method (like the 'Ctrl-C' shortcut in the |
|
684 | 725 | IPythonQt client) to force a kernel restart to get a clean kernel without |
|
685 | 726 | losing client-side state like history or inlined figures. |
|
686 | 727 | |
|
687 | 728 | The client sends a shutdown request to the kernel, and once it receives the |
|
688 | 729 | reply message (which is otherwise empty), it can assume that the kernel has |
|
689 | 730 | completed shutdown safely. |
|
690 | 731 | |
|
691 | 732 | Upon their own shutdown, client applications will typically execute a last |
|
692 | 733 | minute sanity check and forcefully terminate any kernel that is still alive, to |
|
693 | 734 | avoid leaving stray processes on the user's machine.
|
694 | 735 | |
|
695 | 736 | For both shutdown request and reply, there is no actual content that needs to |
|
696 | 737 | be sent, so the content dict is empty. |
|
697 | 738 | |
|
698 | 739 | Message type: ``shutdown_request``:: |
|
699 | 740 | |
|
700 | 741 | content = { |
|
701 | 742 | 'restart' : bool # whether the shutdown is final, or precedes a restart |
|
702 | 743 | } |
|
703 | 744 | |
|
704 | 745 | Message type: ``shutdown_reply``:: |
|
705 | 746 | |
|
706 | 747 | content = { |
|
707 | 748 | 'restart' : bool # whether the shutdown is final, or precedes a restart |
|
708 | 749 | } |
|
709 | 750 | |
|
710 | 751 | .. Note:: |
|
711 | 752 | |
|
712 | 753 | When the clients detect a dead kernel thanks to inactivity on the heartbeat |
|
713 | 754 | socket, they simply send a forceful process termination signal, since a dead |
|
714 | 755 | process is unlikely to respond in any useful way to messages. |
|
715 | 756 | |
|
716 | 757 | |
|
717 | 758 | Messages on the PUB/SUB socket |
|
718 | 759 | ============================== |
|
719 | 760 | |
|
720 | 761 | Streams (stdout, stderr, etc) |
|
721 | 762 | ------------------------------ |
|
722 | 763 | |
|
723 | 764 | Message type: ``stream``:: |
|
724 | 765 | |
|
725 | 766 | content = { |
|
726 | 767 | # The name of the stream is one of 'stdin', 'stdout', 'stderr' |
|
727 | 768 | 'name' : str, |
|
728 | 769 | |
|
729 | 770 | # The data is an arbitrary string to be written to that stream |
|
730 | 771 | 'data' : str, |
|
731 | 772 | } |
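
For instance, a ``print`` statement executed in the kernel might produce a
stream message whose content looks like this (values illustrative)::

    content = {
        'name' : 'stdout',
        'data' : 'hello, world\n',
    }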
|
732 | 773 | |
|
733 | 774 | When a kernel receives a raw_input call, it should also broadcast it on the pub |
|
734 | 775 | socket with the names 'stdin' and 'stdin_reply'. This will allow other clients |
|
735 | 776 | to monitor/display kernel interactions and possibly replay them to their user |
|
736 | 777 | or otherwise expose them. |
|
737 | 778 | |
|
738 | 779 | Display Data |
|
739 | 780 | ------------ |
|
740 | 781 | |
|
741 | 782 | This type of message is used to bring back data that should be displayed (text,
|
742 | 783 | html, svg, etc.) in the frontends. This data is published to all frontends. |
|
743 | 784 | Each message can have multiple representations of the data; it is up to the |
|
744 | 785 | frontend to decide which to use and how. A single message should contain all |
|
745 | 786 | possible representations of the same information. Each representation should |
|
746 | 787 | be a JSON'able data structure, keyed by a valid MIME type.
|
747 | 788 | |
|
748 | 789 | Some questions remain about this design: |
|
749 | 790 | |
|
750 | 791 | * Do we use this message type for pyout/displayhook? Probably not, because |
|
751 | 792 | the displayhook also has to handle the Out prompt display. On the other hand |
|
752 | 793 | we could put that information into the metadata section.
|
753 | 794 | |
|
754 | 795 | Message type: ``display_data``:: |
|
755 | 796 | |
|
756 | 797 | content = { |
|
757 | 798 | |
|
758 | 799 | # Who created the data
|
759 | 800 | 'source' : str, |
|
760 | 801 | |
|
761 | 802 | # The data dict contains key/value pairs, where the keys are MIME
|
762 | 803 | # types and the values are the raw data of the representation in that |
|
763 | 804 | # format. The data dict must minimally contain the ``text/plain`` |
|
764 | 805 | # MIME type which is used as a backup representation. |
|
765 | 806 | 'data' : dict, |
|
766 | 807 | |
|
767 | 808 | # Any metadata that describes the data |
|
768 | 809 | 'metadata' : dict |
|
769 | 810 | } |
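
An illustrative ``display_data`` content carrying two representations of the
same figure might look as follows; the base64 PNG payload is truncated and
all values are made up::

    content = {
        'source' : 'IPython.core.display',    # illustrative source string
        'data' : {
            'text/plain' : '<matplotlib.figure.Figure>',  # required fallback
            'image/png'  : 'iVBORw0KGgoAAAANSUhEUg...',   # base64 PNG, truncated
        },
        'metadata' : {},
    }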
|
770 | 811 | |
|
771 | 812 | |
|
772 | 813 | Raw Data Publication |
|
773 | 814 | -------------------- |
|
774 | 815 | |
|
775 | 816 | ``display_data`` lets you publish *representations* of data, such as images and html. |
|
776 | 817 | This ``data_pub`` message lets you publish *actual raw data*, sent via message buffers. |
|
777 | 818 | |
|
778 | 819 | data_pub messages are constructed via the :func:`IPython.lib.datapub.publish_data` function: |
|
779 | 820 | |
|
780 | 821 | .. sourcecode:: python |
|
781 | 822 | |
|
782 | 823 | from IPython.zmq.datapub import publish_data |
|
783 | 824 | ns = dict(x=my_array) |
|
784 | 825 | publish_data(ns) |
|
785 | 826 | |
|
786 | 827 | |
|
787 | 828 | Message type: ``data_pub``:: |
|
788 | 829 | |
|
789 | 830 | content = { |
|
790 | 831 | # the keys of the data dict, after it has been unserialized |
|
791 | 832 | keys = ['a', 'b'] |
|
792 | 833 | } |
|
793 | 834 | # the namespace dict will be serialized in the message buffers, |
|
794 | 835 | # which will have a length of at least one |
|
795 | 836 | buffers = ['pdict', ...] |
|
796 | 837 | |
|
797 | 838 | |
|
798 | 839 | The interpretation of a sequence of data_pub messages for a given parent request should be |
|
799 | 840 | to update a single namespace with subsequent results. |
|
800 | 841 | |
|
801 | 842 | .. note:: |
|
802 | 843 | |
|
803 | 844 | No frontends directly handle data_pub messages at this time. |
|
804 | 845 | It is currently only used by the client/engines in :mod:`IPython.parallel`, |
|
805 | 846 | where engines may publish *data* to the Client, |
|
806 | 847 | of which the Client can then publish *representations* via ``display_data`` |
|
807 | 848 | to various frontends. |
|
808 | 849 | |
|
809 | 850 | Python inputs |
|
810 | 851 | ------------- |
|
811 | 852 | |
|
812 | 853 | These messages are the re-broadcast of the ``execute_request``. |
|
813 | 854 | |
|
814 | 855 | Message type: ``pyin``:: |
|
815 | 856 | |
|
816 | 857 | content = { |
|
817 | 858 | 'code' : str, # Source code to be executed, one or more lines |
|
818 | 859 | |
|
819 | 860 | # The counter for this execution is also provided so that clients can |
|
820 | 861 | # display it, since IPython automatically creates variables called _iN |
|
821 | 862 | # (for input prompt In[N]). |
|
822 | 863 | 'execution_count' : int |
|
823 | 864 | } |
|
824 | 865 | |
|
825 | 866 | Python outputs |
|
826 | 867 | -------------- |
|
827 | 868 | |
|
828 | 869 | When Python produces output from code that has been compiled in with the |
|
829 | 870 | 'single' flag to :func:`compile`, any expression that produces a value (such as |
|
830 | 871 | ``1+1``) is passed to ``sys.displayhook``, which is a callable that can do with |
|
831 | 872 | this value whatever it wants. The default behavior of ``sys.displayhook`` in |
|
832 | 873 | the Python interactive prompt is to print to ``sys.stdout`` the :func:`repr` of |
|
833 | 874 | the value as long as it is not ``None`` (which isn't printed at all). In our |
|
834 | 875 | case, the kernel instantiates as ``sys.displayhook`` an object which has |
|
835 | 876 | similar behavior, but which instead of printing to stdout, broadcasts these |
|
836 | 877 | values as ``pyout`` messages for clients to display appropriately. |
|
837 | 878 | |
|
838 | 879 | IPython's displayhook can handle multiple simultaneous formats depending on its |
|
839 | 880 | configuration. The default pretty-printed repr text is always given with the |
|
840 | 881 | ``data`` entry in this message. Any other formats are provided in the |
|
841 | 882 | ``extra_formats`` list. Frontends are free to display any or all of these |
|
842 | 883 | according to their capabilities. The ``extra_formats`` list contains 3-tuples of an ID
|
843 | 884 | string, a type string, and the data. The ID is unique to the formatter |
|
844 | 885 | implementation that created the data. Frontends will typically ignore the ID |
|
845 | 886 | unless they have requested a particular formatter. The type string tells the

846 | 887 | frontend how to interpret the data. It is often, but not always, a MIME type.

847 | 888 | Frontends should ignore types that they do not understand. The data itself is

848 | 889 | any JSON object and depends on the format. It is often, but not always, a string.
|
849 | 890 | |
|
850 | 891 | Message type: ``pyout``:: |
|
851 | 892 | |
|
852 | 893 | content = { |
|
853 | 894 | |
|
854 | 895 | # The counter for this execution is also provided so that clients can |
|
855 | 896 | # display it, since IPython automatically creates variables called _N |
|
856 | 897 | # (for prompt N). |
|
857 | 898 | 'execution_count' : int, |
|
858 | 899 | |
|
859 | 900 | # The data dict contains key/value pairs, where the keys are MIME
|
860 | 901 | # types and the values are the raw data of the representation in that |
|
861 | 902 | # format. The data dict must minimally contain the ``text/plain`` |
|
862 | 903 | # MIME type which is used as a backup representation. |
|
863 | 904 | 'data' : dict, |
|
864 | 905 | |
|
865 | 906 | } |
|
866 | 907 | |
|
867 | 908 | Python errors |
|
868 | 909 | ------------- |
|
869 | 910 | |
|
870 | 911 | When an error occurs during code execution, the kernel publishes the exception details to all frontends.
|
871 | 912 | |
|
872 | 913 | Message type: ``pyerr``:: |
|
873 | 914 | |
|
874 | 915 | content = { |
|
875 | 916 | # Similar content to the execute_reply messages for the 'error' case, |
|
876 | 917 | # except the 'status' field is omitted. |
|
877 | 918 | } |
|
878 | 919 | |
|
879 | 920 | Kernel status |
|
880 | 921 | ------------- |
|
881 | 922 | |
|
882 | 923 | This message type is used by frontends to monitor the status of the kernel. |
|
883 | 924 | |
|
884 | 925 | Message type: ``status``:: |
|
885 | 926 | |
|
886 | 927 | content = { |
|
887 | 928 | # When the kernel starts to execute code, it will enter the 'busy' |
|
888 | 929 | # state and when it finishes, it will enter the 'idle' state. |
|
889 | 930 | execution_state : ('busy', 'idle') |
|
890 | 931 | } |
|
891 | 932 | |
|
892 | 933 | Kernel crashes |
|
893 | 934 | -------------- |
|
894 | 935 | |
|
895 | 936 | When the kernel has an unexpected exception, caught by the last-resort |
|
896 | 937 | sys.excepthook, we should broadcast the crash handler's output before exiting. |
|
897 | 938 | This will allow clients to notice that a kernel died, inform the user and |
|
898 | 939 | propose further actions. |
|
899 | 940 | |
|
900 | 941 | Message type: ``crash``:: |
|
901 | 942 | |
|
902 | 943 | content = { |
|
903 | 944 | # Similarly to the 'error' case for execute_reply messages, this will |
|
904 | 945 | # contain ename, evalue and traceback fields.
|
905 | 946 | |
|
906 | 947 | # An additional field with supplementary information such as where to |
|
907 | 948 | # send the crash message |
|
908 | 949 | 'info' : str, |
|
909 | 950 | } |
|
910 | 951 | |
|
911 | 952 | |
|
912 | 953 | Future ideas |
|
913 | 954 | ------------ |
|
914 | 955 | |
|
915 | 956 | Other potential message types, currently unimplemented, listed below as ideas. |
|
916 | 957 | |
|
917 | 958 | Message type: ``file``:: |
|
918 | 959 | |
|
919 | 960 | content = { |
|
920 | 961 | 'path' : 'cool.jpg', |
|
921 | 962 | 'mimetype' : str, |
|
922 | 963 | 'data' : str, |
|
923 | 964 | } |
|
924 | 965 | |
|
925 | 966 | |
|
926 | 967 | Messages on the stdin ROUTER/DEALER sockets |
|
927 | 968 | =========================================== |
|
928 | 969 | |
|
929 | 970 | This is a socket where the request/reply pattern goes in the opposite direction: |
|
930 | 971 | from the kernel to a *single* frontend, and its purpose is to allow |
|
931 | 972 | ``raw_input`` and similar operations that read from ``sys.stdin`` on the kernel |
|
932 | 973 | to be fulfilled by the client. The request should be made to the frontend that |
|
933 | 974 | made the execution request that prompted ``raw_input`` to be called. For now we |
|
934 | 975 | will keep these messages as simple as possible, since they are only meant to convey
|
935 | 976 | the ``raw_input(prompt)`` call. |
|
936 | 977 | |
|
937 | 978 | Message type: ``input_request``:: |
|
938 | 979 | |
|
939 | 980 | content = { 'prompt' : str } |
|
940 | 981 | |
|
941 | 982 | Message type: ``input_reply``:: |
|
942 | 983 | |
|
943 | 984 | content = { 'value' : str } |
|
944 | 985 | |
|
945 | 986 | .. Note:: |
|
946 | 987 | |
|
947 | 988 | We do not explicitly try to forward the raw ``sys.stdin`` object, because in |
|
948 | 989 | practice the kernel should behave like an interactive program. When a |
|
949 | 990 | program is opened on the console, the keyboard effectively takes over the |
|
950 | 991 | ``stdin`` file descriptor, and it can't be used for raw reading anymore. |
|
951 | 992 | Since the IPython kernel effectively behaves like a console program (albeit |
|
952 | 993 | one whose "keyboard" is actually living in a separate process and |
|
953 | 994 | transported over the zmq connection), raw ``stdin`` isn't expected to be |
|
954 | 995 | available. |
|
955 | 996 | |
|
956 | 997 | |
|
957 | 998 | Heartbeat for kernels |
|
958 | 999 | ===================== |
|
959 | 1000 | |
|
960 | 1001 | Initially we had considered using messages like those above over ZMQ for a |
|
961 | 1002 | kernel 'heartbeat' (a way to detect quickly and reliably whether a kernel is |
|
962 | 1003 | alive at all, even if it may be busy executing user code). But this has the |
|
963 | 1004 | problem that if the kernel is locked inside extension code, it wouldn't execute |
|
964 | 1005 | the Python heartbeat code. It turns out, however, that we can implement a basic
|
965 | 1006 | heartbeat with pure ZMQ, without using any Python messaging at all. |
|
966 | 1007 | |
|
967 | 1008 | The monitor sends out a single zmq message (right now, it is a str of the |
|
968 | 1009 | monitor's lifetime in seconds), and gets the same message right back, prefixed |
|
969 | 1010 | with the zmq identity of the DEALER socket in the heartbeat process. This can be |
|
970 | 1011 | a uuid, or even a full message, but there doesn't seem to be a need for packing |
|
971 | 1012 | up a message when the sender and receiver are the exact same Python object. |
|
972 | 1013 | |
|
973 | 1014 | The model is this:: |
|
974 | 1015 | |
|
975 | 1016 | monitor.send(str(self.lifetime)) # '1.2345678910' |
|
976 | 1017 | |
|
977 | 1018 | and the monitor receives some number of messages of the form:: |
|
978 | 1019 | |
|
979 | 1020 | ['uuid-abcd-dead-beef', '1.2345678910'] |
|
980 | 1021 | |
|
981 | 1022 | where the first part is the zmq.IDENTITY of the heart's DEALER on the engine, and |
|
982 | 1023 | the rest is the message sent by the monitor. No Python code ever has any |
|
983 | 1024 | access to the message between the monitor's send and the monitor's recv.
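
The sketch below mimics that round trip with plain pyzmq so the mechanics are
visible. The ports, the identity string and the sleep-based synchronization
are illustrative assumptions; the real heartbeat performs the echo without
any Python-level message handling at all.

.. sourcecode:: python

    import time
    import zmq

    ctx = zmq.Context.instance()

    # Monitor side: broadcast pings on a PUB socket, collect echoes on a ROUTER.
    ping = ctx.socket(zmq.PUB)
    ping.bind('tcp://127.0.0.1:5655')
    pong = ctx.socket(zmq.ROUTER)
    pong.bind('tcp://127.0.0.1:5656')

    # Heart side (in the kernel process): whatever arrives on the SUB socket is
    # echoed out of a DEALER, whose identity is prefixed automatically.
    sub = ctx.socket(zmq.SUB)
    sub.setsockopt(zmq.SUBSCRIBE, b'')
    sub.connect('tcp://127.0.0.1:5655')
    dealer = ctx.socket(zmq.DEALER)
    dealer.setsockopt(zmq.IDENTITY, b'uuid-abcd-dead-beef')
    dealer.connect('tcp://127.0.0.1:5656')

    time.sleep(0.5)                  # crude: wait for the connections to settle
    ping.send(b'1.2345678910')       # monitor.send(str(self.lifetime))
    dealer.send(sub.recv())          # the heart echoes the bytes untouched

    print(pong.recv_multipart())
    # -> [b'uuid-abcd-dead-beef', b'1.2345678910']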
|
984 | 1025 | |
|
985 | 1026 | |
|
986 | 1027 | ToDo |
|
987 | 1028 | ==== |
|
988 | 1029 | |
|
989 | 1030 | Missing things include: |
|
990 | 1031 | |
|
991 | 1032 | * Important: finish thinking through the payload concept and API. |
|
992 | 1033 | |
|
993 | 1034 | * Important: ensure that we have a good solution for magics like %edit. It's |
|
994 | 1035 | likely that with the payload concept we can build a full solution, but not |
|
995 | 1036 | 100% clear yet. |
|
996 | 1037 | |
|
997 | 1038 | * Finishing the details of the heartbeat protocol. |
|
998 | 1039 | |
|
999 | 1040 | * Signal handling: specify what kind of information kernel should broadcast (or |
|
1000 | 1041 | not) when it receives signals. |
|
1001 | 1042 | |
|
1002 | 1043 | .. include:: ../links.rst |