Ensure set_custom_completer is working. Fixes #11272
Nathan Goldbaum

The requested changes are too big and content was truncated.

@@ -1,2089 +1,2093 @@
1 """Completion for IPython.
1 """Completion for IPython.
2
2
3 This module started as fork of the rlcompleter module in the Python standard
3 This module started as fork of the rlcompleter module in the Python standard
4 library. The original enhancements made to rlcompleter have been sent
4 library. The original enhancements made to rlcompleter have been sent
5 upstream and were accepted as of Python 2.3,
5 upstream and were accepted as of Python 2.3,
6
6
7 This module now support a wide variety of completion mechanism both available
7 This module now support a wide variety of completion mechanism both available
8 for normal classic Python code, as well as completer for IPython specific
8 for normal classic Python code, as well as completer for IPython specific
9 Syntax like magics.
9 Syntax like magics.
10
10
11 Latex and Unicode completion
11 Latex and Unicode completion
12 ============================
12 ============================
13
13
14 IPython and compatible frontends not only can complete your code, but can help
14 IPython and compatible frontends not only can complete your code, but can help
15 you to input a wide range of characters. In particular we allow you to insert
15 you to input a wide range of characters. In particular we allow you to insert
16 a unicode character using the tab completion mechanism.
16 a unicode character using the tab completion mechanism.
17
17
18 Forward latex/unicode completion
18 Forward latex/unicode completion
19 --------------------------------
19 --------------------------------
20
20
21 Forward completion allows you to easily type a unicode character using its latex
21 Forward completion allows you to easily type a unicode character using its latex
22 name, or unicode long description. To do so type a backslash follow by the
22 name, or unicode long description. To do so type a backslash follow by the
23 relevant name and press tab:
23 relevant name and press tab:
24
24
25
25
26 Using latex completion:
26 Using latex completion:
27
27
28 .. code::
28 .. code::
29
29
30 \\alpha<tab>
30 \\alpha<tab>
31 Ξ±
31 Ξ±
32
32
33 or using unicode completion:
33 or using unicode completion:
34
34
35
35
36 .. code::
36 .. code::
37
37
38 \\greek small letter alpha<tab>
38 \\greek small letter alpha<tab>
39 Ξ±
39 Ξ±
40
40
41
41
42 Only valid Python identifiers will complete. Combining characters (like arrow or
42 Only valid Python identifiers will complete. Combining characters (like arrow or
43 dots) are also available, unlike latex they need to be put after the their
43 dots) are also available, unlike latex they need to be put after the their
44 counterpart that is to say, `F\\\\vec<tab>` is correct, not `\\\\vec<tab>F`.
44 counterpart that is to say, `F\\\\vec<tab>` is correct, not `\\\\vec<tab>F`.
45
45
46 Some browsers are known to display combining characters incorrectly.
46 Some browsers are known to display combining characters incorrectly.
47
47
48 Backward latex completion
48 Backward latex completion
49 -------------------------
49 -------------------------
50
50
51 It is sometime challenging to know how to type a character, if you are using
51 It is sometime challenging to know how to type a character, if you are using
52 IPython, or any compatible frontend you can prepend backslash to the character
52 IPython, or any compatible frontend you can prepend backslash to the character
53 and press `<tab>` to expand it to its latex form.
53 and press `<tab>` to expand it to its latex form.
54
54
55 .. code::
55 .. code::
56
56
57 \\Ξ±<tab>
57 \\Ξ±<tab>
58 \\alpha
58 \\alpha
59
59
60
60
61 Both forward and backward completions can be deactivated by setting the
61 Both forward and backward completions can be deactivated by setting the
62 ``Completer.backslash_combining_completions`` option to ``False``.
62 ``Completer.backslash_combining_completions`` option to ``False``.
63
63
64
64
65 Experimental
65 Experimental
66 ============
66 ============
67
67
68 Starting with IPython 6.0, this module can make use of the Jedi library to
68 Starting with IPython 6.0, this module can make use of the Jedi library to
69 generate completions both using static analysis of the code, and dynamically
69 generate completions both using static analysis of the code, and dynamically
70 inspecting multiple namespaces. Jedi is an autocompletion and static analysis
70 inspecting multiple namespaces. Jedi is an autocompletion and static analysis
71 for Python. The APIs attached to this new mechanism is unstable and will
71 for Python. The APIs attached to this new mechanism is unstable and will
72 raise unless use in an :any:`provisionalcompleter` context manager.
72 raise unless use in an :any:`provisionalcompleter` context manager.
73
73
74 You will find that the following are experimental:
74 You will find that the following are experimental:
75
75
76 - :any:`provisionalcompleter`
76 - :any:`provisionalcompleter`
77 - :any:`IPCompleter.completions`
77 - :any:`IPCompleter.completions`
78 - :any:`Completion`
78 - :any:`Completion`
79 - :any:`rectify_completions`
79 - :any:`rectify_completions`
80
80
81 .. note::
81 .. note::
82
82
83 better name for :any:`rectify_completions` ?
83 better name for :any:`rectify_completions` ?
84
84
85 We welcome any feedback on these new API, and we also encourage you to try this
85 We welcome any feedback on these new API, and we also encourage you to try this
86 module in debug mode (start IPython with ``--Completer.debug=True``) in order
86 module in debug mode (start IPython with ``--Completer.debug=True``) in order
87 to have extra logging information if :any:`jedi` is crashing, or if current
87 to have extra logging information if :any:`jedi` is crashing, or if current
88 IPython completer pending deprecations are returning results not yet handled
88 IPython completer pending deprecations are returning results not yet handled
89 by :any:`jedi`
89 by :any:`jedi`
90
90
91 Using Jedi for tab completion allow snippets like the following to work without
91 Using Jedi for tab completion allow snippets like the following to work without
92 having to execute any code:
92 having to execute any code:
93
93
94 >>> myvar = ['hello', 42]
94 >>> myvar = ['hello', 42]
95 ... myvar[1].bi<tab>
95 ... myvar[1].bi<tab>
96
96
97 Tab completion will be able to infer that ``myvar[1]`` is a real number without
97 Tab completion will be able to infer that ``myvar[1]`` is a real number without
98 executing any code unlike the previously available ``IPCompleter.greedy``
98 executing any code unlike the previously available ``IPCompleter.greedy``
99 option.
99 option.
100
100
101 Be sure to update :any:`jedi` to the latest stable version or to try the
101 Be sure to update :any:`jedi` to the latest stable version or to try the
102 current development version to get better completions.
102 current development version to get better completions.
103 """
103 """
104
104
105
105
106 # Copyright (c) IPython Development Team.
106 # Copyright (c) IPython Development Team.
107 # Distributed under the terms of the Modified BSD License.
107 # Distributed under the terms of the Modified BSD License.
108 #
108 #
109 # Some of this code originated from rlcompleter in the Python standard library
109 # Some of this code originated from rlcompleter in the Python standard library
110 # Copyright (C) 2001 Python Software Foundation, www.python.org
110 # Copyright (C) 2001 Python Software Foundation, www.python.org
111
111
112
112
113 import __main__
113 import __main__
114 import builtins as builtin_mod
114 import builtins as builtin_mod
115 import glob
115 import glob
116 import time
116 import time
117 import inspect
117 import inspect
118 import itertools
118 import itertools
119 import keyword
119 import keyword
120 import os
120 import os
121 import re
121 import re
122 import sys
122 import sys
123 import unicodedata
123 import unicodedata
124 import string
124 import string
125 import warnings
125 import warnings
126
126
127 from contextlib import contextmanager
127 from contextlib import contextmanager
128 from importlib import import_module
128 from importlib import import_module
129 from typing import Iterator, List, Tuple, Iterable
129 from typing import Iterator, List, Tuple, Iterable
130 from types import SimpleNamespace
130 from types import SimpleNamespace
131
131
132 from traitlets.config.configurable import Configurable
132 from traitlets.config.configurable import Configurable
133 from IPython.core.error import TryNext
133 from IPython.core.error import TryNext
134 from IPython.core.inputtransformer2 import ESC_MAGIC
134 from IPython.core.inputtransformer2 import ESC_MAGIC
135 from IPython.core.latex_symbols import latex_symbols, reverse_latex_symbol
135 from IPython.core.latex_symbols import latex_symbols, reverse_latex_symbol
136 from IPython.core.oinspect import InspectColors
136 from IPython.core.oinspect import InspectColors
137 from IPython.utils import generics
137 from IPython.utils import generics
138 from IPython.utils.dir2 import dir2, get_real_method
138 from IPython.utils.dir2 import dir2, get_real_method
139 from IPython.utils.process import arg_split
139 from IPython.utils.process import arg_split
140 from traitlets import Bool, Enum, observe, Int
140 from traitlets import Bool, Enum, observe, Int
141
141
# skip module doctests
skip_doctest = True
144
144
145 try:
145 try:
146 import jedi
146 import jedi
147 jedi.settings.case_insensitive_completion = False
147 jedi.settings.case_insensitive_completion = False
148 import jedi.api.helpers
148 import jedi.api.helpers
149 import jedi.api.classes
149 import jedi.api.classes
150 JEDI_INSTALLED = True
150 JEDI_INSTALLED = True
151 except ImportError:
151 except ImportError:
152 JEDI_INSTALLED = False
152 JEDI_INSTALLED = False
153 #-----------------------------------------------------------------------------
153 #-----------------------------------------------------------------------------
154 # Globals
154 # Globals
155 #-----------------------------------------------------------------------------
155 #-----------------------------------------------------------------------------
156
156
157 # Public API
157 # Public API
158 __all__ = ['Completer','IPCompleter']
158 __all__ = ['Completer','IPCompleter']
159
159
160 if sys.platform == 'win32':
160 if sys.platform == 'win32':
161 PROTECTABLES = ' '
161 PROTECTABLES = ' '
162 else:
162 else:
163 PROTECTABLES = ' ()[]{}?=\\|;:\'#*"^&'
163 PROTECTABLES = ' ()[]{}?=\\|;:\'#*"^&'
164
164
165 # Protect against returning an enormous number of completions which the frontend
165 # Protect against returning an enormous number of completions which the frontend
166 # may have trouble processing.
166 # may have trouble processing.
167 MATCHES_LIMIT = 500
167 MATCHES_LIMIT = 500
168
168
169 _deprecation_readline_sentinel = object()
169 _deprecation_readline_sentinel = object()
170
170
171
171
class ProvisionalCompleterWarning(FutureWarning):
"""
Exception raised by an experimental feature in this module.

Wrap code in a :any:`provisionalcompleter` context manager if you
are certain you want to use an unstable feature.
"""
pass
180
180
181 warnings.filterwarnings('error', category=ProvisionalCompleterWarning)
181 warnings.filterwarnings('error', category=ProvisionalCompleterWarning)
182
182
183 @contextmanager
183 @contextmanager
184 def provisionalcompleter(action='ignore'):
184 def provisionalcompleter(action='ignore'):
185 """
185 """
186
186
187
187
188 This context manager has to be used in any place where unstable completer
188 This context manager has to be used in any place where unstable completer
189 behavior and API may be called.
189 behavior and API may be called.
190
190
191 >>> with provisionalcompleter():
191 >>> with provisionalcompleter():
192 ... completer.do_experimental_things() # works
192 ... completer.do_experimental_things() # works
193
193
194 >>> completer.do_experimental_things() # raises.
194 >>> completer.do_experimental_things() # raises.
195
195
196 .. note:: Unstable
196 .. note:: Unstable
197
197
198 By using this context manager you agree that the API in use may change
198 By using this context manager you agree that the API in use may change
199 without warning, and that you won't complain if they do so.
199 without warning, and that you won't complain if they do so.
200
200
201 You also understand that, if the API is not to your liking, you should report
201 You also understand that, if the API is not to your liking, you should report
202 a bug to explain your use case upstream.
202 a bug to explain your use case upstream.
203
203
204 We'll be happy to get your feedback, feature requests, and improvements on
204 We'll be happy to get your feedback, feature requests, and improvements on
205 any of the unstable APIs!
205 any of the unstable APIs!
206 """
206 """
207 with warnings.catch_warnings():
207 with warnings.catch_warnings():
208 warnings.filterwarnings(action, category=ProvisionalCompleterWarning)
208 warnings.filterwarnings(action, category=ProvisionalCompleterWarning)
209 yield
209 yield
210
210
211
211
212 def has_open_quotes(s):
212 def has_open_quotes(s):
213 """Return whether a string has open quotes.
213 """Return whether a string has open quotes.
214
214
215 This simply counts whether the number of quote characters of either type in
215 This simply counts whether the number of quote characters of either type in
216 the string is odd.
216 the string is odd.
217
217
218 Returns
218 Returns
219 -------
219 -------
220 If there is an open quote, the quote character is returned. Else, return
220 If there is an open quote, the quote character is returned. Else, return
221 False.
221 False.
222 """
222 """
223 # We check " first, then ', so complex cases with nested quotes will get
223 # We check " first, then ', so complex cases with nested quotes will get
224 # the " to take precedence.
224 # the " to take precedence.
225 if s.count('"') % 2:
225 if s.count('"') % 2:
226 return '"'
226 return '"'
227 elif s.count("'") % 2:
227 elif s.count("'") % 2:
228 return "'"
228 return "'"
229 else:
229 else:
230 return False
230 return False
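# Illustrative examples (not part of the original source): the return value
# doubles as the open quote character, so it can be used directly.
#
#     >>> has_open_quotes("print('hello")
#     "'"
#     >>> has_open_quotes("print('hello')")
#     False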
231
231
232
232
233 def protect_filename(s, protectables=PROTECTABLES):
233 def protect_filename(s, protectables=PROTECTABLES):
234 """Escape a string to protect certain characters."""
234 """Escape a string to protect certain characters."""
235 if set(s) & set(protectables):
235 if set(s) & set(protectables):
236 if sys.platform == "win32":
236 if sys.platform == "win32":
237 return '"' + s + '"'
237 return '"' + s + '"'
238 else:
238 else:
239 return "".join(("\\" + c if c in protectables else c) for c in s)
239 return "".join(("\\" + c if c in protectables else c) for c in s)
240 else:
240 else:
241 return s
241 return s
242
242
243
243
244 def expand_user(path:str) -> Tuple[str, bool, str]:
244 def expand_user(path:str) -> Tuple[str, bool, str]:
245 """Expand ``~``-style usernames in strings.
245 """Expand ``~``-style usernames in strings.
246
246
247 This is similar to :func:`os.path.expanduser`, but it computes and returns
247 This is similar to :func:`os.path.expanduser`, but it computes and returns
248 extra information that will be useful if the input was being used in
248 extra information that will be useful if the input was being used in
249 computing completions, and you wish to return the completions with the
249 computing completions, and you wish to return the completions with the
250 original '~' instead of its expanded value.
250 original '~' instead of its expanded value.
251
251
252 Parameters
252 Parameters
253 ----------
253 ----------
254 path : str
254 path : str
255 String to be expanded. If no ~ is present, the output is the same as the
255 String to be expanded. If no ~ is present, the output is the same as the
256 input.
256 input.
257
257
258 Returns
258 Returns
259 -------
259 -------
260 newpath : str
260 newpath : str
261 Result of ~ expansion in the input path.
261 Result of ~ expansion in the input path.
262 tilde_expand : bool
262 tilde_expand : bool
263 Whether any expansion was performed or not.
263 Whether any expansion was performed or not.
264 tilde_val : str
264 tilde_val : str
265 The value that ~ was replaced with.
265 The value that ~ was replaced with.
266 """
266 """
267 # Default values
267 # Default values
268 tilde_expand = False
268 tilde_expand = False
269 tilde_val = ''
269 tilde_val = ''
270 newpath = path
270 newpath = path
271
271
272 if path.startswith('~'):
272 if path.startswith('~'):
273 tilde_expand = True
273 tilde_expand = True
274 rest = len(path)-1
274 rest = len(path)-1
275 newpath = os.path.expanduser(path)
275 newpath = os.path.expanduser(path)
276 if rest:
276 if rest:
277 tilde_val = newpath[:-rest]
277 tilde_val = newpath[:-rest]
278 else:
278 else:
279 tilde_val = newpath
279 tilde_val = newpath
280
280
281 return newpath, tilde_expand, tilde_val
281 return newpath, tilde_expand, tilde_val
282
282
283
283
284 def compress_user(path:str, tilde_expand:bool, tilde_val:str) -> str:
284 def compress_user(path:str, tilde_expand:bool, tilde_val:str) -> str:
285 """Does the opposite of expand_user, with its outputs.
285 """Does the opposite of expand_user, with its outputs.
286 """
286 """
287 if tilde_expand:
287 if tilde_expand:
288 return path.replace(tilde_val, '~')
288 return path.replace(tilde_val, '~')
289 else:
289 else:
290 return path
290 return path
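# Illustrative round trip (not part of the original source; assumes the home
# directory is /home/user):
#
#     >>> expand_user('~/rest')
#     ('/home/user/rest', True, '/home/user')
#     >>> compress_user('/home/user/rest', True, '/home/user')
#     '~/rest'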
291
291
292
292
293 def completions_sorting_key(word):
293 def completions_sorting_key(word):
294 """key for sorting completions
294 """key for sorting completions
295
295
296 This does several things:
296 This does several things:
297
297
298 - Demote any completions starting with underscores to the end
298 - Demote any completions starting with underscores to the end
299 - Insert any %magic and %%cellmagic completions in the alphabetical order
299 - Insert any %magic and %%cellmagic completions in the alphabetical order
300 by their name
300 by their name
301 """
301 """
302 prio1, prio2 = 0, 0
302 prio1, prio2 = 0, 0
303
303
304 if word.startswith('__'):
304 if word.startswith('__'):
305 prio1 = 2
305 prio1 = 2
306 elif word.startswith('_'):
306 elif word.startswith('_'):
307 prio1 = 1
307 prio1 = 1
308
308
309 if word.endswith('='):
309 if word.endswith('='):
310 prio1 = -1
310 prio1 = -1
311
311
312 if word.startswith('%%'):
312 if word.startswith('%%'):
313 # If there's another % in there, this is something else, so leave it alone
313 # If there's another % in there, this is something else, so leave it alone
314 if not "%" in word[2:]:
314 if not "%" in word[2:]:
315 word = word[2:]
315 word = word[2:]
316 prio2 = 2
316 prio2 = 2
317 elif word.startswith('%'):
317 elif word.startswith('%'):
318 if not "%" in word[1:]:
318 if not "%" in word[1:]:
319 word = word[1:]
319 word = word[1:]
320 prio2 = 1
320 prio2 = 1
321
321
322 return prio1, word, prio2
322 return prio1, word, prio2
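# Illustrative usage (not part of the original source): the key is intended to
# be passed to ``sorted`` so magics sort by name and private names sink to the end.
#
#     >>> sorted(['_private', '%%timeit', 'abs', '__all__'],
#     ...        key=completions_sorting_key)
#     ['abs', '%%timeit', '_private', '__all__']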
323
323
324
324
class _FakeJediCompletion:
"""
This is a workaround to communicate to the UI that Jedi has crashed and to
report a bug. Will be used only if :any:`IPCompleter.debug` is set to true.

Added in IPython 6.0, so it should likely be removed for 7.0.

"""
333
333
334 def __init__(self, name):
334 def __init__(self, name):
335
335
336 self.name = name
336 self.name = name
337 self.complete = name
337 self.complete = name
338 self.type = 'crashed'
338 self.type = 'crashed'
339 self.name_with_symbols = name
339 self.name_with_symbols = name
340 self.signature = ''
340 self.signature = ''
341 self._origin = 'fake'
341 self._origin = 'fake'
342
342
343 def __repr__(self):
343 def __repr__(self):
344 return '<Fake completion object jedi has crashed>'
344 return '<Fake completion object jedi has crashed>'
345
345
346
346
class Completion:
"""
Completion object used and returned by IPython completers.

.. warning:: Unstable

This class is unstable, API may change without warning.
It will also raise unless used in the proper context manager.

This acts as a middle ground :any:`Completion` object between the
:any:`jedi.api.classes.Completion` object and the Prompt Toolkit completion
object. While Jedi needs a lot of information about the evaluator and how the
code should be run/inspected, Prompt Toolkit (and other frontends) mostly
need user-facing information:

- Which range should be replaced by what.
- Some metadata (like the completion type), or meta information to display to
the user.

For debugging purposes we can also store the origin of the completion (``jedi``,
``IPython.python_matches``, ``IPython.magics_matches``...).
"""
369
369
370 __slots__ = ['start', 'end', 'text', 'type', 'signature', '_origin']
370 __slots__ = ['start', 'end', 'text', 'type', 'signature', '_origin']
371
371
372 def __init__(self, start: int, end: int, text: str, *, type: str=None, _origin='', signature='') -> None:
372 def __init__(self, start: int, end: int, text: str, *, type: str=None, _origin='', signature='') -> None:
373 warnings.warn("``Completion`` is a provisional API (as of IPython 6.0). "
373 warnings.warn("``Completion`` is a provisional API (as of IPython 6.0). "
374 "It may change without warnings. "
374 "It may change without warnings. "
375 "Use in corresponding context manager.",
375 "Use in corresponding context manager.",
376 category=ProvisionalCompleterWarning, stacklevel=2)
376 category=ProvisionalCompleterWarning, stacklevel=2)
377
377
378 self.start = start
378 self.start = start
379 self.end = end
379 self.end = end
380 self.text = text
380 self.text = text
381 self.type = type
381 self.type = type
382 self.signature = signature
382 self.signature = signature
383 self._origin = _origin
383 self._origin = _origin
384
384
385 def __repr__(self):
385 def __repr__(self):
386 return '<Completion start=%s end=%s text=%r type=%r, signature=%r,>' % \
386 return '<Completion start=%s end=%s text=%r type=%r, signature=%r,>' % \
387 (self.start, self.end, self.text, self.type or '?', self.signature or '?')
387 (self.start, self.end, self.text, self.type or '?', self.signature or '?')
388
388
def __eq__(self, other) -> bool:
"""
Equality and hash do not hash the type (as some completers may not be
able to infer the type), but are used to (partially) de-duplicate
completions.

Completely de-duplicating completions is a bit trickier than just
comparing, as it depends on surrounding text, which Completions are not
aware of.
"""
return self.start == other.start and \
self.end == other.end and \
self.text == other.text
402
402
403 def __hash__(self):
403 def __hash__(self):
404 return hash((self.start, self.end, self.text))
404 return hash((self.start, self.end, self.text))
405
405
406
406
407 _IC = Iterable[Completion]
407 _IC = Iterable[Completion]
408
408
409
409
410 def _deduplicate_completions(text: str, completions: _IC)-> _IC:
410 def _deduplicate_completions(text: str, completions: _IC)-> _IC:
411 """
411 """
412 Deduplicate a set of completions.
412 Deduplicate a set of completions.
413
413
414 .. warning:: Unstable
414 .. warning:: Unstable
415
415
416 This function is unstable, API may change without warning.
416 This function is unstable, API may change without warning.
417
417
418 Parameters
418 Parameters
419 ----------
419 ----------
420 text: str
420 text: str
421 text that should be completed.
421 text that should be completed.
422 completions: Iterator[Completion]
422 completions: Iterator[Completion]
423 iterator over the completions to deduplicate
423 iterator over the completions to deduplicate
424
424
425 Yields
425 Yields
426 ------
426 ------
427 `Completions` objects
427 `Completions` objects
428
428
429
429
Completions coming from multiple sources may be different but end up having
the same effect when applied to ``text``. If this is the case, this will
consider completions as equal and only emit the first encountered.

Not folded into `completions()` yet for debugging purposes, and to detect when
the IPython completer does return things that Jedi does not, but it should be
at some point.
"""
438 completions = list(completions)
438 completions = list(completions)
439 if not completions:
439 if not completions:
440 return
440 return
441
441
442 new_start = min(c.start for c in completions)
442 new_start = min(c.start for c in completions)
443 new_end = max(c.end for c in completions)
443 new_end = max(c.end for c in completions)
444
444
445 seen = set()
445 seen = set()
446 for c in completions:
446 for c in completions:
447 new_text = text[new_start:c.start] + c.text + text[c.end:new_end]
447 new_text = text[new_start:c.start] + c.text + text[c.end:new_end]
448 if new_text not in seen:
448 if new_text not in seen:
449 yield c
449 yield c
450 seen.add(new_text)
450 seen.add(new_text)
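# Illustrative sketch (not part of the original source): two completions that
# produce the same final text are collapsed to the first one seen.
#
#     >>> with provisionalcompleter():
#     ...     cs = [Completion(0, 2, 'pandas'), Completion(1, 2, 'andas')]
#     ...     [c.text for c in _deduplicate_completions('pa', cs)]
#     ['pandas']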
451
451
452
452
453 def rectify_completions(text: str, completions: _IC, *, _debug=False)->_IC:
453 def rectify_completions(text: str, completions: _IC, *, _debug=False)->_IC:
454 """
454 """
455 Rectify a set of completions to all have the same ``start`` and ``end``
455 Rectify a set of completions to all have the same ``start`` and ``end``
456
456
457 .. warning:: Unstable
457 .. warning:: Unstable
458
458
This function is unstable, API may change without warning.
It will also raise unless used in the proper context manager.
461
461
462 Parameters
462 Parameters
463 ----------
463 ----------
464 text: str
464 text: str
465 text that should be completed.
465 text that should be completed.
466 completions: Iterator[Completion]
466 completions: Iterator[Completion]
467 iterator over the completions to rectify
467 iterator over the completions to rectify
468
468
469
469
:any:`jedi.api.classes.Completion` s returned by Jedi may not have the same start and end, though
the Jupyter Protocol requires them to behave like so. This will readjust
the completions to have the same ``start`` and ``end`` by padding both
extremities with surrounding text.

During stabilisation, this should support a ``_debug`` option to log which
completions are returned by the IPython completer and not found by Jedi, in
order to make upstream bug reports.
"""
479 warnings.warn("`rectify_completions` is a provisional API (as of IPython 6.0). "
479 warnings.warn("`rectify_completions` is a provisional API (as of IPython 6.0). "
480 "It may change without warnings. "
480 "It may change without warnings. "
481 "Use in corresponding context manager.",
481 "Use in corresponding context manager.",
482 category=ProvisionalCompleterWarning, stacklevel=2)
482 category=ProvisionalCompleterWarning, stacklevel=2)
483
483
484 completions = list(completions)
484 completions = list(completions)
485 if not completions:
485 if not completions:
486 return
486 return
487 starts = (c.start for c in completions)
487 starts = (c.start for c in completions)
488 ends = (c.end for c in completions)
488 ends = (c.end for c in completions)
489
489
490 new_start = min(starts)
490 new_start = min(starts)
491 new_end = max(ends)
491 new_end = max(ends)
492
492
493 seen_jedi = set()
493 seen_jedi = set()
494 seen_python_matches = set()
494 seen_python_matches = set()
495 for c in completions:
495 for c in completions:
496 new_text = text[new_start:c.start] + c.text + text[c.end:new_end]
496 new_text = text[new_start:c.start] + c.text + text[c.end:new_end]
497 if c._origin == 'jedi':
497 if c._origin == 'jedi':
498 seen_jedi.add(new_text)
498 seen_jedi.add(new_text)
499 elif c._origin == 'IPCompleter.python_matches':
499 elif c._origin == 'IPCompleter.python_matches':
500 seen_python_matches.add(new_text)
500 seen_python_matches.add(new_text)
501 yield Completion(new_start, new_end, new_text, type=c.type, _origin=c._origin, signature=c.signature)
501 yield Completion(new_start, new_end, new_text, type=c.type, _origin=c._origin, signature=c.signature)
502 diff = seen_python_matches.difference(seen_jedi)
502 diff = seen_python_matches.difference(seen_jedi)
503 if diff and _debug:
503 if diff and _debug:
504 print('IPython.python matches have extras:', diff)
504 print('IPython.python matches have extras:', diff)
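# Illustrative sketch (not part of the original source): completions with
# different ranges are padded so that all share the widest common range.
#
#     >>> with provisionalcompleter():
#     ...     cs = [Completion(4, 6, 'dog'), Completion(0, 6, 'catfish.do')]
#     ...     [c.text for c in rectify_completions('cat.do', cs)]
#     ['cat.dog', 'catfish.do']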
505
505
506
506
507 if sys.platform == 'win32':
507 if sys.platform == 'win32':
508 DELIMS = ' \t\n`!@#$^&*()=+[{]}|;\'",<>?'
508 DELIMS = ' \t\n`!@#$^&*()=+[{]}|;\'",<>?'
509 else:
509 else:
510 DELIMS = ' \t\n`!@#$^&*()=+[{]}\\|;:\'",<>?'
510 DELIMS = ' \t\n`!@#$^&*()=+[{]}\\|;:\'",<>?'
511
511
512 GREEDY_DELIMS = ' =\r\n'
512 GREEDY_DELIMS = ' =\r\n'
513
513
514
514
515 class CompletionSplitter(object):
515 class CompletionSplitter(object):
516 """An object to split an input line in a manner similar to readline.
516 """An object to split an input line in a manner similar to readline.
517
517
518 By having our own implementation, we can expose readline-like completion in
518 By having our own implementation, we can expose readline-like completion in
519 a uniform manner to all frontends. This object only needs to be given the
519 a uniform manner to all frontends. This object only needs to be given the
520 line of text to be split and the cursor position on said line, and it
520 line of text to be split and the cursor position on said line, and it
521 returns the 'word' to be completed on at the cursor after splitting the
521 returns the 'word' to be completed on at the cursor after splitting the
522 entire line.
522 entire line.
523
523
524 What characters are used as splitting delimiters can be controlled by
524 What characters are used as splitting delimiters can be controlled by
525 setting the ``delims`` attribute (this is a property that internally
525 setting the ``delims`` attribute (this is a property that internally
526 automatically builds the necessary regular expression)"""
526 automatically builds the necessary regular expression)"""
527
527
528 # Private interface
528 # Private interface
529
529
530 # A string of delimiter characters. The default value makes sense for
530 # A string of delimiter characters. The default value makes sense for
531 # IPython's most typical usage patterns.
531 # IPython's most typical usage patterns.
532 _delims = DELIMS
532 _delims = DELIMS
533
533
534 # The expression (a normal string) to be compiled into a regular expression
534 # The expression (a normal string) to be compiled into a regular expression
535 # for actual splitting. We store it as an attribute mostly for ease of
535 # for actual splitting. We store it as an attribute mostly for ease of
536 # debugging, since this type of code can be so tricky to debug.
536 # debugging, since this type of code can be so tricky to debug.
537 _delim_expr = None
537 _delim_expr = None
538
538
539 # The regular expression that does the actual splitting
539 # The regular expression that does the actual splitting
540 _delim_re = None
540 _delim_re = None
541
541
542 def __init__(self, delims=None):
542 def __init__(self, delims=None):
543 delims = CompletionSplitter._delims if delims is None else delims
543 delims = CompletionSplitter._delims if delims is None else delims
544 self.delims = delims
544 self.delims = delims
545
545
546 @property
546 @property
547 def delims(self):
547 def delims(self):
548 """Return the string of delimiter characters."""
548 """Return the string of delimiter characters."""
549 return self._delims
549 return self._delims
550
550
551 @delims.setter
551 @delims.setter
552 def delims(self, delims):
552 def delims(self, delims):
553 """Set the delimiters for line splitting."""
553 """Set the delimiters for line splitting."""
554 expr = '[' + ''.join('\\'+ c for c in delims) + ']'
554 expr = '[' + ''.join('\\'+ c for c in delims) + ']'
555 self._delim_re = re.compile(expr)
555 self._delim_re = re.compile(expr)
556 self._delims = delims
556 self._delims = delims
557 self._delim_expr = expr
557 self._delim_expr = expr
558
558
559 def split_line(self, line, cursor_pos=None):
559 def split_line(self, line, cursor_pos=None):
560 """Split a line of text with a cursor at the given position.
560 """Split a line of text with a cursor at the given position.
561 """
561 """
562 l = line if cursor_pos is None else line[:cursor_pos]
562 l = line if cursor_pos is None else line[:cursor_pos]
563 return self._delim_re.split(l)[-1]
563 return self._delim_re.split(l)[-1]
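# Illustrative usage (not part of the original source): only the word under the
# cursor is returned, split on the delimiters above.
#
#     >>> CompletionSplitter().split_line('print myvar.bi')
#     'myvar.bi'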
564
564
565
565
566
566
567 class Completer(Configurable):
567 class Completer(Configurable):
568
568
greedy = Bool(False,
help="""Activate greedy completion
PENDING DEPRECATION. This is now mostly taken care of with Jedi.

This will enable completion on elements of lists, results of function calls, etc.,
but can be unsafe because the code is actually evaluated on TAB.
"""
).tag(config=True)
577
577
578 use_jedi = Bool(default_value=JEDI_INSTALLED,
578 use_jedi = Bool(default_value=JEDI_INSTALLED,
579 help="Experimental: Use Jedi to generate autocompletions. "
579 help="Experimental: Use Jedi to generate autocompletions. "
580 "Default to True if jedi is installed.").tag(config=True)
580 "Default to True if jedi is installed.").tag(config=True)
581
581
jedi_compute_type_timeout = Int(default_value=400,
help="""Experimental: restrict time (in milliseconds) during which Jedi can compute types.
Set to 0 to stop computing types. Non-zero values lower than 100ms may hurt
performance by preventing jedi from building its cache.
""").tag(config=True)
587
587
588 debug = Bool(default_value=False,
588 debug = Bool(default_value=False,
589 help='Enable debug for the Completer. Mostly print extra '
589 help='Enable debug for the Completer. Mostly print extra '
590 'information for experimental jedi integration.')\
590 'information for experimental jedi integration.')\
591 .tag(config=True)
591 .tag(config=True)
592
592
593 backslash_combining_completions = Bool(True,
593 backslash_combining_completions = Bool(True,
594 help="Enable unicode completions, e.g. \\alpha<tab> . "
594 help="Enable unicode completions, e.g. \\alpha<tab> . "
595 "Includes completion of latex commands, unicode names, and expanding "
595 "Includes completion of latex commands, unicode names, and expanding "
596 "unicode characters back to latex commands.").tag(config=True)
596 "unicode characters back to latex commands.").tag(config=True)
597
597
598
598
599
599
600 def __init__(self, namespace=None, global_namespace=None, **kwargs):
600 def __init__(self, namespace=None, global_namespace=None, **kwargs):
601 """Create a new completer for the command line.
601 """Create a new completer for the command line.
602
602
603 Completer(namespace=ns, global_namespace=ns2) -> completer instance.
603 Completer(namespace=ns, global_namespace=ns2) -> completer instance.
604
604
605 If unspecified, the default namespace where completions are performed
605 If unspecified, the default namespace where completions are performed
606 is __main__ (technically, __main__.__dict__). Namespaces should be
606 is __main__ (technically, __main__.__dict__). Namespaces should be
607 given as dictionaries.
607 given as dictionaries.
608
608
609 An optional second namespace can be given. This allows the completer
609 An optional second namespace can be given. This allows the completer
610 to handle cases where both the local and global scopes need to be
610 to handle cases where both the local and global scopes need to be
611 distinguished.
611 distinguished.
612 """
612 """
613
613
614 # Don't bind to namespace quite yet, but flag whether the user wants a
614 # Don't bind to namespace quite yet, but flag whether the user wants a
615 # specific namespace or to use __main__.__dict__. This will allow us
615 # specific namespace or to use __main__.__dict__. This will allow us
616 # to bind to __main__.__dict__ at completion time, not now.
616 # to bind to __main__.__dict__ at completion time, not now.
617 if namespace is None:
617 if namespace is None:
618 self.use_main_ns = True
618 self.use_main_ns = True
619 else:
619 else:
620 self.use_main_ns = False
620 self.use_main_ns = False
621 self.namespace = namespace
621 self.namespace = namespace
622
622
623 # The global namespace, if given, can be bound directly
623 # The global namespace, if given, can be bound directly
624 if global_namespace is None:
624 if global_namespace is None:
625 self.global_namespace = {}
625 self.global_namespace = {}
626 else:
626 else:
627 self.global_namespace = global_namespace
627 self.global_namespace = global_namespace
628
628
629 self.custom_matchers = []
630
629 super(Completer, self).__init__(**kwargs)
631 super(Completer, self).__init__(**kwargs)
630
632
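# Illustrative sketch (not part of the original source): ``custom_matchers``,
# added above, is where matchers registered through the shell's
# ``set_custom_completer`` machinery end up. Under this sketch's assumption, a
# matcher is simply a callable taking the text being completed and returning a
# list of candidate strings, e.g.:
#
#     >>> def fruit_matcher(text):
#     ...     # hypothetical matcher, for illustration only
#     ...     return [f for f in ('apple', 'apricot', 'banana') if f.startswith(text)]
#     >>> get_ipython().Completer.custom_matchers.append(fruit_matcher)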
631 def complete(self, text, state):
633 def complete(self, text, state):
632 """Return the next possible completion for 'text'.
634 """Return the next possible completion for 'text'.
633
635
634 This is called successively with state == 0, 1, 2, ... until it
636 This is called successively with state == 0, 1, 2, ... until it
635 returns None. The completion should begin with 'text'.
637 returns None. The completion should begin with 'text'.
636
638
637 """
639 """
638 if self.use_main_ns:
640 if self.use_main_ns:
639 self.namespace = __main__.__dict__
641 self.namespace = __main__.__dict__
640
642
641 if state == 0:
643 if state == 0:
642 if "." in text:
644 if "." in text:
643 self.matches = self.attr_matches(text)
645 self.matches = self.attr_matches(text)
644 else:
646 else:
645 self.matches = self.global_matches(text)
647 self.matches = self.global_matches(text)
646 try:
648 try:
647 return self.matches[state]
649 return self.matches[state]
648 except IndexError:
650 except IndexError:
649 return None
651 return None
650
652
651 def global_matches(self, text):
653 def global_matches(self, text):
652 """Compute matches when text is a simple name.
654 """Compute matches when text is a simple name.
653
655
654 Return a list of all keywords, built-in functions and names currently
656 Return a list of all keywords, built-in functions and names currently
655 defined in self.namespace or self.global_namespace that match.
657 defined in self.namespace or self.global_namespace that match.
656
658
657 """
659 """
658 matches = []
660 matches = []
659 match_append = matches.append
661 match_append = matches.append
660 n = len(text)
662 n = len(text)
661 for lst in [keyword.kwlist,
663 for lst in [keyword.kwlist,
662 builtin_mod.__dict__.keys(),
664 builtin_mod.__dict__.keys(),
663 self.namespace.keys(),
665 self.namespace.keys(),
664 self.global_namespace.keys()]:
666 self.global_namespace.keys()]:
665 for word in lst:
667 for word in lst:
666 if word[:n] == text and word != "__builtins__":
668 if word[:n] == text and word != "__builtins__":
667 match_append(word)
669 match_append(word)
668
670
669 snake_case_re = re.compile(r"[^_]+(_[^_]+)+?\Z")
671 snake_case_re = re.compile(r"[^_]+(_[^_]+)+?\Z")
670 for lst in [self.namespace.keys(),
672 for lst in [self.namespace.keys(),
671 self.global_namespace.keys()]:
673 self.global_namespace.keys()]:
672 shortened = {"_".join([sub[0] for sub in word.split('_')]) : word
674 shortened = {"_".join([sub[0] for sub in word.split('_')]) : word
673 for word in lst if snake_case_re.match(word)}
675 for word in lst if snake_case_re.match(word)}
674 for word in shortened.keys():
676 for word in shortened.keys():
675 if word[:n] == text and word != "__builtins__":
677 if word[:n] == text and word != "__builtins__":
676 match_append(shortened[word])
678 match_append(shortened[word])
677 return matches
679 return matches
678
680
679 def attr_matches(self, text):
681 def attr_matches(self, text):
680 """Compute matches when text contains a dot.
682 """Compute matches when text contains a dot.
681
683
682 Assuming the text is of the form NAME.NAME....[NAME], and is
684 Assuming the text is of the form NAME.NAME....[NAME], and is
683 evaluatable in self.namespace or self.global_namespace, it will be
685 evaluatable in self.namespace or self.global_namespace, it will be
684 evaluated and its attributes (as revealed by dir()) are used as
686 evaluated and its attributes (as revealed by dir()) are used as
685 possible completions. (For class instances, class members are
687 possible completions. (For class instances, class members are
686 also considered.)
688 also considered.)
687
689
688 WARNING: this can still invoke arbitrary C code, if an object
690 WARNING: this can still invoke arbitrary C code, if an object
689 with a __getattr__ hook is evaluated.
691 with a __getattr__ hook is evaluated.
690
692
691 """
693 """
692
694
693 # Another option, seems to work great. Catches things like ''.<tab>
695 # Another option, seems to work great. Catches things like ''.<tab>
694 m = re.match(r"(\S+(\.\w+)*)\.(\w*)$", text)
696 m = re.match(r"(\S+(\.\w+)*)\.(\w*)$", text)
695
697
696 if m:
698 if m:
697 expr, attr = m.group(1, 3)
699 expr, attr = m.group(1, 3)
698 elif self.greedy:
700 elif self.greedy:
699 m2 = re.match(r"(.+)\.(\w*)$", self.line_buffer)
701 m2 = re.match(r"(.+)\.(\w*)$", self.line_buffer)
700 if not m2:
702 if not m2:
701 return []
703 return []
702 expr, attr = m2.group(1,2)
704 expr, attr = m2.group(1,2)
703 else:
705 else:
704 return []
706 return []
705
707
706 try:
708 try:
707 obj = eval(expr, self.namespace)
709 obj = eval(expr, self.namespace)
708 except:
710 except:
709 try:
711 try:
710 obj = eval(expr, self.global_namespace)
712 obj = eval(expr, self.global_namespace)
711 except:
713 except:
712 return []
714 return []
713
715
714 if self.limit_to__all__ and hasattr(obj, '__all__'):
716 if self.limit_to__all__ and hasattr(obj, '__all__'):
715 words = get__all__entries(obj)
717 words = get__all__entries(obj)
716 else:
718 else:
717 words = dir2(obj)
719 words = dir2(obj)
718
720
719 try:
721 try:
720 words = generics.complete_object(obj, words)
722 words = generics.complete_object(obj, words)
721 except TryNext:
723 except TryNext:
722 pass
724 pass
723 except AssertionError:
725 except AssertionError:
724 raise
726 raise
725 except Exception:
727 except Exception:
726 # Silence errors from completion function
728 # Silence errors from completion function
727 #raise # dbg
729 #raise # dbg
728 pass
730 pass
729 # Build match list to return
731 # Build match list to return
730 n = len(attr)
732 n = len(attr)
731 return [u"%s.%s" % (expr, w) for w in words if w[:n] == attr ]
733 return [u"%s.%s" % (expr, w) for w in words if w[:n] == attr ]
732
734
733
735
734 def get__all__entries(obj):
736 def get__all__entries(obj):
735 """returns the strings in the __all__ attribute"""
737 """returns the strings in the __all__ attribute"""
736 try:
738 try:
737 words = getattr(obj, '__all__')
739 words = getattr(obj, '__all__')
738 except:
740 except:
739 return []
741 return []
740
742
741 return [w for w in words if isinstance(w, str)]
743 return [w for w in words if isinstance(w, str)]
742
744
743
745
744 def match_dict_keys(keys: List[str], prefix: str, delims: str):
746 def match_dict_keys(keys: List[str], prefix: str, delims: str):
745 """Used by dict_key_matches, matching the prefix to a list of keys
747 """Used by dict_key_matches, matching the prefix to a list of keys
746
748
747 Parameters
749 Parameters
748 ==========
750 ==========
749 keys:
751 keys:
750 list of keys in dictionary currently being completed.
752 list of keys in dictionary currently being completed.
751 prefix:
753 prefix:
752 Part of the text already typed by the user. e.g. `mydict[b'fo`
754 Part of the text already typed by the user. e.g. `mydict[b'fo`
753 delims:
755 delims:
754 String of delimiters to consider when finding the current key.
756 String of delimiters to consider when finding the current key.
755
757
756 Returns
758 Returns
757 =======
759 =======
758
760
A tuple of three elements: ``quote``, ``token_start``, ``matched``, with
``quote`` being the quote that needs to be used to close the current string,
``token_start`` the position where the replacement should start occurring, and
``matched`` a list of replacement/completion strings.

"""
765 if not prefix:
767 if not prefix:
766 return None, 0, [repr(k) for k in keys
768 return None, 0, [repr(k) for k in keys
767 if isinstance(k, (str, bytes))]
769 if isinstance(k, (str, bytes))]
768 quote_match = re.search('["\']', prefix)
770 quote_match = re.search('["\']', prefix)
769 quote = quote_match.group()
771 quote = quote_match.group()
770 try:
772 try:
771 prefix_str = eval(prefix + quote, {})
773 prefix_str = eval(prefix + quote, {})
772 except Exception:
774 except Exception:
773 return None, 0, []
775 return None, 0, []
774
776
775 pattern = '[^' + ''.join('\\' + c for c in delims) + ']*$'
777 pattern = '[^' + ''.join('\\' + c for c in delims) + ']*$'
776 token_match = re.search(pattern, prefix, re.UNICODE)
778 token_match = re.search(pattern, prefix, re.UNICODE)
777 token_start = token_match.start()
779 token_start = token_match.start()
778 token_prefix = token_match.group()
780 token_prefix = token_match.group()
779
781
780 matched = []
782 matched = []
781 for key in keys:
783 for key in keys:
782 try:
784 try:
783 if not key.startswith(prefix_str):
785 if not key.startswith(prefix_str):
784 continue
786 continue
785 except (AttributeError, TypeError, UnicodeError):
787 except (AttributeError, TypeError, UnicodeError):
786 # Python 3+ TypeError on b'a'.startswith('a') or vice-versa
788 # Python 3+ TypeError on b'a'.startswith('a') or vice-versa
787 continue
789 continue
788
790
789 # reformat remainder of key to begin with prefix
791 # reformat remainder of key to begin with prefix
790 rem = key[len(prefix_str):]
792 rem = key[len(prefix_str):]
791 # force repr wrapped in '
793 # force repr wrapped in '
792 rem_repr = repr(rem + '"') if isinstance(rem, str) else repr(rem + b'"')
794 rem_repr = repr(rem + '"') if isinstance(rem, str) else repr(rem + b'"')
793 if rem_repr.startswith('u') and prefix[0] not in 'uU':
795 if rem_repr.startswith('u') and prefix[0] not in 'uU':
794 # Found key is unicode, but prefix is Py2 string.
796 # Found key is unicode, but prefix is Py2 string.
795 # Therefore attempt to interpret key as string.
797 # Therefore attempt to interpret key as string.
796 try:
798 try:
797 rem_repr = repr(rem.encode('ascii') + '"')
799 rem_repr = repr(rem.encode('ascii') + '"')
798 except UnicodeEncodeError:
800 except UnicodeEncodeError:
799 continue
801 continue
800
802
801 rem_repr = rem_repr[1 + rem_repr.index("'"):-2]
803 rem_repr = rem_repr[1 + rem_repr.index("'"):-2]
802 if quote == '"':
804 if quote == '"':
803 # The entered prefix is quoted with ",
805 # The entered prefix is quoted with ",
804 # but the match is quoted with '.
806 # but the match is quoted with '.
805 # A contained " hence needs escaping for comparison:
807 # A contained " hence needs escaping for comparison:
806 rem_repr = rem_repr.replace('"', '\\"')
808 rem_repr = rem_repr.replace('"', '\\"')
807
809
808 # then reinsert prefix from start of token
810 # then reinsert prefix from start of token
809 matched.append('%s%s' % (token_prefix, rem_repr))
811 matched.append('%s%s' % (token_prefix, rem_repr))
810 return quote, token_start, matched
812 return quote, token_start, matched
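# Illustrative usage (not part of the original source), completing a dict key
# from the partially typed ``d['f``:
#
#     >>> match_dict_keys(['foo', 'bar', b'foo'], "'f", DELIMS)
#     ("'", 1, ['foo'])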
811
813
812
814
813 def cursor_to_position(text:str, line:int, column:int)->int:
815 def cursor_to_position(text:str, line:int, column:int)->int:
814 """
816 """
815
817
816 Convert the (line,column) position of the cursor in text to an offset in a
818 Convert the (line,column) position of the cursor in text to an offset in a
817 string.
819 string.
818
820
819 Parameters
821 Parameters
820 ----------
822 ----------
821
823
822 text : str
824 text : str
823 The text in which to calculate the cursor offset
825 The text in which to calculate the cursor offset
824 line : int
826 line : int
825 Line of the cursor; 0-indexed
827 Line of the cursor; 0-indexed
826 column : int
828 column : int
827 Column of the cursor 0-indexed
829 Column of the cursor 0-indexed
828
830
829 Return
831 Return
830 ------
832 ------
831 Position of the cursor in ``text``, 0-indexed.
833 Position of the cursor in ``text``, 0-indexed.
832
834
833 See Also
835 See Also
834 --------
836 --------
835 position_to_cursor: reciprocal of this function
837 position_to_cursor: reciprocal of this function
836
838
837 """
839 """
838 lines = text.split('\n')
840 lines = text.split('\n')
839 assert line <= len(lines), '{} <= {}'.format(str(line), str(len(lines)))
841 assert line <= len(lines), '{} <= {}'.format(str(line), str(len(lines)))
840
842
841 return sum(len(l) + 1 for l in lines[:line]) + column
843 return sum(len(l) + 1 for l in lines[:line]) + column
842
844
843 def position_to_cursor(text:str, offset:int)->Tuple[int, int]:
845 def position_to_cursor(text:str, offset:int)->Tuple[int, int]:
844 """
846 """
Convert the position of the cursor in text (0-indexed) to a line
number (0-indexed) and a column number (0-indexed) pair.
847
849
848 Position should be a valid position in ``text``.
850 Position should be a valid position in ``text``.
849
851
850 Parameters
852 Parameters
851 ----------
853 ----------
852
854
853 text : str
855 text : str
854 The text in which to calculate the cursor offset
856 The text in which to calculate the cursor offset
855 offset : int
857 offset : int
856 Position of the cursor in ``text``, 0-indexed.
858 Position of the cursor in ``text``, 0-indexed.
857
859
858 Return
860 Return
859 ------
861 ------
860 (line, column) : (int, int)
862 (line, column) : (int, int)
861 Line of the cursor; 0-indexed, column of the cursor 0-indexed
863 Line of the cursor; 0-indexed, column of the cursor 0-indexed
862
864
863
865
864 See Also
866 See Also
865 --------
867 --------
866 cursor_to_position : reciprocal of this function
868 cursor_to_position : reciprocal of this function
867
869
868
870
869 """
871 """
870
872
871 assert 0 <= offset <= len(text) , "0 <= %s <= %s" % (offset , len(text))
873 assert 0 <= offset <= len(text) , "0 <= %s <= %s" % (offset , len(text))
872
874
873 before = text[:offset]
875 before = text[:offset]
blines = before.split('\n')  # ! splitlines trims trailing \n
875 line = before.count('\n')
877 line = before.count('\n')
876 col = len(blines[-1])
878 col = len(blines[-1])
877 return line, col
879 return line, col
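# Illustrative round trip (not part of the original source):
#
#     >>> text = 'ab\ncd'
#     >>> cursor_to_position(text, 1, 1)
#     4
#     >>> position_to_cursor(text, 4)
#     (1, 1)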
878
880
879
881
880 def _safe_isinstance(obj, module, class_name):
882 def _safe_isinstance(obj, module, class_name):
881 """Checks if obj is an instance of module.class_name if loaded
883 """Checks if obj is an instance of module.class_name if loaded
882 """
884 """
883 return (module in sys.modules and
885 return (module in sys.modules and
884 isinstance(obj, getattr(import_module(module), class_name)))
886 isinstance(obj, getattr(import_module(module), class_name)))
885
887
886
888
887 def back_unicode_name_matches(text):
889 def back_unicode_name_matches(text):
888 u"""Match unicode characters back to unicode name
890 u"""Match unicode characters back to unicode name
889
891
This does ``☃`` -> ``\\snowman``

Note that snowman is not a valid python3 combining character but will be expanded.
Though it will not recombine back to the snowman character by the completion machinery.

This will also not back-complete standard sequences like \\n, \\b ...
896
898
897 Used on Python 3 only.
899 Used on Python 3 only.
898 """
900 """
899 if len(text)<2:
901 if len(text)<2:
900 return u'', ()
902 return u'', ()
901 maybe_slash = text[-2]
903 maybe_slash = text[-2]
902 if maybe_slash != '\\':
904 if maybe_slash != '\\':
903 return u'', ()
905 return u'', ()
904
906
905 char = text[-1]
907 char = text[-1]
906 # no expand on quote for completion in strings.
908 # no expand on quote for completion in strings.
907 # nor backcomplete standard ascii keys
909 # nor backcomplete standard ascii keys
908 if char in string.ascii_letters or char in ['"',"'"]:
910 if char in string.ascii_letters or char in ['"',"'"]:
909 return u'', ()
911 return u'', ()
910 try :
912 try :
911 unic = unicodedata.name(char)
913 unic = unicodedata.name(char)
912 return '\\'+char,['\\'+unic]
914 return '\\'+char,['\\'+unic]
913 except KeyError:
915 except KeyError:
914 pass
916 pass
915 return u'', ()
917 return u'', ()
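# Illustrative sketch of the behaviour documented above (exact names depend on the
# unicodedata tables of the running Python):
#
#     back_unicode_name_matches('x = \\β˜ƒ')   # -> ('\\β˜ƒ', ['\\SNOWMAN'])
#     back_unicode_name_matches('\\n')        # -> ('', ()), escape sequences are left alone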
916
918
917 def back_latex_name_matches(text:str):
919 def back_latex_name_matches(text:str):
918 """Match latex characters back to unicode name
920 """Match latex characters back to unicode name
919
921
920 This does ``\\β„΅`` -> ``\\aleph``
922 This does ``\\β„΅`` -> ``\\aleph``
921
923
922 Used on Python 3 only.
924 Used on Python 3 only.
923 """
925 """
924 if len(text)<2:
926 if len(text)<2:
925 return u'', ()
927 return u'', ()
926 maybe_slash = text[-2]
928 maybe_slash = text[-2]
927 if maybe_slash != '\\':
929 if maybe_slash != '\\':
928 return u'', ()
930 return u'', ()
929
931
930
932
931 char = text[-1]
933 char = text[-1]
932 # no expand on quote for completion in strings.
934 # no expand on quote for completion in strings.
933 # nor backcomplete standard ascii keys
935 # nor backcomplete standard ascii keys
934 if char in string.ascii_letters or char in ['"',"'"]:
936 if char in string.ascii_letters or char in ['"',"'"]:
935 return u'', ()
937 return u'', ()
936 try :
938 try :
937 latex = reverse_latex_symbol[char]
939 latex = reverse_latex_symbol[char]
938 # '\\' replace the \ as well
940 # '\\' replace the \ as well
939 return '\\'+char,[latex]
941 return '\\'+char,[latex]
940 except KeyError:
942 except KeyError:
941 pass
943 pass
942 return u'', ()
944 return u'', ()
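# Illustrative sketch, mirroring the aleph example in the docstring (assumes
# reverse_latex_symbol maps the character back to its latex name):
#
#     back_latex_name_matches('x = \\β„΅')   # -> ('\\β„΅', ['\\aleph'])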
943
945
944
946
945 def _formatparamchildren(parameter) -> str:
947 def _formatparamchildren(parameter) -> str:
946 """
948 """
947 Get parameter name and value from Jedi Private API
949 Get parameter name and value from Jedi Private API
948
950
949 Jedi does not expose a simple way to get `param=value` from its API.
951 Jedi does not expose a simple way to get `param=value` from its API.
950
952
951 Parameter
953 Parameter
952 =========
954 =========
953
955
954 parameter:
956 parameter:
955 Jedi's function `Param`
957 Jedi's function `Param`
956
958
957 Returns
959 Returns
958 =======
960 =======
959
961
960 A string like 'a', 'b=1', '*args', '**kwargs'
962 A string like 'a', 'b=1', '*args', '**kwargs'
961
963
962
964
963 """
965 """
964 description = parameter.description
966 description = parameter.description
965 if not description.startswith('param '):
967 if not description.startswith('param '):
966 raise ValueError('Jedi function parameter description has changed format. '
968 raise ValueError('Jedi function parameter description has changed format. '
967 'Expected "param ...", found %r.' % description)
969 'Expected "param ...", found %r.' % description)
968 return description[6:]
970 return description[6:]
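# Illustrative sketch with a hypothetical stand-in for jedi's Param object, whose
# real ``description`` format is owned by jedi's private API:
#
#     from collections import namedtuple
#     FakeParam = namedtuple('FakeParam', ['description'])
#     _formatparamchildren(FakeParam('param b=1'))   # -> 'b=1'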
969
971
970 def _make_signature(completion)-> str:
972 def _make_signature(completion)-> str:
971 """
973 """
972 Make the signature from a jedi completion
974 Make the signature from a jedi completion
973
975
974 Parameter
976 Parameter
975 =========
977 =========
976
978
977 completion: jedi.Completion
979 completion: jedi.Completion
978 a jedi ``Completion`` object, expected to complete to a callable
980 a jedi ``Completion`` object, expected to complete to a callable
979
981
980 Returns
982 Returns
981 =======
983 =======
982
984
983 a string consisting of the function signature, with the parentheses but
985 a string consisting of the function signature, with the parentheses but
984 without the function name. example:
986 without the function name. example:
985 `(a, *args, b=1, **kwargs)`
987 `(a, *args, b=1, **kwargs)`
986
988
987 """
989 """
988
990
989 return '(%s)'% ', '.join([f for f in (_formatparamchildren(p) for p in completion.params) if f])
991 return '(%s)'% ', '.join([f for f in (_formatparamchildren(p) for p in completion.params) if f])
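# Illustrative sketch (hypothetical ``completion`` object): if its params format
# to ['a', '*args', 'b=1', '**kwargs'], the helper returns the docstring example.
#
#     _make_signature(completion)   # -> '(a, *args, b=1, **kwargs)'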
990
992
991 class IPCompleter(Completer):
993 class IPCompleter(Completer):
992 """Extension of the completer class with IPython-specific features"""
994 """Extension of the completer class with IPython-specific features"""
993
995
994 _names = None
996 _names = None
995
997
996 @observe('greedy')
998 @observe('greedy')
997 def _greedy_changed(self, change):
999 def _greedy_changed(self, change):
998 """update the splitter and readline delims when greedy is changed"""
1000 """update the splitter and readline delims when greedy is changed"""
999 if change['new']:
1001 if change['new']:
1000 self.splitter.delims = GREEDY_DELIMS
1002 self.splitter.delims = GREEDY_DELIMS
1001 else:
1003 else:
1002 self.splitter.delims = DELIMS
1004 self.splitter.delims = DELIMS
1003
1005
1004 dict_keys_only = Bool(False,
1006 dict_keys_only = Bool(False,
1005 help="""Whether to show dict key matches only""")
1007 help="""Whether to show dict key matches only""")
1006
1008
1007 merge_completions = Bool(True,
1009 merge_completions = Bool(True,
1008 help="""Whether to merge completion results into a single list
1010 help="""Whether to merge completion results into a single list
1009
1011
1010 If False, only the completion results from the first non-empty
1012 If False, only the completion results from the first non-empty
1011 completer will be returned.
1013 completer will be returned.
1012 """
1014 """
1013 ).tag(config=True)
1015 ).tag(config=True)
1014 omit__names = Enum((0,1,2), default_value=2,
1016 omit__names = Enum((0,1,2), default_value=2,
1015 help="""Instruct the completer to omit private method names
1017 help="""Instruct the completer to omit private method names
1016
1018
1017 Specifically, when completing on ``object.<tab>``.
1019 Specifically, when completing on ``object.<tab>``.
1018
1020
1019 When 2 [default]: all names that start with '_' will be excluded.
1021 When 2 [default]: all names that start with '_' will be excluded.
1020
1022
1021 When 1: all 'magic' names (``__foo__``) will be excluded.
1023 When 1: all 'magic' names (``__foo__``) will be excluded.
1022
1024
1023 When 0: nothing will be excluded.
1025 When 0: nothing will be excluded.
1024 """
1026 """
1025 ).tag(config=True)
1027 ).tag(config=True)
1026 limit_to__all__ = Bool(False,
1028 limit_to__all__ = Bool(False,
1027 help="""
1029 help="""
1028 DEPRECATED as of version 5.0.
1030 DEPRECATED as of version 5.0.
1029
1031
1030 Instruct the completer to use __all__ for the completion
1032 Instruct the completer to use __all__ for the completion
1031
1033
1032 Specifically, when completing on ``object.<tab>``.
1034 Specifically, when completing on ``object.<tab>``.
1033
1035
1034 When True: only those names in obj.__all__ will be included.
1036 When True: only those names in obj.__all__ will be included.
1035
1037
1036 When False [default]: the __all__ attribute is ignored
1038 When False [default]: the __all__ attribute is ignored
1037 """,
1039 """,
1038 ).tag(config=True)
1040 ).tag(config=True)
1039
1041
1040 @observe('limit_to__all__')
1042 @observe('limit_to__all__')
1041 def _limit_to_all_changed(self, change):
1043 def _limit_to_all_changed(self, change):
1042 warnings.warn('`IPython.core.IPCompleter.limit_to__all__` configuration '
1044 warnings.warn('`IPython.core.IPCompleter.limit_to__all__` configuration '
1043 'value has been deprecated since IPython 5.0; it will be made to have '
1045 'value has been deprecated since IPython 5.0; it will be made to have '
1044 'no effect and then removed in a future version of IPython.',
1046 'no effect and then removed in a future version of IPython.',
1045 UserWarning)
1047 UserWarning)
1046
1048
1047 def __init__(self, shell=None, namespace=None, global_namespace=None,
1049 def __init__(self, shell=None, namespace=None, global_namespace=None,
1048 use_readline=_deprecation_readline_sentinel, config=None, **kwargs):
1050 use_readline=_deprecation_readline_sentinel, config=None, **kwargs):
1049 """IPCompleter() -> completer
1051 """IPCompleter() -> completer
1050
1052
1051 Return a completer object.
1053 Return a completer object.
1052
1054
1053 Parameters
1055 Parameters
1054 ----------
1056 ----------
1055
1057
1056 shell
1058 shell
1057 a pointer to the ipython shell itself. This is needed
1059 a pointer to the ipython shell itself. This is needed
1058 because this completer knows about magic functions, and those can
1060 because this completer knows about magic functions, and those can
1059 only be accessed via the ipython instance.
1061 only be accessed via the ipython instance.
1060
1062
1061 namespace : dict, optional
1063 namespace : dict, optional
1062 an optional dict where completions are performed.
1064 an optional dict where completions are performed.
1063
1065
1064 global_namespace : dict, optional
1066 global_namespace : dict, optional
1065 secondary optional dict for completions, to
1067 secondary optional dict for completions, to
1066 handle cases (such as IPython embedded inside functions) where
1068 handle cases (such as IPython embedded inside functions) where
1067 both Python scopes are visible.
1069 both Python scopes are visible.
1068
1070
1069 use_readline : bool, optional
1071 use_readline : bool, optional
1070 DEPRECATED, ignored since IPython 6.0, will have no effect
1072 DEPRECATED, ignored since IPython 6.0, will have no effect
1071 """
1073 """
1072
1074
1073 self.magic_escape = ESC_MAGIC
1075 self.magic_escape = ESC_MAGIC
1074 self.splitter = CompletionSplitter()
1076 self.splitter = CompletionSplitter()
1075
1077
1076 if use_readline is not _deprecation_readline_sentinel:
1078 if use_readline is not _deprecation_readline_sentinel:
1077 warnings.warn('The `use_readline` parameter is deprecated and ignored since IPython 6.0.',
1079 warnings.warn('The `use_readline` parameter is deprecated and ignored since IPython 6.0.',
1078 DeprecationWarning, stacklevel=2)
1080 DeprecationWarning, stacklevel=2)
1079
1081
1080 # _greedy_changed() depends on splitter and readline being defined:
1082 # _greedy_changed() depends on splitter and readline being defined:
1081 Completer.__init__(self, namespace=namespace, global_namespace=global_namespace,
1083 Completer.__init__(self, namespace=namespace, global_namespace=global_namespace,
1082 config=config, **kwargs)
1084 config=config, **kwargs)
1083
1085
1084 # List where completion matches will be stored
1086 # List where completion matches will be stored
1085 self.matches = []
1087 self.matches = []
1086 self.shell = shell
1088 self.shell = shell
1087 # Regexp to split filenames with spaces in them
1089 # Regexp to split filenames with spaces in them
1088 self.space_name_re = re.compile(r'([^\\] )')
1090 self.space_name_re = re.compile(r'([^\\] )')
1089 # Hold a local ref. to glob.glob for speed
1091 # Hold a local ref. to glob.glob for speed
1090 self.glob = glob.glob
1092 self.glob = glob.glob
1091
1093
1092 # Determine if we are running on 'dumb' terminals, like (X)Emacs
1094 # Determine if we are running on 'dumb' terminals, like (X)Emacs
1093 # buffers, to avoid completion problems.
1095 # buffers, to avoid completion problems.
1094 term = os.environ.get('TERM','xterm')
1096 term = os.environ.get('TERM','xterm')
1095 self.dumb_terminal = term in ['dumb','emacs']
1097 self.dumb_terminal = term in ['dumb','emacs']
1096
1098
1097 # Special handling of backslashes needed in win32 platforms
1099 # Special handling of backslashes needed in win32 platforms
1098 if sys.platform == "win32":
1100 if sys.platform == "win32":
1099 self.clean_glob = self._clean_glob_win32
1101 self.clean_glob = self._clean_glob_win32
1100 else:
1102 else:
1101 self.clean_glob = self._clean_glob
1103 self.clean_glob = self._clean_glob
1102
1104
1103 #regexp to parse docstring for function signature
1105 #regexp to parse docstring for function signature
1104 self.docstring_sig_re = re.compile(r'^[\w|\s.]+\(([^)]*)\).*')
1106 self.docstring_sig_re = re.compile(r'^[\w|\s.]+\(([^)]*)\).*')
1105 self.docstring_kwd_re = re.compile(r'[\s|\[]*(\w+)(?:\s*=\s*.*)')
1107 self.docstring_kwd_re = re.compile(r'[\s|\[]*(\w+)(?:\s*=\s*.*)')
1106 #use this if positional argument name is also needed
1108 #use this if positional argument name is also needed
1107 #= re.compile(r'[\s|\[]*(\w+)(?:\s*=?\s*.*)')
1109 #= re.compile(r'[\s|\[]*(\w+)(?:\s*=?\s*.*)')
1108
1110
1109 self.magic_arg_matchers = [
1111 self.magic_arg_matchers = [
1110 self.magic_config_matches,
1112 self.magic_config_matches,
1111 self.magic_color_matches,
1113 self.magic_color_matches,
1112 ]
1114 ]
1113
1115
1114 # This is set externally by InteractiveShell
1116 # This is set externally by InteractiveShell
1115 self.custom_completers = None
1117 self.custom_completers = None
1116
1118
1117 @property
1119 @property
1118 def matchers(self):
1120 def matchers(self):
1119 """All active matcher routines for completion"""
1121 """All active matcher routines for completion"""
1120 if self.dict_keys_only:
1122 if self.dict_keys_only:
1121 return [self.dict_key_matches]
1123 return [self.dict_key_matches]
1122
1124
1123 if self.use_jedi:
1125 if self.use_jedi:
1124 return [
1126 return [
1127 *self.custom_matchers,
1125 self.file_matches,
1128 self.file_matches,
1126 self.magic_matches,
1129 self.magic_matches,
1127 self.dict_key_matches,
1130 self.dict_key_matches,
1128 ]
1131 ]
1129 else:
1132 else:
1130 return [
1133 return [
1134 *self.custom_matchers,
1131 self.python_matches,
1135 self.python_matches,
1132 self.file_matches,
1136 self.file_matches,
1133 self.magic_matches,
1137 self.magic_matches,
1134 self.python_func_kw_matches,
1138 self.python_func_kw_matches,
1135 self.dict_key_matches,
1139 self.dict_key_matches,
1136 ]
1140 ]
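# Illustrative sketch (assumes a running InteractiveShell instance ``ip``): entries
# in ``custom_matchers`` are plain callables taking the token being completed and
# returning candidate strings; they are consulted before the built-in matchers above.
#
#     def _fruit_matcher(text):
#         return [w for w in ('apple', 'apricot', 'banana') if w.startswith(text)]
#
#     ip.Completer.custom_matchers.append(_fruit_matcher)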
1137
1141
1138 def all_completions(self, text) -> List[str]:
1142 def all_completions(self, text) -> List[str]:
1139 """
1143 """
1140 Wrapper around the completion methods for the benefit of emacs.
1144 Wrapper around the completion methods for the benefit of emacs.
1141 """
1145 """
1142 prefix = text.rpartition('.')[0]
1146 prefix = text.rpartition('.')[0]
1143 with provisionalcompleter():
1147 with provisionalcompleter():
1144 return ['.'.join([prefix, c.text]) if prefix and self.use_jedi else c.text
1148 return ['.'.join([prefix, c.text]) if prefix and self.use_jedi else c.text
1145 for c in self.completions(text, len(text))]
1149 for c in self.completions(text, len(text))]
1146
1150
1147 return self.complete(text)[1]
1151 return self.complete(text)[1]
1148
1152
1149 def _clean_glob(self, text):
1153 def _clean_glob(self, text):
1150 return self.glob("%s*" % text)
1154 return self.glob("%s*" % text)
1151
1155
1152 def _clean_glob_win32(self,text):
1156 def _clean_glob_win32(self,text):
1153 return [f.replace("\\","/")
1157 return [f.replace("\\","/")
1154 for f in self.glob("%s*" % text)]
1158 for f in self.glob("%s*" % text)]
1155
1159
1156 def file_matches(self, text):
1160 def file_matches(self, text):
1157 """Match filenames, expanding ~USER type strings.
1161 """Match filenames, expanding ~USER type strings.
1158
1162
1159 Most of the seemingly convoluted logic in this completer is an
1163 Most of the seemingly convoluted logic in this completer is an
1160 attempt to handle filenames with spaces in them. And yet it's not
1164 attempt to handle filenames with spaces in them. And yet it's not
1161 quite perfect, because Python's readline doesn't expose all of the
1165 quite perfect, because Python's readline doesn't expose all of the
1162 GNU readline details needed for this to be done correctly.
1166 GNU readline details needed for this to be done correctly.
1163
1167
1164 For a filename with a space in it, the printed completions will be
1168 For a filename with a space in it, the printed completions will be
1165 only the parts after what's already been typed (instead of the
1169 only the parts after what's already been typed (instead of the
1166 full completions, as is normally done). I don't think with the
1170 full completions, as is normally done). I don't think with the
1167 current (as of Python 2.3) Python readline it's possible to do
1171 current (as of Python 2.3) Python readline it's possible to do
1168 better."""
1172 better."""
1169
1173
1170 # chars that require escaping with backslash - i.e. chars
1174 # chars that require escaping with backslash - i.e. chars
1171 # that readline treats incorrectly as delimiters, but we
1175 # that readline treats incorrectly as delimiters, but we
1172 # don't want to treat as delimiters in filename matching
1176 # don't want to treat as delimiters in filename matching
1173 # when escaped with backslash
1177 # when escaped with backslash
1174 if text.startswith('!'):
1178 if text.startswith('!'):
1175 text = text[1:]
1179 text = text[1:]
1176 text_prefix = u'!'
1180 text_prefix = u'!'
1177 else:
1181 else:
1178 text_prefix = u''
1182 text_prefix = u''
1179
1183
1180 text_until_cursor = self.text_until_cursor
1184 text_until_cursor = self.text_until_cursor
1181 # track strings with open quotes
1185 # track strings with open quotes
1182 open_quotes = has_open_quotes(text_until_cursor)
1186 open_quotes = has_open_quotes(text_until_cursor)
1183
1187
1184 if '(' in text_until_cursor or '[' in text_until_cursor:
1188 if '(' in text_until_cursor or '[' in text_until_cursor:
1185 lsplit = text
1189 lsplit = text
1186 else:
1190 else:
1187 try:
1191 try:
1188 # arg_split ~ shlex.split, but with unicode bugs fixed by us
1192 # arg_split ~ shlex.split, but with unicode bugs fixed by us
1189 lsplit = arg_split(text_until_cursor)[-1]
1193 lsplit = arg_split(text_until_cursor)[-1]
1190 except ValueError:
1194 except ValueError:
1191 # typically an unmatched ", or backslash without escaped char.
1195 # typically an unmatched ", or backslash without escaped char.
1192 if open_quotes:
1196 if open_quotes:
1193 lsplit = text_until_cursor.split(open_quotes)[-1]
1197 lsplit = text_until_cursor.split(open_quotes)[-1]
1194 else:
1198 else:
1195 return []
1199 return []
1196 except IndexError:
1200 except IndexError:
1197 # tab pressed on empty line
1201 # tab pressed on empty line
1198 lsplit = ""
1202 lsplit = ""
1199
1203
1200 if not open_quotes and lsplit != protect_filename(lsplit):
1204 if not open_quotes and lsplit != protect_filename(lsplit):
1201 # if protectables are found, do matching on the whole escaped name
1205 # if protectables are found, do matching on the whole escaped name
1202 has_protectables = True
1206 has_protectables = True
1203 text0,text = text,lsplit
1207 text0,text = text,lsplit
1204 else:
1208 else:
1205 has_protectables = False
1209 has_protectables = False
1206 text = os.path.expanduser(text)
1210 text = os.path.expanduser(text)
1207
1211
1208 if text == "":
1212 if text == "":
1209 return [text_prefix + protect_filename(f) for f in self.glob("*")]
1213 return [text_prefix + protect_filename(f) for f in self.glob("*")]
1210
1214
1211 # Compute the matches from the filesystem
1215 # Compute the matches from the filesystem
1212 if sys.platform == 'win32':
1216 if sys.platform == 'win32':
1213 m0 = self.clean_glob(text)
1217 m0 = self.clean_glob(text)
1214 else:
1218 else:
1215 m0 = self.clean_glob(text.replace('\\', ''))
1219 m0 = self.clean_glob(text.replace('\\', ''))
1216
1220
1217 if has_protectables:
1221 if has_protectables:
1218 # If we had protectables, we need to revert our changes to the
1222 # If we had protectables, we need to revert our changes to the
1219 # beginning of filename so that we don't double-write the part
1223 # beginning of filename so that we don't double-write the part
1220 # of the filename we have so far
1224 # of the filename we have so far
1221 len_lsplit = len(lsplit)
1225 len_lsplit = len(lsplit)
1222 matches = [text_prefix + text0 +
1226 matches = [text_prefix + text0 +
1223 protect_filename(f[len_lsplit:]) for f in m0]
1227 protect_filename(f[len_lsplit:]) for f in m0]
1224 else:
1228 else:
1225 if open_quotes:
1229 if open_quotes:
1226 # if we have a string with an open quote, we don't need to
1230 # if we have a string with an open quote, we don't need to
1227 # protect the names beyond the quote (and we _shouldn't_, as
1231 # protect the names beyond the quote (and we _shouldn't_, as
1228 # it would cause bugs when the filesystem call is made).
1232 # it would cause bugs when the filesystem call is made).
1229 matches = m0 if sys.platform == "win32" else\
1233 matches = m0 if sys.platform == "win32" else\
1230 [protect_filename(f, open_quotes) for f in m0]
1234 [protect_filename(f, open_quotes) for f in m0]
1231 else:
1235 else:
1232 matches = [text_prefix +
1236 matches = [text_prefix +
1233 protect_filename(f) for f in m0]
1237 protect_filename(f) for f in m0]
1234
1238
1235 # Mark directories in input list by appending '/' to their names.
1239 # Mark directories in input list by appending '/' to their names.
1236 return [x+'/' if os.path.isdir(x) else x for x in matches]
1240 return [x+'/' if os.path.isdir(x) else x for x in matches]
1237
1241
1238 def magic_matches(self, text):
1242 def magic_matches(self, text):
1239 """Match magics"""
1243 """Match magics"""
1240 # Get all shell magics now rather than statically, so magics loaded at
1244 # Get all shell magics now rather than statically, so magics loaded at
1241 # runtime show up too.
1245 # runtime show up too.
1242 lsm = self.shell.magics_manager.lsmagic()
1246 lsm = self.shell.magics_manager.lsmagic()
1243 line_magics = lsm['line']
1247 line_magics = lsm['line']
1244 cell_magics = lsm['cell']
1248 cell_magics = lsm['cell']
1245 pre = self.magic_escape
1249 pre = self.magic_escape
1246 pre2 = pre+pre
1250 pre2 = pre+pre
1247
1251
1248 explicit_magic = text.startswith(pre)
1252 explicit_magic = text.startswith(pre)
1249
1253
1250 # Completion logic:
1254 # Completion logic:
1251 # - user gives %%: only do cell magics
1255 # - user gives %%: only do cell magics
1252 # - user gives %: do both line and cell magics
1256 # - user gives %: do both line and cell magics
1253 # - no prefix: do both
1257 # - no prefix: do both
1254 # In other words, line magics are skipped if the user gives %% explicitly
1258 # In other words, line magics are skipped if the user gives %% explicitly
1255 #
1259 #
1256 # We also exclude magics that match any currently visible names:
1260 # We also exclude magics that match any currently visible names:
1257 # https://github.com/ipython/ipython/issues/4877, unless the user has
1261 # https://github.com/ipython/ipython/issues/4877, unless the user has
1258 # typed a %:
1262 # typed a %:
1259 # https://github.com/ipython/ipython/issues/10754
1263 # https://github.com/ipython/ipython/issues/10754
1260 bare_text = text.lstrip(pre)
1264 bare_text = text.lstrip(pre)
1261 global_matches = self.global_matches(bare_text)
1265 global_matches = self.global_matches(bare_text)
1262 if not explicit_magic:
1266 if not explicit_magic:
1263 def matches(magic):
1267 def matches(magic):
1264 """
1268 """
1265 Filter magics, in particular remove magics that match
1269 Filter magics, in particular remove magics that match
1266 a name present in global namespace.
1270 a name present in global namespace.
1267 """
1271 """
1268 return ( magic.startswith(bare_text) and
1272 return ( magic.startswith(bare_text) and
1269 magic not in global_matches )
1273 magic not in global_matches )
1270 else:
1274 else:
1271 def matches(magic):
1275 def matches(magic):
1272 return magic.startswith(bare_text)
1276 return magic.startswith(bare_text)
1273
1277
1274 comp = [ pre2+m for m in cell_magics if matches(m)]
1278 comp = [ pre2+m for m in cell_magics if matches(m)]
1275 if not text.startswith(pre2):
1279 if not text.startswith(pre2):
1276 comp += [ pre+m for m in line_magics if matches(m)]
1280 comp += [ pre+m for m in line_magics if matches(m)]
1277
1281
1278 return comp
1282 return comp
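# Illustrative sketch of the prefix logic above (hypothetical ``completer`` instance;
# actual results depend on which magics are loaded in the running shell):
#
#     completer.magic_matches('%tim')    # e.g. ['%%time', '%%timeit', '%time', '%timeit']
#     completer.magic_matches('%%tim')   # cell magics only, e.g. ['%%time', '%%timeit']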
1279
1283
1280 def magic_config_matches(self, text:str) -> List[str]:
1284 def magic_config_matches(self, text:str) -> List[str]:
1281 """ Match class names and attributes for %config magic """
1285 """ Match class names and attributes for %config magic """
1282 texts = text.strip().split()
1286 texts = text.strip().split()
1283
1287
1284 if len(texts) > 0 and (texts[0] == 'config' or texts[0] == '%config'):
1288 if len(texts) > 0 and (texts[0] == 'config' or texts[0] == '%config'):
1285 # get all configuration classes
1289 # get all configuration classes
1286 classes = sorted(set([ c for c in self.shell.configurables
1290 classes = sorted(set([ c for c in self.shell.configurables
1287 if c.__class__.class_traits(config=True)
1291 if c.__class__.class_traits(config=True)
1288 ]), key=lambda x: x.__class__.__name__)
1292 ]), key=lambda x: x.__class__.__name__)
1289 classnames = [ c.__class__.__name__ for c in classes ]
1293 classnames = [ c.__class__.__name__ for c in classes ]
1290
1294
1291 # return all classnames if config or %config is given
1295 # return all classnames if config or %config is given
1292 if len(texts) == 1:
1296 if len(texts) == 1:
1293 return classnames
1297 return classnames
1294
1298
1295 # match classname
1299 # match classname
1296 classname_texts = texts[1].split('.')
1300 classname_texts = texts[1].split('.')
1297 classname = classname_texts[0]
1301 classname = classname_texts[0]
1298 classname_matches = [ c for c in classnames
1302 classname_matches = [ c for c in classnames
1299 if c.startswith(classname) ]
1303 if c.startswith(classname) ]
1300
1304
1301 # return matched classes or the matched class with attributes
1305 # return matched classes or the matched class with attributes
1302 if texts[1].find('.') < 0:
1306 if texts[1].find('.') < 0:
1303 return classname_matches
1307 return classname_matches
1304 elif len(classname_matches) == 1 and \
1308 elif len(classname_matches) == 1 and \
1305 classname_matches[0] == classname:
1309 classname_matches[0] == classname:
1306 cls = classes[classnames.index(classname)].__class__
1310 cls = classes[classnames.index(classname)].__class__
1307 help = cls.class_get_help()
1311 help = cls.class_get_help()
1308 # strip leading '--' from cl-args:
1312 # strip leading '--' from cl-args:
1309 help = re.sub(re.compile(r'^--', re.MULTILINE), '', help)
1313 help = re.sub(re.compile(r'^--', re.MULTILINE), '', help)
1310 return [ attr.split('=')[0]
1314 return [ attr.split('=')[0]
1311 for attr in help.strip().splitlines()
1315 for attr in help.strip().splitlines()
1312 if attr.startswith(texts[1]) ]
1316 if attr.startswith(texts[1]) ]
1313 return []
1317 return []
1314
1318
1315 def magic_color_matches(self, text:str) -> List[str] :
1319 def magic_color_matches(self, text:str) -> List[str] :
1316 """ Match color schemes for %colors magic"""
1320 """ Match color schemes for %colors magic"""
1317 texts = text.split()
1321 texts = text.split()
1318 if text.endswith(' '):
1322 if text.endswith(' '):
1319 # .split() strips off the trailing whitespace. Add '' back
1323 # .split() strips off the trailing whitespace. Add '' back
1320 # so that: '%colors ' -> ['%colors', '']
1324 # so that: '%colors ' -> ['%colors', '']
1321 texts.append('')
1325 texts.append('')
1322
1326
1323 if len(texts) == 2 and (texts[0] == 'colors' or texts[0] == '%colors'):
1327 if len(texts) == 2 and (texts[0] == 'colors' or texts[0] == '%colors'):
1324 prefix = texts[1]
1328 prefix = texts[1]
1325 return [ color for color in InspectColors.keys()
1329 return [ color for color in InspectColors.keys()
1326 if color.startswith(prefix) ]
1330 if color.startswith(prefix) ]
1327 return []
1331 return []
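# Illustrative sketch: the two-token check above means scheme completion only kicks
# in once the magic name plus a space has been typed (hypothetical ``completer``):
#
#     completer.magic_color_matches('colors L')   # e.g. ['Linux', 'LightBG']
#     completer.magic_color_matches('colors')     # [] -- still completing the magic name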
1328
1332
1329 def _jedi_matches(self, cursor_column:int, cursor_line:int, text:str):
1333 def _jedi_matches(self, cursor_column:int, cursor_line:int, text:str):
1330 """
1334 """
1331
1335
1332 Return a list of :any:`jedi.api.Completions` objects from a ``text`` and
1336 Return a list of :any:`jedi.api.Completions` objects from a ``text`` and
1333 cursor position.
1337 cursor position.
1334
1338
1335 Parameters
1339 Parameters
1336 ----------
1340 ----------
1337 cursor_column : int
1341 cursor_column : int
1338 column position of the cursor in ``text``, 0-indexed.
1342 column position of the cursor in ``text``, 0-indexed.
1339 cursor_line : int
1343 cursor_line : int
1340 line position of the cursor in ``text``, 0-indexed
1344 line position of the cursor in ``text``, 0-indexed
1341 text : str
1345 text : str
1342 text to complete
1346 text to complete
1343
1347
1344 Debugging
1348 Debugging
1345 ---------
1349 ---------
1346
1350
1347 If ``IPCompleter.debug`` is ``True``, this may return a :any:`_FakeJediCompletion`
1351 If ``IPCompleter.debug`` is ``True``, this may return a :any:`_FakeJediCompletion`
1348 object containing a string with the Jedi debug information attached.
1352 object containing a string with the Jedi debug information attached.
1349 """
1353 """
1350 namespaces = [self.namespace]
1354 namespaces = [self.namespace]
1351 if self.global_namespace is not None:
1355 if self.global_namespace is not None:
1352 namespaces.append(self.global_namespace)
1356 namespaces.append(self.global_namespace)
1353
1357
1354 completion_filter = lambda x:x
1358 completion_filter = lambda x:x
1355 offset = cursor_to_position(text, cursor_line, cursor_column)
1359 offset = cursor_to_position(text, cursor_line, cursor_column)
1356 # filter output if we are completing for object members
1360 # filter output if we are completing for object members
1357 if offset:
1361 if offset:
1358 pre = text[offset-1]
1362 pre = text[offset-1]
1359 if pre == '.':
1363 if pre == '.':
1360 if self.omit__names == 2:
1364 if self.omit__names == 2:
1361 completion_filter = lambda c:not c.name.startswith('_')
1365 completion_filter = lambda c:not c.name.startswith('_')
1362 elif self.omit__names == 1:
1366 elif self.omit__names == 1:
1363 completion_filter = lambda c:not (c.name.startswith('__') and c.name.endswith('__'))
1367 completion_filter = lambda c:not (c.name.startswith('__') and c.name.endswith('__'))
1364 elif self.omit__names == 0:
1368 elif self.omit__names == 0:
1365 completion_filter = lambda x:x
1369 completion_filter = lambda x:x
1366 else:
1370 else:
1367 raise ValueError("Don't understand self.omit__names == {}".format(self.omit__names))
1371 raise ValueError("Don't understand self.omit__names == {}".format(self.omit__names))
1368
1372
1369 interpreter = jedi.Interpreter(
1373 interpreter = jedi.Interpreter(
1370 text[:offset], namespaces, column=cursor_column, line=cursor_line + 1)
1374 text[:offset], namespaces, column=cursor_column, line=cursor_line + 1)
1371 try_jedi = True
1375 try_jedi = True
1372
1376
1373 try:
1377 try:
1374 # should we check the type of the node is Error ?
1378 # should we check the type of the node is Error ?
1375 try:
1379 try:
1376 # jedi < 0.11
1380 # jedi < 0.11
1377 from jedi.parser.tree import ErrorLeaf
1381 from jedi.parser.tree import ErrorLeaf
1378 except ImportError:
1382 except ImportError:
1379 # jedi >= 0.11
1383 # jedi >= 0.11
1380 from parso.tree import ErrorLeaf
1384 from parso.tree import ErrorLeaf
1381
1385
1382 next_to_last_tree = interpreter._get_module().tree_node.children[-2]
1386 next_to_last_tree = interpreter._get_module().tree_node.children[-2]
1383 completing_string = False
1387 completing_string = False
1384 if isinstance(next_to_last_tree, ErrorLeaf):
1388 if isinstance(next_to_last_tree, ErrorLeaf):
1385 completing_string = next_to_last_tree.value.lstrip()[0] in {'"', "'"}
1389 completing_string = next_to_last_tree.value.lstrip()[0] in {'"', "'"}
1386 # if we are in a string jedi is likely not the right candidate for
1390 # if we are in a string jedi is likely not the right candidate for
1387 # now. Skip it.
1391 # now. Skip it.
1388 try_jedi = not completing_string
1392 try_jedi = not completing_string
1389 except Exception as e:
1393 except Exception as e:
1390 # many of things can go wrong, we are using private API just don't crash.
1394 # many of things can go wrong, we are using private API just don't crash.
1391 if self.debug:
1395 if self.debug:
1392 print("Error detecting if completing a non-finished string :", e, '|')
1396 print("Error detecting if completing a non-finished string :", e, '|')
1393
1397
1394 if not try_jedi:
1398 if not try_jedi:
1395 return []
1399 return []
1396 try:
1400 try:
1397 return filter(completion_filter, interpreter.completions())
1401 return filter(completion_filter, interpreter.completions())
1398 except Exception as e:
1402 except Exception as e:
1399 if self.debug:
1403 if self.debug:
1400 return [_FakeJediCompletion('Oops Jedi has crashed, please report a bug with the following:\n"""\n%s\n"""' % (e))]
1404 return [_FakeJediCompletion('Oops Jedi has crashed, please report a bug with the following:\n"""\n%s\n"""' % (e))]
1401 else:
1405 else:
1402 return []
1406 return []
1403
1407
1404 def python_matches(self, text):
1408 def python_matches(self, text):
1405 """Match attributes or global python names"""
1409 """Match attributes or global python names"""
1406 if "." in text:
1410 if "." in text:
1407 try:
1411 try:
1408 matches = self.attr_matches(text)
1412 matches = self.attr_matches(text)
1409 if text.endswith('.') and self.omit__names:
1413 if text.endswith('.') and self.omit__names:
1410 if self.omit__names == 1:
1414 if self.omit__names == 1:
1411 # true if txt is _not_ a __ name, false otherwise:
1415 # true if txt is _not_ a __ name, false otherwise:
1412 no__name = (lambda txt:
1416 no__name = (lambda txt:
1413 re.match(r'.*\.__.*?__',txt) is None)
1417 re.match(r'.*\.__.*?__',txt) is None)
1414 else:
1418 else:
1415 # true if txt is _not_ a _ name, false otherwise:
1419 # true if txt is _not_ a _ name, false otherwise:
1416 no__name = (lambda txt:
1420 no__name = (lambda txt:
1417 re.match(r'\._.*?',txt[txt.rindex('.'):]) is None)
1421 re.match(r'\._.*?',txt[txt.rindex('.'):]) is None)
1418 matches = filter(no__name, matches)
1422 matches = filter(no__name, matches)
1419 except NameError:
1423 except NameError:
1420 # catches <undefined attributes>.<tab>
1424 # catches <undefined attributes>.<tab>
1421 matches = []
1425 matches = []
1422 else:
1426 else:
1423 matches = self.global_matches(text)
1427 matches = self.global_matches(text)
1424 return matches
1428 return matches
1425
1429
1426 def _default_arguments_from_docstring(self, doc):
1430 def _default_arguments_from_docstring(self, doc):
1427 """Parse the first line of docstring for call signature.
1431 """Parse the first line of docstring for call signature.
1428
1432
1429 Docstring should be of the form 'min(iterable[, key=func])\n'.
1433 Docstring should be of the form 'min(iterable[, key=func])\n'.
1430 It can also parse cython docstring of the form
1434 It can also parse cython docstring of the form
1431 'Minuit.migrad(self, int ncall=10000, resume=True, int nsplit=1)'.
1435 'Minuit.migrad(self, int ncall=10000, resume=True, int nsplit=1)'.
1432 """
1436 """
1433 if doc is None:
1437 if doc is None:
1434 return []
1438 return []
1435
1439
1436 # care only about the first line
1440 # care only about the first line
1437 line = doc.lstrip().splitlines()[0]
1441 line = doc.lstrip().splitlines()[0]
1438
1442
1439 #p = re.compile(r'^[\w|\s.]+\(([^)]*)\).*')
1443 #p = re.compile(r'^[\w|\s.]+\(([^)]*)\).*')
1440 #'min(iterable[, key=func])\n' -> 'iterable[, key=func]'
1444 #'min(iterable[, key=func])\n' -> 'iterable[, key=func]'
1441 sig = self.docstring_sig_re.search(line)
1445 sig = self.docstring_sig_re.search(line)
1442 if sig is None:
1446 if sig is None:
1443 return []
1447 return []
1444 # iterable[, key=func]' -> ['iterable[' ,' key=func]']
1448 # iterable[, key=func]' -> ['iterable[' ,' key=func]']
1445 sig = sig.groups()[0].split(',')
1449 sig = sig.groups()[0].split(',')
1446 ret = []
1450 ret = []
1447 for s in sig:
1451 for s in sig:
1448 #re.compile(r'[\s|\[]*(\w+)(?:\s*=\s*.*)')
1452 #re.compile(r'[\s|\[]*(\w+)(?:\s*=\s*.*)')
1449 ret += self.docstring_kwd_re.findall(s)
1453 ret += self.docstring_kwd_re.findall(s)
1450 return ret
1454 return ret
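# Illustrative sketch, feeding the docstring's own example through the two regexps
# (hypothetical ``completer`` = an IPCompleter instance):
#
#     completer._default_arguments_from_docstring('min(iterable[, key=func])\n')
#     # -> ['key']  (only tokens followed by '=' survive docstring_kwd_re)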
1451
1455
1452 def _default_arguments(self, obj):
1456 def _default_arguments(self, obj):
1453 """Return the list of default arguments of obj if it is callable,
1457 """Return the list of default arguments of obj if it is callable,
1454 or empty list otherwise."""
1458 or empty list otherwise."""
1455 call_obj = obj
1459 call_obj = obj
1456 ret = []
1460 ret = []
1457 if inspect.isbuiltin(obj):
1461 if inspect.isbuiltin(obj):
1458 pass
1462 pass
1459 elif not (inspect.isfunction(obj) or inspect.ismethod(obj)):
1463 elif not (inspect.isfunction(obj) or inspect.ismethod(obj)):
1460 if inspect.isclass(obj):
1464 if inspect.isclass(obj):
1461 #for cython embedsignature=True the constructor docstring
1465 #for cython embedsignature=True the constructor docstring
1462 #belongs to the object itself not __init__
1466 #belongs to the object itself not __init__
1463 ret += self._default_arguments_from_docstring(
1467 ret += self._default_arguments_from_docstring(
1464 getattr(obj, '__doc__', ''))
1468 getattr(obj, '__doc__', ''))
1465 # for classes, check for __init__,__new__
1469 # for classes, check for __init__,__new__
1466 call_obj = (getattr(obj, '__init__', None) or
1470 call_obj = (getattr(obj, '__init__', None) or
1467 getattr(obj, '__new__', None))
1471 getattr(obj, '__new__', None))
1468 # for all others, check if they are __call__able
1472 # for all others, check if they are __call__able
1469 elif hasattr(obj, '__call__'):
1473 elif hasattr(obj, '__call__'):
1470 call_obj = obj.__call__
1474 call_obj = obj.__call__
1471 ret += self._default_arguments_from_docstring(
1475 ret += self._default_arguments_from_docstring(
1472 getattr(call_obj, '__doc__', ''))
1476 getattr(call_obj, '__doc__', ''))
1473
1477
1474 _keeps = (inspect.Parameter.KEYWORD_ONLY,
1478 _keeps = (inspect.Parameter.KEYWORD_ONLY,
1475 inspect.Parameter.POSITIONAL_OR_KEYWORD)
1479 inspect.Parameter.POSITIONAL_OR_KEYWORD)
1476
1480
1477 try:
1481 try:
1478 sig = inspect.signature(call_obj)
1482 sig = inspect.signature(call_obj)
1479 ret.extend(k for k, v in sig.parameters.items() if
1483 ret.extend(k for k, v in sig.parameters.items() if
1480 v.kind in _keeps)
1484 v.kind in _keeps)
1481 except ValueError:
1485 except ValueError:
1482 pass
1486 pass
1483
1487
1484 return list(set(ret))
1488 return list(set(ret))
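# Illustrative sketch (hypothetical ``completer`` instance; ordering not guaranteed,
# the result goes through set()):
#
#     def _demo(a, b=1, *args, c=2, **kwargs):
#         pass
#     sorted(completer._default_arguments(_demo))   # -> ['a', 'b', 'c']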
1485
1489
1486 def python_func_kw_matches(self,text):
1490 def python_func_kw_matches(self,text):
1487 """Match named parameters (kwargs) of the last open function"""
1491 """Match named parameters (kwargs) of the last open function"""
1488
1492
1489 if "." in text: # a parameter cannot be dotted
1493 if "." in text: # a parameter cannot be dotted
1490 return []
1494 return []
1491 try: regexp = self.__funcParamsRegex
1495 try: regexp = self.__funcParamsRegex
1492 except AttributeError:
1496 except AttributeError:
1493 regexp = self.__funcParamsRegex = re.compile(r'''
1497 regexp = self.__funcParamsRegex = re.compile(r'''
1494 '.*?(?<!\\)' | # single quoted strings or
1498 '.*?(?<!\\)' | # single quoted strings or
1495 ".*?(?<!\\)" | # double quoted strings or
1499 ".*?(?<!\\)" | # double quoted strings or
1496 \w+ | # identifier
1500 \w+ | # identifier
1497 \S # other characters
1501 \S # other characters
1498 ''', re.VERBOSE | re.DOTALL)
1502 ''', re.VERBOSE | re.DOTALL)
1499 # 1. find the nearest identifier that comes before an unclosed
1503 # 1. find the nearest identifier that comes before an unclosed
1500 # parenthesis before the cursor
1504 # parenthesis before the cursor
1501 # e.g. for "foo (1+bar(x), pa<cursor>,a=1)", the candidate is "foo"
1505 # e.g. for "foo (1+bar(x), pa<cursor>,a=1)", the candidate is "foo"
1502 tokens = regexp.findall(self.text_until_cursor)
1506 tokens = regexp.findall(self.text_until_cursor)
1503 iterTokens = reversed(tokens); openPar = 0
1507 iterTokens = reversed(tokens); openPar = 0
1504
1508
1505 for token in iterTokens:
1509 for token in iterTokens:
1506 if token == ')':
1510 if token == ')':
1507 openPar -= 1
1511 openPar -= 1
1508 elif token == '(':
1512 elif token == '(':
1509 openPar += 1
1513 openPar += 1
1510 if openPar > 0:
1514 if openPar > 0:
1511 # found the last unclosed parenthesis
1515 # found the last unclosed parenthesis
1512 break
1516 break
1513 else:
1517 else:
1514 return []
1518 return []
1515 # 2. Concatenate dotted names ("foo.bar" for "foo.bar(x, pa" )
1519 # 2. Concatenate dotted names ("foo.bar" for "foo.bar(x, pa" )
1516 ids = []
1520 ids = []
1517 isId = re.compile(r'\w+$').match
1521 isId = re.compile(r'\w+$').match
1518
1522
1519 while True:
1523 while True:
1520 try:
1524 try:
1521 ids.append(next(iterTokens))
1525 ids.append(next(iterTokens))
1522 if not isId(ids[-1]):
1526 if not isId(ids[-1]):
1523 ids.pop(); break
1527 ids.pop(); break
1524 if not next(iterTokens) == '.':
1528 if not next(iterTokens) == '.':
1525 break
1529 break
1526 except StopIteration:
1530 except StopIteration:
1527 break
1531 break
1528
1532
1529 # Find all named arguments already assigned to, so as to avoid suggesting
1533 # Find all named arguments already assigned to, so as to avoid suggesting
1530 # them again
1534 # them again
1531 usedNamedArgs = set()
1535 usedNamedArgs = set()
1532 par_level = -1
1536 par_level = -1
1533 for token, next_token in zip(tokens, tokens[1:]):
1537 for token, next_token in zip(tokens, tokens[1:]):
1534 if token == '(':
1538 if token == '(':
1535 par_level += 1
1539 par_level += 1
1536 elif token == ')':
1540 elif token == ')':
1537 par_level -= 1
1541 par_level -= 1
1538
1542
1539 if par_level != 0:
1543 if par_level != 0:
1540 continue
1544 continue
1541
1545
1542 if next_token != '=':
1546 if next_token != '=':
1543 continue
1547 continue
1544
1548
1545 usedNamedArgs.add(token)
1549 usedNamedArgs.add(token)
1546
1550
1547 argMatches = []
1551 argMatches = []
1548 try:
1552 try:
1549 callableObj = '.'.join(ids[::-1])
1553 callableObj = '.'.join(ids[::-1])
1550 namedArgs = self._default_arguments(eval(callableObj,
1554 namedArgs = self._default_arguments(eval(callableObj,
1551 self.namespace))
1555 self.namespace))
1552
1556
1553 # Remove used named arguments from the list, no need to show twice
1557 # Remove used named arguments from the list, no need to show twice
1554 for namedArg in set(namedArgs) - usedNamedArgs:
1558 for namedArg in set(namedArgs) - usedNamedArgs:
1555 if namedArg.startswith(text):
1559 if namedArg.startswith(text):
1556 argMatches.append(u"%s=" %namedArg)
1560 argMatches.append(u"%s=" %namedArg)
1557 except:
1561 except:
1558 pass
1562 pass
1559
1563
1560 return argMatches
1564 return argMatches
1561
1565
1562 def dict_key_matches(self, text):
1566 def dict_key_matches(self, text):
1563 "Match string keys in a dictionary, after e.g. 'foo[' "
1567 "Match string keys in a dictionary, after e.g. 'foo[' "
1564 def get_keys(obj):
1568 def get_keys(obj):
1565 # Objects can define their own completions by defining an
1569 # Objects can define their own completions by defining an
1566 # _ipython_key_completions_() method.
1570 # _ipython_key_completions_() method.
1567 method = get_real_method(obj, '_ipython_key_completions_')
1571 method = get_real_method(obj, '_ipython_key_completions_')
1568 if method is not None:
1572 if method is not None:
1569 return method()
1573 return method()
1570
1574
1571 # Special case some common in-memory dict-like types
1575 # Special case some common in-memory dict-like types
1572 if isinstance(obj, dict) or\
1576 if isinstance(obj, dict) or\
1573 _safe_isinstance(obj, 'pandas', 'DataFrame'):
1577 _safe_isinstance(obj, 'pandas', 'DataFrame'):
1574 try:
1578 try:
1575 return list(obj.keys())
1579 return list(obj.keys())
1576 except Exception:
1580 except Exception:
1577 return []
1581 return []
1578 elif _safe_isinstance(obj, 'numpy', 'ndarray') or\
1582 elif _safe_isinstance(obj, 'numpy', 'ndarray') or\
1579 _safe_isinstance(obj, 'numpy', 'void'):
1583 _safe_isinstance(obj, 'numpy', 'void'):
1580 return obj.dtype.names or []
1584 return obj.dtype.names or []
1581 return []
1585 return []
1582
1586
1583 try:
1587 try:
1584 regexps = self.__dict_key_regexps
1588 regexps = self.__dict_key_regexps
1585 except AttributeError:
1589 except AttributeError:
1586 dict_key_re_fmt = r'''(?x)
1590 dict_key_re_fmt = r'''(?x)
1587 ( # match dict-referring expression wrt greedy setting
1591 ( # match dict-referring expression wrt greedy setting
1588 %s
1592 %s
1589 )
1593 )
1590 \[ # open bracket
1594 \[ # open bracket
1591 \s* # and optional whitespace
1595 \s* # and optional whitespace
1592 ([uUbB]? # string prefix (r not handled)
1596 ([uUbB]? # string prefix (r not handled)
1593 (?: # unclosed string
1597 (?: # unclosed string
1594 '(?:[^']|(?<!\\)\\')*
1598 '(?:[^']|(?<!\\)\\')*
1595 |
1599 |
1596 "(?:[^"]|(?<!\\)\\")*
1600 "(?:[^"]|(?<!\\)\\")*
1597 )
1601 )
1598 )?
1602 )?
1599 $
1603 $
1600 '''
1604 '''
1601 regexps = self.__dict_key_regexps = {
1605 regexps = self.__dict_key_regexps = {
1602 False: re.compile(dict_key_re_fmt % r'''
1606 False: re.compile(dict_key_re_fmt % r'''
1603 # identifiers separated by .
1607 # identifiers separated by .
1604 (?!\d)\w+
1608 (?!\d)\w+
1605 (?:\.(?!\d)\w+)*
1609 (?:\.(?!\d)\w+)*
1606 '''),
1610 '''),
1607 True: re.compile(dict_key_re_fmt % '''
1611 True: re.compile(dict_key_re_fmt % '''
1608 .+
1612 .+
1609 ''')
1613 ''')
1610 }
1614 }
1611
1615
1612 match = regexps[self.greedy].search(self.text_until_cursor)
1616 match = regexps[self.greedy].search(self.text_until_cursor)
1613 if match is None:
1617 if match is None:
1614 return []
1618 return []
1615
1619
1616 expr, prefix = match.groups()
1620 expr, prefix = match.groups()
1617 try:
1621 try:
1618 obj = eval(expr, self.namespace)
1622 obj = eval(expr, self.namespace)
1619 except Exception:
1623 except Exception:
1620 try:
1624 try:
1621 obj = eval(expr, self.global_namespace)
1625 obj = eval(expr, self.global_namespace)
1622 except Exception:
1626 except Exception:
1623 return []
1627 return []
1624
1628
1625 keys = get_keys(obj)
1629 keys = get_keys(obj)
1626 if not keys:
1630 if not keys:
1627 return keys
1631 return keys
1628 closing_quote, token_offset, matches = match_dict_keys(keys, prefix, self.splitter.delims)
1632 closing_quote, token_offset, matches = match_dict_keys(keys, prefix, self.splitter.delims)
1629 if not matches:
1633 if not matches:
1630 return matches
1634 return matches
1631
1635
1632 # get the cursor position of
1636 # get the cursor position of
1633 # - the text being completed
1637 # - the text being completed
1634 # - the start of the key text
1638 # - the start of the key text
1635 # - the start of the completion
1639 # - the start of the completion
1636 text_start = len(self.text_until_cursor) - len(text)
1640 text_start = len(self.text_until_cursor) - len(text)
1637 if prefix:
1641 if prefix:
1638 key_start = match.start(2)
1642 key_start = match.start(2)
1639 completion_start = key_start + token_offset
1643 completion_start = key_start + token_offset
1640 else:
1644 else:
1641 key_start = completion_start = match.end()
1645 key_start = completion_start = match.end()
1642
1646
1643 # grab the leading prefix, to make sure all completions start with `text`
1647 # grab the leading prefix, to make sure all completions start with `text`
1644 if text_start > key_start:
1648 if text_start > key_start:
1645 leading = ''
1649 leading = ''
1646 else:
1650 else:
1647 leading = text[text_start:completion_start]
1651 leading = text[text_start:completion_start]
1648
1652
1649 # the index of the `[` character
1653 # the index of the `[` character
1650 bracket_idx = match.end(1)
1654 bracket_idx = match.end(1)
1651
1655
1652 # append closing quote and bracket as appropriate
1656 # append closing quote and bracket as appropriate
1653 # this is *not* appropriate if the opening quote or bracket is outside
1657 # this is *not* appropriate if the opening quote or bracket is outside
1654 # the text given to this method
1658 # the text given to this method
1655 suf = ''
1659 suf = ''
1656 continuation = self.line_buffer[len(self.text_until_cursor):]
1660 continuation = self.line_buffer[len(self.text_until_cursor):]
1657 if key_start > text_start and closing_quote:
1661 if key_start > text_start and closing_quote:
1658 # quotes were opened inside text, maybe close them
1662 # quotes were opened inside text, maybe close them
1659 if continuation.startswith(closing_quote):
1663 if continuation.startswith(closing_quote):
1660 continuation = continuation[len(closing_quote):]
1664 continuation = continuation[len(closing_quote):]
1661 else:
1665 else:
1662 suf += closing_quote
1666 suf += closing_quote
1663 if bracket_idx > text_start:
1667 if bracket_idx > text_start:
1664 # brackets were opened inside text, maybe close them
1668 # brackets were opened inside text, maybe close them
1665 if not continuation.startswith(']'):
1669 if not continuation.startswith(']'):
1666 suf += ']'
1670 suf += ']'
1667
1671
1668 return [leading + k + suf for k in matches]
1672 return [leading + k + suf for k in matches]
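# Illustrative sketch of the key-completion protocol consulted by get_keys() above:
# any object may advertise its own keys via _ipython_key_completions_().
#
#     class Store:
#         def __init__(self):
#             self._data = {'alpha': 1, 'beta': 2}
#         def __getitem__(self, key):
#             return self._data[key]
#         def _ipython_key_completions_(self):
#             return list(self._data)
#
# With ``s = Store()`` in the user namespace, typing ``s['al<tab>`` would offer
# ``alpha``, with quoting and the closing bracket handled by the logic above.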
1669
1673
1670 def unicode_name_matches(self, text):
1674 def unicode_name_matches(self, text):
1671 u"""Match Latex-like syntax for unicode characters based
1675 u"""Match Latex-like syntax for unicode characters based
1672 on the name of the character.
1676 on the name of the character.
1673
1677
1674 This does ``\\GREEK SMALL LETTER ETA`` -> ``Ξ·``
1678 This does ``\\GREEK SMALL LETTER ETA`` -> ``Ξ·``
1675
1679
1676 Works only on valid Python 3 identifiers, or on combining characters that
1680 Works only on valid Python 3 identifiers, or on combining characters that
1677 will combine to form a valid identifier.
1681 will combine to form a valid identifier.
1678
1682
1679 Used on Python 3 only.
1683 Used on Python 3 only.
1680 """
1684 """
1681 slashpos = text.rfind('\\')
1685 slashpos = text.rfind('\\')
1682 if slashpos > -1:
1686 if slashpos > -1:
1683 s = text[slashpos+1:]
1687 s = text[slashpos+1:]
1684 try :
1688 try :
1685 unic = unicodedata.lookup(s)
1689 unic = unicodedata.lookup(s)
1686 # allow combining chars
1690 # allow combining chars
1687 if ('a'+unic).isidentifier():
1691 if ('a'+unic).isidentifier():
1688 return '\\'+s,[unic]
1692 return '\\'+s,[unic]
1689 except KeyError:
1693 except KeyError:
1690 pass
1694 pass
1691 return u'', []
1695 return u'', []
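# Illustrative sketch, mirroring the docstring example (results depend on the
# running Python's unicodedata tables; hypothetical ``completer`` instance):
#
#     completer.unicode_name_matches('\\GREEK SMALL LETTER ETA')
#     # -> ('\\GREEK SMALL LETTER ETA', ['Ξ·'])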
1692
1696
1693
1697
1694 def latex_matches(self, text):
1698 def latex_matches(self, text):
1695 u"""Match Latex syntax for unicode characters.
1699 u"""Match Latex syntax for unicode characters.
1696
1700
1697 This does both ``\\alp`` -> ``\\alpha`` and ``\\alpha`` -> ``Ξ±``
1701 This does both ``\\alp`` -> ``\\alpha`` and ``\\alpha`` -> ``Ξ±``
1698
1702
1699 Used on Python 3 only.
1703 Used on Python 3 only.
1700 """
1704 """
1701 slashpos = text.rfind('\\')
1705 slashpos = text.rfind('\\')
1702 if slashpos > -1:
1706 if slashpos > -1:
1703 s = text[slashpos:]
1707 s = text[slashpos:]
1704 if s in latex_symbols:
1708 if s in latex_symbols:
1705 # Try to complete a full latex symbol to unicode
1709 # Try to complete a full latex symbol to unicode
1706 # \\alpha -> Ξ±
1710 # \\alpha -> Ξ±
1707 return s, [latex_symbols[s]]
1711 return s, [latex_symbols[s]]
1708 else:
1712 else:
1709 # If a user has partially typed a latex symbol, give them
1713 # If a user has partially typed a latex symbol, give them
1710 # a full list of options \al -> [\aleph, \alpha]
1714 # a full list of options \al -> [\aleph, \alpha]
1711 matches = [k for k in latex_symbols if k.startswith(s)]
1715 matches = [k for k in latex_symbols if k.startswith(s)]
1712 return s, matches
1716 return s, matches
1713 return u'', []
1717 return u'', []
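# Illustrative sketch of the two behaviours described in the docstring
# (hypothetical ``completer`` instance):
#
#     completer.latex_matches('\\alpha')   # -> ('\\alpha', ['Ξ±'])
#     completer.latex_matches('\\al')      # -> ('\\al', ['\\aleph', '\\alpha', ...])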
1714
1718
1715 def dispatch_custom_completer(self, text):
1719 def dispatch_custom_completer(self, text):
1716 if not self.custom_completers:
1720 if not self.custom_completers:
1717 return
1721 return
1718
1722
1719 line = self.line_buffer
1723 line = self.line_buffer
1720 if not line.strip():
1724 if not line.strip():
1721 return None
1725 return None
1722
1726
1723 # Create a little structure to pass all the relevant information about
1727 # Create a little structure to pass all the relevant information about
1724 # the current completion to any custom completer.
1728 # the current completion to any custom completer.
1725 event = SimpleNamespace()
1729 event = SimpleNamespace()
1726 event.line = line
1730 event.line = line
1727 event.symbol = text
1731 event.symbol = text
1728 cmd = line.split(None,1)[0]
1732 cmd = line.split(None,1)[0]
1729 event.command = cmd
1733 event.command = cmd
1730 event.text_until_cursor = self.text_until_cursor
1734 event.text_until_cursor = self.text_until_cursor
1731
1735
1732 # for foo etc, try also to find completer for %foo
1736 # for foo etc, try also to find completer for %foo
1733 if not cmd.startswith(self.magic_escape):
1737 if not cmd.startswith(self.magic_escape):
1734 try_magic = self.custom_completers.s_matches(
1738 try_magic = self.custom_completers.s_matches(
1735 self.magic_escape + cmd)
1739 self.magic_escape + cmd)
1736 else:
1740 else:
1737 try_magic = []
1741 try_magic = []
1738
1742
1739 for c in itertools.chain(self.custom_completers.s_matches(cmd),
1743 for c in itertools.chain(self.custom_completers.s_matches(cmd),
1740 try_magic,
1744 try_magic,
1741 self.custom_completers.flat_matches(self.text_until_cursor)):
1745 self.custom_completers.flat_matches(self.text_until_cursor)):
1742 try:
1746 try:
1743 res = c(event)
1747 res = c(event)
1744 if res:
1748 if res:
1745 # first, try case sensitive match
1749 # first, try case sensitive match
1746 withcase = [r for r in res if r.startswith(text)]
1750 withcase = [r for r in res if r.startswith(text)]
1747 if withcase:
1751 if withcase:
1748 return withcase
1752 return withcase
1749 # if none, then case insensitive ones are ok too
1753 # if none, then case insensitive ones are ok too
1750 text_low = text.lower()
1754 text_low = text.lower()
1751 return [r for r in res if r.lower().startswith(text_low)]
1755 return [r for r in res if r.lower().startswith(text_low)]
1752 except TryNext:
1756 except TryNext:
1753 pass
1757 pass
1754 except KeyboardInterrupt:
1758 except KeyboardInterrupt:
1755 """
1759 """
1756 If a custom completer takes too long,
1760 If a custom completer takes too long,
1757 let keyboard interrupt abort and return nothing.
1761 let keyboard interrupt abort and return nothing.
1758 """
1762 """
1759 break
1763 break
1760
1764
1761 return None
1765 return None
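# Illustrative sketch (assumes a running InteractiveShell ``ip``): a completer
# registered for a command receives the ``event`` namespace built above (``line``,
# ``symbol``, ``command``, ``text_until_cursor``) and returns candidate strings.
#
#     def _apt_completer(self, event):
#         return ['install', 'remove', 'update']
#
#     ip.set_hook('complete_command', _apt_completer, str_key='apt')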
1762
1766
1763 def completions(self, text: str, offset: int)->Iterator[Completion]:
1767 def completions(self, text: str, offset: int)->Iterator[Completion]:
1764 """
1768 """
1765 Returns an iterator over the possible completions
1769 Returns an iterator over the possible completions
1766
1770
1767 .. warning:: Unstable
1771 .. warning:: Unstable
1768
1772
1769 This function is unstable, API may change without warning.
1773 This function is unstable, API may change without warning.
1770 It will also raise unless used in a proper context manager.
1774 It will also raise unless used in a proper context manager.
1771
1775
1772 Parameters
1776 Parameters
1773 ----------
1777 ----------
1774
1778
1775 text : str
1779 text : str
1776 Full text of the current input, as a multi-line string.
1780 Full text of the current input, as a multi-line string.
1777 offset : int
1781 offset : int
1778 Integer representing the position of the cursor in ``text``. Offset
1782 Integer representing the position of the cursor in ``text``. Offset
1779 is 0-indexed.
1783 is 0-indexed.
1780
1784
1781 Yields
1785 Yields
1782 ------
1786 ------
1783 :any:`Completion` object
1787 :any:`Completion` object
1784
1788
1785
1789
1786 The cursor in a text can either be seen as being "in between"
1790 The cursor in a text can either be seen as being "in between"
1787 characters or "on" a character, depending on the interface visible to
1791 characters or "on" a character, depending on the interface visible to
1788 the user. For consistency, the cursor being "in between" characters X
1792 the user. For consistency, the cursor being "in between" characters X
1789 and Y is equivalent to the cursor being "on" character Y, that is to say
1793 and Y is equivalent to the cursor being "on" character Y, that is to say
1790 the character the cursor is on is considered as being after the cursor.
1794 the character the cursor is on is considered as being after the cursor.
1791
1795
1792 Combining characters may span more than one position in the
1796 Combining characters may span more than one position in the
1793 text.
1797 text.
1794
1798
1795
1799
1796 .. note::
1800 .. note::
1797
1801
1798 If ``IPCompleter.debug`` is :any:`True`, this will yield a ``--jedi/ipython--``
1802 If ``IPCompleter.debug`` is :any:`True`, this will yield a ``--jedi/ipython--``
1799 fake Completion token to distinguish completion returned by Jedi
1803 fake Completion token to distinguish completion returned by Jedi
1800 and usual IPython completion.
1804 and usual IPython completion.
1801
1805
1802 .. note::
1806 .. note::
1803
1807
1804 Completions are not completely deduplicated yet. If identical
1808 Completions are not completely deduplicated yet. If identical
1805 completions are coming from different sources this function does not
1809 completions are coming from different sources this function does not
1806 ensure that each completion object will only be present once.
1810 ensure that each completion object will only be present once.
1807 """
1811 """
1808 warnings.warn("_complete is a provisional API (as of IPython 6.0). "
1812 warnings.warn("_complete is a provisional API (as of IPython 6.0). "
1809 "It may change without warnings. "
1813 "It may change without warnings. "
1810 "Use in corresponding context manager.",
1814 "Use in corresponding context manager.",
1811 category=ProvisionalCompleterWarning, stacklevel=2)
1815 category=ProvisionalCompleterWarning, stacklevel=2)
1812
1816
1813 seen = set()
1817 seen = set()
1814 try:
1818 try:
1815 for c in self._completions(text, offset, _timeout=self.jedi_compute_type_timeout/1000):
1819 for c in self._completions(text, offset, _timeout=self.jedi_compute_type_timeout/1000):
1816 if c and (c in seen):
1820 if c and (c in seen):
1817 continue
1821 continue
1818 yield c
1822 yield c
1819 seen.add(c)
1823 seen.add(c)
1820 except KeyboardInterrupt:
1824 except KeyboardInterrupt:
1821 """if completions take too long and users send keyboard interrupt,
1825 """if completions take too long and users send keyboard interrupt,
1822 do not crash and return ASAP. """
1826 do not crash and return ASAP. """
1823 pass
1827 pass
1824
1828
1825 def _completions(self, full_text: str, offset: int, *, _timeout)->Iterator[Completion]:
1829 def _completions(self, full_text: str, offset: int, *, _timeout)->Iterator[Completion]:
1826 """
1830 """
1827 Core completion routine. Same signature as :any:`completions`, with the
1831 Core completion routine. Same signature as :any:`completions`, with the
1828 extra ``_timeout`` parameter (in seconds).
1832 extra ``_timeout`` parameter (in seconds).
1829
1833
1830
1834
1831 Computing jedi's completion ``.type`` can be quite expensive (it is a
1835 Computing jedi's completion ``.type`` can be quite expensive (it is a
1832 lazy property) and can require some warm-up, more warm-up than just
1836 lazy property) and can require some warm-up, more warm-up than just
1833 computing the ``name`` of a completion. The warm-up can be:
1837 computing the ``name`` of a completion. The warm-up can be:
1834
1838
1835 - Long warm-up the first time a module is encountered after
1839 - Long warm-up the first time a module is encountered after
1836 install/update: actually build parse/inference tree.
1840 install/update: actually build parse/inference tree.
1837
1841
1838 - First time the module is encountered in a session: load the tree from
1842 - First time the module is encountered in a session: load the tree from
1839 disk.
1843 disk.
1840
1844
1841 We don't want to block completions for tens of seconds, so we give the
1845 We don't want to block completions for tens of seconds, so we give the
1842 completer a "budget" of ``_timeout`` seconds per invocation to compute
1846 completer a "budget" of ``_timeout`` seconds per invocation to compute
1843 completion types; the completions that have not yet been computed will
1847 completion types; the completions that have not yet been computed will
1844 be marked as "unknown" and will have a chance to be computed next round
1848 be marked as "unknown" and will have a chance to be computed next round
1845 as things get cached.
1849 as things get cached.
1846
1850
1847 Keep in mind that Jedi is not the only thing processing the completions,
1851 Keep in mind that Jedi is not the only thing processing the completions,
1848 so keep the timeout short-ish: if we take more than 0.3 seconds we still
1852 so keep the timeout short-ish: if we take more than 0.3 seconds we still
1849 have lots of processing to do.
1853 have lots of processing to do.
1850
1854
1851 """
1855 """
1852 deadline = time.monotonic() + _timeout
1856 deadline = time.monotonic() + _timeout
1853
1857
1854
1858
1855 before = full_text[:offset]
1859 before = full_text[:offset]
1856 cursor_line, cursor_column = position_to_cursor(full_text, offset)
1860 cursor_line, cursor_column = position_to_cursor(full_text, offset)
1857
1861
1858 matched_text, matches, matches_origin, jedi_matches = self._complete(
1862 matched_text, matches, matches_origin, jedi_matches = self._complete(
1859 full_text=full_text, cursor_line=cursor_line, cursor_pos=cursor_column)
1863 full_text=full_text, cursor_line=cursor_line, cursor_pos=cursor_column)
1860
1864
1861 iter_jm = iter(jedi_matches)
1865 iter_jm = iter(jedi_matches)
1862 if _timeout:
1866 if _timeout:
1863 for jm in iter_jm:
1867 for jm in iter_jm:
1864 try:
1868 try:
1865 type_ = jm.type
1869 type_ = jm.type
1866 except Exception:
1870 except Exception:
1867 if self.debug:
1871 if self.debug:
1868 print("Error in Jedi getting type of ", jm)
1872 print("Error in Jedi getting type of ", jm)
1869 type_ = None
1873 type_ = None
1870 delta = len(jm.name_with_symbols) - len(jm.complete)
1874 delta = len(jm.name_with_symbols) - len(jm.complete)
1871 if type_ == 'function':
1875 if type_ == 'function':
1872 signature = _make_signature(jm)
1876 signature = _make_signature(jm)
1873 else:
1877 else:
1874 signature = ''
1878 signature = ''
1875 yield Completion(start=offset - delta,
1879 yield Completion(start=offset - delta,
1876 end=offset,
1880 end=offset,
1877 text=jm.name_with_symbols,
1881 text=jm.name_with_symbols,
1878 type=type_,
1882 type=type_,
1879 signature=signature,
1883 signature=signature,
1880 _origin='jedi')
1884 _origin='jedi')
1881
1885
1882 if time.monotonic() > deadline:
1886 if time.monotonic() > deadline:
1883 break
1887 break
1884
1888
1885 for jm in iter_jm:
1889 for jm in iter_jm:
1886 delta = len(jm.name_with_symbols) - len(jm.complete)
1890 delta = len(jm.name_with_symbols) - len(jm.complete)
1887 yield Completion(start=offset - delta,
1891 yield Completion(start=offset - delta,
1888 end=offset,
1892 end=offset,
1889 text=jm.name_with_symbols,
1893 text=jm.name_with_symbols,
1890 type='<unknown>', # don't compute type for speed
1894 type='<unknown>', # don't compute type for speed
1891 _origin='jedi',
1895 _origin='jedi',
1892 signature='')
1896 signature='')
1893
1897
1894
1898
1895 start_offset = before.rfind(matched_text)
1899 start_offset = before.rfind(matched_text)
1896
1900
1897 # TODO:
1901 # TODO:
1898 # Suppress this, right now just for debug.
1902 # Suppress this, right now just for debug.
1899 if jedi_matches and matches and self.debug:
1903 if jedi_matches and matches and self.debug:
1900 yield Completion(start=start_offset, end=offset, text='--jedi/ipython--',
1904 yield Completion(start=start_offset, end=offset, text='--jedi/ipython--',
1901 _origin='debug', type='none', signature='')
1905 _origin='debug', type='none', signature='')
1902
1906
1903 # I'm unsure if this is always true, so let's assert and see if it
1907 # I'm unsure if this is always true, so let's assert and see if it
1904 # crashes
1908 # crashes
1905 assert before.endswith(matched_text)
1909 assert before.endswith(matched_text)
1906 for m, t in zip(matches, matches_origin):
1910 for m, t in zip(matches, matches_origin):
1907 yield Completion(start=start_offset, end=offset, text=m, _origin=t, signature='', type='<unknown>')
1911 yield Completion(start=start_offset, end=offset, text=m, _origin=t, signature='', type='<unknown>')
1908
1912
1909
1913
1910 def complete(self, text=None, line_buffer=None, cursor_pos=None):
1914 def complete(self, text=None, line_buffer=None, cursor_pos=None):
1911 """Find completions for the given text and line context.
1915 """Find completions for the given text and line context.
1912
1916
1913 Note that both the text and the line_buffer are optional, but at least
1917 Note that both the text and the line_buffer are optional, but at least
1914 one of them must be given.
1918 one of them must be given.
1915
1919
1916 Parameters
1920 Parameters
1917 ----------
1921 ----------
1918 text : string, optional
1922 text : string, optional
1919 Text to perform the completion on. If not given, the line buffer
1923 Text to perform the completion on. If not given, the line buffer
1920 is split using the instance's CompletionSplitter object.
1924 is split using the instance's CompletionSplitter object.
1921
1925
1922 line_buffer : string, optional
1926 line_buffer : string, optional
1923 If not given, the completer attempts to obtain the current line
1927 If not given, the completer attempts to obtain the current line
1924 buffer via readline. This keyword allows clients which are
1928 buffer via readline. This keyword allows clients which are
1925 requesting text completions in non-readline contexts to inform
1929 requesting text completions in non-readline contexts to inform
1926 the completer of the entire text.
1930 the completer of the entire text.
1927
1931
1928 cursor_pos : int, optional
1932 cursor_pos : int, optional
1929 Index of the cursor in the full line buffer. Should be provided by
1933 Index of the cursor in the full line buffer. Should be provided by
1930 remote frontends where the kernel has no access to frontend state.
1934 remote frontends where the kernel has no access to frontend state.
1931
1935
1932 Returns
1936 Returns
1933 -------
1937 -------
1934 text : str
1938 text : str
1935 Text that was actually used in the completion.
1939 Text that was actually used in the completion.
1936
1940
1937 matches : list
1941 matches : list
1938 A list of completion matches.
1942 A list of completion matches.
1939
1943
1940
1944
1941 .. note::
1945 .. note::
1942
1946
1943 This API is likely to be deprecated and replaced by
1947 This API is likely to be deprecated and replaced by
1944 :any:`IPCompleter.completions` in the future.
1948 :any:`IPCompleter.completions` in the future.
1945
1949
1946
1950
1947 """
1951 """
1948 warnings.warn('`Completer.complete` is pending deprecation since '
1952 warnings.warn('`Completer.complete` is pending deprecation since '
1949 'IPython 6.0 and will be replaced by `Completer.completions`.',
1953 'IPython 6.0 and will be replaced by `Completer.completions`.',
1950 PendingDeprecationWarning)
1954 PendingDeprecationWarning)
1951 # potential todo, FOLD the 3rd throwaway argument of _complete
1955 # potential todo, FOLD the 3rd throwaway argument of _complete
1952 # into the first two.
1956 # into the first two.
1953 return self._complete(line_buffer=line_buffer, cursor_pos=cursor_pos, text=text, cursor_line=0)[:2]
1957 return self._complete(line_buffer=line_buffer, cursor_pos=cursor_pos, text=text, cursor_line=0)[:2]
1954
1958
1955 def _complete(self, *, cursor_line, cursor_pos, line_buffer=None, text=None,
1959 def _complete(self, *, cursor_line, cursor_pos, line_buffer=None, text=None,
1956 full_text=None) -> Tuple[str, List[str], List[str], Iterable[_FakeJediCompletion]]:
1960 full_text=None) -> Tuple[str, List[str], List[str], Iterable[_FakeJediCompletion]]:
1957 """
1961 """
1958
1962
1959 Like complete but can also return raw Jedi completions as well as the
1963 Like complete but can also return raw Jedi completions as well as the
1960 origin of the completion text. This could (and should) be made much
1964 origin of the completion text. This could (and should) be made much
1961 cleaner, but that will be simpler once we drop the old (and stateful)
1965 cleaner, but that will be simpler once we drop the old (and stateful)
1962 :any:`complete` API.
1966 :any:`complete` API.
1963
1967
1964
1968
1965 With the current provisional API, ``cursor_pos`` acts (depending on the
1969 With the current provisional API, ``cursor_pos`` acts (depending on the
1966 caller) either as the offset in the ``text`` or ``line_buffer``, or as the
1970 caller) either as the offset in the ``text`` or ``line_buffer``, or as the
1967 ``column`` when passing multiline strings; this could/should be renamed
1971 ``column`` when passing multiline strings; this could/should be renamed
1968 but would add extra noise.
1972 but would add extra noise.
1969 """
1973 """
1970
1974
1971 # if the cursor position isn't given, the only sane assumption we can
1975 # if the cursor position isn't given, the only sane assumption we can
1972 # make is that it's at the end of the line (the common case)
1976 # make is that it's at the end of the line (the common case)
1973 if cursor_pos is None:
1977 if cursor_pos is None:
1974 cursor_pos = len(line_buffer) if text is None else len(text)
1978 cursor_pos = len(line_buffer) if text is None else len(text)
1975
1979
1976 if self.use_main_ns:
1980 if self.use_main_ns:
1977 self.namespace = __main__.__dict__
1981 self.namespace = __main__.__dict__
1978
1982
1979 # if text is either None or an empty string, rely on the line buffer
1983 # if text is either None or an empty string, rely on the line buffer
1980 if (not line_buffer) and full_text:
1984 if (not line_buffer) and full_text:
1981 line_buffer = full_text.split('\n')[cursor_line]
1985 line_buffer = full_text.split('\n')[cursor_line]
1982 if not text:
1986 if not text:
1983 text = self.splitter.split_line(line_buffer, cursor_pos)
1987 text = self.splitter.split_line(line_buffer, cursor_pos)
1984
1988
1985 if self.backslash_combining_completions:
1989 if self.backslash_combining_completions:
1986 # allow deactivation of these on windows.
1990 # allow deactivation of these on windows.
1987 base_text = text if not line_buffer else line_buffer[:cursor_pos]
1991 base_text = text if not line_buffer else line_buffer[:cursor_pos]
1988 latex_text, latex_matches = self.latex_matches(base_text)
1992 latex_text, latex_matches = self.latex_matches(base_text)
1989 if latex_matches:
1993 if latex_matches:
1990 return latex_text, latex_matches, ['latex_matches']*len(latex_matches), ()
1994 return latex_text, latex_matches, ['latex_matches']*len(latex_matches), ()
1991 name_text = ''
1995 name_text = ''
1992 name_matches = []
1996 name_matches = []
1993 # need to add self.fwd_unicode_match() function here when done
1997 # need to add self.fwd_unicode_match() function here when done
1994 for meth in (self.unicode_name_matches, back_latex_name_matches, back_unicode_name_matches, self.fwd_unicode_match):
1998 for meth in (self.unicode_name_matches, back_latex_name_matches, back_unicode_name_matches, self.fwd_unicode_match):
1995 name_text, name_matches = meth(base_text)
1999 name_text, name_matches = meth(base_text)
1996 if name_text:
2000 if name_text:
1997 return name_text, name_matches[:MATCHES_LIMIT], \
2001 return name_text, name_matches[:MATCHES_LIMIT], \
1998 [meth.__qualname__]*min(len(name_matches), MATCHES_LIMIT), ()
2002 [meth.__qualname__]*min(len(name_matches), MATCHES_LIMIT), ()
1999
2003
2000
2004
2001 # If no line buffer is given, assume the input text is all there was
2005 # If no line buffer is given, assume the input text is all there was
2002 if line_buffer is None:
2006 if line_buffer is None:
2003 line_buffer = text
2007 line_buffer = text
2004
2008
2005 self.line_buffer = line_buffer
2009 self.line_buffer = line_buffer
2006 self.text_until_cursor = self.line_buffer[:cursor_pos]
2010 self.text_until_cursor = self.line_buffer[:cursor_pos]
2007
2011
2008 # Do magic arg matches
2012 # Do magic arg matches
2009 for matcher in self.magic_arg_matchers:
2013 for matcher in self.magic_arg_matchers:
2010 matches = list(matcher(line_buffer))[:MATCHES_LIMIT]
2014 matches = list(matcher(line_buffer))[:MATCHES_LIMIT]
2011 if matches:
2015 if matches:
2012 origins = [matcher.__qualname__] * len(matches)
2016 origins = [matcher.__qualname__] * len(matches)
2013 return text, matches, origins, ()
2017 return text, matches, origins, ()
2014
2018
2015 # Start with a clean slate of completions
2019 # Start with a clean slate of completions
2016 matches = []
2020 matches = []
2017
2021
2018 # FIXME: we should extend our api to return a dict with completions for
2022 # FIXME: we should extend our api to return a dict with completions for
2019 # different types of objects. The rlcomplete() method could then
2023 # different types of objects. The rlcomplete() method could then
2020 # simply collapse the dict into a list for readline, but we'd have
2024 # simply collapse the dict into a list for readline, but we'd have
2021 # richer completion semantics in other environments.
2025 # richer completion semantics in other environments.
2022 completions = ()
2026 completions = ()
2023 if self.use_jedi:
2027 if self.use_jedi:
2024 if not full_text:
2028 if not full_text:
2025 full_text = line_buffer
2029 full_text = line_buffer
2026 completions = self._jedi_matches(
2030 completions = self._jedi_matches(
2027 cursor_pos, cursor_line, full_text)
2031 cursor_pos, cursor_line, full_text)
2028
2032
2029 if self.merge_completions:
2033 if self.merge_completions:
2030 matches = []
2034 matches = []
2031 for matcher in self.matchers:
2035 for matcher in self.matchers:
2032 try:
2036 try:
2033 matches.extend([(m, matcher.__qualname__)
2037 matches.extend([(m, matcher.__qualname__)
2034 for m in matcher(text)])
2038 for m in matcher(text)])
2035 except:
2039 except:
2036 # Show the ugly traceback if the matcher causes an
2040 # Show the ugly traceback if the matcher causes an
2037 # exception, but do NOT crash the kernel!
2041 # exception, but do NOT crash the kernel!
2038 sys.excepthook(*sys.exc_info())
2042 sys.excepthook(*sys.exc_info())
2039 else:
2043 else:
2040 for matcher in self.matchers:
2044 for matcher in self.matchers:
2041 matches = [(m, matcher.__qualname__)
2045 matches = [(m, matcher.__qualname__)
2042 for m in matcher(text)]
2046 for m in matcher(text)]
2043 if matches:
2047 if matches:
2044 break
2048 break
2045
2049
2046 seen = set()
2050 seen = set()
2047 filtered_matches = set()
2051 filtered_matches = set()
2048 for m in matches:
2052 for m in matches:
2049 t, c = m
2053 t, c = m
2050 if t not in seen:
2054 if t not in seen:
2051 filtered_matches.add(m)
2055 filtered_matches.add(m)
2052 seen.add(t)
2056 seen.add(t)
2053
2057
2054 _filtered_matches = sorted(filtered_matches, key=lambda x: completions_sorting_key(x[0]))
2058 _filtered_matches = sorted(filtered_matches, key=lambda x: completions_sorting_key(x[0]))
2055
2059
2056 custom_res = [(m, 'custom') for m in self.dispatch_custom_completer(text) or []]
2060 custom_res = [(m, 'custom') for m in self.dispatch_custom_completer(text) or []]
2057
2061
2058 _filtered_matches = custom_res or _filtered_matches
2062 _filtered_matches = custom_res or _filtered_matches
2059
2063
2060 _filtered_matches = _filtered_matches[:MATCHES_LIMIT]
2064 _filtered_matches = _filtered_matches[:MATCHES_LIMIT]
2061 _matches = [m[0] for m in _filtered_matches]
2065 _matches = [m[0] for m in _filtered_matches]
2062 origins = [m[1] for m in _filtered_matches]
2066 origins = [m[1] for m in _filtered_matches]
2063
2067
2064 self.matches = _matches
2068 self.matches = _matches
2065
2069
2066 return text, _matches, origins, completions
2070 return text, _matches, origins, completions
2067
2071
2068 def fwd_unicode_match(self, text:str) -> Tuple[str, list]:
2072 def fwd_unicode_match(self, text:str) -> Tuple[str, list]:
2069 if self._names is None:
2073 if self._names is None:
2070 self._names = []
2074 self._names = []
2071 for c in range(0,0x10FFFF + 1):
2075 for c in range(0,0x10FFFF + 1):
2072 try:
2076 try:
2073 self._names.append(unicodedata.name(chr(c)))
2077 self._names.append(unicodedata.name(chr(c)))
2074 except ValueError:
2078 except ValueError:
2075 pass
2079 pass
2076
2080
2077 slashpos = text.rfind('\\')
2081 slashpos = text.rfind('\\')
2078 # if text contains a backslash
2082 # if text contains a backslash
2079 if slashpos > -1:
2083 if slashpos > -1:
2080 s = text[slashpos+1:]
2084 s = text[slashpos+1:]
2081 candidates = [x for x in self._names if x.startswith(s)]
2085 candidates = [x for x in self._names if x.startswith(s)]
2082 if candidates:
2086 if candidates:
2083 return s, candidates
2087 return s, candidates
2084 else:
2088 else:
2085 return '', ()
2089 return '', ()
2086
2090
2087 # if text does not contain a backslash
2091 # if text does not contain a backslash
2088 else:
2092 else:
2089 return u'', ()
2093 return u'', ()
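For orientation, here is a minimal usage sketch of the two completer entry points whose docstrings appear above; it is not part of this changeset. It assumes a running IPython session (``ip = get_ipython()``) and the ``provisionalcompleter`` context manager exported by ``IPython.core.completer``, which silences the ``ProvisionalCompleterWarning`` that ``completions`` otherwise emits. Offsets are 0-based and point at the cursor position, matching the semantics described in the docstring.

.. code::

    from IPython.core.completer import provisionalcompleter

    ip = get_ipython()                      # running IPython session assumed
    code = "import collections; collections.na"

    # Provisional API: yields Completion objects with start/end/text/type.
    # The per-invocation budget for computing Jedi types comes from the
    # completer's jedi_compute_type_timeout trait (milliseconds).
    with provisionalcompleter():
        for comp in ip.Completer.completions(code, len(code)):
            print(comp.start, comp.end, comp.text, comp.type)

    # Legacy API (pending deprecation): returns (matched_text, matches).
    matched_text, matches = ip.Completer.complete(line_buffer=code,
                                                  cursor_pos=len(code))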
@@ -1,1000 +1,1018 b''
1 # -*- coding: utf-8 -*-
1 # -*- coding: utf-8 -*-
2 """Tests for the key interactiveshell module.
2 """Tests for the key interactiveshell module.
3
3
4 Historically the main classes in interactiveshell have been under-tested. This
4 Historically the main classes in interactiveshell have been under-tested. This
5 module should grow as many single-method tests as possible to trap many of the
5 module should grow as many single-method tests as possible to trap many of the
6 recurring bugs we seem to encounter with high-level interaction.
6 recurring bugs we seem to encounter with high-level interaction.
7 """
7 """
8
8
9 # Copyright (c) IPython Development Team.
9 # Copyright (c) IPython Development Team.
10 # Distributed under the terms of the Modified BSD License.
10 # Distributed under the terms of the Modified BSD License.
11
11
12 import asyncio
12 import asyncio
13 import ast
13 import ast
14 import os
14 import os
15 import signal
15 import signal
16 import shutil
16 import shutil
17 import sys
17 import sys
18 import tempfile
18 import tempfile
19 import unittest
19 import unittest
20 from unittest import mock
20 from unittest import mock
21
21
22 from os.path import join
22 from os.path import join
23
23
24 import nose.tools as nt
24 import nose.tools as nt
25
25
26 from IPython.core.error import InputRejected
26 from IPython.core.error import InputRejected
27 from IPython.core.inputtransformer import InputTransformer
27 from IPython.core.inputtransformer import InputTransformer
28 from IPython.core import interactiveshell
28 from IPython.core import interactiveshell
29 from IPython.testing.decorators import (
29 from IPython.testing.decorators import (
30 skipif, skip_win32, onlyif_unicode_paths, onlyif_cmds_exist,
30 skipif, skip_win32, onlyif_unicode_paths, onlyif_cmds_exist,
31 )
31 )
32 from IPython.testing import tools as tt
32 from IPython.testing import tools as tt
33 from IPython.utils.process import find_cmd
33 from IPython.utils.process import find_cmd
34
34
35 #-----------------------------------------------------------------------------
35 #-----------------------------------------------------------------------------
36 # Globals
36 # Globals
37 #-----------------------------------------------------------------------------
37 #-----------------------------------------------------------------------------
38 # This is used by every single test, no point repeating it ad nauseam
38 # This is used by every single test, no point repeating it ad nauseam
39
39
40 #-----------------------------------------------------------------------------
40 #-----------------------------------------------------------------------------
41 # Tests
41 # Tests
42 #-----------------------------------------------------------------------------
42 #-----------------------------------------------------------------------------
43
43
44 class DerivedInterrupt(KeyboardInterrupt):
44 class DerivedInterrupt(KeyboardInterrupt):
45 pass
45 pass
46
46
47 class InteractiveShellTestCase(unittest.TestCase):
47 class InteractiveShellTestCase(unittest.TestCase):
48 def test_naked_string_cells(self):
48 def test_naked_string_cells(self):
49 """Test that cells with only naked strings are fully executed"""
49 """Test that cells with only naked strings are fully executed"""
50 # First, single-line inputs
50 # First, single-line inputs
51 ip.run_cell('"a"\n')
51 ip.run_cell('"a"\n')
52 self.assertEqual(ip.user_ns['_'], 'a')
52 self.assertEqual(ip.user_ns['_'], 'a')
53 # And also multi-line cells
53 # And also multi-line cells
54 ip.run_cell('"""a\nb"""\n')
54 ip.run_cell('"""a\nb"""\n')
55 self.assertEqual(ip.user_ns['_'], 'a\nb')
55 self.assertEqual(ip.user_ns['_'], 'a\nb')
56
56
57 def test_run_empty_cell(self):
57 def test_run_empty_cell(self):
58 """Just make sure we don't get a horrible error with a blank
58 """Just make sure we don't get a horrible error with a blank
59 cell of input. Yes, I did overlook that."""
59 cell of input. Yes, I did overlook that."""
60 old_xc = ip.execution_count
60 old_xc = ip.execution_count
61 res = ip.run_cell('')
61 res = ip.run_cell('')
62 self.assertEqual(ip.execution_count, old_xc)
62 self.assertEqual(ip.execution_count, old_xc)
63 self.assertEqual(res.execution_count, None)
63 self.assertEqual(res.execution_count, None)
64
64
65 def test_run_cell_multiline(self):
65 def test_run_cell_multiline(self):
66 """Multi-block, multi-line cells must execute correctly.
66 """Multi-block, multi-line cells must execute correctly.
67 """
67 """
68 src = '\n'.join(["x=1",
68 src = '\n'.join(["x=1",
69 "y=2",
69 "y=2",
70 "if 1:",
70 "if 1:",
71 " x += 1",
71 " x += 1",
72 " y += 1",])
72 " y += 1",])
73 res = ip.run_cell(src)
73 res = ip.run_cell(src)
74 self.assertEqual(ip.user_ns['x'], 2)
74 self.assertEqual(ip.user_ns['x'], 2)
75 self.assertEqual(ip.user_ns['y'], 3)
75 self.assertEqual(ip.user_ns['y'], 3)
76 self.assertEqual(res.success, True)
76 self.assertEqual(res.success, True)
77 self.assertEqual(res.result, None)
77 self.assertEqual(res.result, None)
78
78
79 def test_multiline_string_cells(self):
79 def test_multiline_string_cells(self):
80 "Code sprinkled with multiline strings should execute (GH-306)"
80 "Code sprinkled with multiline strings should execute (GH-306)"
81 ip.run_cell('tmp=0')
81 ip.run_cell('tmp=0')
82 self.assertEqual(ip.user_ns['tmp'], 0)
82 self.assertEqual(ip.user_ns['tmp'], 0)
83 res = ip.run_cell('tmp=1;"""a\nb"""\n')
83 res = ip.run_cell('tmp=1;"""a\nb"""\n')
84 self.assertEqual(ip.user_ns['tmp'], 1)
84 self.assertEqual(ip.user_ns['tmp'], 1)
85 self.assertEqual(res.success, True)
85 self.assertEqual(res.success, True)
86 self.assertEqual(res.result, "a\nb")
86 self.assertEqual(res.result, "a\nb")
87
87
88 def test_dont_cache_with_semicolon(self):
88 def test_dont_cache_with_semicolon(self):
89 "Ending a line with semicolon should not cache the returned object (GH-307)"
89 "Ending a line with semicolon should not cache the returned object (GH-307)"
90 oldlen = len(ip.user_ns['Out'])
90 oldlen = len(ip.user_ns['Out'])
91 for cell in ['1;', '1;1;']:
91 for cell in ['1;', '1;1;']:
92 res = ip.run_cell(cell, store_history=True)
92 res = ip.run_cell(cell, store_history=True)
93 newlen = len(ip.user_ns['Out'])
93 newlen = len(ip.user_ns['Out'])
94 self.assertEqual(oldlen, newlen)
94 self.assertEqual(oldlen, newlen)
95 self.assertIsNone(res.result)
95 self.assertIsNone(res.result)
96 i = 0
96 i = 0
97 #also test the default caching behavior
97 #also test the default caching behavior
98 for cell in ['1', '1;1']:
98 for cell in ['1', '1;1']:
99 ip.run_cell(cell, store_history=True)
99 ip.run_cell(cell, store_history=True)
100 newlen = len(ip.user_ns['Out'])
100 newlen = len(ip.user_ns['Out'])
101 i += 1
101 i += 1
102 self.assertEqual(oldlen+i, newlen)
102 self.assertEqual(oldlen+i, newlen)
103
103
104 def test_syntax_error(self):
104 def test_syntax_error(self):
105 res = ip.run_cell("raise = 3")
105 res = ip.run_cell("raise = 3")
106 self.assertIsInstance(res.error_before_exec, SyntaxError)
106 self.assertIsInstance(res.error_before_exec, SyntaxError)
107
107
108 def test_In_variable(self):
108 def test_In_variable(self):
109 "Verify that In variable grows with user input (GH-284)"
109 "Verify that In variable grows with user input (GH-284)"
110 oldlen = len(ip.user_ns['In'])
110 oldlen = len(ip.user_ns['In'])
111 ip.run_cell('1;', store_history=True)
111 ip.run_cell('1;', store_history=True)
112 newlen = len(ip.user_ns['In'])
112 newlen = len(ip.user_ns['In'])
113 self.assertEqual(oldlen+1, newlen)
113 self.assertEqual(oldlen+1, newlen)
114 self.assertEqual(ip.user_ns['In'][-1],'1;')
114 self.assertEqual(ip.user_ns['In'][-1],'1;')
115
115
116 def test_magic_names_in_string(self):
116 def test_magic_names_in_string(self):
117 ip.run_cell('a = """\n%exit\n"""')
117 ip.run_cell('a = """\n%exit\n"""')
118 self.assertEqual(ip.user_ns['a'], '\n%exit\n')
118 self.assertEqual(ip.user_ns['a'], '\n%exit\n')
119
119
120 def test_trailing_newline(self):
120 def test_trailing_newline(self):
121 """test that running !(command) does not raise a SyntaxError"""
121 """test that running !(command) does not raise a SyntaxError"""
122 ip.run_cell('!(true)\n', False)
122 ip.run_cell('!(true)\n', False)
123 ip.run_cell('!(true)\n\n\n', False)
123 ip.run_cell('!(true)\n\n\n', False)
124
124
125 def test_gh_597(self):
125 def test_gh_597(self):
126 """Pretty-printing lists of objects with non-ascii reprs may cause
126 """Pretty-printing lists of objects with non-ascii reprs may cause
127 problems."""
127 problems."""
128 class Spam(object):
128 class Spam(object):
129 def __repr__(self):
129 def __repr__(self):
130 return "\xe9"*50
130 return "\xe9"*50
131 import IPython.core.formatters
131 import IPython.core.formatters
132 f = IPython.core.formatters.PlainTextFormatter()
132 f = IPython.core.formatters.PlainTextFormatter()
133 f([Spam(),Spam()])
133 f([Spam(),Spam()])
134
134
135
135
136 def test_future_flags(self):
136 def test_future_flags(self):
137 """Check that future flags are used for parsing code (gh-777)"""
137 """Check that future flags are used for parsing code (gh-777)"""
138 ip.run_cell('from __future__ import barry_as_FLUFL')
138 ip.run_cell('from __future__ import barry_as_FLUFL')
139 try:
139 try:
140 ip.run_cell('prfunc_return_val = 1 <> 2')
140 ip.run_cell('prfunc_return_val = 1 <> 2')
141 assert 'prfunc_return_val' in ip.user_ns
141 assert 'prfunc_return_val' in ip.user_ns
142 finally:
142 finally:
143 # Reset compiler flags so we don't mess up other tests.
143 # Reset compiler flags so we don't mess up other tests.
144 ip.compile.reset_compiler_flags()
144 ip.compile.reset_compiler_flags()
145
145
146 def test_can_pickle(self):
146 def test_can_pickle(self):
147 "Can we pickle objects defined interactively (GH-29)"
147 "Can we pickle objects defined interactively (GH-29)"
148 ip = get_ipython()
148 ip = get_ipython()
149 ip.reset()
149 ip.reset()
150 ip.run_cell(("class Mylist(list):\n"
150 ip.run_cell(("class Mylist(list):\n"
151 " def __init__(self,x=[]):\n"
151 " def __init__(self,x=[]):\n"
152 " list.__init__(self,x)"))
152 " list.__init__(self,x)"))
153 ip.run_cell("w=Mylist([1,2,3])")
153 ip.run_cell("w=Mylist([1,2,3])")
154
154
155 from pickle import dumps
155 from pickle import dumps
156
156
157 # We need to swap in our main module - this is only necessary
157 # We need to swap in our main module - this is only necessary
158 # inside the test framework, because IPython puts the interactive module
158 # inside the test framework, because IPython puts the interactive module
159 # in place (but the test framework undoes this).
159 # in place (but the test framework undoes this).
160 _main = sys.modules['__main__']
160 _main = sys.modules['__main__']
161 sys.modules['__main__'] = ip.user_module
161 sys.modules['__main__'] = ip.user_module
162 try:
162 try:
163 res = dumps(ip.user_ns["w"])
163 res = dumps(ip.user_ns["w"])
164 finally:
164 finally:
165 sys.modules['__main__'] = _main
165 sys.modules['__main__'] = _main
166 self.assertTrue(isinstance(res, bytes))
166 self.assertTrue(isinstance(res, bytes))
167
167
168 def test_global_ns(self):
168 def test_global_ns(self):
169 "Code in functions must be able to access variables outside them."
169 "Code in functions must be able to access variables outside them."
170 ip = get_ipython()
170 ip = get_ipython()
171 ip.run_cell("a = 10")
171 ip.run_cell("a = 10")
172 ip.run_cell(("def f(x):\n"
172 ip.run_cell(("def f(x):\n"
173 " return x + a"))
173 " return x + a"))
174 ip.run_cell("b = f(12)")
174 ip.run_cell("b = f(12)")
175 self.assertEqual(ip.user_ns["b"], 22)
175 self.assertEqual(ip.user_ns["b"], 22)
176
176
177 def test_bad_custom_tb(self):
177 def test_bad_custom_tb(self):
178 """Check that InteractiveShell is protected from bad custom exception handlers"""
178 """Check that InteractiveShell is protected from bad custom exception handlers"""
179 ip.set_custom_exc((IOError,), lambda etype,value,tb: 1/0)
179 ip.set_custom_exc((IOError,), lambda etype,value,tb: 1/0)
180 self.assertEqual(ip.custom_exceptions, (IOError,))
180 self.assertEqual(ip.custom_exceptions, (IOError,))
181 with tt.AssertPrints("Custom TB Handler failed", channel='stderr'):
181 with tt.AssertPrints("Custom TB Handler failed", channel='stderr'):
182 ip.run_cell(u'raise IOError("foo")')
182 ip.run_cell(u'raise IOError("foo")')
183 self.assertEqual(ip.custom_exceptions, ())
183 self.assertEqual(ip.custom_exceptions, ())
184
184
185 def test_bad_custom_tb_return(self):
185 def test_bad_custom_tb_return(self):
186 """Check that InteractiveShell is protected from bad return types in custom exception handlers"""
186 """Check that InteractiveShell is protected from bad return types in custom exception handlers"""
187 ip.set_custom_exc((NameError,),lambda etype,value,tb, tb_offset=None: 1)
187 ip.set_custom_exc((NameError,),lambda etype,value,tb, tb_offset=None: 1)
188 self.assertEqual(ip.custom_exceptions, (NameError,))
188 self.assertEqual(ip.custom_exceptions, (NameError,))
189 with tt.AssertPrints("Custom TB Handler failed", channel='stderr'):
189 with tt.AssertPrints("Custom TB Handler failed", channel='stderr'):
190 ip.run_cell(u'a=abracadabra')
190 ip.run_cell(u'a=abracadabra')
191 self.assertEqual(ip.custom_exceptions, ())
191 self.assertEqual(ip.custom_exceptions, ())
192
192
193 def test_drop_by_id(self):
193 def test_drop_by_id(self):
194 myvars = {"a":object(), "b":object(), "c": object()}
194 myvars = {"a":object(), "b":object(), "c": object()}
195 ip.push(myvars, interactive=False)
195 ip.push(myvars, interactive=False)
196 for name in myvars:
196 for name in myvars:
197 assert name in ip.user_ns, name
197 assert name in ip.user_ns, name
198 assert name in ip.user_ns_hidden, name
198 assert name in ip.user_ns_hidden, name
199 ip.user_ns['b'] = 12
199 ip.user_ns['b'] = 12
200 ip.drop_by_id(myvars)
200 ip.drop_by_id(myvars)
201 for name in ["a", "c"]:
201 for name in ["a", "c"]:
202 assert name not in ip.user_ns, name
202 assert name not in ip.user_ns, name
203 assert name not in ip.user_ns_hidden, name
203 assert name not in ip.user_ns_hidden, name
204 assert ip.user_ns['b'] == 12
204 assert ip.user_ns['b'] == 12
205 ip.reset()
205 ip.reset()
206
206
207 def test_var_expand(self):
207 def test_var_expand(self):
208 ip.user_ns['f'] = u'Ca\xf1o'
208 ip.user_ns['f'] = u'Ca\xf1o'
209 self.assertEqual(ip.var_expand(u'echo $f'), u'echo Ca\xf1o')
209 self.assertEqual(ip.var_expand(u'echo $f'), u'echo Ca\xf1o')
210 self.assertEqual(ip.var_expand(u'echo {f}'), u'echo Ca\xf1o')
210 self.assertEqual(ip.var_expand(u'echo {f}'), u'echo Ca\xf1o')
211 self.assertEqual(ip.var_expand(u'echo {f[:-1]}'), u'echo Ca\xf1')
211 self.assertEqual(ip.var_expand(u'echo {f[:-1]}'), u'echo Ca\xf1')
212 self.assertEqual(ip.var_expand(u'echo {1*2}'), u'echo 2')
212 self.assertEqual(ip.var_expand(u'echo {1*2}'), u'echo 2')
213
213
214 self.assertEqual(ip.var_expand(u"grep x | awk '{print $1}'"), u"grep x | awk '{print $1}'")
214 self.assertEqual(ip.var_expand(u"grep x | awk '{print $1}'"), u"grep x | awk '{print $1}'")
215
215
216 ip.user_ns['f'] = b'Ca\xc3\xb1o'
216 ip.user_ns['f'] = b'Ca\xc3\xb1o'
217 # This should not raise any exception:
217 # This should not raise any exception:
218 ip.var_expand(u'echo $f')
218 ip.var_expand(u'echo $f')
219
219
220 def test_var_expand_local(self):
220 def test_var_expand_local(self):
221 """Test local variable expansion in !system and %magic calls"""
221 """Test local variable expansion in !system and %magic calls"""
222 # !system
222 # !system
223 ip.run_cell('def test():\n'
223 ip.run_cell('def test():\n'
224 ' lvar = "ttt"\n'
224 ' lvar = "ttt"\n'
225 ' ret = !echo {lvar}\n'
225 ' ret = !echo {lvar}\n'
226 ' return ret[0]\n')
226 ' return ret[0]\n')
227 res = ip.user_ns['test']()
227 res = ip.user_ns['test']()
228 nt.assert_in('ttt', res)
228 nt.assert_in('ttt', res)
229
229
230 # %magic
230 # %magic
231 ip.run_cell('def makemacro():\n'
231 ip.run_cell('def makemacro():\n'
232 ' macroname = "macro_var_expand_locals"\n'
232 ' macroname = "macro_var_expand_locals"\n'
233 ' %macro {macroname} codestr\n')
233 ' %macro {macroname} codestr\n')
234 ip.user_ns['codestr'] = "str(12)"
234 ip.user_ns['codestr'] = "str(12)"
235 ip.run_cell('makemacro()')
235 ip.run_cell('makemacro()')
236 nt.assert_in('macro_var_expand_locals', ip.user_ns)
236 nt.assert_in('macro_var_expand_locals', ip.user_ns)
237
237
238 def test_var_expand_self(self):
238 def test_var_expand_self(self):
239 """Test variable expansion with the name 'self', which was failing.
239 """Test variable expansion with the name 'self', which was failing.
240
240
241 See https://github.com/ipython/ipython/issues/1878#issuecomment-7698218
241 See https://github.com/ipython/ipython/issues/1878#issuecomment-7698218
242 """
242 """
243 ip.run_cell('class cTest:\n'
243 ip.run_cell('class cTest:\n'
244 ' classvar="see me"\n'
244 ' classvar="see me"\n'
245 ' def test(self):\n'
245 ' def test(self):\n'
246 ' res = !echo Variable: {self.classvar}\n'
246 ' res = !echo Variable: {self.classvar}\n'
247 ' return res[0]\n')
247 ' return res[0]\n')
248 nt.assert_in('see me', ip.user_ns['cTest']().test())
248 nt.assert_in('see me', ip.user_ns['cTest']().test())
249
249
250 def test_bad_var_expand(self):
250 def test_bad_var_expand(self):
251 """var_expand on invalid formats shouldn't raise"""
251 """var_expand on invalid formats shouldn't raise"""
252 # SyntaxError
252 # SyntaxError
253 self.assertEqual(ip.var_expand(u"{'a':5}"), u"{'a':5}")
253 self.assertEqual(ip.var_expand(u"{'a':5}"), u"{'a':5}")
254 # NameError
254 # NameError
255 self.assertEqual(ip.var_expand(u"{asdf}"), u"{asdf}")
255 self.assertEqual(ip.var_expand(u"{asdf}"), u"{asdf}")
256 # ZeroDivisionError
256 # ZeroDivisionError
257 self.assertEqual(ip.var_expand(u"{1/0}"), u"{1/0}")
257 self.assertEqual(ip.var_expand(u"{1/0}"), u"{1/0}")
258
258
259 def test_silent_postexec(self):
259 def test_silent_postexec(self):
260 """run_cell(silent=True) doesn't invoke pre/post_run_cell callbacks"""
260 """run_cell(silent=True) doesn't invoke pre/post_run_cell callbacks"""
261 pre_explicit = mock.Mock()
261 pre_explicit = mock.Mock()
262 pre_always = mock.Mock()
262 pre_always = mock.Mock()
263 post_explicit = mock.Mock()
263 post_explicit = mock.Mock()
264 post_always = mock.Mock()
264 post_always = mock.Mock()
265 all_mocks = [pre_explicit, pre_always, post_explicit, post_always]
265 all_mocks = [pre_explicit, pre_always, post_explicit, post_always]
266
266
267 ip.events.register('pre_run_cell', pre_explicit)
267 ip.events.register('pre_run_cell', pre_explicit)
268 ip.events.register('pre_execute', pre_always)
268 ip.events.register('pre_execute', pre_always)
269 ip.events.register('post_run_cell', post_explicit)
269 ip.events.register('post_run_cell', post_explicit)
270 ip.events.register('post_execute', post_always)
270 ip.events.register('post_execute', post_always)
271
271
272 try:
272 try:
273 ip.run_cell("1", silent=True)
273 ip.run_cell("1", silent=True)
274 assert pre_always.called
274 assert pre_always.called
275 assert not pre_explicit.called
275 assert not pre_explicit.called
276 assert post_always.called
276 assert post_always.called
277 assert not post_explicit.called
277 assert not post_explicit.called
278 # double-check that non-silent exec did what we expected
278 # double-check that non-silent exec did what we expected
279 # silent to avoid
279 # silent to avoid
280 ip.run_cell("1")
280 ip.run_cell("1")
281 assert pre_explicit.called
281 assert pre_explicit.called
282 assert post_explicit.called
282 assert post_explicit.called
283 info, = pre_explicit.call_args[0]
283 info, = pre_explicit.call_args[0]
284 result, = post_explicit.call_args[0]
284 result, = post_explicit.call_args[0]
285 self.assertEqual(info, result.info)
285 self.assertEqual(info, result.info)
286 # check that post hooks are always called
286 # check that post hooks are always called
287 [m.reset_mock() for m in all_mocks]
287 [m.reset_mock() for m in all_mocks]
288 ip.run_cell("syntax error")
288 ip.run_cell("syntax error")
289 assert pre_always.called
289 assert pre_always.called
290 assert pre_explicit.called
290 assert pre_explicit.called
291 assert post_always.called
291 assert post_always.called
292 assert post_explicit.called
292 assert post_explicit.called
293 info, = pre_explicit.call_args[0]
293 info, = pre_explicit.call_args[0]
294 result, = post_explicit.call_args[0]
294 result, = post_explicit.call_args[0]
295 self.assertEqual(info, result.info)
295 self.assertEqual(info, result.info)
296 finally:
296 finally:
297 # remove post-exec
297 # remove post-exec
298 ip.events.unregister('pre_run_cell', pre_explicit)
298 ip.events.unregister('pre_run_cell', pre_explicit)
299 ip.events.unregister('pre_execute', pre_always)
299 ip.events.unregister('pre_execute', pre_always)
300 ip.events.unregister('post_run_cell', post_explicit)
300 ip.events.unregister('post_run_cell', post_explicit)
301 ip.events.unregister('post_execute', post_always)
301 ip.events.unregister('post_execute', post_always)
302
302
303 def test_silent_noadvance(self):
303 def test_silent_noadvance(self):
304 """run_cell(silent=True) doesn't advance execution_count"""
304 """run_cell(silent=True) doesn't advance execution_count"""
305 ec = ip.execution_count
305 ec = ip.execution_count
306 # silent should force store_history=False
306 # silent should force store_history=False
307 ip.run_cell("1", store_history=True, silent=True)
307 ip.run_cell("1", store_history=True, silent=True)
308
308
309 self.assertEqual(ec, ip.execution_count)
309 self.assertEqual(ec, ip.execution_count)
310 # double-check that non-silent exec did what we expected
310 # double-check that non-silent exec did what we expected
311 # silent to avoid
311 # silent to avoid
312 ip.run_cell("1", store_history=True)
312 ip.run_cell("1", store_history=True)
313 self.assertEqual(ec+1, ip.execution_count)
313 self.assertEqual(ec+1, ip.execution_count)
314
314
315 def test_silent_nodisplayhook(self):
315 def test_silent_nodisplayhook(self):
316 """run_cell(silent=True) doesn't trigger displayhook"""
316 """run_cell(silent=True) doesn't trigger displayhook"""
317 d = dict(called=False)
317 d = dict(called=False)
318
318
319 trap = ip.display_trap
319 trap = ip.display_trap
320 save_hook = trap.hook
320 save_hook = trap.hook
321
321
322 def failing_hook(*args, **kwargs):
322 def failing_hook(*args, **kwargs):
323 d['called'] = True
323 d['called'] = True
324
324
325 try:
325 try:
326 trap.hook = failing_hook
326 trap.hook = failing_hook
327 res = ip.run_cell("1", silent=True)
327 res = ip.run_cell("1", silent=True)
328 self.assertFalse(d['called'])
328 self.assertFalse(d['called'])
329 self.assertIsNone(res.result)
329 self.assertIsNone(res.result)
330 # double-check that non-silent exec did what we expected
330 # double-check that non-silent exec did what we expected
331 # silent to avoid
331 # silent to avoid
332 ip.run_cell("1")
332 ip.run_cell("1")
333 self.assertTrue(d['called'])
333 self.assertTrue(d['called'])
334 finally:
334 finally:
335 trap.hook = save_hook
335 trap.hook = save_hook
336
336
337 def test_ofind_line_magic(self):
337 def test_ofind_line_magic(self):
338 from IPython.core.magic import register_line_magic
338 from IPython.core.magic import register_line_magic
339
339
340 @register_line_magic
340 @register_line_magic
341 def lmagic(line):
341 def lmagic(line):
342 "A line magic"
342 "A line magic"
343
343
344 # Get info on line magic
344 # Get info on line magic
345 lfind = ip._ofind('lmagic')
345 lfind = ip._ofind('lmagic')
346 info = dict(found=True, isalias=False, ismagic=True,
346 info = dict(found=True, isalias=False, ismagic=True,
347 namespace = 'IPython internal', obj= lmagic.__wrapped__,
347 namespace = 'IPython internal', obj= lmagic.__wrapped__,
348 parent = None)
348 parent = None)
349 nt.assert_equal(lfind, info)
349 nt.assert_equal(lfind, info)
350
350
351 def test_ofind_cell_magic(self):
351 def test_ofind_cell_magic(self):
352 from IPython.core.magic import register_cell_magic
352 from IPython.core.magic import register_cell_magic
353
353
354 @register_cell_magic
354 @register_cell_magic
355 def cmagic(line, cell):
355 def cmagic(line, cell):
356 "A cell magic"
356 "A cell magic"
357
357
358 # Get info on cell magic
358 # Get info on cell magic
359 find = ip._ofind('cmagic')
359 find = ip._ofind('cmagic')
360 info = dict(found=True, isalias=False, ismagic=True,
360 info = dict(found=True, isalias=False, ismagic=True,
361 namespace = 'IPython internal', obj= cmagic.__wrapped__,
361 namespace = 'IPython internal', obj= cmagic.__wrapped__,
362 parent = None)
362 parent = None)
363 nt.assert_equal(find, info)
363 nt.assert_equal(find, info)
364
364
365 def test_ofind_property_with_error(self):
365 def test_ofind_property_with_error(self):
366 class A(object):
366 class A(object):
367 @property
367 @property
368 def foo(self):
368 def foo(self):
369 raise NotImplementedError()
369 raise NotImplementedError()
370 a = A()
370 a = A()
371
371
372 found = ip._ofind('a.foo', [('locals', locals())])
372 found = ip._ofind('a.foo', [('locals', locals())])
373 info = dict(found=True, isalias=False, ismagic=False,
373 info = dict(found=True, isalias=False, ismagic=False,
374 namespace='locals', obj=A.foo, parent=a)
374 namespace='locals', obj=A.foo, parent=a)
375 nt.assert_equal(found, info)
375 nt.assert_equal(found, info)
376
376
377 def test_ofind_multiple_attribute_lookups(self):
377 def test_ofind_multiple_attribute_lookups(self):
378 class A(object):
378 class A(object):
379 @property
379 @property
380 def foo(self):
380 def foo(self):
381 raise NotImplementedError()
381 raise NotImplementedError()
382
382
383 a = A()
383 a = A()
384 a.a = A()
384 a.a = A()
385 a.a.a = A()
385 a.a.a = A()
386
386
387 found = ip._ofind('a.a.a.foo', [('locals', locals())])
387 found = ip._ofind('a.a.a.foo', [('locals', locals())])
388 info = dict(found=True, isalias=False, ismagic=False,
388 info = dict(found=True, isalias=False, ismagic=False,
389 namespace='locals', obj=A.foo, parent=a.a.a)
389 namespace='locals', obj=A.foo, parent=a.a.a)
390 nt.assert_equal(found, info)
390 nt.assert_equal(found, info)
391
391
392 def test_ofind_slotted_attributes(self):
392 def test_ofind_slotted_attributes(self):
393 class A(object):
393 class A(object):
394 __slots__ = ['foo']
394 __slots__ = ['foo']
395 def __init__(self):
395 def __init__(self):
396 self.foo = 'bar'
396 self.foo = 'bar'
397
397
398 a = A()
398 a = A()
399 found = ip._ofind('a.foo', [('locals', locals())])
399 found = ip._ofind('a.foo', [('locals', locals())])
400 info = dict(found=True, isalias=False, ismagic=False,
400 info = dict(found=True, isalias=False, ismagic=False,
401 namespace='locals', obj=a.foo, parent=a)
401 namespace='locals', obj=a.foo, parent=a)
402 nt.assert_equal(found, info)
402 nt.assert_equal(found, info)
403
403
404 found = ip._ofind('a.bar', [('locals', locals())])
404 found = ip._ofind('a.bar', [('locals', locals())])
405 info = dict(found=False, isalias=False, ismagic=False,
405 info = dict(found=False, isalias=False, ismagic=False,
406 namespace=None, obj=None, parent=a)
406 namespace=None, obj=None, parent=a)
407 nt.assert_equal(found, info)
407 nt.assert_equal(found, info)
408
408
409 def test_ofind_prefers_property_to_instance_level_attribute(self):
409 def test_ofind_prefers_property_to_instance_level_attribute(self):
410 class A(object):
410 class A(object):
411 @property
411 @property
412 def foo(self):
412 def foo(self):
413 return 'bar'
413 return 'bar'
414 a = A()
414 a = A()
415 a.__dict__['foo'] = 'baz'
415 a.__dict__['foo'] = 'baz'
416 nt.assert_equal(a.foo, 'bar')
416 nt.assert_equal(a.foo, 'bar')
417 found = ip._ofind('a.foo', [('locals', locals())])
417 found = ip._ofind('a.foo', [('locals', locals())])
418 nt.assert_is(found['obj'], A.foo)
418 nt.assert_is(found['obj'], A.foo)
419
419
420 def test_custom_syntaxerror_exception(self):
420 def test_custom_syntaxerror_exception(self):
421 called = []
421 called = []
422 def my_handler(shell, etype, value, tb, tb_offset=None):
422 def my_handler(shell, etype, value, tb, tb_offset=None):
423 called.append(etype)
423 called.append(etype)
424 shell.showtraceback((etype, value, tb), tb_offset=tb_offset)
424 shell.showtraceback((etype, value, tb), tb_offset=tb_offset)
425
425
426 ip.set_custom_exc((SyntaxError,), my_handler)
426 ip.set_custom_exc((SyntaxError,), my_handler)
427 try:
427 try:
428 ip.run_cell("1f")
428 ip.run_cell("1f")
429 # Check that this was called, and only once.
429 # Check that this was called, and only once.
430 self.assertEqual(called, [SyntaxError])
430 self.assertEqual(called, [SyntaxError])
431 finally:
431 finally:
432 # Reset the custom exception hook
432 # Reset the custom exception hook
433 ip.set_custom_exc((), None)
433 ip.set_custom_exc((), None)
434
434
435 def test_custom_exception(self):
435 def test_custom_exception(self):
436 called = []
436 called = []
437 def my_handler(shell, etype, value, tb, tb_offset=None):
437 def my_handler(shell, etype, value, tb, tb_offset=None):
438 called.append(etype)
438 called.append(etype)
439 shell.showtraceback((etype, value, tb), tb_offset=tb_offset)
439 shell.showtraceback((etype, value, tb), tb_offset=tb_offset)
440
440
441 ip.set_custom_exc((ValueError,), my_handler)
441 ip.set_custom_exc((ValueError,), my_handler)
442 try:
442 try:
443 res = ip.run_cell("raise ValueError('test')")
443 res = ip.run_cell("raise ValueError('test')")
444 # Check that this was called, and only once.
444 # Check that this was called, and only once.
445 self.assertEqual(called, [ValueError])
445 self.assertEqual(called, [ValueError])
446 # Check that the error is on the result object
446 # Check that the error is on the result object
447 self.assertIsInstance(res.error_in_exec, ValueError)
447 self.assertIsInstance(res.error_in_exec, ValueError)
448 finally:
448 finally:
449 # Reset the custom exception hook
449 # Reset the custom exception hook
450 ip.set_custom_exc((), None)
450 ip.set_custom_exc((), None)
451
451
452 def test_mktempfile(self):
452 def test_mktempfile(self):
453 filename = ip.mktempfile()
453 filename = ip.mktempfile()
454 # Check that we can open the file again on Windows
454 # Check that we can open the file again on Windows
455 with open(filename, 'w') as f:
455 with open(filename, 'w') as f:
456 f.write('abc')
456 f.write('abc')
457
457
458 filename = ip.mktempfile(data='blah')
458 filename = ip.mktempfile(data='blah')
459 with open(filename, 'r') as f:
459 with open(filename, 'r') as f:
460 self.assertEqual(f.read(), 'blah')
460 self.assertEqual(f.read(), 'blah')
461
461
462 def test_new_main_mod(self):
462 def test_new_main_mod(self):
463 # Smoketest to check that this accepts a unicode module name
463 # Smoketest to check that this accepts a unicode module name
464 name = u'jiefmw'
464 name = u'jiefmw'
465 mod = ip.new_main_mod(u'%s.py' % name, name)
465 mod = ip.new_main_mod(u'%s.py' % name, name)
466 self.assertEqual(mod.__name__, name)
466 self.assertEqual(mod.__name__, name)
467
467
468 def test_get_exception_only(self):
468 def test_get_exception_only(self):
469 try:
469 try:
470 raise KeyboardInterrupt
470 raise KeyboardInterrupt
471 except KeyboardInterrupt:
471 except KeyboardInterrupt:
472 msg = ip.get_exception_only()
472 msg = ip.get_exception_only()
473 self.assertEqual(msg, 'KeyboardInterrupt\n')
473 self.assertEqual(msg, 'KeyboardInterrupt\n')
474
474
475 try:
475 try:
476 raise DerivedInterrupt("foo")
476 raise DerivedInterrupt("foo")
477 except KeyboardInterrupt:
477 except KeyboardInterrupt:
478 msg = ip.get_exception_only()
478 msg = ip.get_exception_only()
479 self.assertEqual(msg, 'IPython.core.tests.test_interactiveshell.DerivedInterrupt: foo\n')
479 self.assertEqual(msg, 'IPython.core.tests.test_interactiveshell.DerivedInterrupt: foo\n')
480
480
481 def test_inspect_text(self):
481 def test_inspect_text(self):
482 ip.run_cell('a = 5')
482 ip.run_cell('a = 5')
483 text = ip.object_inspect_text('a')
483 text = ip.object_inspect_text('a')
484 self.assertIsInstance(text, str)
484 self.assertIsInstance(text, str)
485
485
486 def test_last_execution_result(self):
486 def test_last_execution_result(self):
487 """ Check that last execution result gets set correctly (GH-10702) """
487 """ Check that last execution result gets set correctly (GH-10702) """
488 result = ip.run_cell('a = 5; a')
488 result = ip.run_cell('a = 5; a')
489 self.assertTrue(ip.last_execution_succeeded)
489 self.assertTrue(ip.last_execution_succeeded)
490 self.assertEqual(ip.last_execution_result.result, 5)
490 self.assertEqual(ip.last_execution_result.result, 5)
491
491
492 result = ip.run_cell('a = x_invalid_id_x')
492 result = ip.run_cell('a = x_invalid_id_x')
493 self.assertFalse(ip.last_execution_succeeded)
493 self.assertFalse(ip.last_execution_succeeded)
494 self.assertFalse(ip.last_execution_result.success)
494 self.assertFalse(ip.last_execution_result.success)
495 self.assertIsInstance(ip.last_execution_result.error_in_exec, NameError)
495 self.assertIsInstance(ip.last_execution_result.error_in_exec, NameError)
496
496
497 def test_reset_aliasing(self):
497 def test_reset_aliasing(self):
498 """ Check that standard posix aliases work after %reset. """
498 """ Check that standard posix aliases work after %reset. """
499 if os.name != 'posix':
499 if os.name != 'posix':
500 return
500 return
501
501
502 ip.reset()
502 ip.reset()
503 for cmd in ('clear', 'more', 'less', 'man'):
503 for cmd in ('clear', 'more', 'less', 'man'):
504 res = ip.run_cell('%' + cmd)
504 res = ip.run_cell('%' + cmd)
505 self.assertEqual(res.success, True)
505 self.assertEqual(res.success, True)
506
506
507
507
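Since the completer changes above also route matches from custom completers (the ``dispatch_custom_completer`` path tagged with the ``'custom'`` origin), a shell-level check that registering a custom completer actually extends the matcher list is a natural companion to the cases above. The sketch below is illustrative only: it reuses the module-global ``ip`` fixture like the surrounding tests and assumes a ``set_custom_completer`` helper that binds the callable onto ``ip.Completer`` as an additional matcher.

.. code::

    def test_set_custom_completer(self):
        num_matchers = len(ip.Completer.matchers)

        # A matcher is bound as a method of the Completer and is called with
        # the text being completed; it returns a list of candidate strings.
        def my_matcher(completer, text):
            return ["my_custom_match"]

        ip.set_custom_completer(my_matcher)
        self.assertEqual(len(ip.Completer.matchers), num_matchers + 1)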
508 class TestSafeExecfileNonAsciiPath(unittest.TestCase):
508 class TestSafeExecfileNonAsciiPath(unittest.TestCase):
509
509
510 @onlyif_unicode_paths
510 @onlyif_unicode_paths
511 def setUp(self):
511 def setUp(self):
512 self.BASETESTDIR = tempfile.mkdtemp()
512 self.BASETESTDIR = tempfile.mkdtemp()
513 self.TESTDIR = join(self.BASETESTDIR, u"åäö")
513 self.TESTDIR = join(self.BASETESTDIR, u"åäö")
514 os.mkdir(self.TESTDIR)
514 os.mkdir(self.TESTDIR)
515 with open(join(self.TESTDIR, u"åäötestscript.py"), "w") as sfile:
515 with open(join(self.TESTDIR, u"åäötestscript.py"), "w") as sfile:
516 sfile.write("pass\n")
516 sfile.write("pass\n")
517 self.oldpath = os.getcwd()
517 self.oldpath = os.getcwd()
518 os.chdir(self.TESTDIR)
518 os.chdir(self.TESTDIR)
519 self.fname = u"åäötestscript.py"
519 self.fname = u"åäötestscript.py"
520
520
521 def tearDown(self):
521 def tearDown(self):
522 os.chdir(self.oldpath)
522 os.chdir(self.oldpath)
523 shutil.rmtree(self.BASETESTDIR)
523 shutil.rmtree(self.BASETESTDIR)
524
524
525 @onlyif_unicode_paths
525 @onlyif_unicode_paths
526 def test_1(self):
526 def test_1(self):
527 """Test safe_execfile with non-ascii path
527 """Test safe_execfile with non-ascii path
528 """
528 """
529 ip.safe_execfile(self.fname, {}, raise_exceptions=True)
529 ip.safe_execfile(self.fname, {}, raise_exceptions=True)
530
530
531 class ExitCodeChecks(tt.TempFileMixin):
531 class ExitCodeChecks(tt.TempFileMixin):
532
532
533 def setUp(self):
533 def setUp(self):
534 self.system = ip.system_raw
534 self.system = ip.system_raw
535
535
536 def test_exit_code_ok(self):
536 def test_exit_code_ok(self):
537 self.system('exit 0')
537 self.system('exit 0')
538 self.assertEqual(ip.user_ns['_exit_code'], 0)
538 self.assertEqual(ip.user_ns['_exit_code'], 0)
539
539
540 def test_exit_code_error(self):
540 def test_exit_code_error(self):
541 self.system('exit 1')
541 self.system('exit 1')
542 self.assertEqual(ip.user_ns['_exit_code'], 1)
542 self.assertEqual(ip.user_ns['_exit_code'], 1)
543
543
544 @skipif(not hasattr(signal, 'SIGALRM'))
544 @skipif(not hasattr(signal, 'SIGALRM'))
545 def test_exit_code_signal(self):
545 def test_exit_code_signal(self):
546 self.mktmp("import signal, time\n"
546 self.mktmp("import signal, time\n"
547 "signal.setitimer(signal.ITIMER_REAL, 0.1)\n"
547 "signal.setitimer(signal.ITIMER_REAL, 0.1)\n"
548 "time.sleep(1)\n")
548 "time.sleep(1)\n")
549 self.system("%s %s" % (sys.executable, self.fname))
549 self.system("%s %s" % (sys.executable, self.fname))
550 self.assertEqual(ip.user_ns['_exit_code'], -signal.SIGALRM)
550 self.assertEqual(ip.user_ns['_exit_code'], -signal.SIGALRM)
551
551
552 @onlyif_cmds_exist("csh")
552 @onlyif_cmds_exist("csh")
553 def test_exit_code_signal_csh(self):
553 def test_exit_code_signal_csh(self):
554 SHELL = os.environ.get('SHELL', None)
554 SHELL = os.environ.get('SHELL', None)
555 os.environ['SHELL'] = find_cmd("csh")
555 os.environ['SHELL'] = find_cmd("csh")
556 try:
556 try:
557 self.test_exit_code_signal()
557 self.test_exit_code_signal()
558 finally:
558 finally:
559 if SHELL is not None:
559 if SHELL is not None:
560 os.environ['SHELL'] = SHELL
560 os.environ['SHELL'] = SHELL
561 else:
561 else:
562 del os.environ['SHELL']
562 del os.environ['SHELL']
563
563
564
564
565 class TestSystemRaw(ExitCodeChecks):
565 class TestSystemRaw(ExitCodeChecks):
566
566
567 def setUp(self):
567 def setUp(self):
568 super().setUp()
568 super().setUp()
569 self.system = ip.system_raw
569 self.system = ip.system_raw
570
570
571 @onlyif_unicode_paths
571 @onlyif_unicode_paths
572 def test_1(self):
572 def test_1(self):
573 """Test system_raw with non-ascii cmd
573 """Test system_raw with non-ascii cmd
574 """
574 """
575 cmd = u'''python -c "'Γ₯Àâ'" '''
575 cmd = u'''python -c "'Γ₯Àâ'" '''
576 ip.system_raw(cmd)
576 ip.system_raw(cmd)
577
577
578 @mock.patch('subprocess.call', side_effect=KeyboardInterrupt)
578 @mock.patch('subprocess.call', side_effect=KeyboardInterrupt)
579 @mock.patch('os.system', side_effect=KeyboardInterrupt)
579 @mock.patch('os.system', side_effect=KeyboardInterrupt)
580 def test_control_c(self, *mocks):
580 def test_control_c(self, *mocks):
581 try:
581 try:
582 self.system("sleep 1 # wont happen")
582 self.system("sleep 1 # wont happen")
583 except KeyboardInterrupt:
583 except KeyboardInterrupt:
584 self.fail("system call should intercept "
584 self.fail("system call should intercept "
585 "keyboard interrupt from subprocess.call")
585 "keyboard interrupt from subprocess.call")
586 self.assertEqual(ip.user_ns['_exit_code'], -signal.SIGINT)
586 self.assertEqual(ip.user_ns['_exit_code'], -signal.SIGINT)
587
587
588 # TODO: Exit codes are currently ignored on Windows.
588 # TODO: Exit codes are currently ignored on Windows.
589 class TestSystemPipedExitCode(ExitCodeChecks):
589 class TestSystemPipedExitCode(ExitCodeChecks):
590
590
591 def setUp(self):
591 def setUp(self):
592 super().setUp()
592 super().setUp()
593 self.system = ip.system_piped
593 self.system = ip.system_piped
594
594
595 @skip_win32
595 @skip_win32
596 def test_exit_code_ok(self):
596 def test_exit_code_ok(self):
597 ExitCodeChecks.test_exit_code_ok(self)
597 ExitCodeChecks.test_exit_code_ok(self)
598
598
599 @skip_win32
599 @skip_win32
600 def test_exit_code_error(self):
600 def test_exit_code_error(self):
601 ExitCodeChecks.test_exit_code_error(self)
601 ExitCodeChecks.test_exit_code_error(self)
602
602
603 @skip_win32
603 @skip_win32
604 def test_exit_code_signal(self):
604 def test_exit_code_signal(self):
605 ExitCodeChecks.test_exit_code_signal(self)
605 ExitCodeChecks.test_exit_code_signal(self)
606
606
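# Illustrative sketch (not part of the original test suite): how the
# ``_exit_code`` convention exercised by the classes above looks in use.
# After a shell command is run through ``ip.system_raw`` (or
# ``ip.system_piped``), its return code is stored in the user namespace
# under ``_exit_code``. The helper name and the ``exit 3`` command are
# invented for illustration and assume a POSIX shell.
def _example_exit_code():
    ip.system_raw("exit 3")
    assert ip.user_ns["_exit_code"] == 3

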
class TestModules(tt.TempFileMixin):
    def test_extraneous_loads(self):
        """Test we're not loading modules on startup that we shouldn't.
        """
        self.mktmp("import sys\n"
                   "print('numpy' in sys.modules)\n"
                   "print('ipyparallel' in sys.modules)\n"
                   "print('ipykernel' in sys.modules)\n"
                   )
        out = "False\nFalse\nFalse\n"
        tt.ipexec_validate(self.fname, out)

class Negator(ast.NodeTransformer):
    """Negates all number literals in an AST."""

    # for python 3.7 and earlier
    def visit_Num(self, node):
        node.n = -node.n
        return node

    # for python 3.8+
    def visit_Constant(self, node):
        if isinstance(node.value, int):
            return self.visit_Num(node)
        return node

class TestAstTransform(unittest.TestCase):
    def setUp(self):
        self.negator = Negator()
        ip.ast_transformers.append(self.negator)

    def tearDown(self):
        ip.ast_transformers.remove(self.negator)

    def test_run_cell(self):
        with tt.AssertPrints('-34'):
            ip.run_cell('print (12 + 22)')

        # A named reference to a number shouldn't be transformed.
        ip.user_ns['n'] = 55
        with tt.AssertNotPrints('-55'):
            ip.run_cell('print (n)')

    def test_timeit(self):
        called = set()
        def f(x):
            called.add(x)
        ip.push({'f':f})

        with tt.AssertPrints("std. dev. of"):
            ip.run_line_magic("timeit", "-n1 f(1)")
        self.assertEqual(called, {-1})
        called.clear()

        with tt.AssertPrints("std. dev. of"):
            ip.run_cell_magic("timeit", "-n1 f(2)", "f(3)")
        self.assertEqual(called, {-2, -3})

    def test_time(self):
        called = []
        def f(x):
            called.append(x)
        ip.push({'f':f})

        # Test with an expression
        with tt.AssertPrints("Wall time: "):
            ip.run_line_magic("time", "f(5+9)")
        self.assertEqual(called, [-14])
        called[:] = []

        # Test with a statement (different code path)
        with tt.AssertPrints("Wall time: "):
            ip.run_line_magic("time", "a = f(-3 + -2)")
        self.assertEqual(called, [5])

    def test_macro(self):
        ip.push({'a':10})
        # The AST transformation makes this do a+=-1
        ip.define_macro("amacro", "a+=1\nprint(a)")

        with tt.AssertPrints("9"):
            ip.run_cell("amacro")
        with tt.AssertPrints("8"):
            ip.run_cell("amacro")

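# Illustrative sketch (not part of the original test suite): the minimal
# pattern the AST-transformer tests above rely on. Any ast.NodeTransformer
# appended to ``ip.ast_transformers`` is applied to every cell before it is
# executed; removing it restores normal behaviour. The Doubler class and the
# sample cell are invented for illustration.
def _example_ast_transformer():
    class Doubler(ast.NodeTransformer):
        """Double every integer literal in a cell."""

        # for python 3.7 and earlier
        def visit_Num(self, node):
            node.n = node.n * 2
            return node

        # for python 3.8+
        def visit_Constant(self, node):
            if isinstance(node.value, int):
                node.value = node.value * 2
            return node

    doubler = Doubler()
    ip.ast_transformers.append(doubler)
    try:
        with tt.AssertPrints('10'):
            ip.run_cell('print(5)')   # transformed into print(10)
    finally:
        ip.ast_transformers.remove(doubler)

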
class IntegerWrapper(ast.NodeTransformer):
    """Wraps all integers in a call to Integer()"""

    # for Python 3.7 and earlier
    def visit_Num(self, node):
        if isinstance(node.n, int):
            return ast.Call(func=ast.Name(id='Integer', ctx=ast.Load()),
                            args=[node], keywords=[])
        return node

    # For Python 3.8+
    def visit_Constant(self, node):
        if isinstance(node.value, int):
            return self.visit_Num(node)
        return node


class TestAstTransform2(unittest.TestCase):
    def setUp(self):
        self.intwrapper = IntegerWrapper()
        ip.ast_transformers.append(self.intwrapper)

        self.calls = []
        def Integer(*args):
            self.calls.append(args)
            return args
        ip.push({"Integer": Integer})

    def tearDown(self):
        ip.ast_transformers.remove(self.intwrapper)
        del ip.user_ns['Integer']

    def test_run_cell(self):
        ip.run_cell("n = 2")
        self.assertEqual(self.calls, [(2,)])

        # This shouldn't throw an error
        ip.run_cell("o = 2.0")
        self.assertEqual(ip.user_ns['o'], 2.0)

    def test_timeit(self):
        called = set()
        def f(x):
            called.add(x)
        ip.push({'f':f})

        with tt.AssertPrints("std. dev. of"):
            ip.run_line_magic("timeit", "-n1 f(1)")
        self.assertEqual(called, {(1,)})
        called.clear()

        with tt.AssertPrints("std. dev. of"):
            ip.run_cell_magic("timeit", "-n1 f(2)", "f(3)")
        self.assertEqual(called, {(2,), (3,)})

class ErrorTransformer(ast.NodeTransformer):
    """Throws an error when it sees a number."""

    # for Python 3.7 and earlier
    def visit_Num(self, node):
        raise ValueError("test")

    # for Python 3.8+
    def visit_Constant(self, node):
        if isinstance(node.value, int):
            return self.visit_Num(node)
        return node


class TestAstTransformError(unittest.TestCase):
    def test_unregistering(self):
        err_transformer = ErrorTransformer()
        ip.ast_transformers.append(err_transformer)

        with self.assertWarnsRegex(UserWarning, "It will be unregistered"):
            ip.run_cell("1 + 2")

        # This should have been removed.
        nt.assert_not_in(err_transformer, ip.ast_transformers)


class StringRejector(ast.NodeTransformer):
    """Throws an InputRejected when it sees a string literal.

    Used to verify that NodeTransformers can signal that a piece of code should
    not be executed by throwing an InputRejected.
    """

    # for python 3.7 and earlier
    def visit_Str(self, node):
        raise InputRejected("test")

    # 3.8 only
    def visit_Constant(self, node):
        if isinstance(node.value, str):
            raise InputRejected("test")
        return node


class TestAstTransformInputRejection(unittest.TestCase):

    def setUp(self):
        self.transformer = StringRejector()
        ip.ast_transformers.append(self.transformer)

    def tearDown(self):
        ip.ast_transformers.remove(self.transformer)

    def test_input_rejection(self):
        """Check that NodeTransformers can reject input."""

        expect_exception_tb = tt.AssertPrints("InputRejected: test")
        expect_no_cell_output = tt.AssertNotPrints("'unsafe'", suppress=False)

        # Run the same check twice to verify that the transformer is not
        # disabled after raising.
        with expect_exception_tb, expect_no_cell_output:
            ip.run_cell("'unsafe'")

        with expect_exception_tb, expect_no_cell_output:
            res = ip.run_cell("'unsafe'")

        self.assertIsInstance(res.error_before_exec, InputRejected)

def test__IPYTHON__():
    # This shouldn't raise a NameError, that's all
    __IPYTHON__


class DummyRepr(object):
    def __repr__(self):
        return "DummyRepr"

    def _repr_html_(self):
        return "<b>dummy</b>"

    def _repr_javascript_(self):
        return "console.log('hi');", {'key': 'value'}


def test_user_variables():
    # enable all formatters
    ip.display_formatter.active_types = ip.display_formatter.format_types

    ip.user_ns['dummy'] = d = DummyRepr()
    keys = {'dummy', 'doesnotexist'}
    r = ip.user_expressions({ key:key for key in keys})

    nt.assert_equal(keys, set(r.keys()))
    dummy = r['dummy']
    nt.assert_equal({'status', 'data', 'metadata'}, set(dummy.keys()))
    nt.assert_equal(dummy['status'], 'ok')
    data = dummy['data']
    metadata = dummy['metadata']
    nt.assert_equal(data.get('text/html'), d._repr_html_())
    js, jsmd = d._repr_javascript_()
    nt.assert_equal(data.get('application/javascript'), js)
    nt.assert_equal(metadata.get('application/javascript'), jsmd)

    dne = r['doesnotexist']
    nt.assert_equal(dne['status'], 'error')
    nt.assert_equal(dne['ename'], 'NameError')

    # back to text only
    ip.display_formatter.active_types = ['text/plain']

def test_user_expression():
    # enable all formatters
    ip.display_formatter.active_types = ip.display_formatter.format_types
    query = {
        'a' : '1 + 2',
        'b' : '1/0',
    }
    r = ip.user_expressions(query)
    import pprint
    pprint.pprint(r)
    nt.assert_equal(set(r.keys()), set(query.keys()))
    a = r['a']
    nt.assert_equal({'status', 'data', 'metadata'}, set(a.keys()))
    nt.assert_equal(a['status'], 'ok')
    data = a['data']
    metadata = a['metadata']
    nt.assert_equal(data.get('text/plain'), '3')

    b = r['b']
    nt.assert_equal(b['status'], 'error')
    nt.assert_equal(b['ename'], 'ZeroDivisionError')

    # back to text only
    ip.display_formatter.active_types = ['text/plain']


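# Illustrative sketch (not part of the original test suite): the shape of the
# mapping returned by ``ip.user_expressions``, which the two tests above pick
# apart. Every value carries a 'status' key; successful expressions add
# 'data'/'metadata' mime bundles, failing ones add error fields such as
# 'ename'. The expressions themselves are invented for illustration.
def _example_user_expressions():
    result = ip.user_expressions({"ok": "2 * 21", "bad": "1/0"})
    assert result["ok"]["status"] == "ok"
    assert result["ok"]["data"]["text/plain"] == "42"
    assert result["bad"]["status"] == "error"
    assert result["bad"]["ename"] == "ZeroDivisionError"

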
class TestSyntaxErrorTransformer(unittest.TestCase):
    """Check that SyntaxError raised by an input transformer is handled by run_cell()"""

    @staticmethod
    def transformer(lines):
        for line in lines:
            pos = line.find('syntaxerror')
            if pos >= 0:
                e = SyntaxError('input contains "syntaxerror"')
                e.text = line
                e.offset = pos + 1
                raise e
        return lines

    def setUp(self):
        ip.input_transformers_post.append(self.transformer)

    def tearDown(self):
        ip.input_transformers_post.remove(self.transformer)

    def test_syntaxerror_input_transformer(self):
        with tt.AssertPrints('1234'):
            ip.run_cell('1234')
        with tt.AssertPrints('SyntaxError: invalid syntax'):
            ip.run_cell('1 2 3')  # plain python syntax error
        with tt.AssertPrints('SyntaxError: input contains "syntaxerror"'):
            ip.run_cell('2345  # syntaxerror')  # input transformer syntax error
        with tt.AssertPrints('3456'):
            ip.run_cell('3456')


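# Illustrative sketch (not part of the original test suite): a string-based
# input transformer of the kind registered above. Callables appended to
# ``ip.input_transformers_post`` receive the cell as a list of lines and
# return a (possibly rewritten) list before the cell is compiled. The SHOUT
# marker and its rewrite are invented for illustration.
def _example_input_transformer():
    def shout(lines):
        # Rewrite the made-up SHOUT marker into real Python code.
        return [line.replace("SHOUT", "print('hello'.upper())") for line in lines]

    ip.input_transformers_post.append(shout)
    try:
        with tt.AssertPrints("HELLO"):
            ip.run_cell("SHOUT")
    finally:
        ip.input_transformers_post.remove(shout)

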
class TestWarningSuppression(unittest.TestCase):
    def test_warning_suppression(self):
        ip.run_cell("import warnings")
        try:
            with self.assertWarnsRegex(UserWarning, "asdf"):
                ip.run_cell("warnings.warn('asdf')")
            # Here's the real test -- if we run that again, we should get the
            # warning again. Traditionally, each warning was only issued once per
            # IPython session (approximately), even if the user typed in new and
            # different code that should have also triggered the warning, leading
            # to much confusion.
            with self.assertWarnsRegex(UserWarning, "asdf"):
                ip.run_cell("warnings.warn('asdf')")
        finally:
            ip.run_cell("del warnings")


    def test_deprecation_warning(self):
        ip.run_cell("""
import warnings
def wrn():
    warnings.warn(
        "I AM A WARNING",
        DeprecationWarning
    )
        """)
        try:
            with self.assertWarnsRegex(DeprecationWarning, "I AM A WARNING"):
                ip.run_cell("wrn()")
        finally:
            ip.run_cell("del warnings")
            ip.run_cell("del wrn")


class TestImportNoDeprecate(tt.TempFileMixin):

    def setUp(self):
        """Make a valid python temp file."""
        self.mktmp("""
import warnings
def wrn():
    warnings.warn(
        "I AM A WARNING",
        DeprecationWarning
    )
""")
        super().setUp()

    def test_no_dep(self):
        """
        No deprecation warning should be raised from imported functions
        """
        ip.run_cell("from {} import wrn".format(self.fname))

        with tt.AssertNotPrints("I AM A WARNING"):
            ip.run_cell("wrn()")
        ip.run_cell("del wrn")


def test_custom_exc_count():
    hook = mock.Mock(return_value=None)
    ip.set_custom_exc((SyntaxError,), hook)
    before = ip.execution_count
    ip.run_cell("def foo()", store_history=True)
    # restore default excepthook
    ip.set_custom_exc((), None)
    nt.assert_equal(hook.call_count, 1)
    nt.assert_equal(ip.execution_count, before + 1)


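# Illustrative sketch (not part of the original test suite): a hand-written
# handler for ``ip.set_custom_exc``, the API exercised by
# test_custom_exc_count above. The (shell, etype, value, tb, tb_offset)
# signature follows the convention described in set_custom_exc's docstring;
# the handler body and the ZeroDivisionError example are invented.
def _example_custom_exc():
    def handler(shell, etype, value, tb, tb_offset=None):
        print("handled:", etype.__name__)
        # Returning None keeps the sketch minimal; a real handler may return
        # a structured traceback (a list of strings) to be displayed.

    ip.set_custom_exc((ZeroDivisionError,), handler)
    try:
        with tt.AssertPrints("handled: ZeroDivisionError"):
            ip.run_cell("1/0")
    finally:
        ip.set_custom_exc((), None)   # restore the default handler

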
def test_run_cell_async():
    loop = asyncio.get_event_loop()
    ip.run_cell("import asyncio")
    coro = ip.run_cell_async("await asyncio.sleep(0.01)\n5")
    assert asyncio.iscoroutine(coro)
    result = loop.run_until_complete(coro)
    assert isinstance(result, interactiveshell.ExecutionResult)
    assert result.result == 5


def test_should_run_async():
    assert not ip.should_run_async("a = 5")
    assert ip.should_run_async("await x")
    assert ip.should_run_async("import asyncio; await asyncio.sleep(1)")


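# Illustrative sketch (not part of the original test suite): combining
# ``should_run_async`` and ``run_cell_async`` by hand, mirroring the two
# tests above. The cell contents are invented; driving the coroutine through
# the current event loop follows test_run_cell_async.
def _example_run_cell_async():
    ip.run_cell("import asyncio")
    cell = "await asyncio.sleep(0)\n'done'"
    assert ip.should_run_async(cell)
    result = asyncio.get_event_loop().run_until_complete(ip.run_cell_async(cell))
    assert result.result == 'done'

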
def test_set_custom_completer():
    num_completers = len(ip.Completer.matchers)

    def foo(*args, **kwargs):
        return "I'm a completer!"

    ip.set_custom_completer(foo, 0)

    # check that we've really added a new completer
    assert len(ip.Completer.matchers) == num_completers + 1

    # check that the first completer is the function we defined
    assert ip.Completer.matchers[0]() == "I'm a completer!"

    # clean up
    ip.Completer.custom_matchers.pop()
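

# Illustrative sketch (not part of the original test suite): what a custom
# completer registered through ``ip.set_custom_completer`` might look like in
# practice. Per the test above, the callable is inserted into
# ``ip.Completer.matchers`` at the requested position and can be removed
# again via ``ip.Completer.custom_matchers``. The completer name and the
# candidate strings are invented for illustration.
def _example_custom_completer():
    def fruit_completer(self, *args, **kwargs):
        # Candidate completions to offer; a real completer would inspect its
        # arguments to decide what matches the text being completed.
        return ["apple", "apricot", "avocado"]

    ip.set_custom_completer(fruit_completer, 0)
    try:
        assert "apple" in ip.Completer.matchers[0]()
    finally:
        ip.Completer.custom_matchers.pop()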