Remove old workaround for a bug fixed in Python 3.4...
Nikita Kniazev -
@@ -1,2257 +1,2260 @@
1 """Completion for IPython.
1 """Completion for IPython.
2
2
3 This module started as fork of the rlcompleter module in the Python standard
3 This module started as fork of the rlcompleter module in the Python standard
4 library. The original enhancements made to rlcompleter have been sent
4 library. The original enhancements made to rlcompleter have been sent
5 upstream and were accepted as of Python 2.3,
5 upstream and were accepted as of Python 2.3,
6
6
7 This module now support a wide variety of completion mechanism both available
7 This module now support a wide variety of completion mechanism both available
8 for normal classic Python code, as well as completer for IPython specific
8 for normal classic Python code, as well as completer for IPython specific
9 Syntax like magics.
9 Syntax like magics.
10
10
11 Latex and Unicode completion
11 Latex and Unicode completion
12 ============================
12 ============================
13
13
14 IPython and compatible frontends not only can complete your code, but can help
14 IPython and compatible frontends not only can complete your code, but can help
15 you to input a wide range of characters. In particular we allow you to insert
15 you to input a wide range of characters. In particular we allow you to insert
16 a unicode character using the tab completion mechanism.
16 a unicode character using the tab completion mechanism.
17
17
18 Forward latex/unicode completion
18 Forward latex/unicode completion
19 --------------------------------
19 --------------------------------
20
20
21 Forward completion allows you to easily type a unicode character using its latex
21 Forward completion allows you to easily type a unicode character using its latex
22 name, or unicode long description. To do so type a backslash follow by the
22 name, or unicode long description. To do so type a backslash follow by the
23 relevant name and press tab:
23 relevant name and press tab:
24
24
25
25
26 Using latex completion:
26 Using latex completion:
27
27
28 .. code::
28 .. code::
29
29
30 \\alpha<tab>
30 \\alpha<tab>
31 Ξ±
31 Ξ±
32
32
33 or using unicode completion:
33 or using unicode completion:
34
34
35
35
36 .. code::
36 .. code::
37
37
38 \\GREEK SMALL LETTER ALPHA<tab>
38 \\GREEK SMALL LETTER ALPHA<tab>
39 Ξ±
39 Ξ±
40
40
41
41
42 Only valid Python identifiers will complete. Combining characters (like arrow or
42 Only valid Python identifiers will complete. Combining characters (like arrow or
43 dots) are also available, unlike latex they need to be put after the their
43 dots) are also available, unlike latex they need to be put after the their
44 counterpart that is to say, `F\\\\vec<tab>` is correct, not `\\\\vec<tab>F`.
44 counterpart that is to say, `F\\\\vec<tab>` is correct, not `\\\\vec<tab>F`.
45
45
46 Some browsers are known to display combining characters incorrectly.
46 Some browsers are known to display combining characters incorrectly.
47
47
48 Backward latex completion
48 Backward latex completion
49 -------------------------
49 -------------------------
50
50
51 It is sometime challenging to know how to type a character, if you are using
51 It is sometime challenging to know how to type a character, if you are using
52 IPython, or any compatible frontend you can prepend backslash to the character
52 IPython, or any compatible frontend you can prepend backslash to the character
53 and press `<tab>` to expand it to its latex form.
53 and press `<tab>` to expand it to its latex form.
54
54
55 .. code::
55 .. code::
56
56
57 \\Ξ±<tab>
57 \\Ξ±<tab>
58 \\alpha
58 \\alpha
59
59
60
60
61 Both forward and backward completions can be deactivated by setting the
61 Both forward and backward completions can be deactivated by setting the
62 ``Completer.backslash_combining_completions`` option to ``False``.
62 ``Completer.backslash_combining_completions`` option to ``False``.
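
For example, to turn them off from a configuration file (an illustrative
sketch using the standard ``ipython_config.py`` traitlets syntax):

.. code::

    c = get_config()
    c.Completer.backslash_combining_completions = False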


Experimental
============

Starting with IPython 6.0, this module can make use of the Jedi library to
generate completions both using static analysis of the code, and dynamically
inspecting multiple namespaces. Jedi is an autocompletion and static analysis
library for Python. The APIs attached to this new mechanism are unstable and will
raise unless used in a :any:`provisionalcompleter` context manager.

You will find that the following are experimental:

- :any:`provisionalcompleter`
- :any:`IPCompleter.completions`
- :any:`Completion`
- :any:`rectify_completions`

.. note::

    better name for :any:`rectify_completions` ?

We welcome any feedback on these new APIs, and we also encourage you to try this
module in debug mode (start IPython with ``--Completer.debug=True``) in order
to have extra logging information if :any:`jedi` is crashing, or if the current
IPython completer pending deprecations are returning results not yet handled
by :any:`jedi`.

Using Jedi for tab completion allows snippets like the following to work without
having to execute any code:

>>> myvar = ['hello', 42]
... myvar[1].bi<tab>

Tab completion will be able to infer that ``myvar[1]`` is an integer without
executing any code, unlike the previously available ``IPCompleter.greedy``
option.
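
The same inference is available programmatically through the provisional API
(a minimal sketch: it assumes an :any:`IPCompleter` instance named
``completer`` and must run inside :any:`provisionalcompleter`):

.. code::

    with provisionalcompleter():
        completions = list(completer.completions('myvar[1].bi', 11))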

Be sure to update :any:`jedi` to the latest stable version or to try the
current development version to get better completions.
"""


# Copyright (c) IPython Development Team.
# Distributed under the terms of the Modified BSD License.
#
# Some of this code originated from rlcompleter in the Python standard library
# Copyright (C) 2001 Python Software Foundation, www.python.org


import builtins as builtin_mod
import glob
import inspect
import itertools
import keyword
import os
import re
import string
import sys
import time
import unicodedata
import uuid
import warnings
from contextlib import contextmanager
from importlib import import_module
from types import SimpleNamespace
from typing import Iterable, Iterator, List, Tuple, Union, Any, Sequence, Dict, NamedTuple, Pattern, Optional

from IPython.core.error import TryNext
from IPython.core.inputtransformer2 import ESC_MAGIC
from IPython.core.latex_symbols import latex_symbols, reverse_latex_symbol
from IPython.core.oinspect import InspectColors
from IPython.testing.skipdoctest import skip_doctest
from IPython.utils import generics
from IPython.utils.dir2 import dir2, get_real_method
from IPython.utils.path import ensure_dir_exists
from IPython.utils.process import arg_split
from traitlets import Bool, Enum, Int, List as ListTrait, Unicode, default, observe
from traitlets.config.configurable import Configurable

import __main__

# skip module doctests
__skip_doctest__ = True

try:
    import jedi
    jedi.settings.case_insensitive_completion = False
    import jedi.api.helpers
    import jedi.api.classes
    JEDI_INSTALLED = True
except ImportError:
    JEDI_INSTALLED = False
#-----------------------------------------------------------------------------
# Globals
#-----------------------------------------------------------------------------

# Ranges where we have most of the valid unicode names. We could be finer
# grained, but is it worth it for performance? While unicode has characters in
# the range 0..0x110000, only about 10% of those seem to have a name (131808 as
# I write this). With the ranges below we cover them all, with a density of
# ~67%. The biggest next gap we could consider only adds about 1% density, and
# there are 600 gaps that would need hard-coding.
_UNICODE_RANGES = [(32, 0x3134b), (0xe0001, 0xe01f0)]
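
# A rough, illustrative way to re-derive the coverage numbers above (kept as a
# comment only; ``total`` is a hypothetical name and the loop is slow):
#
#     import unicodedata
#     total = sum(1 for cp in range(0x110000)
#                 if unicodedata.name(chr(cp), None) is not None)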

# Public API
__all__ = ['Completer','IPCompleter']

if sys.platform == 'win32':
    PROTECTABLES = ' '
else:
    PROTECTABLES = ' ()[]{}?=\\|;:\'#*"^&'

# Protect against returning an enormous number of completions which the frontend
# may have trouble processing.
MATCHES_LIMIT = 500

_deprecation_readline_sentinel = object()


class ProvisionalCompleterWarning(FutureWarning):
    """
    Exception raised by an experimental feature in this module.

    Wrap code in :any:`provisionalcompleter` context manager if you
    are certain you want to use an unstable feature.
    """
    pass

warnings.filterwarnings('error', category=ProvisionalCompleterWarning)


@skip_doctest
@contextmanager
def provisionalcompleter(action='ignore'):
    """
    This context manager has to be used in any place where unstable completer
    behavior and API may be called.

    >>> with provisionalcompleter():
    ...     completer.do_experimental_things() # works

    >>> completer.do_experimental_things() # raises.

    .. note::

        Unstable

        By using this context manager you agree that the API in use may change
        without warning, and that you won't complain if it does so.

        You also understand that, if the API is not to your liking, you should report
        a bug to explain your use case upstream.

        We'll be happy to get your feedback, feature requests, and improvements on
        any of the unstable APIs!
    """
    with warnings.catch_warnings():
        warnings.filterwarnings(action, category=ProvisionalCompleterWarning)
        yield


def has_open_quotes(s):
    """Return whether a string has open quotes.

    This simply counts whether the number of quote characters of either type in
    the string is odd.

    Returns
    -------
    If there is an open quote, the quote character is returned. Else, return
    False.
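
    Examples
    --------
    A couple of illustrative cases (doctest-style; not executed, since the
    module sets ``__skip_doctest__``):

    >>> has_open_quotes('print("hello')
    '"'
    >>> has_open_quotes("it's")
    "'"
    >>> has_open_quotes('no quotes here')
    False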
232 """
235 """
233 # We check " first, then ', so complex cases with nested quotes will get
236 # We check " first, then ', so complex cases with nested quotes will get
234 # the " to take precedence.
237 # the " to take precedence.
235 if s.count('"') % 2:
238 if s.count('"') % 2:
236 return '"'
239 return '"'
237 elif s.count("'") % 2:
240 elif s.count("'") % 2:
238 return "'"
241 return "'"
239 else:
242 else:
240 return False
243 return False
241
244
242
245
243 def protect_filename(s, protectables=PROTECTABLES):
246 def protect_filename(s, protectables=PROTECTABLES):
244 """Escape a string to protect certain characters."""
247 """Escape a string to protect certain characters."""
245 if set(s) & set(protectables):
248 if set(s) & set(protectables):
246 if sys.platform == "win32":
249 if sys.platform == "win32":
247 return '"' + s + '"'
250 return '"' + s + '"'
248 else:
251 else:
249 return "".join(("\\" + c if c in protectables else c) for c in s)
252 return "".join(("\\" + c if c in protectables else c) for c in s)
250 else:
253 else:
251 return s
254 return s
252
255
253
256
254 def expand_user(path:str) -> Tuple[str, bool, str]:
257 def expand_user(path:str) -> Tuple[str, bool, str]:
255 """Expand ``~``-style usernames in strings.
258 """Expand ``~``-style usernames in strings.
256
259
257 This is similar to :func:`os.path.expanduser`, but it computes and returns
260 This is similar to :func:`os.path.expanduser`, but it computes and returns
258 extra information that will be useful if the input was being used in
261 extra information that will be useful if the input was being used in
259 computing completions, and you wish to return the completions with the
262 computing completions, and you wish to return the completions with the
260 original '~' instead of its expanded value.
263 original '~' instead of its expanded value.
261
264
262 Parameters
265 Parameters
263 ----------
266 ----------
264 path : str
267 path : str
265 String to be expanded. If no ~ is present, the output is the same as the
268 String to be expanded. If no ~ is present, the output is the same as the
266 input.
269 input.
267
270
268 Returns
271 Returns
269 -------
272 -------
270 newpath : str
273 newpath : str
271 Result of ~ expansion in the input path.
274 Result of ~ expansion in the input path.
272 tilde_expand : bool
275 tilde_expand : bool
273 Whether any expansion was performed or not.
276 Whether any expansion was performed or not.
274 tilde_val : str
277 tilde_val : str
275 The value that ~ was replaced with.
278 The value that ~ was replaced with.
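
    Examples
    --------
    The exact output depends on the current user's home directory, so this is
    only an illustration (shown for a hypothetical ``/home/user``):

    .. code::

        expand_user('~/rc')      # -> ('/home/user/rc', True, '/home/user')
        expand_user('no_tilde')  # -> ('no_tilde', False, '')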
276 """
279 """
277 # Default values
280 # Default values
278 tilde_expand = False
281 tilde_expand = False
279 tilde_val = ''
282 tilde_val = ''
280 newpath = path
283 newpath = path
281
284
282 if path.startswith('~'):
285 if path.startswith('~'):
283 tilde_expand = True
286 tilde_expand = True
284 rest = len(path)-1
287 rest = len(path)-1
285 newpath = os.path.expanduser(path)
288 newpath = os.path.expanduser(path)
286 if rest:
289 if rest:
287 tilde_val = newpath[:-rest]
290 tilde_val = newpath[:-rest]
288 else:
291 else:
289 tilde_val = newpath
292 tilde_val = newpath
290
293
291 return newpath, tilde_expand, tilde_val
294 return newpath, tilde_expand, tilde_val
292
295
293
296
294 def compress_user(path:str, tilde_expand:bool, tilde_val:str) -> str:
297 def compress_user(path:str, tilde_expand:bool, tilde_val:str) -> str:
295 """Does the opposite of expand_user, with its outputs.
298 """Does the opposite of expand_user, with its outputs.
296 """
299 """
297 if tilde_expand:
300 if tilde_expand:
298 return path.replace(tilde_val, '~')
301 return path.replace(tilde_val, '~')
299 else:
302 else:
300 return path
303 return path
301
304
302
305
303 def completions_sorting_key(word):
306 def completions_sorting_key(word):
304 """key for sorting completions
307 """key for sorting completions
305
308
306 This does several things:
309 This does several things:
307
310
308 - Demote any completions starting with underscores to the end
311 - Demote any completions starting with underscores to the end
309 - Insert any %magic and %%cellmagic completions in the alphabetical order
312 - Insert any %magic and %%cellmagic completions in the alphabetical order
310 by their name
313 by their name
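
    A quick illustration of the resulting order (doctest-style; not run as
    part of the test suite since the module skips doctests):

    >>> sorted(['_private', '%%timeit', 'alpha', '%magic'],
    ...        key=completions_sorting_key)
    ['alpha', '%magic', '%%timeit', '_private']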
311 """
314 """
312 prio1, prio2 = 0, 0
315 prio1, prio2 = 0, 0
313
316
314 if word.startswith('__'):
317 if word.startswith('__'):
315 prio1 = 2
318 prio1 = 2
316 elif word.startswith('_'):
319 elif word.startswith('_'):
317 prio1 = 1
320 prio1 = 1
318
321
319 if word.endswith('='):
322 if word.endswith('='):
320 prio1 = -1
323 prio1 = -1
321
324
322 if word.startswith('%%'):
325 if word.startswith('%%'):
323 # If there's another % in there, this is something else, so leave it alone
326 # If there's another % in there, this is something else, so leave it alone
324 if not "%" in word[2:]:
327 if not "%" in word[2:]:
325 word = word[2:]
328 word = word[2:]
326 prio2 = 2
329 prio2 = 2
327 elif word.startswith('%'):
330 elif word.startswith('%'):
328 if not "%" in word[1:]:
331 if not "%" in word[1:]:
329 word = word[1:]
332 word = word[1:]
330 prio2 = 1
333 prio2 = 1
331
334
332 return prio1, word, prio2
335 return prio1, word, prio2
333
336
334
337
335 class _FakeJediCompletion:
338 class _FakeJediCompletion:
336 """
339 """
337 This is a workaround to communicate to the UI that Jedi has crashed and to
340 This is a workaround to communicate to the UI that Jedi has crashed and to
338 report a bug. Will be used only id :any:`IPCompleter.debug` is set to true.
341 report a bug. Will be used only id :any:`IPCompleter.debug` is set to true.
339
342
340 Added in IPython 6.0 so should likely be removed for 7.0
343 Added in IPython 6.0 so should likely be removed for 7.0
341
344
342 """
345 """
343
346
344 def __init__(self, name):
347 def __init__(self, name):
345
348
346 self.name = name
349 self.name = name
347 self.complete = name
350 self.complete = name
348 self.type = 'crashed'
351 self.type = 'crashed'
349 self.name_with_symbols = name
352 self.name_with_symbols = name
350 self.signature = ''
353 self.signature = ''
351 self._origin = 'fake'
354 self._origin = 'fake'
352
355
353 def __repr__(self):
356 def __repr__(self):
354 return '<Fake completion object jedi has crashed>'
357 return '<Fake completion object jedi has crashed>'
355
358
356
359
357 class Completion:
360 class Completion:
358 """
361 """
359 Completion object used and return by IPython completers.
362 Completion object used and return by IPython completers.
360
363
361 .. warning::
364 .. warning::
362
365
363 Unstable
366 Unstable
364
367
365 This function is unstable, API may change without warning.
368 This function is unstable, API may change without warning.
366 It will also raise unless use in proper context manager.
369 It will also raise unless use in proper context manager.
367
370
368 This act as a middle ground :any:`Completion` object between the
371 This act as a middle ground :any:`Completion` object between the
369 :any:`jedi.api.classes.Completion` object and the Prompt Toolkit completion
372 :any:`jedi.api.classes.Completion` object and the Prompt Toolkit completion
370 object. While Jedi need a lot of information about evaluator and how the
373 object. While Jedi need a lot of information about evaluator and how the
371 code should be ran/inspected, PromptToolkit (and other frontend) mostly
374 code should be ran/inspected, PromptToolkit (and other frontend) mostly
372 need user facing information.
375 need user facing information.
373
376
374 - Which range should be replaced replaced by what.
377 - Which range should be replaced replaced by what.
375 - Some metadata (like completion type), or meta information to displayed to
378 - Some metadata (like completion type), or meta information to displayed to
376 the use user.
379 the use user.
377
380
378 For debugging purpose we can also store the origin of the completion (``jedi``,
381 For debugging purpose we can also store the origin of the completion (``jedi``,
379 ``IPython.python_matches``, ``IPython.magics_matches``...).
382 ``IPython.python_matches``, ``IPython.magics_matches``...).
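
    A minimal construction sketch (it must run inside :any:`provisionalcompleter`,
    otherwise the module-level warning filter turns the
    :any:`ProvisionalCompleterWarning` into an error):

    .. code::

        with provisionalcompleter():
            c = Completion(start=0, end=2, text='print', type='function')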
380 """
383 """
381
384
382 __slots__ = ['start', 'end', 'text', 'type', 'signature', '_origin']
385 __slots__ = ['start', 'end', 'text', 'type', 'signature', '_origin']
383
386
384 def __init__(self, start: int, end: int, text: str, *, type: str=None, _origin='', signature='') -> None:
387 def __init__(self, start: int, end: int, text: str, *, type: str=None, _origin='', signature='') -> None:
385 warnings.warn("``Completion`` is a provisional API (as of IPython 6.0). "
388 warnings.warn("``Completion`` is a provisional API (as of IPython 6.0). "
386 "It may change without warnings. "
389 "It may change without warnings. "
387 "Use in corresponding context manager.",
390 "Use in corresponding context manager.",
388 category=ProvisionalCompleterWarning, stacklevel=2)
391 category=ProvisionalCompleterWarning, stacklevel=2)
389
392
390 self.start = start
393 self.start = start
391 self.end = end
394 self.end = end
392 self.text = text
395 self.text = text
393 self.type = type
396 self.type = type
394 self.signature = signature
397 self.signature = signature
395 self._origin = _origin
398 self._origin = _origin
396
399
397 def __repr__(self):
400 def __repr__(self):
398 return '<Completion start=%s end=%s text=%r type=%r, signature=%r,>' % \
401 return '<Completion start=%s end=%s text=%r type=%r, signature=%r,>' % \
399 (self.start, self.end, self.text, self.type or '?', self.signature or '?')
402 (self.start, self.end, self.text, self.type or '?', self.signature or '?')
400
403
401 def __eq__(self, other)->Bool:
404 def __eq__(self, other)->Bool:
402 """
405 """
403 Equality and hash do not hash the type (as some completer may not be
406 Equality and hash do not hash the type (as some completer may not be
404 able to infer the type), but are use to (partially) de-duplicate
407 able to infer the type), but are use to (partially) de-duplicate
405 completion.
408 completion.
406
409
407 Completely de-duplicating completion is a bit tricker that just
410 Completely de-duplicating completion is a bit tricker that just
408 comparing as it depends on surrounding text, which Completions are not
411 comparing as it depends on surrounding text, which Completions are not
409 aware of.
412 aware of.
410 """
413 """
411 return self.start == other.start and \
414 return self.start == other.start and \
412 self.end == other.end and \
415 self.end == other.end and \
413 self.text == other.text
416 self.text == other.text
414
417
415 def __hash__(self):
418 def __hash__(self):
416 return hash((self.start, self.end, self.text))
419 return hash((self.start, self.end, self.text))
417
420
418
421
419 _IC = Iterable[Completion]
422 _IC = Iterable[Completion]
420
423
421
424
422 def _deduplicate_completions(text: str, completions: _IC)-> _IC:
425 def _deduplicate_completions(text: str, completions: _IC)-> _IC:
423 """
426 """
424 Deduplicate a set of completions.
427 Deduplicate a set of completions.
425
428
426 .. warning::
429 .. warning::
427
430
428 Unstable
431 Unstable
429
432
430 This function is unstable, API may change without warning.
433 This function is unstable, API may change without warning.
431
434
432 Parameters
435 Parameters
433 ----------
436 ----------
434 text : str
437 text : str
435 text that should be completed.
438 text that should be completed.
436 completions : Iterator[Completion]
439 completions : Iterator[Completion]
437 iterator over the completions to deduplicate
440 iterator over the completions to deduplicate
438
441
439 Yields
442 Yields
440 ------
443 ------
441 `Completions` objects
444 `Completions` objects
442 Completions coming from multiple sources, may be different but end up having
445 Completions coming from multiple sources, may be different but end up having
443 the same effect when applied to ``text``. If this is the case, this will
446 the same effect when applied to ``text``. If this is the case, this will
444 consider completions as equal and only emit the first encountered.
447 consider completions as equal and only emit the first encountered.
445 Not folded in `completions()` yet for debugging purpose, and to detect when
448 Not folded in `completions()` yet for debugging purpose, and to detect when
446 the IPython completer does return things that Jedi does not, but should be
449 the IPython completer does return things that Jedi does not, but should be
447 at some point.
450 at some point.
448 """
451 """
449 completions = list(completions)
452 completions = list(completions)
450 if not completions:
453 if not completions:
451 return
454 return
452
455
453 new_start = min(c.start for c in completions)
456 new_start = min(c.start for c in completions)
454 new_end = max(c.end for c in completions)
457 new_end = max(c.end for c in completions)
455
458
456 seen = set()
459 seen = set()
457 for c in completions:
460 for c in completions:
458 new_text = text[new_start:c.start] + c.text + text[c.end:new_end]
461 new_text = text[new_start:c.start] + c.text + text[c.end:new_end]
459 if new_text not in seen:
462 if new_text not in seen:
460 yield c
463 yield c
461 seen.add(new_text)
464 seen.add(new_text)
462
465
463
466
464 def rectify_completions(text: str, completions: _IC, *, _debug=False)->_IC:
467 def rectify_completions(text: str, completions: _IC, *, _debug=False)->_IC:
465 """
468 """
466 Rectify a set of completions to all have the same ``start`` and ``end``
469 Rectify a set of completions to all have the same ``start`` and ``end``
467
470
468 .. warning::
471 .. warning::
469
472
470 Unstable
473 Unstable
471
474
472 This function is unstable, API may change without warning.
475 This function is unstable, API may change without warning.
473 It will also raise unless use in proper context manager.
476 It will also raise unless use in proper context manager.
474
477
475 Parameters
478 Parameters
476 ----------
479 ----------
477 text : str
480 text : str
478 text that should be completed.
481 text that should be completed.
479 completions : Iterator[Completion]
482 completions : Iterator[Completion]
480 iterator over the completions to rectify
483 iterator over the completions to rectify
481
484
482 Notes
485 Notes
483 -----
486 -----
484 :any:`jedi.api.classes.Completion` s returned by Jedi may not have the same start and end, though
487 :any:`jedi.api.classes.Completion` s returned by Jedi may not have the same start and end, though
485 the Jupyter Protocol requires them to behave like so. This will readjust
488 the Jupyter Protocol requires them to behave like so. This will readjust
486 the completion to have the same ``start`` and ``end`` by padding both
489 the completion to have the same ``start`` and ``end`` by padding both
487 extremities with surrounding text.
490 extremities with surrounding text.
488
491
489 During stabilisation should support a ``_debug`` option to log which
492 During stabilisation should support a ``_debug`` option to log which
490 completion are return by the IPython completer and not found in Jedi in
493 completion are return by the IPython completer and not found in Jedi in
491 order to make upstream bug report.
494 order to make upstream bug report.
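
    Examples
    --------
    A small sketch of the padding behaviour (doctest-style; it relies only on
    the provisional API defined in this module):

    >>> with provisionalcompleter():
    ...     text = 'fo.b'
    ...     completions = [Completion(3, 4, 'bar'), Completion(0, 4, 'fo.baz')]
    ...     [c.text for c in rectify_completions(text, completions)]
    ['fo.bar', 'fo.baz']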
492 """
495 """
493 warnings.warn("`rectify_completions` is a provisional API (as of IPython 6.0). "
496 warnings.warn("`rectify_completions` is a provisional API (as of IPython 6.0). "
494 "It may change without warnings. "
497 "It may change without warnings. "
495 "Use in corresponding context manager.",
498 "Use in corresponding context manager.",
496 category=ProvisionalCompleterWarning, stacklevel=2)
499 category=ProvisionalCompleterWarning, stacklevel=2)
497
500
498 completions = list(completions)
501 completions = list(completions)
499 if not completions:
502 if not completions:
500 return
503 return
501 starts = (c.start for c in completions)
504 starts = (c.start for c in completions)
502 ends = (c.end for c in completions)
505 ends = (c.end for c in completions)
503
506
504 new_start = min(starts)
507 new_start = min(starts)
505 new_end = max(ends)
508 new_end = max(ends)
506
509
507 seen_jedi = set()
510 seen_jedi = set()
508 seen_python_matches = set()
511 seen_python_matches = set()
509 for c in completions:
512 for c in completions:
510 new_text = text[new_start:c.start] + c.text + text[c.end:new_end]
513 new_text = text[new_start:c.start] + c.text + text[c.end:new_end]
511 if c._origin == 'jedi':
514 if c._origin == 'jedi':
512 seen_jedi.add(new_text)
515 seen_jedi.add(new_text)
513 elif c._origin == 'IPCompleter.python_matches':
516 elif c._origin == 'IPCompleter.python_matches':
514 seen_python_matches.add(new_text)
517 seen_python_matches.add(new_text)
515 yield Completion(new_start, new_end, new_text, type=c.type, _origin=c._origin, signature=c.signature)
518 yield Completion(new_start, new_end, new_text, type=c.type, _origin=c._origin, signature=c.signature)
516 diff = seen_python_matches.difference(seen_jedi)
519 diff = seen_python_matches.difference(seen_jedi)
517 if diff and _debug:
520 if diff and _debug:
518 print('IPython.python matches have extras:', diff)
521 print('IPython.python matches have extras:', diff)
519
522
520
523
521 if sys.platform == 'win32':
524 if sys.platform == 'win32':
522 DELIMS = ' \t\n`!@#$^&*()=+[{]}|;\'",<>?'
525 DELIMS = ' \t\n`!@#$^&*()=+[{]}|;\'",<>?'
523 else:
526 else:
524 DELIMS = ' \t\n`!@#$^&*()=+[{]}\\|;:\'",<>?'
527 DELIMS = ' \t\n`!@#$^&*()=+[{]}\\|;:\'",<>?'
525
528
526 GREEDY_DELIMS = ' =\r\n'
529 GREEDY_DELIMS = ' =\r\n'
527
530
528
531
529 class CompletionSplitter(object):
532 class CompletionSplitter(object):
530 """An object to split an input line in a manner similar to readline.
533 """An object to split an input line in a manner similar to readline.
531
534
532 By having our own implementation, we can expose readline-like completion in
535 By having our own implementation, we can expose readline-like completion in
533 a uniform manner to all frontends. This object only needs to be given the
536 a uniform manner to all frontends. This object only needs to be given the
534 line of text to be split and the cursor position on said line, and it
537 line of text to be split and the cursor position on said line, and it
535 returns the 'word' to be completed on at the cursor after splitting the
538 returns the 'word' to be completed on at the cursor after splitting the
536 entire line.
539 entire line.
537
540
538 What characters are used as splitting delimiters can be controlled by
541 What characters are used as splitting delimiters can be controlled by
539 setting the ``delims`` attribute (this is a property that internally
542 setting the ``delims`` attribute (this is a property that internally
540 automatically builds the necessary regular expression)"""
543 automatically builds the necessary regular expression)"""
541
544
542 # Private interface
545 # Private interface
543
546
544 # A string of delimiter characters. The default value makes sense for
547 # A string of delimiter characters. The default value makes sense for
545 # IPython's most typical usage patterns.
548 # IPython's most typical usage patterns.
546 _delims = DELIMS
549 _delims = DELIMS
547
550
548 # The expression (a normal string) to be compiled into a regular expression
551 # The expression (a normal string) to be compiled into a regular expression
549 # for actual splitting. We store it as an attribute mostly for ease of
552 # for actual splitting. We store it as an attribute mostly for ease of
550 # debugging, since this type of code can be so tricky to debug.
553 # debugging, since this type of code can be so tricky to debug.
551 _delim_expr = None
554 _delim_expr = None
552
555
553 # The regular expression that does the actual splitting
556 # The regular expression that does the actual splitting
554 _delim_re = None
557 _delim_re = None
555
558
556 def __init__(self, delims=None):
559 def __init__(self, delims=None):
557 delims = CompletionSplitter._delims if delims is None else delims
560 delims = CompletionSplitter._delims if delims is None else delims
558 self.delims = delims
561 self.delims = delims
559
562
560 @property
563 @property
561 def delims(self):
564 def delims(self):
562 """Return the string of delimiter characters."""
565 """Return the string of delimiter characters."""
563 return self._delims
566 return self._delims
564
567
565 @delims.setter
568 @delims.setter
566 def delims(self, delims):
569 def delims(self, delims):
567 """Set the delimiters for line splitting."""
570 """Set the delimiters for line splitting."""
568 expr = '[' + ''.join('\\'+ c for c in delims) + ']'
571 expr = '[' + ''.join('\\'+ c for c in delims) + ']'
569 self._delim_re = re.compile(expr)
572 self._delim_re = re.compile(expr)
570 self._delims = delims
573 self._delims = delims
571 self._delim_expr = expr
574 self._delim_expr = expr
572
575
573 def split_line(self, line, cursor_pos=None):
576 def split_line(self, line, cursor_pos=None):
574 """Split a line of text with a cursor at the given position.
577 """Split a line of text with a cursor at the given position.
575 """
578 """
576 l = line if cursor_pos is None else line[:cursor_pos]
579 l = line if cursor_pos is None else line[:cursor_pos]
577 return self._delim_re.split(l)[-1]
580 return self._delim_re.split(l)[-1]
578
581
579
582
580
583
581 class Completer(Configurable):
584 class Completer(Configurable):
582
585
583 greedy = Bool(False,
586 greedy = Bool(False,
584 help="""Activate greedy completion
587 help="""Activate greedy completion
585 PENDING DEPRECTION. this is now mostly taken care of with Jedi.
588 PENDING DEPRECTION. this is now mostly taken care of with Jedi.
586
589
587 This will enable completion on elements of lists, results of function calls, etc.,
590 This will enable completion on elements of lists, results of function calls, etc.,
588 but can be unsafe because the code is actually evaluated on TAB.
591 but can be unsafe because the code is actually evaluated on TAB.
589 """
592 """
590 ).tag(config=True)
593 ).tag(config=True)
591
594
592 use_jedi = Bool(default_value=JEDI_INSTALLED,
595 use_jedi = Bool(default_value=JEDI_INSTALLED,
593 help="Experimental: Use Jedi to generate autocompletions. "
596 help="Experimental: Use Jedi to generate autocompletions. "
594 "Default to True if jedi is installed.").tag(config=True)
597 "Default to True if jedi is installed.").tag(config=True)
595
598
596 jedi_compute_type_timeout = Int(default_value=400,
599 jedi_compute_type_timeout = Int(default_value=400,
597 help="""Experimental: restrict time (in milliseconds) during which Jedi can compute types.
600 help="""Experimental: restrict time (in milliseconds) during which Jedi can compute types.
598 Set to 0 to stop computing types. Non-zero value lower than 100ms may hurt
601 Set to 0 to stop computing types. Non-zero value lower than 100ms may hurt
599 performance by preventing jedi to build its cache.
602 performance by preventing jedi to build its cache.
600 """).tag(config=True)
603 """).tag(config=True)
601
604
602 debug = Bool(default_value=False,
605 debug = Bool(default_value=False,
603 help='Enable debug for the Completer. Mostly print extra '
606 help='Enable debug for the Completer. Mostly print extra '
604 'information for experimental jedi integration.')\
607 'information for experimental jedi integration.')\
605 .tag(config=True)
608 .tag(config=True)
606
609
607 backslash_combining_completions = Bool(True,
610 backslash_combining_completions = Bool(True,
608 help="Enable unicode completions, e.g. \\alpha<tab> . "
611 help="Enable unicode completions, e.g. \\alpha<tab> . "
609 "Includes completion of latex commands, unicode names, and expanding "
612 "Includes completion of latex commands, unicode names, and expanding "
610 "unicode characters back to latex commands.").tag(config=True)
613 "unicode characters back to latex commands.").tag(config=True)
611
614
612
615
613
616
614 def __init__(self, namespace=None, global_namespace=None, **kwargs):
617 def __init__(self, namespace=None, global_namespace=None, **kwargs):
615 """Create a new completer for the command line.
618 """Create a new completer for the command line.
616
619
617 Completer(namespace=ns, global_namespace=ns2) -> completer instance.
620 Completer(namespace=ns, global_namespace=ns2) -> completer instance.
618
621
619 If unspecified, the default namespace where completions are performed
622 If unspecified, the default namespace where completions are performed
620 is __main__ (technically, __main__.__dict__). Namespaces should be
623 is __main__ (technically, __main__.__dict__). Namespaces should be
621 given as dictionaries.
624 given as dictionaries.
622
625
623 An optional second namespace can be given. This allows the completer
626 An optional second namespace can be given. This allows the completer
624 to handle cases where both the local and global scopes need to be
627 to handle cases where both the local and global scopes need to be
625 distinguished.
628 distinguished.
626 """
629 """
627
630
628 # Don't bind to namespace quite yet, but flag whether the user wants a
631 # Don't bind to namespace quite yet, but flag whether the user wants a
629 # specific namespace or to use __main__.__dict__. This will allow us
632 # specific namespace or to use __main__.__dict__. This will allow us
630 # to bind to __main__.__dict__ at completion time, not now.
633 # to bind to __main__.__dict__ at completion time, not now.
631 if namespace is None:
634 if namespace is None:
632 self.use_main_ns = True
635 self.use_main_ns = True
633 else:
636 else:
634 self.use_main_ns = False
637 self.use_main_ns = False
635 self.namespace = namespace
638 self.namespace = namespace
636
639
637 # The global namespace, if given, can be bound directly
640 # The global namespace, if given, can be bound directly
638 if global_namespace is None:
641 if global_namespace is None:
639 self.global_namespace = {}
642 self.global_namespace = {}
640 else:
643 else:
641 self.global_namespace = global_namespace
644 self.global_namespace = global_namespace
642
645
643 self.custom_matchers = []
646 self.custom_matchers = []
644
647
645 super(Completer, self).__init__(**kwargs)
648 super(Completer, self).__init__(**kwargs)
646
649
647 def complete(self, text, state):
650 def complete(self, text, state):
648 """Return the next possible completion for 'text'.
651 """Return the next possible completion for 'text'.
649
652
650 This is called successively with state == 0, 1, 2, ... until it
653 This is called successively with state == 0, 1, 2, ... until it
651 returns None. The completion should begin with 'text'.
654 returns None. The completion should begin with 'text'.
652
655
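        A minimal usage sketch (doctest-style, with an explicit namespace so
        the result does not depend on ``__main__``):

        >>> c = Completer(namespace={'myvar': 1})
        >>> c.complete('myv', 0)
        'myvar'
        >>> c.complete('myv', 1) is None
        True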
653 """
656 """
654 if self.use_main_ns:
657 if self.use_main_ns:
655 self.namespace = __main__.__dict__
658 self.namespace = __main__.__dict__
656
659
657 if state == 0:
660 if state == 0:
658 if "." in text:
661 if "." in text:
659 self.matches = self.attr_matches(text)
662 self.matches = self.attr_matches(text)
660 else:
663 else:
661 self.matches = self.global_matches(text)
664 self.matches = self.global_matches(text)
662 try:
665 try:
663 return self.matches[state]
666 return self.matches[state]
664 except IndexError:
667 except IndexError:
665 return None
668 return None
666
669
667 def global_matches(self, text):
670 def global_matches(self, text):
668 """Compute matches when text is a simple name.
671 """Compute matches when text is a simple name.
669
672
670 Return a list of all keywords, built-in functions and names currently
673 Return a list of all keywords, built-in functions and names currently
671 defined in self.namespace or self.global_namespace that match.
674 defined in self.namespace or self.global_namespace that match.
672
675
673 """
676 """
674 matches = []
677 matches = []
675 match_append = matches.append
678 match_append = matches.append
676 n = len(text)
679 n = len(text)
677 for lst in [keyword.kwlist,
680 for lst in [keyword.kwlist,
678 builtin_mod.__dict__.keys(),
681 builtin_mod.__dict__.keys(),
679 self.namespace.keys(),
682 self.namespace.keys(),
680 self.global_namespace.keys()]:
683 self.global_namespace.keys()]:
681 for word in lst:
684 for word in lst:
682 if word[:n] == text and word != "__builtins__":
685 if word[:n] == text and word != "__builtins__":
683 match_append(word)
686 match_append(word)
684
687
685 snake_case_re = re.compile(r"[^_]+(_[^_]+)+?\Z")
688 snake_case_re = re.compile(r"[^_]+(_[^_]+)+?\Z")
686 for lst in [self.namespace.keys(),
689 for lst in [self.namespace.keys(),
687 self.global_namespace.keys()]:
690 self.global_namespace.keys()]:
688 shortened = {"_".join([sub[0] for sub in word.split('_')]) : word
691 shortened = {"_".join([sub[0] for sub in word.split('_')]) : word
689 for word in lst if snake_case_re.match(word)}
692 for word in lst if snake_case_re.match(word)}
690 for word in shortened.keys():
693 for word in shortened.keys():
691 if word[:n] == text and word != "__builtins__":
694 if word[:n] == text and word != "__builtins__":
692 match_append(shortened[word])
695 match_append(shortened[word])
693 return matches
696 return matches
694
697
695 def attr_matches(self, text):
698 def attr_matches(self, text):
696 """Compute matches when text contains a dot.
699 """Compute matches when text contains a dot.
697
700
698 Assuming the text is of the form NAME.NAME....[NAME], and is
701 Assuming the text is of the form NAME.NAME....[NAME], and is
699 evaluatable in self.namespace or self.global_namespace, it will be
702 evaluatable in self.namespace or self.global_namespace, it will be
700 evaluated and its attributes (as revealed by dir()) are used as
703 evaluated and its attributes (as revealed by dir()) are used as
701 possible completions. (For class instances, class members are
704 possible completions. (For class instances, class members are
702 also considered.)
705 also considered.)
703
706
704 WARNING: this can still invoke arbitrary C code, if an object
707 WARNING: this can still invoke arbitrary C code, if an object
705 with a __getattr__ hook is evaluated.
708 with a __getattr__ hook is evaluated.
706
709
707 """
710 """
708
711
709 # Another option, seems to work great. Catches things like ''.<tab>
712 # Another option, seems to work great. Catches things like ''.<tab>
710 m = re.match(r"(\S+(\.\w+)*)\.(\w*)$", text)
713 m = re.match(r"(\S+(\.\w+)*)\.(\w*)$", text)
711
714
712 if m:
715 if m:
713 expr, attr = m.group(1, 3)
716 expr, attr = m.group(1, 3)
714 elif self.greedy:
717 elif self.greedy:
715 m2 = re.match(r"(.+)\.(\w*)$", self.line_buffer)
718 m2 = re.match(r"(.+)\.(\w*)$", self.line_buffer)
716 if not m2:
719 if not m2:
717 return []
720 return []
718 expr, attr = m2.group(1,2)
721 expr, attr = m2.group(1,2)
719 else:
722 else:
720 return []
723 return []
721
724
722 try:
725 try:
723 obj = eval(expr, self.namespace)
726 obj = eval(expr, self.namespace)
724 except:
727 except:
725 try:
728 try:
726 obj = eval(expr, self.global_namespace)
729 obj = eval(expr, self.global_namespace)
727 except:
730 except:
728 return []
731 return []
729
732
730 if self.limit_to__all__ and hasattr(obj, '__all__'):
733 if self.limit_to__all__ and hasattr(obj, '__all__'):
731 words = get__all__entries(obj)
734 words = get__all__entries(obj)
732 else:
735 else:
733 words = dir2(obj)
736 words = dir2(obj)
734
737
735 try:
738 try:
736 words = generics.complete_object(obj, words)
739 words = generics.complete_object(obj, words)
737 except TryNext:
740 except TryNext:
738 pass
741 pass
739 except AssertionError:
742 except AssertionError:
740 raise
743 raise
741 except Exception:
744 except Exception:
742 # Silence errors from completion function
745 # Silence errors from completion function
743 #raise # dbg
746 #raise # dbg
744 pass
747 pass
745 # Build match list to return
748 # Build match list to return
746 n = len(attr)
749 n = len(attr)
747 return [u"%s.%s" % (expr, w) for w in words if w[:n] == attr ]
750 return [u"%s.%s" % (expr, w) for w in words if w[:n] == attr ]
748
751
749
752
750 def get__all__entries(obj):
753 def get__all__entries(obj):
751 """returns the strings in the __all__ attribute"""
754 """returns the strings in the __all__ attribute"""
752 try:
755 try:
753 words = getattr(obj, '__all__')
756 words = getattr(obj, '__all__')
754 except:
757 except:
755 return []
758 return []
756
759
757 return [w for w in words if isinstance(w, str)]
760 return [w for w in words if isinstance(w, str)]
758
761
759
762
760 def match_dict_keys(keys: List[Union[str, bytes, Tuple[Union[str, bytes]]]], prefix: str, delims: str,
763 def match_dict_keys(keys: List[Union[str, bytes, Tuple[Union[str, bytes]]]], prefix: str, delims: str,
761 extra_prefix: Optional[Tuple[str, bytes]]=None) -> Tuple[str, int, List[str]]:
764 extra_prefix: Optional[Tuple[str, bytes]]=None) -> Tuple[str, int, List[str]]:
762 """Used by dict_key_matches, matching the prefix to a list of keys
765 """Used by dict_key_matches, matching the prefix to a list of keys
763
766
764 Parameters
767 Parameters
765 ----------
768 ----------
766 keys
769 keys
767 list of keys in dictionary currently being completed.
770 list of keys in dictionary currently being completed.
768 prefix
771 prefix
769 Part of the text already typed by the user. E.g. `mydict[b'fo`
772 Part of the text already typed by the user. E.g. `mydict[b'fo`
770 delims
773 delims
771 String of delimiters to consider when finding the current key.
774 String of delimiters to consider when finding the current key.
772 extra_prefix : optional
775 extra_prefix : optional
773 Part of the text already typed in multi-key index cases. E.g. for
776 Part of the text already typed in multi-key index cases. E.g. for
774 `mydict['foo', "bar", 'b`, this would be `('foo', 'bar')`.
777 `mydict['foo', "bar", 'b`, this would be `('foo', 'bar')`.
775
778
776 Returns
779 Returns
777 -------
780 -------
778 A tuple of three elements: ``quote``, ``token_start``, ``matched``, with
781 A tuple of three elements: ``quote``, ``token_start``, ``matched``, with
779 ``quote`` being the quote that need to be used to close current string.
782 ``quote`` being the quote that need to be used to close current string.
780 ``token_start`` the position where the replacement should start occurring,
783 ``token_start`` the position where the replacement should start occurring,
781 ``matches`` a list of replacement/completion
784 ``matches`` a list of replacement/completion
782
785
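    Examples
    --------
    An illustrative doctest-style run over a small set of keys (the byte key
    is skipped because it cannot start with the str prefix):

    >>> match_dict_keys(['foo', 'food', b'bar'], "'fo", delims=' ')
    ("'", 0, ["'foo", "'food"])
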
783 """
786 """
784 prefix_tuple = extra_prefix if extra_prefix else ()
787 prefix_tuple = extra_prefix if extra_prefix else ()
785 Nprefix = len(prefix_tuple)
788 Nprefix = len(prefix_tuple)
786 def filter_prefix_tuple(key):
789 def filter_prefix_tuple(key):
787 # Reject too short keys
790 # Reject too short keys
788 if len(key) <= Nprefix:
791 if len(key) <= Nprefix:
789 return False
792 return False
790 # Reject keys with non str/bytes in it
793 # Reject keys with non str/bytes in it
791 for k in key:
794 for k in key:
792 if not isinstance(k, (str, bytes)):
795 if not isinstance(k, (str, bytes)):
793 return False
796 return False
794 # Reject keys that do not match the prefix
797 # Reject keys that do not match the prefix
795 for k, pt in zip(key, prefix_tuple):
798 for k, pt in zip(key, prefix_tuple):
796 if k != pt:
799 if k != pt:
797 return False
800 return False
798 # All checks passed!
801 # All checks passed!
799 return True
802 return True
800
803
801 filtered_keys:List[Union[str,bytes]] = []
804 filtered_keys:List[Union[str,bytes]] = []
802 def _add_to_filtered_keys(key):
805 def _add_to_filtered_keys(key):
803 if isinstance(key, (str, bytes)):
806 if isinstance(key, (str, bytes)):
804 filtered_keys.append(key)
807 filtered_keys.append(key)
805
808
806 for k in keys:
809 for k in keys:
807 if isinstance(k, tuple):
810 if isinstance(k, tuple):
808 if filter_prefix_tuple(k):
811 if filter_prefix_tuple(k):
809 _add_to_filtered_keys(k[Nprefix])
812 _add_to_filtered_keys(k[Nprefix])
810 else:
813 else:
811 _add_to_filtered_keys(k)
814 _add_to_filtered_keys(k)
812
815
813 if not prefix:
816 if not prefix:
814 return '', 0, [repr(k) for k in filtered_keys]
817 return '', 0, [repr(k) for k in filtered_keys]
815 quote_match = re.search('["\']', prefix)
818 quote_match = re.search('["\']', prefix)
816 assert quote_match is not None # silence mypy
819 assert quote_match is not None # silence mypy
817 quote = quote_match.group()
820 quote = quote_match.group()
818 try:
821 try:
819 prefix_str = eval(prefix + quote, {})
822 prefix_str = eval(prefix + quote, {})
820 except Exception:
823 except Exception:
821 return '', 0, []
824 return '', 0, []
822
825
823 pattern = '[^' + ''.join('\\' + c for c in delims) + ']*$'
826 pattern = '[^' + ''.join('\\' + c for c in delims) + ']*$'
824 token_match = re.search(pattern, prefix, re.UNICODE)
827 token_match = re.search(pattern, prefix, re.UNICODE)
825 assert token_match is not None # silence mypy
828 assert token_match is not None # silence mypy
826 token_start = token_match.start()
829 token_start = token_match.start()
827 token_prefix = token_match.group()
830 token_prefix = token_match.group()
828
831
829 matched:List[str] = []
832 matched:List[str] = []
830 for key in filtered_keys:
833 for key in filtered_keys:
831 try:
834 try:
832 if not key.startswith(prefix_str):
835 if not key.startswith(prefix_str):
833 continue
836 continue
834 except (AttributeError, TypeError, UnicodeError):
837 except (AttributeError, TypeError, UnicodeError):
835 # Python 3+ TypeError on b'a'.startswith('a') or vice-versa
838 # Python 3+ TypeError on b'a'.startswith('a') or vice-versa
836 continue
839 continue
837
840
838 # reformat remainder of key to begin with prefix
841 # reformat remainder of key to begin with prefix
839 rem = key[len(prefix_str):]
842 rem = key[len(prefix_str):]
840 # force repr wrapped in '
843 # force repr wrapped in '
841 rem_repr = repr(rem + '"') if isinstance(rem, str) else repr(rem + b'"')
844 rem_repr = repr(rem + '"') if isinstance(rem, str) else repr(rem + b'"')
842 rem_repr = rem_repr[1 + rem_repr.index("'"):-2]
845 rem_repr = rem_repr[1 + rem_repr.index("'"):-2]
843 if quote == '"':
846 if quote == '"':
844 # The entered prefix is quoted with ",
847 # The entered prefix is quoted with ",
845 # but the match is quoted with '.
848 # but the match is quoted with '.
846 # A contained " hence needs escaping for comparison:
849 # A contained " hence needs escaping for comparison:
847 rem_repr = rem_repr.replace('"', '\\"')
850 rem_repr = rem_repr.replace('"', '\\"')
848
851
849 # then reinsert prefix from start of token
852 # then reinsert prefix from start of token
850 matched.append('%s%s' % (token_prefix, rem_repr))
853 matched.append('%s%s' % (token_prefix, rem_repr))
851 return quote, token_start, matched
854 return quote, token_start, matched
852
855
853
856
854 def cursor_to_position(text:str, line:int, column:int)->int:
857 def cursor_to_position(text:str, line:int, column:int)->int:
855 """
858 """
856 Convert the (line,column) position of the cursor in text to an offset in a
859 Convert the (line,column) position of the cursor in text to an offset in a
857 string.
860 string.
858
861
859 Parameters
862 Parameters
860 ----------
863 ----------
861 text : str
864 text : str
862 The text in which to calculate the cursor offset
865 The text in which to calculate the cursor offset
863 line : int
866 line : int
864 Line of the cursor; 0-indexed
867 Line of the cursor; 0-indexed
865 column : int
868 column : int
866 Column of the cursor 0-indexed
869 Column of the cursor 0-indexed
867
870
868 Returns
871 Returns
869 -------
872 -------
870 Position of the cursor in ``text``, 0-indexed.
873 Position of the cursor in ``text``, 0-indexed.
871
874
872 See Also
875 See Also
873 --------
876 --------
874 position_to_cursor : reciprocal of this function
877 position_to_cursor : reciprocal of this function
875
878
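    Examples
    --------
    A small doctest-style check (the resulting offset points at the ``'d'``):

    >>> cursor_to_position('ab\\ncd', 1, 1)
    4
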
876 """
879 """
877 lines = text.split('\n')
880 lines = text.split('\n')
878 assert line <= len(lines), '{} <= {}'.format(str(line), str(len(lines)))
881 assert line <= len(lines), '{} <= {}'.format(str(line), str(len(lines)))
879
882
880 return sum(len(l) + 1 for l in lines[:line]) + column
883 return sum(len(l) + 1 for l in lines[:line]) + column
881
884
882 def position_to_cursor(text:str, offset:int)->Tuple[int, int]:
885 def position_to_cursor(text:str, offset:int)->Tuple[int, int]:
883 """
886 """
884 Convert the position of the cursor in text (0 indexed) to a line
887 Convert the position of the cursor in text (0 indexed) to a line
885 number(0-indexed) and a column number (0-indexed) pair
888 number(0-indexed) and a column number (0-indexed) pair
886
889
887 Position should be a valid position in ``text``.
890 Position should be a valid position in ``text``.
888
891
889 Parameters
892 Parameters
890 ----------
893 ----------
891 text : str
894 text : str
892 The text in which to calculate the cursor offset
895 The text in which to calculate the cursor offset
893 offset : int
896 offset : int
894 Position of the cursor in ``text``, 0-indexed.
897 Position of the cursor in ``text``, 0-indexed.
895
898
896 Returns
899 Returns
897 -------
900 -------
898 (line, column) : (int, int)
901 (line, column) : (int, int)
899 Line of the cursor; 0-indexed, column of the cursor 0-indexed
902 Line of the cursor; 0-indexed, column of the cursor 0-indexed
900
903
901 See Also
904 See Also
902 --------
905 --------
903 cursor_to_position : reciprocal of this function
906 cursor_to_position : reciprocal of this function
904
907
905 """
908 """
906
909
907 assert 0 <= offset <= len(text) , "0 <= %s <= %s" % (offset , len(text))
910 assert 0 <= offset <= len(text) , "0 <= %s <= %s" % (offset , len(text))
908
911
909 before = text[:offset]
912 before = text[:offset]
910 blines = before.split('\n') # ! splitlines would trim a trailing \n
913 blines = before.split('\n') # ! splitlines would trim a trailing \n
911 line = before.count('\n')
914 line = before.count('\n')
912 col = len(blines[-1])
915 col = len(blines[-1])
913 return line, col
916 return line, col
914
917
915
918
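# Illustrative example: this is the reciprocal of cursor_to_position above.
#
#     >>> position_to_cursor("ab\ncd", 4)
#     (1, 1)
#     >>> cursor_to_position("ab\ncd", *position_to_cursor("ab\ncd", 4))
#     4
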
916 def _safe_isinstance(obj, module, class_name):
919 def _safe_isinstance(obj, module, class_name):
917 """Checks if obj is an instance of module.class_name if loaded
920 """Checks if obj is an instance of module.class_name if loaded
918 """
921 """
919 return (module in sys.modules and
922 return (module in sys.modules and
920 isinstance(obj, getattr(import_module(module), class_name)))
923 isinstance(obj, getattr(import_module(module), class_name)))
921
924
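# Illustrative example: the check is False either when the type does not match
# or when the module has never been imported, so optional heavy dependencies
# are never imported just to answer an isinstance question.
#
#     >>> _safe_isinstance({}, 'collections', 'OrderedDict')
#     False
#     >>> import collections
#     >>> _safe_isinstance(collections.OrderedDict(), 'collections', 'OrderedDict')
#     True
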
922 def back_unicode_name_matches(text:str) -> Tuple[str, Sequence[str]]:
925 def back_unicode_name_matches(text:str) -> Tuple[str, Sequence[str]]:
923 """Match Unicode characters back to Unicode name
926 """Match Unicode characters back to Unicode name
924
927
925 This does ``β˜ƒ`` -> ``\\snowman``
928 This does ``β˜ƒ`` -> ``\\snowman``
926
929
927 Note that snowman is not a valid python3 combining character but will be expanded.
930 Note that snowman is not a valid python3 combining character but will be expanded.
928 It will not, however, be recombined back into the snowman character by the completion machinery.
931 It will not, however, be recombined back into the snowman character by the completion machinery.
929
932
930 Nor will this back-complete standard sequences like \\n, \\b ...
933 Nor will this back-complete standard sequences like \\n, \\b ...
931
934
932 Returns
935 Returns
933 -------
936 -------
934
937
935 Return a tuple with two elements:
938 Return a tuple with two elements:
936
939
937 - The Unicode character that was matched (preceded with a backslash), or
940 - The Unicode character that was matched (preceded with a backslash), or
938 empty string,
941 empty string,
939 - a sequence of one element: the name of the matched Unicode character,
942 - a sequence of one element: the name of the matched Unicode character,
940 preceded by backslash, or empty if no match.
943 preceded by backslash, or empty if no match.
941
944
942 """
945 """
943 if len(text)<2:
946 if len(text)<2:
944 return '', ()
947 return '', ()
945 maybe_slash = text[-2]
948 maybe_slash = text[-2]
946 if maybe_slash != '\\':
949 if maybe_slash != '\\':
947 return '', ()
950 return '', ()
948
951
949 char = text[-1]
952 char = text[-1]
950 # no expand on quote for completion in strings.
953 # no expand on quote for completion in strings.
951 # nor backcomplete standard ascii keys
954 # nor backcomplete standard ascii keys
952 if char in string.ascii_letters or char in ('"',"'"):
955 if char in string.ascii_letters or char in ('"',"'"):
953 return '', ()
956 return '', ()
954 try :
957 try :
955 unic = unicodedata.name(char)
958 unic = unicodedata.name(char)
956 return '\\'+char,('\\'+unic,)
959 return '\\'+char,('\\'+unic,)
957 except KeyError:
960 except KeyError:
958 pass
961 pass
959 return '', ()
962 return '', ()
960
963
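# Illustrative example: the trailing character is looked up in the Unicode
# database and offered back as a \NAME completion.
#
#     >>> back_unicode_name_matches('\\β˜ƒ')
#     ('\\β˜ƒ', ('\\SNOWMAN',))
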
961 def back_latex_name_matches(text:str) -> Tuple[str, Sequence[str]] :
964 def back_latex_name_matches(text:str) -> Tuple[str, Sequence[str]] :
962 """Match latex characters back to unicode name
965 """Match latex characters back to unicode name
963
966
964 This does ``\\β„΅`` -> ``\\aleph``
967 This does ``\\β„΅`` -> ``\\aleph``
965
968
966 """
969 """
967 if len(text)<2:
970 if len(text)<2:
968 return '', ()
971 return '', ()
969 maybe_slash = text[-2]
972 maybe_slash = text[-2]
970 if maybe_slash != '\\':
973 if maybe_slash != '\\':
971 return '', ()
974 return '', ()
972
975
973
976
974 char = text[-1]
977 char = text[-1]
975 # no expand on quote for completion in strings.
978 # no expand on quote for completion in strings.
976 # nor backcomplete standard ascii keys
979 # nor backcomplete standard ascii keys
977 if char in string.ascii_letters or char in ('"',"'"):
980 if char in string.ascii_letters or char in ('"',"'"):
978 return '', ()
981 return '', ()
979 try :
982 try :
980 latex = reverse_latex_symbol[char]
983 latex = reverse_latex_symbol[char]
981 # include the leading '\\' so that the backslash gets replaced as well
984 # include the leading '\\' so that the backslash gets replaced as well
982 return '\\'+char,[latex]
985 return '\\'+char,[latex]
983 except KeyError:
986 except KeyError:
984 pass
987 pass
985 return '', ()
988 return '', ()
986
989
987
990
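# Illustrative example (assumes reverse_latex_symbol maps Ξ± to \alpha, the
# inverse of the forward latex_symbols table):
#
#     >>> back_latex_name_matches('x = \\Ξ±')
#     ('\\Ξ±', ['\\alpha'])
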
988 def _formatparamchildren(parameter) -> str:
991 def _formatparamchildren(parameter) -> str:
989 """
992 """
990 Get parameter name and value from Jedi Private API
993 Get parameter name and value from Jedi Private API
991
994
992 Jedi does not expose a simple way to get `param=value` from its API.
995 Jedi does not expose a simple way to get `param=value` from its API.
993
996
994 Parameters
997 Parameters
995 ----------
998 ----------
996 parameter
999 parameter
997 Jedi's function `Param`
1000 Jedi's function `Param`
998
1001
999 Returns
1002 Returns
1000 -------
1003 -------
1001 A string like 'a', 'b=1', '*args', '**kwargs'
1004 A string like 'a', 'b=1', '*args', '**kwargs'
1002
1005
1003 """
1006 """
1004 description = parameter.description
1007 description = parameter.description
1005 if not description.startswith('param '):
1008 if not description.startswith('param '):
1006 raise ValueError('Jedi function parameter description has changed format. '
1009 raise ValueError('Jedi function parameter description has changed format. '
1007 'Expected "param ...", found %r.' % description)
1010 'Expected "param ...", found %r.' % description)
1008 return description[6:]
1011 return description[6:]
1009
1012
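# Illustrative sketch with a hypothetical stand-in for a jedi Param object,
# whose description reads like 'param b=1':
#
#     >>> class _FakeParam:
#     ...     description = 'param b=1'
#     >>> _formatparamchildren(_FakeParam())
#     'b=1'
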
1010 def _make_signature(completion)-> str:
1013 def _make_signature(completion)-> str:
1011 """
1014 """
1012 Make the signature from a jedi completion
1015 Make the signature from a jedi completion
1013
1016
1014 Parameters
1017 Parameters
1015 ----------
1018 ----------
1016 completion : jedi.Completion
1019 completion : jedi.Completion
1017 the jedi completion object; it may or may not complete to a function type
1020 the jedi completion object; it may or may not complete to a function type
1018
1021
1019 Returns
1022 Returns
1020 -------
1023 -------
1024 a string consisting of the function signature, with the parentheses but
1027 a string consisting of the function signature, with the parentheses but
1022 without the function name. example:
1025 without the function name. example:
1023 `(a, *args, b=1, **kwargs)`
1026 `(a, *args, b=1, **kwargs)`
1024
1027
1025 """
1028 """
1026
1029
1027 # it looks like this might work on jedi 0.17
1030 # it looks like this might work on jedi 0.17
1028 if hasattr(completion, 'get_signatures'):
1031 if hasattr(completion, 'get_signatures'):
1029 signatures = completion.get_signatures()
1032 signatures = completion.get_signatures()
1030 if not signatures:
1033 if not signatures:
1031 return '(?)'
1034 return '(?)'
1032
1035
1033 c0 = completion.get_signatures()[0]
1036 c0 = completion.get_signatures()[0]
1034 return '('+c0.to_string().split('(', maxsplit=1)[1]
1037 return '('+c0.to_string().split('(', maxsplit=1)[1]
1035
1038
1036 return '(%s)'% ', '.join([f for f in (_formatparamchildren(p) for signature in completion.get_signatures()
1039 return '(%s)'% ', '.join([f for f in (_formatparamchildren(p) for signature in completion.get_signatures()
1037 for p in signature.defined_names()) if f])
1040 for p in signature.defined_names()) if f])
1038
1041
1039
1042
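# Illustrative sketch, assuming a recent jedi (0.16+); the exact text depends
# on the jedi version, and '(?)' is returned when jedi reports no signature.
#
#     import jedi
#     completion = jedi.Interpreter('sorted', [{}]).complete()[0]
#     _make_signature(completion)
#     # e.g. '(iterable, /, *, key=None, reverse=False)'
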
1040 class _CompleteResult(NamedTuple):
1043 class _CompleteResult(NamedTuple):
1041 matched_text : str
1044 matched_text : str
1042 matches: Sequence[str]
1045 matches: Sequence[str]
1043 matches_origin: Sequence[str]
1046 matches_origin: Sequence[str]
1044 jedi_matches: Any
1047 jedi_matches: Any
1045
1048
1046
1049
1047 class IPCompleter(Completer):
1050 class IPCompleter(Completer):
1048 """Extension of the completer class with IPython-specific features"""
1051 """Extension of the completer class with IPython-specific features"""
1049
1052
1050 __dict_key_regexps: Optional[Dict[bool,Pattern]] = None
1053 __dict_key_regexps: Optional[Dict[bool,Pattern]] = None
1051
1054
1052 @observe('greedy')
1055 @observe('greedy')
1053 def _greedy_changed(self, change):
1056 def _greedy_changed(self, change):
1054 """update the splitter and readline delims when greedy is changed"""
1057 """update the splitter and readline delims when greedy is changed"""
1055 if change['new']:
1058 if change['new']:
1056 self.splitter.delims = GREEDY_DELIMS
1059 self.splitter.delims = GREEDY_DELIMS
1057 else:
1060 else:
1058 self.splitter.delims = DELIMS
1061 self.splitter.delims = DELIMS
1059
1062
1060 dict_keys_only = Bool(False,
1063 dict_keys_only = Bool(False,
1061 help="""Whether to show dict key matches only""")
1064 help="""Whether to show dict key matches only""")
1062
1065
1063 merge_completions = Bool(True,
1066 merge_completions = Bool(True,
1064 help="""Whether to merge completion results into a single list
1067 help="""Whether to merge completion results into a single list
1065
1068
1066 If False, only the completion results from the first non-empty
1069 If False, only the completion results from the first non-empty
1067 completer will be returned.
1070 completer will be returned.
1068 """
1071 """
1069 ).tag(config=True)
1072 ).tag(config=True)
1070 omit__names = Enum((0,1,2), default_value=2,
1073 omit__names = Enum((0,1,2), default_value=2,
1071 help="""Instruct the completer to omit private method names
1074 help="""Instruct the completer to omit private method names
1072
1075
1073 Specifically, when completing on ``object.<tab>``.
1076 Specifically, when completing on ``object.<tab>``.
1074
1077
1075 When 2 [default]: all names that start with '_' will be excluded.
1078 When 2 [default]: all names that start with '_' will be excluded.
1076
1079
1077 When 1: all 'magic' names (``__foo__``) will be excluded.
1080 When 1: all 'magic' names (``__foo__``) will be excluded.
1078
1081
1079 When 0: nothing will be excluded.
1082 When 0: nothing will be excluded.
1080 """
1083 """
1081 ).tag(config=True)
1084 ).tag(config=True)
1082 limit_to__all__ = Bool(False,
1085 limit_to__all__ = Bool(False,
1083 help="""
1086 help="""
1084 DEPRECATED as of version 5.0.
1087 DEPRECATED as of version 5.0.
1085
1088
1086 Instruct the completer to use __all__ for the completion
1089 Instruct the completer to use __all__ for the completion
1087
1090
1088 Specifically, when completing on ``object.<tab>``.
1091 Specifically, when completing on ``object.<tab>``.
1089
1092
1090 When True: only those names in obj.__all__ will be included.
1093 When True: only those names in obj.__all__ will be included.
1091
1094
1092 When False [default]: the __all__ attribute is ignored
1095 When False [default]: the __all__ attribute is ignored
1093 """,
1096 """,
1094 ).tag(config=True)
1097 ).tag(config=True)
1095
1098
1096 profile_completions = Bool(
1099 profile_completions = Bool(
1097 default_value=False,
1100 default_value=False,
1098 help="If True, emit profiling data for completion subsystem using cProfile."
1101 help="If True, emit profiling data for completion subsystem using cProfile."
1099 ).tag(config=True)
1102 ).tag(config=True)
1100
1103
1101 profiler_output_dir = Unicode(
1104 profiler_output_dir = Unicode(
1102 default_value=".completion_profiles",
1105 default_value=".completion_profiles",
1103 help="Template for path at which to output profile data for completions."
1106 help="Template for path at which to output profile data for completions."
1104 ).tag(config=True)
1107 ).tag(config=True)
1105
1108
1106 @observe('limit_to__all__')
1109 @observe('limit_to__all__')
1107 def _limit_to_all_changed(self, change):
1110 def _limit_to_all_changed(self, change):
1108 warnings.warn('`IPython.core.IPCompleter.limit_to__all__` configuration '
1111 warnings.warn('`IPython.core.IPCompleter.limit_to__all__` configuration '
1109 'value has been deprecated since IPython 5.0, will be made to have '
1112 'value has been deprecated since IPython 5.0, will be made to have '
1110 'no effect and then removed in a future version of IPython.',
1113 'no effect and then removed in a future version of IPython.',
1111 UserWarning)
1114 UserWarning)
1112
1115
1113 def __init__(self, shell=None, namespace=None, global_namespace=None,
1116 def __init__(self, shell=None, namespace=None, global_namespace=None,
1114 use_readline=_deprecation_readline_sentinel, config=None, **kwargs):
1117 use_readline=_deprecation_readline_sentinel, config=None, **kwargs):
1115 """IPCompleter() -> completer
1118 """IPCompleter() -> completer
1116
1119
1117 Return a completer object.
1120 Return a completer object.
1118
1121
1119 Parameters
1122 Parameters
1120 ----------
1123 ----------
1121 shell
1124 shell
1122 a pointer to the ipython shell itself. This is needed
1125 a pointer to the ipython shell itself. This is needed
1123 because this completer knows about magic functions, and those can
1126 because this completer knows about magic functions, and those can
1124 only be accessed via the ipython instance.
1127 only be accessed via the ipython instance.
1125 namespace : dict, optional
1128 namespace : dict, optional
1126 an optional dict where completions are performed.
1129 an optional dict where completions are performed.
1127 global_namespace : dict, optional
1130 global_namespace : dict, optional
1128 secondary optional dict for completions, to
1131 secondary optional dict for completions, to
1129 handle cases (such as IPython embedded inside functions) where
1132 handle cases (such as IPython embedded inside functions) where
1130 both Python scopes are visible.
1133 both Python scopes are visible.
1131 use_readline : bool, optional
1134 use_readline : bool, optional
1132 DEPRECATED, ignored since IPython 6.0, will have no effects
1135 DEPRECATED, ignored since IPython 6.0, will have no effects
1133 """
1136 """
1134
1137
1135 self.magic_escape = ESC_MAGIC
1138 self.magic_escape = ESC_MAGIC
1136 self.splitter = CompletionSplitter()
1139 self.splitter = CompletionSplitter()
1137
1140
1138 if use_readline is not _deprecation_readline_sentinel:
1141 if use_readline is not _deprecation_readline_sentinel:
1139 warnings.warn('The `use_readline` parameter is deprecated and ignored since IPython 6.0.',
1142 warnings.warn('The `use_readline` parameter is deprecated and ignored since IPython 6.0.',
1140 DeprecationWarning, stacklevel=2)
1143 DeprecationWarning, stacklevel=2)
1141
1144
1142 # _greedy_changed() depends on splitter and readline being defined:
1145 # _greedy_changed() depends on splitter and readline being defined:
1143 Completer.__init__(self, namespace=namespace, global_namespace=global_namespace,
1146 Completer.__init__(self, namespace=namespace, global_namespace=global_namespace,
1144 config=config, **kwargs)
1147 config=config, **kwargs)
1145
1148
1146 # List where completion matches will be stored
1149 # List where completion matches will be stored
1147 self.matches = []
1150 self.matches = []
1148 self.shell = shell
1151 self.shell = shell
1149 # Regexp to split filenames with spaces in them
1152 # Regexp to split filenames with spaces in them
1150 self.space_name_re = re.compile(r'([^\\] )')
1153 self.space_name_re = re.compile(r'([^\\] )')
1151 # Hold a local ref. to glob.glob for speed
1154 # Hold a local ref. to glob.glob for speed
1152 self.glob = glob.glob
1155 self.glob = glob.glob
1153
1156
1154 # Determine if we are running on 'dumb' terminals, like (X)Emacs
1157 # Determine if we are running on 'dumb' terminals, like (X)Emacs
1155 # buffers, to avoid completion problems.
1158 # buffers, to avoid completion problems.
1156 term = os.environ.get('TERM','xterm')
1159 term = os.environ.get('TERM','xterm')
1157 self.dumb_terminal = term in ['dumb','emacs']
1160 self.dumb_terminal = term in ['dumb','emacs']
1158
1161
1159 # Special handling of backslashes needed in win32 platforms
1162 # Special handling of backslashes needed in win32 platforms
1160 if sys.platform == "win32":
1163 if sys.platform == "win32":
1161 self.clean_glob = self._clean_glob_win32
1164 self.clean_glob = self._clean_glob_win32
1162 else:
1165 else:
1163 self.clean_glob = self._clean_glob
1166 self.clean_glob = self._clean_glob
1164
1167
1165 #regexp to parse docstring for function signature
1168 #regexp to parse docstring for function signature
1166 self.docstring_sig_re = re.compile(r'^[\w|\s.]+\(([^)]*)\).*')
1169 self.docstring_sig_re = re.compile(r'^[\w|\s.]+\(([^)]*)\).*')
1167 self.docstring_kwd_re = re.compile(r'[\s|\[]*(\w+)(?:\s*=\s*.*)')
1170 self.docstring_kwd_re = re.compile(r'[\s|\[]*(\w+)(?:\s*=\s*.*)')
1168 #use this if positional argument name is also needed
1171 #use this if positional argument name is also needed
1169 #= re.compile(r'[\s|\[]*(\w+)(?:\s*=?\s*.*)')
1172 #= re.compile(r'[\s|\[]*(\w+)(?:\s*=?\s*.*)')
1170
1173
1171 self.magic_arg_matchers = [
1174 self.magic_arg_matchers = [
1172 self.magic_config_matches,
1175 self.magic_config_matches,
1173 self.magic_color_matches,
1176 self.magic_color_matches,
1174 ]
1177 ]
1175
1178
1176 # This is set externally by InteractiveShell
1179 # This is set externally by InteractiveShell
1177 self.custom_completers = None
1180 self.custom_completers = None
1178
1181
1179 # This is a list of names of unicode characters that can be completed
1182 # This is a list of names of unicode characters that can be completed
1180 # into their corresponding unicode value. The list is large, so we
1183 # into their corresponding unicode value. The list is large, so we
1181 # lazily initialize it on first use. Consuming code should access this
1184 # lazily initialize it on first use. Consuming code should access this
1182 # attribute through the `@unicode_names` property.
1185 # attribute through the `@unicode_names` property.
1183 self._unicode_names = None
1186 self._unicode_names = None
1184
1187
1185 @property
1188 @property
1186 def matchers(self) -> List[Any]:
1189 def matchers(self) -> List[Any]:
1187 """All active matcher routines for completion"""
1190 """All active matcher routines for completion"""
1188 if self.dict_keys_only:
1191 if self.dict_keys_only:
1189 return [self.dict_key_matches]
1192 return [self.dict_key_matches]
1190
1193
1191 if self.use_jedi:
1194 if self.use_jedi:
1192 return [
1195 return [
1193 *self.custom_matchers,
1196 *self.custom_matchers,
1194 self.dict_key_matches,
1197 self.dict_key_matches,
1195 self.file_matches,
1198 self.file_matches,
1196 self.magic_matches,
1199 self.magic_matches,
1197 ]
1200 ]
1198 else:
1201 else:
1199 return [
1202 return [
1200 *self.custom_matchers,
1203 *self.custom_matchers,
1201 self.dict_key_matches,
1204 self.dict_key_matches,
1202 self.python_matches,
1205 self.python_matches,
1203 self.file_matches,
1206 self.file_matches,
1204 self.magic_matches,
1207 self.magic_matches,
1205 self.python_func_kw_matches,
1208 self.python_func_kw_matches,
1206 ]
1209 ]
1207
1210
1208 def all_completions(self, text:str) -> List[str]:
1211 def all_completions(self, text:str) -> List[str]:
1209 """
1212 """
1210 Wrapper around the completion methods for the benefit of emacs.
1213 Wrapper around the completion methods for the benefit of emacs.
1211 """
1214 """
1212 prefix = text.rpartition('.')[0]
1215 prefix = text.rpartition('.')[0]
1213 with provisionalcompleter():
1216 with provisionalcompleter():
1214 return ['.'.join([prefix, c.text]) if prefix and self.use_jedi else c.text
1217 return ['.'.join([prefix, c.text]) if prefix and self.use_jedi else c.text
1215 for c in self.completions(text, len(text))]
1218 for c in self.completions(text, len(text))]
1216
1219
1217 return self.complete(text)[1]
1220 return self.complete(text)[1]
1218
1221
1219 def _clean_glob(self, text:str):
1222 def _clean_glob(self, text:str):
1220 return self.glob("%s*" % text)
1223 return self.glob("%s*" % text)
1221
1224
1222 def _clean_glob_win32(self, text:str):
1225 def _clean_glob_win32(self, text:str):
1223 return [f.replace("\\","/")
1226 return [f.replace("\\","/")
1224 for f in self.glob("%s*" % text)]
1227 for f in self.glob("%s*" % text)]
1225
1228
1226 def file_matches(self, text:str)->List[str]:
1229 def file_matches(self, text:str)->List[str]:
1227 """Match filenames, expanding ~USER type strings.
1230 """Match filenames, expanding ~USER type strings.
1228
1231
1229 Most of the seemingly convoluted logic in this completer is an
1232 Most of the seemingly convoluted logic in this completer is an
1230 attempt to handle filenames with spaces in them. And yet it's not
1233 attempt to handle filenames with spaces in them. And yet it's not
1231 quite perfect, because Python's readline doesn't expose all of the
1234 quite perfect, because Python's readline doesn't expose all of the
1232 GNU readline details needed for this to be done correctly.
1235 GNU readline details needed for this to be done correctly.
1233
1236
1234 For a filename with a space in it, the printed completions will be
1237 For a filename with a space in it, the printed completions will be
1235 only the parts after what's already been typed (instead of the
1238 only the parts after what's already been typed (instead of the
1236 full completions, as is normally done). I don't think with the
1239 full completions, as is normally done). I don't think with the
1237 current (as of Python 2.3) Python readline it's possible to do
1240 current (as of Python 2.3) Python readline it's possible to do
1238 better."""
1241 better."""
1239
1242
1240 # chars that require escaping with backslash - i.e. chars
1243 # chars that require escaping with backslash - i.e. chars
1241 # that readline treats incorrectly as delimiters, but we
1244 # that readline treats incorrectly as delimiters, but we
1242 # don't want to treat as delimiters in filename matching
1245 # don't want to treat as delimiters in filename matching
1243 # when escaped with backslash
1246 # when escaped with backslash
1244 if text.startswith('!'):
1247 if text.startswith('!'):
1245 text = text[1:]
1248 text = text[1:]
1246 text_prefix = u'!'
1249 text_prefix = u'!'
1247 else:
1250 else:
1248 text_prefix = u''
1251 text_prefix = u''
1249
1252
1250 text_until_cursor = self.text_until_cursor
1253 text_until_cursor = self.text_until_cursor
1251 # track strings with open quotes
1254 # track strings with open quotes
1252 open_quotes = has_open_quotes(text_until_cursor)
1255 open_quotes = has_open_quotes(text_until_cursor)
1253
1256
1254 if '(' in text_until_cursor or '[' in text_until_cursor:
1257 if '(' in text_until_cursor or '[' in text_until_cursor:
1255 lsplit = text
1258 lsplit = text
1256 else:
1259 else:
1257 try:
1260 try:
1258 # arg_split ~ shlex.split, but with unicode bugs fixed by us
1261 # arg_split ~ shlex.split, but with unicode bugs fixed by us
1259 lsplit = arg_split(text_until_cursor)[-1]
1262 lsplit = arg_split(text_until_cursor)[-1]
1260 except ValueError:
1263 except ValueError:
1261 # typically an unmatched ", or backslash without escaped char.
1264 # typically an unmatched ", or backslash without escaped char.
1262 if open_quotes:
1265 if open_quotes:
1263 lsplit = text_until_cursor.split(open_quotes)[-1]
1266 lsplit = text_until_cursor.split(open_quotes)[-1]
1264 else:
1267 else:
1265 return []
1268 return []
1266 except IndexError:
1269 except IndexError:
1267 # tab pressed on empty line
1270 # tab pressed on empty line
1268 lsplit = ""
1271 lsplit = ""
1269
1272
1270 if not open_quotes and lsplit != protect_filename(lsplit):
1273 if not open_quotes and lsplit != protect_filename(lsplit):
1271 # if protectables are found, do matching on the whole escaped name
1274 # if protectables are found, do matching on the whole escaped name
1272 has_protectables = True
1275 has_protectables = True
1273 text0,text = text,lsplit
1276 text0,text = text,lsplit
1274 else:
1277 else:
1275 has_protectables = False
1278 has_protectables = False
1276 text = os.path.expanduser(text)
1279 text = os.path.expanduser(text)
1277
1280
1278 if text == "":
1281 if text == "":
1279 return [text_prefix + protect_filename(f) for f in self.glob("*")]
1282 return [text_prefix + protect_filename(f) for f in self.glob("*")]
1280
1283
1281 # Compute the matches from the filesystem
1284 # Compute the matches from the filesystem
1282 if sys.platform == 'win32':
1285 if sys.platform == 'win32':
1283 m0 = self.clean_glob(text)
1286 m0 = self.clean_glob(text)
1284 else:
1287 else:
1285 m0 = self.clean_glob(text.replace('\\', ''))
1288 m0 = self.clean_glob(text.replace('\\', ''))
1286
1289
1287 if has_protectables:
1290 if has_protectables:
1288 # If we had protectables, we need to revert our changes to the
1291 # If we had protectables, we need to revert our changes to the
1289 # beginning of filename so that we don't double-write the part
1292 # beginning of filename so that we don't double-write the part
1290 # of the filename we have so far
1293 # of the filename we have so far
1291 len_lsplit = len(lsplit)
1294 len_lsplit = len(lsplit)
1292 matches = [text_prefix + text0 +
1295 matches = [text_prefix + text0 +
1293 protect_filename(f[len_lsplit:]) for f in m0]
1296 protect_filename(f[len_lsplit:]) for f in m0]
1294 else:
1297 else:
1295 if open_quotes:
1298 if open_quotes:
1296 # if we have a string with an open quote, we don't need to
1299 # if we have a string with an open quote, we don't need to
1297 # protect the names beyond the quote (and we _shouldn't_, as
1300 # protect the names beyond the quote (and we _shouldn't_, as
1298 # it would cause bugs when the filesystem call is made).
1301 # it would cause bugs when the filesystem call is made).
1299 matches = m0 if sys.platform == "win32" else\
1302 matches = m0 if sys.platform == "win32" else\
1300 [protect_filename(f, open_quotes) for f in m0]
1303 [protect_filename(f, open_quotes) for f in m0]
1301 else:
1304 else:
1302 matches = [text_prefix +
1305 matches = [text_prefix +
1303 protect_filename(f) for f in m0]
1306 protect_filename(f) for f in m0]
1304
1307
1305 # Mark directories in input list by appending '/' to their names.
1308 # Mark directories in input list by appending '/' to their names.
1306 return [x+'/' if os.path.isdir(x) else x for x in matches]
1309 return [x+'/' if os.path.isdir(x) else x for x in matches]
1307
1310
1308 def magic_matches(self, text:str):
1311 def magic_matches(self, text:str):
1309 """Match magics"""
1312 """Match magics"""
1310 # Get all shell magics now rather than statically, so magics loaded at
1313 # Get all shell magics now rather than statically, so magics loaded at
1311 # runtime show up too.
1314 # runtime show up too.
1312 lsm = self.shell.magics_manager.lsmagic()
1315 lsm = self.shell.magics_manager.lsmagic()
1313 line_magics = lsm['line']
1316 line_magics = lsm['line']
1314 cell_magics = lsm['cell']
1317 cell_magics = lsm['cell']
1315 pre = self.magic_escape
1318 pre = self.magic_escape
1316 pre2 = pre+pre
1319 pre2 = pre+pre
1317
1320
1318 explicit_magic = text.startswith(pre)
1321 explicit_magic = text.startswith(pre)
1319
1322
1320 # Completion logic:
1323 # Completion logic:
1321 # - user gives %%: only do cell magics
1324 # - user gives %%: only do cell magics
1322 # - user gives %: do both line and cell magics
1325 # - user gives %: do both line and cell magics
1323 # - no prefix: do both
1326 # - no prefix: do both
1324 # In other words, line magics are skipped if the user gives %% explicitly
1327 # In other words, line magics are skipped if the user gives %% explicitly
1325 #
1328 #
1326 # We also exclude magics that match any currently visible names:
1329 # We also exclude magics that match any currently visible names:
1327 # https://github.com/ipython/ipython/issues/4877, unless the user has
1330 # https://github.com/ipython/ipython/issues/4877, unless the user has
1328 # typed a %:
1331 # typed a %:
1329 # https://github.com/ipython/ipython/issues/10754
1332 # https://github.com/ipython/ipython/issues/10754
1330 bare_text = text.lstrip(pre)
1333 bare_text = text.lstrip(pre)
1331 global_matches = self.global_matches(bare_text)
1334 global_matches = self.global_matches(bare_text)
1332 if not explicit_magic:
1335 if not explicit_magic:
1333 def matches(magic):
1336 def matches(magic):
1334 """
1337 """
1335 Filter magics, in particular remove magics that match
1338 Filter magics, in particular remove magics that match
1336 a name present in global namespace.
1339 a name present in global namespace.
1337 """
1340 """
1338 return ( magic.startswith(bare_text) and
1341 return ( magic.startswith(bare_text) and
1339 magic not in global_matches )
1342 magic not in global_matches )
1340 else:
1343 else:
1341 def matches(magic):
1344 def matches(magic):
1342 return magic.startswith(bare_text)
1345 return magic.startswith(bare_text)
1343
1346
1344 comp = [ pre2+m for m in cell_magics if matches(m)]
1347 comp = [ pre2+m for m in cell_magics if matches(m)]
1345 if not text.startswith(pre2):
1348 if not text.startswith(pre2):
1346 comp += [ pre+m for m in line_magics if matches(m)]
1349 comp += [ pre+m for m in line_magics if matches(m)]
1347
1350
1348 return comp
1351 return comp
1349
1352
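# Illustrative sketch (inside an IPython session; actual results depend on the
# magics currently loaded): '%' offers both line and cell magics, '%%' only
# cell magics, and names shadowed by user variables are dropped unless an
# explicit '%' was typed.
#
#     ip = get_ipython()
#     ip.Completer.magic_matches('%ti')
#     # e.g. ['%%time', '%%timeit', '%time', '%timeit']
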
1350 def magic_config_matches(self, text:str) -> List[str]:
1353 def magic_config_matches(self, text:str) -> List[str]:
1351 """ Match class names and attributes for %config magic """
1354 """ Match class names and attributes for %config magic """
1352 texts = text.strip().split()
1355 texts = text.strip().split()
1353
1356
1354 if len(texts) > 0 and (texts[0] == 'config' or texts[0] == '%config'):
1357 if len(texts) > 0 and (texts[0] == 'config' or texts[0] == '%config'):
1355 # get all configuration classes
1358 # get all configuration classes
1356 classes = sorted(set([ c for c in self.shell.configurables
1359 classes = sorted(set([ c for c in self.shell.configurables
1357 if c.__class__.class_traits(config=True)
1360 if c.__class__.class_traits(config=True)
1358 ]), key=lambda x: x.__class__.__name__)
1361 ]), key=lambda x: x.__class__.__name__)
1359 classnames = [ c.__class__.__name__ for c in classes ]
1362 classnames = [ c.__class__.__name__ for c in classes ]
1360
1363
1361 # return all classnames if config or %config is given
1364 # return all classnames if config or %config is given
1362 if len(texts) == 1:
1365 if len(texts) == 1:
1363 return classnames
1366 return classnames
1364
1367
1365 # match classname
1368 # match classname
1366 classname_texts = texts[1].split('.')
1369 classname_texts = texts[1].split('.')
1367 classname = classname_texts[0]
1370 classname = classname_texts[0]
1368 classname_matches = [ c for c in classnames
1371 classname_matches = [ c for c in classnames
1369 if c.startswith(classname) ]
1372 if c.startswith(classname) ]
1370
1373
1371 # return matched classes or the matched class with attributes
1374 # return matched classes or the matched class with attributes
1372 if texts[1].find('.') < 0:
1375 if texts[1].find('.') < 0:
1373 return classname_matches
1376 return classname_matches
1374 elif len(classname_matches) == 1 and \
1377 elif len(classname_matches) == 1 and \
1375 classname_matches[0] == classname:
1378 classname_matches[0] == classname:
1376 cls = classes[classnames.index(classname)].__class__
1379 cls = classes[classnames.index(classname)].__class__
1377 help = cls.class_get_help()
1380 help = cls.class_get_help()
1378 # strip leading '--' from cl-args:
1381 # strip leading '--' from cl-args:
1379 help = re.sub(re.compile(r'^--', re.MULTILINE), '', help)
1382 help = re.sub(re.compile(r'^--', re.MULTILINE), '', help)
1380 return [ attr.split('=')[0]
1383 return [ attr.split('=')[0]
1381 for attr in help.strip().splitlines()
1384 for attr in help.strip().splitlines()
1382 if attr.startswith(texts[1]) ]
1385 if attr.startswith(texts[1]) ]
1383 return []
1386 return []
1384
1387
1385 def magic_color_matches(self, text:str) -> List[str] :
1388 def magic_color_matches(self, text:str) -> List[str] :
1386 """ Match color schemes for %colors magic"""
1389 """ Match color schemes for %colors magic"""
1387 texts = text.split()
1390 texts = text.split()
1388 if text.endswith(' '):
1391 if text.endswith(' '):
1389 # .split() strips off the trailing whitespace. Add '' back
1392 # .split() strips off the trailing whitespace. Add '' back
1390 # so that: '%colors ' -> ['%colors', '']
1393 # so that: '%colors ' -> ['%colors', '']
1391 texts.append('')
1394 texts.append('')
1392
1395
1393 if len(texts) == 2 and (texts[0] == 'colors' or texts[0] == '%colors'):
1396 if len(texts) == 2 and (texts[0] == 'colors' or texts[0] == '%colors'):
1394 prefix = texts[1]
1397 prefix = texts[1]
1395 return [ color for color in InspectColors.keys()
1398 return [ color for color in InspectColors.keys()
1396 if color.startswith(prefix) ]
1399 if color.startswith(prefix) ]
1397 return []
1400 return []
1398
1401
1399 def _jedi_matches(self, cursor_column:int, cursor_line:int, text:str) -> Iterable[Any]:
1402 def _jedi_matches(self, cursor_column:int, cursor_line:int, text:str) -> Iterable[Any]:
1400 """
1403 """
1401 Return a list of :any:`jedi.api.Completions` object from a ``text`` and
1404 Return a list of :any:`jedi.api.Completions` object from a ``text`` and
1402 cursor position.
1405 cursor position.
1403
1406
1404 Parameters
1407 Parameters
1405 ----------
1408 ----------
1406 cursor_column : int
1409 cursor_column : int
1407 column position of the cursor in ``text``, 0-indexed.
1410 column position of the cursor in ``text``, 0-indexed.
1408 cursor_line : int
1411 cursor_line : int
1409 line position of the cursor in ``text``, 0-indexed
1412 line position of the cursor in ``text``, 0-indexed
1410 text : str
1413 text : str
1411 text to complete
1414 text to complete
1412
1415
1413 Notes
1416 Notes
1414 -----
1417 -----
1415 If ``IPCompleter.debug`` is ``True``, this may return a :any:`_FakeJediCompletion`
1418 If ``IPCompleter.debug`` is ``True``, this may return a :any:`_FakeJediCompletion`
1416 object containing a string with the Jedi debug information attached.
1419 object containing a string with the Jedi debug information attached.
1417 """
1420 """
1418 namespaces = [self.namespace]
1421 namespaces = [self.namespace]
1419 if self.global_namespace is not None:
1422 if self.global_namespace is not None:
1420 namespaces.append(self.global_namespace)
1423 namespaces.append(self.global_namespace)
1421
1424
1422 completion_filter = lambda x:x
1425 completion_filter = lambda x:x
1423 offset = cursor_to_position(text, cursor_line, cursor_column)
1426 offset = cursor_to_position(text, cursor_line, cursor_column)
1424 # filter output if we are completing for object members
1427 # filter output if we are completing for object members
1425 if offset:
1428 if offset:
1426 pre = text[offset-1]
1429 pre = text[offset-1]
1427 if pre == '.':
1430 if pre == '.':
1428 if self.omit__names == 2:
1431 if self.omit__names == 2:
1429 completion_filter = lambda c:not c.name.startswith('_')
1432 completion_filter = lambda c:not c.name.startswith('_')
1430 elif self.omit__names == 1:
1433 elif self.omit__names == 1:
1431 completion_filter = lambda c:not (c.name.startswith('__') and c.name.endswith('__'))
1434 completion_filter = lambda c:not (c.name.startswith('__') and c.name.endswith('__'))
1432 elif self.omit__names == 0:
1435 elif self.omit__names == 0:
1433 completion_filter = lambda x:x
1436 completion_filter = lambda x:x
1434 else:
1437 else:
1435 raise ValueError("Don't understand self.omit__names == {}".format(self.omit__names))
1438 raise ValueError("Don't understand self.omit__names == {}".format(self.omit__names))
1436
1439
1437 interpreter = jedi.Interpreter(text[:offset], namespaces)
1440 interpreter = jedi.Interpreter(text[:offset], namespaces)
1438 try_jedi = True
1441 try_jedi = True
1439
1442
1440 try:
1443 try:
1441 # find the first token in the current tree -- if it is a ' or " then we are in a string
1444 # find the first token in the current tree -- if it is a ' or " then we are in a string
1442 completing_string = False
1445 completing_string = False
1443 try:
1446 try:
1444 first_child = next(c for c in interpreter._get_module().tree_node.children if hasattr(c, 'value'))
1447 first_child = next(c for c in interpreter._get_module().tree_node.children if hasattr(c, 'value'))
1445 except StopIteration:
1448 except StopIteration:
1446 pass
1449 pass
1447 else:
1450 else:
1448 # note the value may be ', ", or it may also be ''' or """, or
1451 # note the value may be ', ", or it may also be ''' or """, or
1449 # in some cases, """what/you/typed..., but all of these are
1452 # in some cases, """what/you/typed..., but all of these are
1450 # strings.
1453 # strings.
1451 completing_string = len(first_child.value) > 0 and first_child.value[0] in {"'", '"'}
1454 completing_string = len(first_child.value) > 0 and first_child.value[0] in {"'", '"'}
1452
1455
1453 # if we are in a string jedi is likely not the right candidate for
1456 # if we are in a string jedi is likely not the right candidate for
1454 # now. Skip it.
1457 # now. Skip it.
1455 try_jedi = not completing_string
1458 try_jedi = not completing_string
1456 except Exception as e:
1459 except Exception as e:
1457 # many things can go wrong; we are using a private API, so just don't crash.
1460 # many things can go wrong; we are using a private API, so just don't crash.
1458 if self.debug:
1461 if self.debug:
1459 print("Error detecting if completing a non-finished string :", e, '|')
1462 print("Error detecting if completing a non-finished string :", e, '|')
1460
1463
1461 if not try_jedi:
1464 if not try_jedi:
1462 return []
1465 return []
1463 try:
1466 try:
1464 return filter(completion_filter, interpreter.complete(column=cursor_column, line=cursor_line + 1))
1467 return filter(completion_filter, interpreter.complete(column=cursor_column, line=cursor_line + 1))
1465 except Exception as e:
1468 except Exception as e:
1466 if self.debug:
1469 if self.debug:
1467 return [_FakeJediCompletion('Oops Jedi has crashed, please report a bug with the following:\n"""\n%s\n"""' % (e))]
1470 return [_FakeJediCompletion('Oops Jedi has crashed, please report a bug with the following:\n"""\n%s\n"""' % (e))]
1468 else:
1471 else:
1469 return []
1472 return []
1470
1473
1471 def python_matches(self, text:str)->List[str]:
1474 def python_matches(self, text:str)->List[str]:
1472 """Match attributes or global python names"""
1475 """Match attributes or global python names"""
1473 if "." in text:
1476 if "." in text:
1474 try:
1477 try:
1475 matches = self.attr_matches(text)
1478 matches = self.attr_matches(text)
1476 if text.endswith('.') and self.omit__names:
1479 if text.endswith('.') and self.omit__names:
1477 if self.omit__names == 1:
1480 if self.omit__names == 1:
1478 # true if txt is _not_ a __ name, false otherwise:
1481 # true if txt is _not_ a __ name, false otherwise:
1479 no__name = (lambda txt:
1482 no__name = (lambda txt:
1480 re.match(r'.*\.__.*?__',txt) is None)
1483 re.match(r'.*\.__.*?__',txt) is None)
1481 else:
1484 else:
1482 # true if txt is _not_ a _ name, false otherwise:
1485 # true if txt is _not_ a _ name, false otherwise:
1483 no__name = (lambda txt:
1486 no__name = (lambda txt:
1484 re.match(r'\._.*?',txt[txt.rindex('.'):]) is None)
1487 re.match(r'\._.*?',txt[txt.rindex('.'):]) is None)
1485 matches = filter(no__name, matches)
1488 matches = filter(no__name, matches)
1486 except NameError:
1489 except NameError:
1487 # catches <undefined attributes>.<tab>
1490 # catches <undefined attributes>.<tab>
1488 matches = []
1491 matches = []
1489 else:
1492 else:
1490 matches = self.global_matches(text)
1493 matches = self.global_matches(text)
1491 return matches
1494 return matches
1492
1495
1493 def _default_arguments_from_docstring(self, doc):
1496 def _default_arguments_from_docstring(self, doc):
1494 """Parse the first line of docstring for call signature.
1497 """Parse the first line of docstring for call signature.
1495
1498
1496 Docstring should be of the form 'min(iterable[, key=func])\n'.
1499 Docstring should be of the form 'min(iterable[, key=func])\n'.
1497 It can also parse cython docstring of the form
1500 It can also parse cython docstring of the form
1498 'Minuit.migrad(self, int ncall=10000, resume=True, int nsplit=1)'.
1501 'Minuit.migrad(self, int ncall=10000, resume=True, int nsplit=1)'.
1499 """
1502 """
1500 if doc is None:
1503 if doc is None:
1501 return []
1504 return []
1502
1505
1503 # care only about the first line
1506 # care only about the first line
1504 line = doc.lstrip().splitlines()[0]
1507 line = doc.lstrip().splitlines()[0]
1505
1508
1506 #p = re.compile(r'^[\w|\s.]+\(([^)]*)\).*')
1509 #p = re.compile(r'^[\w|\s.]+\(([^)]*)\).*')
1507 #'min(iterable[, key=func])\n' -> 'iterable[, key=func]'
1510 #'min(iterable[, key=func])\n' -> 'iterable[, key=func]'
1508 sig = self.docstring_sig_re.search(line)
1511 sig = self.docstring_sig_re.search(line)
1509 if sig is None:
1512 if sig is None:
1510 return []
1513 return []
1511 # iterable[, key=func]' -> ['iterable[' ,' key=func]']
1514 # iterable[, key=func]' -> ['iterable[' ,' key=func]']
1512 sig = sig.groups()[0].split(',')
1515 sig = sig.groups()[0].split(',')
1513 ret = []
1516 ret = []
1514 for s in sig:
1517 for s in sig:
1515 #re.compile(r'[\s|\[]*(\w+)(?:\s*=\s*.*)')
1518 #re.compile(r'[\s|\[]*(\w+)(?:\s*=\s*.*)')
1516 ret += self.docstring_kwd_re.findall(s)
1519 ret += self.docstring_kwd_re.findall(s)
1517 return ret
1520 return ret
1518
1521
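# Illustrative example, reusing the sample signature from the docstring above
# (inside an IPython session):
#
#     >>> get_ipython().Completer._default_arguments_from_docstring(
#     ...     'min(iterable[, key=func])\n')
#     ['key']
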
1519 def _default_arguments(self, obj):
1522 def _default_arguments(self, obj):
1520 """Return the list of default arguments of obj if it is callable,
1523 """Return the list of default arguments of obj if it is callable,
1521 or empty list otherwise."""
1524 or empty list otherwise."""
1522 call_obj = obj
1525 call_obj = obj
1523 ret = []
1526 ret = []
1524 if inspect.isbuiltin(obj):
1527 if inspect.isbuiltin(obj):
1525 pass
1528 pass
1526 elif not (inspect.isfunction(obj) or inspect.ismethod(obj)):
1529 elif not (inspect.isfunction(obj) or inspect.ismethod(obj)):
1527 if inspect.isclass(obj):
1530 if inspect.isclass(obj):
1528 #for cython embedsignature=True the constructor docstring
1531 #for cython embedsignature=True the constructor docstring
1529 #belongs to the object itself not __init__
1532 #belongs to the object itself not __init__
1530 ret += self._default_arguments_from_docstring(
1533 ret += self._default_arguments_from_docstring(
1531 getattr(obj, '__doc__', ''))
1534 getattr(obj, '__doc__', ''))
1532 # for classes, check for __init__,__new__
1535 # for classes, check for __init__,__new__
1533 call_obj = (getattr(obj, '__init__', None) or
1536 call_obj = (getattr(obj, '__init__', None) or
1534 getattr(obj, '__new__', None))
1537 getattr(obj, '__new__', None))
1535 # for all others, check if they are __call__able
1538 # for all others, check if they are __call__able
1536 elif hasattr(obj, '__call__'):
1539 elif hasattr(obj, '__call__'):
1537 call_obj = obj.__call__
1540 call_obj = obj.__call__
1538 ret += self._default_arguments_from_docstring(
1541 ret += self._default_arguments_from_docstring(
1539 getattr(call_obj, '__doc__', ''))
1542 getattr(call_obj, '__doc__', ''))
1540
1543
1541 _keeps = (inspect.Parameter.KEYWORD_ONLY,
1544 _keeps = (inspect.Parameter.KEYWORD_ONLY,
1542 inspect.Parameter.POSITIONAL_OR_KEYWORD)
1545 inspect.Parameter.POSITIONAL_OR_KEYWORD)
1543
1546
1544 try:
1547 try:
1545 sig = inspect.signature(obj)
1548 sig = inspect.signature(obj)
1546 ret.extend(k for k, v in sig.parameters.items() if
1549 ret.extend(k for k, v in sig.parameters.items() if
1547 v.kind in _keeps)
1550 v.kind in _keeps)
1548 except ValueError:
1551 except ValueError:
1549 pass
1552 pass
1550
1553
1551 return list(set(ret))
1554 return list(set(ret))
1552
1555
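# Illustrative sketch: positional-or-keyword and keyword-only parameters are
# offered, while *args/**kwargs are not; the result is de-duplicated through a
# set, so its order is not guaranteed.
#
#     def f(a, b=1, *args, c=2, **kwargs):
#         pass
#     sorted(get_ipython().Completer._default_arguments(f))   # ['a', 'b', 'c']
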
1553 def python_func_kw_matches(self, text):
1556 def python_func_kw_matches(self, text):
1554 """Match named parameters (kwargs) of the last open function"""
1557 """Match named parameters (kwargs) of the last open function"""
1555
1558
1556 if "." in text: # a parameter cannot be dotted
1559 if "." in text: # a parameter cannot be dotted
1557 return []
1560 return []
1558 try: regexp = self.__funcParamsRegex
1561 try: regexp = self.__funcParamsRegex
1559 except AttributeError:
1562 except AttributeError:
1560 regexp = self.__funcParamsRegex = re.compile(r'''
1563 regexp = self.__funcParamsRegex = re.compile(r'''
1561 '.*?(?<!\\)' | # single quoted strings or
1564 '.*?(?<!\\)' | # single quoted strings or
1562 ".*?(?<!\\)" | # double quoted strings or
1565 ".*?(?<!\\)" | # double quoted strings or
1563 \w+ | # identifier
1566 \w+ | # identifier
1564 \S # other characters
1567 \S # other characters
1565 ''', re.VERBOSE | re.DOTALL)
1568 ''', re.VERBOSE | re.DOTALL)
1566 # 1. find the nearest identifier that comes before an unclosed
1569 # 1. find the nearest identifier that comes before an unclosed
1567 # parenthesis before the cursor
1570 # parenthesis before the cursor
1568 # e.g. for "foo (1+bar(x), pa<cursor>,a=1)", the candidate is "foo"
1571 # e.g. for "foo (1+bar(x), pa<cursor>,a=1)", the candidate is "foo"
1569 tokens = regexp.findall(self.text_until_cursor)
1572 tokens = regexp.findall(self.text_until_cursor)
1570 iterTokens = reversed(tokens); openPar = 0
1573 iterTokens = reversed(tokens); openPar = 0
1571
1574
1572 for token in iterTokens:
1575 for token in iterTokens:
1573 if token == ')':
1576 if token == ')':
1574 openPar -= 1
1577 openPar -= 1
1575 elif token == '(':
1578 elif token == '(':
1576 openPar += 1
1579 openPar += 1
1577 if openPar > 0:
1580 if openPar > 0:
1578 # found the last unclosed parenthesis
1581 # found the last unclosed parenthesis
1579 break
1582 break
1580 else:
1583 else:
1581 return []
1584 return []
1582 # 2. Concatenate dotted names ("foo.bar" for "foo.bar(x, pa" )
1585 # 2. Concatenate dotted names ("foo.bar" for "foo.bar(x, pa" )
1583 ids = []
1586 ids = []
1584 isId = re.compile(r'\w+$').match
1587 isId = re.compile(r'\w+$').match
1585
1588
1586 while True:
1589 while True:
1587 try:
1590 try:
1588 ids.append(next(iterTokens))
1591 ids.append(next(iterTokens))
1589 if not isId(ids[-1]):
1592 if not isId(ids[-1]):
1590 ids.pop(); break
1593 ids.pop(); break
1591 if not next(iterTokens) == '.':
1594 if not next(iterTokens) == '.':
1592 break
1595 break
1593 except StopIteration:
1596 except StopIteration:
1594 break
1597 break
1595
1598
1596 # Find all named arguments already assigned to, so as to avoid suggesting
1599 # Find all named arguments already assigned to, so as to avoid suggesting
1597 # them again
1600 # them again
1598 usedNamedArgs = set()
1601 usedNamedArgs = set()
1599 par_level = -1
1602 par_level = -1
1600 for token, next_token in zip(tokens, tokens[1:]):
1603 for token, next_token in zip(tokens, tokens[1:]):
1601 if token == '(':
1604 if token == '(':
1602 par_level += 1
1605 par_level += 1
1603 elif token == ')':
1606 elif token == ')':
1604 par_level -= 1
1607 par_level -= 1
1605
1608
1606 if par_level != 0:
1609 if par_level != 0:
1607 continue
1610 continue
1608
1611
1609 if next_token != '=':
1612 if next_token != '=':
1610 continue
1613 continue
1611
1614
1612 usedNamedArgs.add(token)
1615 usedNamedArgs.add(token)
1613
1616
1614 argMatches = []
1617 argMatches = []
1615 try:
1618 try:
1616 callableObj = '.'.join(ids[::-1])
1619 callableObj = '.'.join(ids[::-1])
1617 namedArgs = self._default_arguments(eval(callableObj,
1620 namedArgs = self._default_arguments(eval(callableObj,
1618 self.namespace))
1621 self.namespace))
1619
1622
1620 # Remove used named arguments from the list, no need to show twice
1623 # Remove used named arguments from the list, no need to show twice
1621 for namedArg in set(namedArgs) - usedNamedArgs:
1624 for namedArg in set(namedArgs) - usedNamedArgs:
1622 if namedArg.startswith(text):
1625 if namedArg.startswith(text):
1623 argMatches.append("%s=" %namedArg)
1626 argMatches.append("%s=" %namedArg)
1624 except:
1627 except:
1625 pass
1628 pass
1626
1629
1627 return argMatches
1630 return argMatches
1628
1631
1629 @staticmethod
1632 @staticmethod
1630 def _get_keys(obj: Any) -> List[Any]:
1633 def _get_keys(obj: Any) -> List[Any]:
1631 # Objects can define their own completions by defining an
1634 # Objects can define their own completions by defining an
1632 # _ipython_key_completions_() method.
1635 # _ipython_key_completions_() method.
1633 method = get_real_method(obj, '_ipython_key_completions_')
1636 method = get_real_method(obj, '_ipython_key_completions_')
1634 if method is not None:
1637 if method is not None:
1635 return method()
1638 return method()
1636
1639
1637 # Special case some common in-memory dict-like types
1640 # Special case some common in-memory dict-like types
1638 if isinstance(obj, dict) or\
1641 if isinstance(obj, dict) or\
1639 _safe_isinstance(obj, 'pandas', 'DataFrame'):
1642 _safe_isinstance(obj, 'pandas', 'DataFrame'):
1640 try:
1643 try:
1641 return list(obj.keys())
1644 return list(obj.keys())
1642 except Exception:
1645 except Exception:
1643 return []
1646 return []
1644 elif _safe_isinstance(obj, 'numpy', 'ndarray') or\
1647 elif _safe_isinstance(obj, 'numpy', 'ndarray') or\
1645 _safe_isinstance(obj, 'numpy', 'void'):
1648 _safe_isinstance(obj, 'numpy', 'void'):
1646 return obj.dtype.names or []
1649 return obj.dtype.names or []
1647 return []
1650 return []
1648
1651
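# Illustrative sketch with a hypothetical class (not part of IPython): any
# object can opt in to key completion by defining _ipython_key_completions_.
#
#     class Catalog:
#         def __init__(self):
#             self._items = {'alpha': 1, 'beta': 2}
#         def __getitem__(self, key):
#             return self._items[key]
#         def _ipython_key_completions_(self):
#             return list(self._items)
#
# IPCompleter._get_keys(Catalog()) then returns ['alpha', 'beta'], which feeds
# dict_key_matches below.
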
1649 def dict_key_matches(self, text:str) -> List[str]:
1652 def dict_key_matches(self, text:str) -> List[str]:
1650 "Match string keys in a dictionary, after e.g. 'foo[' "
1653 "Match string keys in a dictionary, after e.g. 'foo[' "
1651
1654
1652
1655
1653 if self.__dict_key_regexps is not None:
1656 if self.__dict_key_regexps is not None:
1654 regexps = self.__dict_key_regexps
1657 regexps = self.__dict_key_regexps
1655 else:
1658 else:
1656 dict_key_re_fmt = r'''(?x)
1659 dict_key_re_fmt = r'''(?x)
1657 ( # match dict-referring expression wrt greedy setting
1660 ( # match dict-referring expression wrt greedy setting
1658 %s
1661 %s
1659 )
1662 )
1660 \[ # open bracket
1663 \[ # open bracket
1661 \s* # and optional whitespace
1664 \s* # and optional whitespace
1662 # Capture any number of str-like objects (e.g. "a", "b", 'c')
1665 # Capture any number of str-like objects (e.g. "a", "b", 'c')
1663 ((?:[uUbB]? # string prefix (r not handled)
1666 ((?:[uUbB]? # string prefix (r not handled)
1664 (?:
1667 (?:
1665 '(?:[^']|(?<!\\)\\')*'
1668 '(?:[^']|(?<!\\)\\')*'
1666 |
1669 |
1667 "(?:[^"]|(?<!\\)\\")*"
1670 "(?:[^"]|(?<!\\)\\")*"
1668 )
1671 )
1669 \s*,\s*
1672 \s*,\s*
1670 )*)
1673 )*)
1671 ([uUbB]? # string prefix (r not handled)
1674 ([uUbB]? # string prefix (r not handled)
1672 (?: # unclosed string
1675 (?: # unclosed string
1673 '(?:[^']|(?<!\\)\\')*
1676 '(?:[^']|(?<!\\)\\')*
1674 |
1677 |
1675 "(?:[^"]|(?<!\\)\\")*
1678 "(?:[^"]|(?<!\\)\\")*
1676 )
1679 )
1677 )?
1680 )?
1678 $
1681 $
1679 '''
1682 '''
1680 regexps = self.__dict_key_regexps = {
1683 regexps = self.__dict_key_regexps = {
1681 False: re.compile(dict_key_re_fmt % r'''
1684 False: re.compile(dict_key_re_fmt % r'''
1682 # identifiers separated by .
1685 # identifiers separated by .
1683 (?!\d)\w+
1686 (?!\d)\w+
1684 (?:\.(?!\d)\w+)*
1687 (?:\.(?!\d)\w+)*
1685 '''),
1688 '''),
1686 True: re.compile(dict_key_re_fmt % '''
1689 True: re.compile(dict_key_re_fmt % '''
1687 .+
1690 .+
1688 ''')
1691 ''')
1689 }
1692 }
1690
1693
1691 match = regexps[self.greedy].search(self.text_until_cursor)
1694 match = regexps[self.greedy].search(self.text_until_cursor)
1692
1695
1693 if match is None:
1696 if match is None:
1694 return []
1697 return []
1695
1698
1696 expr, prefix0, prefix = match.groups()
1699 expr, prefix0, prefix = match.groups()
1697 try:
1700 try:
1698 obj = eval(expr, self.namespace)
1701 obj = eval(expr, self.namespace)
1699 except Exception:
1702 except Exception:
1700 try:
1703 try:
1701 obj = eval(expr, self.global_namespace)
1704 obj = eval(expr, self.global_namespace)
1702 except Exception:
1705 except Exception:
1703 return []
1706 return []
1704
1707
1705 keys = self._get_keys(obj)
1708 keys = self._get_keys(obj)
1706 if not keys:
1709 if not keys:
1707 return keys
1710 return keys
1708
1711
1709 extra_prefix = eval(prefix0) if prefix0 != '' else None
1712 extra_prefix = eval(prefix0) if prefix0 != '' else None
1710
1713
1711 closing_quote, token_offset, matches = match_dict_keys(keys, prefix, self.splitter.delims, extra_prefix=extra_prefix)
1714 closing_quote, token_offset, matches = match_dict_keys(keys, prefix, self.splitter.delims, extra_prefix=extra_prefix)
1712 if not matches:
1715 if not matches:
1713 return matches
1716 return matches
1714
1717
1715 # get the cursor position of
1718 # get the cursor position of
1716 # - the text being completed
1719 # - the text being completed
1717 # - the start of the key text
1720 # - the start of the key text
1718 # - the start of the completion
1721 # - the start of the completion
1719 text_start = len(self.text_until_cursor) - len(text)
1722 text_start = len(self.text_until_cursor) - len(text)
1720 if prefix:
1723 if prefix:
1721 key_start = match.start(3)
1724 key_start = match.start(3)
1722 completion_start = key_start + token_offset
1725 completion_start = key_start + token_offset
1723 else:
1726 else:
1724 key_start = completion_start = match.end()
1727 key_start = completion_start = match.end()
1725
1728
1726 # grab the leading prefix, to make sure all completions start with `text`
1729 # grab the leading prefix, to make sure all completions start with `text`
1727 if text_start > key_start:
1730 if text_start > key_start:
1728 leading = ''
1731 leading = ''
1729 else:
1732 else:
1730 leading = text[text_start:completion_start]
1733 leading = text[text_start:completion_start]
1731
1734
1732 # the index of the `[` character
1735 # the index of the `[` character
1733 bracket_idx = match.end(1)
1736 bracket_idx = match.end(1)
1734
1737
1735 # append closing quote and bracket as appropriate
1738 # append closing quote and bracket as appropriate
1736 # this is *not* appropriate if the opening quote or bracket is outside
1739 # this is *not* appropriate if the opening quote or bracket is outside
1737 # the text given to this method
1740 # the text given to this method
1738 suf = ''
1741 suf = ''
1739 continuation = self.line_buffer[len(self.text_until_cursor):]
1742 continuation = self.line_buffer[len(self.text_until_cursor):]
1740 if key_start > text_start and closing_quote:
1743 if key_start > text_start and closing_quote:
1741 # quotes were opened inside text, maybe close them
1744 # quotes were opened inside text, maybe close them
1742 if continuation.startswith(closing_quote):
1745 if continuation.startswith(closing_quote):
1743 continuation = continuation[len(closing_quote):]
1746 continuation = continuation[len(closing_quote):]
1744 else:
1747 else:
1745 suf += closing_quote
1748 suf += closing_quote
1746 if bracket_idx > text_start:
1749 if bracket_idx > text_start:
1747 # brackets were opened inside text, maybe close them
1750 # brackets were opened inside text, maybe close them
1748 if not continuation.startswith(']'):
1751 if not continuation.startswith(']'):
1749 suf += ']'
1752 suf += ']'
1750
1753
1751 return [leading + k + suf for k in matches]
1754 return [leading + k + suf for k in matches]
1752
1755
1753 @staticmethod
1756 @staticmethod
1754 def unicode_name_matches(text:str) -> Tuple[str, List[str]] :
1757 def unicode_name_matches(text:str) -> Tuple[str, List[str]] :
1755 """Match Latex-like syntax for unicode characters base
1758 """Match Latex-like syntax for unicode characters base
1756 on the name of the character.
1759 on the name of the character.
1757
1760
1758 This does ``\\GREEK SMALL LETTER ETA`` -> ``Ξ·``
1761 This does ``\\GREEK SMALL LETTER ETA`` -> ``Ξ·``
1759
1762
1760 Works only on valid python 3 identifiers, or on combining characters that
1763 Works only on valid python 3 identifiers, or on combining characters that
1761 will combine to form a valid identifier.
1764 will combine to form a valid identifier.
1762 """
1765 """
1763 slashpos = text.rfind('\\')
1766 slashpos = text.rfind('\\')
1764 if slashpos > -1:
1767 if slashpos > -1:
1765 s = text[slashpos+1:]
1768 s = text[slashpos+1:]
1766 try :
1769 try :
1767 unic = unicodedata.lookup(s)
1770 unic = unicodedata.lookup(s)
1768 # allow combining chars
1771 # allow combining chars
1769 if ('a'+unic).isidentifier():
1772 if ('a'+unic).isidentifier():
1770 return '\\'+s,[unic]
1773 return '\\'+s,[unic]
1771 except KeyError:
1774 except KeyError:
1772 pass
1775 pass
1773 return '', []
1776 return '', []
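# Illustrative sketch (not part of the original source): as a staticmethod this
# can be called directly; per the behaviour described above it should give
# roughly:
#
#   IPCompleter.unicode_name_matches('\\GREEK SMALL LETTER ALPHA')
#   # -> ('\\GREEK SMALL LETTER ALPHA', ['Ξ±'])
#   IPCompleter.unicode_name_matches('no backslash here')
#   # -> ('', [])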
1774
1777
1775
1778
1776 def latex_matches(self, text:str) -> Tuple[str, Sequence[str]]:
1779 def latex_matches(self, text:str) -> Tuple[str, Sequence[str]]:
1777 """Match Latex syntax for unicode characters.
1780 """Match Latex syntax for unicode characters.
1778
1781
1779 This does both ``\\alp`` -> ``\\alpha`` and ``\\alpha`` -> ``Ξ±``
1782 This does both ``\\alp`` -> ``\\alpha`` and ``\\alpha`` -> ``Ξ±``
1780 """
1783 """
1781 slashpos = text.rfind('\\')
1784 slashpos = text.rfind('\\')
1782 if slashpos > -1:
1785 if slashpos > -1:
1783 s = text[slashpos:]
1786 s = text[slashpos:]
1784 if s in latex_symbols:
1787 if s in latex_symbols:
1785 # Try to complete a full latex symbol to unicode
1788 # Try to complete a full latex symbol to unicode
1786 # \\alpha -> Ξ±
1789 # \\alpha -> Ξ±
1787 return s, [latex_symbols[s]]
1790 return s, [latex_symbols[s]]
1788 else:
1791 else:
1789 # If a user has partially typed a latex symbol, give them
1792 # If a user has partially typed a latex symbol, give them
1790 # a full list of options \al -> [\aleph, \alpha]
1793 # a full list of options \al -> [\aleph, \alpha]
1791 matches = [k for k in latex_symbols if k.startswith(s)]
1794 matches = [k for k in latex_symbols if k.startswith(s)]
1792 if matches:
1795 if matches:
1793 return s, matches
1796 return s, matches
1794 return '', ()
1797 return '', ()
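# Illustrative sketch (not part of the original source): assuming ``c`` is an
# IPCompleter instance using the bundled latex_symbols table, this should
# behave roughly like:
#
#   c.latex_matches('\\alpha')   # -> ('\\alpha', ['Ξ±'])
#   c.latex_matches('\\alp')     # -> ('\\alp', ['\\alpha', ...])
#   c.latex_matches('alpha')     # -> ('', ())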
1795
1798
1796 def dispatch_custom_completer(self, text):
1799 def dispatch_custom_completer(self, text):
1797 if not self.custom_completers:
1800 if not self.custom_completers:
1798 return
1801 return
1799
1802
1800 line = self.line_buffer
1803 line = self.line_buffer
1801 if not line.strip():
1804 if not line.strip():
1802 return None
1805 return None
1803
1806
1804 # Create a little structure to pass all the relevant information about
1807 # Create a little structure to pass all the relevant information about
1805 # the current completion to any custom completer.
1808 # the current completion to any custom completer.
1806 event = SimpleNamespace()
1809 event = SimpleNamespace()
1807 event.line = line
1810 event.line = line
1808 event.symbol = text
1811 event.symbol = text
1809 cmd = line.split(None,1)[0]
1812 cmd = line.split(None,1)[0]
1810 event.command = cmd
1813 event.command = cmd
1811 event.text_until_cursor = self.text_until_cursor
1814 event.text_until_cursor = self.text_until_cursor
1812
1815
1813 # for foo etc, try also to find completer for %foo
1816 # for foo etc, try also to find completer for %foo
1814 if not cmd.startswith(self.magic_escape):
1817 if not cmd.startswith(self.magic_escape):
1815 try_magic = self.custom_completers.s_matches(
1818 try_magic = self.custom_completers.s_matches(
1816 self.magic_escape + cmd)
1819 self.magic_escape + cmd)
1817 else:
1820 else:
1818 try_magic = []
1821 try_magic = []
1819
1822
1820 for c in itertools.chain(self.custom_completers.s_matches(cmd),
1823 for c in itertools.chain(self.custom_completers.s_matches(cmd),
1821 try_magic,
1824 try_magic,
1822 self.custom_completers.flat_matches(self.text_until_cursor)):
1825 self.custom_completers.flat_matches(self.text_until_cursor)):
1823 try:
1826 try:
1824 res = c(event)
1827 res = c(event)
1825 if res:
1828 if res:
1826 # first, try case sensitive match
1829 # first, try case sensitive match
1827 withcase = [r for r in res if r.startswith(text)]
1830 withcase = [r for r in res if r.startswith(text)]
1828 if withcase:
1831 if withcase:
1829 return withcase
1832 return withcase
1830 # if none, then case insensitive ones are ok too
1833 # if none, then case insensitive ones are ok too
1831 text_low = text.lower()
1834 text_low = text.lower()
1832 return [r for r in res if r.lower().startswith(text_low)]
1835 return [r for r in res if r.lower().startswith(text_low)]
1833 except TryNext:
1836 except TryNext:
1834 pass
1837 pass
1835 except KeyboardInterrupt:
1838 except KeyboardInterrupt:
1836 """
1839 """
1837 If a custom completer takes too long,
1840 If a custom completer takes too long,
1838 let keyboard interrupt abort and return nothing.
1841 let keyboard interrupt abort and return nothing.
1839 """
1842 """
1840 break
1843 break
1841
1844
1842 return None
1845 return None
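# Illustrative sketch (not part of the original source): a custom completer
# receives the ``event`` namespace built above and returns candidate strings.
# A hypothetical handler could be registered along these lines:
#
#   def apt_completer(self, event):
#       # ``event.line``, ``event.symbol`` and ``event.command`` are available
#       return ['install', 'remove', 'update']
#
#   get_ipython().set_hook('complete_command', apt_completer, str_key='%apt')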
1843
1846
1844 def completions(self, text: str, offset: int)->Iterator[Completion]:
1847 def completions(self, text: str, offset: int)->Iterator[Completion]:
1845 """
1848 """
1846 Returns an iterator over the possible completions
1849 Returns an iterator over the possible completions
1847
1850
1848 .. warning::
1851 .. warning::
1849
1852
1850 Unstable
1853 Unstable
1851
1854
1852 This function is unstable, API may change without warning.
1855 This function is unstable, API may change without warning.
1853 It will also raise unless used in the proper context manager.
1856 It will also raise unless used in the proper context manager.
1854
1857
1855 Parameters
1858 Parameters
1856 ----------
1859 ----------
1857 text : str
1860 text : str
1858 Full text of the current input, as a multi-line string.
1861 Full text of the current input, as a multi-line string.
1859 offset : int
1862 offset : int
1860 Integer representing the position of the cursor in ``text``. Offset
1863 Integer representing the position of the cursor in ``text``. Offset
1861 is 0-based.
1864 is 0-based.
1862
1865
1863 Yields
1866 Yields
1864 ------
1867 ------
1865 Completion
1868 Completion
1866
1869
1867 Notes
1870 Notes
1868 -----
1871 -----
1869 The cursor in a text can be seen either as being "in between"
1872 The cursor in a text can be seen either as being "in between"
1870 characters or "on" a character, depending on the interface visible to
1873 characters or "on" a character, depending on the interface visible to
1871 the user. For consistency, the cursor being "in between" characters X
1874 the user. For consistency, the cursor being "in between" characters X
1872 and Y is equivalent to the cursor being "on" character Y, that is to say
1875 and Y is equivalent to the cursor being "on" character Y, that is to say
1873 the character the cursor is on is considered as being after the cursor.
1876 the character the cursor is on is considered as being after the cursor.
1874
1877
1875 Combining characters may span more than one position in the
1878 Combining characters may span more than one position in the
1876 text.
1879 text.
1877
1880
1878 .. note::
1881 .. note::
1879
1882
1880 If ``IPCompleter.debug`` is :any:`True`, this will yield a ``--jedi/ipython--``
1883 If ``IPCompleter.debug`` is :any:`True`, this will yield a ``--jedi/ipython--``
1881 fake Completion token to distinguish completion returned by Jedi
1884 fake Completion token to distinguish completion returned by Jedi
1882 and usual IPython completion.
1885 and usual IPython completion.
1883
1886
1884 .. note::
1887 .. note::
1885
1888
1886 Completions are not completely deduplicated yet. If identical
1889 Completions are not completely deduplicated yet. If identical
1887 completions are coming from different sources this function does not
1890 completions are coming from different sources this function does not
1888 ensure that each completion object will only be present once.
1891 ensure that each completion object will only be present once.
1889 """
1892 """
1890 warnings.warn("_complete is a provisional API (as of IPython 6.0). "
1893 warnings.warn("_complete is a provisional API (as of IPython 6.0). "
1891 "It may change without warnings. "
1894 "It may change without warnings. "
1892 "Use in corresponding context manager.",
1895 "Use in corresponding context manager.",
1893 category=ProvisionalCompleterWarning, stacklevel=2)
1896 category=ProvisionalCompleterWarning, stacklevel=2)
1894
1897
1895 seen = set()
1898 seen = set()
1896 profiler:Optional[cProfile.Profile]
1899 profiler:Optional[cProfile.Profile]
1897 try:
1900 try:
1898 if self.profile_completions:
1901 if self.profile_completions:
1899 import cProfile
1902 import cProfile
1900 profiler = cProfile.Profile()
1903 profiler = cProfile.Profile()
1901 profiler.enable()
1904 profiler.enable()
1902 else:
1905 else:
1903 profiler = None
1906 profiler = None
1904
1907
1905 for c in self._completions(text, offset, _timeout=self.jedi_compute_type_timeout/1000):
1908 for c in self._completions(text, offset, _timeout=self.jedi_compute_type_timeout/1000):
1906 if c and (c in seen):
1909 if c and (c in seen):
1907 continue
1910 continue
1908 yield c
1911 yield c
1909 seen.add(c)
1912 seen.add(c)
1910 except KeyboardInterrupt:
1913 except KeyboardInterrupt:
1911 """if completions take too long and users send keyboard interrupt,
1914 """if completions take too long and users send keyboard interrupt,
1912 do not crash and return ASAP. """
1915 do not crash and return ASAP. """
1913 pass
1916 pass
1914 finally:
1917 finally:
1915 if profiler is not None:
1918 if profiler is not None:
1916 profiler.disable()
1919 profiler.disable()
1917 ensure_dir_exists(self.profiler_output_dir)
1920 ensure_dir_exists(self.profiler_output_dir)
1918 output_path = os.path.join(self.profiler_output_dir, str(uuid.uuid4()))
1921 output_path = os.path.join(self.profiler_output_dir, str(uuid.uuid4()))
1919 print("Writing profiler output to", output_path)
1922 print("Writing profiler output to", output_path)
1920 profiler.dump_stats(output_path)
1923 profiler.dump_stats(output_path)
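# Illustrative sketch (not part of the original source): since this API is
# provisional, callers are expected to wrap it in the provisionalcompleter()
# context manager, e.g.:
#
#   from IPython.core.completer import provisionalcompleter
#   ip = get_ipython()
#   with provisionalcompleter():
#       results = list(ip.Completer.completions('list.ap', 7))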
1921
1924
1922 def _completions(self, full_text: str, offset: int, *, _timeout) -> Iterator[Completion]:
1925 def _completions(self, full_text: str, offset: int, *, _timeout) -> Iterator[Completion]:
1923 """
1926 """
1924 Core completion method. Same signature as :any:`completions`, with the
1927 Core completion method. Same signature as :any:`completions`, with the
1925 extra `timeout` parameter (in seconds).
1928 extra `timeout` parameter (in seconds).
1926
1929
1927 Computing jedi's completion ``.type`` can be quite expensive (it is a
1930 Computing jedi's completion ``.type`` can be quite expensive (it is a
1928 lazy property) and can require some warm-up, more warm up than just
1931 lazy property) and can require some warm-up, more warm up than just
1929 computing the ``name`` of a completion. The warm-up can be:
1932 computing the ``name`` of a completion. The warm-up can be:
1930
1933
1931 - Long warm-up the first time a module is encountered after
1934 - Long warm-up the first time a module is encountered after
1932 install/update: actually build parse/inference tree.
1935 install/update: actually build parse/inference tree.
1933
1936
1934 - first time the module is encountered in a session: load tree from
1937 - first time the module is encountered in a session: load tree from
1935 disk.
1938 disk.
1936
1939
1937 We don't want to block completions for tens of seconds so we give the
1940 We don't want to block completions for tens of seconds so we give the
1938 completer a "budget" of ``_timeout`` seconds per invocation to compute
1941 completer a "budget" of ``_timeout`` seconds per invocation to compute
1939 completion types; the completions whose types have not yet been computed will
1942 completion types; the completions whose types have not yet been computed will
1940 be marked as "unknown" and will have a chance to be computed in a later round
1943 be marked as "unknown" and will have a chance to be computed in a later round
1941 as things get cached.
1944 as things get cached.
1942
1945
1943 Keep in mind that Jedi is not the only thing handling the completion, so
1946 Keep in mind that Jedi is not the only thing handling the completion, so
1944 keep the timeout short-ish: if we take more than 0.3 seconds we still
1947 keep the timeout short-ish: if we take more than 0.3 seconds we still
1945 have lots of processing to do.
1948 have lots of processing to do.
1946
1949
1947 """
1950 """
1948 deadline = time.monotonic() + _timeout
1951 deadline = time.monotonic() + _timeout
1949
1952
1950
1953
1951 before = full_text[:offset]
1954 before = full_text[:offset]
1952 cursor_line, cursor_column = position_to_cursor(full_text, offset)
1955 cursor_line, cursor_column = position_to_cursor(full_text, offset)
1953
1956
1954 matched_text, matches, matches_origin, jedi_matches = self._complete(
1957 matched_text, matches, matches_origin, jedi_matches = self._complete(
1955 full_text=full_text, cursor_line=cursor_line, cursor_pos=cursor_column)
1958 full_text=full_text, cursor_line=cursor_line, cursor_pos=cursor_column)
1956
1959
1957 iter_jm = iter(jedi_matches)
1960 iter_jm = iter(jedi_matches)
1958 if _timeout:
1961 if _timeout:
1959 for jm in iter_jm:
1962 for jm in iter_jm:
1960 try:
1963 try:
1961 type_ = jm.type
1964 type_ = jm.type
1962 except Exception:
1965 except Exception:
1963 if self.debug:
1966 if self.debug:
1964 print("Error in Jedi getting type of ", jm)
1967 print("Error in Jedi getting type of ", jm)
1965 type_ = None
1968 type_ = None
1966 delta = len(jm.name_with_symbols) - len(jm.complete)
1969 delta = len(jm.name_with_symbols) - len(jm.complete)
1967 if type_ == 'function':
1970 if type_ == 'function':
1968 signature = _make_signature(jm)
1971 signature = _make_signature(jm)
1969 else:
1972 else:
1970 signature = ''
1973 signature = ''
1971 yield Completion(start=offset - delta,
1974 yield Completion(start=offset - delta,
1972 end=offset,
1975 end=offset,
1973 text=jm.name_with_symbols,
1976 text=jm.name_with_symbols,
1974 type=type_,
1977 type=type_,
1975 signature=signature,
1978 signature=signature,
1976 _origin='jedi')
1979 _origin='jedi')
1977
1980
1978 if time.monotonic() > deadline:
1981 if time.monotonic() > deadline:
1979 break
1982 break
1980
1983
1981 for jm in iter_jm:
1984 for jm in iter_jm:
1982 delta = len(jm.name_with_symbols) - len(jm.complete)
1985 delta = len(jm.name_with_symbols) - len(jm.complete)
1983 yield Completion(start=offset - delta,
1986 yield Completion(start=offset - delta,
1984 end=offset,
1987 end=offset,
1985 text=jm.name_with_symbols,
1988 text=jm.name_with_symbols,
1986 type='<unknown>', # don't compute type for speed
1989 type='<unknown>', # don't compute type for speed
1987 _origin='jedi',
1990 _origin='jedi',
1988 signature='')
1991 signature='')
1989
1992
1990
1993
1991 start_offset = before.rfind(matched_text)
1994 start_offset = before.rfind(matched_text)
1992
1995
1993 # TODO:
1996 # TODO:
1994 # Suppress this, right now just for debug.
1997 # Suppress this, right now just for debug.
1995 if jedi_matches and matches and self.debug:
1998 if jedi_matches and matches and self.debug:
1996 yield Completion(start=start_offset, end=offset, text='--jedi/ipython--',
1999 yield Completion(start=start_offset, end=offset, text='--jedi/ipython--',
1997 _origin='debug', type='none', signature='')
2000 _origin='debug', type='none', signature='')
1998
2001
1999 # I'm unsure if this is always true, so let's assert and see if it
2002 # I'm unsure if this is always true, so let's assert and see if it
2000 # crashes
2003 # crashes
2001 assert before.endswith(matched_text)
2004 assert before.endswith(matched_text)
2002 for m, t in zip(matches, matches_origin):
2005 for m, t in zip(matches, matches_origin):
2003 yield Completion(start=start_offset, end=offset, text=m, _origin=t, signature='', type='<unknown>')
2006 yield Completion(start=start_offset, end=offset, text=m, _origin=t, signature='', type='<unknown>')
2004
2007
2005
2008
2006 def complete(self, text=None, line_buffer=None, cursor_pos=None) -> Tuple[str, Sequence[str]]:
2009 def complete(self, text=None, line_buffer=None, cursor_pos=None) -> Tuple[str, Sequence[str]]:
2007 """Find completions for the given text and line context.
2010 """Find completions for the given text and line context.
2008
2011
2009 Note that both the text and the line_buffer are optional, but at least
2012 Note that both the text and the line_buffer are optional, but at least
2010 one of them must be given.
2013 one of them must be given.
2011
2014
2012 Parameters
2015 Parameters
2013 ----------
2016 ----------
2014 text : string, optional
2017 text : string, optional
2015 Text to perform the completion on. If not given, the line buffer
2018 Text to perform the completion on. If not given, the line buffer
2016 is split using the instance's CompletionSplitter object.
2019 is split using the instance's CompletionSplitter object.
2017 line_buffer : string, optional
2020 line_buffer : string, optional
2018 If not given, the completer attempts to obtain the current line
2021 If not given, the completer attempts to obtain the current line
2019 buffer via readline. This keyword allows clients which are
2022 buffer via readline. This keyword allows clients which are
2020 requesting for text completions in non-readline contexts to inform
2023 requesting for text completions in non-readline contexts to inform
2021 the completer of the entire text.
2024 the completer of the entire text.
2022 cursor_pos : int, optional
2025 cursor_pos : int, optional
2023 Index of the cursor in the full line buffer. Should be provided by
2026 Index of the cursor in the full line buffer. Should be provided by
2024 remote frontends where kernel has no access to frontend state.
2027 remote frontends where kernel has no access to frontend state.
2025
2028
2026 Returns
2029 Returns
2027 -------
2030 -------
2028 Tuple of two items:
2031 Tuple of two items:
2029 text : str
2032 text : str
2030 Text that was actually used in the completion.
2033 Text that was actually used in the completion.
2031 matches : list
2034 matches : list
2032 A list of completion matches.
2035 A list of completion matches.
2033
2036
2034 Notes
2037 Notes
2035 -----
2038 -----
2036 This API is likely to be deprecated and replaced by
2039 This API is likely to be deprecated and replaced by
2037 :any:`IPCompleter.completions` in the future.
2040 :any:`IPCompleter.completions` in the future.
2038
2041
2039 """
2042 """
2040 warnings.warn('`Completer.complete` is pending deprecation since '
2043 warnings.warn('`Completer.complete` is pending deprecation since '
2041 'IPython 6.0 and will be replaced by `Completer.completions`.',
2044 'IPython 6.0 and will be replaced by `Completer.completions`.',
2042 PendingDeprecationWarning)
2045 PendingDeprecationWarning)
2043 # potential todo, FOLD the 3rd throw away argument of _complete
2046 # potential todo, FOLD the 3rd throw away argument of _complete
2044 # into the first 2 one.
2047 # into the first 2 one.
2045 return self._complete(line_buffer=line_buffer, cursor_pos=cursor_pos, text=text, cursor_line=0)[:2]
2048 return self._complete(line_buffer=line_buffer, cursor_pos=cursor_pos, text=text, cursor_line=0)[:2]
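# Illustrative sketch (not part of the original source): the legacy API returns
# a (text, matches) pair, e.g. (assuming ``ip = get_ipython()``):
#
#   text, matches = ip.Completer.complete(line_buffer='imp', cursor_pos=3)
#   # text == 'imp'; matches would typically include 'import'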
2046
2049
2047 def _complete(self, *, cursor_line, cursor_pos, line_buffer=None, text=None,
2050 def _complete(self, *, cursor_line, cursor_pos, line_buffer=None, text=None,
2048 full_text=None) -> _CompleteResult:
2051 full_text=None) -> _CompleteResult:
2049 """
2052 """
2050 Like complete but can also return raw jedi completions as well as the
2053 Like complete but can also return raw jedi completions as well as the
2051 origin of the completion text. This could (and should) be made much
2054 origin of the completion text. This could (and should) be made much
2052 cleaner but that will be simpler once we drop the old (and stateful)
2055 cleaner but that will be simpler once we drop the old (and stateful)
2053 :any:`complete` API.
2056 :any:`complete` API.
2054
2057
2055 With the current provisional API, cursor_pos acts (depending on the
2058 With the current provisional API, cursor_pos acts (depending on the
2056 caller) either as the offset in the ``text`` or ``line_buffer``, or as the
2059 caller) either as the offset in the ``text`` or ``line_buffer``, or as the
2057 ``column`` when passing multiline strings; this could/should be renamed,
2060 ``column`` when passing multiline strings; this could/should be renamed,
2058 but that would add extra noise.
2061 but that would add extra noise.
2059
2062
2060 Parameters
2063 Parameters
2061 ----------
2064 ----------
2062 cursor_line :
2065 cursor_line :
2063 Index of the line the cursor is on. 0 indexed.
2066 Index of the line the cursor is on. 0 indexed.
2064 cursor_pos :
2067 cursor_pos :
2065 Position of the cursor in the current line/line_buffer/text. 0
2068 Position of the cursor in the current line/line_buffer/text. 0
2066 indexed.
2069 indexed.
2067 line_buffer : optional, str
2070 line_buffer : optional, str
2068 The current line the cursor is in; this exists mostly for the legacy
2071 The current line the cursor is in; this exists mostly for the legacy
2069 reason that readline could only give us the single current line.
2072 reason that readline could only give us the single current line.
2070 Prefer `full_text`.
2073 Prefer `full_text`.
2071 text : str
2074 text : str
2072 The current "token" the cursor is in, mostly also for historical
2075 The current "token" the cursor is in, mostly also for historical
2073 reasons, as the completer would trigger only after the current line
2076 reasons, as the completer would trigger only after the current line
2074 was parsed.
2077 was parsed.
2075 full_text : str
2078 full_text : str
2076 Full text of the current cell.
2079 Full text of the current cell.
2077
2080
2078 Returns
2081 Returns
2079 -------
2082 -------
2080 A tuple of four elements:
2083 A tuple of four elements:
2081 matched_text: the text that the completion matched
2084 matched_text: the text that the completion matched
2082 matches: list of completion strings
2085 matches: list of completion strings
2083 matches_origin: list of the same length as matches, giving where each completion came from
2086 matches_origin: list of the same length as matches, giving where each completion came from
2084 jedi_matches: list of Jedi matches, which has its own structure.
2087 jedi_matches: list of Jedi matches, which has its own structure.
2085 """
2088 """
2086
2089
2087
2090
2088 # if the cursor position isn't given, the only sane assumption we can
2091 # if the cursor position isn't given, the only sane assumption we can
2089 # make is that it's at the end of the line (the common case)
2092 # make is that it's at the end of the line (the common case)
2090 if cursor_pos is None:
2093 if cursor_pos is None:
2091 cursor_pos = len(line_buffer) if text is None else len(text)
2094 cursor_pos = len(line_buffer) if text is None else len(text)
2092
2095
2093 if self.use_main_ns:
2096 if self.use_main_ns:
2094 self.namespace = __main__.__dict__
2097 self.namespace = __main__.__dict__
2095
2098
2096 # if text is either None or an empty string, rely on the line buffer
2099 # if text is either None or an empty string, rely on the line buffer
2097 if (not line_buffer) and full_text:
2100 if (not line_buffer) and full_text:
2098 line_buffer = full_text.split('\n')[cursor_line]
2101 line_buffer = full_text.split('\n')[cursor_line]
2099 if not text: # issue #11508: check line_buffer before calling split_line
2102 if not text: # issue #11508: check line_buffer before calling split_line
2100 text = self.splitter.split_line(line_buffer, cursor_pos) if line_buffer else ''
2103 text = self.splitter.split_line(line_buffer, cursor_pos) if line_buffer else ''
2101
2104
2102 if self.backslash_combining_completions:
2105 if self.backslash_combining_completions:
2103 # allow deactivation of these on windows.
2106 # allow deactivation of these on windows.
2104 base_text = text if not line_buffer else line_buffer[:cursor_pos]
2107 base_text = text if not line_buffer else line_buffer[:cursor_pos]
2105
2108
2106 for meth in (self.latex_matches,
2109 for meth in (self.latex_matches,
2107 self.unicode_name_matches,
2110 self.unicode_name_matches,
2108 back_latex_name_matches,
2111 back_latex_name_matches,
2109 back_unicode_name_matches,
2112 back_unicode_name_matches,
2110 self.fwd_unicode_match):
2113 self.fwd_unicode_match):
2111 name_text, name_matches = meth(base_text)
2114 name_text, name_matches = meth(base_text)
2112 if name_text:
2115 if name_text:
2113 return _CompleteResult(name_text, name_matches[:MATCHES_LIMIT], \
2116 return _CompleteResult(name_text, name_matches[:MATCHES_LIMIT], \
2114 [meth.__qualname__]*min(len(name_matches), MATCHES_LIMIT), ())
2117 [meth.__qualname__]*min(len(name_matches), MATCHES_LIMIT), ())
2115
2118
2116
2119
2117 # If no line buffer is given, assume the input text is all there was
2120 # If no line buffer is given, assume the input text is all there was
2118 if line_buffer is None:
2121 if line_buffer is None:
2119 line_buffer = text
2122 line_buffer = text
2120
2123
2121 self.line_buffer = line_buffer
2124 self.line_buffer = line_buffer
2122 self.text_until_cursor = self.line_buffer[:cursor_pos]
2125 self.text_until_cursor = self.line_buffer[:cursor_pos]
2123
2126
2124 # Do magic arg matches
2127 # Do magic arg matches
2125 for matcher in self.magic_arg_matchers:
2128 for matcher in self.magic_arg_matchers:
2126 matches = list(matcher(line_buffer))[:MATCHES_LIMIT]
2129 matches = list(matcher(line_buffer))[:MATCHES_LIMIT]
2127 if matches:
2130 if matches:
2128 origins = [matcher.__qualname__] * len(matches)
2131 origins = [matcher.__qualname__] * len(matches)
2129 return _CompleteResult(text, matches, origins, ())
2132 return _CompleteResult(text, matches, origins, ())
2130
2133
2131 # Start with a clean slate of completions
2134 # Start with a clean slate of completions
2132 matches = []
2135 matches = []
2133
2136
2134 # FIXME: we should extend our api to return a dict with completions for
2137 # FIXME: we should extend our api to return a dict with completions for
2135 # different types of objects. The rlcomplete() method could then
2138 # different types of objects. The rlcomplete() method could then
2136 # simply collapse the dict into a list for readline, but we'd have
2139 # simply collapse the dict into a list for readline, but we'd have
2137 # richer completion semantics in other environments.
2140 # richer completion semantics in other environments.
2138 completions:Iterable[Any] = []
2141 completions:Iterable[Any] = []
2139 if self.use_jedi:
2142 if self.use_jedi:
2140 if not full_text:
2143 if not full_text:
2141 full_text = line_buffer
2144 full_text = line_buffer
2142 completions = self._jedi_matches(
2145 completions = self._jedi_matches(
2143 cursor_pos, cursor_line, full_text)
2146 cursor_pos, cursor_line, full_text)
2144
2147
2145 if self.merge_completions:
2148 if self.merge_completions:
2146 matches = []
2149 matches = []
2147 for matcher in self.matchers:
2150 for matcher in self.matchers:
2148 try:
2151 try:
2149 matches.extend([(m, matcher.__qualname__)
2152 matches.extend([(m, matcher.__qualname__)
2150 for m in matcher(text)])
2153 for m in matcher(text)])
2151 except:
2154 except:
2152 # Show the ugly traceback if the matcher causes an
2155 # Show the ugly traceback if the matcher causes an
2153 # exception, but do NOT crash the kernel!
2156 # exception, but do NOT crash the kernel!
2154 sys.excepthook(*sys.exc_info())
2157 sys.excepthook(*sys.exc_info())
2155 else:
2158 else:
2156 for matcher in self.matchers:
2159 for matcher in self.matchers:
2157 matches = [(m, matcher.__qualname__)
2160 matches = [(m, matcher.__qualname__)
2158 for m in matcher(text)]
2161 for m in matcher(text)]
2159 if matches:
2162 if matches:
2160 break
2163 break
2161
2164
2162 seen = set()
2165 seen = set()
2163 filtered_matches = set()
2166 filtered_matches = set()
2164 for m in matches:
2167 for m in matches:
2165 t, c = m
2168 t, c = m
2166 if t not in seen:
2169 if t not in seen:
2167 filtered_matches.add(m)
2170 filtered_matches.add(m)
2168 seen.add(t)
2171 seen.add(t)
2169
2172
2170 _filtered_matches = sorted(filtered_matches, key=lambda x: completions_sorting_key(x[0]))
2173 _filtered_matches = sorted(filtered_matches, key=lambda x: completions_sorting_key(x[0]))
2171
2174
2172 custom_res = [(m, 'custom') for m in self.dispatch_custom_completer(text) or []]
2175 custom_res = [(m, 'custom') for m in self.dispatch_custom_completer(text) or []]
2173
2176
2174 _filtered_matches = custom_res or _filtered_matches
2177 _filtered_matches = custom_res or _filtered_matches
2175
2178
2176 _filtered_matches = _filtered_matches[:MATCHES_LIMIT]
2179 _filtered_matches = _filtered_matches[:MATCHES_LIMIT]
2177 _matches = [m[0] for m in _filtered_matches]
2180 _matches = [m[0] for m in _filtered_matches]
2178 origins = [m[1] for m in _filtered_matches]
2181 origins = [m[1] for m in _filtered_matches]
2179
2182
2180 self.matches = _matches
2183 self.matches = _matches
2181
2184
2182 return _CompleteResult(text, _matches, origins, completions)
2185 return _CompleteResult(text, _matches, origins, completions)
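# Illustrative sketch (not part of the original source): internal callers
# unpack the returned named tuple, e.g.:
#
#   matched_text, matches, origins, jedi_matches = self._complete(
#       cursor_line=0, cursor_pos=3, line_buffer='imp')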
2183
2186
2184 def fwd_unicode_match(self, text:str) -> Tuple[str, Sequence[str]]:
2187 def fwd_unicode_match(self, text:str) -> Tuple[str, Sequence[str]]:
2185 """
2188 """
2186 Forward match a string starting with a backslash with a list of
2189 Forward match a string starting with a backslash with a list of
2187 potential Unicode completions.
2190 potential Unicode completions.
2188
2191
2189 Will compute the list of Unicode character names on first call and cache it.
2192 Will compute the list of Unicode character names on first call and cache it.
2190
2193
2191 Returns
2194 Returns
2192 -------
2195 -------
2193 A tuple with:
2196 A tuple with:
2194 - matched text (empty if no matches)
2197 - matched text (empty if no matches)
2195 - list of potential completions (an empty tuple if there are none)
2198 - list of potential completions (an empty tuple if there are none)
2196 """
2199 """
2197 # TODO: self.unicode_names is here a list we traverse each time with ~100k elements.
2200 # TODO: self.unicode_names is here a list we traverse each time with ~100k elements.
2198 # We could do a faster match using a Trie.
2201 # We could do a faster match using a Trie.
2199
2202
2200 # Using pygtrie the following seems to work:
2203 # Using pygtrie the following seems to work:
2201
2204
2202 # s = PrefixSet()
2205 # s = PrefixSet()
2203
2206
2204 # for c in range(0,0x10FFFF + 1):
2207 # for c in range(0,0x10FFFF + 1):
2205 # try:
2208 # try:
2206 # s.add(unicodedata.name(chr(c)))
2209 # s.add(unicodedata.name(chr(c)))
2207 # except ValueError:
2210 # except ValueError:
2208 # pass
2211 # pass
2209 # [''.join(k) for k in s.iter(prefix)]
2212 # [''.join(k) for k in s.iter(prefix)]
2210
2213
2211 # But need to be timed and adds an extra dependency.
2214 # But need to be timed and adds an extra dependency.
2212
2215
2213 slashpos = text.rfind('\\')
2216 slashpos = text.rfind('\\')
2214 # if text starts with slash
2217 # if text starts with slash
2215 if slashpos > -1:
2218 if slashpos > -1:
2216 # PERF: It's important that we don't access self._unicode_names
2219 # PERF: It's important that we don't access self._unicode_names
2217 # until we're inside this if-block. _unicode_names is lazily
2220 # until we're inside this if-block. _unicode_names is lazily
2218 # initialized, and it takes a user-noticeable amount of time to
2221 # initialized, and it takes a user-noticeable amount of time to
2219 # initialize it, so we don't want to initialize it unless we're
2222 # initialize it, so we don't want to initialize it unless we're
2220 # actually going to use it.
2223 # actually going to use it.
2221 s = text[slashpos+1:]
2224 s = text[slashpos+1:]
2222 candidates = [x for x in self.unicode_names if x.startswith(s)]
2225 candidates = [x for x in self.unicode_names if x.startswith(s)]
2223 if candidates:
2226 if candidates:
2224 return s, candidates
2227 return s, candidates
2225 else:
2228 else:
2226 return '', ()
2229 return '', ()
2227
2230
2228 # if text does not start with slash
2231 # if text does not start with slash
2229 else:
2232 else:
2230 return '', ()
2233 return '', ()
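# Illustrative sketch (not part of the original source): note that, unlike
# unicode_name_matches, the matched text here does not include the backslash:
#
#   c.fwd_unicode_match('\\GREEK SMALL LETTER AL')
#   # -> ('GREEK SMALL LETTER AL', ['GREEK SMALL LETTER ALPHA', ...])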
2231
2234
2232 @property
2235 @property
2233 def unicode_names(self) -> List[str]:
2236 def unicode_names(self) -> List[str]:
2234 """List of names of unicode code points that can be completed.
2237 """List of names of unicode code points that can be completed.
2235
2238
2236 The list is lazily initialized on first access.
2239 The list is lazily initialized on first access.
2237 """
2240 """
2238 if self._unicode_names is None:
2241 if self._unicode_names is None:
2245 self._unicode_names = _unicode_name_compute(_UNICODE_RANGES)
2248 self._unicode_names = _unicode_name_compute(_UNICODE_RANGES)
2246
2249
2247 return self._unicode_names
2250 return self._unicode_names
2248
2251
2249 def _unicode_name_compute(ranges:List[Tuple[int,int]]) -> List[str]:
2252 def _unicode_name_compute(ranges:List[Tuple[int,int]]) -> List[str]:
2250 names = []
2253 names = []
2251 for start,stop in ranges:
2254 for start,stop in ranges:
2252 for c in range(start, stop) :
2255 for c in range(start, stop) :
2253 try:
2256 try:
2254 names.append(unicodedata.name(chr(c)))
2257 names.append(unicodedata.name(chr(c)))
2255 except ValueError:
2258 except ValueError:
2256 pass
2259 pass
2257 return names
2260 return names
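# Illustrative sketch (not part of the original source): assuming
# _UNICODE_RANGES is a list of (start, stop) code-point pairs defined elsewhere
# in this module, the helper is used as:
#
#   names = _unicode_name_compute(_UNICODE_RANGES)
#   # e.g. 'GREEK SMALL LETTER ALPHA' in names -> True (if its range is included)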
@@ -1,437 +1,365 b''
1 """Nose Plugin that supports IPython doctests.
1 """Nose Plugin that supports IPython doctests.
2
2
3 Limitations:
3 Limitations:
4
4
5 - When generating examples for use as doctests, make sure that you have
5 - When generating examples for use as doctests, make sure that you have
6 pretty-printing OFF. This can be done either by setting the
6 pretty-printing OFF. This can be done either by setting the
7 ``PlainTextFormatter.pprint`` option in your configuration file to False, or
7 ``PlainTextFormatter.pprint`` option in your configuration file to False, or
8 by interactively disabling it with %Pprint. This is required so that IPython
8 by interactively disabling it with %Pprint. This is required so that IPython
9 output matches that of normal Python, which is used by doctest for internal
9 output matches that of normal Python, which is used by doctest for internal
10 execution.
10 execution.
11
11
12 - Do not rely on specific prompt numbers for results (such as using
12 - Do not rely on specific prompt numbers for results (such as using
13 '_34==True', for example). For IPython tests run via an external process the
13 '_34==True', for example). For IPython tests run via an external process the
14 prompt numbers may be different, and IPython tests run as normal python code
14 prompt numbers may be different, and IPython tests run as normal python code
15 won't even have these special _NN variables set at all.
15 won't even have these special _NN variables set at all.
16 """
16 """
17
17
18 #-----------------------------------------------------------------------------
18 #-----------------------------------------------------------------------------
19 # Module imports
19 # Module imports
20
20
21 # From the standard library
21 # From the standard library
22 import doctest
22 import doctest
23 import inspect
24 import logging
23 import logging
25 import os
24 import os
26 import re
25 import re
27
26
28 from testpath import modified_env
27 from testpath import modified_env
29
28
30 #-----------------------------------------------------------------------------
29 #-----------------------------------------------------------------------------
31 # Module globals and other constants
30 # Module globals and other constants
32 #-----------------------------------------------------------------------------
31 #-----------------------------------------------------------------------------
33
32
34 log = logging.getLogger(__name__)
33 log = logging.getLogger(__name__)
35
34
36
35
37 #-----------------------------------------------------------------------------
36 #-----------------------------------------------------------------------------
38 # Classes and functions
37 # Classes and functions
39 #-----------------------------------------------------------------------------
38 #-----------------------------------------------------------------------------
40
39
41 class DocTestSkip(object):
40 class DocTestSkip(object):
42 """Object wrapper for doctests to be skipped."""
41 """Object wrapper for doctests to be skipped."""
43
42
44 ds_skip = """Doctest to skip.
43 ds_skip = """Doctest to skip.
45 >>> 1 #doctest: +SKIP
44 >>> 1 #doctest: +SKIP
46 """
45 """
47
46
48 def __init__(self,obj):
47 def __init__(self,obj):
49 self.obj = obj
48 self.obj = obj
50
49
51 def __getattribute__(self,key):
50 def __getattribute__(self,key):
52 if key == '__doc__':
51 if key == '__doc__':
53 return DocTestSkip.ds_skip
52 return DocTestSkip.ds_skip
54 else:
53 else:
55 return getattr(object.__getattribute__(self,'obj'),key)
54 return getattr(object.__getattribute__(self,'obj'),key)
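# Illustrative sketch (not part of the original source): wrapping an object
# swaps out only its docstring, leaving other attributes untouched:
#
#   def f():
#       """>>> 1/0"""
#   DocTestSkip(f).__doc__   # -> DocTestSkip.ds_skip
#   DocTestSkip(f).__name__  # -> 'f'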
56
55
57 # Modified version of the one in the stdlib, that fixes a python bug (doctests
58 # not found in extension modules, http://bugs.python.org/issue3158)
59 class DocTestFinder(doctest.DocTestFinder):
60
61 def _from_module(self, module, object):
62 """
63 Return true if the given object is defined in the given
64 module.
65 """
66 if module is None:
67 return True
68 elif inspect.isfunction(object):
69 return module.__dict__ is object.__globals__
70 elif inspect.isbuiltin(object):
71 return module.__name__ == object.__module__
72 elif inspect.isclass(object):
73 return module.__name__ == object.__module__
74 elif inspect.ismethod(object):
75 # This one may be a bug in cython that fails to correctly set the
76 # __module__ attribute of methods, but since the same error is easy
77 # to make by extension code writers, having this safety in place
78 # isn't such a bad idea
79 return module.__name__ == object.__self__.__class__.__module__
80 elif inspect.getmodule(object) is not None:
81 return module is inspect.getmodule(object)
82 elif hasattr(object, '__module__'):
83 return module.__name__ == object.__module__
84 elif isinstance(object, property):
85 return True # [XX] no way not be sure.
86 elif inspect.ismethoddescriptor(object):
87 # Unbound PyQt signals reach this point in Python 3.4b3, and we want
88 # to avoid throwing an error. See also http://bugs.python.org/issue3158
89 return False
90 else:
91 raise ValueError("object must be a class or function, got %r" % object)
92
56
57 class DocTestFinder(doctest.DocTestFinder):
93 def _find(self, tests, obj, name, module, source_lines, globs, seen):
58 def _find(self, tests, obj, name, module, source_lines, globs, seen):
94 """
59 """
95 Find tests for the given object and any contained objects, and
60 Find tests for the given object and any contained objects, and
96 add them to `tests`.
61 add them to `tests`.
97 """
62 """
98 print('_find for:', obj, name, module) # dbg
63 print('_find for:', obj, name, module) # dbg
99 if bool(getattr(obj, "__skip_doctest__", False)):
64 if bool(getattr(obj, "__skip_doctest__", False)):
100 #print 'SKIPPING DOCTEST FOR:',obj # dbg
65 #print 'SKIPPING DOCTEST FOR:',obj # dbg
101 obj = DocTestSkip(obj)
66 obj = DocTestSkip(obj)
102
67
103 doctest.DocTestFinder._find(self,tests, obj, name, module,
68 super()._find(tests, obj, name, module, source_lines, globs, seen)
104 source_lines, globs, seen)
105
106 # Below we re-run pieces of the above method with manual modifications,
107 # because the original code is buggy and fails to correctly identify
108 # doctests in extension modules.
109
110 # Local shorthands
111 from inspect import isroutine, isclass
112
113 # Look for tests in a module's contained objects.
114 if inspect.ismodule(obj) and self._recurse:
115 for valname, val in obj.__dict__.items():
116 valname1 = '%s.%s' % (name, valname)
117 if ( (isroutine(val) or isclass(val))
118 and self._from_module(module, val) ):
119
120 self._find(tests, val, valname1, module, source_lines,
121 globs, seen)
122
123 # Look for tests in a class's contained objects.
124 if inspect.isclass(obj) and self._recurse:
125 #print 'RECURSE into class:',obj # dbg
126 for valname, val in obj.__dict__.items():
127 # Special handling for staticmethod/classmethod.
128 if isinstance(val, staticmethod):
129 val = getattr(obj, valname)
130 if isinstance(val, classmethod):
131 val = getattr(obj, valname).__func__
132
133 # Recurse to methods, properties, and nested classes.
134 if ((inspect.isfunction(val) or inspect.isclass(val) or
135 inspect.ismethod(val) or
136 isinstance(val, property)) and
137 self._from_module(module, val)):
138 valname = '%s.%s' % (name, valname)
139 self._find(tests, val, valname, module, source_lines,
140 globs, seen)
141
69
142
70
143 class IPDoctestOutputChecker(doctest.OutputChecker):
71 class IPDoctestOutputChecker(doctest.OutputChecker):
144 """Second-chance checker with support for random tests.
72 """Second-chance checker with support for random tests.
145
73
146 If the default comparison doesn't pass, this checker looks in the expected
74 If the default comparison doesn't pass, this checker looks in the expected
147 output string for flags that tell us to ignore the output.
75 output string for flags that tell us to ignore the output.
148 """
76 """
149
77
150 random_re = re.compile(r'#\s*random\s+')
78 random_re = re.compile(r'#\s*random\s+')
151
79
152 def check_output(self, want, got, optionflags):
80 def check_output(self, want, got, optionflags):
153 """Check output, accepting special markers embedded in the output.
81 """Check output, accepting special markers embedded in the output.
154
82
155 If the output didn't pass the default validation but the special string
83 If the output didn't pass the default validation but the special string
156 '#random' is included, we accept it."""
84 '#random' is included, we accept it."""
157
85
158 # Let the original tester verify first, in case people have valid tests
86 # Let the original tester verify first, in case people have valid tests
159 # that happen to have a comment saying '#random' embedded in.
87 # that happen to have a comment saying '#random' embedded in.
160 ret = doctest.OutputChecker.check_output(self, want, got,
88 ret = doctest.OutputChecker.check_output(self, want, got,
161 optionflags)
89 optionflags)
162 if not ret and self.random_re.search(want):
90 if not ret and self.random_re.search(want):
163 #print >> sys.stderr, 'RANDOM OK:',want # dbg
91 #print >> sys.stderr, 'RANDOM OK:',want # dbg
164 return True
92 return True
165
93
166 return ret
94 return ret
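# Illustrative sketch (not part of the original source): with the '#random'
# marker the checker accepts mismatching output, e.g.:
#
#   checker = IPDoctestOutputChecker()
#   checker.check_output(want='0.123 #random \n', got='0.999\n', optionflags=0)
#   # -> True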
167
95
168
96
169 # A simple subclass of the original with a different class name, so we can
97 # A simple subclass of the original with a different class name, so we can
170 # distinguish IPython examples from pure Python ones and treat them differently.
98 # distinguish IPython examples from pure Python ones and treat them differently.
171 class IPExample(doctest.Example): pass
99 class IPExample(doctest.Example): pass
172
100
173
101
174 class IPExternalExample(doctest.Example):
102 class IPExternalExample(doctest.Example):
175 """Doctest examples to be run in an external process."""
103 """Doctest examples to be run in an external process."""
176
104
177 def __init__(self, source, want, exc_msg=None, lineno=0, indent=0,
105 def __init__(self, source, want, exc_msg=None, lineno=0, indent=0,
178 options=None):
106 options=None):
179 # Parent constructor
107 # Parent constructor
180 doctest.Example.__init__(self,source,want,exc_msg,lineno,indent,options)
108 doctest.Example.__init__(self,source,want,exc_msg,lineno,indent,options)
181
109
182 # An EXTRA newline is needed to prevent pexpect hangs
110 # An EXTRA newline is needed to prevent pexpect hangs
183 self.source += '\n'
111 self.source += '\n'
184
112
185
113
186 class IPDocTestParser(doctest.DocTestParser):
114 class IPDocTestParser(doctest.DocTestParser):
187 """
115 """
188 A class used to parse strings containing doctest examples.
116 A class used to parse strings containing doctest examples.
189
117
190 Note: This is a version modified to properly recognize IPython input and
118 Note: This is a version modified to properly recognize IPython input and
191 convert any IPython examples into valid Python ones.
119 convert any IPython examples into valid Python ones.
192 """
120 """
193 # This regular expression is used to find doctest examples in a
121 # This regular expression is used to find doctest examples in a
194 # string. It defines three groups: `source` is the source code
122 # string. It defines three groups: `source` is the source code
195 # (including leading indentation and prompts); `indent` is the
123 # (including leading indentation and prompts); `indent` is the
196 # indentation of the first (PS1) line of the source code; and
196 # indentation of the first (PS1) line of the source code; and
124 # indentation of the first (PS1) line of the source code; and
125 # `want` is the expected output (including leading indentation).
198
126
199 # Classic Python prompts or default IPython ones
127 # Classic Python prompts or default IPython ones
200 _PS1_PY = r'>>>'
128 _PS1_PY = r'>>>'
201 _PS2_PY = r'\.\.\.'
129 _PS2_PY = r'\.\.\.'
202
130
203 _PS1_IP = r'In\ \[\d+\]:'
131 _PS1_IP = r'In\ \[\d+\]:'
204 _PS2_IP = r'\ \ \ \.\.\.+:'
132 _PS2_IP = r'\ \ \ \.\.\.+:'
205
133
206 _RE_TPL = r'''
134 _RE_TPL = r'''
207 # Source consists of a PS1 line followed by zero or more PS2 lines.
135 # Source consists of a PS1 line followed by zero or more PS2 lines.
208 (?P<source>
136 (?P<source>
209 (?:^(?P<indent> [ ]*) (?P<ps1> %s) .*) # PS1 line
137 (?:^(?P<indent> [ ]*) (?P<ps1> %s) .*) # PS1 line
210 (?:\n [ ]* (?P<ps2> %s) .*)*) # PS2 lines
138 (?:\n [ ]* (?P<ps2> %s) .*)*) # PS2 lines
211 \n? # a newline
139 \n? # a newline
212 # Want consists of any non-blank lines that do not start with PS1.
140 # Want consists of any non-blank lines that do not start with PS1.
213 (?P<want> (?:(?![ ]*$) # Not a blank line
141 (?P<want> (?:(?![ ]*$) # Not a blank line
214 (?![ ]*%s) # Not a line starting with PS1
142 (?![ ]*%s) # Not a line starting with PS1
215 (?![ ]*%s) # Not a line starting with PS2
143 (?![ ]*%s) # Not a line starting with PS2
216 .*$\n? # But any other line
144 .*$\n? # But any other line
217 )*)
145 )*)
218 '''
146 '''
219
147
220 _EXAMPLE_RE_PY = re.compile( _RE_TPL % (_PS1_PY,_PS2_PY,_PS1_PY,_PS2_PY),
148 _EXAMPLE_RE_PY = re.compile( _RE_TPL % (_PS1_PY,_PS2_PY,_PS1_PY,_PS2_PY),
221 re.MULTILINE | re.VERBOSE)
149 re.MULTILINE | re.VERBOSE)
222
150
223 _EXAMPLE_RE_IP = re.compile( _RE_TPL % (_PS1_IP,_PS2_IP,_PS1_IP,_PS2_IP),
151 _EXAMPLE_RE_IP = re.compile( _RE_TPL % (_PS1_IP,_PS2_IP,_PS1_IP,_PS2_IP),
224 re.MULTILINE | re.VERBOSE)
152 re.MULTILINE | re.VERBOSE)
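# Illustrative sketch (not part of the original source): the IPython-prompt
# pattern is expected to split an example into its named groups roughly as:
#
#   m = IPDocTestParser._EXAMPLE_RE_IP.search('In [1]: 1+1\nOut[1]: 2\n')
#   m.group('source')  # -> 'In [1]: 1+1'
#   m.group('want')    # -> 'Out[1]: 2\n'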
225
153
226 # Mark a test as being fully random. In this case, we simply append the
154 # Mark a test as being fully random. In this case, we simply append the
227 # random marker ('#random') to each individual example's output. This way
155 # random marker ('#random') to each individual example's output. This way
228 # we don't need to modify any other code.
156 # we don't need to modify any other code.
229 _RANDOM_TEST = re.compile(r'#\s*all-random\s+')
157 _RANDOM_TEST = re.compile(r'#\s*all-random\s+')
230
158
231 # Mark tests to be executed in an external process - currently unsupported.
159 # Mark tests to be executed in an external process - currently unsupported.
232 _EXTERNAL_IP = re.compile(r'#\s*ipdoctest:\s*EXTERNAL')
160 _EXTERNAL_IP = re.compile(r'#\s*ipdoctest:\s*EXTERNAL')
233
161
234 def ip2py(self,source):
162 def ip2py(self,source):
235 """Convert input IPython source into valid Python."""
163 """Convert input IPython source into valid Python."""
236 block = _ip.input_transformer_manager.transform_cell(source)
164 block = _ip.input_transformer_manager.transform_cell(source)
237 if len(block.splitlines()) == 1:
165 if len(block.splitlines()) == 1:
238 return _ip.prefilter(block)
166 return _ip.prefilter(block)
239 else:
167 else:
240 return block
168 return block
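# Illustrative sketch (not part of the original source): assuming ``_ip`` is
# the globally installed test shell, magic syntax is rewritten to plain Python;
# e.g. ip2py('%pwd\n') should return something along the lines of
# "get_ipython().run_line_magic('pwd', '')\n".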
241
169
242 def parse(self, string, name='<string>'):
170 def parse(self, string, name='<string>'):
243 """
171 """
244 Divide the given string into examples and intervening text,
172 Divide the given string into examples and intervening text,
245 and return them as a list of alternating Examples and strings.
173 and return them as a list of alternating Examples and strings.
246 Line numbers for the Examples are 0-based. The optional
174 Line numbers for the Examples are 0-based. The optional
247 argument `name` is a name identifying this string, and is only
175 argument `name` is a name identifying this string, and is only
248 used for error messages.
176 used for error messages.
249 """
177 """
250
178
251 #print 'Parse string:\n',string # dbg
179 #print 'Parse string:\n',string # dbg
252
180
253 string = string.expandtabs()
181 string = string.expandtabs()
254 # If all lines begin with the same indentation, then strip it.
182 # If all lines begin with the same indentation, then strip it.
255 min_indent = self._min_indent(string)
183 min_indent = self._min_indent(string)
256 if min_indent > 0:
184 if min_indent > 0:
257 string = '\n'.join([l[min_indent:] for l in string.split('\n')])
185 string = '\n'.join([l[min_indent:] for l in string.split('\n')])
258
186
259 output = []
187 output = []
260 charno, lineno = 0, 0
188 charno, lineno = 0, 0
261
189
262 # We make 'all random' tests by adding the '# random' mark to every
190 # We make 'all random' tests by adding the '# random' mark to every
263 # block of output in the test.
191 # block of output in the test.
264 if self._RANDOM_TEST.search(string):
192 if self._RANDOM_TEST.search(string):
265 random_marker = '\n# random'
193 random_marker = '\n# random'
266 else:
194 else:
267 random_marker = ''
195 random_marker = ''
268
196
269 # Whether to convert the input from ipython to python syntax
197 # Whether to convert the input from ipython to python syntax
270 ip2py = False
198 ip2py = False
271 # Find all doctest examples in the string. First, try them as Python
199 # Find all doctest examples in the string. First, try them as Python
272 # examples, then as IPython ones
200 # examples, then as IPython ones
273 terms = list(self._EXAMPLE_RE_PY.finditer(string))
201 terms = list(self._EXAMPLE_RE_PY.finditer(string))
274 if terms:
202 if terms:
275 # Normal Python example
203 # Normal Python example
276 #print '-'*70 # dbg
204 #print '-'*70 # dbg
277 #print 'PyExample, Source:\n',string # dbg
205 #print 'PyExample, Source:\n',string # dbg
278 #print '-'*70 # dbg
206 #print '-'*70 # dbg
279 Example = doctest.Example
207 Example = doctest.Example
280 else:
208 else:
281 # It's an ipython example. Note that IPExamples are run
209 # It's an ipython example. Note that IPExamples are run
282 # in-process, so their syntax must be turned into valid python.
210 # in-process, so their syntax must be turned into valid python.
283 # IPExternalExamples are run out-of-process (via pexpect) so they
211 # IPExternalExamples are run out-of-process (via pexpect) so they
284 # don't need any filtering (a real ipython will be executing them).
212 # don't need any filtering (a real ipython will be executing them).
285 terms = list(self._EXAMPLE_RE_IP.finditer(string))
213 terms = list(self._EXAMPLE_RE_IP.finditer(string))
286 if self._EXTERNAL_IP.search(string):
214 if self._EXTERNAL_IP.search(string):
287 #print '-'*70 # dbg
215 #print '-'*70 # dbg
288 #print 'IPExternalExample, Source:\n',string # dbg
216 #print 'IPExternalExample, Source:\n',string # dbg
289 #print '-'*70 # dbg
217 #print '-'*70 # dbg
290 Example = IPExternalExample
218 Example = IPExternalExample
291 else:
219 else:
292 #print '-'*70 # dbg
220 #print '-'*70 # dbg
293 #print 'IPExample, Source:\n',string # dbg
221 #print 'IPExample, Source:\n',string # dbg
294 #print '-'*70 # dbg
222 #print '-'*70 # dbg
295 Example = IPExample
223 Example = IPExample
296 ip2py = True
224 ip2py = True
297
225
298 for m in terms:
226 for m in terms:
299 # Add the pre-example text to `output`.
227 # Add the pre-example text to `output`.
300 output.append(string[charno:m.start()])
228 output.append(string[charno:m.start()])
301 # Update lineno (lines before this example)
229 # Update lineno (lines before this example)
302 lineno += string.count('\n', charno, m.start())
230 lineno += string.count('\n', charno, m.start())
303 # Extract info from the regexp match.
231 # Extract info from the regexp match.
304 (source, options, want, exc_msg) = \
232 (source, options, want, exc_msg) = \
305 self._parse_example(m, name, lineno,ip2py)
233 self._parse_example(m, name, lineno,ip2py)
306
234
307 # Append the random-output marker (it defaults to empty in most
235 # Append the random-output marker (it defaults to empty in most
308 # cases, it's only non-empty for 'all-random' tests):
236 # cases, it's only non-empty for 'all-random' tests):
309 want += random_marker
237 want += random_marker
310
238
311 if Example is IPExternalExample:
239 if Example is IPExternalExample:
312 options[doctest.NORMALIZE_WHITESPACE] = True
240 options[doctest.NORMALIZE_WHITESPACE] = True
313 want += '\n'
241 want += '\n'
314
242
315 # Create an Example, and add it to the list.
243 # Create an Example, and add it to the list.
316 if not self._IS_BLANK_OR_COMMENT(source):
244 if not self._IS_BLANK_OR_COMMENT(source):
317 output.append(Example(source, want, exc_msg,
245 output.append(Example(source, want, exc_msg,
318 lineno=lineno,
246 lineno=lineno,
319 indent=min_indent+len(m.group('indent')),
247 indent=min_indent+len(m.group('indent')),
320 options=options))
248 options=options))
321 # Update lineno (lines inside this example)
249 # Update lineno (lines inside this example)
322 lineno += string.count('\n', m.start(), m.end())
250 lineno += string.count('\n', m.start(), m.end())
323 # Update charno.
251 # Update charno.
324 charno = m.end()
252 charno = m.end()
325 # Add any remaining post-example text to `output`.
253 # Add any remaining post-example text to `output`.
326 output.append(string[charno:])
254 output.append(string[charno:])
327 return output
255 return output
328
256
329 def _parse_example(self, m, name, lineno,ip2py=False):
257 def _parse_example(self, m, name, lineno,ip2py=False):
330 """
258 """
331 Given a regular expression match from `_EXAMPLE_RE` (`m`),
259 Given a regular expression match from `_EXAMPLE_RE` (`m`),
332 return a tuple `(source, options, want, exc_msg)`, where `source` is the matched
260 return a tuple `(source, options, want, exc_msg)`, where `source` is the matched
333 example's source code (with prompts and indentation stripped);
261 example's source code (with prompts and indentation stripped);
334 and `want` is the example's expected output (with indentation
262 and `want` is the example's expected output (with indentation
335 stripped).
263 stripped).
336
264
337 `name` is the string's name, and `lineno` is the line number
265 `name` is the string's name, and `lineno` is the line number
338 where the example starts; both are used for error messages.
266 where the example starts; both are used for error messages.
339
267
340 Optional:
268 Optional:
341 `ip2py`: if true, filter the input via IPython to convert the syntax
269 `ip2py`: if true, filter the input via IPython to convert the syntax
342 into valid python.
270 into valid python.
343 """
271 """
344
272
345 # Get the example's indentation level.
273 # Get the example's indentation level.
346 indent = len(m.group('indent'))
274 indent = len(m.group('indent'))
347
275
348 # Divide source into lines; check that they're properly
276 # Divide source into lines; check that they're properly
349 # indented; and then strip their indentation & prompts.
277 # indented; and then strip their indentation & prompts.
350 source_lines = m.group('source').split('\n')
278 source_lines = m.group('source').split('\n')
351
279
352 # We're using variable-length input prompts
280 # We're using variable-length input prompts
353 ps1 = m.group('ps1')
281 ps1 = m.group('ps1')
354 ps2 = m.group('ps2')
282 ps2 = m.group('ps2')
355 ps1_len = len(ps1)
283 ps1_len = len(ps1)
356
284
357 self._check_prompt_blank(source_lines, indent, name, lineno,ps1_len)
285 self._check_prompt_blank(source_lines, indent, name, lineno,ps1_len)
358 if ps2:
286 if ps2:
359 self._check_prefix(source_lines[1:], ' '*indent + ps2, name, lineno)
287 self._check_prefix(source_lines[1:], ' '*indent + ps2, name, lineno)
360
288
361 source = '\n'.join([sl[indent+ps1_len+1:] for sl in source_lines])
289 source = '\n'.join([sl[indent+ps1_len+1:] for sl in source_lines])
362
290
363 if ip2py:
291 if ip2py:
364 # Convert source input from IPython into valid Python syntax
292 # Convert source input from IPython into valid Python syntax
365 source = self.ip2py(source)
293 source = self.ip2py(source)
366
294
367 # Divide want into lines; check that it's properly indented; and
295 # Divide want into lines; check that it's properly indented; and
368 # then strip the indentation. Spaces before the last newline should
296 # then strip the indentation. Spaces before the last newline should
369 # be preserved, so plain rstrip() isn't good enough.
297 # be preserved, so plain rstrip() isn't good enough.
370 want = m.group('want')
298 want = m.group('want')
371 want_lines = want.split('\n')
299 want_lines = want.split('\n')
372 if len(want_lines) > 1 and re.match(r' *$', want_lines[-1]):
300 if len(want_lines) > 1 and re.match(r' *$', want_lines[-1]):
373 del want_lines[-1] # forget final newline & spaces after it
301 del want_lines[-1] # forget final newline & spaces after it
374 self._check_prefix(want_lines, ' '*indent, name,
302 self._check_prefix(want_lines, ' '*indent, name,
375 lineno + len(source_lines))
303 lineno + len(source_lines))
376
304
377 # Remove ipython output prompt that might be present in the first line
305 # Remove ipython output prompt that might be present in the first line
378 want_lines[0] = re.sub(r'Out\[\d+\]: \s*?\n?','',want_lines[0])
306 want_lines[0] = re.sub(r'Out\[\d+\]: \s*?\n?','',want_lines[0])
379
307
380 want = '\n'.join([wl[indent:] for wl in want_lines])
308 want = '\n'.join([wl[indent:] for wl in want_lines])
381
309
382 # If `want` contains a traceback message, then extract it.
310 # If `want` contains a traceback message, then extract it.
383 m = self._EXCEPTION_RE.match(want)
311 m = self._EXCEPTION_RE.match(want)
384 if m:
312 if m:
385 exc_msg = m.group('msg')
313 exc_msg = m.group('msg')
386 else:
314 else:
387 exc_msg = None
315 exc_msg = None
388
316
389 # Extract options from the source.
317 # Extract options from the source.
390 options = self._find_options(source, name, lineno)
318 options = self._find_options(source, name, lineno)
391
319
392 return source, options, want, exc_msg
320 return source, options, want, exc_msg
393
321
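    # Illustrative note added for this excerpt (not in the original file):
    # for an IPython-style example such as
    #
    #     In [1]: x = 1 + 1
    #
    #     In [2]: x
    #     Out[2]: 2
    #
    # the method above strips the "In [N]:" and continuation prompts from the
    # source lines, removes a leading "Out[N]:" prompt from the first line of
    # the expected output, and returns the cleaned-up source and want strings
    # together with the option flags and any expected exception message.
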
    def _check_prompt_blank(self, lines, indent, name, lineno, ps1_len):
        """
        Given the lines of a source string (including prompts and
        leading indentation), check to make sure that every prompt is
        followed by a space character.  If any prompt is not followed
        by a space, raise ValueError.

        Note: IPython-modified version which takes the input prompt length as a
        parameter, so that prompts of variable length can be dealt with.
        """
        space_idx = indent + ps1_len
        min_len = space_idx + 1
        for i, line in enumerate(lines):
            if len(line) >= min_len and line[space_idx] != ' ':
                raise ValueError('line %r of the docstring for %s '
                                 'lacks blank after %s: %r' %
                                 (lineno+i+1, name,
                                  line[indent:space_idx], line))
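    # Example added for this excerpt (not in the original file): a source line
    # such as "In [1]:x = 1" (no space between the prompt and the code) fails
    # the check above, because the character at space_idx is 'x' rather than
    # ' ', and the ValueError is raised.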


SKIP = doctest.register_optionflag('SKIP')


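# Note added for this excerpt (not in the original file): the SKIP flag can be
# applied to an individual example with a doctest directive, telling the
# runner not to execute that example, e.g.:
#
#     >>> 1 / 0   # doctest: +SKIP

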
class IPDocTestRunner(doctest.DocTestRunner, object):
    """Test runner that synchronizes the IPython namespace with test globals.
    """

    def run(self, test, compileflags=None, out=None, clear_globs=True):

        # Hack: ipython needs access to the execution context of the example,
        # so that it can propagate user variables loaded by %run into
        # test.globs.  We put them here into our modified %run as a function
        # attribute.  Our new %run will then only make the namespace update
        # when called (rather than unconditionally updating test.globs here
        # for all examples, most of which won't be calling %run anyway).
        #_ip._ipdoctest_test_globs = test.globs
        #_ip._ipdoctest_test_filename = test.filename

        test.globs.update(_ip.user_ns)

        # Override terminal size to standardise traceback format
        with modified_env({'COLUMNS': '80', 'LINES': '24'}):
            return super(IPDocTestRunner, self).run(test, compileflags, out,
                                                    clear_globs)
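

# ---------------------------------------------------------------------------
# Illustrative usage sketch added for this excerpt (not part of the original
# file).  IPDocTestRunner slots into the standard doctest flow of
# parse -> build a DocTest -> run it; the subclass above only adds seeding
# test.globs from _ip.user_ns and pinning the terminal size.  The sketch below
# shows that flow with the stdlib classes, so it stays self-contained and does
# not depend on the IPython-specific globals used elsewhere in this module.
if __name__ == '__main__':
    import doctest

    sample = """
    >>> 2 + 2
    4
    """

    # Build a DocTest from the sample string and execute it.
    parser = doctest.DocTestParser()
    test = parser.get_doctest(sample, globs={}, name='sample',
                              filename='<sample>', lineno=0)
    runner = doctest.DocTestRunner(verbose=False)
    runner.run(test)
    # With no failures this prints: TestResults(failed=0, attempted=1)
    print(runner.summarize())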