Refactor `IPCompleter` Matcher API
krassowski
This diff has been collapsed as it changes many lines (767 lines changed).
@@ -1,2272 +1,2763
1 """Completion for IPython.
1 """Completion for IPython.
2
2
3 This module started as fork of the rlcompleter module in the Python standard
3 This module started as fork of the rlcompleter module in the Python standard
4 library. The original enhancements made to rlcompleter have been sent
4 library. The original enhancements made to rlcompleter have been sent
5 upstream and were accepted as of Python 2.3,
5 upstream and were accepted as of Python 2.3,
6
6
7 This module now supports a wide variety of completion mechanisms, both for
7 This module now supports a wide variety of completion mechanisms, both for
8 normal classic Python code and as a completer for IPython-specific
8 normal classic Python code and as a completer for IPython-specific
9 syntax like magics.
9 syntax like magics.
10
10
11 Latex and Unicode completion
11 Latex and Unicode completion
12 ============================
12 ============================
13
13
14 IPython and compatible frontends not only can complete your code, but can help
14 IPython and compatible frontends not only can complete your code, but can help
15 you to input a wide range of characters. In particular we allow you to insert
15 you to input a wide range of characters. In particular we allow you to insert
16 a unicode character using the tab completion mechanism.
16 a unicode character using the tab completion mechanism.
17
17
18 Forward latex/unicode completion
18 Forward latex/unicode completion
19 --------------------------------
19 --------------------------------
20
20
21 Forward completion allows you to easily type a unicode character using its latex
21 Forward completion allows you to easily type a unicode character using its latex
22 name, or unicode long description. To do so, type a backslash followed by the
22 name, or unicode long description. To do so, type a backslash followed by the
23 relevant name and press tab:
23 relevant name and press tab:
24
24
25
25
26 Using latex completion:
26 Using latex completion:
27
27
28 .. code::
28 .. code::
29
29
30 \\alpha<tab>
30 \\alpha<tab>
31 α
31 α
32
32
33 or using unicode completion:
33 or using unicode completion:
34
34
35
35
36 .. code::
36 .. code::
37
37
38 \\GREEK SMALL LETTER ALPHA<tab>
38 \\GREEK SMALL LETTER ALPHA<tab>
39 α
39 α
40
40
41
41
42 Only valid Python identifiers will complete. Combining characters (like arrow or
42 Only valid Python identifiers will complete. Combining characters (like arrow or
43 dots) are also available; unlike latex, they need to be put after their
43 dots) are also available; unlike latex, they need to be put after their
44 counterpart, that is to say, ``F\\\\vec<tab>`` is correct, not ``\\\\vec<tab>F``.
44 counterpart, that is to say, ``F\\\\vec<tab>`` is correct, not ``\\\\vec<tab>F``.
45
45
46 Some browsers are known to display combining characters incorrectly.
46 Some browsers are known to display combining characters incorrectly.
47
47
48 Backward latex completion
48 Backward latex completion
49 -------------------------
49 -------------------------
50
50
51 It is sometimes challenging to know how to type a character. If you are using
51 It is sometimes challenging to know how to type a character. If you are using
52 IPython, or any compatible frontend, you can prepend a backslash to the character
52 IPython, or any compatible frontend, you can prepend a backslash to the character
53 and press ``<tab>`` to expand it to its latex form.
53 and press ``<tab>`` to expand it to its latex form.
54
54
55 .. code::
55 .. code::
56
56
57 \\α<tab>
57 \\α<tab>
58 \\alpha
58 \\alpha
59
59
60
60
61 Both forward and backward completions can be deactivated by setting the
61 Both forward and backward completions can be deactivated by setting the
62 ``Completer.backslash_combining_completions`` option to ``False``.
62 ``Completer.backslash_combining_completions`` option to ``False``.
63
63
64
64
65 Experimental
65 Experimental
66 ============
66 ============
67
67
68 Starting with IPython 6.0, this module can make use of the Jedi library to
68 Starting with IPython 6.0, this module can make use of the Jedi library to
69 generate completions both using static analysis of the code, and dynamically
69 generate completions both using static analysis of the code, and dynamically
70 inspecting multiple namespaces. Jedi is an autocompletion and static analysis
70 inspecting multiple namespaces. Jedi is an autocompletion and static analysis
71 library for Python. The APIs attached to this new mechanism are unstable and will
71 library for Python. The APIs attached to this new mechanism are unstable and will
72 raise unless used in a :any:`provisionalcompleter` context manager.
72 raise unless used in a :any:`provisionalcompleter` context manager.
73
73
74 You will find that the following are experimental:
74 You will find that the following are experimental:
75
75
76 - :any:`provisionalcompleter`
76 - :any:`provisionalcompleter`
77 - :any:`IPCompleter.completions`
77 - :any:`IPCompleter.completions`
78 - :any:`Completion`
78 - :any:`Completion`
79 - :any:`rectify_completions`
79 - :any:`rectify_completions`
80
80
81 .. note::
81 .. note::
82
82
83 better name for :any:`rectify_completions` ?
83 better name for :any:`rectify_completions` ?
84
84
85 We welcome any feedback on these new APIs, and we also encourage you to try this
85 We welcome any feedback on these new APIs, and we also encourage you to try this
86 module in debug mode (start IPython with ``--Completer.debug=True``) in order
86 module in debug mode (start IPython with ``--Completer.debug=True``) in order
87 to have extra logging information if :any:`jedi` is crashing, or if current
87 to have extra logging information if :any:`jedi` is crashing, or if current
88 IPython completer pending deprecations are returning results not yet handled
88 IPython completer pending deprecations are returning results not yet handled
89 by :any:`jedi`
89 by :any:`jedi`
90
90
91 Using Jedi for tab completion allows snippets like the following to work without
91 Using Jedi for tab completion allows snippets like the following to work without
92 having to execute any code:
92 having to execute any code:
93
93
94 >>> myvar = ['hello', 42]
94 >>> myvar = ['hello', 42]
95 ... myvar[1].bi<tab>
95 ... myvar[1].bi<tab>
96
96
97 Tab completion will be able to infer that ``myvar[1]`` is a real number without
97 Tab completion will be able to infer that ``myvar[1]`` is a real number without
98 executing any code unlike the previously available ``IPCompleter.greedy``
98 executing any code unlike the previously available ``IPCompleter.greedy``
99 option.
99 option.
100
100
101 Be sure to update :any:`jedi` to the latest stable version or to try the
101 Be sure to update :any:`jedi` to the latest stable version or to try the
102 current development version to get better completions.
102 current development version to get better completions.
103
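A minimal sketch of driving this experimental API (assuming the active shell is available as ``ip``, e.g. from ``get_ipython()``):

.. code::

    from IPython.core.completer import provisionalcompleter

    code = "myvar = ['hello', 42]\nmyvar[1].bi"
    with provisionalcompleter():
        # ``IPCompleter.completions`` yields provisional ``Completion`` objects;
        # the second argument is the cursor offset within ``code``.
        completions = list(ip.Completer.completions(code, len(code)))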
104 Matchers
105 ========
106
107 All completion routines are implemented using the unified ``matchers`` API.
108 The matchers API is provisional and subject to change without notice.
109
110 The built-in matchers include:
111
112 - ``IPCompleter.dict_key_matcher``: dictionary key completions,
113 - ``IPCompleter.magic_matcher``: completions for magics,
114 - ``IPCompleter.unicode_name_matcher``, ``IPCompleter.fwd_unicode_matcher`` and ``IPCompleter.latex_matcher``: see `Forward latex/unicode completion`_,
115 - ``back_unicode_name_matcher`` and ``back_latex_name_matcher``: see `Backward latex completion`_,
116 - ``IPCompleter.file_matcher``: paths to files and directories,
117 - ``IPCompleter.python_func_kw_matcher``: function keywords,
118 - ``IPCompleter.python_matches``: globals and attributes (v1 API),
119 - ``IPCompleter.jedi_matcher``: static analysis with Jedi,
120 - ``IPCompleter.custom_completer_matcher``: pluggable completer with a default implementation in ``core.InteractiveShell``,
121 which uses the IPython hooks system (``complete_command``) with string dispatch (including regular expressions).
122 Unlike other matchers, ``custom_completer_matcher`` will not suppress Jedi results, in order to match the
123 behaviour of earlier IPython versions.
124
125 Custom matchers can be added by appending to the ``IPCompleter.custom_matchers`` list
126 (as sketched below), but please be aware that this API is subject to change.
103 """
127 """
104
128
105
129
106 # Copyright (c) IPython Development Team.
130 # Copyright (c) IPython Development Team.
107 # Distributed under the terms of the Modified BSD License.
131 # Distributed under the terms of the Modified BSD License.
108 #
132 #
109 # Some of this code originated from rlcompleter in the Python standard library
133 # Some of this code originated from rlcompleter in the Python standard library
110 # Copyright (C) 2001 Python Software Foundation, www.python.org
134 # Copyright (C) 2001 Python Software Foundation, www.python.org
111
135
112
136
113 import builtins as builtin_mod
137 import builtins as builtin_mod
114 import glob
138 import glob
115 import inspect
139 import inspect
116 import itertools
140 import itertools
117 import keyword
141 import keyword
118 import os
142 import os
119 import re
143 import re
120 import string
144 import string
121 import sys
145 import sys
122 import time
146 import time
123 import unicodedata
147 import unicodedata
124 import uuid
148 import uuid
125 import warnings
149 import warnings
126 from contextlib import contextmanager
150 from contextlib import contextmanager
151 from functools import lru_cache, partial
127 from importlib import import_module
152 from importlib import import_module
128 from types import SimpleNamespace
153 from types import SimpleNamespace
129 from typing import Iterable, Iterator, List, Tuple, Union, Any, Sequence, Dict, NamedTuple, Pattern, Optional
154 from typing import (
155 Iterable,
156 Iterator,
157 List,
158 Tuple,
159 Union,
160 Any,
161 Sequence,
162 Dict,
163 NamedTuple,
164 Pattern,
165 Optional,
166 Callable,
167 TYPE_CHECKING,
168 Set,
169 )
170 from typing_extensions import TypedDict, NotRequired
130
171
131 from IPython.core.error import TryNext
172 from IPython.core.error import TryNext
132 from IPython.core.inputtransformer2 import ESC_MAGIC
173 from IPython.core.inputtransformer2 import ESC_MAGIC
133 from IPython.core.latex_symbols import latex_symbols, reverse_latex_symbol
174 from IPython.core.latex_symbols import latex_symbols, reverse_latex_symbol
134 from IPython.core.oinspect import InspectColors
175 from IPython.core.oinspect import InspectColors
135 from IPython.testing.skipdoctest import skip_doctest
176 from IPython.testing.skipdoctest import skip_doctest
136 from IPython.utils import generics
177 from IPython.utils import generics
137 from IPython.utils.dir2 import dir2, get_real_method
178 from IPython.utils.dir2 import dir2, get_real_method
138 from IPython.utils.path import ensure_dir_exists
179 from IPython.utils.path import ensure_dir_exists
139 from IPython.utils.process import arg_split
180 from IPython.utils.process import arg_split
140 from traitlets import Bool, Enum, Int, List as ListTrait, Unicode, default, observe
181 from traitlets import (
182 Bool,
183 Enum,
184 Int,
185 List as ListTrait,
186 Unicode,
187 Dict as DictTrait,
188 Union as UnionTrait,
189 default,
190 observe,
191 )
141 from traitlets.config.configurable import Configurable
192 from traitlets.config.configurable import Configurable
142
193
143 import __main__
194 import __main__
144
195
145 # skip module docstests
196 # skip module docstests
146 __skip_doctest__ = True
197 __skip_doctest__ = True
147
198
199
148 try:
200 try:
149 import jedi
201 import jedi
150 jedi.settings.case_insensitive_completion = False
202 jedi.settings.case_insensitive_completion = False
151 import jedi.api.helpers
203 import jedi.api.helpers
152 import jedi.api.classes
204 import jedi.api.classes
153 JEDI_INSTALLED = True
205 JEDI_INSTALLED = True
154 except ImportError:
206 except ImportError:
155 JEDI_INSTALLED = False
207 JEDI_INSTALLED = False
156 #-----------------------------------------------------------------------------
208
209 if TYPE_CHECKING:
210 from typing import cast
211 else:
212
213 def cast(obj, _type):
214 return obj
215
216
217 # -----------------------------------------------------------------------------
157 # Globals
218 # Globals
158 #-----------------------------------------------------------------------------
219 #-----------------------------------------------------------------------------
159
220
160 # Ranges where we have most of the valid unicode names. We could be more finely
221 # Ranges where we have most of the valid unicode names. We could be more finely
161 # grained, but is it worth it for performance? While unicode has characters in the
222 # grained, but is it worth it for performance? While unicode has characters in the
162 # range 0, 0x110000, we seem to have names for only about 10% of those (131808 as I
223 # range 0, 0x110000, we seem to have names for only about 10% of those (131808 as I
163 # write this). With the ranges below we cover them all, with a density of ~67%; the
224 # write this). With the ranges below we cover them all, with a density of ~67%; the
164 # biggest next gap we could consider only adds about 1% density and there are 600
225 # biggest next gap we could consider only adds about 1% density and there are 600
165 # gaps that would need hard coding.
226 # gaps that would need hard coding.
166 _UNICODE_RANGES = [(32, 0x3134b), (0xe0001, 0xe01f0)]
227 _UNICODE_RANGES = [(32, 0x3134b), (0xe0001, 0xe01f0)]
167
228
168 # Public API
229 # Public API
169 __all__ = ['Completer','IPCompleter']
230 __all__ = ['Completer','IPCompleter']
170
231
171 if sys.platform == 'win32':
232 if sys.platform == 'win32':
172 PROTECTABLES = ' '
233 PROTECTABLES = ' '
173 else:
234 else:
174 PROTECTABLES = ' ()[]{}?=\\|;:\'#*"^&'
235 PROTECTABLES = ' ()[]{}?=\\|;:\'#*"^&'
175
236
176 # Protect against returning an enormous number of completions which the frontend
237 # Protect against returning an enormous number of completions which the frontend
177 # may have trouble processing.
238 # may have trouble processing.
178 MATCHES_LIMIT = 500
239 MATCHES_LIMIT = 500
179
240
241 # Completion type reported when no type can be inferred.
242 _UNKNOWN_TYPE = "<unknown>"
180
243
181 class ProvisionalCompleterWarning(FutureWarning):
244 class ProvisionalCompleterWarning(FutureWarning):
182 """
245 """
183 Exception raised by an experimental feature in this module.
246 Exception raised by an experimental feature in this module.
184
247
185 Wrap code in :any:`provisionalcompleter` context manager if you
248 Wrap code in :any:`provisionalcompleter` context manager if you
186 are certain you want to use an unstable feature.
249 are certain you want to use an unstable feature.
187 """
250 """
188 pass
251 pass
189
252
190 warnings.filterwarnings('error', category=ProvisionalCompleterWarning)
253 warnings.filterwarnings('error', category=ProvisionalCompleterWarning)
191
254
192
255
193 @skip_doctest
256 @skip_doctest
194 @contextmanager
257 @contextmanager
195 def provisionalcompleter(action='ignore'):
258 def provisionalcompleter(action='ignore'):
196 """
259 """
197 This context manager has to be used in any place where unstable completer
260 This context manager has to be used in any place where unstable completer
198 behavior and API may be called.
261 behavior and API may be called.
199
262
200 >>> with provisionalcompleter():
263 >>> with provisionalcompleter():
201 ... completer.do_experimental_things() # works
264 ... completer.do_experimental_things() # works
202
265
203 >>> completer.do_experimental_things() # raises.
266 >>> completer.do_experimental_things() # raises.
204
267
205 .. note::
268 .. note::
206
269
207 Unstable
270 Unstable
208
271
209 By using this context manager you agree that the API in use may change
272 By using this context manager you agree that the API in use may change
210 without warning, and that you won't complain if they do so.
273 without warning, and that you won't complain if they do so.
211
274
212 You also understand that, if the API is not to your liking, you should report
275 You also understand that, if the API is not to your liking, you should report
213 a bug to explain your use case upstream.
276 a bug to explain your use case upstream.
214
277
215 We'll be happy to get your feedback, feature requests, and improvements on
278 We'll be happy to get your feedback, feature requests, and improvements on
216 any of the unstable APIs!
279 any of the unstable APIs!
217 """
280 """
218 with warnings.catch_warnings():
281 with warnings.catch_warnings():
219 warnings.filterwarnings(action, category=ProvisionalCompleterWarning)
282 warnings.filterwarnings(action, category=ProvisionalCompleterWarning)
220 yield
283 yield
221
284
222
285
223 def has_open_quotes(s):
286 def has_open_quotes(s):
224 """Return whether a string has open quotes.
287 """Return whether a string has open quotes.
225
288
226 This simply counts whether the number of quote characters of either type in
289 This simply counts whether the number of quote characters of either type in
227 the string is odd.
290 the string is odd.
228
291
229 Returns
292 Returns
230 -------
293 -------
231 If there is an open quote, the quote character is returned. Else, return
294 If there is an open quote, the quote character is returned. Else, return
232 False.
295 False.
233 """
296 """
234 # We check " first, then ', so complex cases with nested quotes will get
297 # We check " first, then ', so complex cases with nested quotes will get
235 # the " to take precedence.
298 # the " to take precedence.
236 if s.count('"') % 2:
299 if s.count('"') % 2:
237 return '"'
300 return '"'
238 elif s.count("'") % 2:
301 elif s.count("'") % 2:
239 return "'"
302 return "'"
240 else:
303 else:
241 return False
304 return False
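In practice the odd-count heuristic behaves as follows::

    has_open_quotes("print('hello")   # -> "'"
    has_open_quotes('say("hi')        # -> '"'
    has_open_quotes("'done'")         # -> False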
242
305
243
306
244 def protect_filename(s, protectables=PROTECTABLES):
307 def protect_filename(s, protectables=PROTECTABLES):
245 """Escape a string to protect certain characters."""
308 """Escape a string to protect certain characters."""
246 if set(s) & set(protectables):
309 if set(s) & set(protectables):
247 if sys.platform == "win32":
310 if sys.platform == "win32":
248 return '"' + s + '"'
311 return '"' + s + '"'
249 else:
312 else:
250 return "".join(("\\" + c if c in protectables else c) for c in s)
313 return "".join(("\\" + c if c in protectables else c) for c in s)
251 else:
314 else:
252 return s
315 return s
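For example, the escaping differs by platform (result shown as the raw string)::

    protect_filename("My Documents/report (1).txt")
    # on POSIX   -> My\ Documents/report\ \(1\).txt
    # on Windows -> "My Documents/report (1).txt"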
253
316
254
317
255 def expand_user(path:str) -> Tuple[str, bool, str]:
318 def expand_user(path:str) -> Tuple[str, bool, str]:
256 """Expand ``~``-style usernames in strings.
319 """Expand ``~``-style usernames in strings.
257
320
258 This is similar to :func:`os.path.expanduser`, but it computes and returns
321 This is similar to :func:`os.path.expanduser`, but it computes and returns
259 extra information that will be useful if the input was being used in
322 extra information that will be useful if the input was being used in
260 computing completions, and you wish to return the completions with the
323 computing completions, and you wish to return the completions with the
261 original '~' instead of its expanded value.
324 original '~' instead of its expanded value.
262
325
263 Parameters
326 Parameters
264 ----------
327 ----------
265 path : str
328 path : str
266 String to be expanded. If no ~ is present, the output is the same as the
329 String to be expanded. If no ~ is present, the output is the same as the
267 input.
330 input.
268
331
269 Returns
332 Returns
270 -------
333 -------
271 newpath : str
334 newpath : str
272 Result of ~ expansion in the input path.
335 Result of ~ expansion in the input path.
273 tilde_expand : bool
336 tilde_expand : bool
274 Whether any expansion was performed or not.
337 Whether any expansion was performed or not.
275 tilde_val : str
338 tilde_val : str
276 The value that ~ was replaced with.
339 The value that ~ was replaced with.
277 """
340 """
278 # Default values
341 # Default values
279 tilde_expand = False
342 tilde_expand = False
280 tilde_val = ''
343 tilde_val = ''
281 newpath = path
344 newpath = path
282
345
283 if path.startswith('~'):
346 if path.startswith('~'):
284 tilde_expand = True
347 tilde_expand = True
285 rest = len(path)-1
348 rest = len(path)-1
286 newpath = os.path.expanduser(path)
349 newpath = os.path.expanduser(path)
287 if rest:
350 if rest:
288 tilde_val = newpath[:-rest]
351 tilde_val = newpath[:-rest]
289 else:
352 else:
290 tilde_val = newpath
353 tilde_val = newpath
291
354
292 return newpath, tilde_expand, tilde_val
355 return newpath, tilde_expand, tilde_val
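For example, on a machine where ``~`` expands to ``/home/user`` (a hypothetical home directory), the extra return values let the caller later restore the tilde with ``compress_user``::

    newpath, tilde_expand, tilde_val = expand_user("~/notebooks")
    # newpath      -> '/home/user/notebooks'
    # tilde_expand -> True
    # tilde_val    -> '/home/user'
    compress_user(newpath, tilde_expand, tilde_val)   # -> '~/notebooks'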
293
356
294
357
295 def compress_user(path:str, tilde_expand:bool, tilde_val:str) -> str:
358 def compress_user(path:str, tilde_expand:bool, tilde_val:str) -> str:
296 """Does the opposite of expand_user, with its outputs.
359 """Does the opposite of expand_user, with its outputs.
297 """
360 """
298 if tilde_expand:
361 if tilde_expand:
299 return path.replace(tilde_val, '~')
362 return path.replace(tilde_val, '~')
300 else:
363 else:
301 return path
364 return path
302
365
303
366
304 def completions_sorting_key(word):
367 def completions_sorting_key(word):
305 """key for sorting completions
368 """key for sorting completions
306
369
307 This does several things:
370 This does several things:
308
371
309 - Demote any completions starting with underscores to the end
372 - Demote any completions starting with underscores to the end
310 - Insert any %magic and %%cellmagic completions in the alphabetical order
373 - Insert any %magic and %%cellmagic completions in the alphabetical order
311 by their name
374 by their name
312 """
375 """
313 prio1, prio2 = 0, 0
376 prio1, prio2 = 0, 0
314
377
315 if word.startswith('__'):
378 if word.startswith('__'):
316 prio1 = 2
379 prio1 = 2
317 elif word.startswith('_'):
380 elif word.startswith('_'):
318 prio1 = 1
381 prio1 = 1
319
382
320 if word.endswith('='):
383 if word.endswith('='):
321 prio1 = -1
384 prio1 = -1
322
385
323 if word.startswith('%%'):
386 if word.startswith('%%'):
324 # If there's another % in there, this is something else, so leave it alone
387 # If there's another % in there, this is something else, so leave it alone
325 if not "%" in word[2:]:
388 if not "%" in word[2:]:
326 word = word[2:]
389 word = word[2:]
327 prio2 = 2
390 prio2 = 2
328 elif word.startswith('%'):
391 elif word.startswith('%'):
329 if not "%" in word[1:]:
392 if not "%" in word[1:]:
330 word = word[1:]
393 word = word[1:]
331 prio2 = 1
394 prio2 = 1
332
395
333 return prio1, word, prio2
396 return prio1, word, prio2
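A quick illustration of the resulting order (magics interleaved by their bare name, underscore-prefixed names demoted)::

    words = ["__init__", "_private", "zip", "%%timeit", "%time", "abs"]
    sorted(words, key=completions_sorting_key)
    # -> ['abs', '%time', '%%timeit', 'zip', '_private', '__init__']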
334
397
335
398
336 class _FakeJediCompletion:
399 class _FakeJediCompletion:
337 """
400 """
338 This is a workaround to communicate to the UI that Jedi has crashed and to
401 This is a workaround to communicate to the UI that Jedi has crashed and to
339 report a bug. Will be used only if :any:`IPCompleter.debug` is set to true.
402 report a bug. Will be used only if :any:`IPCompleter.debug` is set to true.
340
403
341 Added in IPython 6.0 so should likely be removed for 7.0
404 Added in IPython 6.0 so should likely be removed for 7.0
342
405
343 """
406 """
344
407
345 def __init__(self, name):
408 def __init__(self, name):
346
409
347 self.name = name
410 self.name = name
348 self.complete = name
411 self.complete = name
349 self.type = 'crashed'
412 self.type = 'crashed'
350 self.name_with_symbols = name
413 self.name_with_symbols = name
351 self.signature = ''
414 self.signature = ''
352 self._origin = 'fake'
415 self._origin = 'fake'
353
416
354 def __repr__(self):
417 def __repr__(self):
355 return '<Fake completion object jedi has crashed>'
418 return '<Fake completion object jedi has crashed>'
356
419
357
420
421 _JediCompletionLike = Union[jedi.api.Completion, _FakeJediCompletion]
422
423
358 class Completion:
424 class Completion:
359 """
425 """
360 Completion object used and returned by IPython completers.
426 Completion object used and returned by IPython completers.
361
427
362 .. warning::
428 .. warning::
363
429
364 Unstable
430 Unstable
365
431
366 This function is unstable, API may change without warning.
432 This function is unstable, API may change without warning.
367 It will also raise unless used in the proper context manager.
433 It will also raise unless used in the proper context manager.
368
434
369 This acts as a middle-ground :any:`Completion` object between the
435 This acts as a middle-ground :any:`Completion` object between the
370 :any:`jedi.api.classes.Completion` object and the Prompt Toolkit completion
436 :any:`jedi.api.classes.Completion` object and the Prompt Toolkit completion
371 object. While Jedi needs a lot of information about the evaluator and how the
437 object. While Jedi needs a lot of information about the evaluator and how the
372 code should be run/inspected, PromptToolkit (and other frontends) mostly
438 code should be run/inspected, PromptToolkit (and other frontends) mostly
373 need user-facing information.
439 need user-facing information.
374
440
375 - Which range should be replaced by what.
441 - Which range should be replaced by what.
376 - Some metadata (like completion type), or meta information to be displayed to
442 - Some metadata (like completion type), or meta information to be displayed to
377 the user.
443 the user.
378
444
379 For debugging purposes we can also store the origin of the completion (``jedi``,
445 For debugging purposes we can also store the origin of the completion (``jedi``,
380 ``IPython.python_matches``, ``IPython.magics_matches``...).
446 ``IPython.python_matches``, ``IPython.magics_matches``...).
381 """
447 """
382
448
383 __slots__ = ['start', 'end', 'text', 'type', 'signature', '_origin']
449 __slots__ = ['start', 'end', 'text', 'type', 'signature', '_origin']
384
450
385 def __init__(self, start: int, end: int, text: str, *, type: str=None, _origin='', signature='') -> None:
451 def __init__(self, start: int, end: int, text: str, *, type: str=None, _origin='', signature='') -> None:
386 warnings.warn("``Completion`` is a provisional API (as of IPython 6.0). "
452 warnings.warn("``Completion`` is a provisional API (as of IPython 6.0). "
387 "It may change without warnings. "
453 "It may change without warnings. "
388 "Use in corresponding context manager.",
454 "Use in corresponding context manager.",
389 category=ProvisionalCompleterWarning, stacklevel=2)
455 category=ProvisionalCompleterWarning, stacklevel=2)
390
456
391 self.start = start
457 self.start = start
392 self.end = end
458 self.end = end
393 self.text = text
459 self.text = text
394 self.type = type
460 self.type = type
395 self.signature = signature
461 self.signature = signature
396 self._origin = _origin
462 self._origin = _origin
397
463
398 def __repr__(self):
464 def __repr__(self):
399 return '<Completion start=%s end=%s text=%r type=%r, signature=%r,>' % \
465 return '<Completion start=%s end=%s text=%r type=%r, signature=%r,>' % \
400 (self.start, self.end, self.text, self.type or '?', self.signature or '?')
466 (self.start, self.end, self.text, self.type or '?', self.signature or '?')
401
467
402 def __eq__(self, other)->Bool:
468 def __eq__(self, other)->Bool:
403 """
469 """
404 Equality and hash do not hash the type (as some completers may not be
470 Equality and hash do not hash the type (as some completers may not be
405 able to infer the type), but are used to (partially) de-duplicate
471 able to infer the type), but are used to (partially) de-duplicate
406 completions.
472 completions.
407
473
408 Completely de-duplicating completions is a bit trickier than just
474 Completely de-duplicating completions is a bit trickier than just
409 comparing, as it depends on surrounding text, which Completions are not
475 comparing, as it depends on surrounding text, which Completions are not
410 aware of.
476 aware of.
411 """
477 """
412 return self.start == other.start and \
478 return self.start == other.start and \
413 self.end == other.end and \
479 self.end == other.end and \
414 self.text == other.text
480 self.text == other.text
415
481
416 def __hash__(self):
482 def __hash__(self):
417 return hash((self.start, self.end, self.text))
483 return hash((self.start, self.end, self.text))
418
484
419
485
486 class SimpleCompletion:
487 # TODO: decide whether we should keep the ``SimpleCompletion`` separate from ``Completion``
488 # there are two advantages of keeping them separate:
489 # - compatibility with old readline `Completer.complete` interface (less important)
490 # - ease of use for third parties (just return matched text and don't worry about coordinates)
491 # the disadvantage is that we need to loop over the completions again to transform them into
492 # `Completion` objects (but it was done like that before the refactor into `SimpleCompletion` too).
493 __slots__ = ["text", "type"]
494
495 def __init__(self, text: str, *, type: str = None):
496 self.text = text
497 self.type = type
498
499 def __repr__(self):
500 return f"<SimpleCompletion text={self.text!r} type={self.type!r}>"
501
502
503 class _MatcherResultBase(TypedDict):
504
505 #: Suffix of the provided ``CompletionContext.token``; if not given, defaults to the full token.
506 matched_fragment: NotRequired[str]
507
508 #: whether to suppress results from other matchers; default is False.
509 suppress_others: NotRequired[bool]
510
511 #: are completions already ordered and should be left as-is? default is False.
512 ordered: NotRequired[bool]
513
514 # TODO: should we use a relevance score for ordering?
515 #: value between 0 (likely not relevant) and 100 (likely relevant); default is 50.
516 # relevance: NotRequired[float]
517
518
519 class SimpleMatcherResult(_MatcherResultBase):
520 """Result of new-style completion matcher."""
521
522 #: list of candidate completions
523 completions: Sequence[SimpleCompletion]
524
525
526 class _JediMatcherResult(_MatcherResultBase):
527 """Matching result returned by Jedi (will be processed differently)"""
528
529 #: list of candidate completions
530 completions: Iterable[_JediCompletionLike]
531
532
533 class CompletionContext(NamedTuple):
534 # rationale: many legacy matchers relied on completer state (`self.text_until_cursor`)
535 # which was not explicitly visible as an argument of the matcher, making any refactor
536 # prone to errors; by explicitly passing `cursor_position` we can decouple the matchers
537 # from the completer, and make substituting them in sub-classes easier.
538
539 #: Relevant fragment of code directly preceding the cursor.
540 #: The extraction of the token is implemented via a splitter heuristic
541 #: (following readline behaviour for legacy reasons), which is user-configurable
542 #: (by switching the greedy mode).
543 token: str
544
545 full_text: str
546
547 #: Cursor position in the line (the same for ``full_text`` and ``text``).
548 cursor_position: int
549
550 #: Cursor line in ``full_text``.
551 cursor_line: int
552
553 @property
554 @lru_cache(maxsize=None) # TODO change to @cache after dropping Python 3.7
555 def text_until_cursor(self) -> str:
556 return self.line_with_cursor[: self.cursor_position]
557
558 @property
559 @lru_cache(maxsize=None) # TODO change to @cache after dropping Python 3.7
560 def line_with_cursor(self) -> str:
561 return self.full_text.split("\n")[self.cursor_line]
562
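A small sketch of how the derived properties relate to the constructor arguments (values are made up)::

    ctx = CompletionContext(
        token="pri", full_text="x = 1\npri", cursor_position=3, cursor_line=1
    )
    ctx.line_with_cursor    # -> 'pri'  (line 1 of ``full_text``)
    ctx.text_until_cursor   # -> 'pri'  (that line up to position 3)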
563
564 MatcherResult = Union[SimpleMatcherResult, _JediMatcherResult]
565
566 MatcherAPIv1 = Callable[[str], List[str]]
567 MatcherAPIv2 = Callable[[CompletionContext], MatcherResult]
568 Matcher = Union[MatcherAPIv1, MatcherAPIv2]
569
570
571 def completion_matcher(
572 *, priority: float = None, identifier: str = None, api_version=1
573 ):
574 """Adds attributes describing the matcher.
575
576 Parameters
577 ----------
578 priority : Optional[float]
579 The priority of the matcher, determines the order of execution of matchers.
580 Higher priority means that the matcher will be executed first. Defaults to 50.
581 identifier : Optional[str]
582 identifier of the matcher allowing users to modify the behaviour via traitlets,
583 and also used for debugging (will be passed as ``origin`` with the completions).
584 Defaults to matcher function ``__qualname__``.
585 api_version: Optional[int]
586 version of the Matcher API used by this matcher.
587 Currently supported values are 1 and 2.
588 Defaults to 1.
589 """
590
591 def wrapper(func: Matcher):
592 func.matcher_priority = priority
593 func.matcher_identifier = identifier or func.__qualname__
594 func.matcher_api_version = api_version
595 return func
596
597 return wrapper
598
599
600 def _get_matcher_id(matcher: Matcher):
601 return getattr(matcher, "matcher_identifier", matcher.__qualname__)
602
603
604 def _get_matcher_api_version(matcher):
605 return getattr(matcher, "matcher_api_version", 1)
606
607
608 context_matcher = partial(completion_matcher, api_version=2)
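A rough sketch of a v2 (context) matcher built with these helpers; the matcher itself is made up for illustration and returns a ``SimpleMatcherResult`` dictionary::

    @context_matcher()
    def upper_case_matcher(context: CompletionContext) -> SimpleMatcherResult:
        # Suggest an upper-cased variant of the current token (illustrative only).
        token = context.token
        completions = [SimpleCompletion(token.upper(), type=_UNKNOWN_TYPE)] if token else []
        return {
            "completions": completions,
            # keep results from the other matchers visible
            "suppress_others": False,
        }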
609
610
420 _IC = Iterable[Completion]
611 _IC = Iterable[Completion]
421
612
422
613
423 def _deduplicate_completions(text: str, completions: _IC)-> _IC:
614 def _deduplicate_completions(text: str, completions: _IC)-> _IC:
424 """
615 """
425 Deduplicate a set of completions.
616 Deduplicate a set of completions.
426
617
427 .. warning::
618 .. warning::
428
619
429 Unstable
620 Unstable
430
621
431 This function is unstable, API may change without warning.
622 This function is unstable, API may change without warning.
432
623
433 Parameters
624 Parameters
434 ----------
625 ----------
435 text : str
626 text : str
436 text that should be completed.
627 text that should be completed.
437 completions : Iterator[Completion]
628 completions : Iterator[Completion]
438 iterator over the completions to deduplicate
629 iterator over the completions to deduplicate
439
630
440 Yields
631 Yields
441 ------
632 ------
442 `Completions` objects
633 `Completions` objects
443 Completions coming from multiple sources may be different but end up having
634 Completions coming from multiple sources may be different but end up having
444 the same effect when applied to ``text``. If this is the case, this will
635 the same effect when applied to ``text``. If this is the case, this will
445 consider completions as equal and only emit the first encountered.
636 consider completions as equal and only emit the first encountered.
446 Not folded in `completions()` yet for debugging purposes, and to detect when
637 Not folded in `completions()` yet for debugging purposes, and to detect when
447 the IPython completer does return things that Jedi does not, but should be
638 the IPython completer does return things that Jedi does not, but should be
448 at some point.
639 at some point.
449 """
640 """
450 completions = list(completions)
641 completions = list(completions)
451 if not completions:
642 if not completions:
452 return
643 return
453
644
454 new_start = min(c.start for c in completions)
645 new_start = min(c.start for c in completions)
455 new_end = max(c.end for c in completions)
646 new_end = max(c.end for c in completions)
456
647
457 seen = set()
648 seen = set()
458 for c in completions:
649 for c in completions:
459 new_text = text[new_start:c.start] + c.text + text[c.end:new_end]
650 new_text = text[new_start:c.start] + c.text + text[c.end:new_end]
460 if new_text not in seen:
651 if new_text not in seen:
461 yield c
652 yield c
462 seen.add(new_text)
653 seen.add(new_text)
463
654
464
655
465 def rectify_completions(text: str, completions: _IC, *, _debug: bool = False) -> _IC:
656 def rectify_completions(text: str, completions: _IC, *, _debug: bool = False) -> _IC:
466 """
657 """
467 Rectify a set of completions to all have the same ``start`` and ``end``
658 Rectify a set of completions to all have the same ``start`` and ``end``
468
659
469 .. warning::
660 .. warning::
470
661
471 Unstable
662 Unstable
472
663
473 This function is unstable, API may change without warning.
664 This function is unstable, API may change without warning.
474 It will also raise unless used in the proper context manager.
665 It will also raise unless used in the proper context manager.
475
666
476 Parameters
667 Parameters
477 ----------
668 ----------
478 text : str
669 text : str
479 text that should be completed.
670 text that should be completed.
480 completions : Iterator[Completion]
671 completions : Iterator[Completion]
481 iterator over the completions to rectify
672 iterator over the completions to rectify
482 _debug : bool
673 _debug : bool
483 Log failed completion
674 Log failed completion
484
675
485 Notes
676 Notes
486 -----
677 -----
487 :any:`jedi.api.classes.Completion` s returned by Jedi may not have the same start and end, though
678 :any:`jedi.api.classes.Completion` s returned by Jedi may not have the same start and end, though
488 the Jupyter Protocol requires them to be the same. This will readjust
679 the Jupyter Protocol requires them to be the same. This will readjust
489 the completion to have the same ``start`` and ``end`` by padding both
680 the completion to have the same ``start`` and ``end`` by padding both
490 extremities with surrounding text.
681 extremities with surrounding text.
491
682
492 During stabilisation this should support a ``_debug`` option to log which
683 During stabilisation this should support a ``_debug`` option to log which
493 completions are returned by the IPython completer and not found in Jedi, in
684 completions are returned by the IPython completer and not found in Jedi, in
494 order to make upstream bug reports.
685 order to make upstream bug reports.
495 """
686 """
496 warnings.warn("`rectify_completions` is a provisional API (as of IPython 6.0). "
687 warnings.warn("`rectify_completions` is a provisional API (as of IPython 6.0). "
497 "It may change without warnings. "
688 "It may change without warnings. "
498 "Use in corresponding context manager.",
689 "Use in corresponding context manager.",
499 category=ProvisionalCompleterWarning, stacklevel=2)
690 category=ProvisionalCompleterWarning, stacklevel=2)
500
691
501 completions = list(completions)
692 completions = list(completions)
502 if not completions:
693 if not completions:
503 return
694 return
504 starts = (c.start for c in completions)
695 starts = (c.start for c in completions)
505 ends = (c.end for c in completions)
696 ends = (c.end for c in completions)
506
697
507 new_start = min(starts)
698 new_start = min(starts)
508 new_end = max(ends)
699 new_end = max(ends)
509
700
510 seen_jedi = set()
701 seen_jedi = set()
511 seen_python_matches = set()
702 seen_python_matches = set()
512 for c in completions:
703 for c in completions:
513 new_text = text[new_start:c.start] + c.text + text[c.end:new_end]
704 new_text = text[new_start:c.start] + c.text + text[c.end:new_end]
514 if c._origin == 'jedi':
705 if c._origin == 'jedi':
515 seen_jedi.add(new_text)
706 seen_jedi.add(new_text)
516 elif c._origin == 'IPCompleter.python_matches':
707 elif c._origin == 'IPCompleter.python_matches':
517 seen_python_matches.add(new_text)
708 seen_python_matches.add(new_text)
518 yield Completion(new_start, new_end, new_text, type=c.type, _origin=c._origin, signature=c.signature)
709 yield Completion(new_start, new_end, new_text, type=c.type, _origin=c._origin, signature=c.signature)
519 diff = seen_python_matches.difference(seen_jedi)
710 diff = seen_python_matches.difference(seen_jedi)
520 if diff and _debug:
711 if diff and _debug:
521 print('IPython.python matches have extras:', diff)
712 print('IPython.python matches have extras:', diff)
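A small sketch of the padding behaviour (wrapped in ``provisionalcompleter`` since ``Completion`` is provisional; the completions are hand-made)::

    with provisionalcompleter():
        a = Completion(start=2, end=4, text="foo")   # replaces "fo" in "d.fo"
        b = Completion(start=3, end=4, text="oob")   # replaces only "o"
        rectified = list(rectify_completions("d.fo", [a, b]))
        # both results now span [2, 4): text "foo" and "foob"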
522
713
523
714
524 if sys.platform == 'win32':
715 if sys.platform == 'win32':
525 DELIMS = ' \t\n`!@#$^&*()=+[{]}|;\'",<>?'
716 DELIMS = ' \t\n`!@#$^&*()=+[{]}|;\'",<>?'
526 else:
717 else:
527 DELIMS = ' \t\n`!@#$^&*()=+[{]}\\|;:\'",<>?'
718 DELIMS = ' \t\n`!@#$^&*()=+[{]}\\|;:\'",<>?'
528
719
529 GREEDY_DELIMS = ' =\r\n'
720 GREEDY_DELIMS = ' =\r\n'
530
721
531
722
532 class CompletionSplitter(object):
723 class CompletionSplitter(object):
533 """An object to split an input line in a manner similar to readline.
724 """An object to split an input line in a manner similar to readline.
534
725
535 By having our own implementation, we can expose readline-like completion in
726 By having our own implementation, we can expose readline-like completion in
536 a uniform manner to all frontends. This object only needs to be given the
727 a uniform manner to all frontends. This object only needs to be given the
537 line of text to be split and the cursor position on said line, and it
728 line of text to be split and the cursor position on said line, and it
538 returns the 'word' to be completed on at the cursor after splitting the
729 returns the 'word' to be completed on at the cursor after splitting the
539 entire line.
730 entire line.
540
731
541 What characters are used as splitting delimiters can be controlled by
732 What characters are used as splitting delimiters can be controlled by
542 setting the ``delims`` attribute (this is a property that internally
733 setting the ``delims`` attribute (this is a property that internally
543 automatically builds the necessary regular expression)"""
734 automatically builds the necessary regular expression)"""
544
735
545 # Private interface
736 # Private interface
546
737
547 # A string of delimiter characters. The default value makes sense for
738 # A string of delimiter characters. The default value makes sense for
548 # IPython's most typical usage patterns.
739 # IPython's most typical usage patterns.
549 _delims = DELIMS
740 _delims = DELIMS
550
741
551 # The expression (a normal string) to be compiled into a regular expression
742 # The expression (a normal string) to be compiled into a regular expression
552 # for actual splitting. We store it as an attribute mostly for ease of
743 # for actual splitting. We store it as an attribute mostly for ease of
553 # debugging, since this type of code can be so tricky to debug.
744 # debugging, since this type of code can be so tricky to debug.
554 _delim_expr = None
745 _delim_expr = None
555
746
556 # The regular expression that does the actual splitting
747 # The regular expression that does the actual splitting
557 _delim_re = None
748 _delim_re = None
558
749
559 def __init__(self, delims=None):
750 def __init__(self, delims=None):
560 delims = CompletionSplitter._delims if delims is None else delims
751 delims = CompletionSplitter._delims if delims is None else delims
561 self.delims = delims
752 self.delims = delims
562
753
563 @property
754 @property
564 def delims(self):
755 def delims(self):
565 """Return the string of delimiter characters."""
756 """Return the string of delimiter characters."""
566 return self._delims
757 return self._delims
567
758
568 @delims.setter
759 @delims.setter
569 def delims(self, delims):
760 def delims(self, delims):
570 """Set the delimiters for line splitting."""
761 """Set the delimiters for line splitting."""
571 expr = '[' + ''.join('\\'+ c for c in delims) + ']'
762 expr = '[' + ''.join('\\'+ c for c in delims) + ']'
572 self._delim_re = re.compile(expr)
763 self._delim_re = re.compile(expr)
573 self._delims = delims
764 self._delims = delims
574 self._delim_expr = expr
765 self._delim_expr = expr
575
766
576 def split_line(self, line, cursor_pos=None):
767 def split_line(self, line, cursor_pos=None):
577 """Split a line of text with a cursor at the given position.
768 """Split a line of text with a cursor at the given position.
578 """
769 """
579 l = line if cursor_pos is None else line[:cursor_pos]
770 l = line if cursor_pos is None else line[:cursor_pos]
580 return self._delim_re.split(l)[-1]
771 return self._delim_re.split(l)[-1]
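For example, with the default delimiters::

    splitter = CompletionSplitter()
    splitter.split_line("print(os.pa")      # -> 'os.pa'
    splitter.split_line("a = {'key': va")   # -> 'va'
    splitter.split_line("print(os.pa", 8)   # -> 'os'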
581
772
582
773
583
774
584 class Completer(Configurable):
775 class Completer(Configurable):
585
776
586 greedy = Bool(False,
777 greedy = Bool(False,
587 help="""Activate greedy completion
778 help="""Activate greedy completion
588 PENDING DEPRECATION. this is now mostly taken care of with Jedi.
779 PENDING DEPRECATION. this is now mostly taken care of with Jedi.
589
780
590 This will enable completion on elements of lists, results of function calls, etc.,
781 This will enable completion on elements of lists, results of function calls, etc.,
591 but can be unsafe because the code is actually evaluated on TAB.
782 but can be unsafe because the code is actually evaluated on TAB.
592 """,
783 """,
593 ).tag(config=True)
784 ).tag(config=True)
594
785
595 use_jedi = Bool(default_value=JEDI_INSTALLED,
786 use_jedi = Bool(default_value=JEDI_INSTALLED,
596 help="Experimental: Use Jedi to generate autocompletions. "
787 help="Experimental: Use Jedi to generate autocompletions. "
597 "Default to True if jedi is installed.").tag(config=True)
788 "Default to True if jedi is installed.").tag(config=True)
598
789
599 jedi_compute_type_timeout = Int(default_value=400,
790 jedi_compute_type_timeout = Int(default_value=400,
600 help="""Experimental: restrict time (in milliseconds) during which Jedi can compute types.
791 help="""Experimental: restrict time (in milliseconds) during which Jedi can compute types.
601 Set to 0 to stop computing types. Non-zero value lower than 100ms may hurt
792 Set to 0 to stop computing types. Non-zero value lower than 100ms may hurt
602 performance by preventing jedi to build its cache.
793 performance by preventing jedi to build its cache.
603 """).tag(config=True)
794 """).tag(config=True)
604
795
605 debug = Bool(default_value=False,
796 debug = Bool(default_value=False,
606 help='Enable debug for the Completer. Mostly print extra '
797 help='Enable debug for the Completer. Mostly print extra '
607 'information for experimental jedi integration.')\
798 'information for experimental jedi integration.')\
608 .tag(config=True)
799 .tag(config=True)
609
800
610 backslash_combining_completions = Bool(True,
801 backslash_combining_completions = Bool(True,
611 help="Enable unicode completions, e.g. \\alpha<tab> . "
802 help="Enable unicode completions, e.g. \\alpha<tab> . "
612 "Includes completion of latex commands, unicode names, and expanding "
803 "Includes completion of latex commands, unicode names, and expanding "
613 "unicode characters back to latex commands.").tag(config=True)
804 "unicode characters back to latex commands.").tag(config=True)
614
805
615 def __init__(self, namespace=None, global_namespace=None, **kwargs):
806 def __init__(self, namespace=None, global_namespace=None, **kwargs):
616 """Create a new completer for the command line.
807 """Create a new completer for the command line.
617
808
618 Completer(namespace=ns, global_namespace=ns2) -> completer instance.
809 Completer(namespace=ns, global_namespace=ns2) -> completer instance.
619
810
620 If unspecified, the default namespace where completions are performed
811 If unspecified, the default namespace where completions are performed
621 is __main__ (technically, __main__.__dict__). Namespaces should be
812 is __main__ (technically, __main__.__dict__). Namespaces should be
622 given as dictionaries.
813 given as dictionaries.
623
814
624 An optional second namespace can be given. This allows the completer
815 An optional second namespace can be given. This allows the completer
625 to handle cases where both the local and global scopes need to be
816 to handle cases where both the local and global scopes need to be
626 distinguished.
817 distinguished.
627 """
818 """
628
819
629 # Don't bind to namespace quite yet, but flag whether the user wants a
820 # Don't bind to namespace quite yet, but flag whether the user wants a
630 # specific namespace or to use __main__.__dict__. This will allow us
821 # specific namespace or to use __main__.__dict__. This will allow us
631 # to bind to __main__.__dict__ at completion time, not now.
822 # to bind to __main__.__dict__ at completion time, not now.
632 if namespace is None:
823 if namespace is None:
633 self.use_main_ns = True
824 self.use_main_ns = True
634 else:
825 else:
635 self.use_main_ns = False
826 self.use_main_ns = False
636 self.namespace = namespace
827 self.namespace = namespace
637
828
638 # The global namespace, if given, can be bound directly
829 # The global namespace, if given, can be bound directly
639 if global_namespace is None:
830 if global_namespace is None:
640 self.global_namespace = {}
831 self.global_namespace = {}
641 else:
832 else:
642 self.global_namespace = global_namespace
833 self.global_namespace = global_namespace
643
834
644 self.custom_matchers = []
835 self.custom_matchers = []
645
836
646 super(Completer, self).__init__(**kwargs)
837 super(Completer, self).__init__(**kwargs)
647
838
648 def complete(self, text, state):
839 def complete(self, text, state):
649 """Return the next possible completion for 'text'.
840 """Return the next possible completion for 'text'.
650
841
651 This is called successively with state == 0, 1, 2, ... until it
842 This is called successively with state == 0, 1, 2, ... until it
652 returns None. The completion should begin with 'text'.
843 returns None. The completion should begin with 'text'.
653
844
654 """
845 """
655 if self.use_main_ns:
846 if self.use_main_ns:
656 self.namespace = __main__.__dict__
847 self.namespace = __main__.__dict__
657
848
658 if state == 0:
849 if state == 0:
659 if "." in text:
850 if "." in text:
660 self.matches = self.attr_matches(text)
851 self.matches = self.attr_matches(text)
661 else:
852 else:
662 self.matches = self.global_matches(text)
853 self.matches = self.global_matches(text)
663 try:
854 try:
664 return self.matches[state]
855 return self.matches[state]
665 except IndexError:
856 except IndexError:
666 return None
857 return None
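A small sketch of the readline-style state protocol (the namespace contents are made up)::

    c = Completer(namespace={"alpha": 1, "alphabet": "abc"})
    c.complete("alph", 0)   # -> 'alpha'
    c.complete("alph", 1)   # -> 'alphabet'
    c.complete("alph", 2)   # -> None (no more matches)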
667
858
668 def global_matches(self, text):
859 def global_matches(self, text):
669 """Compute matches when text is a simple name.
860 """Compute matches when text is a simple name.
670
861
671 Return a list of all keywords, built-in functions and names currently
862 Return a list of all keywords, built-in functions and names currently
672 defined in self.namespace or self.global_namespace that match.
863 defined in self.namespace or self.global_namespace that match.
673
864
674 """
865 """
675 matches = []
866 matches = []
676 match_append = matches.append
867 match_append = matches.append
677 n = len(text)
868 n = len(text)
678 for lst in [keyword.kwlist,
869 for lst in [keyword.kwlist,
679 builtin_mod.__dict__.keys(),
870 builtin_mod.__dict__.keys(),
680 self.namespace.keys(),
871 self.namespace.keys(),
681 self.global_namespace.keys()]:
872 self.global_namespace.keys()]:
682 for word in lst:
873 for word in lst:
683 if word[:n] == text and word != "__builtins__":
874 if word[:n] == text and word != "__builtins__":
684 match_append(word)
875 match_append(word)
685
876
686 snake_case_re = re.compile(r"[^_]+(_[^_]+)+?\Z")
877 snake_case_re = re.compile(r"[^_]+(_[^_]+)+?\Z")
687 for lst in [self.namespace.keys(),
878 for lst in [self.namespace.keys(),
688 self.global_namespace.keys()]:
879 self.global_namespace.keys()]:
689 shortened = {"_".join([sub[0] for sub in word.split('_')]) : word
880 shortened = {"_".join([sub[0] for sub in word.split('_')]) : word
690 for word in lst if snake_case_re.match(word)}
881 for word in lst if snake_case_re.match(word)}
691 for word in shortened.keys():
882 for word in shortened.keys():
692 if word[:n] == text and word != "__builtins__":
883 if word[:n] == text and word != "__builtins__":
693 match_append(shortened[word])
884 match_append(shortened[word])
694 return matches
885 return matches
695
886
696 def attr_matches(self, text):
887 def attr_matches(self, text):
697 """Compute matches when text contains a dot.
888 """Compute matches when text contains a dot.
698
889
699 Assuming the text is of the form NAME.NAME....[NAME], and is
890 Assuming the text is of the form NAME.NAME....[NAME], and is
700 evaluatable in self.namespace or self.global_namespace, it will be
891 evaluatable in self.namespace or self.global_namespace, it will be
701 evaluated and its attributes (as revealed by dir()) are used as
892 evaluated and its attributes (as revealed by dir()) are used as
702 possible completions. (For class instances, class members are
893 possible completions. (For class instances, class members are
703 also considered.)
894 also considered.)
704
895
705 WARNING: this can still invoke arbitrary C code, if an object
896 WARNING: this can still invoke arbitrary C code, if an object
706 with a __getattr__ hook is evaluated.
897 with a __getattr__ hook is evaluated.
707
898
708 """
899 """
709
900
710 # Another option, seems to work great. Catches things like ''.<tab>
901 # Another option, seems to work great. Catches things like ''.<tab>
711 m = re.match(r"(\S+(\.\w+)*)\.(\w*)$", text)
902 m = re.match(r"(\S+(\.\w+)*)\.(\w*)$", text)
712
903
713 if m:
904 if m:
714 expr, attr = m.group(1, 3)
905 expr, attr = m.group(1, 3)
715 elif self.greedy:
906 elif self.greedy:
716 m2 = re.match(r"(.+)\.(\w*)$", self.line_buffer)
907 m2 = re.match(r"(.+)\.(\w*)$", self.line_buffer)
717 if not m2:
908 if not m2:
718 return []
909 return []
719 expr, attr = m2.group(1,2)
910 expr, attr = m2.group(1,2)
720 else:
911 else:
721 return []
912 return []
722
913
723 try:
914 try:
724 obj = eval(expr, self.namespace)
915 obj = eval(expr, self.namespace)
725 except:
916 except:
726 try:
917 try:
727 obj = eval(expr, self.global_namespace)
918 obj = eval(expr, self.global_namespace)
728 except:
919 except:
729 return []
920 return []
730
921
731 if self.limit_to__all__ and hasattr(obj, '__all__'):
922 if self.limit_to__all__ and hasattr(obj, '__all__'):
732 words = get__all__entries(obj)
923 words = get__all__entries(obj)
733 else:
924 else:
734 words = dir2(obj)
925 words = dir2(obj)
735
926
736 try:
927 try:
737 words = generics.complete_object(obj, words)
928 words = generics.complete_object(obj, words)
738 except TryNext:
929 except TryNext:
739 pass
930 pass
740 except AssertionError:
931 except AssertionError:
741 raise
932 raise
742 except Exception:
933 except Exception:
743 # Silence errors from completion function
934 # Silence errors from completion function
744 #raise # dbg
935 #raise # dbg
745 pass
936 pass
746 # Build match list to return
937 # Build match list to return
747 n = len(attr)
938 n = len(attr)
748 return [u"%s.%s" % (expr, w) for w in words if w[:n] == attr ]
939 return [u"%s.%s" % (expr, w) for w in words if w[:n] == attr ]
749
940
750
941
751 def get__all__entries(obj):
942 def get__all__entries(obj):
752 """returns the strings in the __all__ attribute"""
943 """returns the strings in the __all__ attribute"""
753 try:
944 try:
754 words = getattr(obj, '__all__')
945 words = getattr(obj, '__all__')
755 except:
946 except:
756 return []
947 return []
757
948
758 return [w for w in words if isinstance(w, str)]
949 return [w for w in words if isinstance(w, str)]
759
950
760
951
761 def match_dict_keys(keys: List[Union[str, bytes, Tuple[Union[str, bytes]]]], prefix: str, delims: str,
952 def match_dict_keys(keys: List[Union[str, bytes, Tuple[Union[str, bytes]]]], prefix: str, delims: str,
762 extra_prefix: Optional[Tuple[str, bytes]]=None) -> Tuple[str, int, List[str]]:
953 extra_prefix: Optional[Tuple[str, bytes]]=None) -> Tuple[str, int, List[str]]:
763 """Used by dict_key_matches, matching the prefix to a list of keys
954 """Used by dict_key_matches, matching the prefix to a list of keys
764
955
765 Parameters
956 Parameters
766 ----------
957 ----------
767 keys
958 keys
768 list of keys in dictionary currently being completed.
959 list of keys in dictionary currently being completed.
769 prefix
960 prefix
770 Part of the text already typed by the user. E.g. `mydict[b'fo`
961 Part of the text already typed by the user. E.g. `mydict[b'fo`
771 delims
962 delims
772 String of delimiters to consider when finding the current key.
963 String of delimiters to consider when finding the current key.
773 extra_prefix : optional
964 extra_prefix : optional
774 Part of the text already typed in multi-key index cases. E.g. for
965 Part of the text already typed in multi-key index cases. E.g. for
775 `mydict['foo', "bar", 'b`, this would be `('foo', 'bar')`.
966 `mydict['foo', "bar", 'b`, this would be `('foo', 'bar')`.
776
967
777 Returns
968 Returns
778 -------
969 -------
779 A tuple of three elements: ``quote``, ``token_start``, ``matched``, with
970 A tuple of three elements: ``quote``, ``token_start``, ``matched``, with
780 ``quote`` being the quote that needs to be used to close the current string,
971 ``quote`` being the quote that needs to be used to close the current string,
781 ``token_start`` the position where the replacement should start occurring, and
972 ``token_start`` the position where the replacement should start occurring, and
782 ``matched`` a list of replacement/completion strings.
973 ``matched`` a list of replacement/completion strings.
783
974
784 """
975 """
785 prefix_tuple = extra_prefix if extra_prefix else ()
976 prefix_tuple = extra_prefix if extra_prefix else ()
786 Nprefix = len(prefix_tuple)
977 Nprefix = len(prefix_tuple)
787 def filter_prefix_tuple(key):
978 def filter_prefix_tuple(key):
788 # Reject too short keys
979 # Reject too short keys
789 if len(key) <= Nprefix:
980 if len(key) <= Nprefix:
790 return False
981 return False
791 # Reject keys with non str/bytes in it
982 # Reject keys with non str/bytes in it
792 for k in key:
983 for k in key:
793 if not isinstance(k, (str, bytes)):
984 if not isinstance(k, (str, bytes)):
794 return False
985 return False
795 # Reject keys that do not match the prefix
986 # Reject keys that do not match the prefix
796 for k, pt in zip(key, prefix_tuple):
987 for k, pt in zip(key, prefix_tuple):
797 if k != pt:
988 if k != pt:
798 return False
989 return False
799 # All checks passed!
990 # All checks passed!
800 return True
991 return True
801
992
802 filtered_keys:List[Union[str,bytes]] = []
993 filtered_keys:List[Union[str,bytes]] = []
803 def _add_to_filtered_keys(key):
994 def _add_to_filtered_keys(key):
804 if isinstance(key, (str, bytes)):
995 if isinstance(key, (str, bytes)):
805 filtered_keys.append(key)
996 filtered_keys.append(key)
806
997
807 for k in keys:
998 for k in keys:
808 if isinstance(k, tuple):
999 if isinstance(k, tuple):
809 if filter_prefix_tuple(k):
1000 if filter_prefix_tuple(k):
810 _add_to_filtered_keys(k[Nprefix])
1001 _add_to_filtered_keys(k[Nprefix])
811 else:
1002 else:
812 _add_to_filtered_keys(k)
1003 _add_to_filtered_keys(k)
813
1004
814 if not prefix:
1005 if not prefix:
815 return '', 0, [repr(k) for k in filtered_keys]
1006 return '', 0, [repr(k) for k in filtered_keys]
816 quote_match = re.search('["\']', prefix)
1007 quote_match = re.search('["\']', prefix)
817 assert quote_match is not None # silence mypy
1008 assert quote_match is not None # silence mypy
818 quote = quote_match.group()
1009 quote = quote_match.group()
819 try:
1010 try:
820 prefix_str = eval(prefix + quote, {})
1011 prefix_str = eval(prefix + quote, {})
821 except Exception:
1012 except Exception:
822 return '', 0, []
1013 return '', 0, []
823
1014
824 pattern = '[^' + ''.join('\\' + c for c in delims) + ']*$'
1015 pattern = '[^' + ''.join('\\' + c for c in delims) + ']*$'
825 token_match = re.search(pattern, prefix, re.UNICODE)
1016 token_match = re.search(pattern, prefix, re.UNICODE)
826 assert token_match is not None # silence mypy
1017 assert token_match is not None # silence mypy
827 token_start = token_match.start()
1018 token_start = token_match.start()
828 token_prefix = token_match.group()
1019 token_prefix = token_match.group()
829
1020
830 matched:List[str] = []
1021 matched:List[str] = []
831 for key in filtered_keys:
1022 for key in filtered_keys:
832 try:
1023 try:
833 if not key.startswith(prefix_str):
1024 if not key.startswith(prefix_str):
834 continue
1025 continue
835 except (AttributeError, TypeError, UnicodeError):
1026 except (AttributeError, TypeError, UnicodeError):
836 # Python 3+ TypeError on b'a'.startswith('a') or vice-versa
1027 # Python 3+ TypeError on b'a'.startswith('a') or vice-versa
837 continue
1028 continue
838
1029
839 # reformat remainder of key to begin with prefix
1030 # reformat remainder of key to begin with prefix
840 rem = key[len(prefix_str):]
1031 rem = key[len(prefix_str):]
841 # force repr wrapped in '
1032 # force repr wrapped in '
842 rem_repr = repr(rem + '"') if isinstance(rem, str) else repr(rem + b'"')
1033 rem_repr = repr(rem + '"') if isinstance(rem, str) else repr(rem + b'"')
843 rem_repr = rem_repr[1 + rem_repr.index("'"):-2]
1034 rem_repr = rem_repr[1 + rem_repr.index("'"):-2]
844 if quote == '"':
1035 if quote == '"':
845 # The entered prefix is quoted with ",
1036 # The entered prefix is quoted with ",
846 # but the match is quoted with '.
1037 # but the match is quoted with '.
847 # A contained " hence needs escaping for comparison:
1038 # A contained " hence needs escaping for comparison:
848 rem_repr = rem_repr.replace('"', '\\"')
1039 rem_repr = rem_repr.replace('"', '\\"')
849
1040
850 # then reinsert prefix from start of token
1041 # then reinsert prefix from start of token
851 matched.append('%s%s' % (token_prefix, rem_repr))
1042 matched.append('%s%s' % (token_prefix, rem_repr))
852 return quote, token_start, matched
1043 return quote, token_start, matched
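# Illustrative sketch of the return value, assuming the module-level DELIMS
# constant is passed as ``delims`` (as the dict-key matcher below does):
#
#     >>> match_dict_keys(["foo", b"far"], "b'", delims=DELIMS)
#     ("'", 2, ['far'])
#
# i.e. close the string with ``'``, start replacing at offset 2 of the typed
# prefix, and offer ``far`` as the completion for the bytes key.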
853
1044
854
1045
855 def cursor_to_position(text:str, line:int, column:int)->int:
1046 def cursor_to_position(text:str, line:int, column:int)->int:
856 """
1047 """
857 Convert the (line,column) position of the cursor in text to an offset in a
1048 Convert the (line,column) position of the cursor in text to an offset in a
858 string.
1049 string.
859
1050
860 Parameters
1051 Parameters
861 ----------
1052 ----------
862 text : str
1053 text : str
863 The text in which to calculate the cursor offset
1054 The text in which to calculate the cursor offset
864 line : int
1055 line : int
865 Line of the cursor; 0-indexed
1056 Line of the cursor; 0-indexed
866 column : int
1057 column : int
867 Column of the cursor 0-indexed
1058 Column of the cursor 0-indexed
868
1059
869 Returns
1060 Returns
870 -------
1061 -------
871 Position of the cursor in ``text``, 0-indexed.
1062 Position of the cursor in ``text``, 0-indexed.
872
1063
873 See Also
1064 See Also
874 --------
1065 --------
875 position_to_cursor : reciprocal of this function
1066 position_to_cursor : reciprocal of this function
876
1067
877 """
1068 """
878 lines = text.split('\n')
1069 lines = text.split('\n')
879 assert line <= len(lines), '{} <= {}'.format(str(line), str(len(lines)))
1070 assert line <= len(lines), '{} <= {}'.format(str(line), str(len(lines)))
880
1071
881 return sum(len(l) + 1 for l in lines[:line]) + column
1072 return sum(len(l) + 1 for l in lines[:line]) + column
882
1073
883 def position_to_cursor(text:str, offset:int)->Tuple[int, int]:
1074 def position_to_cursor(text:str, offset:int)->Tuple[int, int]:
884 """
1075 """
885 Convert the position of the cursor in text (0 indexed) to a line
1076 Convert the position of the cursor in text (0 indexed) to a line
886 number(0-indexed) and a column number (0-indexed) pair
1077 number(0-indexed) and a column number (0-indexed) pair
887
1078
888 Position should be a valid position in ``text``.
1079 Position should be a valid position in ``text``.
889
1080
890 Parameters
1081 Parameters
891 ----------
1082 ----------
892 text : str
1083 text : str
893 The text in which to calculate the cursor offset
1084 The text in which to calculate the cursor offset
894 offset : int
1085 offset : int
895 Position of the cursor in ``text``, 0-indexed.
1086 Position of the cursor in ``text``, 0-indexed.
896
1087
897 Returns
1088 Returns
898 -------
1089 -------
899 (line, column) : (int, int)
1090 (line, column) : (int, int)
900 Line of the cursor; 0-indexed, column of the cursor 0-indexed
1091 Line of the cursor; 0-indexed, column of the cursor 0-indexed
901
1092
902 See Also
1093 See Also
903 --------
1094 --------
904 cursor_to_position : reciprocal of this function
1095 cursor_to_position : reciprocal of this function
905
1096
906 """
1097 """
907
1098
908 assert 0 <= offset <= len(text) , "0 <= %s <= %s" % (offset , len(text))
1099 assert 0 <= offset <= len(text) , "0 <= %s <= %s" % (offset , len(text))
909
1100
910 before = text[:offset]
1101 before = text[:offset]
911 blines = before.split('\n') # ! splitlines trims trailing \n
1102 blines = before.split('\n') # ! splitlines trims trailing \n
912 line = before.count('\n')
1103 line = before.count('\n')
913 col = len(blines[-1])
1104 col = len(blines[-1])
914 return line, col
1105 return line, col
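# Quick illustration of the two reciprocal helpers:
#
#     >>> text = "ab\ncd"
#     >>> cursor_to_position(text, 1, 1)   # line 1, column 1 is the "d"
#     4
#     >>> position_to_cursor(text, 4)
#     (1, 1)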
915
1106
916
1107
917 def _safe_isinstance(obj, module, class_name):
1108 def _safe_isinstance(obj, module, class_name):
918 """Checks if obj is an instance of module.class_name if loaded
1109 """Checks if obj is an instance of module.class_name if loaded
919 """
1110 """
920 return (module in sys.modules and
1111 return (module in sys.modules and
921 isinstance(obj, getattr(import_module(module), class_name)))
1112 isinstance(obj, getattr(import_module(module), class_name)))
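# Illustrative sketch: the check short-circuits to False when the module is not
# loaded yet, so completion never imports a package as a side effect.
#
#     >>> _safe_isinstance({}, "pandas", "DataFrame")
#     False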
922
1113
923 def back_unicode_name_matches(text:str) -> Tuple[str, Sequence[str]]:
1114
1115 @context_matcher()
1116 def back_unicode_name_matcher(context):
1117 fragment, matches = back_unicode_name_matches(context.token)
1118 return _convert_matcher_v1_result_to_v2(
1119 matches, type="unicode", fragment=fragment, suppress_if_matches=True
1120 )
1121
1122
1123 def back_unicode_name_matches(text: str) -> Tuple[str, Sequence[str]]:
924 """Match Unicode characters back to Unicode name
1124 """Match Unicode characters back to Unicode name
925
1125
926 This does ``☃`` -> ``\\snowman``
1126 This does ``☃`` -> ``\\snowman``
927
1127
928 Note that snowman is not a valid python3 combining character, but it will still be expanded.
1128 Note that snowman is not a valid python3 combining character, but it will still be expanded.
929 The completion machinery will not, however, recombine it back into the snowman character.
1129 The completion machinery will not, however, recombine it back into the snowman character.
930
1130
931 Standard escape sequences like \\n, \\b ... are not back-completed either.
1131 Standard escape sequences like \\n, \\b ... are not back-completed either.
932
1132
933 Returns
1133 Returns
934 -------
1134 -------
935
1135
936 Return a tuple with two elements:
1136 Return a tuple with two elements:
937
1137
938 - The Unicode character that was matched (preceded with a backslash), or
1138 - The Unicode character that was matched (preceded with a backslash), or
939 empty string,
1139 empty string,
940 - a sequence (of length 1) with the name of the matched Unicode character,
1140 - a sequence (of length 1) with the name of the matched Unicode character,
941 preceded by a backslash, or empty if no match.
1141 preceded by a backslash, or empty if no match.
942
1142
943 """
1143 """
944 if len(text)<2:
1144 if len(text)<2:
945 return '', ()
1145 return '', ()
946 maybe_slash = text[-2]
1146 maybe_slash = text[-2]
947 if maybe_slash != '\\':
1147 if maybe_slash != '\\':
948 return '', ()
1148 return '', ()
949
1149
950 char = text[-1]
1150 char = text[-1]
951 # no expand on quote for completion in strings.
1151 # no expand on quote for completion in strings.
952 # nor backcomplete standard ascii keys
1152 # nor backcomplete standard ascii keys
953 if char in string.ascii_letters or char in ('"',"'"):
1153 if char in string.ascii_letters or char in ('"',"'"):
954 return '', ()
1154 return '', ()
955 try :
1155 try :
956 unic = unicodedata.name(char)
1156 unic = unicodedata.name(char)
957 return '\\'+char,('\\'+unic,)
1157 return '\\'+char,('\\'+unic,)
958 except KeyError:
1158 except KeyError:
959 pass
1159 pass
960 return '', ()
1160 return '', ()
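# Illustrative sketch:
#
#     >>> back_unicode_name_matches("completion is \\☃")
#     ('\\☃', ('\\SNOWMAN',))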
961
1161
962 def back_latex_name_matches(text:str) -> Tuple[str, Sequence[str]] :
1162
1163 @context_matcher()
1164 def back_latex_name_matcher(context):
1165 fragment, matches = back_latex_name_matches(context.token)
1166 return _convert_matcher_v1_result_to_v2(
1167 matches, type="latex", fragment=fragment, suppress_if_matches=True
1168 )
1169
1170
1171 def back_latex_name_matches(text: str) -> Tuple[str, Sequence[str]]:
963 """Match latex characters back to unicode name
1172 """Match latex characters back to unicode name
964
1173
965 This does ``\\ℵ`` -> ``\\aleph``
1174 This does ``\\ℵ`` -> ``\\aleph``
966
1175
967 """
1176 """
968 if len(text)<2:
1177 if len(text)<2:
969 return '', ()
1178 return '', ()
970 maybe_slash = text[-2]
1179 maybe_slash = text[-2]
971 if maybe_slash != '\\':
1180 if maybe_slash != '\\':
972 return '', ()
1181 return '', ()
973
1182
974
1183
975 char = text[-1]
1184 char = text[-1]
976 # no expand on quote for completion in strings.
1185 # no expand on quote for completion in strings.
977 # nor backcomplete standard ascii keys
1186 # nor backcomplete standard ascii keys
978 if char in string.ascii_letters or char in ('"',"'"):
1187 if char in string.ascii_letters or char in ('"',"'"):
979 return '', ()
1188 return '', ()
980 try :
1189 try :
981 latex = reverse_latex_symbol[char]
1190 latex = reverse_latex_symbol[char]
982 # '\\' replace the \ as well
1191 # '\\' replace the \ as well
983 return '\\'+char,[latex]
1192 return '\\'+char,[latex]
984 except KeyError:
1193 except KeyError:
985 pass
1194 pass
986 return '', ()
1195 return '', ()
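# Illustrative sketch (assuming ``reverse_latex_symbol`` maps ℵ to ``\aleph``):
#
#     >>> back_latex_name_matches("x = \\ℵ")
#     ('\\ℵ', ['\\aleph'])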
987
1196
988
1197
989 def _formatparamchildren(parameter) -> str:
1198 def _formatparamchildren(parameter) -> str:
990 """
1199 """
991 Get parameter name and value from Jedi Private API
1200 Get parameter name and value from Jedi Private API
992
1201
993 Jedi does not expose a simple way to get `param=value` from its API.
1202 Jedi does not expose a simple way to get `param=value` from its API.
994
1203
995 Parameters
1204 Parameters
996 ----------
1205 ----------
997 parameter
1206 parameter
998 Jedi's function `Param`
1207 Jedi's function `Param`
999
1208
1000 Returns
1209 Returns
1001 -------
1210 -------
1002 A string like 'a', 'b=1', '*args', '**kwargs'
1211 A string like 'a', 'b=1', '*args', '**kwargs'
1003
1212
1004 """
1213 """
1005 description = parameter.description
1214 description = parameter.description
1006 if not description.startswith('param '):
1215 if not description.startswith('param '):
1007 raise ValueError('Jedi function parameter description has changed format. '
1216 raise ValueError('Jedi function parameter description has changed format. '
1008 'Expected "param ...", found %r.' % description)
1217 'Expected "param ...", found %r.' % description)
1009 return description[6:]
1218 return description[6:]
1010
1219
1011 def _make_signature(completion)-> str:
1220 def _make_signature(completion)-> str:
1012 """
1221 """
1013 Make the signature from a jedi completion
1222 Make the signature from a jedi completion
1014
1223
1015 Parameters
1224 Parameters
1016 ----------
1225 ----------
1017 completion : jedi.Completion
1226 completion : jedi.Completion
1018 object does not complete a function type
1227 object does not complete a function type
1019
1228
1020 Returns
1229 Returns
1021 -------
1230 -------
1022 a string consisting of the function signature, with the parentheses but
1231 a string consisting of the function signature, with the parentheses but
1023 without the function name. Example:
1232 without the function name. Example:
1024 `(a, *args, b=1, **kwargs)`
1233 `(a, *args, b=1, **kwargs)`
1025
1234
1026 """
1235 """
1027
1236
1028 # it looks like this might work on jedi 0.17
1237 # it looks like this might work on jedi 0.17
1029 if hasattr(completion, 'get_signatures'):
1238 if hasattr(completion, 'get_signatures'):
1030 signatures = completion.get_signatures()
1239 signatures = completion.get_signatures()
1031 if not signatures:
1240 if not signatures:
1032 return '(?)'
1241 return '(?)'
1033
1242
1034 c0 = completion.get_signatures()[0]
1243 c0 = completion.get_signatures()[0]
1035 return '('+c0.to_string().split('(', maxsplit=1)[1]
1244 return '('+c0.to_string().split('(', maxsplit=1)[1]
1036
1245
1037 return '(%s)'% ', '.join([f for f in (_formatparamchildren(p) for signature in completion.get_signatures()
1246 return '(%s)'% ', '.join([f for f in (_formatparamchildren(p) for signature in completion.get_signatures()
1038 for p in signature.defined_names()) if f])
1247 for p in signature.defined_names()) if f])
1039
1248
1040
1249
1041 class _CompleteResult(NamedTuple):
1250 _CompleteResult = Dict[str, MatcherResult]
1042 matched_text : str
1251
1043 matches: Sequence[str]
1252
1044 matches_origin: Sequence[str]
1253 def _convert_matcher_v1_result_to_v2(
1045 jedi_matches: Any
1254 matches: Sequence[str],
1255 type: str,
1256 fragment: Optional[str] = None,
1257 suppress_if_matches: bool = False,
1258 ) -> SimpleMatcherResult:
1259 """Utility to help with the transition from the v1 to the v2 matcher API."""
1260 result = {
1261 "completions": [SimpleCompletion(text=match, type=type) for match in matches],
1262 "suppress_others": (True if matches else False)
1263 if suppress_if_matches
1264 else False,
1265 }
1266 if fragment is not None:
1267 result["matched_fragment"] = fragment
1268 return result
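# Illustrative sketch of the conversion: a plain list of strings returned by a
# v1-style matcher becomes a v2 ``SimpleMatcherResult`` dictionary.
#
#     >>> result = _convert_matcher_v1_result_to_v2(
#     ...     ["%time", "%timeit"], type="magic", fragment="%ti", suppress_if_matches=True
#     ... )
#     >>> [c.text for c in result["completions"]]
#     ['%time', '%timeit']
#     >>> result["suppress_others"], result["matched_fragment"]
#     (True, '%ti')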
1046
1269
1047
1270
1048 class IPCompleter(Completer):
1271 class IPCompleter(Completer):
1049 """Extension of the completer class with IPython-specific features"""
1272 """Extension of the completer class with IPython-specific features"""
1050
1273
1051 __dict_key_regexps: Optional[Dict[bool,Pattern]] = None
1274 __dict_key_regexps: Optional[Dict[bool,Pattern]] = None
1052
1275
1053 @observe('greedy')
1276 @observe('greedy')
1054 def _greedy_changed(self, change):
1277 def _greedy_changed(self, change):
1055 """update the splitter and readline delims when greedy is changed"""
1278 """update the splitter and readline delims when greedy is changed"""
1056 if change['new']:
1279 if change['new']:
1057 self.splitter.delims = GREEDY_DELIMS
1280 self.splitter.delims = GREEDY_DELIMS
1058 else:
1281 else:
1059 self.splitter.delims = DELIMS
1282 self.splitter.delims = DELIMS
1060
1283
1061 dict_keys_only = Bool(False,
1284 dict_keys_only = Bool(
1062 help="""Whether to show dict key matches only""")
1285 False,
1286 help="""
1287 Whether to show dict key matches only.
1288
1289 (disables all matchers except for `IPCompleter.dict_key_matcher`).
1290 """,
1291 )
1292
1293 suppress_competing_matchers = UnionTrait(
1294 [Bool(), DictTrait(Bool(None, allow_none=True))],
1295 help="""
1296 Whether to suppress completions from other `Matchers`_.
1297
1298 When set to ``None`` (default) the matchers will attempt to auto-detect
1299 whether suppression of other matchers is desirable. For example, when
1300 a line begins with ``%`` we expect a magic completion to be the only
1301 applicable option, and after ``my_dict['`` we usually expect a
1302 completion with an existing dictionary key.
1303
1304 If you want to disable this heuristic and see completions from all matchers,
1305 set ``IPCompleter.suppress_competing_matchers = False``.
1306 To disable the heuristic for specific matchers provide a dictionary mapping:
1307 ``IPCompleter.suppress_competing_matchers = {'IPCompleter.dict_key_matcher': False}``.
1308
1309 Set ``IPCompleter.suppress_competing_matchers = True`` to limit
1310 completions to the set of matchers with the highest priority;
1311 this is equivalent to ``IPCompleter.merge_completions = False`` and
1312 can be beneficial for performance, but will sometimes omit relevant
1313 candidates from matchers further down the priority list.
1314 """,
1315 ).tag(config=True)
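    # Minimal configuration sketch for this trait (e.g. in ipython_config.py),
    # mirroring the help text above; the matcher identifier is an assumption
    # based on the "IPCompleter.<name>" convention used in this module:
    #
    #     c.IPCompleter.suppress_competing_matchers = False   # always merge all matchers
    #     c.IPCompleter.suppress_competing_matchers = {
    #         "IPCompleter.dict_key_matcher": False,          # tune a single matcher
    #     }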
1063
1316
1064 merge_completions = Bool(True,
1317 merge_completions = Bool(
1318 True,
1065 help="""Whether to merge completion results into a single list
1319 help="""Whether to merge completion results into a single list
1066
1320
1067 If False, only the completion results from the first non-empty
1321 If False, only the completion results from the first non-empty
1068 completer will be returned.
1322 completer will be returned.
1069 """
1323
1324 As of version 8.5.0, setting the value to ``False`` is an alias for:
1325 ``IPCompleter.suppress_competing_matchers = True``.
1326 """,
1327 ).tag(config=True)
1328
1329 disable_matchers = ListTrait(
1330 Unicode(), help="""List of matchers to disable."""
1070 ).tag(config=True)
1331 ).tag(config=True)
1071 omit__names = Enum((0,1,2), default_value=2,
1332
1333 omit__names = Enum(
1334 (0, 1, 2),
1335 default_value=2,
1072 help="""Instruct the completer to omit private method names
1336 help="""Instruct the completer to omit private method names
1073
1337
1074 Specifically, when completing on ``object.<tab>``.
1338 Specifically, when completing on ``object.<tab>``.
1075
1339
1076 When 2 [default]: all names that start with '_' will be excluded.
1340 When 2 [default]: all names that start with '_' will be excluded.
1077
1341
1078 When 1: all 'magic' names (``__foo__``) will be excluded.
1342 When 1: all 'magic' names (``__foo__``) will be excluded.
1079
1343
1080 When 0: nothing will be excluded.
1344 When 0: nothing will be excluded.
1081 """
1345 """
1082 ).tag(config=True)
1346 ).tag(config=True)
1083 limit_to__all__ = Bool(False,
1347 limit_to__all__ = Bool(False,
1084 help="""
1348 help="""
1085 DEPRECATED as of version 5.0.
1349 DEPRECATED as of version 5.0.
1086
1350
1087 Instruct the completer to use __all__ for the completion
1351 Instruct the completer to use __all__ for the completion
1088
1352
1089 Specifically, when completing on ``object.<tab>``.
1353 Specifically, when completing on ``object.<tab>``.
1090
1354
1091 When True: only those names in obj.__all__ will be included.
1355 When True: only those names in obj.__all__ will be included.
1092
1356
1093 When False [default]: the __all__ attribute is ignored
1357 When False [default]: the __all__ attribute is ignored
1094 """,
1358 """,
1095 ).tag(config=True)
1359 ).tag(config=True)
1096
1360
1097 profile_completions = Bool(
1361 profile_completions = Bool(
1098 default_value=False,
1362 default_value=False,
1099 help="If True, emit profiling data for completion subsystem using cProfile."
1363 help="If True, emit profiling data for completion subsystem using cProfile."
1100 ).tag(config=True)
1364 ).tag(config=True)
1101
1365
1102 profiler_output_dir = Unicode(
1366 profiler_output_dir = Unicode(
1103 default_value=".completion_profiles",
1367 default_value=".completion_profiles",
1104 help="Template for path at which to output profile data for completions."
1368 help="Template for path at which to output profile data for completions."
1105 ).tag(config=True)
1369 ).tag(config=True)
1106
1370
1107 @observe('limit_to__all__')
1371 @observe('limit_to__all__')
1108 def _limit_to_all_changed(self, change):
1372 def _limit_to_all_changed(self, change):
1109 warnings.warn('`IPython.core.IPCompleter.limit_to__all__` configuration '
1373 warnings.warn('`IPython.core.IPCompleter.limit_to__all__` configuration '
1110 'value has been deprecated since IPython 5.0, will be made to have '
1374 'value has been deprecated since IPython 5.0, will be made to have '
1111 'no effect and then removed in a future version of IPython.',
1375 'no effect and then removed in a future version of IPython.',
1112 UserWarning)
1376 UserWarning)
1113
1377
1114 def __init__(
1378 def __init__(
1115 self, shell=None, namespace=None, global_namespace=None, config=None, **kwargs
1379 self, shell=None, namespace=None, global_namespace=None, config=None, **kwargs
1116 ):
1380 ):
1117 """IPCompleter() -> completer
1381 """IPCompleter() -> completer
1118
1382
1119 Return a completer object.
1383 Return a completer object.
1120
1384
1121 Parameters
1385 Parameters
1122 ----------
1386 ----------
1123 shell
1387 shell
1124 a pointer to the ipython shell itself. This is needed
1388 a pointer to the ipython shell itself. This is needed
1125 because this completer knows about magic functions, and those can
1389 because this completer knows about magic functions, and those can
1126 only be accessed via the ipython instance.
1390 only be accessed via the ipython instance.
1127 namespace : dict, optional
1391 namespace : dict, optional
1128 an optional dict where completions are performed.
1392 an optional dict where completions are performed.
1129 global_namespace : dict, optional
1393 global_namespace : dict, optional
1130 secondary optional dict for completions, to
1394 secondary optional dict for completions, to
1131 handle cases (such as IPython embedded inside functions) where
1395 handle cases (such as IPython embedded inside functions) where
1132 both Python scopes are visible.
1396 both Python scopes are visible.
1133 config : Config
1397 config : Config
1134 traitlet's config object
1398 traitlet's config object
1135 **kwargs
1399 **kwargs
1136 passed to super class unmodified.
1400 passed to super class unmodified.
1137 """
1401 """
1138
1402
1139 self.magic_escape = ESC_MAGIC
1403 self.magic_escape = ESC_MAGIC
1140 self.splitter = CompletionSplitter()
1404 self.splitter = CompletionSplitter()
1141
1405
1142 # _greedy_changed() depends on splitter and readline being defined:
1406 # _greedy_changed() depends on splitter and readline being defined:
1143 super().__init__(
1407 super().__init__(
1144 namespace=namespace,
1408 namespace=namespace,
1145 global_namespace=global_namespace,
1409 global_namespace=global_namespace,
1146 config=config,
1410 config=config,
1147 **kwargs
1411 **kwargs,
1148 )
1412 )
1149
1413
1150 # List where completion matches will be stored
1414 # List where completion matches will be stored
1151 self.matches = []
1415 self.matches = []
1152 self.shell = shell
1416 self.shell = shell
1153 # Regexp to split filenames with spaces in them
1417 # Regexp to split filenames with spaces in them
1154 self.space_name_re = re.compile(r'([^\\] )')
1418 self.space_name_re = re.compile(r'([^\\] )')
1155 # Hold a local ref. to glob.glob for speed
1419 # Hold a local ref. to glob.glob for speed
1156 self.glob = glob.glob
1420 self.glob = glob.glob
1157
1421
1158 # Determine if we are running on 'dumb' terminals, like (X)Emacs
1422 # Determine if we are running on 'dumb' terminals, like (X)Emacs
1159 # buffers, to avoid completion problems.
1423 # buffers, to avoid completion problems.
1160 term = os.environ.get('TERM','xterm')
1424 term = os.environ.get('TERM','xterm')
1161 self.dumb_terminal = term in ['dumb','emacs']
1425 self.dumb_terminal = term in ['dumb','emacs']
1162
1426
1163 # Special handling of backslashes needed in win32 platforms
1427 # Special handling of backslashes needed in win32 platforms
1164 if sys.platform == "win32":
1428 if sys.platform == "win32":
1165 self.clean_glob = self._clean_glob_win32
1429 self.clean_glob = self._clean_glob_win32
1166 else:
1430 else:
1167 self.clean_glob = self._clean_glob
1431 self.clean_glob = self._clean_glob
1168
1432
1169 #regexp to parse docstring for function signature
1433 #regexp to parse docstring for function signature
1170 self.docstring_sig_re = re.compile(r'^[\w|\s.]+\(([^)]*)\).*')
1434 self.docstring_sig_re = re.compile(r'^[\w|\s.]+\(([^)]*)\).*')
1171 self.docstring_kwd_re = re.compile(r'[\s|\[]*(\w+)(?:\s*=\s*.*)')
1435 self.docstring_kwd_re = re.compile(r'[\s|\[]*(\w+)(?:\s*=\s*.*)')
1172 #use this if positional argument name is also needed
1436 #use this if positional argument name is also needed
1173 #= re.compile(r'[\s|\[]*(\w+)(?:\s*=?\s*.*)')
1437 #= re.compile(r'[\s|\[]*(\w+)(?:\s*=?\s*.*)')
1174
1438
1175 self.magic_arg_matchers = [
1439 self.magic_arg_matchers = [
1176 self.magic_config_matches,
1440 self.magic_config_matcher,
1177 self.magic_color_matches,
1441 self.magic_color_matcher,
1178 ]
1442 ]
1179
1443
1180 # This is set externally by InteractiveShell
1444 # This is set externally by InteractiveShell
1181 self.custom_completers = None
1445 self.custom_completers = None
1182
1446
1183 # This is a list of names of unicode characters that can be completed
1447 # This is a list of names of unicode characters that can be completed
1184 # into their corresponding unicode value. The list is large, so we
1448 # into their corresponding unicode value. The list is large, so we
1185 # lazily initialize it on first use. Consuming code should access this
1449 # lazily initialize it on first use. Consuming code should access this
1186 # attribute through the `@unicode_names` property.
1450 # attribute through the `@unicode_names` property.
1187 self._unicode_names = None
1451 self._unicode_names = None
1188
1452
1453 self._backslash_combining_matchers = [
1454 self.latex_name_matcher,
1455 self.unicode_name_matcher,
1456 back_latex_name_matcher,
1457 back_unicode_name_matcher,
1458 self.fwd_unicode_matcher,
1459 ]
1460
1461 if not self.backslash_combining_completions:
1462 for matcher in self._backslash_combining_matchers:
1463 self.disable_matchers.append(matcher.matcher_identifier)
1464
1465 if not self.merge_completions:
1466 self.suppress_competing_matchers = True
1467
1468 if self.dict_keys_only:
1469 self.disable_matchers.append(self.dict_key_matcher.matcher_identifier)
1470
1189 @property
1471 @property
1190 def matchers(self) -> List[Any]:
1472 def matchers(self) -> List[Matcher]:
1191 """All active matcher routines for completion"""
1473 """All active matcher routines for completion"""
1192 if self.dict_keys_only:
1474 if self.dict_keys_only:
1193 return [self.dict_key_matches]
1475 return [self.dict_key_matcher]
1194
1476
1195 if self.use_jedi:
1477 if self.use_jedi:
1196 return [
1478 return [
1197 *self.custom_matchers,
1479 *self.custom_matchers,
1198 self.dict_key_matches,
1480 *self._backslash_combining_matchers,
1199 self.file_matches,
1481 *self.magic_arg_matchers,
1200 self.magic_matches,
1482 self.custom_completer_matcher,
1483 self.magic_matcher,
1484 self._jedi_matcher,
1485 self.dict_key_matcher,
1486 self.file_matcher,
1201 ]
1487 ]
1202 else:
1488 else:
1203 return [
1489 return [
1204 *self.custom_matchers,
1490 *self.custom_matchers,
1205 self.dict_key_matches,
1491 *self._backslash_combining_matchers,
1492 *self.magic_arg_matchers,
1493 self.custom_completer_matcher,
1494 self.dict_key_matcher,
1495 # TODO: convert python_matches to v2 API
1496 self.magic_matcher,
1206 self.python_matches,
1497 self.python_matches,
1207 self.file_matches,
1498 self.file_matcher,
1208 self.magic_matches,
1499 self.python_func_kw_matcher,
1209 self.python_func_kw_matches,
1210 ]
1500 ]
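    # Hedged sketch for inspecting the active matchers and their priority order;
    # it assumes v2 matchers expose ``matcher_identifier`` (set by the
    # ``@context_matcher`` decorator) while legacy v1 matchers fall back to
    # their plain function name:
    #
    #     ip = get_ipython()
    #     for matcher in ip.Completer.matchers:
    #         print(getattr(matcher, "matcher_identifier", matcher.__name__))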
1211
1501
1212 def all_completions(self, text:str) -> List[str]:
1502 def all_completions(self, text:str) -> List[str]:
1213 """
1503 """
1214 Wrapper around the completion methods for the benefit of emacs.
1504 Wrapper around the completion methods for the benefit of emacs.
1215 """
1505 """
1216 prefix = text.rpartition('.')[0]
1506 prefix = text.rpartition('.')[0]
1217 with provisionalcompleter():
1507 with provisionalcompleter():
1218 return ['.'.join([prefix, c.text]) if prefix and self.use_jedi else c.text
1508 return ['.'.join([prefix, c.text]) if prefix and self.use_jedi else c.text
1219 for c in self.completions(text, len(text))]
1509 for c in self.completions(text, len(text))]
1220
1510
1221 return self.complete(text)[1]
1511 return self.complete(text)[1]
1222
1512
1223 def _clean_glob(self, text:str):
1513 def _clean_glob(self, text:str):
1224 return self.glob("%s*" % text)
1514 return self.glob("%s*" % text)
1225
1515
1226 def _clean_glob_win32(self, text:str):
1516 def _clean_glob_win32(self, text:str):
1227 return [f.replace("\\","/")
1517 return [f.replace("\\","/")
1228 for f in self.glob("%s*" % text)]
1518 for f in self.glob("%s*" % text)]
1229
1519
1230 def file_matches(self, text:str)->List[str]:
1520 @context_matcher()
1521 def file_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
1522 matches = self.file_matches(context.token)
1523 # TODO: add a heuristic for suppressing (e.g. if it has OS-specific delimiter,
1524 # starts with `/home/`, `C:\`, etc)
1525 return _convert_matcher_v1_result_to_v2(matches, type="path")
1526
1527 def file_matches(self, text: str) -> List[str]:
1231 """Match filenames, expanding ~USER type strings.
1528 """Match filenames, expanding ~USER type strings.
1232
1529
1233 Most of the seemingly convoluted logic in this completer is an
1530 Most of the seemingly convoluted logic in this completer is an
1234 attempt to handle filenames with spaces in them. And yet it's not
1531 attempt to handle filenames with spaces in them. And yet it's not
1235 quite perfect, because Python's readline doesn't expose all of the
1532 quite perfect, because Python's readline doesn't expose all of the
1236 GNU readline details needed for this to be done correctly.
1533 GNU readline details needed for this to be done correctly.
1237
1534
1238 For a filename with a space in it, the printed completions will be
1535 For a filename with a space in it, the printed completions will be
1239 only the parts after what's already been typed (instead of the
1536 only the parts after what's already been typed (instead of the
1240 full completions, as is normally done). I don't think with the
1537 full completions, as is normally done). I don't think with the
1241 current (as of Python 2.3) Python readline it's possible to do
1538 current (as of Python 2.3) Python readline it's possible to do
1242 better."""
1539 better."""
1243
1540
1244 # chars that require escaping with backslash - i.e. chars
1541 # chars that require escaping with backslash - i.e. chars
1245 # that readline treats incorrectly as delimiters, but we
1542 # that readline treats incorrectly as delimiters, but we
1246 # don't want to treat as delimiters in filename matching
1543 # don't want to treat as delimiters in filename matching
1247 # when escaped with backslash
1544 # when escaped with backslash
1248 if text.startswith('!'):
1545 if text.startswith('!'):
1249 text = text[1:]
1546 text = text[1:]
1250 text_prefix = u'!'
1547 text_prefix = u'!'
1251 else:
1548 else:
1252 text_prefix = u''
1549 text_prefix = u''
1253
1550
1254 text_until_cursor = self.text_until_cursor
1551 text_until_cursor = self.text_until_cursor
1255 # track strings with open quotes
1552 # track strings with open quotes
1256 open_quotes = has_open_quotes(text_until_cursor)
1553 open_quotes = has_open_quotes(text_until_cursor)
1257
1554
1258 if '(' in text_until_cursor or '[' in text_until_cursor:
1555 if '(' in text_until_cursor or '[' in text_until_cursor:
1259 lsplit = text
1556 lsplit = text
1260 else:
1557 else:
1261 try:
1558 try:
1262 # arg_split ~ shlex.split, but with unicode bugs fixed by us
1559 # arg_split ~ shlex.split, but with unicode bugs fixed by us
1263 lsplit = arg_split(text_until_cursor)[-1]
1560 lsplit = arg_split(text_until_cursor)[-1]
1264 except ValueError:
1561 except ValueError:
1265 # typically an unmatched ", or backslash without escaped char.
1562 # typically an unmatched ", or backslash without escaped char.
1266 if open_quotes:
1563 if open_quotes:
1267 lsplit = text_until_cursor.split(open_quotes)[-1]
1564 lsplit = text_until_cursor.split(open_quotes)[-1]
1268 else:
1565 else:
1269 return []
1566 return []
1270 except IndexError:
1567 except IndexError:
1271 # tab pressed on empty line
1568 # tab pressed on empty line
1272 lsplit = ""
1569 lsplit = ""
1273
1570
1274 if not open_quotes and lsplit != protect_filename(lsplit):
1571 if not open_quotes and lsplit != protect_filename(lsplit):
1275 # if protectables are found, do matching on the whole escaped name
1572 # if protectables are found, do matching on the whole escaped name
1276 has_protectables = True
1573 has_protectables = True
1277 text0,text = text,lsplit
1574 text0,text = text,lsplit
1278 else:
1575 else:
1279 has_protectables = False
1576 has_protectables = False
1280 text = os.path.expanduser(text)
1577 text = os.path.expanduser(text)
1281
1578
1282 if text == "":
1579 if text == "":
1283 return [text_prefix + protect_filename(f) for f in self.glob("*")]
1580 return [text_prefix + protect_filename(f) for f in self.glob("*")]
1284
1581
1285 # Compute the matches from the filesystem
1582 # Compute the matches from the filesystem
1286 if sys.platform == 'win32':
1583 if sys.platform == 'win32':
1287 m0 = self.clean_glob(text)
1584 m0 = self.clean_glob(text)
1288 else:
1585 else:
1289 m0 = self.clean_glob(text.replace('\\', ''))
1586 m0 = self.clean_glob(text.replace('\\', ''))
1290
1587
1291 if has_protectables:
1588 if has_protectables:
1292 # If we had protectables, we need to revert our changes to the
1589 # If we had protectables, we need to revert our changes to the
1293 # beginning of filename so that we don't double-write the part
1590 # beginning of filename so that we don't double-write the part
1294 # of the filename we have so far
1591 # of the filename we have so far
1295 len_lsplit = len(lsplit)
1592 len_lsplit = len(lsplit)
1296 matches = [text_prefix + text0 +
1593 matches = [text_prefix + text0 +
1297 protect_filename(f[len_lsplit:]) for f in m0]
1594 protect_filename(f[len_lsplit:]) for f in m0]
1298 else:
1595 else:
1299 if open_quotes:
1596 if open_quotes:
1300 # if we have a string with an open quote, we don't need to
1597 # if we have a string with an open quote, we don't need to
1301 # protect the names beyond the quote (and we _shouldn't_, as
1598 # protect the names beyond the quote (and we _shouldn't_, as
1302 # it would cause bugs when the filesystem call is made).
1599 # it would cause bugs when the filesystem call is made).
1303 matches = m0 if sys.platform == "win32" else\
1600 matches = m0 if sys.platform == "win32" else\
1304 [protect_filename(f, open_quotes) for f in m0]
1601 [protect_filename(f, open_quotes) for f in m0]
1305 else:
1602 else:
1306 matches = [text_prefix +
1603 matches = [text_prefix +
1307 protect_filename(f) for f in m0]
1604 protect_filename(f) for f in m0]
1308
1605
1309 # Mark directories in input list by appending '/' to their names.
1606 # Mark directories in input list by appending '/' to their names.
1310 return [x+'/' if os.path.isdir(x) else x for x in matches]
1607 return [x+'/' if os.path.isdir(x) else x for x in matches]
1311
1608
1312 def magic_matches(self, text:str):
1609 @context_matcher()
1610 def magic_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
1611 text = context.token
1612 matches = self.magic_matches(text)
1613 result = _convert_matcher_v1_result_to_v2(matches, type="magic")
1614 is_magic_prefix = len(text) > 0 and text[0] == "%"
1615 result["suppress_others"] = is_magic_prefix and bool(result["completions"])
1616 return result
1617
1618 def magic_matches(self, text: str):
1313 """Match magics"""
1619 """Match magics"""
1314 # Get all shell magics now rather than statically, so magics loaded at
1620 # Get all shell magics now rather than statically, so magics loaded at
1315 # runtime show up too.
1621 # runtime show up too.
1316 lsm = self.shell.magics_manager.lsmagic()
1622 lsm = self.shell.magics_manager.lsmagic()
1317 line_magics = lsm['line']
1623 line_magics = lsm['line']
1318 cell_magics = lsm['cell']
1624 cell_magics = lsm['cell']
1319 pre = self.magic_escape
1625 pre = self.magic_escape
1320 pre2 = pre+pre
1626 pre2 = pre+pre
1321
1627
1322 explicit_magic = text.startswith(pre)
1628 explicit_magic = text.startswith(pre)
1323
1629
1324 # Completion logic:
1630 # Completion logic:
1325 # - user gives %%: only do cell magics
1631 # - user gives %%: only do cell magics
1326 # - user gives %: do both line and cell magics
1632 # - user gives %: do both line and cell magics
1327 # - no prefix: do both
1633 # - no prefix: do both
1328 # In other words, line magics are skipped if the user gives %% explicitly
1634 # In other words, line magics are skipped if the user gives %% explicitly
1329 #
1635 #
1330 # We also exclude magics that match any currently visible names:
1636 # We also exclude magics that match any currently visible names:
1331 # https://github.com/ipython/ipython/issues/4877, unless the user has
1637 # https://github.com/ipython/ipython/issues/4877, unless the user has
1332 # typed a %:
1638 # typed a %:
1333 # https://github.com/ipython/ipython/issues/10754
1639 # https://github.com/ipython/ipython/issues/10754
1334 bare_text = text.lstrip(pre)
1640 bare_text = text.lstrip(pre)
1335 global_matches = self.global_matches(bare_text)
1641 global_matches = self.global_matches(bare_text)
1336 if not explicit_magic:
1642 if not explicit_magic:
1337 def matches(magic):
1643 def matches(magic):
1338 """
1644 """
1339 Filter magics, in particular remove magics that match
1645 Filter magics, in particular remove magics that match
1340 a name present in global namespace.
1646 a name present in global namespace.
1341 """
1647 """
1342 return ( magic.startswith(bare_text) and
1648 return ( magic.startswith(bare_text) and
1343 magic not in global_matches )
1649 magic not in global_matches )
1344 else:
1650 else:
1345 def matches(magic):
1651 def matches(magic):
1346 return magic.startswith(bare_text)
1652 return magic.startswith(bare_text)
1347
1653
1348 comp = [ pre2+m for m in cell_magics if matches(m)]
1654 comp = [ pre2+m for m in cell_magics if matches(m)]
1349 if not text.startswith(pre2):
1655 if not text.startswith(pre2):
1350 comp += [ pre+m for m in line_magics if matches(m)]
1656 comp += [ pre+m for m in line_magics if matches(m)]
1351
1657
1352 return comp
1658 return comp
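    # Illustrative expectation (requires a live shell; the exact set depends on
    # which magics are loaded). Cell magics are listed before line magics:
    #
    #     ip.Completer.magic_matches("%ti")   # -> e.g. ['%%timeit', '%time', '%timeit']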
1353
1659
1354 def magic_config_matches(self, text:str) -> List[str]:
1660 @context_matcher()
1355 """ Match class names and attributes for %config magic """
1661 def magic_config_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
1662 # NOTE: uses `line_buffer` equivalent for compatibility
1663 matches = self.magic_config_matches(context.line_with_cursor)
1664 return _convert_matcher_v1_result_to_v2(matches, type="param")
1665
1666 def magic_config_matches(self, text: str) -> List[str]:
1667 """Match class names and attributes for %config magic"""
1356 texts = text.strip().split()
1668 texts = text.strip().split()
1357
1669
1358 if len(texts) > 0 and (texts[0] == 'config' or texts[0] == '%config'):
1670 if len(texts) > 0 and (texts[0] == 'config' or texts[0] == '%config'):
1359 # get all configuration classes
1671 # get all configuration classes
1360 classes = sorted(set([ c for c in self.shell.configurables
1672 classes = sorted(set([ c for c in self.shell.configurables
1361 if c.__class__.class_traits(config=True)
1673 if c.__class__.class_traits(config=True)
1362 ]), key=lambda x: x.__class__.__name__)
1674 ]), key=lambda x: x.__class__.__name__)
1363 classnames = [ c.__class__.__name__ for c in classes ]
1675 classnames = [ c.__class__.__name__ for c in classes ]
1364
1676
1365 # return all classnames if config or %config is given
1677 # return all classnames if config or %config is given
1366 if len(texts) == 1:
1678 if len(texts) == 1:
1367 return classnames
1679 return classnames
1368
1680
1369 # match classname
1681 # match classname
1370 classname_texts = texts[1].split('.')
1682 classname_texts = texts[1].split('.')
1371 classname = classname_texts[0]
1683 classname = classname_texts[0]
1372 classname_matches = [ c for c in classnames
1684 classname_matches = [ c for c in classnames
1373 if c.startswith(classname) ]
1685 if c.startswith(classname) ]
1374
1686
1375 # return matched classes or the matched class with attributes
1687 # return matched classes or the matched class with attributes
1376 if texts[1].find('.') < 0:
1688 if texts[1].find('.') < 0:
1377 return classname_matches
1689 return classname_matches
1378 elif len(classname_matches) == 1 and \
1690 elif len(classname_matches) == 1 and \
1379 classname_matches[0] == classname:
1691 classname_matches[0] == classname:
1380 cls = classes[classnames.index(classname)].__class__
1692 cls = classes[classnames.index(classname)].__class__
1381 help = cls.class_get_help()
1693 help = cls.class_get_help()
1382 # strip leading '--' from cl-args:
1694 # strip leading '--' from cl-args:
1383 help = re.sub(re.compile(r'^--', re.MULTILINE), '', help)
1695 help = re.sub(re.compile(r'^--', re.MULTILINE), '', help)
1384 return [ attr.split('=')[0]
1696 return [ attr.split('=')[0]
1385 for attr in help.strip().splitlines()
1697 for attr in help.strip().splitlines()
1386 if attr.startswith(texts[1]) ]
1698 if attr.startswith(texts[1]) ]
1387 return []
1699 return []
1388
1700
1389 def magic_color_matches(self, text:str) -> List[str] :
1701 @context_matcher()
1390 """ Match color schemes for %colors magic"""
1702 def magic_color_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
1703 # NOTE: uses `line_buffer` equivalent for compatibility
1704 matches = self.magic_color_matches(context.line_with_cursor)
1705 return _convert_matcher_v1_result_to_v2(matches, type="param")
1706
1707 def magic_color_matches(self, text: str) -> List[str]:
1708 """Match color schemes for %colors magic"""
1391 texts = text.split()
1709 texts = text.split()
1392 if text.endswith(' '):
1710 if text.endswith(' '):
1393 # .split() strips off the trailing whitespace. Add '' back
1711 # .split() strips off the trailing whitespace. Add '' back
1394 # so that: '%colors ' -> ['%colors', '']
1712 # so that: '%colors ' -> ['%colors', '']
1395 texts.append('')
1713 texts.append('')
1396
1714
1397 if len(texts) == 2 and (texts[0] == 'colors' or texts[0] == '%colors'):
1715 if len(texts) == 2 and (texts[0] == 'colors' or texts[0] == '%colors'):
1398 prefix = texts[1]
1716 prefix = texts[1]
1399 return [ color for color in InspectColors.keys()
1717 return [ color for color in InspectColors.keys()
1400 if color.startswith(prefix) ]
1718 if color.startswith(prefix) ]
1401 return []
1719 return []
1402
1720
1403 def _jedi_matches(self, cursor_column:int, cursor_line:int, text:str) -> Iterable[Any]:
1721 @context_matcher(identifier="IPCompleter.jedi_matcher")
1722 def _jedi_matcher(self, context: CompletionContext) -> _JediMatcherResult:
1723 matches = self._jedi_matches(
1724 cursor_column=context.cursor_position,
1725 cursor_line=context.cursor_line,
1726 text=context.full_text,
1727 )
1728 return {
1729 "completions": matches,
1730 # static analysis should not suppress other matchers
1731 "suppress_others": False,
1732 }
1733
1734 def _jedi_matches(
1735 self, cursor_column: int, cursor_line: int, text: str
1736 ) -> Iterable[_JediCompletionLike]:
1404 """
1737 """
1405 Return a list of :any:`jedi.api.Completions` object from a ``text`` and
1738 Return a list of :any:`jedi.api.Completion` objects from a ``text`` and
1406 cursor position.
1739 cursor position.
1407
1740
1408 Parameters
1741 Parameters
1409 ----------
1742 ----------
1410 cursor_column : int
1743 cursor_column : int
1411 column position of the cursor in ``text``, 0-indexed.
1744 column position of the cursor in ``text``, 0-indexed.
1412 cursor_line : int
1745 cursor_line : int
1413 line position of the cursor in ``text``, 0-indexed
1746 line position of the cursor in ``text``, 0-indexed
1414 text : str
1747 text : str
1415 text to complete
1748 text to complete
1416
1749
1417 Notes
1750 Notes
1418 -----
1751 -----
1419 If ``IPCompleter.debug`` is ``True`` may return a :any:`_FakeJediCompletion`
1752 If ``IPCompleter.debug`` is ``True`` may return a :any:`_FakeJediCompletion`
1420 object containing a string with the Jedi debug information attached.
1753 object containing a string with the Jedi debug information attached.
1421 """
1754 """
1422 namespaces = [self.namespace]
1755 namespaces = [self.namespace]
1423 if self.global_namespace is not None:
1756 if self.global_namespace is not None:
1424 namespaces.append(self.global_namespace)
1757 namespaces.append(self.global_namespace)
1425
1758
1426 completion_filter = lambda x:x
1759 completion_filter = lambda x:x
1427 offset = cursor_to_position(text, cursor_line, cursor_column)
1760 offset = cursor_to_position(text, cursor_line, cursor_column)
1428 # filter output if we are completing for object members
1761 # filter output if we are completing for object members
1429 if offset:
1762 if offset:
1430 pre = text[offset-1]
1763 pre = text[offset-1]
1431 if pre == '.':
1764 if pre == '.':
1432 if self.omit__names == 2:
1765 if self.omit__names == 2:
1433 completion_filter = lambda c:not c.name.startswith('_')
1766 completion_filter = lambda c:not c.name.startswith('_')
1434 elif self.omit__names == 1:
1767 elif self.omit__names == 1:
1435 completion_filter = lambda c:not (c.name.startswith('__') and c.name.endswith('__'))
1768 completion_filter = lambda c:not (c.name.startswith('__') and c.name.endswith('__'))
1436 elif self.omit__names == 0:
1769 elif self.omit__names == 0:
1437 completion_filter = lambda x:x
1770 completion_filter = lambda x:x
1438 else:
1771 else:
1439 raise ValueError("Don't understand self.omit__names == {}".format(self.omit__names))
1772 raise ValueError("Don't understand self.omit__names == {}".format(self.omit__names))
1440
1773
1441 interpreter = jedi.Interpreter(text[:offset], namespaces)
1774 interpreter = jedi.Interpreter(text[:offset], namespaces)
1442 try_jedi = True
1775 try_jedi = True
1443
1776
1444 try:
1777 try:
1445 # find the first token in the current tree -- if it is a ' or " then we are in a string
1778 # find the first token in the current tree -- if it is a ' or " then we are in a string
1446 completing_string = False
1779 completing_string = False
1447 try:
1780 try:
1448 first_child = next(c for c in interpreter._get_module().tree_node.children if hasattr(c, 'value'))
1781 first_child = next(c for c in interpreter._get_module().tree_node.children if hasattr(c, 'value'))
1449 except StopIteration:
1782 except StopIteration:
1450 pass
1783 pass
1451 else:
1784 else:
1452 # note the value may be ', ", or it may also be ''' or """, or
1785 # note the value may be ', ", or it may also be ''' or """, or
1453 # in some cases, """what/you/typed..., but all of these are
1786 # in some cases, """what/you/typed..., but all of these are
1454 # strings.
1787 # strings.
1455 completing_string = len(first_child.value) > 0 and first_child.value[0] in {"'", '"'}
1788 completing_string = len(first_child.value) > 0 and first_child.value[0] in {"'", '"'}
1456
1789
1457 # if we are in a string jedi is likely not the right candidate for
1790 # if we are in a string jedi is likely not the right candidate for
1458 # now. Skip it.
1791 # now. Skip it.
1459 try_jedi = not completing_string
1792 try_jedi = not completing_string
1460 except Exception as e:
1793 except Exception as e:
1461 # many things can go wrong; we are using a private API, just don't crash.
1794 # many things can go wrong; we are using a private API, just don't crash.
1462 if self.debug:
1795 if self.debug:
1463 print("Error detecting if completing a non-finished string :", e, '|')
1796 print("Error detecting if completing a non-finished string :", e, '|')
1464
1797
1465 if not try_jedi:
1798 if not try_jedi:
1466 return []
1799 return []
1467 try:
1800 try:
1468 return filter(completion_filter, interpreter.complete(column=cursor_column, line=cursor_line + 1))
1801 return filter(completion_filter, interpreter.complete(column=cursor_column, line=cursor_line + 1))
1469 except Exception as e:
1802 except Exception as e:
1470 if self.debug:
1803 if self.debug:
1471 return [_FakeJediCompletion('Oops Jedi has crashed, please report a bug with the following:\n"""\n%s\n"""' % (e))]
1804 return [_FakeJediCompletion('Oops Jedi has crashed, please report a bug with the following:\n"""\n%s\n"""' % (e))]
1472 else:
1805 else:
1473 return []
1806 return []
1474
1807
1475 def python_matches(self, text:str)->List[str]:
1808 def python_matches(self, text:str)->List[str]:
1476 """Match attributes or global python names"""
1809 """Match attributes or global python names"""
1477 if "." in text:
1810 if "." in text:
1478 try:
1811 try:
1479 matches = self.attr_matches(text)
1812 matches = self.attr_matches(text)
1480 if text.endswith('.') and self.omit__names:
1813 if text.endswith('.') and self.omit__names:
1481 if self.omit__names == 1:
1814 if self.omit__names == 1:
1482 # true if txt is _not_ a __ name, false otherwise:
1815 # true if txt is _not_ a __ name, false otherwise:
1483 no__name = (lambda txt:
1816 no__name = (lambda txt:
1484 re.match(r'.*\.__.*?__',txt) is None)
1817 re.match(r'.*\.__.*?__',txt) is None)
1485 else:
1818 else:
1486 # true if txt is _not_ a _ name, false otherwise:
1819 # true if txt is _not_ a _ name, false otherwise:
1487 no__name = (lambda txt:
1820 no__name = (lambda txt:
1488 re.match(r'\._.*?',txt[txt.rindex('.'):]) is None)
1821 re.match(r'\._.*?',txt[txt.rindex('.'):]) is None)
1489 matches = filter(no__name, matches)
1822 matches = filter(no__name, matches)
1490 except NameError:
1823 except NameError:
1491 # catches <undefined attributes>.<tab>
1824 # catches <undefined attributes>.<tab>
1492 matches = []
1825 matches = []
1493 else:
1826 else:
1494 matches = self.global_matches(text)
1827 matches = self.global_matches(text)
1495 return matches
1828 return matches
1496
1829
1497 def _default_arguments_from_docstring(self, doc):
1830 def _default_arguments_from_docstring(self, doc):
1498 """Parse the first line of docstring for call signature.
1831 """Parse the first line of docstring for call signature.
1499
1832
1500 Docstring should be of the form 'min(iterable[, key=func])\n'.
1833 Docstring should be of the form 'min(iterable[, key=func])\n'.
1501 It can also parse cython docstring of the form
1834 It can also parse cython docstring of the form
1502 'Minuit.migrad(self, int ncall=10000, resume=True, int nsplit=1)'.
1835 'Minuit.migrad(self, int ncall=10000, resume=True, int nsplit=1)'.
1503 """
1836 """
1504 if doc is None:
1837 if doc is None:
1505 return []
1838 return []
1506
1839
1507 # care only about the first line
1840 # care only about the first line
1508 line = doc.lstrip().splitlines()[0]
1841 line = doc.lstrip().splitlines()[0]
1509
1842
1510 #p = re.compile(r'^[\w|\s.]+\(([^)]*)\).*')
1843 #p = re.compile(r'^[\w|\s.]+\(([^)]*)\).*')
1511 #'min(iterable[, key=func])\n' -> 'iterable[, key=func]'
1844 #'min(iterable[, key=func])\n' -> 'iterable[, key=func]'
1512 sig = self.docstring_sig_re.search(line)
1845 sig = self.docstring_sig_re.search(line)
1513 if sig is None:
1846 if sig is None:
1514 return []
1847 return []
1515 # iterable[, key=func]' -> ['iterable[' ,' key=func]']
1848 # iterable[, key=func]' -> ['iterable[' ,' key=func]']
1516 sig = sig.groups()[0].split(',')
1849 sig = sig.groups()[0].split(',')
1517 ret = []
1850 ret = []
1518 for s in sig:
1851 for s in sig:
1519 #re.compile(r'[\s|\[]*(\w+)(?:\s*=\s*.*)')
1852 #re.compile(r'[\s|\[]*(\w+)(?:\s*=\s*.*)')
1520 ret += self.docstring_kwd_re.findall(s)
1853 ret += self.docstring_kwd_re.findall(s)
1521 return ret
1854 return ret
1522
1855
1523 def _default_arguments(self, obj):
1856 def _default_arguments(self, obj):
1524 """Return the list of default arguments of obj if it is callable,
1857 """Return the list of default arguments of obj if it is callable,
1525 or empty list otherwise."""
1858 or empty list otherwise."""
1526 call_obj = obj
1859 call_obj = obj
1527 ret = []
1860 ret = []
1528 if inspect.isbuiltin(obj):
1861 if inspect.isbuiltin(obj):
1529 pass
1862 pass
1530 elif not (inspect.isfunction(obj) or inspect.ismethod(obj)):
1863 elif not (inspect.isfunction(obj) or inspect.ismethod(obj)):
1531 if inspect.isclass(obj):
1864 if inspect.isclass(obj):
1532 #for cython embedsignature=True the constructor docstring
1865 #for cython embedsignature=True the constructor docstring
1533 #belongs to the object itself not __init__
1866 #belongs to the object itself not __init__
1534 ret += self._default_arguments_from_docstring(
1867 ret += self._default_arguments_from_docstring(
1535 getattr(obj, '__doc__', ''))
1868 getattr(obj, '__doc__', ''))
1536 # for classes, check for __init__,__new__
1869 # for classes, check for __init__,__new__
1537 call_obj = (getattr(obj, '__init__', None) or
1870 call_obj = (getattr(obj, '__init__', None) or
1538 getattr(obj, '__new__', None))
1871 getattr(obj, '__new__', None))
1539 # for all others, check if they are __call__able
1872 # for all others, check if they are __call__able
1540 elif hasattr(obj, '__call__'):
1873 elif hasattr(obj, '__call__'):
1541 call_obj = obj.__call__
1874 call_obj = obj.__call__
1542 ret += self._default_arguments_from_docstring(
1875 ret += self._default_arguments_from_docstring(
1543 getattr(call_obj, '__doc__', ''))
1876 getattr(call_obj, '__doc__', ''))
1544
1877
1545 _keeps = (inspect.Parameter.KEYWORD_ONLY,
1878 _keeps = (inspect.Parameter.KEYWORD_ONLY,
1546 inspect.Parameter.POSITIONAL_OR_KEYWORD)
1879 inspect.Parameter.POSITIONAL_OR_KEYWORD)
1547
1880
1548 try:
1881 try:
1549 sig = inspect.signature(obj)
1882 sig = inspect.signature(obj)
1550 ret.extend(k for k, v in sig.parameters.items() if
1883 ret.extend(k for k, v in sig.parameters.items() if
1551 v.kind in _keeps)
1884 v.kind in _keeps)
1552 except ValueError:
1885 except ValueError:
1553 pass
1886 pass
1554
1887
1555 return list(set(ret))
1888 return list(set(ret))
1556
1889
1890 @context_matcher()
1891 def python_func_kw_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
1892 matches = self.python_func_kw_matches(context.token)
1893 return _convert_matcher_v1_result_to_v2(matches, type="param")
1894
1557 def python_func_kw_matches(self, text):
1895 def python_func_kw_matches(self, text):
1558 """Match named parameters (kwargs) of the last open function"""
1896 """Match named parameters (kwargs) of the last open function"""
1559
1897
1560 if "." in text: # a parameter cannot be dotted
1898 if "." in text: # a parameter cannot be dotted
1561 return []
1899 return []
1562 try: regexp = self.__funcParamsRegex
1900 try: regexp = self.__funcParamsRegex
1563 except AttributeError:
1901 except AttributeError:
1564 regexp = self.__funcParamsRegex = re.compile(r'''
1902 regexp = self.__funcParamsRegex = re.compile(r'''
1565 '.*?(?<!\\)' | # single quoted strings or
1903 '.*?(?<!\\)' | # single quoted strings or
1566 ".*?(?<!\\)" | # double quoted strings or
1904 ".*?(?<!\\)" | # double quoted strings or
1567 \w+ | # identifier
1905 \w+ | # identifier
1568 \S # other characters
1906 \S # other characters
1569 ''', re.VERBOSE | re.DOTALL)
1907 ''', re.VERBOSE | re.DOTALL)
1570 # 1. find the nearest identifier that comes before an unclosed
1908 # 1. find the nearest identifier that comes before an unclosed
1571 # parenthesis before the cursor
1909 # parenthesis before the cursor
1572 # e.g. for "foo (1+bar(x), pa<cursor>,a=1)", the candidate is "foo"
1910 # e.g. for "foo (1+bar(x), pa<cursor>,a=1)", the candidate is "foo"
1573 tokens = regexp.findall(self.text_until_cursor)
1911 tokens = regexp.findall(self.text_until_cursor)
1574 iterTokens = reversed(tokens); openPar = 0
1912 iterTokens = reversed(tokens); openPar = 0
1575
1913
1576 for token in iterTokens:
1914 for token in iterTokens:
1577 if token == ')':
1915 if token == ')':
1578 openPar -= 1
1916 openPar -= 1
1579 elif token == '(':
1917 elif token == '(':
1580 openPar += 1
1918 openPar += 1
1581 if openPar > 0:
1919 if openPar > 0:
1582 # found the last unclosed parenthesis
1920 # found the last unclosed parenthesis
1583 break
1921 break
1584 else:
1922 else:
1585 return []
1923 return []
1586 # 2. Concatenate dotted names ("foo.bar" for "foo.bar(x, pa" )
1924 # 2. Concatenate dotted names ("foo.bar" for "foo.bar(x, pa" )
1587 ids = []
1925 ids = []
1588 isId = re.compile(r'\w+$').match
1926 isId = re.compile(r'\w+$').match
1589
1927
1590 while True:
1928 while True:
1591 try:
1929 try:
1592 ids.append(next(iterTokens))
1930 ids.append(next(iterTokens))
1593 if not isId(ids[-1]):
1931 if not isId(ids[-1]):
1594 ids.pop(); break
1932 ids.pop(); break
1595 if not next(iterTokens) == '.':
1933 if not next(iterTokens) == '.':
1596 break
1934 break
1597 except StopIteration:
1935 except StopIteration:
1598 break
1936 break
1599
1937
1600 # Find all named arguments already assigned to, as to avoid suggesting
1938 # Find all named arguments already assigned to, as to avoid suggesting
1601 # them again
1939 # them again
1602 usedNamedArgs = set()
1940 usedNamedArgs = set()
1603 par_level = -1
1941 par_level = -1
1604 for token, next_token in zip(tokens, tokens[1:]):
1942 for token, next_token in zip(tokens, tokens[1:]):
1605 if token == '(':
1943 if token == '(':
1606 par_level += 1
1944 par_level += 1
1607 elif token == ')':
1945 elif token == ')':
1608 par_level -= 1
1946 par_level -= 1
1609
1947
1610 if par_level != 0:
1948 if par_level != 0:
1611 continue
1949 continue
1612
1950
1613 if next_token != '=':
1951 if next_token != '=':
1614 continue
1952 continue
1615
1953
1616 usedNamedArgs.add(token)
1954 usedNamedArgs.add(token)
1617
1955
1618 argMatches = []
1956 argMatches = []
1619 try:
1957 try:
1620 callableObj = '.'.join(ids[::-1])
1958 callableObj = '.'.join(ids[::-1])
1621 namedArgs = self._default_arguments(eval(callableObj,
1959 namedArgs = self._default_arguments(eval(callableObj,
1622 self.namespace))
1960 self.namespace))
1623
1961
1624 # Remove used named arguments from the list, no need to show twice
1962 # Remove used named arguments from the list, no need to show twice
1625 for namedArg in set(namedArgs) - usedNamedArgs:
1963 for namedArg in set(namedArgs) - usedNamedArgs:
1626 if namedArg.startswith(text):
1964 if namedArg.startswith(text):
1627 argMatches.append("%s=" %namedArg)
1965 argMatches.append("%s=" %namedArg)
1628 except:
1966 except:
1629 pass
1967 pass
1630
1968
1631 return argMatches
1969 return argMatches
1632
1970
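A standalone sketch of the backward token scan that ``python_func_kw_matches`` performs (per the comments above) to find the callable owning the last unclosed parenthesis. This is an illustration with a simplified tokenizer, not the method itself:

import re

# Hedged sketch: simplified version of the scan described in the comments of
# python_func_kw_matches; names and structure are illustrative.
tokenizer = re.compile(r"""
    '.*?(?<!\\)' |   # single quoted strings
    ".*?(?<!\\)" |   # double quoted strings
    \w+          |   # identifiers
    \S               # other characters
    """, re.VERBOSE | re.DOTALL)

def callable_before_cursor(text_until_cursor):
    """Return the identifier right before the last unclosed '(' (or None)."""
    tokens = tokenizer.findall(text_until_cursor)
    open_par = 0
    it = reversed(tokens)
    for token in it:
        if token == ')':
            open_par -= 1
        elif token == '(':
            open_par += 1
            if open_par > 0:
                break
    else:
        return None
    return next(it, None)

print(callable_before_cursor("foo(1+bar(x), pa"))  # -> 'foo'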
1633 @staticmethod
1971 @staticmethod
1634 def _get_keys(obj: Any) -> List[Any]:
1972 def _get_keys(obj: Any) -> List[Any]:
1635 # Objects can define their own completions by defining an
1973 # Objects can define their own completions by defining an
1636 # _ipython_key_completions_() method.
1974 # _ipython_key_completions_() method.
1637 method = get_real_method(obj, '_ipython_key_completions_')
1975 method = get_real_method(obj, '_ipython_key_completions_')
1638 if method is not None:
1976 if method is not None:
1639 return method()
1977 return method()
1640
1978
1641 # Special case some common in-memory dict-like types
1979 # Special case some common in-memory dict-like types
1642 if isinstance(obj, dict) or\
1980 if isinstance(obj, dict) or\
1643 _safe_isinstance(obj, 'pandas', 'DataFrame'):
1981 _safe_isinstance(obj, 'pandas', 'DataFrame'):
1644 try:
1982 try:
1645 return list(obj.keys())
1983 return list(obj.keys())
1646 except Exception:
1984 except Exception:
1647 return []
1985 return []
1648 elif _safe_isinstance(obj, 'numpy', 'ndarray') or\
1986 elif _safe_isinstance(obj, 'numpy', 'ndarray') or\
1649 _safe_isinstance(obj, 'numpy', 'void'):
1987 _safe_isinstance(obj, 'numpy', 'void'):
1650 return obj.dtype.names or []
1988 return obj.dtype.names or []
1651 return []
1989 return []
1652
1990
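The hook looked up above is a public integration point: any object can advertise its keys to the completer. A minimal sketch of an object opting in (the class and data are illustrative):

# Minimal sketch: a mapping-like object exposing keys to IPython through the
# _ipython_key_completions_ hook that _get_keys looks up.
class Settings:
    def __init__(self, data):
        self._data = dict(data)

    def __getitem__(self, key):
        return self._data[key]

    def _ipython_key_completions_(self):
        # whatever is returned here is offered after `obj[<TAB>`
        return list(self._data)

s = Settings({"host": "localhost", "port": 8080})
print(s._ipython_key_completions_())  # ['host', 'port']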
1653 def dict_key_matches(self, text:str) -> List[str]:
1991 @context_matcher()
1654 "Match string keys in a dictionary, after e.g. 'foo[' "
1992 def dict_key_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
1993 matches = self.dict_key_matches(context.token)
1994 return _convert_matcher_v1_result_to_v2(
1995 matches, type="dict key", suppress_if_matches=True
1996 )
1997
1998 def dict_key_matches(self, text: str) -> List[str]:
1999 """Match string keys in a dictionary, after e.g. ``foo[``.
1655
2000
2001 DEPRECATED: Deprecated since 8.5. Use ``dict_key_matcher`` instead.
2002 """
1656
2003
1657 if self.__dict_key_regexps is not None:
2004 if self.__dict_key_regexps is not None:
1658 regexps = self.__dict_key_regexps
2005 regexps = self.__dict_key_regexps
1659 else:
2006 else:
1660 dict_key_re_fmt = r'''(?x)
2007 dict_key_re_fmt = r'''(?x)
1661 ( # match dict-referring expression wrt greedy setting
2008 ( # match dict-referring expression wrt greedy setting
1662 %s
2009 %s
1663 )
2010 )
1664 \[ # open bracket
2011 \[ # open bracket
1665 \s* # and optional whitespace
2012 \s* # and optional whitespace
1666 # Capture any number of str-like objects (e.g. "a", "b", 'c')
2013 # Capture any number of str-like objects (e.g. "a", "b", 'c')
1667 ((?:[uUbB]? # string prefix (r not handled)
2014 ((?:[uUbB]? # string prefix (r not handled)
1668 (?:
2015 (?:
1669 '(?:[^']|(?<!\\)\\')*'
2016 '(?:[^']|(?<!\\)\\')*'
1670 |
2017 |
1671 "(?:[^"]|(?<!\\)\\")*"
2018 "(?:[^"]|(?<!\\)\\")*"
1672 )
2019 )
1673 \s*,\s*
2020 \s*,\s*
1674 )*)
2021 )*)
1675 ([uUbB]? # string prefix (r not handled)
2022 ([uUbB]? # string prefix (r not handled)
1676 (?: # unclosed string
2023 (?: # unclosed string
1677 '(?:[^']|(?<!\\)\\')*
2024 '(?:[^']|(?<!\\)\\')*
1678 |
2025 |
1679 "(?:[^"]|(?<!\\)\\")*
2026 "(?:[^"]|(?<!\\)\\")*
1680 )
2027 )
1681 )?
2028 )?
1682 $
2029 $
1683 '''
2030 '''
1684 regexps = self.__dict_key_regexps = {
2031 regexps = self.__dict_key_regexps = {
1685 False: re.compile(dict_key_re_fmt % r'''
2032 False: re.compile(dict_key_re_fmt % r'''
1686 # identifiers separated by .
2033 # identifiers separated by .
1687 (?!\d)\w+
2034 (?!\d)\w+
1688 (?:\.(?!\d)\w+)*
2035 (?:\.(?!\d)\w+)*
1689 '''),
2036 '''),
1690 True: re.compile(dict_key_re_fmt % '''
2037 True: re.compile(dict_key_re_fmt % '''
1691 .+
2038 .+
1692 ''')
2039 ''')
1693 }
2040 }
1694
2041
1695 match = regexps[self.greedy].search(self.text_until_cursor)
2042 match = regexps[self.greedy].search(self.text_until_cursor)
1696
2043
1697 if match is None:
2044 if match is None:
1698 return []
2045 return []
1699
2046
1700 expr, prefix0, prefix = match.groups()
2047 expr, prefix0, prefix = match.groups()
1701 try:
2048 try:
1702 obj = eval(expr, self.namespace)
2049 obj = eval(expr, self.namespace)
1703 except Exception:
2050 except Exception:
1704 try:
2051 try:
1705 obj = eval(expr, self.global_namespace)
2052 obj = eval(expr, self.global_namespace)
1706 except Exception:
2053 except Exception:
1707 return []
2054 return []
1708
2055
1709 keys = self._get_keys(obj)
2056 keys = self._get_keys(obj)
1710 if not keys:
2057 if not keys:
1711 return keys
2058 return keys
1712
2059
1713 extra_prefix = eval(prefix0) if prefix0 != '' else None
2060 extra_prefix = eval(prefix0) if prefix0 != '' else None
1714
2061
1715 closing_quote, token_offset, matches = match_dict_keys(keys, prefix, self.splitter.delims, extra_prefix=extra_prefix)
2062 closing_quote, token_offset, matches = match_dict_keys(keys, prefix, self.splitter.delims, extra_prefix=extra_prefix)
1716 if not matches:
2063 if not matches:
1717 return matches
2064 return matches
1718
2065
1719 # get the cursor position of
2066 # get the cursor position of
1720 # - the text being completed
2067 # - the text being completed
1721 # - the start of the key text
2068 # - the start of the key text
1722 # - the start of the completion
2069 # - the start of the completion
1723 text_start = len(self.text_until_cursor) - len(text)
2070 text_start = len(self.text_until_cursor) - len(text)
1724 if prefix:
2071 if prefix:
1725 key_start = match.start(3)
2072 key_start = match.start(3)
1726 completion_start = key_start + token_offset
2073 completion_start = key_start + token_offset
1727 else:
2074 else:
1728 key_start = completion_start = match.end()
2075 key_start = completion_start = match.end()
1729
2076
1730 # grab the leading prefix, to make sure all completions start with `text`
2077 # grab the leading prefix, to make sure all completions start with `text`
1731 if text_start > key_start:
2078 if text_start > key_start:
1732 leading = ''
2079 leading = ''
1733 else:
2080 else:
1734 leading = text[text_start:completion_start]
2081 leading = text[text_start:completion_start]
1735
2082
1736 # the index of the `[` character
2083 # the index of the `[` character
1737 bracket_idx = match.end(1)
2084 bracket_idx = match.end(1)
1738
2085
1739 # append closing quote and bracket as appropriate
2086 # append closing quote and bracket as appropriate
1740 # this is *not* appropriate if the opening quote or bracket is outside
2087 # this is *not* appropriate if the opening quote or bracket is outside
1741 # the text given to this method
2088 # the text given to this method
1742 suf = ''
2089 suf = ''
1743 continuation = self.line_buffer[len(self.text_until_cursor):]
2090 continuation = self.line_buffer[len(self.text_until_cursor):]
1744 if key_start > text_start and closing_quote:
2091 if key_start > text_start and closing_quote:
1745 # quotes were opened inside text, maybe close them
2092 # quotes were opened inside text, maybe close them
1746 if continuation.startswith(closing_quote):
2093 if continuation.startswith(closing_quote):
1747 continuation = continuation[len(closing_quote):]
2094 continuation = continuation[len(closing_quote):]
1748 else:
2095 else:
1749 suf += closing_quote
2096 suf += closing_quote
1750 if bracket_idx > text_start:
2097 if bracket_idx > text_start:
1751 # brackets were opened inside text, maybe close them
2098 # brackets were opened inside text, maybe close them
1752 if not continuation.startswith(']'):
2099 if not continuation.startswith(']'):
1753 suf += ']'
2100 suf += ']'
1754
2101
1755 return [leading + k + suf for k in matches]
2102 return [leading + k + suf for k in matches]
1756
2103
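A hedged end-to-end sketch of the behaviour implemented above, driven through the shell's completer inside a running IPython session (the variable name and keys are illustrative, and the exact formatting of the returned matches may differ):

# Assumes a running IPython session; `data` is an illustrative variable.
ip = get_ipython()
ip.user_ns["data"] = {"alpha": 1, "beta": 2}
text, matches = ip.Completer.complete(line_buffer="data['")
print(matches)  # expected to offer the 'alpha' and 'beta' keys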
2104 @context_matcher()
2105 def unicode_name_matcher(self, context):
2106 fragment, matches = self.unicode_name_matches(context.token)
2107 return _convert_matcher_v1_result_to_v2(
2108 matches, type="unicode", fragment=fragment, suppress_if_matches=True
2109 )
2110
1757 @staticmethod
2111 @staticmethod
1758 def unicode_name_matches(text:str) -> Tuple[str, List[str]] :
2112 def unicode_name_matches(text: str) -> Tuple[str, List[str]]:
1759 """Match Latex-like syntax for unicode characters base
2113 """Match Latex-like syntax for unicode characters base
1760 on the name of the character.
2114 on the name of the character.
1761
2115
1762 This does ``\\GREEK SMALL LETTER ETA`` -> ``η``
2116 This does ``\\GREEK SMALL LETTER ETA`` -> ``η``
1763
2117
1764 Works only on valid Python 3 identifiers, or on combining characters that
2118 Works only on valid Python 3 identifiers, or on combining characters that
1765 will combine to form a valid identifier.
2119 will combine to form a valid identifier.
1766 """
2120 """
1767 slashpos = text.rfind('\\')
2121 slashpos = text.rfind('\\')
1768 if slashpos > -1:
2122 if slashpos > -1:
1769 s = text[slashpos+1:]
2123 s = text[slashpos+1:]
1770 try :
2124 try :
1771 unic = unicodedata.lookup(s)
2125 unic = unicodedata.lookup(s)
1772 # allow combining chars
2126 # allow combining chars
1773 if ('a'+unic).isidentifier():
2127 if ('a'+unic).isidentifier():
1774 return '\\'+s,[unic]
2128 return '\\'+s,[unic]
1775 except KeyError:
2129 except KeyError:
1776 pass
2130 pass
1777 return '', []
2131 return '', []
1778
2132
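Since ``unicode_name_matches`` is a static method with no completer state, it can be exercised directly; a quick check of the behaviour documented above:

from IPython.core.completer import IPCompleter

# \GREEK SMALL LETTER ETA resolves to the single character η
print(IPCompleter.unicode_name_matches("\\GREEK SMALL LETTER ETA"))
# -> ('\\GREEK SMALL LETTER ETA', ['η'])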
2133 @context_matcher()
2134 def latex_name_matcher(self, context):
2135 fragment, matches = self.latex_matches(context.token)
2136 return _convert_matcher_v1_result_to_v2(
2137 matches, type="latex", fragment=fragment, suppress_if_matches=True
2138 )
1779
2139
1780 def latex_matches(self, text:str) -> Tuple[str, Sequence[str]]:
2140 def latex_matches(self, text: str) -> Tuple[str, Sequence[str]]:
1781 """Match Latex syntax for unicode characters.
2141 """Match Latex syntax for unicode characters.
1782
2142
1783 This does both ``\\alp`` -> ``\\alpha`` and ``\\alpha`` -> ``α``
2143 This does both ``\\alp`` -> ``\\alpha`` and ``\\alpha`` -> ``α``
1784 """
2144 """
1785 slashpos = text.rfind('\\')
2145 slashpos = text.rfind('\\')
1786 if slashpos > -1:
2146 if slashpos > -1:
1787 s = text[slashpos:]
2147 s = text[slashpos:]
1788 if s in latex_symbols:
2148 if s in latex_symbols:
1789 # Try to complete a full latex symbol to unicode
2149 # Try to complete a full latex symbol to unicode
1790 # \\alpha -> α
2150 # \\alpha -> α
1791 return s, [latex_symbols[s]]
2151 return s, [latex_symbols[s]]
1792 else:
2152 else:
1793 # If a user has partially typed a latex symbol, give them
2153 # If a user has partially typed a latex symbol, give them
1794 # a full list of options \al -> [\aleph, \alpha]
2154 # a full list of options \al -> [\aleph, \alpha]
1795 matches = [k for k in latex_symbols if k.startswith(s)]
2155 matches = [k for k in latex_symbols if k.startswith(s)]
1796 if matches:
2156 if matches:
1797 return s, matches
2157 return s, matches
1798 return '', ()
2158 return '', ()
1799
2159
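The table backing this matcher is a plain dict shipped with IPython; a short sketch of the two behaviours described in the docstring (full name to character, partial name to candidate names):

from IPython.core.latex_symbols import latex_symbols

print(latex_symbols["\\alpha"])                            # 'α'
print([k for k in latex_symbols if k.startswith("\\al")])  # e.g. ['\\aleph', '\\alpha', ...]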
2160 @context_matcher()
2161 def custom_completer_matcher(self, context):
2162 matches = self.dispatch_custom_completer(context.token) or []
2163 result = _convert_matcher_v1_result_to_v2(
2164 matches, type="<unknown>", suppress_if_matches=True
2165 )
2166 result["ordered"] = True
2167 return result
2168
1800 def dispatch_custom_completer(self, text):
2169 def dispatch_custom_completer(self, text):
1801 if not self.custom_completers:
2170 if not self.custom_completers:
1802 return
2171 return
1803
2172
1804 line = self.line_buffer
2173 line = self.line_buffer
1805 if not line.strip():
2174 if not line.strip():
1806 return None
2175 return None
1807
2176
1808 # Create a little structure to pass all the relevant information about
2177 # Create a little structure to pass all the relevant information about
1809 # the current completion to any custom completer.
2178 # the current completion to any custom completer.
1810 event = SimpleNamespace()
2179 event = SimpleNamespace()
1811 event.line = line
2180 event.line = line
1812 event.symbol = text
2181 event.symbol = text
1813 cmd = line.split(None,1)[0]
2182 cmd = line.split(None,1)[0]
1814 event.command = cmd
2183 event.command = cmd
1815 event.text_until_cursor = self.text_until_cursor
2184 event.text_until_cursor = self.text_until_cursor
1816
2185
1817 # for foo etc, try also to find completer for %foo
2186 # for foo etc, try also to find completer for %foo
1818 if not cmd.startswith(self.magic_escape):
2187 if not cmd.startswith(self.magic_escape):
1819 try_magic = self.custom_completers.s_matches(
2188 try_magic = self.custom_completers.s_matches(
1820 self.magic_escape + cmd)
2189 self.magic_escape + cmd)
1821 else:
2190 else:
1822 try_magic = []
2191 try_magic = []
1823
2192
1824 for c in itertools.chain(self.custom_completers.s_matches(cmd),
2193 for c in itertools.chain(self.custom_completers.s_matches(cmd),
1825 try_magic,
2194 try_magic,
1826 self.custom_completers.flat_matches(self.text_until_cursor)):
2195 self.custom_completers.flat_matches(self.text_until_cursor)):
1827 try:
2196 try:
1828 res = c(event)
2197 res = c(event)
1829 if res:
2198 if res:
1830 # first, try case sensitive match
2199 # first, try case sensitive match
1831 withcase = [r for r in res if r.startswith(text)]
2200 withcase = [r for r in res if r.startswith(text)]
1832 if withcase:
2201 if withcase:
1833 return withcase
2202 return withcase
1834 # if none, then case insensitive ones are ok too
2203 # if none, then case insensitive ones are ok too
1835 text_low = text.lower()
2204 text_low = text.lower()
1836 return [r for r in res if r.lower().startswith(text_low)]
2205 return [r for r in res if r.lower().startswith(text_low)]
1837 except TryNext:
2206 except TryNext:
1838 pass
2207 pass
1839 except KeyboardInterrupt:
2208 except KeyboardInterrupt:
1840 """
2209 """
1841 If the custom completer takes too long,
2210 If the custom completer takes too long,
1842 let keyboard interrupt abort and return nothing.
2211 let keyboard interrupt abort and return nothing.
1843 """
2212 """
1844 break
2213 break
1845
2214
1846 return None
2215 return None
1847
2216
1848 def completions(self, text: str, offset: int)->Iterator[Completion]:
2217 def completions(self, text: str, offset: int)->Iterator[Completion]:
1849 """
2218 """
1850 Returns an iterator over the possible completions
2219 Returns an iterator over the possible completions
1851
2220
1852 .. warning::
2221 .. warning::
1853
2222
1854 Unstable
2223 Unstable
1855
2224
1856 This function is unstable, API may change without warning.
2225 This function is unstable, API may change without warning.
1857 It will also raise unless used in a proper context manager.
2226 It will also raise unless used in a proper context manager.
1858
2227
1859 Parameters
2228 Parameters
1860 ----------
2229 ----------
1861 text : str
2230 text : str
1862 Full text of the current input, multi line string.
2231 Full text of the current input, multi line string.
1863 offset : int
2232 offset : int
1864 Integer representing the position of the cursor in ``text``. Offset
2233 Integer representing the position of the cursor in ``text``. Offset
1865 is 0-based indexed.
2234 is 0-based indexed.
1866
2235
1867 Yields
2236 Yields
1868 ------
2237 ------
1869 Completion
2238 Completion
1870
2239
1871 Notes
2240 Notes
1872 -----
2241 -----
1873 The cursor on a text can either be seen as being "in between"
2242 The cursor on a text can either be seen as being "in between"
1874 characters or "On" a character depending on the interface visible to
2243 characters or "On" a character depending on the interface visible to
1875 the user. For consistency, the cursor being "in between" characters X
2244 the user. For consistency, the cursor being "in between" characters X
1876 and Y is equivalent to the cursor being "on" character Y, that is to say
2245 and Y is equivalent to the cursor being "on" character Y, that is to say
1877 the character the cursor is on is considered as being after the cursor.
2246 the character the cursor is on is considered as being after the cursor.
1878
2247
1879 Combining characters may span more than one position in the
2248 Combining characters may span more than one position in the
1880 text.
2249 text.
1881
2250
1882 .. note::
2251 .. note::
1883
2252
1884 If ``IPCompleter.debug`` is :any:`True`, this will yield a ``--jedi/ipython--``
2253 If ``IPCompleter.debug`` is :any:`True`, this will yield a ``--jedi/ipython--``
1885 fake Completion token to distinguish completions returned by Jedi
2254 fake Completion token to distinguish completions returned by Jedi
1886 from the usual IPython completions.
2255 from the usual IPython completions.
1887
2256
1888 .. note::
2257 .. note::
1889
2258
1890 Completions are not completely deduplicated yet. If identical
2259 Completions are not completely deduplicated yet. If identical
1891 completions are coming from different sources this function does not
2260 completions are coming from different sources this function does not
1892 ensure that each completion object will only be present once.
2261 ensure that each completion object will only be present once.
1893 """
2262 """
1894 warnings.warn("_complete is a provisional API (as of IPython 6.0). "
2263 warnings.warn("_complete is a provisional API (as of IPython 6.0). "
1895 "It may change without warnings. "
2264 "It may change without warnings. "
1896 "Use in corresponding context manager.",
2265 "Use in corresponding context manager.",
1897 category=ProvisionalCompleterWarning, stacklevel=2)
2266 category=ProvisionalCompleterWarning, stacklevel=2)
1898
2267
1899 seen = set()
2268 seen = set()
1900 profiler:Optional[cProfile.Profile]
2269 profiler:Optional[cProfile.Profile]
1901 try:
2270 try:
1902 if self.profile_completions:
2271 if self.profile_completions:
1903 import cProfile
2272 import cProfile
1904 profiler = cProfile.Profile()
2273 profiler = cProfile.Profile()
1905 profiler.enable()
2274 profiler.enable()
1906 else:
2275 else:
1907 profiler = None
2276 profiler = None
1908
2277
1909 for c in self._completions(text, offset, _timeout=self.jedi_compute_type_timeout/1000):
2278 for c in self._completions(text, offset, _timeout=self.jedi_compute_type_timeout/1000):
1910 if c and (c in seen):
2279 if c and (c in seen):
1911 continue
2280 continue
1912 yield c
2281 yield c
1913 seen.add(c)
2282 seen.add(c)
1914 except KeyboardInterrupt:
2283 except KeyboardInterrupt:
1915 """if completions take too long and users send keyboard interrupt,
2284 """if completions take too long and users send keyboard interrupt,
1916 do not crash and return ASAP. """
2285 do not crash and return ASAP. """
1917 pass
2286 pass
1918 finally:
2287 finally:
1919 if profiler is not None:
2288 if profiler is not None:
1920 profiler.disable()
2289 profiler.disable()
1921 ensure_dir_exists(self.profiler_output_dir)
2290 ensure_dir_exists(self.profiler_output_dir)
1922 output_path = os.path.join(self.profiler_output_dir, str(uuid.uuid4()))
2291 output_path = os.path.join(self.profiler_output_dir, str(uuid.uuid4()))
1923 print("Writing profiler output to", output_path)
2292 print("Writing profiler output to", output_path)
1924 profiler.dump_stats(output_path)
2293 profiler.dump_stats(output_path)
1925
2294
1926 def _completions(self, full_text: str, offset: int, *, _timeout) -> Iterator[Completion]:
2295 def _completions(self, full_text: str, offset: int, *, _timeout) -> Iterator[Completion]:
1927 """
2296 """
1928 Core completion module. Same signature as :any:`completions`, with the
2297 Core completion module. Same signature as :any:`completions`, with the
1929 extra `timeout` parameter (in seconds).
2298 extra `timeout` parameter (in seconds).
1930
2299
1931 Computing jedi's completion ``.type`` can be quite expensive (it is a
2300 Computing jedi's completion ``.type`` can be quite expensive (it is a
1932 lazy property) and can require some warm-up, more warm-up than just
2301 lazy property) and can require some warm-up, more warm-up than just
1933 computing the ``name`` of a completion. The warm-up can be:
2302 computing the ``name`` of a completion. The warm-up can be:
1934
2303
1935 - Long warm-up the first time a module is encountered after
2304 - Long warm-up the first time a module is encountered after
1936 install/update: actually build parse/inference tree.
2305 install/update: actually build parse/inference tree.
1937
2306
1938 - first time the module is encountered in a session: load tree from
2307 - first time the module is encountered in a session: load tree from
1939 disk.
2308 disk.
1940
2309
1941 We don't want to block completions for tens of seconds so we give the
2310 We don't want to block completions for tens of seconds so we give the
1942 completer a "budget" of ``_timeout`` seconds per invocation to compute
2311 completer a "budget" of ``_timeout`` seconds per invocation to compute
1943 completion types; the completions that have not yet been computed will
2312 completion types; the completions that have not yet been computed will
1944 be marked as "unknown" and will have a chance to be computed next round
2313 be marked as "unknown" and will have a chance to be computed next round
1945 as things get cached.
2314 as things get cached.
1946
2315
1947 Keep in mind that Jedi is not the only thing processing the completion, so
2316 Keep in mind that Jedi is not the only thing processing the completion, so
1948 keep the timeout short-ish: if we take more than 0.3 seconds we still
2317 keep the timeout short-ish: if we take more than 0.3 seconds we still
1949 have lots of processing to do.
2318 have lots of processing to do.
1950
2319
1951 """
2320 """
1952 deadline = time.monotonic() + _timeout
2321 deadline = time.monotonic() + _timeout
1953
2322
1954
1955 before = full_text[:offset]
2323 before = full_text[:offset]
1956 cursor_line, cursor_column = position_to_cursor(full_text, offset)
2324 cursor_line, cursor_column = position_to_cursor(full_text, offset)
1957
2325
1958 matched_text, matches, matches_origin, jedi_matches = self._complete(
2326 jedi_matcher_id = _get_matcher_id(self._jedi_matcher)
1959 full_text=full_text, cursor_line=cursor_line, cursor_pos=cursor_column)
2327
2328 results = self._complete(
2329 full_text=full_text, cursor_line=cursor_line, cursor_pos=cursor_column
2330 )
2331 non_jedi_results: Dict[str, SimpleMatcherResult] = {
2332 identifier: result
2333 for identifier, result in results.items()
2334 if identifier != jedi_matcher_id
2335 }
2336
2337 jedi_matches = (
2338 cast(_JediMatcherResult, results[jedi_matcher_id])["completions"]
2339 if jedi_matcher_id in results
2340 else ()
2341 )
1960
2342
1961 iter_jm = iter(jedi_matches)
2343 iter_jm = iter(jedi_matches)
1962 if _timeout:
2344 if _timeout:
1963 for jm in iter_jm:
2345 for jm in iter_jm:
1964 try:
2346 try:
1965 type_ = jm.type
2347 type_ = jm.type
1966 except Exception:
2348 except Exception:
1967 if self.debug:
2349 if self.debug:
1968 print("Error in Jedi getting type of ", jm)
2350 print("Error in Jedi getting type of ", jm)
1969 type_ = None
2351 type_ = None
1970 delta = len(jm.name_with_symbols) - len(jm.complete)
2352 delta = len(jm.name_with_symbols) - len(jm.complete)
1971 if type_ == 'function':
2353 if type_ == 'function':
1972 signature = _make_signature(jm)
2354 signature = _make_signature(jm)
1973 else:
2355 else:
1974 signature = ''
2356 signature = ''
1975 yield Completion(start=offset - delta,
2357 yield Completion(start=offset - delta,
1976 end=offset,
2358 end=offset,
1977 text=jm.name_with_symbols,
2359 text=jm.name_with_symbols,
1978 type=type_,
2360 type=type_,
1979 signature=signature,
2361 signature=signature,
1980 _origin='jedi')
2362 _origin='jedi')
1981
2363
1982 if time.monotonic() > deadline:
2364 if time.monotonic() > deadline:
1983 break
2365 break
1984
2366
1985 for jm in iter_jm:
2367 for jm in iter_jm:
1986 delta = len(jm.name_with_symbols) - len(jm.complete)
2368 delta = len(jm.name_with_symbols) - len(jm.complete)
1987 yield Completion(start=offset - delta,
2369 yield Completion(
1988 end=offset,
2370 start=offset - delta,
1989 text=jm.name_with_symbols,
2371 end=offset,
1990 type='<unknown>', # don't compute type for speed
2372 text=jm.name_with_symbols,
1991 _origin='jedi',
2373 type=_UNKNOWN_TYPE, # don't compute type for speed
1992 signature='')
2374 _origin="jedi",
1993
2375 signature="",
1994
2376 )
1995 start_offset = before.rfind(matched_text)
1996
2377
1997 # TODO:
2378 # TODO:
1998 # Suppress this, right now just for debug.
2379 # Suppress this, right now just for debug.
1999 if jedi_matches and matches and self.debug:
2380 if jedi_matches and non_jedi_results and self.debug:
2000 yield Completion(start=start_offset, end=offset, text='--jedi/ipython--',
2381 some_start_offset = before.rfind(
2001 _origin='debug', type='none', signature='')
2382 next(iter(non_jedi_results.values()))["matched_fragment"]
2383 )
2384 yield Completion(
2385 start=some_start_offset,
2386 end=offset,
2387 text="--jedi/ipython--",
2388 _origin="debug",
2389 type="none",
2390 signature="",
2391 )
2002
2392
2003 # I'm unsure if this is always true, so let's assert and see if it
2393 ordered = []
2004 # crash
2394 sortable = []
2005 assert before.endswith(matched_text)
2395
2006 for m, t in zip(matches, matches_origin):
2396 for origin, result in non_jedi_results.items():
2007 yield Completion(start=start_offset, end=offset, text=m, _origin=t, signature='', type='<unknown>')
2397 matched_text = result["matched_fragment"]
2398 start_offset = before.rfind(matched_text)
2399 is_ordered = result.get("ordered", False)
2400 container = ordered if is_ordered else sortable
2401
2402 # I'm unsure if this is always true, so let's assert and see if it
2403 # crash
2404 assert before.endswith(matched_text)
2405
2406 for simple_completion in result["completions"]:
2407 completion = Completion(
2408 start=start_offset,
2409 end=offset,
2410 text=simple_completion.text,
2411 _origin=origin,
2412 signature="",
2413 type=simple_completion.type or _UNKNOWN_TYPE,
2414 )
2415 container.append(completion)
2416
2417 yield from self._deduplicate(ordered + self._sort(sortable))
2008
2418
2009
2419
2010 def complete(self, text=None, line_buffer=None, cursor_pos=None) -> Tuple[str, Sequence[str]]:
2420 def complete(self, text=None, line_buffer=None, cursor_pos=None) -> Tuple[str, Sequence[str]]:
2011 """Find completions for the given text and line context.
2421 """Find completions for the given text and line context.
2012
2422
2013 Note that both the text and the line_buffer are optional, but at least
2423 Note that both the text and the line_buffer are optional, but at least
2014 one of them must be given.
2424 one of them must be given.
2015
2425
2016 Parameters
2426 Parameters
2017 ----------
2427 ----------
2018 text : string, optional
2428 text : string, optional
2019 Text to perform the completion on. If not given, the line buffer
2429 Text to perform the completion on. If not given, the line buffer
2020 is split using the instance's CompletionSplitter object.
2430 is split using the instance's CompletionSplitter object.
2021 line_buffer : string, optional
2431 line_buffer : string, optional
2022 If not given, the completer attempts to obtain the current line
2432 If not given, the completer attempts to obtain the current line
2023 buffer via readline. This keyword allows clients which are
2433 buffer via readline. This keyword allows clients which are
2024 requesting text completions in non-readline contexts to inform
2434 requesting text completions in non-readline contexts to inform
2025 the completer of the entire text.
2435 the completer of the entire text.
2026 cursor_pos : int, optional
2436 cursor_pos : int, optional
2027 Index of the cursor in the full line buffer. Should be provided by
2437 Index of the cursor in the full line buffer. Should be provided by
2028 remote frontends where kernel has no access to frontend state.
2438 remote frontends where kernel has no access to frontend state.
2029
2439
2030 Returns
2440 Returns
2031 -------
2441 -------
2032 Tuple of two items:
2442 Tuple of two items:
2033 text : str
2443 text : str
2034 Text that was actually used in the completion.
2444 Text that was actually used in the completion.
2035 matches : list
2445 matches : list
2036 A list of completion matches.
2446 A list of completion matches.
2037
2447
2038 Notes
2448 Notes
2039 -----
2449 -----
2040 This API is likely to be deprecated and replaced by
2450 This API is likely to be deprecated and replaced by
2041 :any:`IPCompleter.completions` in the future.
2451 :any:`IPCompleter.completions` in the future.
2042
2452
2043 """
2453 """
2044 warnings.warn('`Completer.complete` is pending deprecation since '
2454 warnings.warn('`Completer.complete` is pending deprecation since '
2045 'IPython 6.0 and will be replaced by `Completer.completions`.',
2455 'IPython 6.0 and will be replaced by `Completer.completions`.',
2046 PendingDeprecationWarning)
2456 PendingDeprecationWarning)
2047 # potential todo, FOLD the 3rd throw away argument of _complete
2457 # potential todo, FOLD the 3rd throw away argument of _complete
2048 # into the first 2 one.
2458 # into the first 2 one.
2049 return self._complete(line_buffer=line_buffer, cursor_pos=cursor_pos, text=text, cursor_line=0)[:2]
2459 # TODO: Q: does the above refer to jedi completions (i.e. 0-indexed?)
2460 # TODO: should we deprecate now, or does it stay?
2461
2462 results = self._complete(
2463 line_buffer=line_buffer, cursor_pos=cursor_pos, text=text, cursor_line=0
2464 )
2465
2466 jedi_matcher_id = _get_matcher_id(self._jedi_matcher)
2467
2468 return self._arrange_and_extract(
2469 results,
2470 # TODO: can we confirm that excluding Jedi here was a deliberate choice in previous version?
2471 skip_matchers={jedi_matcher_id},
2472 # this API does not support different start/end positions (fragments of token).
2473 abort_if_offset_changes=True,
2474 )
2475
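For comparison with the new matcher-based flow, a hedged sketch of what the legacy API returns inside a running IPython session (the module names shown are only the expected kind of output):

ip = get_ipython()  # assumes a running IPython session
matched_text, matches = ip.Completer.complete(line_buffer="import col")
print(matched_text)   # 'col'
print(matches[:3])    # e.g. ['collections', 'colorsys', ...]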
2476 def _arrange_and_extract(
2477 self,
2478 results: Dict[str, MatcherResult],
2479 skip_matchers: Set[str],
2480 abort_if_offset_changes: bool,
2481 ):
2482
2483 sortable = []
2484 ordered = []
2485 most_recent_fragment = None
2486 for identifier, result in results.items():
2487 if identifier in skip_matchers:
2488 continue
2489 if not most_recent_fragment:
2490 most_recent_fragment = result["matched_fragment"]
2491 if (
2492 abort_if_offset_changes
2493 and result["matched_fragment"] != most_recent_fragment
2494 ):
2495 break
2496 if result.get("ordered", False):
2497 ordered.extend(result["completions"])
2498 else:
2499 sortable.extend(result["completions"])
2500
2501 if not most_recent_fragment:
2502 most_recent_fragment = "" # to satisfy typechecker (and just in case)
2503
2504 return most_recent_fragment, [
2505 m.text for m in self._deduplicate(ordered + self._sort(sortable))
2506 ]
2050
2507
2051 def _complete(self, *, cursor_line, cursor_pos, line_buffer=None, text=None,
2508 def _complete(self, *, cursor_line, cursor_pos, line_buffer=None, text=None,
2052 full_text=None) -> _CompleteResult:
2509 full_text=None) -> _CompleteResult:
2053 """
2510 """
2054 Like complete but can also return raw jedi completions as well as the
2511 Like complete but can also return raw jedi completions as well as the
2055 origin of the completion text. This could (and should) be made much
2512 origin of the completion text. This could (and should) be made much
2056 cleaner but that will be simpler once we drop the old (and stateful)
2513 cleaner but that will be simpler once we drop the old (and stateful)
2057 :any:`complete` API.
2514 :any:`complete` API.
2058
2515
2059 With the current provisional API, cursor_pos acts both (depending on the
2516 With the current provisional API, cursor_pos acts both (depending on the
2060 caller) as the offset in the ``text`` or ``line_buffer``, or as the
2517 caller) as the offset in the ``text`` or ``line_buffer``, or as the
2061 ``column`` when passing multiline strings. This could/should be renamed
2518 ``column`` when passing multiline strings. This could/should be renamed
2062 but would add extra noise.
2519 but would add extra noise.
2063
2520
2064 Parameters
2521 Parameters
2065 ----------
2522 ----------
2066 cursor_line
2523 cursor_line
2067 Index of the line the cursor is on. 0 indexed.
2524 Index of the line the cursor is on. 0 indexed.
2068 cursor_pos
2525 cursor_pos
2069 Position of the cursor in the current line/line_buffer/text. 0
2526 Position of the cursor in the current line/line_buffer/text. 0
2070 indexed.
2527 indexed.
2071 line_buffer : optional, str
2528 line_buffer : optional, str
2072 The current line the cursor is in; this is mostly due to legacy
2529 The current line the cursor is in; this is mostly due to legacy
2073 reasons that readline could only give us the single current line.
2530 reasons that readline could only give us the single current line.
2074 Prefer `full_text`.
2531 Prefer `full_text`.
2075 text : str
2532 text : str
2076 The current "token" the cursor is in, mostly also for historical
2533 The current "token" the cursor is in, mostly also for historical
2077 reasons, as the completer would trigger only after the current line
2534 reasons, as the completer would trigger only after the current line
2078 was parsed.
2535 was parsed.
2079 full_text : str
2536 full_text : str
2080 Full text of the current cell.
2537 Full text of the current cell.
2081
2538
2082 Returns
2539 Returns
2083 -------
2540 -------
2084 A tuple of N elements which are (likely):
2541 An ordered dictionary where keys are identifiers of completion
2085 matched_text: ? the text that the complete matched
2542 matchers and values are ``MatcherResult``s.
2086 matches: list of completions ?
2087 matches_origin: ? list same length as matches, and where each completion came from
2088 jedi_matches: list of Jedi matches, have it's own structure.
2089 """
2543 """
2090
2544
2091
2092 # if the cursor position isn't given, the only sane assumption we can
2545 # if the cursor position isn't given, the only sane assumption we can
2093 # make is that it's at the end of the line (the common case)
2546 # make is that it's at the end of the line (the common case)
2094 if cursor_pos is None:
2547 if cursor_pos is None:
2095 cursor_pos = len(line_buffer) if text is None else len(text)
2548 cursor_pos = len(line_buffer) if text is None else len(text)
2096
2549
2097 if self.use_main_ns:
2550 if self.use_main_ns:
2098 self.namespace = __main__.__dict__
2551 self.namespace = __main__.__dict__
2099
2552
2100 # if text is either None or an empty string, rely on the line buffer
2553 # if text is either None or an empty string, rely on the line buffer
2101 if (not line_buffer) and full_text:
2554 if (not line_buffer) and full_text:
2102 line_buffer = full_text.split('\n')[cursor_line]
2555 line_buffer = full_text.split('\n')[cursor_line]
2103 if not text: # issue #11508: check line_buffer before calling split_line
2556 if not text: # issue #11508: check line_buffer before calling split_line
2104 text = self.splitter.split_line(line_buffer, cursor_pos) if line_buffer else ''
2557 text = (
2105
2558 self.splitter.split_line(line_buffer, cursor_pos) if line_buffer else ""
2106 if self.backslash_combining_completions:
2559 )
2107 # allow deactivation of these on windows.
2108 base_text = text if not line_buffer else line_buffer[:cursor_pos]
2109
2110 for meth in (self.latex_matches,
2111 self.unicode_name_matches,
2112 back_latex_name_matches,
2113 back_unicode_name_matches,
2114 self.fwd_unicode_match):
2115 name_text, name_matches = meth(base_text)
2116 if name_text:
2117 return _CompleteResult(name_text, name_matches[:MATCHES_LIMIT], \
2118 [meth.__qualname__]*min(len(name_matches), MATCHES_LIMIT), ())
2119
2120
2560
2121 # If no line buffer is given, assume the input text is all there was
2561 # If no line buffer is given, assume the input text is all there was
2122 if line_buffer is None:
2562 if line_buffer is None:
2123 line_buffer = text
2563 line_buffer = text
2124
2564
2565 # deprecated - do not use `line_buffer` in new code.
2125 self.line_buffer = line_buffer
2566 self.line_buffer = line_buffer
2126 self.text_until_cursor = self.line_buffer[:cursor_pos]
2567 self.text_until_cursor = self.line_buffer[:cursor_pos]
2127
2568
2128 # Do magic arg matches
2569 if not full_text:
2129 for matcher in self.magic_arg_matchers:
2570 full_text = line_buffer
2130 matches = list(matcher(line_buffer))[:MATCHES_LIMIT]
2571
2131 if matches:
2572 context = CompletionContext(
2132 origins = [matcher.__qualname__] * len(matches)
2573 full_text=full_text,
2133 return _CompleteResult(text, matches, origins, ())
2574 cursor_position=cursor_pos,
2575 cursor_line=cursor_line,
2576 token=text,
2577 )
2134
2578
2135 # Start with a clean slate of completions
2579 # Start with a clean slate of completions
2136 matches = []
2580 results = {}
2137
2581
2138 # FIXME: we should extend our api to return a dict with completions for
2582 custom_completer_matcher_id = _get_matcher_id(self.custom_completer_matcher)
2139 # different types of objects. The rlcomplete() method could then
2583 jedi_matcher_id = _get_matcher_id(self._jedi_matcher)
2140 # simply collapse the dict into a list for readline, but we'd have
2141 # richer completion semantics in other environments.
2142 is_magic_prefix = len(text) > 0 and text[0] == "%"
2143 completions: Iterable[Any] = []
2144 if self.use_jedi and not is_magic_prefix:
2145 if not full_text:
2146 full_text = line_buffer
2147 completions = self._jedi_matches(
2148 cursor_pos, cursor_line, full_text)
2149
2150 if self.merge_completions:
2151 matches = []
2152 for matcher in self.matchers:
2153 try:
2154 matches.extend([(m, matcher.__qualname__)
2155 for m in matcher(text)])
2156 except:
2157 # Show the ugly traceback if the matcher causes an
2158 # exception, but do NOT crash the kernel!
2159 sys.excepthook(*sys.exc_info())
2160 else:
2161 for matcher in self.matchers:
2162 matches = [(m, matcher.__qualname__)
2163 for m in matcher(text)]
2164 if matches:
2165 break
2166
2167 seen = set()
2168 filtered_matches = set()
2169 for m in matches:
2170 t, c = m
2171 if t not in seen:
2172 filtered_matches.add(m)
2173 seen.add(t)
2174
2584
2175 _filtered_matches = sorted(filtered_matches, key=lambda x: completions_sorting_key(x[0]))
2585 for matcher in self.matchers:
2586 api_version = _get_matcher_api_version(matcher)
2587 matcher_id = _get_matcher_id(matcher)
2176
2588
2177 custom_res = [(m, 'custom') for m in self.dispatch_custom_completer(text) or []]
2589 if matcher_id in results:
2178
2590 warnings.warn(f"Duplicate matcher ID: {matcher_id}.")
2179 _filtered_matches = custom_res or _filtered_matches
2180
2181 _filtered_matches = _filtered_matches[:MATCHES_LIMIT]
2182 _matches = [m[0] for m in _filtered_matches]
2183 origins = [m[1] for m in _filtered_matches]
2184
2591
2185 self.matches = _matches
2592 try:
2593 if api_version == 1:
2594 result = _convert_matcher_v1_result_to_v2(
2595 matcher(text), type=_UNKNOWN_TYPE
2596 )
2597 elif api_version == 2:
2598 # TODO: MATCHES_LIMIT was used inconsistently in previous version
2599 # (applied individually to latex/unicode and magic arguments matcher,
2600 # but not Jedi, paths, magics, etc). Jedi did not have a limit here at
2601 # all, but others had a total limit (retained in `_deduplicate_and_sort`).
2602 # 1) Was that deliberate or an omission?
2603 # 2) Should we include the limit in the API v2 signature to allow
2604 # more expensive matchers to return early?
2605 result = cast(MatcherAPIv2, matcher)(context)
2606 else:
2607 raise ValueError(f"Unsupported API version {api_version}")
2608 except:
2609 # Show the ugly traceback if the matcher causes an
2610 # exception, but do NOT crash the kernel!
2611 sys.excepthook(*sys.exc_info())
2612 continue
2186
2613
2187 return _CompleteResult(text, _matches, origins, completions)
2614 # set default value for matched fragment if suffix was not selected.
2188
2615 result["matched_fragment"] = result.get("matched_fragment", context.token)
2189 def fwd_unicode_match(self, text:str) -> Tuple[str, Sequence[str]]:
2616
2617 suppression_recommended = result.get("suppress_others", False)
2618
2619 should_suppress = (
2620 self.suppress_competing_matchers is True
2621 or suppression_recommended
2622 or (
2623 isinstance(self.suppress_competing_matchers, dict)
2624 and self.suppress_competing_matchers[matcher_id]
2625 )
2626 ) and len(result["completions"])
2627
2628 if should_suppress:
2629 new_results = {matcher_id: result}
2630 if (
2631 matcher_id == custom_completer_matcher_id
2632 and jedi_matcher_id in results
2633 ):
2634 # custom completer does not suppress Jedi (this may change in future versions).
2635 new_results[jedi_matcher_id] = results[jedi_matcher_id]
2636 results = new_results
2637 break
2638
2639 results[matcher_id] = result
2640
2641 _, matches = self._arrange_and_extract(
2642 results,
2643 # TODO: Jedi completions not included in legacy stateful API; was this deliberate or an omission?
2644 # if it was omission, we can remove the filtering step, otherwise remove this comment.
2645 skip_matchers={jedi_matcher_id},
2646 abort_if_offset_changes=False,
2647 )
2648
2649 # populate legacy stateful API
2650 self.matches = matches
2651
2652 return results
2653
2654 @staticmethod
2655 def _deduplicate(
2656 matches: Sequence[SimpleCompletion],
2657 ) -> Iterable[SimpleCompletion]:
2658 filtered_matches = {}
2659 for match in matches:
2660 text = match.text
2661 if (
2662 text not in filtered_matches
2663 or filtered_matches[text].type == _UNKNOWN_TYPE
2664 ):
2665 filtered_matches[text] = match
2666
2667 return filtered_matches.values()
2668
2669 @staticmethod
2670 def _sort(matches: Sequence[SimpleCompletion]):
2671 return sorted(matches, key=lambda x: completions_sorting_key(x.text))
2672
2673 @context_matcher()
2674 def fwd_unicode_matcher(self, context):
2675 fragment, matches = self.fwd_unicode_match(context.token)
2676 return _convert_matcher_v1_result_to_v2(
2677 matches, type="unicode", fragment=fragment, suppress_if_matches=True
2678 )
2679
2680 def fwd_unicode_match(self, text: str) -> Tuple[str, Sequence[str]]:
2190 """
2681 """
2191 Forward match a string starting with a backslash with a list of
2682 Forward match a string starting with a backslash with a list of
2192 potential Unicode completions.
2683 potential Unicode completions.
2193
2684
2194 Will compute the list of Unicode character names on first call and cache it.
2685 Will compute the list of Unicode character names on first call and cache it.
2195
2686
2196 Returns
2687 Returns
2197 -------
2688 -------
2198 A tuple with:
2689 A tuple with:
2199 - matched text (empty if no matches)
2690 - matched text (empty if no matches)
2200 - list of potential completions (empty tuple if no matches)
2691 - list of potential completions (empty tuple if no matches)
2201 """
2692 """
2202 # TODO: self.unicode_names is here a list we traverse each time with ~100k elements.
2693 # TODO: self.unicode_names is here a list we traverse each time with ~100k elements.
2203 # We could do a faster match using a Trie.
2694 # We could do a faster match using a Trie.
2204
2695
2205 # Using pygtrie the following seem to work:
2696 # Using pygtrie the following seem to work:
2206
2697
2207 # s = PrefixSet()
2698 # s = PrefixSet()
2208
2699
2209 # for c in range(0,0x10FFFF + 1):
2700 # for c in range(0,0x10FFFF + 1):
2210 # try:
2701 # try:
2211 # s.add(unicodedata.name(chr(c)))
2702 # s.add(unicodedata.name(chr(c)))
2212 # except ValueError:
2703 # except ValueError:
2213 # pass
2704 # pass
2214 # [''.join(k) for k in s.iter(prefix)]
2705 # [''.join(k) for k in s.iter(prefix)]
2215
2706
2216 # But need to be timed and adds an extra dependency.
2707 # But need to be timed and adds an extra dependency.
2217
2708
2218 slashpos = text.rfind('\\')
2709 slashpos = text.rfind('\\')
2219 # if text starts with slash
2710 # if text starts with slash
2220 if slashpos > -1:
2711 if slashpos > -1:
2221 # PERF: It's important that we don't access self._unicode_names
2712 # PERF: It's important that we don't access self._unicode_names
2222 # until we're inside this if-block. _unicode_names is lazily
2713 # until we're inside this if-block. _unicode_names is lazily
2223 # initialized, and it takes a user-noticeable amount of time to
2714 # initialized, and it takes a user-noticeable amount of time to
2224 # initialize it, so we don't want to initialize it unless we're
2715 # initialize it, so we don't want to initialize it unless we're
2225 # actually going to use it.
2716 # actually going to use it.
2226 s = text[slashpos + 1 :]
2717 s = text[slashpos + 1 :]
2227 sup = s.upper()
2718 sup = s.upper()
2228 candidates = [x for x in self.unicode_names if x.startswith(sup)]
2719 candidates = [x for x in self.unicode_names if x.startswith(sup)]
2229 if candidates:
2720 if candidates:
2230 return s, candidates
2721 return s, candidates
2231 candidates = [x for x in self.unicode_names if sup in x]
2722 candidates = [x for x in self.unicode_names if sup in x]
2232 if candidates:
2723 if candidates:
2233 return s, candidates
2724 return s, candidates
2234 splitsup = sup.split(" ")
2725 splitsup = sup.split(" ")
2235 candidates = [
2726 candidates = [
2236 x for x in self.unicode_names if all(u in x for u in splitsup)
2727 x for x in self.unicode_names if all(u in x for u in splitsup)
2237 ]
2728 ]
2238 if candidates:
2729 if candidates:
2239 return s, candidates
2730 return s, candidates
2240
2731
2241 return "", ()
2732 return "", ()
2242
2733
2243 # if text does not start with slash
2734 # if text does not start with slash
2244 else:
2735 else:
2245 return '', ()
2736 return '', ()
2246
2737
2247 @property
2738 @property
2248 def unicode_names(self) -> List[str]:
2739 def unicode_names(self) -> List[str]:
2249 """List of names of unicode code points that can be completed.
2740 """List of names of unicode code points that can be completed.
2250
2741
2251 The list is lazily initialized on first access.
2742 The list is lazily initialized on first access.
2252 """
2743 """
2253 if self._unicode_names is None:
2744 if self._unicode_names is None:
2254 names = []
2745 names = []
2255 for c in range(0,0x10FFFF + 1):
2746 for c in range(0,0x10FFFF + 1):
2256 try:
2747 try:
2257 names.append(unicodedata.name(chr(c)))
2748 names.append(unicodedata.name(chr(c)))
2258 except ValueError:
2749 except ValueError:
2259 pass
2750 pass
2260 self._unicode_names = _unicode_name_compute(_UNICODE_RANGES)
2751 self._unicode_names = _unicode_name_compute(_UNICODE_RANGES)
2261
2752
2262 return self._unicode_names
2753 return self._unicode_names
2263
2754
2264 def _unicode_name_compute(ranges:List[Tuple[int,int]]) -> List[str]:
2755 def _unicode_name_compute(ranges:List[Tuple[int,int]]) -> List[str]:
2265 names = []
2756 names = []
2266 for start,stop in ranges:
2757 for start,stop in ranges:
2267 for c in range(start, stop) :
2758 for c in range(start, stop) :
2268 try:
2759 try:
2269 names.append(unicodedata.name(chr(c)))
2760 names.append(unicodedata.name(chr(c)))
2270 except ValueError:
2761 except ValueError:
2271 pass
2762 pass
2272 return names
2763 return names
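A tiny, self-contained illustration of what ``_unicode_name_compute`` collects over a code-point range (the range is chosen for brevity):

import unicodedata

names = []
for c in range(0x3B1, 0x3B4):  # α, β, γ
    try:
        names.append(unicodedata.name(chr(c)))
    except ValueError:
        # unnamed code points are simply skipped
        pass
print(names)
# ['GREEK SMALL LETTER ALPHA', 'GREEK SMALL LETTER BETA', 'GREEK SMALL LETTER GAMMA']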
@@ -1,1275 +1,1283
1 # encoding: utf-8
1 # encoding: utf-8
2 """Tests for the IPython tab-completion machinery."""
2 """Tests for the IPython tab-completion machinery."""
3
3
4 # Copyright (c) IPython Development Team.
4 # Copyright (c) IPython Development Team.
5 # Distributed under the terms of the Modified BSD License.
5 # Distributed under the terms of the Modified BSD License.
6
6
7 import os
7 import os
8 import pytest
8 import pytest
9 import sys
9 import sys
10 import textwrap
10 import textwrap
11 import unittest
11 import unittest
12
12
13 from contextlib import contextmanager
13 from contextlib import contextmanager
14
14
15 from traitlets.config.loader import Config
15 from traitlets.config.loader import Config
16 from IPython import get_ipython
16 from IPython import get_ipython
17 from IPython.core import completer
17 from IPython.core import completer
18 from IPython.utils.tempdir import TemporaryDirectory, TemporaryWorkingDirectory
18 from IPython.utils.tempdir import TemporaryDirectory, TemporaryWorkingDirectory
19 from IPython.utils.generics import complete_object
19 from IPython.utils.generics import complete_object
20 from IPython.testing import decorators as dec
20 from IPython.testing import decorators as dec
21
21
22 from IPython.core.completer import (
22 from IPython.core.completer import (
23 Completion,
23 Completion,
24 provisionalcompleter,
24 provisionalcompleter,
25 match_dict_keys,
25 match_dict_keys,
26 _deduplicate_completions,
26 _deduplicate_completions,
27 )
27 )
28
28
29 # -----------------------------------------------------------------------------
29 # -----------------------------------------------------------------------------
30 # Test functions
30 # Test functions
31 # -----------------------------------------------------------------------------
31 # -----------------------------------------------------------------------------
32
32
33 def recompute_unicode_ranges():
33 def recompute_unicode_ranges():
34 """
34 """
35 Utility to recompute the largest unicode range without any characters.
35 Utility to recompute the largest unicode range without any characters.
36
36
37 Use it to recompute the gap in the global _UNICODE_RANGES of completer.py.
37 Use it to recompute the gap in the global _UNICODE_RANGES of completer.py.
38 """
38 """
39 import itertools
39 import itertools
40 import unicodedata
40 import unicodedata
41 valid = []
41 valid = []
42 for c in range(0,0x10FFFF + 1):
42 for c in range(0,0x10FFFF + 1):
43 try:
43 try:
44 unicodedata.name(chr(c))
44 unicodedata.name(chr(c))
45 except ValueError:
45 except ValueError:
46 continue
46 continue
47 valid.append(c)
47 valid.append(c)
48
48
49 def ranges(i):
49 def ranges(i):
50 for a, b in itertools.groupby(enumerate(i), lambda pair: pair[1] - pair[0]):
50 for a, b in itertools.groupby(enumerate(i), lambda pair: pair[1] - pair[0]):
51 b = list(b)
51 b = list(b)
52 yield b[0][1], b[-1][1]
52 yield b[0][1], b[-1][1]
53
53
54 rg = list(ranges(valid))
54 rg = list(ranges(valid))
55 lens = []
55 lens = []
56 gap_lens = []
56 gap_lens = []
57 pstart, pstop = 0,0
57 pstart, pstop = 0,0
58 for start, stop in rg:
58 for start, stop in rg:
59 lens.append(stop-start)
59 lens.append(stop-start)
60 gap_lens.append((start - pstop, hex(pstop), hex(start), f'{round((start - pstop)/0xe01f0*100)}%'))
60 gap_lens.append((start - pstop, hex(pstop), hex(start), f'{round((start - pstop)/0xe01f0*100)}%'))
61 pstart, pstop = start, stop
61 pstart, pstop = start, stop
62
62
63 return sorted(gap_lens)[-1]
63 return sorted(gap_lens)[-1]
64
64
65
65
66
66
67 def test_unicode_range():
67 def test_unicode_range():
68 """
68 """
69 Test that the ranges we test for unicode names give the same number of
69 Test that the ranges we test for unicode names give the same number of
70 results as testing the full length.
70 results as testing the full length.
71 """
71 """
72 from IPython.core.completer import _unicode_name_compute, _UNICODE_RANGES
72 from IPython.core.completer import _unicode_name_compute, _UNICODE_RANGES
73
73
74 expected_list = _unicode_name_compute([(0, 0x110000)])
74 expected_list = _unicode_name_compute([(0, 0x110000)])
75 test = _unicode_name_compute(_UNICODE_RANGES)
75 test = _unicode_name_compute(_UNICODE_RANGES)
76 len_exp = len(expected_list)
76 len_exp = len(expected_list)
77 len_test = len(test)
77 len_test = len(test)
78
78
79 # do not inline the len() or on error pytest will try to print the 130 000 +
79 # do not inline the len() or on error pytest will try to print the 130 000 +
80 # elements.
80 # elements.
81 message = None
81 message = None
82 if len_exp != len_test or len_exp > 131808:
82 if len_exp != len_test or len_exp > 131808:
83 size, start, stop, prct = recompute_unicode_ranges()
83 size, start, stop, prct = recompute_unicode_ranges()
84 message = f"""_UNICODE_RANGES is likely wrong and needs updating. This is
84 message = f"""_UNICODE_RANGES is likely wrong and needs updating. This is
85 likely due to a new release of Python. We've found that the biggest gap
85 likely due to a new release of Python. We've found that the biggest gap
86 in unicode characters has reduced in size to {size} characters
86 in unicode characters has reduced in size to {size} characters
87 ({prct}), from {start}, to {stop}. In completer.py likely update to
87 ({prct}), from {start}, to {stop}. In completer.py likely update to
88
88
89 _UNICODE_RANGES = [(32, {start}), ({stop}, 0xe01f0)]
89 _UNICODE_RANGES = [(32, {start}), ({stop}, 0xe01f0)]
90
90
91 And update the assertion below to use
91 And update the assertion below to use
92
92
93 len_exp <= {len_exp}
93 len_exp <= {len_exp}
94 """
94 """
95 assert len_exp == len_test, message
95 assert len_exp == len_test, message
96
96
97 # fail if new unicode symbols have been added.
97 # fail if new unicode symbols have been added.
98 assert len_exp <= 138552, message
98 assert len_exp <= 138552, message
99
99
100
100
101 @contextmanager
101 @contextmanager
102 def greedy_completion():
102 def greedy_completion():
103 ip = get_ipython()
103 ip = get_ipython()
104 greedy_original = ip.Completer.greedy
104 greedy_original = ip.Completer.greedy
105 try:
105 try:
106 ip.Completer.greedy = True
106 ip.Completer.greedy = True
107 yield
107 yield
108 finally:
108 finally:
109 ip.Completer.greedy = greedy_original
109 ip.Completer.greedy = greedy_original
110
110
111
111
112 def test_protect_filename():
112 def test_protect_filename():
113 if sys.platform == "win32":
113 if sys.platform == "win32":
114 pairs = [
114 pairs = [
115 ("abc", "abc"),
115 ("abc", "abc"),
116 (" abc", '" abc"'),
116 (" abc", '" abc"'),
117 ("a bc", '"a bc"'),
117 ("a bc", '"a bc"'),
118 ("a bc", '"a bc"'),
118 ("a bc", '"a bc"'),
119 (" bc", '" bc"'),
119 (" bc", '" bc"'),
120 ]
120 ]
121 else:
121 else:
122 pairs = [
122 pairs = [
123 ("abc", "abc"),
123 ("abc", "abc"),
124 (" abc", r"\ abc"),
124 (" abc", r"\ abc"),
125 ("a bc", r"a\ bc"),
125 ("a bc", r"a\ bc"),
126 ("a bc", r"a\ \ bc"),
126 ("a bc", r"a\ \ bc"),
127 (" bc", r"\ \ bc"),
127 (" bc", r"\ \ bc"),
128 # On posix, we also protect parens and other special characters.
128 # On posix, we also protect parens and other special characters.
129 ("a(bc", r"a\(bc"),
129 ("a(bc", r"a\(bc"),
130 ("a)bc", r"a\)bc"),
130 ("a)bc", r"a\)bc"),
131 ("a( )bc", r"a\(\ \)bc"),
131 ("a( )bc", r"a\(\ \)bc"),
132 ("a[1]bc", r"a\[1\]bc"),
132 ("a[1]bc", r"a\[1\]bc"),
133 ("a{1}bc", r"a\{1\}bc"),
133 ("a{1}bc", r"a\{1\}bc"),
134 ("a#bc", r"a\#bc"),
134 ("a#bc", r"a\#bc"),
135 ("a?bc", r"a\?bc"),
135 ("a?bc", r"a\?bc"),
136 ("a=bc", r"a\=bc"),
136 ("a=bc", r"a\=bc"),
137 ("a\\bc", r"a\\bc"),
137 ("a\\bc", r"a\\bc"),
138 ("a|bc", r"a\|bc"),
138 ("a|bc", r"a\|bc"),
139 ("a;bc", r"a\;bc"),
139 ("a;bc", r"a\;bc"),
140 ("a:bc", r"a\:bc"),
140 ("a:bc", r"a\:bc"),
141 ("a'bc", r"a\'bc"),
141 ("a'bc", r"a\'bc"),
142 ("a*bc", r"a\*bc"),
142 ("a*bc", r"a\*bc"),
143 ('a"bc', r"a\"bc"),
143 ('a"bc', r"a\"bc"),
144 ("a^bc", r"a\^bc"),
144 ("a^bc", r"a\^bc"),
145 ("a&bc", r"a\&bc"),
145 ("a&bc", r"a\&bc"),
146 ]
146 ]
147 # run the actual tests
147 # run the actual tests
148 for s1, s2 in pairs:
148 for s1, s2 in pairs:
149 s1p = completer.protect_filename(s1)
149 s1p = completer.protect_filename(s1)
150 assert s1p == s2
150 assert s1p == s2
151
151
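# A short illustration (hypothetical snippet) of the behaviour pinned down by
# the pairs above: on posix, ``protect_filename`` backslash-escapes spaces and
# shell-special characters so the result can be pasted into a magic/shell line.
if sys.platform != "win32":
    assert completer.protect_filename("a bc") == r"a\ bc"
    assert completer.protect_filename("a(bc") == r"a\(bc"
else:
    # on Windows the name is instead wrapped in double quotes when it contains spaces
    assert completer.protect_filename("a bc") == '"a bc"'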
152
152
153 def check_line_split(splitter, test_specs):
153 def check_line_split(splitter, test_specs):
154 for part1, part2, split in test_specs:
154 for part1, part2, split in test_specs:
155 cursor_pos = len(part1)
155 cursor_pos = len(part1)
156 line = part1 + part2
156 line = part1 + part2
157 out = splitter.split_line(line, cursor_pos)
157 out = splitter.split_line(line, cursor_pos)
158 assert out == split
158 assert out == split
159
159
160
160
161 def test_line_split():
161 def test_line_split():
162 """Basic line splitter test with default specs."""
162 """Basic line splitter test with default specs."""
163 sp = completer.CompletionSplitter()
163 sp = completer.CompletionSplitter()
164 # The format of the test specs is: part1, part2, expected answer. Parts 1
164 # The format of the test specs is: part1, part2, expected answer. Parts 1
165 # and 2 are joined into the 'line' sent to the splitter, as if the cursor
165 # and 2 are joined into the 'line' sent to the splitter, as if the cursor
166 # was at the end of part1. So an empty part2 represents someone hitting
166 # was at the end of part1. So an empty part2 represents someone hitting
167 # tab at the end of the line, the most common case.
167 # tab at the end of the line, the most common case.
168 t = [
168 t = [
169 ("run some/scrip", "", "some/scrip"),
169 ("run some/scrip", "", "some/scrip"),
170 ("run scripts/er", "ror.py foo", "scripts/er"),
170 ("run scripts/er", "ror.py foo", "scripts/er"),
171 ("echo $HOM", "", "HOM"),
171 ("echo $HOM", "", "HOM"),
172 ("print sys.pa", "", "sys.pa"),
172 ("print sys.pa", "", "sys.pa"),
173 ("print(sys.pa", "", "sys.pa"),
173 ("print(sys.pa", "", "sys.pa"),
174 ("execfile('scripts/er", "", "scripts/er"),
174 ("execfile('scripts/er", "", "scripts/er"),
175 ("a[x.", "", "x."),
175 ("a[x.", "", "x."),
176 ("a[x.", "y", "x."),
176 ("a[x.", "y", "x."),
177 ('cd "some_file/', "", "some_file/"),
177 ('cd "some_file/', "", "some_file/"),
178 ]
178 ]
179 check_line_split(sp, t)
179 check_line_split(sp, t)
180 # Ensure splitting works OK with unicode by re-running the tests with
180 # Ensure splitting works OK with unicode by re-running the tests with
181 # all inputs turned into unicode
181 # all inputs turned into unicode
182 check_line_split(sp, [map(str, p) for p in t])
182 check_line_split(sp, [map(str, p) for p in t])
183
183
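# A tiny usage sketch of the splitter exercised above: ``split_line`` returns
# the fragment between the last delimiter and the cursor, which is what the
# completion machinery then tries to complete.
_sp = completer.CompletionSplitter()
assert _sp.split_line("print(sys.pa", 12) == "sys.pa"
assert _sp.split_line("run scripts/error.py foo", 14) == "scripts/er"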
184
184
185 class NamedInstanceClass:
185 class NamedInstanceClass:
186 instances = {}
186 instances = {}
187
187
188 def __init__(self, name):
188 def __init__(self, name):
189 self.instances[name] = self
189 self.instances[name] = self
190
190
191 @classmethod
191 @classmethod
192 def _ipython_key_completions_(cls):
192 def _ipython_key_completions_(cls):
193 return cls.instances.keys()
193 return cls.instances.keys()
194
194
195
195
196 class KeyCompletable:
196 class KeyCompletable:
197 def __init__(self, things=()):
197 def __init__(self, things=()):
198 self.things = things
198 self.things = things
199
199
200 def _ipython_key_completions_(self):
200 def _ipython_key_completions_(self):
201 return list(self.things)
201 return list(self.things)
202
202
203
203
204 class TestCompleter(unittest.TestCase):
204 class TestCompleter(unittest.TestCase):
205 def setUp(self):
205 def setUp(self):
206 """
206 """
207 We want to silence all PendingDeprecationWarning when testing the completer
207 We want to silence all PendingDeprecationWarning when testing the completer
208 """
208 """
209 self._assertwarns = self.assertWarns(PendingDeprecationWarning)
209 self._assertwarns = self.assertWarns(PendingDeprecationWarning)
210 self._assertwarns.__enter__()
210 self._assertwarns.__enter__()
211
211
212 def tearDown(self):
212 def tearDown(self):
213 try:
213 try:
214 self._assertwarns.__exit__(None, None, None)
214 self._assertwarns.__exit__(None, None, None)
215 except AssertionError:
215 except AssertionError:
216 pass
216 pass
217
217
218 def test_custom_completion_error(self):
218 def test_custom_completion_error(self):
219 """Test that errors from custom attribute completers are silenced."""
219 """Test that errors from custom attribute completers are silenced."""
220 ip = get_ipython()
220 ip = get_ipython()
221
221
222 class A:
222 class A:
223 pass
223 pass
224
224
225 ip.user_ns["x"] = A()
225 ip.user_ns["x"] = A()
226
226
227 @complete_object.register(A)
227 @complete_object.register(A)
228 def complete_A(a, existing_completions):
228 def complete_A(a, existing_completions):
229 raise TypeError("this should be silenced")
229 raise TypeError("this should be silenced")
230
230
231 ip.complete("x.")
231 ip.complete("x.")
232
232
233 def test_custom_completion_ordering(self):
233 def test_custom_completion_ordering(self):
234 """Test that errors from custom attribute completers are silenced."""
234 """Test that errors from custom attribute completers are silenced."""
235 ip = get_ipython()
235 ip = get_ipython()
236
236
237 _, matches = ip.complete('in')
237 _, matches = ip.complete('in')
238 assert matches.index('input') < matches.index('int')
238 assert matches.index('input') < matches.index('int')
239
239
240 def complete_example(a):
240 def complete_example(a):
241 return ['example2', 'example1']
241 return ['example2', 'example1']
242
242
243 ip.Completer.custom_completers.add_re('ex*', complete_example)
243 ip.Completer.custom_completers.add_re('ex*', complete_example)
244 _, matches = ip.complete('ex')
244 _, matches = ip.complete('ex')
245 assert matches.index('example2') < matches.index('example1')
245 assert matches.index('example2') < matches.index('example1')
246
246
247 def test_unicode_completions(self):
247 def test_unicode_completions(self):
248 ip = get_ipython()
248 ip = get_ipython()
249 # Some strings that trigger different types of completion. Check them both
249 # Some strings that trigger different types of completion. Check them both
250 # in str and unicode forms
250 # in str and unicode forms
251 s = ["ru", "%ru", "cd /", "floa", "float(x)/"]
251 s = ["ru", "%ru", "cd /", "floa", "float(x)/"]
252 for t in s + list(map(str, s)):
252 for t in s + list(map(str, s)):
253 # We don't need to check exact completion values (they may change
253 # We don't need to check exact completion values (they may change
254 # depending on the state of the namespace), but at least no exceptions
254 # depending on the state of the namespace), but at least no exceptions
255 # should be thrown and the return value should be a pair of (text, list)
255 # should be thrown and the return value should be a pair of (text, list)
256 # values.
256 # values.
257 text, matches = ip.complete(t)
257 text, matches = ip.complete(t)
258 self.assertIsInstance(text, str)
258 self.assertIsInstance(text, str)
259 self.assertIsInstance(matches, list)
259 self.assertIsInstance(matches, list)
260
260
261 def test_latex_completions(self):
261 def test_latex_completions(self):
262 from IPython.core.latex_symbols import latex_symbols
262 from IPython.core.latex_symbols import latex_symbols
263 import random
263 import random
264
264
265 ip = get_ipython()
265 ip = get_ipython()
266 # Test some random unicode symbols
266 # Test some random unicode symbols
267 keys = random.sample(sorted(latex_symbols), 10)
267 keys = random.sample(sorted(latex_symbols), 10)
268 for k in keys:
268 for k in keys:
269 text, matches = ip.complete(k)
269 text, matches = ip.complete(k)
270 self.assertEqual(text, k)
270 self.assertEqual(text, k)
271 self.assertEqual(matches, [latex_symbols[k]])
271 self.assertEqual(matches, [latex_symbols[k]])
272 # Test a more complex line
272 # Test a more complex line
273 text, matches = ip.complete("print(\\alpha")
273 text, matches = ip.complete("print(\\alpha")
274 self.assertEqual(text, "\\alpha")
274 self.assertEqual(text, "\\alpha")
275 self.assertEqual(matches[0], latex_symbols["\\alpha"])
275 self.assertEqual(matches[0], latex_symbols["\\alpha"])
276 # Test multiple matching latex symbols
276 # Test multiple matching latex symbols
277 text, matches = ip.complete("\\al")
277 text, matches = ip.complete("\\al")
278 self.assertIn("\\alpha", matches)
278 self.assertIn("\\alpha", matches)
279 self.assertIn("\\aleph", matches)
279 self.assertIn("\\aleph", matches)
280
280
281 def test_latex_no_results(self):
281 def test_latex_no_results(self):
282 """
282 """
283 forward latex should really return nothing in either field if nothing is found.
283 forward latex should really return nothing in either field if nothing is found.
284 """
284 """
285 ip = get_ipython()
285 ip = get_ipython()
286 text, matches = ip.Completer.latex_matches("\\really_i_should_match_nothing")
286 text, matches = ip.Completer.latex_matches("\\really_i_should_match_nothing")
287 self.assertEqual(text, "")
287 self.assertEqual(text, "")
288 self.assertEqual(matches, ())
288 self.assertEqual(matches, ())
289
289
290 def test_back_latex_completion(self):
290 def test_back_latex_completion(self):
291 ip = get_ipython()
291 ip = get_ipython()
292
292
293 # do not return more than 1 match for \beta, only the latex one.
293 # do not return more than 1 match for \beta, only the latex one.
294 name, matches = ip.complete("\\β")
294 name, matches = ip.complete("\\β")
295 self.assertEqual(matches, ["\\beta"])
295 self.assertEqual(matches, ["\\beta"])
296
296
297 def test_back_unicode_completion(self):
297 def test_back_unicode_completion(self):
298 ip = get_ipython()
298 ip = get_ipython()
299
299
300 name, matches = ip.complete("\\Ⅴ")
300 name, matches = ip.complete("\\Ⅴ")
301 self.assertEqual(matches, ("\\ROMAN NUMERAL FIVE",))
301 self.assertEqual(matches, ["\\ROMAN NUMERAL FIVE"])
302
302
303 def test_forward_unicode_completion(self):
303 def test_forward_unicode_completion(self):
304 ip = get_ipython()
304 ip = get_ipython()
305
305
306 name, matches = ip.complete("\\ROMAN NUMERAL FIVE")
306 name, matches = ip.complete("\\ROMAN NUMERAL FIVE")
307 self.assertEqual(matches, ["Ⅴ"]) # This is not a V
307 self.assertEqual(matches, ["Ⅴ"]) # This is not a V
308 self.assertEqual(matches, ["\u2164"]) # same as above but explicit.
308 self.assertEqual(matches, ["\u2164"]) # same as above but explicit.
309
309
310 def test_delim_setting(self):
310 def test_delim_setting(self):
311 sp = completer.CompletionSplitter()
311 sp = completer.CompletionSplitter()
312 sp.delims = " "
312 sp.delims = " "
313 self.assertEqual(sp.delims, " ")
313 self.assertEqual(sp.delims, " ")
314 self.assertEqual(sp._delim_expr, r"[\ ]")
314 self.assertEqual(sp._delim_expr, r"[\ ]")
315
315
316 def test_spaces(self):
316 def test_spaces(self):
317 """Test with only spaces as split chars."""
317 """Test with only spaces as split chars."""
318 sp = completer.CompletionSplitter()
318 sp = completer.CompletionSplitter()
319 sp.delims = " "
319 sp.delims = " "
320 t = [("foo", "", "foo"), ("run foo", "", "foo"), ("run foo", "bar", "foo")]
320 t = [("foo", "", "foo"), ("run foo", "", "foo"), ("run foo", "bar", "foo")]
321 check_line_split(sp, t)
321 check_line_split(sp, t)
322
322
323 def test_has_open_quotes1(self):
323 def test_has_open_quotes1(self):
324 for s in ["'", "'''", "'hi' '"]:
324 for s in ["'", "'''", "'hi' '"]:
325 self.assertEqual(completer.has_open_quotes(s), "'")
325 self.assertEqual(completer.has_open_quotes(s), "'")
326
326
327 def test_has_open_quotes2(self):
327 def test_has_open_quotes2(self):
328 for s in ['"', '"""', '"hi" "']:
328 for s in ['"', '"""', '"hi" "']:
329 self.assertEqual(completer.has_open_quotes(s), '"')
329 self.assertEqual(completer.has_open_quotes(s), '"')
330
330
331 def test_has_open_quotes3(self):
331 def test_has_open_quotes3(self):
332 for s in ["''", "''' '''", "'hi' 'ipython'"]:
332 for s in ["''", "''' '''", "'hi' 'ipython'"]:
333 self.assertFalse(completer.has_open_quotes(s))
333 self.assertFalse(completer.has_open_quotes(s))
334
334
335 def test_has_open_quotes4(self):
335 def test_has_open_quotes4(self):
336 for s in ['""', '""" """', '"hi" "ipython"']:
336 for s in ['""', '""" """', '"hi" "ipython"']:
337 self.assertFalse(completer.has_open_quotes(s))
337 self.assertFalse(completer.has_open_quotes(s))
338
338
339 @pytest.mark.xfail(
339 @pytest.mark.xfail(
340 sys.platform == "win32", reason="abspath completions fail on Windows"
340 sys.platform == "win32", reason="abspath completions fail on Windows"
341 )
341 )
342 def test_abspath_file_completions(self):
342 def test_abspath_file_completions(self):
343 ip = get_ipython()
343 ip = get_ipython()
344 with TemporaryDirectory() as tmpdir:
344 with TemporaryDirectory() as tmpdir:
345 prefix = os.path.join(tmpdir, "foo")
345 prefix = os.path.join(tmpdir, "foo")
346 suffixes = ["1", "2"]
346 suffixes = ["1", "2"]
347 names = [prefix + s for s in suffixes]
347 names = [prefix + s for s in suffixes]
348 for n in names:
348 for n in names:
349 open(n, "w", encoding="utf-8").close()
349 open(n, "w", encoding="utf-8").close()
350
350
351 # Check simple completion
351 # Check simple completion
352 c = ip.complete(prefix)[1]
352 c = ip.complete(prefix)[1]
353 self.assertEqual(c, names)
353 self.assertEqual(c, names)
354
354
355 # Now check with a function call
355 # Now check with a function call
356 cmd = 'a = f("%s' % prefix
356 cmd = 'a = f("%s' % prefix
357 c = ip.complete(prefix, cmd)[1]
357 c = ip.complete(prefix, cmd)[1]
358 comp = [prefix + s for s in suffixes]
358 comp = [prefix + s for s in suffixes]
359 self.assertEqual(c, comp)
359 self.assertEqual(c, comp)
360
360
361 def test_local_file_completions(self):
361 def test_local_file_completions(self):
362 ip = get_ipython()
362 ip = get_ipython()
363 with TemporaryWorkingDirectory():
363 with TemporaryWorkingDirectory():
364 prefix = "./foo"
364 prefix = "./foo"
365 suffixes = ["1", "2"]
365 suffixes = ["1", "2"]
366 names = [prefix + s for s in suffixes]
366 names = [prefix + s for s in suffixes]
367 for n in names:
367 for n in names:
368 open(n, "w", encoding="utf-8").close()
368 open(n, "w", encoding="utf-8").close()
369
369
370 # Check simple completion
370 # Check simple completion
371 c = ip.complete(prefix)[1]
371 c = ip.complete(prefix)[1]
372 self.assertEqual(c, names)
372 self.assertEqual(c, names)
373
373
374 # Now check with a function call
374 # Now check with a function call
375 cmd = 'a = f("%s' % prefix
375 cmd = 'a = f("%s' % prefix
376 c = ip.complete(prefix, cmd)[1]
376 c = ip.complete(prefix, cmd)[1]
377 comp = {prefix + s for s in suffixes}
377 comp = {prefix + s for s in suffixes}
378 self.assertTrue(comp.issubset(set(c)))
378 self.assertTrue(comp.issubset(set(c)))
379
379
380 def test_quoted_file_completions(self):
380 def test_quoted_file_completions(self):
381 ip = get_ipython()
381 ip = get_ipython()
382
383 def _(text):
384 return ip.Completer._complete(
385 cursor_line=0, cursor_pos=len(text), full_text=text
386 )["IPCompleter.file_matcher"]["completions"]
387
382 with TemporaryWorkingDirectory():
388 with TemporaryWorkingDirectory():
383 name = "foo'bar"
389 name = "foo'bar"
384 open(name, "w", encoding="utf-8").close()
390 open(name, "w", encoding="utf-8").close()
385
391
386 # Don't escape Windows
392 # Don't escape Windows
387 escaped = name if sys.platform == "win32" else "foo\\'bar"
393 escaped = name if sys.platform == "win32" else "foo\\'bar"
388
394
389 # Single quote matches embedded single quote
395 # Single quote matches embedded single quote
390 text = "open('foo"
396 c = _("open('foo")[0]
391 c = ip.Completer._complete(
397 self.assertEqual(c.text, escaped)
392 cursor_line=0, cursor_pos=len(text), full_text=text
393 )[1]
394 self.assertEqual(c, [escaped])
395
398
396 # Double quote requires no escape
399 # Double quote requires no escape
397 text = 'open("foo'
400 c = _('open("foo')[0]
398 c = ip.Completer._complete(
401 self.assertEqual(c.text, name)
399 cursor_line=0, cursor_pos=len(text), full_text=text
400 )[1]
401 self.assertEqual(c, [name])
402
402
403 # No quote requires an escape
403 # No quote requires an escape
404 text = "%ls foo"
404 c = _("%ls foo")[0]
405 c = ip.Completer._complete(
405 self.assertEqual(c.text, escaped)
406 cursor_line=0, cursor_pos=len(text), full_text=text
407 )[1]
408 self.assertEqual(c, [escaped])
409
406
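# A hedged reading of the refactored matcher API used by the ``_`` helper in
# test_quoted_file_completions above: ``IPCompleter._complete`` now returns a
# mapping keyed by matcher identifier, and each matcher reports its own
# completion objects (the test only relies on the ``.text`` attribute), e.g.:
#
#     results = ip.Completer._complete(cursor_line=0, cursor_pos=9,
#                                      full_text="open('foo")
#     first = results["IPCompleter.file_matcher"]["completions"][0]
#     first.text  # the (possibly escaped) filename
#
# Any structure beyond the "IPCompleter.file_matcher" key, the "completions"
# field and ``.text`` is an assumption, not something asserted in this diff.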
410 def test_all_completions_dups(self):
407 def test_all_completions_dups(self):
411 """
408 """
412 Make sure the output of `IPCompleter.all_completions` does not have
409 Make sure the output of `IPCompleter.all_completions` does not have
413 duplicated prefixes.
410 duplicated prefixes.
414 """
411 """
415 ip = get_ipython()
412 ip = get_ipython()
416 c = ip.Completer
413 c = ip.Completer
417 ip.ex("class TestClass():\n\ta=1\n\ta1=2")
414 ip.ex("class TestClass():\n\ta=1\n\ta1=2")
418 for jedi_status in [True, False]:
415 for jedi_status in [True, False]:
419 with provisionalcompleter():
416 with provisionalcompleter():
420 ip.Completer.use_jedi = jedi_status
417 ip.Completer.use_jedi = jedi_status
421 matches = c.all_completions("TestCl")
418 matches = c.all_completions("TestCl")
422 assert matches == ["TestClass"], (jedi_status, matches)
419 assert matches == ["TestClass"], (jedi_status, matches)
423 matches = c.all_completions("TestClass.")
420 matches = c.all_completions("TestClass.")
424 assert len(matches) > 2, (jedi_status, matches)
421 assert len(matches) > 2, (jedi_status, matches)
425 matches = c.all_completions("TestClass.a")
422 matches = c.all_completions("TestClass.a")
426 assert matches == ['TestClass.a', 'TestClass.a1'], jedi_status
423 assert matches == ['TestClass.a', 'TestClass.a1'], jedi_status
427
424
428 def test_jedi(self):
425 def test_jedi(self):
429 """
426 """
430 A couple of issues we had with Jedi
427 A couple of issues we had with Jedi
431 """
428 """
432 ip = get_ipython()
429 ip = get_ipython()
433
430
434 def _test_complete(reason, s, comp, start=None, end=None):
431 def _test_complete(reason, s, comp, start=None, end=None):
435 l = len(s)
432 l = len(s)
436 start = start if start is not None else l
433 start = start if start is not None else l
437 end = end if end is not None else l
434 end = end if end is not None else l
438 with provisionalcompleter():
435 with provisionalcompleter():
439 ip.Completer.use_jedi = True
436 ip.Completer.use_jedi = True
440 completions = set(ip.Completer.completions(s, l))
437 completions = set(ip.Completer.completions(s, l))
441 ip.Completer.use_jedi = False
438 ip.Completer.use_jedi = False
442 assert Completion(start, end, comp) in completions, reason
439 assert Completion(start, end, comp) in completions, reason
443
440
444 def _test_not_complete(reason, s, comp):
441 def _test_not_complete(reason, s, comp):
445 l = len(s)
442 l = len(s)
446 with provisionalcompleter():
443 with provisionalcompleter():
447 ip.Completer.use_jedi = True
444 ip.Completer.use_jedi = True
448 completions = set(ip.Completer.completions(s, l))
445 completions = set(ip.Completer.completions(s, l))
449 ip.Completer.use_jedi = False
446 ip.Completer.use_jedi = False
450 assert Completion(l, l, comp) not in completions, reason
447 assert Completion(l, l, comp) not in completions, reason
451
448
452 import jedi
449 import jedi
453
450
454 jedi_version = tuple(int(i) for i in jedi.__version__.split(".")[:3])
451 jedi_version = tuple(int(i) for i in jedi.__version__.split(".")[:3])
455 if jedi_version > (0, 10):
452 if jedi_version > (0, 10):
456 _test_complete("jedi >0.9 should complete and not crash", "a=1;a.", "real")
453 _test_complete("jedi >0.9 should complete and not crash", "a=1;a.", "real")
457 _test_complete("can infer first argument", 'a=(1,"foo");a[0].', "real")
454 _test_complete("can infer first argument", 'a=(1,"foo");a[0].', "real")
458 _test_complete("can infer second argument", 'a=(1,"foo");a[1].', "capitalize")
455 _test_complete("can infer second argument", 'a=(1,"foo");a[1].', "capitalize")
459 _test_complete("cover duplicate completions", "im", "import", 0, 2)
456 _test_complete("cover duplicate completions", "im", "import", 0, 2)
460
457
461 _test_not_complete("does not mix types", 'a=(1,"foo");a[0].', "capitalize")
458 _test_not_complete("does not mix types", 'a=(1,"foo");a[0].', "capitalize")
462
459
463 def test_completion_have_signature(self):
460 def test_completion_have_signature(self):
464 """
461 """
465 Lets make sure jedi is capable of pulling out the signature of the function we are completing.
465 Let's make sure jedi is capable of pulling out the signature of the function we are completing.
462 Let's make sure jedi is capable of pulling out the signature of the function we are completing.
463 """
467 ip = get_ipython()
464 ip = get_ipython()
468 with provisionalcompleter():
465 with provisionalcompleter():
469 ip.Completer.use_jedi = True
466 ip.Completer.use_jedi = True
470 completions = ip.Completer.completions("ope", 3)
467 completions = ip.Completer.completions("ope", 3)
471 c = next(completions) # should be `open`
468 c = next(completions) # should be `open`
472 ip.Completer.use_jedi = False
469 ip.Completer.use_jedi = False
473 assert "file" in c.signature, "Signature of function was not found by completer"
470 assert "file" in c.signature, "Signature of function was not found by completer"
474 assert (
471 assert (
475 "encoding" in c.signature
472 "encoding" in c.signature
476 ), "Signature of function was not found by completer"
473 ), "Signature of function was not found by completer"
477
474
475 def test_completions_have_type(self):
476 """
477 Let's make sure matchers provide the completion type.
478 """
479 ip = get_ipython()
480 with provisionalcompleter():
481 ip.Completer.use_jedi = False
482 completions = ip.Completer.completions("%tim", 3)
483 c = next(completions) # should be `%time` or similar
484 assert c.type == "magic", "Type of magic was not assigned by completer"
485
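# For context, the provisional ``Completion`` objects iterated above also carry
# a ``type`` label; the new test only asserts that magic completions report
# ``type == "magic"``. Other possible labels are not pinned down here, e.g.:
#
#     with provisionalcompleter():
#         c = next(ip.Completer.completions("%tim", 3))
#     c.type  # "magic" for %time-style matches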
478 @pytest.mark.xfail(reason="Known failure on jedi<=0.18.0")
486 @pytest.mark.xfail(reason="Known failure on jedi<=0.18.0")
479 def test_deduplicate_completions(self):
487 def test_deduplicate_completions(self):
480 """
488 """
481 Test that completions are correctly deduplicated (even if ranges are not the same)
489 Test that completions are correctly deduplicated (even if ranges are not the same)
482 """
490 """
483 ip = get_ipython()
491 ip = get_ipython()
484 ip.ex(
492 ip.ex(
485 textwrap.dedent(
493 textwrap.dedent(
486 """
494 """
487 class Z:
495 class Z:
488 zoo = 1
496 zoo = 1
489 """
497 """
490 )
498 )
491 )
499 )
492 with provisionalcompleter():
500 with provisionalcompleter():
493 ip.Completer.use_jedi = True
501 ip.Completer.use_jedi = True
494 l = list(
502 l = list(
495 _deduplicate_completions("Z.z", ip.Completer.completions("Z.z", 3))
503 _deduplicate_completions("Z.z", ip.Completer.completions("Z.z", 3))
496 )
504 )
497 ip.Completer.use_jedi = False
505 ip.Completer.use_jedi = False
498
506
499 assert len(l) == 1, "Completions (Z.z<tab>) should deduplicate to a single match: %s " % l
507 assert len(l) == 1, "Completions (Z.z<tab>) should deduplicate to a single match: %s " % l
500 assert l[0].text == "zoo" # and not `it.accumulate`
508 assert l[0].text == "zoo" # and not `it.accumulate`
501
509
502 def test_greedy_completions(self):
510 def test_greedy_completions(self):
503 """
511 """
504 Test the capability of the Greedy completer.
512 Test the capability of the Greedy completer.
505
513
506 Most of the tests here do not really show off the greedy completer; as proof,
514 Most of the tests here do not really show off the greedy completer; as proof,
507 each of the texts below now passes with Jedi. The greedy completer is capable of more.
515 each of the texts below now passes with Jedi. The greedy completer is capable of more.
508
516
509 See the :any:`test_dict_key_completion_contexts`
517 See the :any:`test_dict_key_completion_contexts`
510
518
511 """
519 """
512 ip = get_ipython()
520 ip = get_ipython()
513 ip.ex("a=list(range(5))")
521 ip.ex("a=list(range(5))")
514 _, c = ip.complete(".", line="a[0].")
522 _, c = ip.complete(".", line="a[0].")
515 self.assertFalse(".real" in c, "Shouldn't have completed on a[0]: %s" % c)
523 self.assertFalse(".real" in c, "Shouldn't have completed on a[0]: %s" % c)
516
524
517 def _(line, cursor_pos, expect, message, completion):
525 def _(line, cursor_pos, expect, message, completion):
518 with greedy_completion(), provisionalcompleter():
526 with greedy_completion(), provisionalcompleter():
519 ip.Completer.use_jedi = False
527 ip.Completer.use_jedi = False
520 _, c = ip.complete(".", line=line, cursor_pos=cursor_pos)
528 _, c = ip.complete(".", line=line, cursor_pos=cursor_pos)
521 self.assertIn(expect, c, message % c)
529 self.assertIn(expect, c, message % c)
522
530
523 ip.Completer.use_jedi = True
531 ip.Completer.use_jedi = True
524 with provisionalcompleter():
532 with provisionalcompleter():
525 completions = ip.Completer.completions(line, cursor_pos)
533 completions = ip.Completer.completions(line, cursor_pos)
526 self.assertIn(completion, completions)
534 self.assertIn(completion, completions)
527
535
528 with provisionalcompleter():
536 with provisionalcompleter():
529 _(
537 _(
530 "a[0].",
538 "a[0].",
531 5,
539 5,
532 "a[0].real",
540 "a[0].real",
533 "Should have completed on a[0].: %s",
541 "Should have completed on a[0].: %s",
534 Completion(5, 5, "real"),
542 Completion(5, 5, "real"),
535 )
543 )
536 _(
544 _(
537 "a[0].r",
545 "a[0].r",
538 6,
546 6,
539 "a[0].real",
547 "a[0].real",
540 "Should have completed on a[0].r: %s",
548 "Should have completed on a[0].r: %s",
541 Completion(5, 6, "real"),
549 Completion(5, 6, "real"),
542 )
550 )
543
551
544 _(
552 _(
545 "a[0].from_",
553 "a[0].from_",
546 10,
554 10,
547 "a[0].from_bytes",
555 "a[0].from_bytes",
548 "Should have completed on a[0].from_: %s",
556 "Should have completed on a[0].from_: %s",
549 Completion(5, 10, "from_bytes"),
557 Completion(5, 10, "from_bytes"),
550 )
558 )
551
559
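# A sketch of what greedy completion enables (drawn from the assertions above):
# with ``Completer.greedy`` set, the completer will evaluate expressions such
# as ``a[0]`` so their attributes can be completed, whereas the default
# completer refuses to do so:
#
#     ip.ex("a = list(range(5))")
#     _, matches = ip.complete(".", line="a[0].")          # no "a[0].real"
#     with greedy_completion():
#         _, matches = ip.complete(".", line="a[0].", cursor_pos=5)
#         # "a[0].real" is now among the matches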
552 def test_omit__names(self):
560 def test_omit__names(self):
553 # also happens to test IPCompleter as a configurable
561 # also happens to test IPCompleter as a configurable
554 ip = get_ipython()
562 ip = get_ipython()
555 ip._hidden_attr = 1
563 ip._hidden_attr = 1
556 ip._x = {}
564 ip._x = {}
557 c = ip.Completer
565 c = ip.Completer
558 ip.ex("ip=get_ipython()")
566 ip.ex("ip=get_ipython()")
559 cfg = Config()
567 cfg = Config()
560 cfg.IPCompleter.omit__names = 0
568 cfg.IPCompleter.omit__names = 0
561 c.update_config(cfg)
569 c.update_config(cfg)
562 with provisionalcompleter():
570 with provisionalcompleter():
563 c.use_jedi = False
571 c.use_jedi = False
564 s, matches = c.complete("ip.")
572 s, matches = c.complete("ip.")
565 self.assertIn("ip.__str__", matches)
573 self.assertIn("ip.__str__", matches)
566 self.assertIn("ip._hidden_attr", matches)
574 self.assertIn("ip._hidden_attr", matches)
567
575
568 # c.use_jedi = True
576 # c.use_jedi = True
569 # completions = set(c.completions('ip.', 3))
577 # completions = set(c.completions('ip.', 3))
570 # self.assertIn(Completion(3, 3, '__str__'), completions)
578 # self.assertIn(Completion(3, 3, '__str__'), completions)
571 # self.assertIn(Completion(3,3, "_hidden_attr"), completions)
579 # self.assertIn(Completion(3,3, "_hidden_attr"), completions)
572
580
573 cfg = Config()
581 cfg = Config()
574 cfg.IPCompleter.omit__names = 1
582 cfg.IPCompleter.omit__names = 1
575 c.update_config(cfg)
583 c.update_config(cfg)
576 with provisionalcompleter():
584 with provisionalcompleter():
577 c.use_jedi = False
585 c.use_jedi = False
578 s, matches = c.complete("ip.")
586 s, matches = c.complete("ip.")
579 self.assertNotIn("ip.__str__", matches)
587 self.assertNotIn("ip.__str__", matches)
580 # self.assertIn('ip._hidden_attr', matches)
588 # self.assertIn('ip._hidden_attr', matches)
581
589
582 # c.use_jedi = True
590 # c.use_jedi = True
583 # completions = set(c.completions('ip.', 3))
591 # completions = set(c.completions('ip.', 3))
584 # self.assertNotIn(Completion(3,3,'__str__'), completions)
592 # self.assertNotIn(Completion(3,3,'__str__'), completions)
585 # self.assertIn(Completion(3,3, "_hidden_attr"), completions)
593 # self.assertIn(Completion(3,3, "_hidden_attr"), completions)
586
594
587 cfg = Config()
595 cfg = Config()
588 cfg.IPCompleter.omit__names = 2
596 cfg.IPCompleter.omit__names = 2
589 c.update_config(cfg)
597 c.update_config(cfg)
590 with provisionalcompleter():
598 with provisionalcompleter():
591 c.use_jedi = False
599 c.use_jedi = False
592 s, matches = c.complete("ip.")
600 s, matches = c.complete("ip.")
593 self.assertNotIn("ip.__str__", matches)
601 self.assertNotIn("ip.__str__", matches)
594 self.assertNotIn("ip._hidden_attr", matches)
602 self.assertNotIn("ip._hidden_attr", matches)
595
603
596 # c.use_jedi = True
604 # c.use_jedi = True
597 # completions = set(c.completions('ip.', 3))
605 # completions = set(c.completions('ip.', 3))
598 # self.assertNotIn(Completion(3,3,'__str__'), completions)
606 # self.assertNotIn(Completion(3,3,'__str__'), completions)
599 # self.assertNotIn(Completion(3,3, "_hidden_attr"), completions)
607 # self.assertNotIn(Completion(3,3, "_hidden_attr"), completions)
600
608
601 with provisionalcompleter():
609 with provisionalcompleter():
602 c.use_jedi = False
610 c.use_jedi = False
603 s, matches = c.complete("ip._x.")
611 s, matches = c.complete("ip._x.")
604 self.assertIn("ip._x.keys", matches)
612 self.assertIn("ip._x.keys", matches)
605
613
606 # c.use_jedi = True
614 # c.use_jedi = True
607 # completions = set(c.completions('ip._x.', 6))
615 # completions = set(c.completions('ip._x.', 6))
608 # self.assertIn(Completion(6,6, "keys"), completions)
616 # self.assertIn(Completion(6,6, "keys"), completions)
609
617
610 del ip._hidden_attr
618 del ip._hidden_attr
611 del ip._x
619 del ip._x
612
620
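# For reference, the ``IPCompleter.omit__names`` levels exercised above:
#   0 - complete every attribute, including __dunder__ and _single_underscore names
#   1 - omit __dunder__ names
#   2 - omit all names starting with an underscore
# Attributes *of* an underscored object (e.g. ``ip._x.keys``) remain completable.
# A minimal configuration sketch:
#
#     cfg = Config()
#     cfg.IPCompleter.omit__names = 2
#     ip.Completer.update_config(cfg)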
613 def test_limit_to__all__False_ok(self):
621 def test_limit_to__all__False_ok(self):
614 """
622 """
615 Limit to __all__ is deprecated; once we remove it, this test can go away.
623 Limit to __all__ is deprecated; once we remove it, this test can go away.
616 """
624 """
617 ip = get_ipython()
625 ip = get_ipython()
618 c = ip.Completer
626 c = ip.Completer
619 c.use_jedi = False
627 c.use_jedi = False
620 ip.ex("class D: x=24")
628 ip.ex("class D: x=24")
621 ip.ex("d=D()")
629 ip.ex("d=D()")
622 cfg = Config()
630 cfg = Config()
623 cfg.IPCompleter.limit_to__all__ = False
631 cfg.IPCompleter.limit_to__all__ = False
624 c.update_config(cfg)
632 c.update_config(cfg)
625 s, matches = c.complete("d.")
633 s, matches = c.complete("d.")
626 self.assertIn("d.x", matches)
634 self.assertIn("d.x", matches)
627
635
628 def test_get__all__entries_ok(self):
636 def test_get__all__entries_ok(self):
629 class A:
637 class A:
630 __all__ = ["x", 1]
638 __all__ = ["x", 1]
631
639
632 words = completer.get__all__entries(A())
640 words = completer.get__all__entries(A())
633 self.assertEqual(words, ["x"])
641 self.assertEqual(words, ["x"])
634
642
635 def test_get__all__entries_no__all__ok(self):
643 def test_get__all__entries_no__all__ok(self):
636 class A:
644 class A:
637 pass
645 pass
638
646
639 words = completer.get__all__entries(A())
647 words = completer.get__all__entries(A())
640 self.assertEqual(words, [])
648 self.assertEqual(words, [])
641
649
642 def test_func_kw_completions(self):
650 def test_func_kw_completions(self):
643 ip = get_ipython()
651 ip = get_ipython()
644 c = ip.Completer
652 c = ip.Completer
645 c.use_jedi = False
653 c.use_jedi = False
646 ip.ex("def myfunc(a=1,b=2): return a+b")
654 ip.ex("def myfunc(a=1,b=2): return a+b")
647 s, matches = c.complete(None, "myfunc(1,b")
655 s, matches = c.complete(None, "myfunc(1,b")
648 self.assertIn("b=", matches)
656 self.assertIn("b=", matches)
649 # Simulate completing with cursor right after b (pos==10):
657 # Simulate completing with cursor right after b (pos==10):
650 s, matches = c.complete(None, "myfunc(1,b)", 10)
658 s, matches = c.complete(None, "myfunc(1,b)", 10)
651 self.assertIn("b=", matches)
659 self.assertIn("b=", matches)
652 s, matches = c.complete(None, 'myfunc(a="escaped\\")string",b')
660 s, matches = c.complete(None, 'myfunc(a="escaped\\")string",b')
653 self.assertIn("b=", matches)
661 self.assertIn("b=", matches)
654 # builtin function
662 # builtin function
655 s, matches = c.complete(None, "min(k, k")
663 s, matches = c.complete(None, "min(k, k")
656 self.assertIn("key=", matches)
664 self.assertIn("key=", matches)
657
665
658 def test_default_arguments_from_docstring(self):
666 def test_default_arguments_from_docstring(self):
659 ip = get_ipython()
667 ip = get_ipython()
660 c = ip.Completer
668 c = ip.Completer
661 kwd = c._default_arguments_from_docstring("min(iterable[, key=func]) -> value")
669 kwd = c._default_arguments_from_docstring("min(iterable[, key=func]) -> value")
662 self.assertEqual(kwd, ["key"])
670 self.assertEqual(kwd, ["key"])
663 # with cython type etc
671 # with cython type etc
664 kwd = c._default_arguments_from_docstring(
672 kwd = c._default_arguments_from_docstring(
665 "Minuit.migrad(self, int ncall=10000, resume=True, int nsplit=1)\n"
673 "Minuit.migrad(self, int ncall=10000, resume=True, int nsplit=1)\n"
666 )
674 )
667 self.assertEqual(kwd, ["ncall", "resume", "nsplit"])
675 self.assertEqual(kwd, ["ncall", "resume", "nsplit"])
668 # white spaces
676 # white spaces
669 kwd = c._default_arguments_from_docstring(
677 kwd = c._default_arguments_from_docstring(
670 "\n Minuit.migrad(self, int ncall=10000, resume=True, int nsplit=1)\n"
678 "\n Minuit.migrad(self, int ncall=10000, resume=True, int nsplit=1)\n"
671 )
679 )
672 self.assertEqual(kwd, ["ncall", "resume", "nsplit"])
680 self.assertEqual(kwd, ["ncall", "resume", "nsplit"])
673
681
674 def test_line_magics(self):
682 def test_line_magics(self):
675 ip = get_ipython()
683 ip = get_ipython()
676 c = ip.Completer
684 c = ip.Completer
677 s, matches = c.complete(None, "lsmag")
685 s, matches = c.complete(None, "lsmag")
678 self.assertIn("%lsmagic", matches)
686 self.assertIn("%lsmagic", matches)
679 s, matches = c.complete(None, "%lsmag")
687 s, matches = c.complete(None, "%lsmag")
680 self.assertIn("%lsmagic", matches)
688 self.assertIn("%lsmagic", matches)
681
689
682 def test_cell_magics(self):
690 def test_cell_magics(self):
683 from IPython.core.magic import register_cell_magic
691 from IPython.core.magic import register_cell_magic
684
692
685 @register_cell_magic
693 @register_cell_magic
686 def _foo_cellm(line, cell):
694 def _foo_cellm(line, cell):
687 pass
695 pass
688
696
689 ip = get_ipython()
697 ip = get_ipython()
690 c = ip.Completer
698 c = ip.Completer
691
699
692 s, matches = c.complete(None, "_foo_ce")
700 s, matches = c.complete(None, "_foo_ce")
693 self.assertIn("%%_foo_cellm", matches)
701 self.assertIn("%%_foo_cellm", matches)
694 s, matches = c.complete(None, "%%_foo_ce")
702 s, matches = c.complete(None, "%%_foo_ce")
695 self.assertIn("%%_foo_cellm", matches)
703 self.assertIn("%%_foo_cellm", matches)
696
704
697 def test_line_cell_magics(self):
705 def test_line_cell_magics(self):
698 from IPython.core.magic import register_line_cell_magic
706 from IPython.core.magic import register_line_cell_magic
699
707
700 @register_line_cell_magic
708 @register_line_cell_magic
701 def _bar_cellm(line, cell):
709 def _bar_cellm(line, cell):
702 pass
710 pass
703
711
704 ip = get_ipython()
712 ip = get_ipython()
705 c = ip.Completer
713 c = ip.Completer
706
714
707 # The policy here is trickier, see comments in completion code. The
715 # The policy here is trickier, see comments in completion code. The
708 # returned values depend on whether the user passes %% or not explicitly,
716 # returned values depend on whether the user passes %% or not explicitly,
709 # and this will show a difference if the same name is both a line and cell
717 # and this will show a difference if the same name is both a line and cell
710 # magic.
718 # magic.
711 s, matches = c.complete(None, "_bar_ce")
719 s, matches = c.complete(None, "_bar_ce")
712 self.assertIn("%_bar_cellm", matches)
720 self.assertIn("%_bar_cellm", matches)
713 self.assertIn("%%_bar_cellm", matches)
721 self.assertIn("%%_bar_cellm", matches)
714 s, matches = c.complete(None, "%_bar_ce")
722 s, matches = c.complete(None, "%_bar_ce")
715 self.assertIn("%_bar_cellm", matches)
723 self.assertIn("%_bar_cellm", matches)
716 self.assertIn("%%_bar_cellm", matches)
724 self.assertIn("%%_bar_cellm", matches)
717 s, matches = c.complete(None, "%%_bar_ce")
725 s, matches = c.complete(None, "%%_bar_ce")
718 self.assertNotIn("%_bar_cellm", matches)
726 self.assertNotIn("%_bar_cellm", matches)
719 self.assertIn("%%_bar_cellm", matches)
727 self.assertIn("%%_bar_cellm", matches)
720
728
721 def test_magic_completion_order(self):
729 def test_magic_completion_order(self):
722 ip = get_ipython()
730 ip = get_ipython()
723 c = ip.Completer
731 c = ip.Completer
724
732
725 # Test ordering of line and cell magics.
733 # Test ordering of line and cell magics.
726 text, matches = c.complete("timeit")
734 text, matches = c.complete("timeit")
727 self.assertEqual(matches, ["%timeit", "%%timeit"])
735 self.assertEqual(matches, ["%timeit", "%%timeit"])
728
736
729 def test_magic_completion_shadowing(self):
737 def test_magic_completion_shadowing(self):
730 ip = get_ipython()
738 ip = get_ipython()
731 c = ip.Completer
739 c = ip.Completer
732 c.use_jedi = False
740 c.use_jedi = False
733
741
734 # Before importing matplotlib, %matplotlib magic should be the only option.
742 # Before importing matplotlib, %matplotlib magic should be the only option.
735 text, matches = c.complete("mat")
743 text, matches = c.complete("mat")
736 self.assertEqual(matches, ["%matplotlib"])
744 self.assertEqual(matches, ["%matplotlib"])
737
745
738 # The newly introduced name should shadow the magic.
746 # The newly introduced name should shadow the magic.
739 ip.run_cell("matplotlib = 1")
747 ip.run_cell("matplotlib = 1")
740 text, matches = c.complete("mat")
748 text, matches = c.complete("mat")
741 self.assertEqual(matches, ["matplotlib"])
749 self.assertEqual(matches, ["matplotlib"])
742
750
743 # After removing matplotlib from namespace, the magic should again be
751 # After removing matplotlib from namespace, the magic should again be
744 # the only option.
752 # the only option.
745 del ip.user_ns["matplotlib"]
753 del ip.user_ns["matplotlib"]
746 text, matches = c.complete("mat")
754 text, matches = c.complete("mat")
747 self.assertEqual(matches, ["%matplotlib"])
755 self.assertEqual(matches, ["%matplotlib"])
748
756
749 def test_magic_completion_shadowing_explicit(self):
757 def test_magic_completion_shadowing_explicit(self):
750 """
758 """
751 If the user tries to complete a shadowed magic, an explicit % start should
759 If the user tries to complete a shadowed magic, an explicit % start should
752 still return the completions.
760 still return the completions.
753 """
761 """
754 ip = get_ipython()
762 ip = get_ipython()
755 c = ip.Completer
763 c = ip.Completer
756
764
757 # Before importing matplotlib, %matplotlib magic should be the only option.
765 # Before importing matplotlib, %matplotlib magic should be the only option.
758 text, matches = c.complete("%mat")
766 text, matches = c.complete("%mat")
759 self.assertEqual(matches, ["%matplotlib"])
767 self.assertEqual(matches, ["%matplotlib"])
760
768
761 ip.run_cell("matplotlib = 1")
769 ip.run_cell("matplotlib = 1")
762
770
763 # Even though the name matplotlib now shadows the magic, an explicit %
771 # Even though the name matplotlib now shadows the magic, an explicit %
764 # prefix should still return the magic as the only option.
772 # prefix should still return the magic as the only option.
765 text, matches = c.complete("%mat")
773 text, matches = c.complete("%mat")
766 self.assertEqual(matches, ["%matplotlib"])
774 self.assertEqual(matches, ["%matplotlib"])
767
775
768 def test_magic_config(self):
776 def test_magic_config(self):
769 ip = get_ipython()
777 ip = get_ipython()
770 c = ip.Completer
778 c = ip.Completer
771
779
772 s, matches = c.complete(None, "conf")
780 s, matches = c.complete(None, "conf")
773 self.assertIn("%config", matches)
781 self.assertIn("%config", matches)
774 s, matches = c.complete(None, "conf")
782 s, matches = c.complete(None, "conf")
775 self.assertNotIn("AliasManager", matches)
783 self.assertNotIn("AliasManager", matches)
776 s, matches = c.complete(None, "config ")
784 s, matches = c.complete(None, "config ")
777 self.assertIn("AliasManager", matches)
785 self.assertIn("AliasManager", matches)
778 s, matches = c.complete(None, "%config ")
786 s, matches = c.complete(None, "%config ")
779 self.assertIn("AliasManager", matches)
787 self.assertIn("AliasManager", matches)
780 s, matches = c.complete(None, "config Ali")
788 s, matches = c.complete(None, "config Ali")
781 self.assertListEqual(["AliasManager"], matches)
789 self.assertListEqual(["AliasManager"], matches)
782 s, matches = c.complete(None, "%config Ali")
790 s, matches = c.complete(None, "%config Ali")
783 self.assertListEqual(["AliasManager"], matches)
791 self.assertListEqual(["AliasManager"], matches)
784 s, matches = c.complete(None, "config AliasManager")
792 s, matches = c.complete(None, "config AliasManager")
785 self.assertListEqual(["AliasManager"], matches)
793 self.assertListEqual(["AliasManager"], matches)
786 s, matches = c.complete(None, "%config AliasManager")
794 s, matches = c.complete(None, "%config AliasManager")
787 self.assertListEqual(["AliasManager"], matches)
795 self.assertListEqual(["AliasManager"], matches)
788 s, matches = c.complete(None, "config AliasManager.")
796 s, matches = c.complete(None, "config AliasManager.")
789 self.assertIn("AliasManager.default_aliases", matches)
797 self.assertIn("AliasManager.default_aliases", matches)
790 s, matches = c.complete(None, "%config AliasManager.")
798 s, matches = c.complete(None, "%config AliasManager.")
791 self.assertIn("AliasManager.default_aliases", matches)
799 self.assertIn("AliasManager.default_aliases", matches)
792 s, matches = c.complete(None, "config AliasManager.de")
800 s, matches = c.complete(None, "config AliasManager.de")
793 self.assertListEqual(["AliasManager.default_aliases"], matches)
801 self.assertListEqual(["AliasManager.default_aliases"], matches)
794 s, matches = c.complete(None, "config AliasManager.de")
802 s, matches = c.complete(None, "config AliasManager.de")
795 self.assertListEqual(["AliasManager.default_aliases"], matches)
803 self.assertListEqual(["AliasManager.default_aliases"], matches)
796
804
797 def test_magic_color(self):
805 def test_magic_color(self):
798 ip = get_ipython()
806 ip = get_ipython()
799 c = ip.Completer
807 c = ip.Completer
800
808
801 s, matches = c.complete(None, "colo")
809 s, matches = c.complete(None, "colo")
802 self.assertIn("%colors", matches)
810 self.assertIn("%colors", matches)
803 s, matches = c.complete(None, "colo")
811 s, matches = c.complete(None, "colo")
804 self.assertNotIn("NoColor", matches)
812 self.assertNotIn("NoColor", matches)
805 s, matches = c.complete(None, "%colors") # No trailing space
813 s, matches = c.complete(None, "%colors") # No trailing space
806 self.assertNotIn("NoColor", matches)
814 self.assertNotIn("NoColor", matches)
807 s, matches = c.complete(None, "colors ")
815 s, matches = c.complete(None, "colors ")
808 self.assertIn("NoColor", matches)
816 self.assertIn("NoColor", matches)
809 s, matches = c.complete(None, "%colors ")
817 s, matches = c.complete(None, "%colors ")
810 self.assertIn("NoColor", matches)
818 self.assertIn("NoColor", matches)
811 s, matches = c.complete(None, "colors NoCo")
819 s, matches = c.complete(None, "colors NoCo")
812 self.assertListEqual(["NoColor"], matches)
820 self.assertListEqual(["NoColor"], matches)
813 s, matches = c.complete(None, "%colors NoCo")
821 s, matches = c.complete(None, "%colors NoCo")
814 self.assertListEqual(["NoColor"], matches)
822 self.assertListEqual(["NoColor"], matches)
815
823
816 def test_match_dict_keys(self):
824 def test_match_dict_keys(self):
817 """
825 """
818 Test that match_dict_keys works on a couple of use cases, returns what is
826 Test that match_dict_keys works on a couple of use cases, returns what is
819 expected, and does not crash.
827 expected, and does not crash.
820 """
828 """
821 delims = " \t\n`!@#$^&*()=+[{]}\\|;:'\",<>?"
829 delims = " \t\n`!@#$^&*()=+[{]}\\|;:'\",<>?"
822
830
823 keys = ["foo", b"far"]
831 keys = ["foo", b"far"]
824 assert match_dict_keys(keys, "b'", delims=delims) == ("'", 2, ["far"])
832 assert match_dict_keys(keys, "b'", delims=delims) == ("'", 2, ["far"])
825 assert match_dict_keys(keys, "b'f", delims=delims) == ("'", 2, ["far"])
833 assert match_dict_keys(keys, "b'f", delims=delims) == ("'", 2, ["far"])
826 assert match_dict_keys(keys, 'b"', delims=delims) == ('"', 2, ["far"])
834 assert match_dict_keys(keys, 'b"', delims=delims) == ('"', 2, ["far"])
827 assert match_dict_keys(keys, 'b"f', delims=delims) == ('"', 2, ["far"])
835 assert match_dict_keys(keys, 'b"f', delims=delims) == ('"', 2, ["far"])
828
836
829 assert match_dict_keys(keys, "'", delims=delims) == ("'", 1, ["foo"])
837 assert match_dict_keys(keys, "'", delims=delims) == ("'", 1, ["foo"])
830 assert match_dict_keys(keys, "'f", delims=delims) == ("'", 1, ["foo"])
838 assert match_dict_keys(keys, "'f", delims=delims) == ("'", 1, ["foo"])
831 assert match_dict_keys(keys, '"', delims=delims) == ('"', 1, ["foo"])
839 assert match_dict_keys(keys, '"', delims=delims) == ('"', 1, ["foo"])
832 assert match_dict_keys(keys, '"f', delims=delims) == ('"', 1, ["foo"])
840 assert match_dict_keys(keys, '"f', delims=delims) == ('"', 1, ["foo"])
833
841
834 match_dict_keys
842 match_dict_keys
835
843
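# A compact restatement of the ``match_dict_keys`` contract pinned down above:
# it returns (quote_character, offset_of_key_text, matching_keys); the meaning
# of the offset is inferred from the assertions, not from separate docs.
_delims = " \t\n`!@#$^&*()=+[{]}\\|;:'\",<>?"
assert match_dict_keys(["foo", b"far"], "b'f", delims=_delims) == ("'", 2, ["far"])
assert match_dict_keys(["foo", b"far"], '"f', delims=_delims) == ('"', 1, ["foo"])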
836 def test_match_dict_keys_tuple(self):
844 def test_match_dict_keys_tuple(self):
837 """
845 """
838 Test that match_dict_keys called with an extra prefix works on a couple of use cases,
846 Test that match_dict_keys called with an extra prefix works on a couple of use cases,
839 returns what is expected, and does not crash.
847 returns what is expected, and does not crash.
840 """
848 """
841 delims = " \t\n`!@#$^&*()=+[{]}\\|;:'\",<>?"
849 delims = " \t\n`!@#$^&*()=+[{]}\\|;:'\",<>?"
842
850
843 keys = [("foo", "bar"), ("foo", "oof"), ("foo", b"bar"), ('other', 'test')]
851 keys = [("foo", "bar"), ("foo", "oof"), ("foo", b"bar"), ('other', 'test')]
844
852
845 # Completion on first key == "foo"
853 # Completion on first key == "foo"
846 assert match_dict_keys(keys, "'", delims=delims, extra_prefix=("foo",)) == ("'", 1, ["bar", "oof"])
854 assert match_dict_keys(keys, "'", delims=delims, extra_prefix=("foo",)) == ("'", 1, ["bar", "oof"])
847 assert match_dict_keys(keys, "\"", delims=delims, extra_prefix=("foo",)) == ("\"", 1, ["bar", "oof"])
855 assert match_dict_keys(keys, "\"", delims=delims, extra_prefix=("foo",)) == ("\"", 1, ["bar", "oof"])
848 assert match_dict_keys(keys, "'o", delims=delims, extra_prefix=("foo",)) == ("'", 1, ["oof"])
856 assert match_dict_keys(keys, "'o", delims=delims, extra_prefix=("foo",)) == ("'", 1, ["oof"])
849 assert match_dict_keys(keys, "\"o", delims=delims, extra_prefix=("foo",)) == ("\"", 1, ["oof"])
857 assert match_dict_keys(keys, "\"o", delims=delims, extra_prefix=("foo",)) == ("\"", 1, ["oof"])
850 assert match_dict_keys(keys, "b'", delims=delims, extra_prefix=("foo",)) == ("'", 2, ["bar"])
858 assert match_dict_keys(keys, "b'", delims=delims, extra_prefix=("foo",)) == ("'", 2, ["bar"])
851 assert match_dict_keys(keys, "b\"", delims=delims, extra_prefix=("foo",)) == ("\"", 2, ["bar"])
859 assert match_dict_keys(keys, "b\"", delims=delims, extra_prefix=("foo",)) == ("\"", 2, ["bar"])
852 assert match_dict_keys(keys, "b'b", delims=delims, extra_prefix=("foo",)) == ("'", 2, ["bar"])
860 assert match_dict_keys(keys, "b'b", delims=delims, extra_prefix=("foo",)) == ("'", 2, ["bar"])
853 assert match_dict_keys(keys, "b\"b", delims=delims, extra_prefix=("foo",)) == ("\"", 2, ["bar"])
861 assert match_dict_keys(keys, "b\"b", delims=delims, extra_prefix=("foo",)) == ("\"", 2, ["bar"])
854
862
855 # No Completion
863 # No Completion
856 assert match_dict_keys(keys, "'", delims=delims, extra_prefix=("no_foo",)) == ("'", 1, [])
864 assert match_dict_keys(keys, "'", delims=delims, extra_prefix=("no_foo",)) == ("'", 1, [])
857 assert match_dict_keys(keys, "'", delims=delims, extra_prefix=("fo",)) == ("'", 1, [])
865 assert match_dict_keys(keys, "'", delims=delims, extra_prefix=("fo",)) == ("'", 1, [])
858
866
859 keys = [('foo1', 'foo2', 'foo3', 'foo4'), ('foo1', 'foo2', 'bar', 'foo4')]
867 keys = [('foo1', 'foo2', 'foo3', 'foo4'), ('foo1', 'foo2', 'bar', 'foo4')]
860 assert match_dict_keys(keys, "'foo", delims=delims, extra_prefix=('foo1',)) == ("'", 1, ["foo2", "foo2"])
868 assert match_dict_keys(keys, "'foo", delims=delims, extra_prefix=('foo1',)) == ("'", 1, ["foo2", "foo2"])
861 assert match_dict_keys(keys, "'foo", delims=delims, extra_prefix=('foo1', 'foo2')) == ("'", 1, ["foo3"])
869 assert match_dict_keys(keys, "'foo", delims=delims, extra_prefix=('foo1', 'foo2')) == ("'", 1, ["foo3"])
862 assert match_dict_keys(keys, "'foo", delims=delims, extra_prefix=('foo1', 'foo2', 'foo3')) == ("'", 1, ["foo4"])
870 assert match_dict_keys(keys, "'foo", delims=delims, extra_prefix=('foo1', 'foo2', 'foo3')) == ("'", 1, ["foo4"])
863 assert match_dict_keys(keys, "'foo", delims=delims, extra_prefix=('foo1', 'foo2', 'foo3', 'foo4')) == ("'", 1, [])
871 assert match_dict_keys(keys, "'foo", delims=delims, extra_prefix=('foo1', 'foo2', 'foo3', 'foo4')) == ("'", 1, [])
864
872
865 def test_dict_key_completion_string(self):
873 def test_dict_key_completion_string(self):
866 """Test dictionary key completion for string keys"""
874 """Test dictionary key completion for string keys"""
867 ip = get_ipython()
875 ip = get_ipython()
868 complete = ip.Completer.complete
876 complete = ip.Completer.complete
869
877
870 ip.user_ns["d"] = {"abc": None}
878 ip.user_ns["d"] = {"abc": None}
871
879
872 # check completion at different stages
880 # check completion at different stages
873 _, matches = complete(line_buffer="d[")
881 _, matches = complete(line_buffer="d[")
874 self.assertIn("'abc'", matches)
882 self.assertIn("'abc'", matches)
875 self.assertNotIn("'abc']", matches)
883 self.assertNotIn("'abc']", matches)
876
884
877 _, matches = complete(line_buffer="d['")
885 _, matches = complete(line_buffer="d['")
878 self.assertIn("abc", matches)
886 self.assertIn("abc", matches)
879 self.assertNotIn("abc']", matches)
887 self.assertNotIn("abc']", matches)
880
888
881 _, matches = complete(line_buffer="d['a")
889 _, matches = complete(line_buffer="d['a")
882 self.assertIn("abc", matches)
890 self.assertIn("abc", matches)
883 self.assertNotIn("abc']", matches)
891 self.assertNotIn("abc']", matches)
884
892
885 # check use of different quoting
893 # check use of different quoting
886 _, matches = complete(line_buffer='d["')
894 _, matches = complete(line_buffer='d["')
887 self.assertIn("abc", matches)
895 self.assertIn("abc", matches)
888 self.assertNotIn('abc"]', matches)
896 self.assertNotIn('abc"]', matches)
889
897
890 _, matches = complete(line_buffer='d["a')
898 _, matches = complete(line_buffer='d["a')
891 self.assertIn("abc", matches)
899 self.assertIn("abc", matches)
892 self.assertNotIn('abc"]', matches)
900 self.assertNotIn('abc"]', matches)
893
901
894 # check sensitivity to following context
902 # check sensitivity to following context
895 _, matches = complete(line_buffer="d[]", cursor_pos=2)
903 _, matches = complete(line_buffer="d[]", cursor_pos=2)
896 self.assertIn("'abc'", matches)
904 self.assertIn("'abc'", matches)
897
905
898 _, matches = complete(line_buffer="d['']", cursor_pos=3)
906 _, matches = complete(line_buffer="d['']", cursor_pos=3)
899 self.assertIn("abc", matches)
907 self.assertIn("abc", matches)
900 self.assertNotIn("abc'", matches)
908 self.assertNotIn("abc'", matches)
901 self.assertNotIn("abc']", matches)
909 self.assertNotIn("abc']", matches)
902
910
903 # check multiple solutions are correctly returned and that noise is not
911 # check multiple solutions are correctly returned and that noise is not
904 ip.user_ns["d"] = {
912 ip.user_ns["d"] = {
905 "abc": None,
913 "abc": None,
906 "abd": None,
914 "abd": None,
907 "bad": None,
915 "bad": None,
908 object(): None,
916 object(): None,
909 5: None,
917 5: None,
910 ("abe", None): None,
918 ("abe", None): None,
911 (None, "abf"): None
919 (None, "abf"): None
912 }
920 }
913
921
914 _, matches = complete(line_buffer="d['a")
922 _, matches = complete(line_buffer="d['a")
915 self.assertIn("abc", matches)
923 self.assertIn("abc", matches)
916 self.assertIn("abd", matches)
924 self.assertIn("abd", matches)
917 self.assertNotIn("bad", matches)
925 self.assertNotIn("bad", matches)
918 self.assertNotIn("abe", matches)
926 self.assertNotIn("abe", matches)
919 self.assertNotIn("abf", matches)
927 self.assertNotIn("abf", matches)
920 assert not any(m.endswith(("]", '"', "'")) for m in matches), matches
928 assert not any(m.endswith(("]", '"', "'")) for m in matches), matches
921
929
922 # check escaping and whitespace
930 # check escaping and whitespace
923 ip.user_ns["d"] = {"a\nb": None, "a'b": None, 'a"b': None, "a word": None}
931 ip.user_ns["d"] = {"a\nb": None, "a'b": None, 'a"b': None, "a word": None}
924 _, matches = complete(line_buffer="d['a")
932 _, matches = complete(line_buffer="d['a")
925 self.assertIn("a\\nb", matches)
933 self.assertIn("a\\nb", matches)
926 self.assertIn("a\\'b", matches)
934 self.assertIn("a\\'b", matches)
927 self.assertIn('a"b', matches)
935 self.assertIn('a"b', matches)
928 self.assertIn("a word", matches)
936 self.assertIn("a word", matches)
929 assert not any(m.endswith(("]", '"', "'")) for m in matches), matches
937 assert not any(m.endswith(("]", '"', "'")) for m in matches), matches
930
938
931 # - can complete on non-initial word of the string
939 # - can complete on non-initial word of the string
932 _, matches = complete(line_buffer="d['a w")
940 _, matches = complete(line_buffer="d['a w")
933 self.assertIn("word", matches)
941 self.assertIn("word", matches)
934
942
935 # - understands quote escaping
943 # - understands quote escaping
936 _, matches = complete(line_buffer="d['a\\'")
944 _, matches = complete(line_buffer="d['a\\'")
937 self.assertIn("b", matches)
945 self.assertIn("b", matches)
938
946
939 # - default quoting should work like repr
947 # - default quoting should work like repr
940 _, matches = complete(line_buffer="d[")
948 _, matches = complete(line_buffer="d[")
941 self.assertIn('"a\'b"', matches)
949 self.assertIn('"a\'b"', matches)
942
950
943 # - when opening quote with ", possible to match with unescaped apostrophe
951 # - when opening quote with ", possible to match with unescaped apostrophe
944 _, matches = complete(line_buffer="d[\"a'")
952 _, matches = complete(line_buffer="d[\"a'")
945 self.assertIn("b", matches)
953 self.assertIn("b", matches)
946
954
947 # must not split at delimiters that readline itself does not split at
955 # must not split at delimiters that readline itself does not split at
948 if "-" not in ip.Completer.splitter.delims:
956 if "-" not in ip.Completer.splitter.delims:
949 ip.user_ns["d"] = {"before-after": None}
957 ip.user_ns["d"] = {"before-after": None}
950 _, matches = complete(line_buffer="d['before-af")
958 _, matches = complete(line_buffer="d['before-af")
951 self.assertIn("before-after", matches)
959 self.assertIn("before-after", matches)
952
960
953 # check completion on tuple-of-string keys at different stages - on the first key
961 # check completion on tuple-of-string keys at different stages - on the first key
954 ip.user_ns["d"] = {('foo', 'bar'): None}
962 ip.user_ns["d"] = {('foo', 'bar'): None}
955 _, matches = complete(line_buffer="d[")
963 _, matches = complete(line_buffer="d[")
956 self.assertIn("'foo'", matches)
964 self.assertIn("'foo'", matches)
957 self.assertNotIn("'foo']", matches)
965 self.assertNotIn("'foo']", matches)
958 self.assertNotIn("'bar'", matches)
966 self.assertNotIn("'bar'", matches)
959 self.assertNotIn("foo", matches)
967 self.assertNotIn("foo", matches)
960 self.assertNotIn("bar", matches)
968 self.assertNotIn("bar", matches)
961
969
962 # - match the prefix
970 # - match the prefix
963 _, matches = complete(line_buffer="d['f")
971 _, matches = complete(line_buffer="d['f")
964 self.assertIn("foo", matches)
972 self.assertIn("foo", matches)
965 self.assertNotIn("foo']", matches)
973 self.assertNotIn("foo']", matches)
966 self.assertNotIn('foo"]', matches)
974 self.assertNotIn('foo"]', matches)
967 _, matches = complete(line_buffer="d['foo")
975 _, matches = complete(line_buffer="d['foo")
968 self.assertIn("foo", matches)
976 self.assertIn("foo", matches)
969
977
970 # - can complete on second key
978 # - can complete on second key
971 _, matches = complete(line_buffer="d['foo', ")
979 _, matches = complete(line_buffer="d['foo', ")
972 self.assertIn("'bar'", matches)
980 self.assertIn("'bar'", matches)
973 _, matches = complete(line_buffer="d['foo', 'b")
981 _, matches = complete(line_buffer="d['foo', 'b")
974 self.assertIn("bar", matches)
982 self.assertIn("bar", matches)
975 self.assertNotIn("foo", matches)
983 self.assertNotIn("foo", matches)
976
984
977 # - does not propose missing keys
985 # - does not propose missing keys
978 _, matches = complete(line_buffer="d['foo', 'f")
986 _, matches = complete(line_buffer="d['foo', 'f")
979 self.assertNotIn("bar", matches)
987 self.assertNotIn("bar", matches)
980 self.assertNotIn("foo", matches)
988 self.assertNotIn("foo", matches)
981
989
982 # check sensitivity to following context
990 # check sensitivity to following context
983 _, matches = complete(line_buffer="d['foo',]", cursor_pos=8)
991 _, matches = complete(line_buffer="d['foo',]", cursor_pos=8)
984 self.assertIn("'bar'", matches)
992 self.assertIn("'bar'", matches)
985 self.assertNotIn("bar", matches)
993 self.assertNotIn("bar", matches)
986 self.assertNotIn("'foo'", matches)
994 self.assertNotIn("'foo'", matches)
987 self.assertNotIn("foo", matches)
995 self.assertNotIn("foo", matches)
988
996
989 _, matches = complete(line_buffer="d['']", cursor_pos=3)
997 _, matches = complete(line_buffer="d['']", cursor_pos=3)
990 self.assertIn("foo", matches)
998 self.assertIn("foo", matches)
991 assert not any(m.endswith(("]", '"', "'")) for m in matches), matches
999 assert not any(m.endswith(("]", '"', "'")) for m in matches), matches
992
1000
993 _, matches = complete(line_buffer='d[""]', cursor_pos=3)
1001 _, matches = complete(line_buffer='d[""]', cursor_pos=3)
994 self.assertIn("foo", matches)
1002 self.assertIn("foo", matches)
995 assert not any(m.endswith(("]", '"', "'")) for m in matches), matches
1003 assert not any(m.endswith(("]", '"', "'")) for m in matches), matches
996
1004
997 _, matches = complete(line_buffer='d["foo","]', cursor_pos=9)
1005 _, matches = complete(line_buffer='d["foo","]', cursor_pos=9)
998 self.assertIn("bar", matches)
1006 self.assertIn("bar", matches)
999 assert not any(m.endswith(("]", '"', "'")) for m in matches), matches
1007 assert not any(m.endswith(("]", '"', "'")) for m in matches), matches
1000
1008
1001 _, matches = complete(line_buffer='d["foo",]', cursor_pos=8)
1009 _, matches = complete(line_buffer='d["foo",]', cursor_pos=8)
1002 self.assertIn("'bar'", matches)
1010 self.assertIn("'bar'", matches)
1003 self.assertNotIn("bar", matches)
1011 self.assertNotIn("bar", matches)
1004
1012
1005 # Can complete with longer tuple keys
1013 # Can complete with longer tuple keys
1006 ip.user_ns["d"] = {('foo', 'bar', 'foobar'): None}
1014 ip.user_ns["d"] = {('foo', 'bar', 'foobar'): None}
1007
1015
1008 # - can complete second key
1016 # - can complete second key
1009 _, matches = complete(line_buffer="d['foo', 'b")
1017 _, matches = complete(line_buffer="d['foo', 'b")
1010 self.assertIn("bar", matches)
1018 self.assertIn("bar", matches)
1011 self.assertNotIn("foo", matches)
1019 self.assertNotIn("foo", matches)
1012 self.assertNotIn("foobar", matches)
1020 self.assertNotIn("foobar", matches)
1013
1021
1014 # - can complete third key
1022 # - can complete third key
1015 _, matches = complete(line_buffer="d['foo', 'bar', 'fo")
1023 _, matches = complete(line_buffer="d['foo', 'bar', 'fo")
1016 self.assertIn("foobar", matches)
1024 self.assertIn("foobar", matches)
1017 self.assertNotIn("foo", matches)
1025 self.assertNotIn("foo", matches)
1018 self.assertNotIn("bar", matches)
1026 self.assertNotIn("bar", matches)
1019
1027
1020 def test_dict_key_completion_contexts(self):
1028 def test_dict_key_completion_contexts(self):
1021 """Test expression contexts in which dict key completion occurs"""
1029 """Test expression contexts in which dict key completion occurs"""
1022 ip = get_ipython()
1030 ip = get_ipython()
1023 complete = ip.Completer.complete
1031 complete = ip.Completer.complete
1024 d = {"abc": None}
1032 d = {"abc": None}
1025 ip.user_ns["d"] = d
1033 ip.user_ns["d"] = d
1026
1034
1027 class C:
1035 class C:
1028 data = d
1036 data = d
1029
1037
1030 ip.user_ns["C"] = C
1038 ip.user_ns["C"] = C
1031 ip.user_ns["get"] = lambda: d
1039 ip.user_ns["get"] = lambda: d
1032
1040
1033 def assert_no_completion(**kwargs):
1041 def assert_no_completion(**kwargs):
1034 _, matches = complete(**kwargs)
1042 _, matches = complete(**kwargs)
1035 self.assertNotIn("abc", matches)
1043 self.assertNotIn("abc", matches)
1036 self.assertNotIn("abc'", matches)
1044 self.assertNotIn("abc'", matches)
1037 self.assertNotIn("abc']", matches)
1045 self.assertNotIn("abc']", matches)
1038 self.assertNotIn("'abc'", matches)
1046 self.assertNotIn("'abc'", matches)
1039 self.assertNotIn("'abc']", matches)
1047 self.assertNotIn("'abc']", matches)
1040
1048
1041 def assert_completion(**kwargs):
1049 def assert_completion(**kwargs):
1042 _, matches = complete(**kwargs)
1050 _, matches = complete(**kwargs)
1043 self.assertIn("'abc'", matches)
1051 self.assertIn("'abc'", matches)
1044 self.assertNotIn("'abc']", matches)
1052 self.assertNotIn("'abc']", matches)
1045
1053
1046 # no completion after string closed, even if reopened
1054 # no completion after string closed, even if reopened
1047 assert_no_completion(line_buffer="d['a'")
1055 assert_no_completion(line_buffer="d['a'")
1048 assert_no_completion(line_buffer='d["a"')
1056 assert_no_completion(line_buffer='d["a"')
1049 assert_no_completion(line_buffer="d['a' + ")
1057 assert_no_completion(line_buffer="d['a' + ")
1050 assert_no_completion(line_buffer="d['a' + '")
1058 assert_no_completion(line_buffer="d['a' + '")
1051
1059
1052 # completion in non-trivial expressions
1060 # completion in non-trivial expressions
1053 assert_completion(line_buffer="+ d[")
1061 assert_completion(line_buffer="+ d[")
1054 assert_completion(line_buffer="(d[")
1062 assert_completion(line_buffer="(d[")
1055 assert_completion(line_buffer="C.data[")
1063 assert_completion(line_buffer="C.data[")
1056
1064
1057 # greedy flag
1065 # greedy flag
1058 def assert_completion(**kwargs):
1066 def assert_completion(**kwargs):
1059 _, matches = complete(**kwargs)
1067 _, matches = complete(**kwargs)
1060 self.assertIn("get()['abc']", matches)
1068 self.assertIn("get()['abc']", matches)
1061
1069
1062 assert_no_completion(line_buffer="get()[")
1070 assert_no_completion(line_buffer="get()[")
1063 with greedy_completion():
1071 with greedy_completion():
1064 assert_completion(line_buffer="get()[")
1072 assert_completion(line_buffer="get()[")
1065 assert_completion(line_buffer="get()['")
1073 assert_completion(line_buffer="get()['")
1066 assert_completion(line_buffer="get()['a")
1074 assert_completion(line_buffer="get()['a")
1067 assert_completion(line_buffer="get()['ab")
1075 assert_completion(line_buffer="get()['ab")
1068 assert_completion(line_buffer="get()['abc")
1076 assert_completion(line_buffer="get()['abc")
1069
1077
1070 def test_dict_key_completion_bytes(self):
1078 def test_dict_key_completion_bytes(self):
1071 """Test handling of bytes in dict key completion"""
1079 """Test handling of bytes in dict key completion"""
1072 ip = get_ipython()
1080 ip = get_ipython()
1073 complete = ip.Completer.complete
1081 complete = ip.Completer.complete
1074
1082
1075 ip.user_ns["d"] = {"abc": None, b"abd": None}
1083 ip.user_ns["d"] = {"abc": None, b"abd": None}
1076
1084
1077 _, matches = complete(line_buffer="d[")
1085 _, matches = complete(line_buffer="d[")
1078 self.assertIn("'abc'", matches)
1086 self.assertIn("'abc'", matches)
1079 self.assertIn("b'abd'", matches)
1087 self.assertIn("b'abd'", matches)
1080
1088
1081 if False: # not currently implemented
1089 if False: # not currently implemented
1082 _, matches = complete(line_buffer="d[b")
1090 _, matches = complete(line_buffer="d[b")
1083 self.assertIn("b'abd'", matches)
1091 self.assertIn("b'abd'", matches)
1084 self.assertNotIn("b'abc'", matches)
1092 self.assertNotIn("b'abc'", matches)
1085
1093
1086 _, matches = complete(line_buffer="d[b'")
1094 _, matches = complete(line_buffer="d[b'")
1087 self.assertIn("abd", matches)
1095 self.assertIn("abd", matches)
1088 self.assertNotIn("abc", matches)
1096 self.assertNotIn("abc", matches)
1089
1097
1090 _, matches = complete(line_buffer="d[B'")
1098 _, matches = complete(line_buffer="d[B'")
1091 self.assertIn("abd", matches)
1099 self.assertIn("abd", matches)
1092 self.assertNotIn("abc", matches)
1100 self.assertNotIn("abc", matches)
1093
1101
1094 _, matches = complete(line_buffer="d['")
1102 _, matches = complete(line_buffer="d['")
1095 self.assertIn("abc", matches)
1103 self.assertIn("abc", matches)
1096 self.assertNotIn("abd", matches)
1104 self.assertNotIn("abd", matches)
1097
1105
1098 def test_dict_key_completion_unicode_py3(self):
1106 def test_dict_key_completion_unicode_py3(self):
1099 """Test handling of unicode in dict key completion"""
1107 """Test handling of unicode in dict key completion"""
1100 ip = get_ipython()
1108 ip = get_ipython()
1101 complete = ip.Completer.complete
1109 complete = ip.Completer.complete
1102
1110
1103 ip.user_ns["d"] = {"a\u05d0": None}
1111 ip.user_ns["d"] = {"a\u05d0": None}
1104
1112
1105 # query using escape
1113 # query using escape
1106 if sys.platform != "win32":
1114 if sys.platform != "win32":
1107 # Known failure on Windows
1115 # Known failure on Windows
1108 _, matches = complete(line_buffer="d['a\\u05d0")
1116 _, matches = complete(line_buffer="d['a\\u05d0")
1109 self.assertIn("u05d0", matches) # tokenized after \\
1117 self.assertIn("u05d0", matches) # tokenized after \\
1110
1118
1111 # query using character
1119 # query using character
1112 _, matches = complete(line_buffer="d['a\u05d0")
1120 _, matches = complete(line_buffer="d['a\u05d0")
1113 self.assertIn("a\u05d0", matches)
1121 self.assertIn("a\u05d0", matches)
1114
1122
1115 with greedy_completion():
1123 with greedy_completion():
1116 # query using escape
1124 # query using escape
1117 _, matches = complete(line_buffer="d['a\\u05d0")
1125 _, matches = complete(line_buffer="d['a\\u05d0")
1118 self.assertIn("d['a\\u05d0']", matches) # tokenized after \\
1126 self.assertIn("d['a\\u05d0']", matches) # tokenized after \\
1119
1127
1120 # query using character
1128 # query using character
1121 _, matches = complete(line_buffer="d['a\u05d0")
1129 _, matches = complete(line_buffer="d['a\u05d0")
1122 self.assertIn("d['a\u05d0']", matches)
1130 self.assertIn("d['a\u05d0']", matches)
1123
1131
1124 @dec.skip_without("numpy")
1132 @dec.skip_without("numpy")
1125 def test_struct_array_key_completion(self):
1133 def test_struct_array_key_completion(self):
1126 """Test dict key completion applies to numpy struct arrays"""
1134 """Test dict key completion applies to numpy struct arrays"""
1127 import numpy
1135 import numpy
1128
1136
1129 ip = get_ipython()
1137 ip = get_ipython()
1130 complete = ip.Completer.complete
1138 complete = ip.Completer.complete
1131 ip.user_ns["d"] = numpy.array([], dtype=[("hello", "f"), ("world", "f")])
1139 ip.user_ns["d"] = numpy.array([], dtype=[("hello", "f"), ("world", "f")])
1132 _, matches = complete(line_buffer="d['")
1140 _, matches = complete(line_buffer="d['")
1133 self.assertIn("hello", matches)
1141 self.assertIn("hello", matches)
1134 self.assertIn("world", matches)
1142 self.assertIn("world", matches)
1135 # complete on the numpy struct itself
1143 # complete on the numpy struct itself
1136 dt = numpy.dtype(
1144 dt = numpy.dtype(
1137 [("my_head", [("my_dt", ">u4"), ("my_df", ">u4")]), ("my_data", ">f4", 5)]
1145 [("my_head", [("my_dt", ">u4"), ("my_df", ">u4")]), ("my_data", ">f4", 5)]
1138 )
1146 )
1139 x = numpy.zeros(2, dtype=dt)
1147 x = numpy.zeros(2, dtype=dt)
1140 ip.user_ns["d"] = x[1]
1148 ip.user_ns["d"] = x[1]
1141 _, matches = complete(line_buffer="d['")
1149 _, matches = complete(line_buffer="d['")
1142 self.assertIn("my_head", matches)
1150 self.assertIn("my_head", matches)
1143 self.assertIn("my_data", matches)
1151 self.assertIn("my_data", matches)
1144 # complete on a nested level
1152 # complete on a nested level
1145 with greedy_completion():
1153 with greedy_completion():
1146 ip.user_ns["d"] = numpy.zeros(2, dtype=dt)
1154 ip.user_ns["d"] = numpy.zeros(2, dtype=dt)
1147 _, matches = complete(line_buffer="d[1]['my_head']['")
1155 _, matches = complete(line_buffer="d[1]['my_head']['")
1148 self.assertTrue(any(["my_dt" in m for m in matches]))
1156 self.assertTrue(any(["my_dt" in m for m in matches]))
1149 self.assertTrue(any(["my_df" in m for m in matches]))
1157 self.assertTrue(any(["my_df" in m for m in matches]))
1150
1158
1151 @dec.skip_without("pandas")
1159 @dec.skip_without("pandas")
1152 def test_dataframe_key_completion(self):
1160 def test_dataframe_key_completion(self):
1153 """Test dict key completion applies to pandas DataFrames"""
1161 """Test dict key completion applies to pandas DataFrames"""
1154 import pandas
1162 import pandas
1155
1163
1156 ip = get_ipython()
1164 ip = get_ipython()
1157 complete = ip.Completer.complete
1165 complete = ip.Completer.complete
1158 ip.user_ns["d"] = pandas.DataFrame({"hello": [1], "world": [2]})
1166 ip.user_ns["d"] = pandas.DataFrame({"hello": [1], "world": [2]})
1159 _, matches = complete(line_buffer="d['")
1167 _, matches = complete(line_buffer="d['")
1160 self.assertIn("hello", matches)
1168 self.assertIn("hello", matches)
1161 self.assertIn("world", matches)
1169 self.assertIn("world", matches)
1162
1170
1163 def test_dict_key_completion_invalids(self):
1171 def test_dict_key_completion_invalids(self):
1164 """Smoke test cases dict key completion can't handle"""
1172 """Smoke test cases dict key completion can't handle"""
1165 ip = get_ipython()
1173 ip = get_ipython()
1166 complete = ip.Completer.complete
1174 complete = ip.Completer.complete
1167
1175
1168 ip.user_ns["no_getitem"] = None
1176 ip.user_ns["no_getitem"] = None
1169 ip.user_ns["no_keys"] = []
1177 ip.user_ns["no_keys"] = []
1170 ip.user_ns["cant_call_keys"] = dict
1178 ip.user_ns["cant_call_keys"] = dict
1171 ip.user_ns["empty"] = {}
1179 ip.user_ns["empty"] = {}
1172 ip.user_ns["d"] = {"abc": 5}
1180 ip.user_ns["d"] = {"abc": 5}
1173
1181
1174 _, matches = complete(line_buffer="no_getitem['")
1182 _, matches = complete(line_buffer="no_getitem['")
1175 _, matches = complete(line_buffer="no_keys['")
1183 _, matches = complete(line_buffer="no_keys['")
1176 _, matches = complete(line_buffer="cant_call_keys['")
1184 _, matches = complete(line_buffer="cant_call_keys['")
1177 _, matches = complete(line_buffer="empty['")
1185 _, matches = complete(line_buffer="empty['")
1178 _, matches = complete(line_buffer="name_error['")
1186 _, matches = complete(line_buffer="name_error['")
1179 _, matches = complete(line_buffer="d['\\") # incomplete escape
1187 _, matches = complete(line_buffer="d['\\") # incomplete escape
1180
1188
1181 def test_object_key_completion(self):
1189 def test_object_key_completion(self):
1182 ip = get_ipython()
1190 ip = get_ipython()
1183 ip.user_ns["key_completable"] = KeyCompletable(["qwerty", "qwick"])
1191 ip.user_ns["key_completable"] = KeyCompletable(["qwerty", "qwick"])
1184
1192
1185 _, matches = ip.Completer.complete(line_buffer="key_completable['qw")
1193 _, matches = ip.Completer.complete(line_buffer="key_completable['qw")
1186 self.assertIn("qwerty", matches)
1194 self.assertIn("qwerty", matches)
1187 self.assertIn("qwick", matches)
1195 self.assertIn("qwick", matches)
1188
1196
1189 def test_class_key_completion(self):
1197 def test_class_key_completion(self):
1190 ip = get_ipython()
1198 ip = get_ipython()
1191 NamedInstanceClass("qwerty")
1199 NamedInstanceClass("qwerty")
1192 NamedInstanceClass("qwick")
1200 NamedInstanceClass("qwick")
1193 ip.user_ns["named_instance_class"] = NamedInstanceClass
1201 ip.user_ns["named_instance_class"] = NamedInstanceClass
1194
1202
1195 _, matches = ip.Completer.complete(line_buffer="named_instance_class['qw")
1203 _, matches = ip.Completer.complete(line_buffer="named_instance_class['qw")
1196 self.assertIn("qwerty", matches)
1204 self.assertIn("qwerty", matches)
1197 self.assertIn("qwick", matches)
1205 self.assertIn("qwick", matches)
1198
1206
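# --- Editorial sketch (not part of the diff) --------------------------------
# KeyCompletable, presumably defined earlier in this test module, exercises
# IPython's key-completion protocol: any object can offer candidates for
# ``obj[<tab>`` by implementing _ipython_key_completions_(). A minimal,
# illustrative class (ColorStore and its contents are made up for this note):

class ColorStore:
    """Mapping-like object that advertises its keys to the completer."""

    def __init__(self):
        self._data = {"red": "#ff0000", "green": "#00ff00"}

    def __getitem__(self, key):
        return self._data[key]

    def _ipython_key_completions_(self):
        # IPython calls this when completing store['<tab> and offers the
        # returned strings as key candidates.
        return list(self._data)
# -----------------------------------------------------------------------------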
1199 def test_tryimport(self):
1207 def test_tryimport(self):
1200 """
1208 """
1201 Test that try_import does not crash on a trailing dot, and imports the modules before the dot
1209 Test that try_import does not crash on a trailing dot, and imports the modules before the dot
1202 """
1210 """
1203 from IPython.core.completerlib import try_import
1211 from IPython.core.completerlib import try_import
1204
1212
1205 assert try_import("IPython.")
1213 assert try_import("IPython.")
1206
1214
1207 def test_aimport_module_completer(self):
1215 def test_aimport_module_completer(self):
1208 ip = get_ipython()
1216 ip = get_ipython()
1209 _, matches = ip.complete("i", "%aimport i")
1217 _, matches = ip.complete("i", "%aimport i")
1210 self.assertIn("io", matches)
1218 self.assertIn("io", matches)
1211 self.assertNotIn("int", matches)
1219 self.assertNotIn("int", matches)
1212
1220
1213 def test_nested_import_module_completer(self):
1221 def test_nested_import_module_completer(self):
1214 ip = get_ipython()
1222 ip = get_ipython()
1215 _, matches = ip.complete(None, "import IPython.co", 17)
1223 _, matches = ip.complete(None, "import IPython.co", 17)
1216 self.assertIn("IPython.core", matches)
1224 self.assertIn("IPython.core", matches)
1217 self.assertNotIn("import IPython.core", matches)
1225 self.assertNotIn("import IPython.core", matches)
1218 self.assertNotIn("IPython.display", matches)
1226 self.assertNotIn("IPython.display", matches)
1219
1227
1220 def test_import_module_completer(self):
1228 def test_import_module_completer(self):
1221 ip = get_ipython()
1229 ip = get_ipython()
1222 _, matches = ip.complete("i", "import i")
1230 _, matches = ip.complete("i", "import i")
1223 self.assertIn("io", matches)
1231 self.assertIn("io", matches)
1224 self.assertNotIn("int", matches)
1232 self.assertNotIn("int", matches)
1225
1233
1226 def test_from_module_completer(self):
1234 def test_from_module_completer(self):
1227 ip = get_ipython()
1235 ip = get_ipython()
1228 _, matches = ip.complete("B", "from io import B", 16)
1236 _, matches = ip.complete("B", "from io import B", 16)
1229 self.assertIn("BytesIO", matches)
1237 self.assertIn("BytesIO", matches)
1230 self.assertNotIn("BaseException", matches)
1238 self.assertNotIn("BaseException", matches)
1231
1239
1232 def test_snake_case_completion(self):
1240 def test_snake_case_completion(self):
1233 ip = get_ipython()
1241 ip = get_ipython()
1234 ip.Completer.use_jedi = False
1242 ip.Completer.use_jedi = False
1235 ip.user_ns["some_three"] = 3
1243 ip.user_ns["some_three"] = 3
1236 ip.user_ns["some_four"] = 4
1244 ip.user_ns["some_four"] = 4
1237 _, matches = ip.complete("s_", "print(s_f")
1245 _, matches = ip.complete("s_", "print(s_f")
1238 self.assertIn("some_three", matches)
1246 self.assertIn("some_three", matches)
1239 self.assertIn("some_four", matches)
1247 self.assertIn("some_four", matches)
1240
1248
1241 def test_mix_terms(self):
1249 def test_mix_terms(self):
1242 ip = get_ipython()
1250 ip = get_ipython()
1243 from textwrap import dedent
1251 from textwrap import dedent
1244
1252
1245 ip.Completer.use_jedi = False
1253 ip.Completer.use_jedi = False
1246 ip.ex(
1254 ip.ex(
1247 dedent(
1255 dedent(
1248 """
1256 """
1249 class Test:
1257 class Test:
1250 def meth(self, meth_arg1):
1258 def meth(self, meth_arg1):
1251 print("meth")
1259 print("meth")
1252
1260
1253 def meth_1(self, meth1_arg1, meth1_arg2):
1261 def meth_1(self, meth1_arg1, meth1_arg2):
1254 print("meth1")
1262 print("meth1")
1255
1263
1256 def meth_2(self, meth2_arg1, meth2_arg2):
1264 def meth_2(self, meth2_arg1, meth2_arg2):
1257 print("meth2")
1265 print("meth2")
1258 test = Test()
1266 test = Test()
1259 """
1267 """
1260 )
1268 )
1261 )
1269 )
1262 _, matches = ip.complete(None, "test.meth(")
1270 _, matches = ip.complete(None, "test.meth(")
1263 self.assertIn("meth_arg1=", matches)
1271 self.assertIn("meth_arg1=", matches)
1264 self.assertNotIn("meth2_arg1=", matches)
1272 self.assertNotIn("meth2_arg1=", matches)
1265
1273
1266 def test_percent_symbol_restrict_to_magic_completions(self):
1274 def test_percent_symbol_restrict_to_magic_completions(self):
1267 ip = get_ipython()
1275 ip = get_ipython()
1268 completer = ip.Completer
1276 completer = ip.Completer
1269 text = "%a"
1277 text = "%a"
1270
1278
1271 with provisionalcompleter():
1279 with provisionalcompleter():
1272 completer.use_jedi = True
1280 completer.use_jedi = True
1273 completions = completer.completions(text, len(text))
1281 completions = completer.completions(text, len(text))
1274 for c in completions:
1282 for c in completions:
1275 self.assertEqual(c.text[0], "%")
1283 self.assertEqual(c.text[0], "%")
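# --- Editorial sketch (not part of the diff) --------------------------------
# provisionalcompleter() opts in to the provisional Completions API that this
# refactor builds on: completions(text, offset) yields Completion objects
# (carrying at least .start, .end and .text) rather than the plain
# (text, matches) tuple returned by complete(). A minimal sketch, assuming an
# interactive IPython session:

from IPython import get_ipython
from IPython.core.completer import provisionalcompleter

ip = get_ipython()
with provisionalcompleter():
    for completion in ip.Completer.completions("%ti", 3):
        # each candidate (e.g. %time, %timeit) replaces text[start:end]
        print(completion.text, completion.start, completion.end)
# -----------------------------------------------------------------------------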